
What AI content generators mean for the future of search

an image of author Mordy Oberstein, accompanied by various search-related iconography, including a search bar, a globe, and a micro chip

How the web—and search engines in particular—handle the proliferation of AI-written content (as sparked by OpenAI’s breakthroughs with ChatGPT) is the central question facing SEOs and content creators at the moment.

Foretelling how AI-written content will shape the future of search and SEO is multi-faceted in that it includes both how search engines algorithmically handle that content and how search engines themselves will incorporate AI chat functionality into their ecosystems.

Both of these issues are complex in their own way and, at this point, no one has all the answers. The best I can do is apply my SEO outlook—with respect to how search engines have evolved—to the situation at hand in order to describe what I see as Google’s inevitable reaction to AI-written content and AI chat technology.

The problem AI-written content poses for the web

In my opinion, the place to start this exploration is not within the scope of SEO. Rather, I think we should examine the (perhaps obvious) problem AI-written content presents to the web as web content is the clay that search engines use to mold their result pages.

Before anything else, it’s vital to understand just how powerful AI content generators are—and I don’t mean the power and proficiency of the technology per se. Rather, the power AI content generators have to solve a very serious pain point for most of the web: content is hard.

If you come from a content background like mine, it’s sometimes difficult to appreciate just how hard it is to create “high-quality content.” Content creation is really an art form. In my humble opinion, creating strong content relies on the ability to connect to the recesses of your persona(s) and then deliver that connection in a scaffolded, methodical, and engaging manner, all at the same time.

Content requires profundity and the unique ability to distribute that profundity into digestible chunks.

At the same time, content is the lifeblood of the web. Without content, there is no such thing as a website.

What a predicament for the average site owner: In a way, when a search engine like Google asks a site owner to “create good content” it’s an unreasonable request. Being able to create a substantial series of content to fill a website is a unique skill. Just like the average person probably can’t change a car transmission, they also can’t create professional-level content.

We get fooled into thinking this is a possibility because everyone can read and write. Being able to write and being able to create content are not one and the same. A site owner who first starts dipping their toes into content creation will quickly realize just how tough and time-consuming it is.

For the record, I’m not saying that the average site owner can’t create enough content to put forth a substantial website. What I am saying is that there is a ceiling here.

In addition, the amount of time it takes to create content can be prohibitive for many site owners. So even if a site owner is a fabulous writer, what are the chances that they’ll have the time to create content? It’s a time-consuming process even for the best of us. (Parenthetically, and with much bias, this is one of the great advantages of a platform like Wix: it frees up time to focus on content creation, and this is why, regardless of my bias, the platform represents the future of the web in a certain regard.)

And now we arrive at an inflection point: AI content generators seemingly solve both of these pain points. They certainly save time thereby making the content creation process more accessible and ostensibly spin up pretty decent content to boot.

The temptation is real.

To the unsuspecting person, AI content generators open up a huge world of possibilities and cost savings. In other words, the pain and struggle of content creation are so significant and the possible solution AI content generators present is so strong that, inevitably, the web will become filled to capacity with AI-written content.

The problem, of course, is that AI-written content is not a panacea. In many cases, it’s a gateway drug to the proliferation of thin, inaccurate, and unhelpful content.

One could argue the web is already overfilled with such content. That’s why Google has done everything from releasing its “new” set of core updates that began in 2018 to the Product Review Updates in 2021 to the more recent Helpful Content Update.

However, with the newfound capability and accessibility of AI content generators, there is going to be a proliferation of this sort of content unlike anything the internet has ever seen. It will be such a proliferation that Google will have to respond, because if it doesn’t, it faces criticism from discerning users over the irrelevance of its results.

The question is, how will Google solve this inevitability?

Google’s inevitable response to AI-written content

Let me propose a wild scenario: What if every web page whose content answers the question what are snow tires? was created by AI? What if we went really wild with this hypothetical and said all of the content AI content generators spun up to answer this question was more or less the same? (Now that I put this into writing, the latter doesn’t seem that wild.)

A screenshot of the google homepage with the query “what are snow tires?” in the search bar

In such a scenario, if someone searched Google for what are snow tires?, how would Google know which page to rank if all of the content out there was of nearly identical quality?

If all snippet-level content is equal, then what will rank at the top?

In a world that may very well be driven by AI-written content, this scenario (while hyperbolic) isn’t that far-fetched.

How much value does human experience lend to snippet-level topics that have been answered across the web a thousand times over? What new insights are you going to add to a static topic like what are snow tires? that haven’t already been offered before?

Snippet-level content has the potential to be a great fit for AI content generators, assuming the technology allows for topical accuracy. So flash forward five years in time when all of this content will be written (in theory) by AI—how does Google decide what to rank for the query what are snow tires? (or whatever snippet-level query) when all of the snippets are relatively the same?

AI-written content and the search engine’s emphasis on domain-level metrics

The problem I laid out above, to borrow an SEO cliche, is “old, not new.”

The truth is, AI-written content amplifies the quality conundrum that already faces the web. There is a proliferation of mediocre content on the web today. The web is, unfortunately, full of fluff content that is more concerned with ranking, traffic, or whatever acquisitional metric, than with helping its target audience.

Google has the same problem with this content as it does with AI-written content. A less-than-stellar site owner can spin up a lot of snippet-level content without exerting a ton of effort as, again, the nature of this content isn’t exactly profound.

It’s for this reason that “quality” has long been a domain-level metric for Google. (For the record, it’s a long-standing myth among SEOs that Google doesn’t rank sites and instead only ranks pages). Meaning, if the pages on a site for snippet-level content are of “adequate” quality but the other pages that target deeper content needs are not, the performance of those snippet-level pages would be negatively impacted (all other things being equal).

This concept culminated with the advent of the Helpful Content Update, which according to Google’s own documentation:

“ …introduces a new site-wide signal that we consider among many other signals for ranking web pages. Our systems automatically identify content that seems to have little value, low-added value or is otherwise not particularly helpful to those doing searches.”

This issue really comes into focus within niche topics or once you begin to move past surface-level understanding. To me, this is why Google explicitly focuses on sites that don’t offer a level of nuance and depth in their content.

When explaining what sites should avoid (within the context of the Helpful Content Update) the search engine asks content creators:

“Did you decide to enter some niche topic area without any real expertise, but instead mainly because you thought you'd get search traffic?”

Simply put, ranking snippet-level content (that doesn’t really vary from one site to the next) is contextualized by how the site handles content that should be very differentiated. The performance of snippet-level content that is easily churned out doesn’t exist in a vacuum, but is dependent on how well you handle more niche topics and subject matters that require more nuance and expertise.

In other words, ranking is a semantic proposition. What you do on the site, as a whole, impacts page-level performance. And it’s not just a matter of “quality” in the sense that the pages don’t present a negative user experience (i.e., intrusive interstitials or filled to the brim with ads).

Quality is far more holistic than that and far more semantic than that.

Quality, with regard to Google’s algorithms, very much overlaps with relevance. Google doesn’t consider it to be a quality experience if the user is bombarded with all sorts of topics that are not interrelated when navigating through a website (and rightly so).

Is it really a quality site or a quality experience if the user encounters a lack of topical cohesion across the site? A website should have an “identity” and it should be very clear to the user what the function of the site is and what sort of information they might expect to find on it.

Don’t take my word for it, here’s what Google again advises site owners to consider when looking to avoid being negatively impacted by the Helpful Content Update:

“Does your site have a primary purpose or focus? Are you producing lots of content on different topics in hopes that some of it might perform well in search results?”

Having a strong content focus (and the quality of that content itself) sends signals to Google about the entire domain.

This is the answer to the AI-written content conundrum: Domain-level metrics help Google differentiate sites for ranking when the content at the page level is generic. Google will inevitably double down on these domain-level signals as it already has with the Helpful Content Update.

To apply this to our original construct (where all of the snippet-level content answering what is a snow tire? is written by AI and is therefore relatively indistinguishable), ranking this content will involve looking not just at the content on the page, but at how the site deals with the topic across the board.

If two pages have basically the same AI-written content about what a snow tire is, Google will be forced to look at the strength of the domain itself with regard to snow tires. Which site has a prolific set of content around snow tires? Which has in-depth, niche knowledge? Which site goes down the snow tire rabbit hole and which site has a heap of snippetable AI-written content?

Parsing out the quality of the domain is how a search engine will function in spaces where AI-written content has legitimately made generic all of the content that answers a top-level query—which makes a great deal of sense.

Don’t look at the query what is a snow tire? simply as someone looking to get an answer to a specific question. Rather, zoom out. This person is looking for information about snow tires as a topic. Which site then makes the most sense for them to visit: a site that has a few pages of generic information about snow tires or a site that is dedicated to diving into the wonderful world of tires?

Domain-level metrics also make sense without the problem AI-written content poses. All AI-written content does is make this approach an absolute necessity for search engines, forcing them to place the construct at the forefront of how they operate.

In an era of the web that will be saturated with content that lacks a certain degree of differentiation (i.e., AI-written content), what you do on the other pages of your site will increasingly be as important as what you do on the page you want to rank.

Google will, in my opinion, differentiate that which can’t be differentiated by redoubling its focus on domain quality.

My concerns for SMBs and ranking in the era of AI-written content

What worries me the most about the above-mentioned ranking construct (i.e., one that is heavily weighted on the strength of the domain overall) is that it might leave smaller sites in a bit of a pickle.

A site competing for a snippet-level query with AI-written content (similar to all of the other AI-written content around that topic) will be able to rely on the strength of its other content to rank here, according to what I’m proposing. A large site with an efficient content team that has created all sorts of topical clusters related to the overarching topic should thrive, all other considerations being equal.

However, a smaller site (typically run by a smaller business) does not have those resources. So while they may have strong content, they may lack quantity. In a world where semantics rule the ranking day, such sites would (in theory) be at a disadvantage as they simply would not be able to create the strong semantic signals needed to differentiate the pages that target snippet-level queries.

One could argue that, given the current ecosystem, this is already a problem. While there might be something to that, if Google increases its emphasis on domain-level metrics, the problem will only increase exponentially—in theory.

Experience and language structure: How Google can combat the over-proliferation of AI content

What about cases where content is not predominantly generated by AI?

Even if most snippet-level content is eventually spun up by AI content generators, that still leaves many content categories that are probably not best-served by this method of content creation. How would Google go about handling AI-written content where the content demands more nuance and depth, and is a bit more “long tail” in nature?

Again, I don’t think search engines will be able to ignore this problem as AI content generators “solve” an extreme pain point that will inevitably lead to mass (and most likely improper) usage. Google will have to figure out a way to “differentiate” AI-written content if it expects to keep user satisfaction at acceptable levels.

The truth is, we may have already been given the answer. In December 2022, Search Engine Journal’s Roger Montti reported on a research paper that points to Google being able to use machine learning to determine if content was written by AI (sort of like turning the machines on the machines).

Of course, we don’t know for sure how (or even if) Google deploys this technology, but it does point to a logical construct: language structure can be analyzed to determine if an author is likely human or not.

This is fundamentally the basis of a plethora of tools on the market that analyze chunks of text to determine the likelihood that they were constructed by AI (Glenn Gabe has a nice list of the tools you can use to determine AI authorship).

The language structures humans tend to use contrast sharply with the language structures employed by AI content generators. The schism between the two language structures is so deep that a number of companies make a living analyzing the difference and letting you know what content has and hasn’t been written by humans.
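The core of this sort of analysis can be sketched in a few lines. GLTR, for instance, buckets each word by the rank the actual word held in a language model’s list of predicted next words: green for a top-10 prediction, yellow for top-100, red for top-1,000, and purple for anything rarer. The sketch below is a toy illustration of that bucketing logic only—the rank values are hypothetical inputs, since producing real ranks requires running an actual language model:

```python
# Toy sketch of GLTR-style bucketing. In the real tool, each token's rank
# comes from a language model's predicted distribution over the next word;
# here the ranks are hypothetical inputs, purely for illustration.

def gltr_bucket(rank):
    """Map a token's prediction rank to a GLTR-style color bucket."""
    if rank <= 10:
        return "green"    # highly predictable (a top-10 model guess)
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"       # highly surprising -> more human-like

def human_likeness(ranks):
    """Fraction of tokens outside the model's top-100 predictions.

    A higher share of red/purple tokens suggests less predictable,
    more human-sounding text; AI output skews heavily green.
    """
    surprising = sum(1 for r in ranks if gltr_bucket(r) in ("red", "purple"))
    return surprising / len(ranks)

# Hypothetical rank sequences for two passages:
ai_like = [1, 2, 1, 4, 3, 1, 2, 5, 1, 3]           # mostly top-10 picks
human_like = [1, 45, 800, 3, 2500, 12, 640, 2, 9000, 30]

print(gltr_bucket(800))            # "red"
print(human_likeness(ai_like))     # 0.0
print(human_likeness(human_like))  # 0.4
```

The intuition: AI-generated text is, almost by definition, a stream of highly probable next words (mostly green), while human writing regularly reaches for low-probability words (red and purple).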

This is precisely what a tool called GLTR did with a section of this very article below:

The output from GLTR, color-coding various words in an excerpt from this article. The output is a mixture of green, yellow, red, and purple text.

Notice all of the words in purple and in red—these indicate the content was written by a human, which it was (me).

Now compare that with something ChatGPT spun up about how Google will rank AI-written content:

The output from GLTR for an excerpt written by ChatGPT, in which the majority of the text is highlighted in green.

There’s a far lower ratio of red and purple wording, indicating that this was written by AI.

Language structure cannot be overemphasized when differentiating human-written content from AI-written content. Should you use AI to spin up a generic piece of “fluff” content, you are far more likely to create something that seems like it was not written by a human.

Going forward, I see Google differentiating low-quality content and AI-written content (which runs the risk of being low quality) by examining language, as that is exactly what machine learning algorithms do: profile language structures.

Profiling language constructs is very much within Google's wheelhouse and is a potentially effective way to assess human authorship and quality overall.

What’s more, this perfectly aligns with both the extra “E” (for experience) in E-E-A-T and Google’s guidance around Product Review Updates, both of which focus on first-hand experience (as in, that which AI cannot provide).

How can Google know if you have actual experience with something? One way is language structure. Imagine you were reading reviews on the best vacuum cleaners on the market and in describing these “best” vacuum cleaners, one page wrote, “Is great on carpet,” while another page wrote, “great on carpet but not for pet hair on carpet.” Which of these two pages was probably written by someone who actually used the darn thing? It’s obvious.

It’s obvious to us and it’s also not far-fetched to think that Google can parse the language structure of these two sentences to realize that the modification of the original statement (as in “but not for pet hair on carpet”) represents a more complex language structure which is more closely associated with text based on actual first-hand experience.
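As a back-of-the-napkin illustration of that intuition, here is a hypothetical heuristic (not anything Google has documented) that counts the contrastive connectives writers tend to use when qualifying a claim from first-hand use:

```python
# Hypothetical heuristic: reviews grounded in first-hand experience tend
# to qualify their claims ("great on carpet BUT not for pet hair"), so
# count contrastive/qualifying connectives as a crude proxy.

QUALIFIERS = {"but", "however", "although", "though", "except", "unless"}

def qualification_score(text):
    """Count contrastive connectives in a piece of review text."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(1 for word in words if word in QUALIFIERS)

print(qualification_score("Is great on carpet"))                              # 0
print(qualification_score("Great on carpet but not for pet hair on carpet"))  # 1
```

A production system would obviously rely on far richer signals (full syntactic parses, learned classifiers), but the direction is the same: modified, qualified statements are structurally more complex than flat assertions.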

Aside from domain-level metrics, language structure analysis, to me, will play a vital role in Google determining if AI wrote the content and if the content is generally of sound quality.

The integration of AI chat technology into the SERP

Let’s talk a bit now about the other elephant in the room: the integration of AI chat technology into search engine results.

Clearly, search engines will integrate AI chat experiences into their result pages. I say this because, from Bing to, they already have. Google (at the time of writing this) has indicated that AI chat will be a part of its ecosystem with the announcement of Bard.

The question is, what will these systems look like as they mature and how will they impact the ecosystem?

More succinctly, will AI chat on search engines be the end of all organic traffic?

Understandably, there’s been a lot of concern around the impact of AI chat experiences on organic clicks. If the search engine answers the query within an AI chat experience, what need will there be for clicks?

There’s a lot to chew on here. Firstly, for top-level queries that have an immediate and clear answer, the ecosystem already prevents clicks with Direct Answers (as shown below).

A screenshot of a Direct Answer on Google showing that Aaron Judge hit 62 home runs in the 2022 season, with no attribution.
A Direct Answer for how many home runs Aaron Judge hit in 2022 precludes the need to visit an actual website.

Is this Google “stealing” clicks? Personally, I don’t subscribe to this view. While I do think there are things Google can improve to better the organic ecosystem, I don’t think abolishing Direct Answers is one of them (also, every ecosystem has areas for improvement, so don’t take my statements here as being overly critical in that way).

I think the web has evolved to the point where users want to consume information in the most expeditious manner possible and Direct Answers fill that need.

To the extent that AI chat features within search prevent clicks, we need to consider this dynamic as well. Is the chat stealing clicks or simply aligning with the user’s desire to not have to click, to begin with? If it’s the latter, our problem as SEOs is with people, not search engines.

However, because of how frequently users might engage with AI chat features, including organic links for the sake of citation is critical to the health of the web—both in terms of traffic incentives and in terms of topical accuracy.

It’s vital that users know the source of the information presented by the AI chat feature so that they can verify its accuracy. I’ve seen many instances where these chat tools present out-of-date information. It’s really not that hard to find at this point, so including citations is key (what would be even better is if the providers pushed their tools to be more accurate, but hopefully that will come with time as we are still in the infancy of this technology’s application).

Take this result from Neeva’s AI chat feature as an example:

A screenshot of Neeva’s AI response to the question “are the yankees a good team?” The response reads: “The yankees have been playing at a historic rate, with a 116-win pace that would tie the all-time record. Their success has been attributed to Aaron Judge’s contract year for the ages, healthy ligaments and muscles, and their ability to mash a baseball…” There are three citations, with the sources shown as clickable links at the bottom of the AI response.

The result implies that the Yankees have a good defensive shortstop (the player who stands between second and third base).

This was true…at the start of the 2022 season as indicated in the first citation within the chat’s response:

A headline from Yahoo! Sports on July 6, 2022, that reads “Why the yankees are winning at a historic rate: 5 players, stats and trends that explain MLB’s best team”

Fast forward to the end of the season and there were many concerns about one particular player’s defensive abilities:

A screenshot of a headline from September 6, 2022, that reads “Diving deep into Isiah Kiner-Falefa’s defensive limitations”

At least with citations, a user might be clued into the potential issues with the results (again, the better path would be for AI chat tools to evolve).

The point is that citations are very important for the health of the web both because they contextualize the answer and because they enable a site to receive traffic.

This is even a point that Bing acknowledged in its blog post outlining how its AI chat experience functions:

“Prometheus is also able to integrate citations into sentences in the Chat answer so that users can easily click to access those sources and verify the information. Sending traffic to these sources is important for a healthy web ecosystem and remains one of our top Bing goals.”

I’m actually very happy to hear Bing say that. I think a lot of the organic market share has to do with how the search engines decide to present their chat results. In Bing’s case, the AI chat feature sits to the right of the organic results (on desktop)—not above them.

A screenshot of Bing’s search results showing the main column of traditional results, alongside the AI chat feature to the right-hand side of the traditional results.
Bing’s AI chat sits to the right of the organic results—not on top of them, facilitating clicks to the traditional results.

My eye initially sees the organic results, then the summary from the AI chat feature., for example, makes you move from the initial organic results to a specific tab and then places organic results to the right of the chat box.

A screenshot of’s results for “Who wrote uptown girl,” with the “Chat” tab highlighted, showing chat results in the main column and traditional results to the right of them.’s AI chat requires you to move to a specific tab and places organic results to the right of the chat box.

Search engines will need to be responsible in how they present their AI-produced content so as to maintain a healthy and functioning web. And again, a lot of that does not come from the functionality per se, but from how these search engines go about accenting the AI content with organic opportunities.

As AI chat ecosystems evolve, more opportunities for clicks to sites might exist. Personally, I don’t think the novelty of these tools is in their function as a direct answer. For that, we already have Direct Answers and featured snippets. The novelty, to me at least, is in their ability to refine queries.

Look at the conversation I had with’s chat feature about pizza in NYC:

A screenshot of a conversation with’s AI chat feature, including questions from the user like “where can I find get pizza in nyc,” “can you give me some places that are gluten free” and “what about in the west village?” as query refinement.

Here, the lack of URLs within the chat was a major limitation to my overall user experience. I think the example above (i.e., query refinement) is where users will find chat tools more useful and presenting relevant URLs will be critical. To be fair, there are organic results to the side, but I would have much preferred (and even expected) the chat to offer me two or three curated URLs for local pizza places that fit my criteria.

Parenthetically, you can see how this format might wreak havoc on rank tracking (should URLs be placed at each stage of the chat). How valuable is ranking at the top position when there is a URL within a citation provided by the AI chat experience? Will rank trackers see those initial citations? Possibly, but as you refine the query with additional chat prompts as I did above, they certainly won’t be able to!

AI chat integrated into the SERP could put a far greater emphasis on data sources like Search Console (where you can see the total impressions), and may make rank tracking within third-party tools less reliable than it currently is.

So, does the integration of AI chat into the SERP mean the end of organic traffic? Probably not.

Search engines seem to generally understand the need to incentivize content creation by pushing organic traffic and offering context to the user via citations and beyond.

To again use Bing as an example, there seems to be plenty of opportunity to take notice of the organic results on the SERP below:

A screenshot of Bing search results for the query “plan me a workout for my arms and abs with no situps and no gym equipment. It should only take 30 minutes.” The main column of results shows several video results along with articles, and the AI chat results appear to the right of them.

My read is that Bing is using the chat functionality to accent search: the SERP tries to use the chat feature to offer a more layered and well-rounded search experience, not to replace it.

At a minimum, there are some early and encouraging signs that the search engines understand that they cannot leave organic traffic out of the equation.

AI content generation: A pivotal moment for the web and SEO

Over the course of my time in the SEO industry, I’ve seen all sorts of trends come and go. I’m still waiting for the dominance of voice search to materialize.

AI content generators are not a fad. The problems that they “solve” are too attractive and the technology too advanced to ever be put back in the bottle. Search engines, as we’ve already seen, are going to incorporate the technology into their ecosystems. Search engines are also going to have to adapt their algorithms accordingly so as to handle the impending wave of AI-written content.

Whatever the outcome of all of this is, I can say one thing with total certainty—I cannot remember a more determinative moment for the web and for SEO.


Mordy Oberstein

Mordy is the Head of SEO Branding at Wix. Concurrently, he also serves as a communications advisor for Semrush. Dedicated to SEO education, Mordy is one of the organizers of SEOchat and a popular industry author and speaker. Twitter | LinkedIn

