
Episode 80 | March 27, 2024

Does AI content rank?

Does AI content rank on the SERP? Can you count on AI content to rank in the future?

Wix’s Mordy Oberstein and Crystal Carter investigate the place of AI content in Google’s ranking algorithm. Join them as they take a deep dive into how successful websites are delivering their content and what others in the SEO community think about AI-generated content.

Listen as special guest John Wall, host of the Marketing Over Coffee podcast, guides you through creating AI content that DOES rank with his generative AI for marketers framework.

Please rank responsibly, as this week we delve into Google’s evolving approach to ranking AI content here on the SERP’s Up SEO Podcast.

SERP's Up Podcast: Does Google rank AI content? With John Wall

This week’s guest

John Wall

"John J. Wall writes and practices at the intersection of marketing, sales, and technology.

He is the producer of Marketing Over Coffee, a weekly audio program that discusses marketing and technology. John is also a partner at Trust Insights and has been cited by CBS Evening News, The Associated Press, Inc. Magazine, Forbes, The Boston Globe, and DM News, and featured on Apple Podcasts."

Transcript

Mordy Oberstein:

It's the new wave of SEO podcasting. Welcome to SERP's Up. Aloha. Mahalo for joining the SERP's Up Podcast. We're pushing out some groovy new insights around what's happening in SEO. I'm Mordy, overseeing the SEO brand here at Wix, and I'm joined by the ever-constant, the ever-ranking, the ever-green. I say green because you have a plant in your background now. Head of SEO communications, Crystal Carter.

Crystal Carter:

This is an audio-only experience. The people don't know that I have a green thing behind me.

Mordy Oberstein:

You do. You have a green... I like it because it's not green green. It's sage green. Is that the name of the green? Is that right?

Crystal Carter:

Yeah, it's actually only green because it's a fake olive tree.

Mordy Oberstein:

Ah, I should know that because I have an olive tree right next to my house.

Crystal Carter:

This is true. But my grandma used to have an orange tree in their backyard. That was nice.

Mordy Oberstein:

You want to hear a crazy story? I used to live in an apartment and it had a garden.

Crystal Carter:

That is crazy.

Mordy Oberstein:

Yes, that is crazy, right? Living is crazy. It had an olive tree. And the way the garden was laid out, it was right next to a staircase, a publicly used staircase that went down to the next street. You can imagine the next street was a level lower, you had to go down the staircase.

And the branches extended over the fence onto the staircase. And we come home one day, and the olive tree, all the branches are cut down. Some maniac, I guess, got upset that the branches were overhanging onto the stair-

Crystal Carter:

Yeah.

Mordy Oberstein:

Yeah. And cut down almost the entire olive tree.

Crystal Carter:

Just from that side?

Mordy Oberstein:

Yeah, he must've climbed over the fence a little bit and...

Crystal Carter:

Whoa. Wowza.

Mordy Oberstein:

Yeah. Psycho, right?

Crystal Carter:

So I used to work for the parks department for the city, and legally, if it's over your side, that part of the tree is yours. Legally speaking, that side of the tree is yours. So I used to also... Really telling everybody my business here. This is totally relevant, but not. Anyway, basically, I used to forage for free apples and stuff around. And basically, if the apple tree was hanging over the sidewalk, those apples are mine. I can have those apples. They're mine and I'm going to eat them. So that's what I'm doing. You don't like your apples being in the public domain, get your tree out of the public domain.

Mordy Oberstein:

How do you like them apples?

Crystal Carter:

Basically. So yeah, it's very complex, trees.

Mordy Oberstein:

Trees are complex. They are complex. The SERP's Up Podcast is brought to you by Wix, where you can not only subscribe to our monthly SEO newsletter, Searchlight, over at wix.com/SEO/learn/newsletter, but where you can also use our AI meta tag creator to spit out title tags and meta descriptions in no time flat, because time is not flat, time is round. Also, I don't write meta descriptions anymore. I let the AI do it every time because when it comes to meta descriptions, I don't care. Why? Because AI content ranks.

And also because meta descriptions, whatever, who cares, right? I'm saying that as Crystal's looking at me like, "Why are you saying that?" Because I don't care. I don't think they are impactful. One of my least important SEO tasks is meta descriptions. I guess they might help with clicks if Google didn't rewrite half of them. I'm real salty about meta descriptions, but you can use our AI meta tag creator to write them. How's that for a pitch?

Crystal Carter:

Okay. Okay, that's cool. Okay. So let's just clear this up. I have time for meta descriptions because I've seen them work, right? I've seen them work, but I don't think that you should be hand crafting them artisanally. I don't think-

Mordy Oberstein:

No, there's no reason. Either way, there's no reason. Just let the AI write that.

Crystal Carter:

Right. And if you're not using AI, you don't necessarily have to use AI. You can also just do it programmatically. So in Wix, you have the option for both. You have the option for either the programmatic setup where you insert the keyword for this-

Mordy Oberstein:

Yeah. Or take product description, make it meta description.

Crystal Carter:

Right. This is the title of the product, brand new, et cetera. You can set the template for it so that the template does the work for you. That I am all here for, and following best practices, et cetera. But I have seen, for good pages, for your big money pages, putting a good CTA on there work.
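The programmatic setup Crystal describes can be sketched in a few lines. This is purely illustrative: the template fields, the helper name, and the 155-character cutoff are our assumptions, not Wix's actual template engine.

```python
# A minimal sketch of template-based meta description generation:
# fill placeholders from product data, then trim to a snippet-safe length.

def meta_description(template: str, max_len: int = 155, **fields) -> str:
    """Fill a meta description template and trim it to a safe length."""
    text = template.format(**fields)
    if len(text) <= max_len:
        return text
    # Cut on a word boundary and add an ellipsis so the snippet reads cleanly.
    return text[:max_len].rsplit(" ", 1)[0] + "…"

template = "Buy {product} from {brand}. Brand new, free shipping. {cta}"
print(meta_description(template,
                       product="Sage Green Olive Tree (Faux)",
                       brand="ExampleStore",
                       cta="Order today."))
```

The point of the template approach is exactly what Crystal says: you set the pattern once and it does the work for every product page, no hand-crafting required.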

Mordy Oberstein:

No. It could be impactful for conversions. Fine. Yeah, I'm with that when Google's not rewriting them 99.9% of the time. I agree. We all agree to disagree. One thing we do agree on is that today we're talking about AI content and does it rank? Insert dramatic music. Why does understanding if AI content ranks matter? Why is understanding how AI content ranks on the SERP just the beginning? And will AI content continue to rank on the SERP? Assuming it already does, but I feel like I'm giving it away there that we're telling you that it already does.

Because we surveyed you, the SEOs, to see what you think. So y'all are our guests today, but also our guest, that'd be Marketing Over Coffee's host, John Wall, who will talk about when and when not to use AI for content generation. Plus we have the Snappy SEO News for you and who you should be following for more SEO awesomeness on social media. So get out your best AI prompts and put on a funny little cone hat like you went to a birthday party as episode number 80 of the SERP's Up Podcast weighs in on AI content and the rankings.

Crystal Carter:

Okay. Thank you for that exciting introduction. So let's just talk about why we're talking about this.

Mordy Oberstein:

Exciting introduction. Wow, sarcasm much.

Crystal Carter:

I don't know. Okay. All right.

Mordy Oberstein:

I was hallucinating because I was using AI.

Crystal Carter:

Okay. So why are we talking about this? The reason why we're talking about this is because it's been very up and down. When AI was on the back burner, Google was like, "Don't use AI, don't use AI everyone. I know you've heard about all these tools, but don't use AI. Be good little SEOs, don't use AI." And then ChatGPT broke virally, it was way more accessible. And then they were like, "Okay, you can use AI as long as it's helpful. You can use helpful AI. If it's helpful, that's fine, we won't penalize you. It's fine."

And it's been up and down. People have been testing AI content, pretty much unedited AI content, for years. There's a few people who've been doing a lot of experiments around this. Mark Williams-Cook has a very well-documented experiment that he's been running on this kind of content as well. And for a while, people were saying, "Oh, yeah, it doesn't really rank," or "It ranks for a while and then it will completely tank." And that's something that people have said.

However, it's my opinion and it's something that I've observed that, basically, if people remember back to around this time of year, around the spring of 2023, when Bing was like, "Yo, we have new Bing. We are putting AI in the SERP," and Google was going, "Oh, we also have AI," and they were trying to catch up, I started to see a lot more content that is AI-generated being openly AI-generated and ranking. And so I'm going to share a couple of examples of that. One is a big example, which is LinkedIn's advice folder, which has been going gangbusters pretty much since they started doing it. They built this up in the springtime of 2023, and they've seen some incredible activity from this.

If you haven't seen this, basically you haven't been on LinkedIn. And basically, when you go on LinkedIn, LinkedIn will ask you questions, "What do you think about this? What do you think about that?" And they call them collaborative articles in the folders under advice and things like, "What does a production coordinator do? What is regression testing and why is it important? What are the best practices for this?" Now, the way I stumbled upon this wasn't actually through LinkedIn; it was actually through a featured snippet. I found a featured snippet. It was talking about a technical SEO term, and it actually didn't have any contributions. But at the top of every article, it says that this article was created by AI and the LinkedIn community and they're doing incredibly well.

So they started building up this folder around March 2023. They peaked with their traffic at 2.8 million globally around September 2023. And it went down a little bit, but it's gone down to 1.7 million according to Semrush's stats. And I'll take that traffic. I'll take that. If that's where we're dropping back to, that's fine. And they're not the only ones. Another, smaller example is a site called Wellnite.com, which is a site that's actually working more in the YMYL space. So they are something that talks about counseling and they've got lots of articles.

One of them is bottling up emotions, how to let go, acknowledge your emotions, peaceful mind practice and things like that. And at the bottom, it says, "PS, this blog was created with AI software as a tool to supplement the author accompanied by Wellnite staff overview and supervision." And that is an example of a website that had been ticking along through 2020, 2021, getting global traffic of around 400 or so, according to Semrush. There are lots of blogs that are like that, lots of company blogs that are like that for smaller websites.

And theirs started ticking along. But then in 2023, they started adding in this AI-generated content, and they were able to increase the number of articles that they were ranking with. And they've now been able to double their monthly traffic because of that. And again, it's still fairly small traffic, but compared to where they were, that's a very significant jump. And the amount of traffic that they've seen increase between the start of 2023 and where we are in 2024 is significant. It's the most significant growth they've ever seen across their domain. So to my mind, AI content is doing just fine and there's lots of evidence to show that, but there's a few things that people can do to make it better. And I think the people who are doing it well are taking advantage of some of those elements.

Mordy Oberstein:

Okay. I think very much, it depends. The LinkedIn articles, I think, are a great case. First off, those articles are from LinkedIn, so you are not LinkedIn. So that's one thing to be careful of. But the second thing is the LinkedIn articles are interesting. I actually like them because they make me think, because I don't like the answers. I don't like the content that they offer. I find I comment on them, so I get a little badge thing on LinkedIn because I'm being like that. And most of my takes are like, "Nah, that's not how you should actually think about it." But interestingly enough, and I wonder if this plays into it or how it plays into it, you're actually getting first-person experience on those articles in the comments themselves. And that's my point, that it all depends with this kind of thing.

For example, Mark Williams-Cook has an article on Search Engine Land where he talks about LLMs generating content. And when he ran an experiment, he created 10,000 URLs of unsupervised AI content. And you see it ranks and then it just gets killed off. And there are a bunch of examples like that. So thinking about, "Does it rank unequivocally?" The answer is, it depends what you mean by that. If you're just spinning up random content or unsupervised content, it'll rank for a while. I think it's very much like spam content in general. It ranks for a while, and then it falls off. I was reading in Traffic Think Tank recently, it was Andy Chapa talking about a case where I think someone all of a sudden got... They must have bought tons of links.

And you see this, people buy tons of links, they start ranking for a while and then Google eventually figures it out and gets rid of it. I think it's very similar to that or any other kind of spam practice. If you're using AI in a spammy kind of way, you'll rank two, three, four months and then it'll fall off. And that's been a lot of the consensus around what's been shared in the SEO community about this. And we actually asked the SEO community on January 29th, "Does Google consistently, regularly rank AI content?" And out of 120 or so votes, 82% of people said yes, and the rest said no.

And then the comments are filled with these anecdotes. For example, Kristine Schachinger said, "It does and then it will not." And I think what she's talking about are those kinds of cases like what Mark ran, where it's just unsupervised, it doesn't make any sense, it's not good content, it's not helpful, and it'll get killed off, which is what Google's saying. I think there's a lot of politics behind what Google's saying also, but whatever. We'll leave that aside. It's not for this podcast.

Darth Autocrat, Lyndon NA. He wrote, "Yes, but it's kind of skewed due to the sheer volume of it and the overall scope of AI content. Even if you utterly ignore the spammer flood, legitimate networks are always, if not partially using, so it shows for news, et cetera." So that's a really good point, how you use it, how you go about using it, it's really important.

Pedro Diaz wrote, "I anticipate the answers are all going towards experience people had and seen recently within their search bubble," which I think is a very good point about having broader views and a wider spectrum of experiences. And I don't think I've seen a wide study on AI content ranking. I think that would be fascinating to see to address that point.

Crystal Carter:

Yeah. And I think that there's so many different variations in the ways that people are using it because the other thing is that there's lots of people who aren't ranking with AI content. But there's also the case that, and we've seen this with the AI tools that we have in Wix, we've used AI tools to help people do things like the meta descriptions, for instance. We talked about those. And what we've seen with that is that there's a lot more people who have accessibility, with lowercase a, to some of these techniques because they don't have to worry about the barriers to entry regarding grammar, for instance, or regarding even sometimes the ideation.

So that Wellnite website is a classic example. They were publishing occasionally, but they were able to increase the rate of publishing because they were using some of these AI tools. So I think that there's going to be a lot of people who are getting more access to these things, who are able to articulate themselves better with the help of some of these tools. And that I think is a win. I think that's a good thing that people who were previously not able to understand or use meta descriptions at all, for instance, are able to engage with that content. I think that's a benefit, and I think that that's something that will affect which pages are ranking and will affect how many pages are ranking. However, there's going to be a lot of people who are just putting out junk, but those people were putting out junk anyway before in lots of other ways.

Mordy Oberstein:

Yeah. That's what I was going to say. It's really a matter of mindset, and we'll talk about it with John later on: how do you go about building content utilizing AI and expediting your processes? Because at the same time, I think something very important to keep in the back of your mind when you're using AI to create content, which you should certainly be doing the right way, is where's Google trying to go? And this speaks to a lot of the Reddit controversy on the SERP at the moment as we're recording, and who knows if it'll be fixed by the time we're done recording.

But there's been a lot of pushback about the amount of Reddit results Google's showing on the SERP. And a lot of that has to do with the fact that Google's trying to push for first-person, first-hand, experience-based content, and Google's having a lot of issues with this. But you see this trend keep coming up with things like Reddit ranking. Gisele Navarro put out an interesting post about product review websites and how folks like Rolling Stone have jumped into the product review space.

And one of the things that they're doing to rank is relying on first-person experience, in a way. I'll put a little caveat, a little asterisk on that: by using first-person expressions, like I, we, our, which I've personally seen a huge influx of folks doing over time. So if you take the same product review page now and put it in the Wayback Machine, the amount of our, we, first-person language has increased exponentially.

And there's a recent study that Cyrus Shepard did that shows, and again, it's a correlation study, so no one freak out like, "Oh, no, it's correlation." But correlation sometimes can point you in the right direction and correlation does mean something. And one of the things that he noticed in websites that are winning is the usage of first-person pronouns: me, we, I, that sort of thing. The direction where Google is trying to go kind of contradicts a lot of the things that people are doing with AI content. So when you're building AI content, you need to keep in mind where the ecosystem is shifting and leverage AI the right way within that context.
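Mordy's point about first-person language is easy to spot-check yourself. Below is a rough sketch of a pronoun-density measure; the pronoun list and the metric are our own assumptions for illustration, not Cyrus Shepard's actual methodology.

```python
# A rough sketch: what share of a page's words are first-person pronouns?
import re

FIRST_PERSON = {"i", "we", "me", "my", "our", "us", "ours", "mine"}

def first_person_density(text: str) -> float:
    """Share of words that are first-person pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON for w in words) / len(words)

old = "This carrier is durable and the zippers are sturdy."
new = "We tested this carrier with our own dogs, and I loved the zippers."
print(first_person_density(old), first_person_density(new))
```

Running a check like this on the current and Wayback Machine versions of the same review page is one quick way to see the shift Mordy describes.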

Crystal Carter:

Right. So I think that the first-person experience is super important for that. And there's a couple of reasons why. Whenever I'm doing content evaluations or making content recommendations, particularly for blog content or customer service content and things, I very often say that people should speak in the second person, like, "You should do this, you can do that, you could do this. We do this for you because it is good and you will like it," and that sort of thing. And the reason why is because a lot of people are on their mobiles, and that is a one-person situation. Even if you share that with somebody, you'll share it to their mobile and they will read it personally on their mobile phone, individually. So it's like, "Hi, I am talking to you," it's very much an individual situation.

So users are going to be responding to that in lots of ways. And I think that when we think about AI content, I don't think that AI content is necessarily opposed to that. I think it's a way to organize that sort of thing. One really good example that I saw in terms of product reviews was Spruce Pets. They had a great product review of dog carriers, and clearly they had people who were testing it with their dogs. They were like, "Here I am with my dog, here is my dog in the dog carrier," that sort of thing. But then here's where you use the AI.

The AI is where you pull in how to organize all the product detail between the different ones. How you say, "Okay, this one has a carrier, this one has a pocket, this one has a thing for treats," all of that sort of stuff. That's what you use the AI for. And you maybe use the AI to pull out some of the common threads of some of the first-person things that you're using. So I don't think it's necessarily opposed, but I think you should use it to clean up some of the qualitative information.

Mordy Oberstein:

Well, that's part of the problem with the discussion, is that we look at it as zero-sum: either there's AI content or not AI content. So I'll tell you, one of the things that I'm a big proponent of is what I'll call situational content writing. One of the ways you can actually show expertise in a real way, other than just loading up the page with we, I, me, which anybody can do, an LLM can do that if you tell it to, is actually predicting the situation the consumer's going to face and then writing about that situation. Because that actually demonstrates you actually know what the heck you're talking about and have actual experience. Because you can't predict the next scenario unless you have some kind of situational experience.

However, if you're talking about, let's go with the pet carrier situation: how to get your pet into the pet carrier if they don't want to go in? As someone who has a pet, you'll predict: you can try this, and if that doesn't work, then try this. But the general "What is a pet carrier?", let's say you wanted to put that there, but you probably don't need to, but let's say you did, then you can have an AI spin that out. Sure, go ahead and write that part. It's not a zero-sum. So let the AI save you time and let it make you more efficient in the right spots within the expert and experience-driven content you want to create.

Crystal Carter:

Right. And I think that it's also important to remember that there's going to be some content that people don't care what an AI thinks about it like, "Is your pet happy in the pet carrier?" for instance. That's something I don't really care what ChatGPT thinks about whether or not my Cocker Spaniel is happy in the carrier. If I hear from other people, "Yeah, my dog was really happy. He was wagging his tail, he kept sniffing my ear," or whatever, that sort of thing. That's something that I would like to hear first-person knowledge of and that's something that you should be aware of.

And then that situational stuff is really important because that situational information is stuff that you can get from users, from real humans, from real human users and from real personal experience. And I think that you can use, again, it's not zero-sum, you can use AI to help you collate and to help you organize some of the things that you're getting from user videos, user interviews, customer feedback forms, that sort of stuff, to help you bring some of that together. But you're going to be able to add value with a cyborg kind of approach, if that makes sense.

Mordy Oberstein:

And look, that's going to be the kind of content that ranks, fundamentally. There's two possibilities in my opinion. Either Google will figure this out to make sure that the content that has actual expertise and actual experience, which may be supplemented by AI, ranks, or it won't be a search engine that we'll go to anymore. So either way, it doesn't matter. Okay.

Well, since we're already talking about AI and content and ranking, I think it behooves us to talk about AI for content generation so that you know how to create the AI content that ranks, and not the AI content that ranks but really shouldn't. So rank responsibly. Please rank responsibly. To help us talk about this, we have a very special guest, the host of the Marketing Over Coffee Podcast, John Wall, as we move beyond SEO and into the great beyond. Hey John, welcome to the podcast. How are you?

John Wall:

Great, thanks. Glad to be here.

Mordy Oberstein:

So you're the host of the Marketing Over Coffee Podcast. You also work for Trust Insights and do a lot with AI. Now is the time on the podcast for you to pitch.

John Wall:

Yeah. So with Trust Insights, we've done a lot of stuff with generative AI. Our chief technologist, Christopher Penn, has been using AI in PR and marketing for over 15 years. So we already had a bunch of stuff that we were using AI for as far as attribution and predictive analysis for creating content calendars, things like that. But generative AI has now spun up, and he is just in demand everywhere. In fact, he's speaking in London this week.

Yeah, it's become huge. And so we actually have put together a framework of generative AI for marketers, stuff that you can do to create content, do better in SEO. There's a whole bunch of different avenues and strategies, everything from the basic "write my blog post," which is what you're talking about, stuff that comes out and is weak at best. And then at the other end of the spectrum, you're trying to create stuff that nobody else is doing and actually has some novelty, and that will get you to the higher amounts of traffic and positioning because it's quality stuff.

Crystal Carter:

I think that one of the things that stood out from that was you said you've been working in this space for years, and I think that that's one of the things that a lot of people don't realize. People say, "Oh, AI is new." Google's been using AI in the SERPs for years. But people like folks from your team have been using these tools for many, many years. And I think that that gives you particularly interesting insights on this. There's a lot of people who are just new to the game and just getting involved, but I think that there's going to be some things that you've tried and understand more than other people.

John Wall:

Yeah, I think it is very different than a lot of the other trends that come up through marketing. When we had cryptocurrency, it was a huge deal, and NFTs and all that kind of stuff, they were brand new when they were created. But AI, as a concept, was created back in the 1950s, right?

Crystal Carter:

Right.

John Wall:

The academic community understood what this could do and where it was going, and it was just that the computing power wasn't there.

Yeah. So the idea of being able to figure out, "Identify the difference between human and AI, can you fool people?" And of course, there have always been mechanical Turks, right? There have always been machines that can make you think you're talking to a computer when you're really not, because there's still some human interaction in there on the back-end.

But yeah, we're at least getting close to the point where ChatGPT could fool somebody for a little while, for four or five prompts, before you figure out that it's not a person. But better customer service tools have already been able to do that to some point. They can get you there. But yeah, so now people have kind of jumped on the AI bandwagon, and we're at the peak of Gartner's hype cycle right now where every product has AI attached to it.

Mordy Oberstein:

Oh, my gosh.

John Wall:

My tires were rotated last week with AI over at the gas station.

Mordy Oberstein:

I have AI tires. Yours were rotated with AI. I have actual AI tires.

John Wall:

You have AI in the tires.

Mordy Oberstein:

Yeah.

Crystal Carter:

Do they tell you when you're in the wrong lane?

Mordy Oberstein:

No, they don't do anything for me whatsoever. They hallucinate and tell me I'm in a desert on the highway when I'm really driving in my actual driveway. So I don't know what's flying. I'm going to use my AI tires.

John Wall:

Yeah, it is everywhere.

Mordy Oberstein:

Yeah. So if we're going to actually use AI in a real way, we're going to say, "Okay, let's create content and let's use AI." Is it carte blanche, like just go wild?

John Wall:

No, no. There's a lot of ways to go. In fact, I would even back up. I would not start with generative AI. We've done predictive models. So for example, we've done a bunch of stuff in the food space, and it's worked so well we had to come up with a sample for the rest of the world. So we have the cheese report, which is an annual report that comes out and it talks about, "Okay, which cheeses are most searched for every week of the year?" And so you can-

Crystal Carter:

It's cheddar, right?

Mordy Oberstein:

American.

John Wall:

Cheddar. We're just coming off hot cheddar with the Super Bowl here in America. That's very popular. Mozzarella surges as we get close to Christmas.

Mordy Oberstein:

Kraft singles, American cheese is not-

Crystal Carter:

Yeah, it's cheddar. Cheddar is your best cheese. It's good for almost every situation.

John Wall:

Cheddar is always ranking high. It's definitely top 10 most of the year. But look, for a content marketer, the big one to look at now is halloumi: grillable cheeses will be popular in June, July when that time of the year comes. So if you've got your content calendar, you should be starting to script out those halloumi videos and recipes and all that stuff now, so that you've got that stuff dropping in May and you're able to get some Google juice to that before peak search season in June, July.

Mordy Oberstein:

Some hot cheese right there.

John Wall:

Yeah, that's the hot cheese. And we have it for a whole bunch of other foods, but the brands that get those reports don't let us share that with anybody else. So we're not able to share our food insight outside of cheese, but that gives you an example of using some predictive AI to actually create content that is in demand. And that way, you've got your stuff all updated when you hit peak search.

Crystal Carter:

And I think, is there a little bit of overlap for content like that? For instance, let's say you're running a stats tool throughout the year, something like the Billboard Hot 100, which changes every day. If you're running a SaaS tool like that, how much crossover is there between programmatic elements and AI elements in creating content around that?

John Wall:

It completely overlaps. AI is not going to come up with anything new. That's really what it is. You're applying programmatic strategies to figuring out what's going to be coming up and where it goes. And there's a ton of ways to apply that too. Another way we see it all over the place is with reviews or other huge libraries of content where, instead of generating, you have it do summarization or classification.

Take a huge batch of reviews and have it come up with, "Okay, what are the five most common things that people like or don't like about this product?" And now that's a blog post that's based on your data, that's proprietary, so somebody else using GenAI can't just come up with that post. You're the only one that can do that, and it's going to be on target, but it's unique and it's your voice and it's probably going to be stronger across the board.
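The batching step John describes can be sketched as a prompt builder: collect your own review data into one summarization request. The function name, the prompt wording, and the sample reviews are hypothetical, and the actual LLM call is left out; this only shows the structure.

```python
# A sketch of batching proprietary reviews into one summarization prompt,
# per the "five most common likes/dislikes" idea discussed above.

def build_review_summary_prompt(product: str, reviews: list[str]) -> str:
    """Assemble a single summarization prompt from a batch of reviews."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(reviews, 1))
    return (
        f"You are a product analyst. Below are customer reviews of {product}.\n"
        "List the five most common things people like about it and the five "
        "most common complaints, as two bulleted lists.\n\n"
        f"Reviews:\n{numbered}"
    )

reviews = [
    "Melts beautifully on a burger.",
    "Didn't melt as much as I hoped.",
    "Great flavor, arrived fresh.",
]
print(build_review_summary_prompt("halloumi cheese", reviews))
```

Because the reviews are your own data, the resulting summary post is something nobody else's generative tool can reproduce, which is John's point.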

Crystal Carter:

I think that's great because with something like that, you can summarize and hit some of the key points with the keywords there. And also, it's got good user value because me, as a user, if I'm trying to decide whether or not I should use halloumi cheese or Havarti cheese on my cheeseburger, for instance, it might be like, "Yeah, this was a good cheese, but it didn't quite melt as much on the burger, for instance." And I can get that summary without reading 400 reviews that include like, "Oh, I dropped the cheese on the ground and things. It's one star." It's like, "No, you dropped it. That's your problem."

John Wall:

That's like the Amazon classic of the products that suck because the box arrived destroyed. There's all these-

Mordy Oberstein:

Right. I love that. I don't care.

John Wall:

Yeah.

Mordy Oberstein:

Because you know who's going to destroy the box in three seconds anyway? My children.

John Wall:

That's just been part of the customer experience.

Mordy Oberstein:

But this is a similar point to something we discussed. We had a webinar with Mike King and Ross Hudgens where we talked about giving AI rules and confines to work within, as opposed to a very open, unconfined scenario. If you give it parameters to work with, it does much better and it does what you want it to do. And that's where I feel like it can offer real insights and a real ability to produce content for you that you couldn't have done otherwise, or couldn't have done as quickly. But giving it an open prompt and just telling it to go, without any borders or any confines, is probably a recipe for disaster.

John Wall:

Oh, yeah, absolutely. So we even have a whole course that's seven hours of training, and a huge chunk of it is writing effective prompts. And so that's where some of the artistry is. And we have a whole framework, we call it the RACE Framework, where for any prompt you want to do R-A-C-E. You give it the role: you say, "Hey, you are an engineer that is working with stereo equipment." A is for action: "Your task is to come up with a list of whatever."

Then you give it context. That's the C, where you're saying the audience is at this level of professional: is it engineers with 10 years' experience, or is it people that have no experience with audio equipment? And then E, execute: you actually give it the instructions as far as how to write this. It should be at X grade level, it should cover X number of bullet points, have a summary. Basically, for the best prompts, you have these huge paragraphs of stuff that you're using.
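The RACE structure John describes is easy to mechanize as a simple template. This is an illustrative sketch, not Trust Insights' actual tooling; the field names follow John's R-A-C-E description and everything else (the function name, the example values) is made up:

```python
def race_prompt(role, action, context, execute):
    """Assemble a RACE-style prompt: Role, Action, Context, Execute."""
    return "\n".join([
        f"Role: You are {role}.",
        f"Action: Your task is to {action}.",
        f"Context: {context}",
        f"Execute: {execute}",
    ])

prompt = race_prompt(
    role="an engineer who works with stereo equipment",
    action="come up with a list of common turntable setup mistakes",
    context="The audience is hobbyists with no audio experience.",
    execute="Write at an 8th-grade level, cover 5 bullet points, end with a summary.",
)
```

Templating the four parts this way also makes it easy to reuse the same role and context across many prompts while varying only the action, which is where the "huge paragraphs" John mentions come from.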

Mordy Oberstein:

That's the thing. AI is great. You have to shape it to what you want it to be. And I think the problem is that it's so easy and there's no barrier to entry that people think, "Oh, I can do this." To me, it's like picking up a baseball bat. Yes, you can pick up a baseball bat and you can swing it, but if you look at what the pros are actually doing, there's so much more that goes into it: managing the load, and where's your weight shifting? And when is your weight shifting? And where's your elbow? And how is your wrist turning?

There's a million things that go into actually swinging a baseball bat the real way versus you just taking a whack at it with your wonky-ass swing. And it's very similar to AI. Yes, you could put in a prompt and yes, you can get an output but that's not actually swinging the bat.

John Wall:

Yeah. Another way to think of it: you have that thing we've always talked about in software, tools for experts versus expert tools. Look, right now, the state of AI, it's like a router. If you're a carpenter who knows what the heck they're doing, you can do amazing things with this tool. If you're somebody who's just playing around, you could end up losing some fingers.

Mordy Oberstein:

That's okay. The AI will add the fingers back for you, and then some.

John Wall:

Man, I hadn't thought about how much that hits. Yeah. But thankfully, AI has tons of fingers that it can spread around liberally to everyone. I even saw that on Amazon, they have a fake plastic finger you can buy so that you can wear it around and then you can tell people, "Oh, no, that's obviously AI-generated, because-

Mordy Oberstein:

People are interesting, huh?

John Wall:

Yeah. I'm thinking I don't need to go through the work of making sure the finger matches. That seems like a lot of effort to perpetrate a fraud, so I'm not going to bother with that.

Mordy Oberstein:

You have to really take it to the next level because let's say, I don't know, you're out at the beach and your fingers get tanned. You're going to have to have a tanned finger.

Crystal Carter:

Also, sometimes they put arms in different places that shouldn't be there. The arms coming out in the middle, and you're like, "What is going on there?" But I've seen it with pop stars. There was a whole thing, and I'm going to show myself here, but there was a whole thing with Nicki Minaj, the popular rapper. She had a song she put out unannounced, and the album art she used was clearly generated with AI, and it clearly had not gone through QA. It was supposed to be police tape and it didn't say "police" all the way through, and the figures had different arms, and they had all of this sort of stuff.

And even people who have the means... She's a multi-millionaire, and she has a PR team, and she has many people available. But even people who have the means aren't going through the QA. In terms of process, you have your prompting thing, you have your data thing, how much is the QA of the AI part of your process of getting good quality stuff?

John Wall:

Yeah. For us, that's a huge part of it. Really, in fact, there's nothing that can be released or put out there until it's been pounded on by experts who know what it should be doing and where it should be going. That's the real challenge of this. And then people don't get this either, is that you don't just build it and start using it. No, you build it, you run it and then you train it. And it needs to be constantly trained. Training needs to be a permanent part of your process.

Mordy Oberstein:

You mean, like anything else, it requires hard work?

John Wall:

You don't just hire it and then fire your whole marketing team the next day.

Mordy Oberstein:

You completely kill what AI means to me, and I am now completely uninterested in it.

Crystal Carter:

It's magic. Isn't it just magic?

Mordy Oberstein:

Yeah. If it's not magic, I don't care.

John Wall:

Yes. In fact, it gets your coffee right in the morning and it will drive you home at the end of the day. Yeah. No, it doesn't do all the things.

Mordy Oberstein:

If it doesn't watch my kids, I don't care.

Crystal Carter:

I think it's important to remember the learning part of the machine learning because that's the other thing. For marketers, PPC, for instance, has had machine learning going on for years, for years and years. Facebook has had it in there. Google Ads has had it in there. And you had to do the machine learning part of it. You'd have to, and you'd have to train it and train the model and retrain the model and retrain your parameters and all that sort of stuff. So for people who just think you can set it and forget it and it will just do magic, it's just a rude awakening, I think.

Mordy Oberstein:

It's not the Ronco slow cooker, set it and forget it, which is my favorite-

Crystal Carter:

Hey, I love my slow cooker. My slow cooker is that. I just put all the things and then...

John Wall:

Set it-

Mordy Oberstein:

And then forget it.

John Wall:

One thing that we've been doing that is pretty interesting because of this idea that these models do read everything that's out there, we've played around with actually doing press releases again. We had originally abandoned press releases as a complete waste of time because they were just lost in the five million other press releases that came out today. But now we've been working with some copy that's optimized for large language models to scan and grab.

So you can write about unique content. And it's funny, it's classic spammer stuff in that these press releases don't read that well. A human reads them and they don't make a lot of sense. There's some thread there, but the key is you've got 15 or 20 phrases in there that you ultimately want a large language model to think you are the answer for.
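If you're drafting copy around a target phrase list like that, a quick coverage check is trivial to script. A toy sketch (the phrases and copy below are invented for illustration; this only checks presence, not how an LLM will actually weight the text):

```python
def phrase_coverage(copy_text, phrases):
    """Report which target phrases do and don't appear (case-insensitive)."""
    lowered = copy_text.lower()
    found = [p for p in phrases if p.lower() in lowered]
    missing = [p for p in phrases if p.lower() not in lowered]
    return found, missing

phrases = ["marketing analytics", "predictive AI", "attribution modeling"]
copy_text = "Trust Insights applies predictive AI to marketing analytics problems."
found, missing = phrase_coverage(copy_text, phrases)
```

Running a check like this before publishing at least confirms the draft actually contains every phrase you're trying to seed.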

Crystal Carter:

Right. And to come into the corpus of their knowledge on that particular topic.

John Wall:

Right. Exactly.

Crystal Carter:

So one of the things that we find interesting, particularly with large language models that are public-facing, like SGE and Bing Chat, for instance: because it's so expensive to answer questions with AI, so expensive for them to run, a lot of times they will truncate the question. So you might write a really long question like, "What was the breed of dog that Dorothy had in The Wizard of Oz starring Judy Garland? What exactly? Which kind of breed?"

So you might write all of that, but they will truncate it. Even if you ask a similar question that's not exactly the same, the LLM will essentially distill it down to "Dorothy's dog in The Wizard of Oz" and give you roughly the same answer. When you're creating a press release with those kinds of phrases, are you writing with those questions in mind, those lowest-common-denominator questions? How are you identifying the phrases you want to surface for?

John Wall:

Yeah. So our analysis on that has been just analyzing what we're getting from answers now, and seeing the format and types of stuff that it wants to see. But yeah, you've hit upon a whole other area of this study that is a big deal: this idea of managing your tokens. For a lot of these systems, if you go with the paid version now, suddenly you get to send larger queries and get larger, more in-depth responses back. So it's a different level of information and quality.

The other one is, as you're building prompts, we find that it's much more effective to do long strings, where you keep continuing to correct and add. And so one trick with that is, after you've done four or five prompts, have the model summarize what you've learned so far, so that it can boil down your previous 10 queries into one paragraph. And then, when you do more research, you start with that one summarized paragraph and you basically get to skip the initial round. Because yeah, these all have moving context windows: after four or five prompts, they start to forget the original stuff and they will start hallucinating again on things that you've walled in.
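The rolling-summary trick John describes can be sketched as a simple loop: once the conversation history gets long, collapse it into a single summary and continue from there. This is only an illustration of the idea; the `summarize` function here is a stand-in for a real "summarize what we've learned so far" LLM call, and the turn limit is arbitrary:

```python
def summarize(turns):
    # Stand-in for an LLM "summarize what we've learned so far" call.
    return "Summary of prior discussion: " + "; ".join(turns)

def add_turn(history, turn, max_turns=5):
    """Append a turn; once history exceeds max_turns, collapse it to one summary."""
    history = history + [turn]
    if len(history) > max_turns:
        history = [summarize(history)]
    return history

history = []
for t in ["prompt 1", "prompt 2", "prompt 3", "prompt 4", "prompt 5", "prompt 6"]:
    history = add_turn(history, t)
```

After the sixth turn the whole thread has been boiled down to one paragraph-sized entry, which is what you'd paste in to "skip the initial round" on your next session.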

Crystal Carter:

So say I want it without the hat on it, and they're like, "With a hat?" And you're like, "No, that's not what I said." And you have to go all the way back to the beginning.

Mordy Oberstein:

All the way. I find that with images. Using Copilot or Gemini, I find that it loses its train of thought like my grandmother.

Crystal Carter:

You're like, "Come back, come back." And it's like, "Right over here." And you're like, "No, no. No, this way."

John Wall:

Yeah. Yeah, they completely start to run afield. And I don't know. And then really, for this whole space, there is this question of they are just doing an obscene amount of background computing that costs money. And sooner or later-

Mordy Oberstein:

So much.

John Wall:

Now, thankfully... Well, not thankfully for me, but thankfully for folks in AI, all of the MarTech and cryptocurrency VC money has been dumped into AI for this next year. So there is a pile of cash there now to give everybody a free ride. But the question is, how long is that going to last? The good news is a lot of it's free, and they're looking at $20-a-month kind of things, which is great. It doesn't cover the cost, but it will probably boil down to two or three champions, and then yeah, maybe we can finally open up that Alzheimer's window a little wider so that they can remember where the heck we're going.

Mordy Oberstein:

If people want to keep track of this by keeping track of you, where can they find you?

John Wall:

Oh, I'm always over at marketingovercoffee.com. And then for work stuff, we're at trustinsights.ai. We have a Slack group, Analytics for Marketers. If analytics is your thing, come on over there. We're always talking about it every day, and it's a great place to-

Mordy Oberstein:

Do you know how to use GA4?

John Wall:

Oh, we are all about GA4. Actually, the big news for us now is we've been doing GA stuff forever, but we actually offer Matomo in-house for people that are sick of GA4 already, which is 98% of people. And we do a bunch of work with Adobe too. So yeah, it's funny, we do all this cool AI stuff, but the reality is 98% of our customers have problems with the plumbing, and so that's the dirty work that we get done.

Mordy Oberstein:

Even imagined as toilets.

John Wall:

This is true.

Crystal Carter:

You should get an AI to clean the...

John Wall:

Siri, clean my basement. What's happening?

Crystal Carter:

Whenever I think of AI, I always think of the housekeeper from The Jetsons. When is that coming?

Mordy Oberstein:

Oh, Rosey.

John Wall:

Rosey. Oh, yeah.

Crystal Carter:

When is Rosey coming to my house? That's what I'm-

Mordy Oberstein:

That's it. I need someone to watch my kids.

Crystal Carter:

She has a lot of sass though, but it's fine.

John Wall:

You want to read a crazy one, read about some of these disasters with the automated vacuuming machines. If the pet gets sick and then the machine hits it, and then spreads it around the whole house, fantastic.

Mordy Oberstein:

No, no, no. Okay. On that happy note, give John a follow and check out all the great stuff they're doing over there at Trust Insights and the Marketing Over Coffee Podcast. Give that a listen as well. John, thank you so much.

John Wall:

Thank you.

Mordy Oberstein:

You know what I wonder? I wonder if there'll be some AI-related news in the SEO News.

Crystal Carter:

Maybe something about BARD or BERT.

Mordy Oberstein:

Or Ernie or Gemini or Scorpio.

Crystal Carter:

Or Elmo.

Mordy Oberstein:

Or Elmo or Barry.

Crystal Carter:

Or MUM.

Mordy Oberstein:

Or Barry.

Crystal Carter:

Oh, my gosh. Oh, my gosh!

Mordy Oberstein:

Is Barry a constellation?

Crystal Carter:

There should be a Barry algo. I think they keep-

Mordy Oberstein:

That would be amazing, if Google called the most penalizing algorithm ever created "Barry Schwartz." I think that would go over well.

Crystal Carter:

Yeah.

Mordy Oberstein:

People already blame him for when they lose their rankings.

Crystal Carter:

No.

Mordy Oberstein:

Yeah, all the time.

Crystal Carter:

What?

Mordy Oberstein:

Yeah, ask him. When Barry-

Crystal Carter:

Okay. I would like to just make a public service announcement, Barry's not responsible for your rankings.

Mordy Oberstein:

No. Barry's just reporting on what happened. There appears to be an update that Google didn't announce. He's just reporting. But there have been many times, Barry's... Actually, I've interviewed him about this. Barry's gotten death threats.

Crystal Carter:

No.

Mordy Oberstein:

Yeah, over rankings. He's just reporting, people.

Crystal Carter:

He's not in charge of it.

Mordy Oberstein:

He's not in charge.

Crystal Carter:

He didn't do it.

Mordy Oberstein:

You shouldn't threaten anybody with death.

Crystal Carter:

Barry's a nice man. You shouldn't do that.

Mordy Oberstein:

I wouldn't go that far, but yeah. No, I'm kidding. Barry's a gem, which brings us to the Snappy SEO News. In case you haven't realized, this is our time for the Snappy News. Snappy News, Snappy News, Snappy News. The update is over. No, not that one. Not that one. Per Barry Schwartz over at Search Engine Land, the Google March 2024 spam update is done rolling out; the March 2024 core update is very much still alive, seeing a whole bunch of reversals in the second wave of volatility that came out. We covered that last week, if you want to check it out. But anyway, onto the spam update.

It's done. After 15 days, on March 20th, Google announced it's complete, it's rolled out. If you'll recall, Google announced there were three new elements being integrated into its spam algorithms: scaled content abuse, expired domain abuse, and site reputation abuse. One of them, site reputation abuse, is not hitting until May.

And that's parasite SEO. That's where, say, I want to push my content but I don't have a very authoritative website, so I go to Sports Illustrated and say, "I will buy a page on your website, write up an article that has nothing to do with sports, and get traffic through that." The other element, expired domain abuse, is where you say, "Hey, that domain used to be about whatever topic. I can get the domain, bring it back to life, and Google will think, oh wow, it's so authoritative, that was such a great website way back when, and I can write whatever garbage I want and it'll rank."

So that and the scaled content abuse part of the algorithm is also live right now, and that's where basically you're just throwing up tons of just AI garbage, not curating it at all, not thinking about it at all. You're just pushing out content at scale without any thought whatsoever to essentially manipulate rank and users. So that was also in the March 2024 spam update, and we saw a lot of activity around that. Part of all of this were all of the manual actions, which we discussed on the podcast as well.

Thousands of sites have seen manual actions. A lot of the examples being shared have to do with that scaled content abuse, where Google's killing off the entire website because, again, people are just spinning up random... In some of the cases I've seen, it's not even good AI. It's just bad, garbage AI. It's nonsensical, even. So those websites have been completely killed off. If you have been doing those kinds of things, and you've seen your content and your rankings just fall to the bottom floor, that might be something you want to take a look at.

If you are doing good, decent work, you shouldn't really be affected by a spam update. Now, one of the things that you're going to be thinking is, "How do I know if I was hit by the spam update or by the March 2024 core update?" Well, one way to know is if you're doing things like scaled content abuse or expired domain abuse, then it's probably the spam update. Again, if you're not doing these ridiculously spammy, absurd things and you're seeing a ranking loss, one, the March 2024 core update's still not done rolling out. Again, as I mentioned, I'm seeing tons of reversals. So don't do anything yet. As Google mentioned, be patient.

If, after the March 2024 core update is done and rolled out and completed, you're still trying to figure out whether it was the spam update or the core update: again, if you haven't been doing these ridiculously, overtly spammy... I don't even know how to... mind-boggling things, then it's probably the core update. That's my way of looking at it. Okay, onto article number two, again from Barry Schwartz, but this time over at Search Engine Roundtable. By the way, got to say happy birthday to Barry Schwartz; March 22nd was Barry's birthday. I hope you got a lovely birthday cake. Anyway, Barry writes, "Google SGE feedback on affiliate results and Google News: Does This Interest You Pop-Up."

Hope you all caught all of that in the headline. Let me explain. Google has done this for a while, but this is a different way of doing it. This one was picked up by a friend of the show, friend of baseball, Glenn Gabe, who saw that for an affiliate website ranking on the results, Google had a little widget there that says, "How helpful was the result above? One extreme being not helpful at all, the other extreme being extremely helpful." And you just selected the dot on the spectrum of how helpful you thought that it was. So that's really interesting to see that on affiliate stuff.

Another example was within a Google News result. So there was a news card that showed up and there was a little pop-up thing that says, "Does this interest you? Thumbs up, thumbs down." Google has been doing these things for a long, long, long time. There are a bunch of iterations of this with featured snippets: "Did you find the featured snippet helpful? Yada, yada, yada." So "this is not new," to quote Barry, but it is interesting to see. Google, again, has done this many, many times in many, many different ways over the years. I think it's really interesting that it's happening now, especially for the affiliate result that Glenn showed.

I just find it really interesting that you have all of these things going on in the ecosystem: the whole Reddit question, the spam updates, the core updates, the quality of results, AI content, SGE. And Google's now rolling out this little test of showing a widget where you can offer your feedback on how helpful the result was. So I think that aligns with what's going on in the ecosystem overall, and especially on something that's product review or affiliate related. So definitely interesting to see Google doing that there. And that is this week's Snappy SEO News. It would be nice, though, if they had a Barry update, but one that was only helpful and rewarding to all the websites that were good.

Crystal Carter:

And then afterwards everyone would go, "Thank you."

Mordy Oberstein:

Thank you. And they could say, "It's new. It's a new algorithm."

Crystal Carter:

Indeed. Indeed. You were saying, "Wouldn't it be nice?" And now I'm thinking of that Beach Boys song.

Mordy Oberstein:

That's a great song, actually.

Crystal Carter:

It's a great song.

Mordy Oberstein:

How did Beach Boys stand up? That's a good song.

Crystal Carter:

Generally, Beach Boys, they've got that good album. They've got one really, really good album.

Mordy Oberstein:

Right, Pet Shops.

Crystal Carter:

Yeah, that's a great album. Listen to it start to finish. It's a great album. It's really well-produced. Great album.

Mordy Oberstein:

Yeah. You know what else is great? Our follow of the week. And our follow of the week is none other than Dale Bertrand.

Crystal Carter:

Dale Bertrand. If you've been following how to rank with AI, then you must be following Dale Bertrand. If you're not following him, please do. He's very active on LinkedIn, and you can see him speaking all over the place. He's the founder and president of Fire&Spark, with over 15 years' experience, working out of Boston. And he shares some great resources around AI and content, and using it intelligently, in a way that really, really works. So highly recommend.

Mordy Oberstein:

And just a super nice guy, super nice. I met him at BrightonSEO San Diego in November, and he was super nice.

Crystal Carter:

Super nice. And he's very often at BrightonSEO in the UK. So yeah, he's a great speaker, great writer, great SEO marketer, so highly recommend. Great follow.

Mordy Oberstein:

We'll link to his profiles in the show notes. Now I realize, by the way, I completely botched that. It was not called Pet Shops; it's called Pet Sounds by the Beach Boys. What the hell am I talking about? I am ridiculous.

Crystal Carter:

Pet Shops. You're thinking of Pet Shop Boys. You hallucinated. Are you AI?

Mordy Oberstein:

Yes.

Crystal Carter:

I knew it.

Mordy Oberstein:

It explains a lot. Right? So I want to point out the irony of saying we're not Beach Boys fans on a podcast built on surfing themes. They're not my favorite. I'll say another hot take: the Beatles aren't my favorite either.

Crystal Carter:

Same.

Mordy Oberstein:

I enjoy the songs. I have Beatle albums or I had Beatle albums back in the day. Who has albums anymore? But they're historically super important.

Crystal Carter:

Sgt. Pepper's.

Mordy Oberstein:

Yeah. Even that, I don't know. I'm not...

Crystal Carter:

Come on. "Come Together," that's a jam.

Mordy Oberstein:

They're not dark enough for me. They're too happy. All this music, it's just too happy. I need a little bit of an edge. I'm more of a Rolling Stones person than I am a Beatles and Beach Boys person.

Crystal Carter:

Okay, I can see that.

Mordy Oberstein:

I need sorrow in my music.

Crystal Carter:

They're good bands. I had a very long conversation on a car ride and we decided that Led Zeppelin was the best band.

Mordy Oberstein:

What? I like Led Zeppelin. Don't get me wrong.

Crystal Carter:

There's a lot of criteria that we went into. We discussed it for a very long time.

Mordy Oberstein:

It's just wrong.

Crystal Carter:

It's not wrong. It's not wrong.

Mordy Oberstein:

That's so wrong. And I love Zeppelin. I actually saw Robert Plant live.

Crystal Carter:

Where?

Mordy Oberstein:

The lead singer of Led Zeppelin, I've seen him live.

Crystal Carter:

No, I know who he is. Where did you see him?

Mordy Oberstein:

Oh, where? At Madison Square Garden. He opened for The Who back in 2002.

Crystal Carter:

Oh, Zeppelin's definitely better than The Who.

Mordy Oberstein:

What? No, that's ridonkulous.

Crystal Carter:

Okay. All right.

Mordy Oberstein:

The Who were the godfathers of punk. All right. Anyway, we'll talk about this the second we end this podcast, which we're going to do right now, so we can talk about this, because this is ballistically insane. I think you've been smoking too much AI, Crystal. Anyway, thank you for joining us on the SERP's Up Podcast. Aren't you going to miss us? Not to worry, we're back next week as we dive into building strong operations for SEO and beyond.

Look for wherever you consume your podcast or on the Wix SEO Learning Hub over at wix.com/SEO/learn. Looking to learn a little more about SEO? Check out all the great content, webinars and resources over on the Wix SEO Learning Hub at, you guessed it, wix.com/SEO/learn. Don't forget to give us a review on iTunes or a rating on Spotify. Until next time, peace, love and SEO.
