
  • Recession-proof value propositions for eCommerce SEO

Author: Dan Taylor

During an economic recession, buyer behavior tends to become more conservative as prices increase but consumer disposable income doesn’t. As a result, some commerce sectors (e.g., one-off luxury purchases) tend to slow down, whereas eCommerce in general sees users shift their perception of “value.” In this article, I’ll explain how eCommerce businesses can utilize SEO and content to adapt to consumer behavior changes and maintain a presence (and sales) through the downturn.

Table of contents:

  • How to identify and define value propositions for eCommerce consumers
  • How eCommerce SEO strategy needs to evolve during a recession
  • Value proposition examples for changing buyer behaviors

How to define value proposition triggers for eCommerce consumers

A value proposition is a statement that defines the unique benefits a company offers its customers. It is a way of differentiating your product from those of your competitors. A good value proposition should be clear, concise, and easy to understand.

Within eCommerce, many marketers take value propositions for granted, but during an uncertain economic period, they can become differentiators for decision making. This is especially the case when you can’t compete on price with other vendors of similar or alternative products.

Differentiation in eCommerce can come from a number of places and functions within the business. Some will be more ingrained into wider processes and functions than others (such as your customer service approach or how your website handles personalized content), but this is where you should also understand the variables you can and can’t influence.
Common value propositions that potential customers may prioritize include:

  • Level of customer service and “on-hand” question answering
  • Level of supporting content and product information, as well as social validation (of the company and product)
  • Product and information personalization
  • After-purchase considerations (delivery speed, return policy, etc.)

These value propositions will vary in significance depending on the type of product and the user. Nobody expects next-day delivery on a Steingraeber & Söhne piano, but they might on AAA batteries or groceries. When crafting your eCommerce value propositions, you should start by focusing on the customer and the most relevant direct need/value points.

During an economic downturn, the weighting and importance of these value propositions may shift, and it’s important that you shift with them and recalibrate your content messaging. More often than not, this comes down to an economic concept known as price elasticity. How much you’re affected by changes in the economic climate often depends on your prices and how your products are regarded. For example, gas/fuel is purchased regardless of price, as it is a necessity (i.e., price inelastic), but high-end sports sneakers are not essentials, so they are more price elastic.

Supply (i.e., competition) can also play a role. In a marketplace with multiple vendors, supply is elastic, and without strong differentiators, cost becomes a more prominent factor. This is why your messaging is critical in preventing prospects from making incorrect assumptions. It’s the relative cost and value associated with your product versus the opportunity cost of not buying your product that ultimately leads to a sale (or not, if the calculus goes unfavorably for you).

How eCommerce SEO strategy needs to evolve during a recession

eCommerce SEO typically starts off with two basic questions:

  • What are you selling?
  • Where are you selling it?
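To make the price elasticity concept above concrete, here is a minimal sketch using the standard elasticity ratio (percentage change in quantity demanded divided by percentage change in price). The percentage figures are invented for illustration, not real market data.

```python
def price_elasticity(pct_change_qty: float, pct_change_price: float) -> float:
    """Price elasticity of demand: % change in quantity / % change in price."""
    return pct_change_qty / pct_change_price

def is_elastic(elasticity: float) -> bool:
    """|e| > 1 means demand is price elastic; |e| < 1 means inelastic."""
    return abs(elasticity) > 1

# Hypothetical figures: a 10% price rise cuts sneaker sales by 25% (elastic),
# but cuts fuel sales by only 2% (inelastic, since fuel is a necessity).
sneakers = price_elasticity(-25, 10)  # -2.5
fuel = price_elasticity(-2, 10)       # -0.2

print(is_elastic(sneakers), is_elastic(fuel))  # True False
```

The more elastic your products are, the more your messaging (rather than price alone) has to carry the value argument during a downturn.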
Next, through understanding the product and business model, we can identify relevant modifiers (such as “cheap” and “free delivery”) to use to capture long-tail demand. For the most part, this is where we focus on keywords and search volumes, and do the standard optimizing of category and product pages, with supporting blog articles to aid potential customers with decision making.

This isn’t necessarily the wrong approach (it works for many eCommerce stores), but during a recession, in my opinion, it is the wrong approach to take if you want to grow and secure your consumer base. During a recession, resources on both sides of the transaction are often stretched, and I’d argue most marketers, agencies, and consultants have heard the words “bang for the buck” spoken in a meeting reviewing channel performance. As we know, this translates to “we need to see more ROI,” which means either increased consumer spend with maintained budgets or maintained consumer spend with decreased budgets. More often than not, it’s the latter, which is natural because that’s the lever you have direct control over.

By moving away from the big marquee search volume queries, you’re free to focus on value propositions and customer feedback. You can then build your wider SEO strategy whilst achieving ROI, focusing on competitive advantages that existing reviews and consumer feedback can socially validate.

This doesn’t mean overhauling your existing personas or segmentations, but it does mean you need to understand how circumstances may have changed your audience’s value perceptions. A good example of this during economic downturns is a shift away from wanting free shipping/next-day delivery toward increased accessibility through pricing (e.g., bundling products and offers for increased value or, as a wider business decision, working with a vendor to enable installment-based payments).
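As a quick illustration of the modifier approach mentioned earlier in this section, the sketch below crosses seed product terms with long-tail modifiers to draft candidate queries. The seed terms and modifiers here are hypothetical placeholders; swap in your own category and product terms.

```python
from itertools import product

def expand_keywords(seeds, modifiers):
    """Combine each seed term with each modifier (prefix form) to draft long-tail queries."""
    return [f"{m} {s}" for s, m in product(seeds, modifiers)]

# Hypothetical seeds and modifiers for an eCommerce footwear store.
seeds = ["running shoes", "trail sneakers"]
modifiers = ["cheap", "free delivery", "best", "sale"]

keywords = expand_keywords(seeds, modifiers)
print(len(keywords))  # 8
print(keywords[0])    # cheap running shoes
```

A list like this is only a starting point: each candidate still needs a sanity check against search demand and, per the argument above, against the value propositions you can actually support.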
Value proposition examples for changing buyer behaviors

You will primarily communicate your value propositions through content. Some of it can be built into PLP (product listing page) or PDP (product detail page) templates, but other content will require pages of its own (such as blogs, support guides, or resource pages).

Not all of the value propositions I mention below will be relevant to your products, so bear that in mind. You may also identify others unique to your market that your competitors aren’t communicating effectively, opening the door for you to do so.

Availability and shipping

Accurately communicating your stock level can be a major driving force for first-time and repeat customers. After all, the customer wants to know that they can get what they need in a timely manner. When you craft content to communicate product availability as a value proposition, use clear wording to appeal to customers and frame expectations:

  • Fast and reliable delivery. Call out fast shipping options—including same-day, next-day, and express delivery. This helps customers feel confident that their item(s) will arrive quickly and safely (which can certainly be a consideration for businesses selling high-value products, like electronics or jewelry).
  • Product range. Highlight your selection of products across different categories (e.g., clothing, beauty products, etc.) to assure shoppers that you have what they need and drive them to explore your site further.
  • Stock level transparency. Accurately reflect how much of each product is available and remind customers of limited editions or seasonal items so they know what’s available before it runs out.

Express shipping is a great way to add value to whatever you’re selling. Customers always appreciate discounts and free shipping, but they also value their time—that’s why fast delivery is important. Write copy that emphasizes the value of express shipping to draw attention to this benefit.
Here are a few tips on how to highlight your expedited shipping options in your copy/content:

01. Be explicit about what's included: Clearly explain what comes with your express shipping service, including how quickly orders are fulfilled and the types of packaging options. You can also call out any other added benefits of the service, like tracking or signature confirmation.

02. Highlight customer testimonials: Feature positive customer reviews about your express shipping options to show how reliable and efficient your delivery times are. This is an effective way to communicate the value of fast delivery without directly talking about it yourself.

03. Focus on convenience: Stress the convenience of fast delivery—as long as customers can trust that their order will arrive in a timely manner, they're likely to place an order. This can be an especially important element to highlight to appeal to last-minute shoppers during your busiest seasons.

Social validation via reviews and testimonials

Reviews and testimonials offer social proof that a product is worth buying and that the business actively works to support its customers. Incorporating these into your content will give potential customers the confidence they need to make an informed decision. When customers are more discerning, reviews and testimonials can not only reinforce the actual product quality, but also validate your other value propositions.

You can gather reviews/testimonials specifically for this purpose by incorporating prompts and focused questions when you reach out to past customers to leave a review:

  • What do you think of our product/service?
  • How would you rate/describe your experience with us?
  • Was our customer service helpful?
  • Would you recommend us to your friends and family?
  • What could we do better next time?

Make sure to customize these questions to fit the specific products your business offers.
When phrased correctly, they can give you valuable feedback that helps improve the customer experience and better inform marketing strategies, in addition to providing social validation for the value propositions you’re communicating.

Product use cases/case studies

Creating content around specific use cases can be highly beneficial. Use cases are detailed stories that allow potential consumers to better understand the product and visualize how they would use it. Examples of this could include outlining any technical specifications, listing any special offers, or talking about any personalized customer service.

You can even deploy an automated content experience to help potential customers make informed decisions. As an eCommerce store owner, you can do this by designing content in a way that allows customers to forecast their own experience with the product. The content experience you design can range from product knowledge hubs and advanced buying guides, to efficiency and cost calculators, through to VR experiences for tangible products.

To create effective product use cases for your eCommerce store, try the following tactics:

01. Make use of virtual assistants: This AI-powered technology can give customers personalized recommendations based on past interactions with the product or service.

02. Highlight customer reviews: Reviews from real customers give others an insight into how the product or service works and whether it is suitable for them.

03. Offer video tutorials: Video tutorials can be extremely helpful for customers who want to gain a better understanding of how your product works before making a purchase.

04. Integrate chatbots: Chatbots provide customers with instant responses to their questions, allowing them to make faster purchasing decisions without having to wait for a human response.
By integrating these automated content experiences, you can provide potential consumers with the knowledge they need to make the decision that best suits their situation, without a heavy need for content production resources.

Adjusting your SEO strategy for a recession can help you come out on top

Marketing is probably one of the most important things you should continue to do throughout a recession. You may find your competitors dialing back spend across multiple channels, leaving an opportunity for you to step in and fill the void. The key is to pivot your messaging and understand that the value that prospects once saw in your product might have changed as times got tougher. This change in messaging and brand portrayal can also work to improve retention as well as appeal to your SOM’s (serviceable obtainable market’s) new, pressing pain points.

Dan Taylor - Head of Technical SEO at SALT.agency

Dan Taylor is an experienced SEO and has consulted for companies such as Cloudflare, GitLab, and Proton. In 2018, he won the inaugural TechSEO Boost Research prize and has previously spoken at BrightonSEO, TechSEO Boost (Boston, US), and Optimization (Moscow). Twitter | Linkedin

  • YouTube Clips: What SEOs need to know

Author: Crystal Carter

When we think about SEO, we generally prioritize the written word. Blogs and written copy make up a great deal of what site owners need to optimize. However, optimizing multimedia makes your content accessible to more users and can help your content appear on multiple parts of search engine results pages, which is why SEO strategies should also include tactics for images and videos.

YouTube is the second-most popular search engine in the world and can play an important role in your strategy. With the continued increase in available video formats and sharing options, SEOs have multiple ways to make use of YouTube content to add value across their websites. Though they have been around since January 2021, YouTube Clips appear to be relatively underused. In this article, I’ll point out some of the potential opportunities for SEO and how YouTube Clips can be used to improve content by covering:

  • What YouTube Clips are
  • How to take a Clip from a YouTube video
  • How YouTube Clips perform in social shares
  • Why Clips work best as blog embeds
  • How to use YouTube Clips for SEO

What are YouTube Clips?

YouTube Clips are simply extracts from a longer YouTube video. They can be between 5–60 seconds long and have their own URL. Clips are public and can be viewed by anyone with access to the Clip (and permission to view the underlying video the Clip is taken from). “They can also be seen on select search, discovery, and analytics surfaces available to viewers and creators on YouTube,” according to Google, and are viewable by the creator of the underlying video as well.

How to take a Clip from a YouTube video

To make a YouTube Clip, click the “Clip” button (with a scissors icon) above the YouTube video’s description. From here, you’re taken to an editing window where you’ll use a slider to select a continuous 5- to 60-second timeframe to extract. Before you can extract the Clip, you must add a description of the Clip.
Select “Share Clip” and you’ll be shown the same options to share or embed the Clip that you would with a full YouTube video. It took me about five minutes to make my first ever Clip from a recent Wix webinar.

Who can make a Clip on YouTube?

Anyone with a YouTube channel can make a Clip. Though I tested a video from the Wix team, the video was not created on my YouTube account. Anyone can clip any section of any video that has Clips enabled. YouTube creators can opt out of Clips of their content via their channel settings. So, if you’re unsure about letting others clip your content, you can disable the feature there.

How do YouTube Clips perform in social shares?

Though they are clearly meant to be shared, one of the reasons why I think Clips are underused is that, when I tested them on a few different channels, the implementation was underwhelming—especially compared to the way TikTok embeds have been implemented.

On one hand, views of the Clip are added to the overall view count of the underlying YouTube video, which is nice. And, speaking completely selfishly as someone who regularly participates in and organizes long-form SEO webinars, I think there are many opportunities here. The notion of extracting a particularly insightful segment of a one-hour conversation without needing to create a new video is very appealing.

On the other hand, if the aim here is to create something that rivals search-share challenger TikTok, then I think Google has missed the mark. A key component of TikTok’s success is that TikTok videos look native on every platform they are viewed on. Social feeds for Instagram, Twitter, Facebook, and Pinterest are full of TikToks that play automatically without the user needing to download or visit the app. Wherever they are, the videos preroll (with an added watermark) and let the FOMO do the heavy lifting. When I tested YouTube Clips on Twitter, Pinterest, LinkedIn, and Facebook, none of them had the native preroll capability that I’ve seen from TikTok.
YouTube Clips on Twitter

When shared on Twitter, for instance, the video does not display natively. This means that you have to click on it in order to see the video and, on mobile, it only shows as a link. When you click through, you are able to see the video but, in comparison to TikTok or even a standard YouTube video share, the window is very busy. Information about the source video and the Clip is crammed into the Twitter card. And, although you’re asked to add a YouTube Clip description of up to 140 characters, the Twitter card cuts it off after about 60 characters. And, if you click again, then you arrive on the YouTube landing page for the Clip rather than the video it was taken from.

YouTube Clips on Pinterest

On Pinterest, YouTube Clips are lacking in that they don’t preroll and the Pin thumbnail is the same for the main video as it is for the Clip. In contrast, pinned TikTok videos play on Pinterest. So, the emphasis is on showcasing the content, which in turn drives traffic to the app.

YouTube Clips on LinkedIn

On LinkedIn, YouTube Clips display with the thumbnail from the full video. And, when you click on one, you cannot watch the video on LinkedIn. Instead, you’re redirected to YouTube to view it. However, one thing that LinkedIn’s sharing settings do well is show a good amount of the description that I wrote in the YouTube Clip share card. The sharing experience was almost exactly the same on Facebook as it was on LinkedIn.

Overall, the social implementation is limited, which is why I am convinced that this is an SEO tool.

Clips work best as blog embeds

It wasn’t until I embedded a Clip into a blog post that I realized the true potential of YouTube Clips. It works really well to enable you to extract the most important segments from your long-form videos and pop them into your blog (to provide additional context, cite source material, or supplement the written content, for example).
This is not the same as sharing a YouTube video embed that starts at a given timestamp, because you only extract 60 seconds of video per Clip. Even though you’re limited to one minute, you can make lots of Clips, and all of them will contribute to the overall value of your source video. So, it’s essentially the video equivalent of a quote from a book or a sample of a most excellent guitar riff.

YouTube Clips play in the same way that standard YouTube embeds play in a blog post, and they include any subtitles from the original video as well. They also loop, making it easier for users to view a Clip a few times and process the content before moving on to the next step. They are ideal for blogs with written how-to’s supported by a video demonstration.

How to use YouTube Clips for SEO

In my opinion, SEOs should be paying attention to YouTube Clips for a few reasons:

  • They present a great opportunity to revisit written content and add relevant instructional segments, quotes, and context, allowing you to consolidate your video efforts for maximum user value.
  • Clips are rankable and are impacting the search engine results page (SERP).
  • Clips may be impacting Key Moments shown in the SERP.

Here’s how to make the most of this for SEO.

Update longer instructional content with step-by-step segments

Refreshing content that is already ranking can add value for users. Recipe blogs, how-to guides, and tutorials could share multiple video steps in a blog post at the most relevant points, and then share the full video at the end. This could also be really useful in the age of Google’s Multitask Unified Model (MUM) technology.

Create searchable YouTube Clips within your videos

YouTube Clips and the associated descriptions are crawlable and can rank.
In one example from a video of a Maha Shivaratri festival, I found a 60-second Clip of a specific performance from a 10-hour live stream ranking for terms that were included in the Clip’s description but not in the source video transcript or description. This suggests that Clips provide an opportunity to associate relevant keywords and phrases with video content, which can then be searched on Google. This also means that anyone can find them online by searching for them in the usual way. Since the description for the Clip is a mandatory field, Google may potentially be able to gather and assess tags on every Clip that is shared, which means that Clips that include relevant keywords could add more optimization to the source video and the blog it's embedded in.

Social proof for video Key Moments

Because Clips are also shareable and indexable, there is an implication that Google could have more social proof for which Key Moments are most valuable to users. While this is not confirmed, it could mean that YouTube Clips enable Google to better refine or identify new Key Moments that show in the SERP. In the case of the Maha Shivaratri video, we see that, though there were many performances in the livestream, the second Key Moment shown in the SERP corresponded with the content from the Clip. Additionally, we may see new Clips showing on their own in the SERP in the same way that YouTube Shorts have increased visibility during 2022. In any case, I think this new tool presents a number of interesting SEO opportunities.

YouTube Clips align with Google’s evolving search results

Over the last few years, many of Google’s advancements have centered around how it can provide more relevant information to users via a range of media. The integration of Google Lens and visual search elements into the search experience is part of this trend, and so too is the increase of videos and images on search engine results pages.
With these developments as the backdrop, tools like YouTube Clips enable us to curate, tag, and distribute our videos in a way that can strengthen our content as well as our overall search visibility.

Crystal Carter - Head of SEO Communications, Wix

Crystal is an SEO & digital marketing professional with over 15 years of experience. Her global business clients have included Disney, McDonald's, and Tomy. An avid SEO communicator, her work has been featured at Google Search Central, Brighton SEO, Moz, DeepCrawl, Semrush, and more. Twitter | Linkedin

  • How to assess the impact of Google algorithm updates

Updated: March 13, 2023

Author: Crystal Carter

My general advice for managing Google algorithm updates is to “keep your nose clean.” By that, I mean consistently making content that is relevant to users and doing so on a technically sound website, using SEO best practices with high levels of demonstrable experience, expertise, authoritativeness, and trustworthiness. I’ve worked on sites that follow this approach, and they consistently fare better during Google's algorithm updates than sites that don’t.

That said, sometimes (particularly when a core update negatively impacts your site) you need to explain what happened to clients and other stakeholders. Here are some steps that can help you understand how and why a Google algorithm update impacted your site. These can be crucial steps to help you make a plan to address changes in organic traffic or recover from negative impacts of a core update.

Date the changes

Many SEO monitoring and analytics tools will give you the option to add annotations to reports. These annotations let you take note of when the update started rolling out and when it finished. For many years, marketers have relied on Google’s Universal Analytics for annotations, but (at the time of writing) it is only possible to annotate GA4 using third-party tools. Some external tools will automatically annotate your reporting timeline with large-scale update announcements, but these tend to be general rather than site specific.

A Google core update can take months and impact individual sites and regions at different times. Typically, US domains are impacted first during core updates and other markets follow. So, if you are outside of the US, you might need to keep checking your data as the update rolls out. It is important to make note of any changes within your own account so that you can compare your data accurately.
You want to make a note when you start seeing an impact, but also add other significant activity to your notes so that you have context for additional variables that could impact your rankings. Site migrations, viral PR activity, new high-quality backlinks, and hosting changes can also have a sitewide impact. Having this information will be valuable when you are reporting later and will help you benchmark your data as you carry out your audits.

Compare the impact on your competitors

Comparing your site to others can help identify whether ranking changes are specific to you or part of a wider trend for your business vertical. Algorithm changes may impact certain verticals more than others. For instance, during the Medic update in 2018, studies showed that the update significantly affected websites with medical and health-related content. Many sites with content that included information on “Your Money or Your Life” topics were evaluated on new criteria. Content that did not already satisfy those criteria saw reduced visibility, while sites that met the new expectations fared better.

If the update is focused on sites within a particular vertical, then you may see competitors experience similar changes in visibility after the update. If this is observed, then it may reflect that Google has changed how it understands and shows results for a topic or tactic you have in common. This means that it's unlikely that anything you, specifically, have done has influenced your ranking changes. If you do not see similar changes across your vertical, then it might be the case that some aspect of your strategy or approach isn’t aligning with the new algorithm criteria. In this instance, I would keep investigating before drawing any conclusions.

How to check on your competitors

Third-party SEO monitoring tools like Semrush, Sistrix, and others can offer actionable insights on competitor visibility changes for specific keywords and domains.
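Where your analytics tooling doesn't support annotations (as noted above for GA4 without third-party tools), a minimal self-maintained log of update windows and confounding site events can serve the same purpose. This is a sketch with hypothetical dates and notes.

```python
from datetime import date

# Minimal annotation log, assuming you track update windows and site events yourself.
annotations = []

def annotate(day: date, note: str) -> None:
    """Record a dated note: update rollouts, migrations, PR spikes, hosting changes."""
    annotations.append({"date": day, "note": note})

def events_between(start: date, end: date):
    """Events within a window, so ranking changes can be read alongside other variables."""
    return [a for a in annotations if start <= a["date"] <= end]

# Hypothetical timeline for one site.
annotate(date(2023, 3, 15), "Core update rollout announced")
annotate(date(2023, 3, 20), "Traffic dip first visible (UK market)")
annotate(date(2023, 3, 22), "Site migrated to new hosting")  # confounding variable

print(len(events_between(date(2023, 3, 14), date(2023, 3, 21))))  # 2
```

The point of the log is the second function: when you report later, you can pull every variable active in the window rather than attributing all movement to the algorithm.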
In many cases, you will need to identify the keywords and businesses you’d like to track well ahead of time in order to build up the data you want to benchmark, but this data can be important for contextualizing your performance.

How to check your industry

There are a number of tools that track Google algorithm updates. During a core update, these tools offer insights into how different types of websites, keywords, and SERP features have been affected. Popular free tools for monitoring algorithm activity include:

  • Mozcast by Moz
  • Semrush Sensor
  • Google Grump by Accuranker
  • Rank Risk Index by Rank Ranger
  • Cognitive SEO Signals
  • Algoroo
  • SERPmetrics SERP Fluctuations
  • Advanced Web Rankings Tool
  • Sistrix Google Update Checker

Each tool uses different data sets and metrics, so it is worth comparing multiple sources to get a full idea of the impact. Comparing this data with what you see on your site can help you give useful guidance to clients about what to do next. For instance, if you identify that a change in rankings has to do with a new SERP feature or rich result, then you can adjust your content accordingly.

Isolate affected queries

It is worth reviewing Google Search Console to identify any trends in the specific types of queries that were impacted by the update. It is important to understand which queries were impacted because not all keywords offer the same value. Over the last few years, other SEOs and I have observed that, sometimes during Google’s updates, the keyword positions that decline are terms that were irrelevant to the domain in the first place. I have personally seen client sites rank for content like partner logos and other seemingly random or “junk” terms. When Google makes adjustments, it sometimes sends users to more relevant content or fine-tunes the intent of the content entirely, so if you were previously ranking for irrelevant terms and you aren’t anymore, then you may see this reflected in your overall domain ranking and traffic afterward.
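One lightweight way to isolate affected queries is to diff per-query clicks from two Search Console exports, one covering a window before the update and one after. The query names and click counts below are invented for illustration; in practice you would read them from CSV exports or the Search Console API.

```python
# Hypothetical per-query click totals from two Search Console exports.
before = {"buy widgets": 120, "widget reviews": 80, "partner logo": 40}
after = {"buy widgets": 130, "widget reviews": 30, "partner logo": 0}

def query_deltas(before: dict, after: dict) -> dict:
    """Click change per query; queries missing from one export count as zero clicks."""
    queries = set(before) | set(after)
    return {q: after.get(q, 0) - before.get(q, 0) for q in queries}

deltas = query_deltas(before, after)

# Declining queries, worst losses first: these are the ones to review for relevance.
losers = sorted((q for q, d in deltas.items() if d < 0), key=deltas.get)
print(losers)  # ['widget reviews', 'partner logo']
```

Sorting losses this way makes it easy to spot whether the biggest drops are business-critical terms or the "junk" terms described above, which changes what action (if any) the drop warrants.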
However, it is likely that the quality of traffic that you receive after the algorithm update will be more aligned with the objectives of your brand marketing funnel. Mordy Oberstein, who has been tracking Google's updates carefully for years, has often noted that you don't want to dilute your site's authority by creating content in areas outside of the site's identity. If you have seen a big shift in some of your core keywords, then you may need to review this further to understand the potential business impact.

Assess the SERP

The SERPs are constantly changing—both in format and content—so reviewing how content is being surfaced can help you understand changes from a user’s perspective. What are you looking for? Here are a few SERP changes that can cause sudden shifts in traffic.

New content formats

Google makes multiple algorithm adjustments throughout the year. Some of these changes result in content shifting in position. Sometimes, a Google update will change the SERP entirely. Job-related SERPs are a good example of this—the top of these SERPs almost exclusively includes content from Job Search on Google (as shown above) before you get to the plain blue links. When this change came, there were big swings in traffic patterns from users to recruitment websites. These don’t typically occur during a core update, but changes like this can result in substantial shifts in clicks and click-through rate. So, it’s worth checking the desktop and mobile SERP to see if there have been any significant changes in how content for your most relevant queries is being displayed.

Prioritizing official sources

As occurred during the initial Covid-19 outbreak, for highly sensitive topics Google will surface content from official topic authorities like the CDC or the World Health Organization. For topics like this, Google will often remove advertisements and curate the SERP so that the most authoritative sources are immediately visible.
If this has affected relevant SERPs, then you should consider aligning any SEO activity with additional channels to manage your visibility and traffic overall. In the past, I have seen clients engage digital PR, PPC, and social to successfully drive traffic to a site after an algorithm update. Done well, this approach can make your traffic more resilient in the long term.

Who is now in the top positions?

If you have seen volatility in some of your top positions, visit the live SERP for your most relevant keywords and have a look at the content that replaced yours. Examine the individual pages and the domain overall to see how it satisfies the query. Understanding your content’s relevance to the query is particularly important for core updates. When analyzing visibility changes following a 2021 algorithm update, Oberstein observed that Google has been “refining its ability to offer highly relevant content to new extremes” during recent core updates. “Highly relevant content means that the content is nuanced and substantially detailed in nature,” he explained.

So, when you assess the new top-ranking pages, consider the following:

  • How does the publishing team demonstrate expertise and depth of knowledge?
  • Which technical implementations is the page excelling on?
  • Which media types is the site using?
  • How does the domain demonstrate topic authority around the query?
  • How relevant are the top content and domain for the query?

Ask these same questions about your own content to identify gaps and to assess its quality and relevance for the keyword or topic.

Keep calm and keep optimizing

Don’t panic—this is an important point. Planning is fine. Panicking can mean that you act impulsively when you don’t have enough information, which could certainly worsen the situation. Take a moment to assess the impact, consider how you can address any changes in business-critical traffic, and then move forward.
Crystal Carter - Head of SEO Communications, Wix Crystal is an SEO & digital marketing professional with over 15 years of experience. Her global business clients have included Disney, McDonalds, and Tomy. An avid SEO communicator, her work has been featured at Google Search Central, Brighton SEO, Moz, DeepCrawl, Semrush, and more. Twitter | Linkedin

  • Programmatic content expansion with Python and Velo

Author: Colt Sliva You might’ve heard that content is king when it comes to SEO. While that is absolutely true, it is an open-ended directive. You could create content about anything, in any writing style, for any keyword. Keyword research can give you an idea of search demand, but you truly don’t know how content will perform until it's created. So, we have a classic chicken-and-egg problem: You don’t want to invest in content that won’t convert, but you don’t know if traffic will really convert until you’ve created the content. The brilliant thing about software is that it lets you do things at scale without huge costs. For this demo, we’re going to be using programmatic content to rapidly build out an eCommerce store. By spinning up a framework of content, we can prototype and test whether something is sticky enough to get traffic. The goal is not to produce a complete website. Rather, the framework should be used to scaffold out a set of products to test their traffic potential. Google has some rules against automatically generated content, so we’ll have to be cognizant about quality throughout the process.

What’s covered

This is going to be an advanced SEO strategy with some programming walkthroughs. We’re going to rely on Python for some data manipulation, and we’re going to build an API endpoint with Wix’s Velo, which uses JavaScript. I would urge you to bookmark this resource because parts of this project can be adapted to many web projects. Here are some of the core skills we are going to work through:

Scripting with Chrome’s Snippets feature
Scraping a website with Puppeteer
Cleaning up a file of messy data
How to write a strong programmatic product description
Recoloring product images on the fly
Building an API with Wix Velo

For those familiar with data engineering, this is a classic ETL process: We will extract product data, transform it into a format that we can use, and then load it into our Wix website with Velo.
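Before diving into the real scripts, the three ETL stages can be sketched as a minimal pipeline. This is an illustration only: the function names, the placeholder URLs, and the tiny inline CSV are all stand-ins, not part of the project's actual code.

```python
import csv
import io

# Stand-in for the extract stage's scraped CSV of product links (placeholder data)
RAW = "Links\nhttps://example.com/apricot-grove\nhttps://example.com/fireside\n"

def extract(raw):
    # Extract: read the product URLs out of the CSV
    return [row["Links"] for row in csv.DictReader(io.StringIO(raw))]

def transform(urls):
    # Transform: turn each URL slug into a title-cased product name
    return [{"url": u, "name": u.rsplit("/", 1)[-1].replace("-", " ").title()}
            for u in urls]

def load(products):
    # Load: the real project POSTs to a Velo endpoint; here we just build the payloads
    return [{"product": {"name": p["name"]}, "source": p["url"]} for p in products]

payloads = load(transform(extract(RAW)))
print(payloads[0]["product"]["name"])  # Apricot Grove
```

Each stage hands a plain Python structure to the next, which is exactly the shape of the scraping, cleaning, and uploading scripts that follow.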
What you need to build a programmatic SEO eCommerce store

eCommerce stores don’t need much to get started. You need a content management system (CMS), a brand, and a product.

01. CMS

For this demo, we’ll be working with Wix as our eCommerce CMS. This is the CMS of choice because of its powerful Velo platform—it is a JavaScript IDE that runs NodeJS to interact with the frontend and backend, which will allow us to programmatically add content.

02. Brand

Say hello to Candle Crafty, a boutique homemade candle store. We don’t yet know which candles we should make. Keyword research has been helpful, but doesn’t give us enough direction on which scents or colors will be popular. Instead, let’s programmatically build many variations of our product. Then, we can rely on search engines to send customers to the right products.

03. Product

For our product (supplier), we will be using scents and colors from candlescience.com. They offer a large number of scents with naming ideas and color suggestions to help new candle businesses. We can use this to scaffold out our set of products.

Scraping products with Chrome

Getting all product pages

For product extraction, we’ll need to discover all of the available scents. Luckily, the Candle Science product listing page has a complete list of URLs. There are tons of ways to scrape all of the links from a given page. One of the fastest ways to prototype scripts is right in the browser. In Chrome: Right click “Inspect” > “Sources” tab > “New Snippet” button. From there, you can run JavaScript on the page. Here’s our script.

let x = document.getElementsByClassName('products')[0]
let links = x.getElementsByTagName("a");
let rows = ['Links'];
for (link of links) {
    rows.push(link.href)
}
let csvContent = "data:text/csv;charset=utf-8," + rows.join("\n");
var encodedUri = encodeURI(csvContent);
window.open(encodedUri);

This script starts by getting the first element with the “products” class. Then, it gets all of the links within that class.
The middle of the script pushes the links into an array, and the final lines export that array into a CSV. Now we have a CSV file with all of their products!

Scraping product pages with Puppeteer

For this next section, I’ll be relying on Python. Normally, I would use the Python Requests library for scraping, but I quickly realized that the supplier’s website was built with Nuxt.js and the description portion of the product pages was not server-side rendered. We can work around that with Pyppeteer, though. It’s a wrapper for Google’s headless Chrome product that allows you to render JavaScript websites. Looking at the page, there are a number of features we can extract that will be helpful to build our products:

Product Title
Product Description
Top Notes
Mid Notes
Base Notes
Blend Ideas
Color Ideas

First, let’s create a file called extract.py. At the top of that file, we import our necessary libraries: Pandas to read and write CSVs, BeautifulSoup to parse HTML, and asyncio and Pyppeteer to launch our web browser.

import pandas as pd
from bs4 import BeautifulSoup
import asyncio
from pyppeteer import launch

Our web browser needs a user agent. You can use the Chrome one or identify yourself in other ways.

headers = {
    'User-Agent': 'CandleCrafty 1.0',
}

Then, we need a parse function to look for CSS classes containing the text features we’re interested in. It will take HTML content, turn it into “soup,” and then let us grab titles, scent notes, or product descriptions. We return that in list form, so it’s easy to add to a CSV.
def parse(content):
    soup = BeautifulSoup(content, 'html.parser')

    # Title
    title = soup.find(class_="product-headline").text.strip()

    # Get notes
    notes = soup.find(class_="fragrance-notes")
    txt = notes.findAll('span')
    res = []
    [res.append(spans.getText().strip()) for spans in txt if spans.getText().strip() not in res]
    res = [i for i in res if i]
    try:
        top_notes = res[1]
    except:
        top_notes = ''
    try:
        mid_notes = res[3]
    except:
        mid_notes = ''
    try:
        base_notes = res[5]
    except:
        base_notes = ''
    notes_fallback = res

    # Get product description
    txt = soup.find(class_="text")
    description_p = txt.text.split('\n')
    blend_ideas = ''
    brand_ideas = ''
    color_ideas = ''
    note = ''
    complete_list = ''
    paragraphs = ''
    for p in description_p:
        if ':' in p and 'blend' in p.lower():
            blend_ideas = p
        elif ':' in p and 'brand' in p.lower():
            brand_ideas = p
        elif ':' in p and 'color' in p.lower():
            color_ideas = p
        elif ':' in p and 'note' in p.lower():
            note = p
        elif 'complete list' in p.lower():
            complete_list = p
        else:
            paragraphs = paragraphs + '\n' + p

    return [title, top_notes, mid_notes, base_notes, notes_fallback, blend_ideas,
            brand_ideas, color_ideas, note, complete_list, paragraphs]

Lastly, we run our main function. It reads the CSV of all our target URLs and loops through them, using a launched browser. The results of that are saved.

async def main():
    df = pd.read_csv("download.csv")
    urls = list(df['Links'])
    browser = await launch()
    page = await browser.newPage()
    data = []
    for url in urls:
        await page.goto(url)
        content = await page.content()
        data.append(parse(content))
    df = pd.DataFrame(data, columns=['Title', 'Top notes', 'Mid notes', 'Base notes',
                                     'Notes Fallback', 'Blend ideas', 'Brand ideas',
                                     'Color ideas', 'Note', 'Complete list', 'Paragraphs'])
    df.to_csv('save.csv')
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())

And with that, we’re done with our scraping script! Here’s our first row of data (slightly truncated).
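One fragile spot worth flagging: parse() grabs the scent notes by fixed positions (res[1], res[3], res[5]), which silently breaks if the supplier ever reorders or omits a section. A slightly more defensive variant — a hypothetical helper, not part of the original script — pairs each label span with the value that follows it instead:

```python
def notes_by_label(spans):
    """Map note labels ('Top', 'Mid', 'Base') to the span that follows each one."""
    notes = {"Top": "", "Mid": "", "Base": ""}
    for i, text in enumerate(spans):
        for label in notes:
            # Match 'Top', 'Top Notes', etc., then take the next span as the value
            if text.lower().startswith(label.lower()) and i + 1 < len(spans):
                notes[label] = spans[i + 1]
    return notes

# Spans in the shape the scraper collects from the fragrance-notes block
spans = ["Top Notes", "apricot, citrus", "Mid Notes", "rose", "Base Notes", "musk, cedar"]
print(notes_by_label(spans))  # {'Top': 'apricot, citrus', 'Mid': 'rose', 'Base': 'musk, cedar'}
```

Missing sections simply come back as empty strings rather than shifting every other note into the wrong column.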
Data cleaning

This isn’t the most fun topic, but it tends to be a large part of data or web projects. Taking abstract data off the web and making it orderly is a hugely helpful skill.

Messy text

Our data has started out a little bit messy. We can clean up the Blend ideas, Brand ideas, and Color ideas columns by removing some of the extra text and only returning the comma-separated values that are really helpful. Each of those has a colon separating the values, so we can grab all the text following that colon. Think something like this: Split the string at the colon. That will return an array that looks like this: [“Blend recommendations”, “Saffron Cedarwood, Fireside”]. We can take the second part of that and then strip out any whitespace. Wrap it in a try/except because we might end up with an empty row.

def clean_pretext(data):
    try:
        return data.split(':')[1].strip()
    except:
        return data

clean_pretext(row['Blend ideas'])

We can also clean up several issues with the paragraphs:

Whitespace before or after the content
Multiline text
Broken encoding that might look like “’”

Cleaning those three action items would look like the example below. It's a lot to take in, but we are essentially breaking apart the content at broken sections and putting it back together with a join() function.

s = row['Paragraphs'].strip()
s = ' '.join(s.splitlines())
s = "".join([x if ord(x) < 128 else '' for x in s])  # A fancy way to strip any broken text encoding, like ’

Transcribing color text to color hex codes

In some cases, we need to turn text into data. Let’s map some color ideas to hex codes. Python has a cool library called Colour, which will help us find the right color for our text description. For this section of code, I set a default hex value (white). Then I pass in the text that has color ideas. The regex grabs only the words and turns them into an array. Then, for each word, we loop through and look for a matching color.
If we find one, we set the hex variable to the new result. Otherwise, we continue the loop. After the loop is complete, the result will either be a new hex code or fall back to white.

from colour import Color

def get_color(data):
    hex = '#ffffff'
    if isinstance(data, str):
        arr = re.findall(r"[\w']+", data)  # Extract the words
        for item in arr:
            try:
                hex = Color(item).hex
            except:
                pass
    return hex

get_color(row['Color ideas'])

Putting it all together

This script loops through every row, cleaning data and generating colors. Then, it saves two files: a new CSV with improved data and a new .txt file with each description on a single row.

import pandas as pd
import re
from colour import Color

df = pd.read_csv("../extract/data.csv")  # Open CSV
df['hexcodes'] = '#ffffff'  # Create a new hexcodes column with a default value of white
df['color_literal'] = 'white'
txt = ''  # Create a text variable which will eventually become our text file

def clean_pretext(data):
    try:
        return data.split(':')[1].strip()
    except:
        return data

def get_color(data):
    hex = '#ffffff'
    color_literal = 'white'
    if isinstance(data, str):
        arr = re.findall(r"[\w']+", data)  # Extract the words
        for item in arr:
            try:
                color_literal = Color(item)
                hex = color_literal.hex
            except:
                pass
    return {"color": color_literal, "hex": hex}

for i, row in df.iterrows():
    # Remove pretext
    df.at[i, 'Blend ideas'] = clean_pretext(row['Blend ideas'])
    df.at[i, 'Brand ideas'] = clean_pretext(row['Brand ideas'])
    df.at[i, 'Color ideas'] = clean_pretext(row['Color ideas'])

    # Generate hex codes
    color_data = get_color(row['Color ideas'])
    df.at[i, 'hexcodes'] = color_data['hex']
    df.at[i, 'color_literal'] = color_data['color']

    # Clean paragraph
    s = row['Paragraphs'].strip()
    s = ' '.join(s.splitlines())
    s = "".join([x if ord(x) < 128 else '' for x in s])  # A fancy way to strip any broken text encoding, like ’
    df.at[i, 'Paragraphs'] = s

    # Add cleaned paragraph to text file
    txt = txt + '\n' + s

# Export new CSV
df.to_csv("cleaned.csv")

# Export new content text file
with open('content.txt', 'w') as f:
    f.write(txt)

How to write a strong programmatic product description

You might remember that automatically generated content can result in a manual action. Specifically, Google highlighted “text generated using automated synonymizing or obfuscation techniques.” We also need to be mindful of the Helpful Content update. We’ll need to ensure the descriptions are helpful and useful in describing the products we create. The flip side of that is this explanation of how Google handles duplicate product descriptions: It picks the most relevant site out of all the pages that have the duplicate content. That means there’s a very good chance a new store cannot compete with the default text from a supplier. To deal with this, we meet Google halfway: It wants to make sure the description is helpful to real humans. Google specifically calls out “human review or curation before publishing.” Basically, if you generate it, make sure it’s readable. This leads us to our two strategies for text generation:

Text generated through machine learning
Ad lib-style text generation

Machine learning has the power to be incredibly helpful for content writing, but it is as much an art as a science. For the purposes of this demo, the results from machine learning text generation were too poor and needed too much editorial work. It was the kind of content that Google doesn’t want. With the second strategy, we can make descriptions that are unique enough, helpful to the reader, and detailed about the product, so we’re going to use that.

“Fill in the blank” strategy

This strategy is very similar to Ad Lib-type games. We provide a basic template, add in a little variety, and then get back a description. A good product description can do a couple of things:

Allow the consumer to picture themselves using the product
Describe the product’s benefits
Use sensory words
Provide social proof
Use numbers

With that, we can prebuild some arrays of helpful text.
verb = ['drifting', 'wafting', 'floating', 'whirling']
noun = ['treat', 'delight', 'joy']
adj = ['charming', 'delightful']
feeling = ['elated', 'happy', 'gratified', 'blissful', 'delighted']
craftsmanship = ['well-crafted', 'homemade', 'handpoured', 'designer', 'architected', 'tailored', 'curated']
benefits = ['Find your zen with', 'Design your happy place with', 'Entice your guests with', 'Relax and thrive with']
socialproof = ['One of our most popular scents, ', 'A fan favorite, ', 'Always getting rave reactions, ']
cta = ['Get yours now!', 'Order today!', 'Come join the candle club and order now!', 'Order your ideal scent today!', 'Take this scent home today!']
numbers = ['These 8.5 ounce candles will last for an average of 60 hours.', 'Our 60 hour candles will fill your home time and time again.', 'These long-lasting candles will provide scents for up to 60 hours.']
product = ['soy wax candle', 'soy candle', 'all-natural candle', 'scent']

For the description, we want to use some of the candle details. We have quite a few options to pull from:

Color
Title
Parts of the supplier description
Scent notes

We’re going to loop through each row of the CSV and pull those details to build some randomized sentences.

def getNotes(data):
    try:
        return re.findall(r"[\w']+", data)
    except:
        return []

for i, row in df.iterrows():
    color = row['color_literal']
    # Borrow the first sentence from the supplier
    sentence = tokenize.sent_tokenize(row['Paragraphs'])[0]
    title = row['Title']

    # Grab all our scent notes
    top_notes = getNotes(row['Top notes'])
    mid_notes = getNotes(row['Mid notes'])
    base_notes = getNotes(row['Base notes'])

    # Merge them into a single array
    all_notes = [*top_notes, *mid_notes, *base_notes]

    # Format them into comma-separated text; add 'and' before the last note
    notes_txt = ''
    for x in all_notes[:-1]:
        notes_txt = notes_txt + x + ', '
    notes_txt = notes_txt + 'and ' + all_notes[-1]

Next, we can take those variables and merge them into a list of sentences.
#Build some sentences
sentences = [
    f'{title} is a {random.choice(craftsmanship)} {random.choice(product)} sure to be a {random.choice(noun)} in your home.',
    f'{random.choice(socialproof)} this {color} {random.choice(product)} will leave you feeling {random.choice(feeling)}.',
    f'{random.choice(benefits)} {title} - {random.choice(craftsmanship)} with notes of {notes_txt}.',
    sentence
]

We can add another layer of randomization by shuffling those sentences into a random order. We wrote these so they would still make sense in any order.

random.shuffle(sentences)  # Shuffle the order to add some variety

The last sentence should be a call to action. Let’s add that in.

sentences.append(random.choice(cta))  # Append a random call to action to the end of our content

Finally, we can merge the array of sentences into a final description.

description = ' '.join(sentences)

Putting it all together

And, here’s what the script looks like once it’s all put together.

import pandas as pd
from nltk import tokenize
import random
import re

df = pd.read_csv("../../transform/cleaned.csv")
df['adlib'] = df['Paragraphs']

verb = ['drifting', 'wafting', 'floating', 'whirling']
noun = ['treat', 'delight', 'joy']
adj = ['charming', 'delightful']
feeling = ['elated', 'happy', 'gratified', 'blissful', 'delighted']
craftsmanship = ['well-crafted', 'homemade', 'handpoured', 'designed', 'architected', 'tailored', 'curated']
benefits = ['Find your zen with', 'Design your happy place with', 'Entice your guests with', 'Relax and thrive with']
socialproof = ['One of our most popular, ', 'A fan favorite, ', 'Always getting rave reactions, ']
cta = ['Get yours now!', 'Order today!', 'Come join the candle club and order now!', 'Order your ideal scent today!', 'Take this scent home today!']
numbers = ['These 8.5 ounce candles will last for an average of 60 hours.', 'Our 60 hour candles will fill your home time and time again.', 'These long-lasting candles will provide scents for up to 60 hours.']
product = ['soy wax candle', 'soy candle', 'all-natural candle', 'scent']

def getNotes(data):
    try:
        return re.findall(r"[\w']+", data)
    except:
        return []

for i, row in df.iterrows():
    color = row['color_literal']
    # Borrow the first sentence from the supplier
    sentence = tokenize.sent_tokenize(row['Paragraphs'])[0]
    title = row['Title']

    # Grab all our scent notes
    top_notes = getNotes(row['Top notes'])
    mid_notes = getNotes(row['Mid notes'])
    base_notes = getNotes(row['Base notes'])

    # Merge them into a single array
    all_notes = [*top_notes, *mid_notes, *base_notes]

    # Format them into comma-separated text; add 'and' before the last note
    notes_txt = ''
    for x in all_notes[:-1]:
        notes_txt = notes_txt + x + ', '
    notes_txt = notes_txt + 'and ' + all_notes[-1]

    #Build some sentences
    sentences = [
        f'{title} is a {random.choice(craftsmanship)} {random.choice(product)} sure to be a {random.choice(noun)} in your home.',
        f'{random.choice(socialproof)} this {color} {random.choice(product)} will leave you feeling {random.choice(feeling)}.',
        f'{random.choice(benefits)} {title} - {random.choice(craftsmanship)} with notes of {notes_txt}.',
        sentence
    ]

    random.shuffle(sentences)  # Shuffle the order to add some variety
    sentences.append(random.choice(cta))  # Append a random call to action to the end of our content
    description = ' '.join(sentences)
    df.at[i, 'adlib'] = description

df.to_csv('adlib.csv')

Generating product images on the fly

With some base data, we can look to generate images. There are a few really incredible machine learning image-generation toolkits now available, like Google’s Imagen or OpenAI’s DALL·E 2, but we don’t need to be that fancy. I originally tried to use line art and Scalable Vector Graphics (SVGs) to programmatically generate the images. It’s a good strategy because SVG is very similar to HTML and you can even use CSS. All the elements of the image are programmable. However, the quality just wasn’t there.
Maybe if you’re a better designer, this is a viable strategy, which is why it's still worth mentioning. Instead, let’s try using regular bitmap pixel images. The idea here will be to have two layers of PNGs. One will be our base layer (shown above), which will be a static background. The other image (shown below) we will tint with various RGB settings. Then, we can dynamically color our images. On the left, we have a base image that is fairly nice looking by itself. There’s room to change wax color or add text onto the jar. We can color/tint the layer on the right, and overlay it back onto the base layer. Let’s power up the Python Imaging Library (PIL) for this demo to dynamically generate the images. Start a new Python file and begin it with the following imports:

from PIL import Image, ImageOps, ImageDraw, ImageFont

We need to load in some preliminary assets, like our PNGs and some fonts. I’ve added the same font twice, just once at 28 pixels and once at 10 pixels.

foreground = Image.open("image/Candle01.png")
background = Image.open("image/Candle02.png")
fnt = ImageFont.truetype("Shrikhand-Regular.ttf", 28)
fnt_sml = ImageFont.truetype("Shrikhand-Regular.ttf", 10)

Next, we need to define a magic function called tint_image. It’s going to take the image, copy its opacity, make it grayscale, pop in some color, and then put the opacity back. The result is a tinted image! Using that on the foreground dynamically generates our colored candle. We’ll make our demo orange.

def tint_image(src, color="#FFFFFF"):
    src.load()
    r, g, b, alpha = src.split()
    gray = ImageOps.grayscale(src)
    result = ImageOps.colorize(gray, (0, 0, 0, 0), color)
    result.putalpha(alpha)
    return result

color = "orange"
foreground = tint_image(foreground, color)

We can go ahead and merge the colored foreground into the background.

background.paste(foreground, (0, 0), foreground)

As a next step, we can add some text.
Here I’m writing the name as “Apricot Grove.” The text is centered 330 pixels to the right and 400 pixels down (it was just trial and error to find the right location to place the text). We use the small text to fill out some white space.

d = ImageDraw.Draw(background)
d.multiline_text((330, 400), "Apricot\nGrove", font=fnt, fill=(100, 100, 100, 10),
                 align='center', spacing=28, anchor="mm")
d.multiline_text((330, 650), "Homemade soy wax candles", font=fnt_sml, fill=(80, 80, 80, 10),
                 align='center', spacing=14, anchor="mm")
background.show()

Lastly, we save the complete image as a PNG.

background.save("save.png")

The result is radically better than the line drawing and creates a very predictable result.

Loading the products

Using all of the scripts we’ve written so far, we can connect our new product information to Velo.

Building an API endpoint

Velo is an extremely powerful platform that allows direct interaction with both the frontend and backend of a Wix website. Velo offers fine-tuned control across a spectrum of CMS features. Today, we are just expanding the functionality of our backend with a few Velo functions. This guide is just the first step into Velo and what could be built in the future. At the top of the Wix Editor, make sure you’ve enabled Dev mode. Next, we’ll want to hop into the Public & Backend section. Under Backend, you can select Expose site API, which will auto-generate a file called http-functions.js. This opens up your Wix website as an API, enabling you to write custom functions or services as endpoints. Think of it like a backend with NodeJS, but with direct integrations into Wix. You can make GET or POST requests against these and access all of the Velo tooling. Functions just need to be prefixed with get_ or post_ (as in get_funcName() or post_funcName()) to define their purpose. With our script, here are the Velo libraries we’ll need: Wix Media Backend, Wix Stores Backend, and Wix HTTP Functions.
import { mediaManager } from 'wix-media-backend';
import wixStoresBackend from 'wix-stores-backend';
import { ok, notFound, serverError } from 'wix-http-functions';

The mediaManager library from wix-media-backend allows us to manipulate the product images we upload. The wix-stores-backend is where we will upload products. Lastly, the wix-http-functions allows us to build our API responses. Here are the steps that we need to take in our Velo endpoint:

01. Accept a POST request with a payload containing product and image data.
02. Create a product with our product data.
03. Upload an image from our local PC to our Wix site.
04. Apply that image to the previously created product.

To start, let’s create a new endpoint:

export async function post_echo(request) {
    let response = {
        "headers": {
            "Content-Type": "application/json"
        }
    }
    response.body = 'This endpoint works!'
    return ok(response)
}

After saving the file, this example function can be accessed at https://{my-username}.wixsite.com/{my-store-name}/_functions-dev/echo. Hitting it with a POST request should send back the message! Next, let’s write some code to create a product. So, we create a new promise to allow our function to run asynchronously. Modern JavaScript just makes life easier. Then, we use wixStoresBackend, which we imported earlier, and call createProduct with the product data that we pass in. Then, we get back information from Velo, like the product ID or extra details about what was just created. That’s all we need for this step.

function createProduct(product) {
    return new Promise(resolve => {
        wixStoresBackend.createProduct(product).then(res => {
            resolve(res)
        }).catch(err => { console.error(err) })
    })
}

The next step is to upload an image from our local PC to the Wix site. We will need a Base64-encoded image, a folder name, an image filename, and a mimetype (like png). The Base64-encoded image gets turned into a buffer and streamed over to Wix.
Then, we just give the Wix mediaManager all the info it needs and it will upload the image for us!

function uploadImage(image_base64, image_folder, image_filename, image_mimetype) {
    return new Promise(resolve => {
        let buf = Buffer.from(image_base64, 'base64')
        mediaManager.upload(
            image_folder,
            buf,
            image_filename,
            {
                "mediaOptions": {
                    "mimeType": image_mimetype,
                    "mediaType": "image"
                },
                "metadataOptions": {
                    "isPrivate": false,
                    "isVisitorUpload": false,
                }
            }
        ).then(res => {
            mediaManager.getDownloadUrl(res.fileUrl).then(url => {
                resolve(url)
            })
        });
    })
}

For the last step, we need a function to put everything together. That looks something like this:

Get the data from the POST request.
Upload the image.
Create the product.
Use the product ID we get back from the upload to map the image URL to the product.
Profit!

export async function post_upload(request) {
    let response = {
        "headers": {
            "Content-Type": "application/json"
        }
    }
    let body = await request.body.text()
    let data = JSON.parse(body)
    let img_url = await uploadImage(data.image.base64, data.image.folder, data.image.filename, data.image.mimetype)
    let product = await createProduct(data.product)
    await wixStoresBackend.addProductMedia(product._id, [{ 'url': img_url }])
    response.body = product.productPageUrl
    return ok(response)  // Return the response so Velo sends it back to the caller
}

In summary, here is that code all put together:

import { mediaManager } from 'wix-media-backend';
import wixStoresBackend from 'wix-stores-backend';
import { ok, notFound, serverError } from 'wix-http-functions';

export async function post_upload(request) {
    let response = {
        "headers": {
            "Content-Type": "application/json"
        }
    }
    let body = await request.body.text()
    let data = JSON.parse(body)
    let img_url = await uploadImage(data.image.base64, data.image.folder, data.image.filename, data.image.mimetype)
    let product = await createProduct(data.product)
    await wixStoresBackend.addProductMedia(product._id, [{ 'url': img_url }])
    response.body = product.productPageUrl
    return ok(response)
}

// Upload image
// Returns a URL which can be assigned to a product
function uploadImage(image_base64, image_folder, image_filename, image_mimetype) {
    return new Promise(resolve => {
        let buf = Buffer.from(image_base64, 'base64')
        mediaManager.upload(
            image_folder,
            buf,
            image_filename,
            {
                "mediaOptions": {
                    "mimeType": image_mimetype,
                    "mediaType": "image"
                },
                "metadataOptions": {
                    "isPrivate": false,
                    "isVisitorUpload": false,
                }
            }
        ).then(res => {
            mediaManager.getDownloadUrl(res.fileUrl).then(url => {
                resolve(url)
            })
        });
    })
}

function createProduct(product) {
    return new Promise(resolve => {
        wixStoresBackend.createProduct(product).then(res => {
            resolve(res)
        }).catch(err => { console.error(err) })
    })
}

Using the API endpoint

Now that we have somewhere to send our data, let’s create some products! First, create a new Python script called “load.py.” We’ll have a number of steps to build all of the information that makes up a product, including:

A filename for the image
A SKU for the product
A high-CTR meta description
A rich, search-optimized title
Something to open the image and convert it to base64

Here’s what all of those look like:

def get_filename(name):
    return name.lower().replace(" ", "_") + '.png'

def get_sku(name):
    return name.lower().replace(" ", "_") + '_g'

def get_metadescription(color, name):
    year = date.today().year
    return f'{year} edition {color} soy wax candles - hand poured and hand crafted. CandleCraftys {name} best candles for living spaces and ambience. Buy now!'

def get_image(name):
    filename = f'image/saves/save-{name}.png'
    with open(filename, "rb") as f:
        im_bytes = f.read()
    im_b64 = base64.b64encode(im_bytes).decode("utf8")
    return im_b64

def get_seotitle(row):
    name = row['Title']
    notes = (row['Top notes'].split(', ') + row['Mid notes'].split(', ') + row['Base notes'].split(', '))
    note = random.choice(notes)
    color = row['color_literal']
    return f'{name} - {note} scented {color} soy candles'

Next, we need to build the product and image object for every row in our CSV of product data.
First, we open the CSV in pandas and loop through it. We can grab some common things we’ll need from each row, like name and color. Then, I check to see if the product has “Discontinued” in the name as a final quality check. After that, we map our functions or variables to the relevant fields in the big data object. This is the magic sauce that this whole guide has been building up to.

df = pd.read_csv('data.csv')

for i, row in df.iterrows():
    name = row['Title']
    color = row['color_literal']
    if 'Discontinued' not in name:
        print(name)
        data = {
            'image': {
                'base64': get_image(name),
                'folder': 'programmatic',
                'filename': get_filename(name),
                'mimetype': 'image/png'
            },
            'product': {
                'name': name,
                'description': row['adlib'],
                'price': 20,
                'sku': get_sku(name),
                'visible': True,
                'productType': 'physical',
                'product_weight': 1,
                'product_ribbon': '',
                "seoData": {
                    "tags": [
                        {
                            "type": "title",
                            "children": get_seotitle(row),
                            "custom": False,
                            "disabled": False
                        },
                        {
                            "type": "meta",
                            "props": {
                                "name": "description",
                                "content": get_metadescription(color, name)
                            },
                            "custom": False,
                            "disabled": False
                        }
                    ]
                }
            }
        }
        upload(data)

The last line of that section of code is an upload() function, which we don’t have yet. Let’s go through that now. Get the product data and convert it to JSON that the Wix API can read. Then, send the payload off.

def upload(data):
    url = 'https://username.wixsite.com/candle-crafty/_functions-dev/upload'
    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
    payload = json.dumps(data)  # Pass in all product details at once
    response = requests.post(url, data=payload, headers=headers)
    try:
        data = response.json()
        print(data)
    except requests.exceptions.RequestException:
        print(response.text)

Let the automated store run

If the stars align and all our code works, we’ll have a fully functioning storefront piled high with dynamically built products. Occasionally refresh the page and watch as products appear.
Enjoy the numerous products, which you can now A/B test, optimize, and mine for top-selling variations.

Why content expansion works

This entire strategy is centered around the idea of casting a wide net: SEO is already top-of-funnel marketing, and at the very top of the SEO funnel itself are keywords. By creating many new, diverse, keyword-driven pages, we are increasing impressions. Then, through optimization strategies, we can increase clickthrough rate. The real secret to content expansion, though, is to collect data. If you don’t have data, you won’t be empowered to make informed decisions. By generating a broad range of keyword-relevant pages, you can begin to iterate: Build in high-impact areas, and reduce or redirect low-impact pages. Remember, content is king, but not all kingdoms are prosperous.

Customize to fit your needs with Velo

Velo is a powerful CMS IDE—this article is just the beginning of what you can accomplish with it. I would urge anyone to review the API Overview to see just how much can be achieved. Adding features to most content management systems feels hacky and leaves you wondering if the software might break at any time. Integrating with Velo was the opposite. It felt like the Wix website wanted me to customize it to fit any custom request I had. If you’ve ever needed more from your CMS, I would consider Wix and enabling Velo. It covers the whole spectrum from no-code to low-code to full-code sites.

Colt Sliva - Senior Technical SEO Analyst

Colt Sliva is a technical SEO who has experience working with SaaS, eCommerce, UGC platforms, and news publishers across the Fortune 500. His main areas of study are SEO at scale, automation, and breaking things to see how they really work. Twitter | Linkedin

  • HARO link building: The backlink strategy everyone should use

Author: Aaron Anderson

When site owners think about link building, they typically associate it with outreach: getting in touch with other site owners and politely convincing them that your content is relevant for their audience. While this can be an effective way to gain backlinks, it can also be a time-consuming process, making it difficult to prioritize. A more ideal approach would be to showcase what you have to offer and let site owners come to you. HARO (which stands for Help A Reporter Out) link building works along these lines, except that you’re allowing reporters to come to you and providing them with something you likely have tons of: expert insights about your industry, service, or product. But, if it’s so effective, why isn’t everyone building links with HARO? Because the sheer volume of queries from reporters can be overwhelming. Fortunately, all it takes to sort through these requests and make link building with HARO more manageable is a clever system of Gmail labels and filters. Here’s everything you need to know to get started with HARO link building, including:

Why link building is important for SEO
How to get started with HARO link building
How to identify relevant HARO requests using Gmail filters
01. Sign up for HARO
02. Create a Gmail label for HARO emails
03. Set a filter for your HARO emails
04. Start receiving HARO emails and check for relevant queries
05. Compile a list of keywords to find relevant queries
06. Create a second label and a new filter for your keywords
07. Evaluate the emails in the new label and start pitching
08. Earn backlinks

Why link building is important for SEO

Backlinks are a key aspect of any site’s SEO. They work as letters of recommendation: both the number and the quality of the links that point to your site play an important role in how Google determines your site’s rankings.
Links to your site could come naturally, but proactive steps to increase the number of high-quality backlinks that point to your site can only further your SEO efforts. Link building helps you do just that.

“Even if you’re not in a competitive niche and have been publishing good content consistently, if you find that your site only shows up on the second or third page of search results, you can benefit from getting more backlinks (so long as your site doesn’t have any major technical issues).” — Debbie Chew, global SEO manager at Dialpad

This is particularly important for SMBs that haven’t built much brand exposure yet and will therefore have more trouble getting natural links from other sites.

But, link building is (generally) hard to outsource and execute

So, you’ve determined you want to improve your site’s SEO through link building. But, where do you start? You could try to build links yourself. But, determining which link-building strategies work best for you and your business and learning how to implement them can involve a steep learning curve. It takes time and a lot of trial and error, especially if you have no prior link-building experience. You might also think about hiring someone else to build links for you. However, this can be a risky move. You could end up paying for poor links and notice no benefits to your site’s organic visibility. What’s even worse, these low-quality links can negatively impact your backlink profile and overall SEO. It’s not all doom and gloom, though. There is a link-building strategy that you can execute on your own and without any prior link-building experience: HARO link building.

How to get started with HARO link building

HARO is an online platform that journalists and bloggers use to find knowledgeable sources for articles they are working on. Reporters ask questions about specific topics they want to cover and receive expert answers that they can quote in their stories.
The platform compiles reporters’ requests and sends them out via email to subscribed sources (i.e., experts like you) three times a day (morning, afternoon, and evening, Monday through Friday). When signing up as a source, you can select the topics you want to receive requests for. The requests include:

A summary of the query
The name of the reporter
The media outlet/blog the article will be published on
An anonymous email address to send pitches to
A detailed description of the query
The deadline
Any additional requirements/restrictions the reporter may have

Here’s a sample request:

When journalists select a response for their article, they will often include the source’s name, the name of their company, and a link to their company website in return. Below is an example of an article from Hive that was likely written using HARO sources. The article gathers productivity tips from several business leaders and includes some quotes from them. As you can see, a link to their business has been included:

Before moving on to the system you should create to make the most out of HARO for your link building, let’s take a look at why the HARO approach is a uniquely accessible strategy in the first place.

Just about any business or site owner can build links using HARO

Unlike other link-building strategies, HARO allows you to leverage an existing asset that you likely have an abundance of: expertise. When you’re experienced in a certain field, you’re qualified to answer questions about it. Knowledge about your niche, how your industry works, and how to run a business are all things you can bring to the table with HARO to earn you links. As I’ve mentioned before, reporters usually link to their sources’ sites when including their quotes in an article. That makes HARO a perfect way for SMBs to offer valuable insight and get high-quality links in return.
Plus, it helps you build credibility as a source in your field of expertise and can increase your brand exposure by being featured in relevant industry publications. However, in order to scale this strategy and make it more manageable, you need to create a system to identify the most relevant requests and filter out the rest.

You’ll need to manage HARO requests—a system can help

When you sign up as a HARO source, you’ll start receiving emails with reporter requests for each category that you’ve signed up for three times a day. That’s three daily emails per category. This means a lot of emails (and a lot of requests within each email). Without a proper system in place, building links with HARO can be daunting and get out of hand quickly. You might end up spending a lot of time going through all of the requests and miss out on good opportunities or fail to meet deadlines. A system that simplifies this process will make the strategy more manageable and effective. And, once you’ve put your system in place, you can turn this strategy into something you can do on an ongoing basis. Below, I’ve explained the system I created within Gmail to identify relevant HARO requests. Let’s take a step-by-step look at how it works so you can replicate it for your site or business.

How to identify relevant HARO requests using Gmail filters

The main goal of setting up a system to answer HARO queries is to filter out irrelevant requests so that you can focus on the ones that you can potentially answer and get attribution for (i.e., gain a backlink). These are the steps you need to follow:

01. Sign up for HARO

Head over to HARO and read the rules. Then, click on the sign-up button at the top and fill out the form. In the next steps, I walk through how to set up labels and filters to organize emails on Gmail, so make sure you use a personal Gmail account or a business email that you can access on Gmail when registering for HARO.
Once you’ve submitted the form, you’ll receive an email with a link to activate your account. After you’ve activated your account, you’ll be able to access your account details and select the categories/topics you want to receive requests for:

02. Create a Gmail label for HARO emails

Next, you’ll move into Gmail to set up the rest of your system. Within Gmail, click on the + sign next to Labels on the sidebar to create a new label. Enter a name for your label and click on Create.

03. Set a filter for your HARO emails

Next, you’ll need to set a filter so that HARO emails bypass your inbox and go straight to the label you’ve just created. To do this, click on the filter icon in the search bar and paste the following email address into the “From” field: haro@helpareporter.com. Then click on Create filter. A new menu will appear, prompting you to select what Gmail should do when a message matches the search criteria you just entered. Here, you want to check the “Skip the Inbox” and “Apply the label” options, choosing the label you created for HARO queries. Then, click on Create filter.

04. Start receiving HARO emails and check for relevant queries

HARO emails will now bypass your inbox and land straight in the HARO label you created in the step above. Wait until you’ve received several batches of HARO queries and review these emails to get familiar with the types of questions in each topic. Identify any requests that you can answer and any keywords within the query or the summary of these requests that you could use to locate similar queries in the future. I also recommend that you use the Gmail search bar to search for different keywords and see whether any HARO emails show up in the results.

05. Compile a list of keywords to find relevant queries

Compile a list with the keywords you identified in the previous step, plus any others that may be relevant to your niche or your expertise.
If, for example, you work at a recruiting agency, relevant keywords for your list might include:

HR
staffing
hiring
recruit
employee
manager
etc.

06. Create a second label and a new filter for your keywords

Repeat step two to create a second label—this time, for queries that are a better fit for your niche/expertise. You can call this label “Relevant HARO Queries” or something along those lines. Then, set up a new filter that sends HARO emails that contain any of the keywords on your list to this new label. Alternatively, if you’d rather receive relevant HARO queries in your main inbox and just filter out the emails that don’t contain any of the keywords on your list, you can edit the first label that you created. To do so, click on the gear icon next to the search bar, and click on See all settings. Then, click on Filters and Blocked Addresses. Here, you’ll see any filters that you’ve created. The filter (and label) for the HARO email address that you’ve created should appear here. Click on Edit to change the settings of the filter. Use the “Doesn’t have” field to add the keywords on your list and click on Continue. And, check the same two options that you used when you initially created the HARO label. With this method, the emails that don’t contain any of the keywords will bypass the inbox and land in the HARO label, while those containing at least one of the keywords will appear in your inbox.

07. Evaluate the emails in the new label and start pitching

Now that you have a filtered list of HARO requests, you can more easily identify relevant queries to respond to. Check this filtered list regularly (I check mine once per day) to make sure you don’t miss any deadlines or good opportunities. You can also edit the filters to add or remove keywords to better refine the emails that you’re checking. Here are a few additional tips to increase the likelihood that your pitches will get published:

Provide concise, easy-to-quote answers.
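As your keyword list grows, assembling the filter expression by hand gets error-prone. The snippet below is a small helper of my own (not a HARO or Gmail feature) that joins a keyword list into the OR syntax Gmail accepts in a filter's "Has the words" or "Doesn't have" field, quoting multi-word phrases:

```python
def gmail_or_query(keywords):
    """Build a Gmail search expression matching any of the keywords.

    Multi-word phrases are quoted so Gmail treats them as exact phrases.
    """
    terms = []
    for kw in keywords:
        kw = kw.strip()
        terms.append(f'"{kw}"' if " " in kw else kw)
    return " OR ".join(terms)

# Using the recruiting-agency keywords from the example above
print(gmail_or_query(["HR", "staffing", "hiring manager", "recruit"]))
# → HR OR staffing OR "hiring manager" OR recruit
```

Paste the resulting string into the filter field; when you add or drop keywords later, regenerate the string instead of editing it manually.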
Answer exactly what the query is asking and avoid going on tangents or including irrelevant information. Essentially, what you’re providing to reporters is the equivalent of a written sound bite.

Be creative. The more unique your insight is, the more likely it is to stand out to reporters among all the answers they receive, improving your odds of getting published and earning a backlink.

Avoid being promotional. Provide informational answers that actually help the reporters and their readers, not answers that promote a product or a business (especially your own), unless directly requested by the reporter.

08. Earn backlinks

Once you’ve started answering questions on a consistent basis, the final step of the process is to evaluate your results. If you’re providing unique insights into topics you’re already well-versed in, you’ll probably start earning high-quality backlinks for your site. Some journalists will let you know when they’ve included your pitch in their article, but this is not always the case. You can track the backlinks you earn by using a tool (such as Ahrefs) or by searching for the name of your business (in quotation marks) on Google. It may take a few weeks for queries to be published as articles, so I recommend that you check for new links once or twice per month, depending on the volume of pitches you’ve sent. Another aspect you’re probably wondering about is whether the links you’ll earn will be follow or nofollow: some queries will tell you this up front, but many don’t mention it. If the query includes the name of the media outlet, you can search for articles from that outlet that gather expert information from HARO sources to see how they link to other sites in similar articles.
You should keep in mind that, although follow links are generally more desirable for SEO because they pass link equity, nofollow links from high-quality sites can still be very valuable, since they can drive high-intent traffic and are a natural part of a healthy backlink profile.

Turn your expertise into high-quality links for your business with HARO

While link building should be a core part of every business’s SEO strategy, figuring out where to start or how to do it can require a lot of trial and error. HARO offers everyone—especially SMBs—an excellent opportunity to earn backlinks in exchange for knowledge about topics that reporters and bloggers are writing about. A key aspect of implementing HARO link building efficiently is to have a system that filters emails so that only relevant queries are consistently evaluated and answered. By following the steps I’ve laid out in this guide, you’ll be able to build such a system, and you’ll be one step closer to earning backlinks for your business with your knowledge and expertise.

Aaron Anderson - Founder & Lead Link Builder at Linkpitch.io

Aaron Anderson is the founder and lead link builder at Linkpitch.io, an outreach-driven link building agency. He's also the host of the “Let's Talk Link Building” podcast. When not building backlinks, he enjoys traveling the world with his family. Linkedin

  • How to approach SEO localization and SEO website translations

Author: Adriana Stein

Available in: English, Português, 日本語, Deutsch, and Français

I’d like to introduce you to Company X, a multinational company that wants to strengthen its international expansion efforts. During a recent strategy meeting, one of the stakeholders at Company X decided that it would be more efficient to create content in English and then use Google Translate for other languages. The thinking was that this would allow them to save their resources for the more technical stuff. Although the marketing team had originally wanted to create targeted messaging based on regions, the idea of multilingual SEO felt like too much manual work. Plus, they’d already done such robust SEO work on their English website, so wouldn’t directly translating content be enough to keep bringing in traffic? Nope. In the next meeting, their regional sales managers complained that they didn’t receive any leads from their respective international language websites. In fact, none of the translated pages were even ranking or converting anyone. So, what happened? Here’s the root of the issue: Company X focused on translation, not SEO localization. Instead of implementing SEO localization, they copy/pasted what worked in English and simply waited for the traffic to come in. Unfortunately, this is a fairly common scenario when working on a multilingual website. But, it doesn’t need to (and shouldn’t) be like this! It’s entirely possible to use your website to fuel international growth in a manner that ranks well, resonates with your local target audience, and, ultimately, generates revenue for your business. To help you maximize your international SEO growth efforts, I’ll go over:

The difference between SEO localization and SEO website translation
The benefits of SEO localization
Keyword localization
  By search intent
  By search volume and competition
  By messaging and regulations

What is the difference between SEO localization and SEO website translation?
SEO website translation focuses on translating your website content from one language directly into another, often via machine translation. The goal of SEO website translation is to increase audience reach and rankings within a specific geographic area in a specific language. This is usually done by directly translating the:

Keywords
Headings
Meta descriptions
Copy

For example, content in English is translated into German using Google Translate in an effort to reach people living in Hamburg. It’s not reviewed by a native speaker and doesn’t carry the same intent as the English original. For example: If you’re from Germany, you’ll recognize instantly that the direct translation makes no sense.

Alternatively, SEO localization has the same foundational concept as translation in that words are adapted from one language into another, but it’s carried out by a native-speaker SEO strategist in order to ensure that:

Keywords are relevant to the local audience
Keywords have adequate search volume in order to impact holistic SEO growth in that specific market
Keywords match the local search intent
Headings, URLs, meta data, and copy are optimized for local keywords and target the particular audience
Content has the right tone of voice
Content adheres to local messaging guidelines
Correct SEO settings are implemented for the multilingual site

Simply put: SEO localization includes crucial research and processes that SEO translation does not. When you rely solely on translation, you're missing out on a huge opportunity to rank for localized keywords and create content that effectively nurtures your local audience toward conversion.

What are the benefits of SEO localization?

When doing international SEO, the localization approach is the more effective option because it helps you:

01. Efficiently expand into new markets: Since the message will be curated based on local research, messaging guidelines, and search intent, it resonates more deeply with those who see it.
And, when your audience feels that you’re “speaking their language” and meeting their needs, they’re much more likely to convert. Those conversions can help propel business growth (which is likely the reason you’re expanding internationally to begin with).

02. Enhance your overall user experience: When someone arrives on your website, how long do they stay? What pages do they look at? Do they convert? SEO localization reduces the bounce rate and increases the average amount of time someone spends browsing your website. Improving these metrics often goes hand in hand with higher conversion rates.

03. Develop market-specific organic search growth: Simply translating your content doesn’t mean it’s guaranteed to rank. Keywords that have a ton of search volume in the US may have no search volume in Italy. In France, they may use an entirely different phrase to describe something than in Canadian French (consider multi-regional vs. multilingual). Ideally, each market would have its own SEO strategy aimed at capitalizing on that unique search intent in the local language. By focusing on SEO localization, you ensure that the content you’re investing time and resources into creating has the best possible chance of ranking, resonating with your audience, and generating conversions.

What is keyword localization?

A crucial part of SEO localization, keyword localization is the process of researching and identifying language-equivalent keywords that are relevant to a specific local market and have search volume. These are the keywords that you use to build your SEO strategy and set content production priorities. Important: It’s a must that localized keywords have appropriate search volume to have an impact on organic traffic in the target market and consequently generate conversions. In many cases, directly translating keywords has little to no impact on SEO performance.
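In practice, keyword localization usually starts from a tool export with a volume and difficulty figure per candidate phrase. The sketch below is my own illustration of that triage step; the data structure, thresholds, and helper name are assumptions for the example, not something from this article:

```python
def shortlist_keywords(candidates, min_volume=100, max_difficulty=40):
    """Keep localized keyword candidates with real search demand and
    manageable competition, sorted highest-volume first.

    `candidates` is a list of dicts like those an SEO tool export might yield.
    """
    keep = [c for c in candidates
            if c["volume"] >= min_volume and c["difficulty"] <= max_difficulty]
    return sorted(keep, key=lambda c: c["volume"], reverse=True)

# The Hungarian example below in miniature: the direct translation has no
# search volume, while the localized phrase does (difficulty values invented).
candidates = [
    {"keyword": "fűtött dohány", "volume": 0, "difficulty": 5},
    {"keyword": "dohányhevítő", "volume": 900, "difficulty": 30},
]
print([c["keyword"] for c in shortlist_keywords(candidates)])  # → ['dohányhevítő']
```

The thresholds would vary by market; the point is that the shortlist is driven by local volume and difficulty, never by how faithful the translation is.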
To strengthen your ability to manage your multilingual sites and grow with SEO localization, I’ve outlined three examples of how I’d approach the keyword localization process.

By search intent

Scenario: A company based in the US sells a highly popular heated tobacco product. They want to expand into Hungary with an SEO-driven approach. When researching the direct Hungarian translation for “heated tobacco,” the phrase “fűtött dohány” showed no search volume. However, dohányhevítő (tobacco heater) has a search volume of close to 900. Although there are slight linguistic differences, the information the user is searching for when typing the Hungarian phrases dohányhevítő vs. fűtött dohány is the same, meaning they have the same search intent. This is a prime example of when identifying matching keywords with localized search intent and search volume becomes a great content opportunity. It’s much more worthwhile to target keywords that local audiences actually use, which becomes clear in this case when we look at the difference in search volume. The lesson here: If you don’t check each keyword’s local search volume and consider similar phrases, you’ll lose out on valuable organic traffic due to minor linguistic or cultural nuances.

By search volume and competition level

Scenario: You need to localize a blog post about “what is process management” from English to German to reach your audience in Germany. When you review the direct translation without taking anything else into account, the search volume in Germany is a mere fraction of the volume in the US, so you may be hesitant to invest resources in creating content around this keyword. However, you’ll also want to zoom out and consider the level of competition in the US vs. in Germany.
Here are the top search results in the US for the phrase what is process management: Now, take a look at the top results in Germany for the phrase was ist prozessmanagement: If you scroll further down the results on both (not shown in the image due to length), you’ll notice much bigger competitors rank in the US as compared to Germany. In fact, the keyword difficulty for was ist prozessmanagement in Germany is 31, which is much lower than the US equivalent what is process management at 51. From this perspective, creating content for this keyword becomes much more lucrative because it’s easier to rank for. You can further capitalize on this opportunity by building out the localized keyword cluster in German (the related group of keywords you want to target in a piece of content): The search intent behind was ist prozessmanagement and prozessmanagement definition is the same. And to make things even better, prozessmanagement definition has a higher search volume than the English keyword. The lesson here: Ranking opportunities differ across markets, so take the time to dig deeper. In fact, it’s often easier to rank for important keywords in local markets than in English, because competitors are typically smaller and don’t have the website authority of larger players. Combine that with keywords that match the search intent in the local language and you’ve just found an SEO golden nugget.

By messaging and regulations

Scenario: You have an English-language article that ranks for the keyword how does crowdfunding work and you want to localize it into Spanish for audiences in Spain. Here, we have a case where the direct keyword translation both matches search intent and has search volume—a rare case indeed! That makes the bones of the content easier to localize: the URL, headings, meta data, etc., can simply be translated (though it’s always better to use a human translator than to rely on an automated service like Google Translate).
However, there’s an extra step to consider: local regulations. Let’s say you did some research and found out that fundraising regulations in Spain are entirely different from those in the US. This means that you can’t just translate the rest of your content from English to Spanish. Instead, you need to research the crowdfunding process in Spain and ensure that your content matches its regulations. The lesson here: SEO translations simply don’t take international messaging and regulations into account. Always take the time to understand what information your local audience truly needs. You certainly don’t want to get in trouble for providing false information!

Use SEO localization for better user experience and search visibility

Regardless of the language, your local market content needs to have the same quality as your original content if you want to maximize user experience for your international audiences. Take the time to plan your content around keywords that are in the local language, match search intent, and have adequate search volume. And, most importantly, ditch direct translations. When you collaborate with native speakers and embrace the process of SEO localization, you have a strategy that truly drives international business success.

Adriana Stein - CEO and Founder at AS Marketing

Originally from the US and now living in Germany, Adriana Stein is the CEO and founder of the marketing agency AS Marketing. She leads a team of multi-language SEO experts who develop holistic international marketing strategies for global companies. Twitter | Linkedin

  • Why is Google rewriting so many titles in the search results?

Author: Mordy Oberstein

Carefully written title tags are an important part of optimizing a page for search. Not only do they help Google understand the page’s content, they’re also often used as the title on the search results page (formally known as the title link). Historically, Google would rely on the title tag that site owners and SEOs implemented for the title on the search engine results page (SERP). It was mainly in cases where Google considered the title tag to be “sub-optimal” (i.e., not aligned with the content on the page itself) that it would rewrite the title for the SERP. All of that changed at the end of August 2021. Suddenly, Google began rewriting titles here, there, and everywhere. Early speculation saw a 77% increase in Google not exactly using the content found in the title tag on the SERP itself. Now that the dust has settled a bit, where do we stand with title tags? How often is Google abandoning the exact wording in a page’s title tag when it displays the corresponding title link? Moreover, why did Google suddenly decide to drastically increase the propensity with which it rewrites title tags? Below is a summary of my presentation from BrightonSEO April 2022, where I addressed these very questions:

How often is Google rewriting titles for the SERP?
Where is Google getting title rewrites from?
What influences Google’s title rewrites?
Examples of Google’s title rewrites and their significance
Why is Google rewriting so many title tags?

How often is Google rewriting titles for the SERP?

Before we get into why Google has decided to double down on title rewrites, let’s first get our bearings on how often rewrites occur. As of January 2022, Google rewrites 65% of desktop titles and 62% of mobile titles, according to data provided to me by Semrush. That’s the majority of titles users see on the SERP. (However, not all rewrites are extensive, as I’ll soon show.)
What’s most interesting is that rewrites appear to be becoming more and more common. Initially, as the graph below (based on data I pulled back in August 2021) shows, Google was only rewriting 56% of titles and relying on the title tag the other 44% of the time. Fast-forward to October 2021 and that number was up to 61% on desktop, increasing to 65% as of January 2022! The trend seems to be that Google relies on title tags less as time goes on. This is of obvious concern to SEOs, but it also speaks to Google’s own intentions, as I’ll get into later.

How significant are Google’s title rewrites?

An ongoing increase in Google’s propensity to rewrite title tags for use in its title links is a significant development and needs to be qualified. How substantial are the rewrites? Is Google just changing a word or two? Is Google changing 90% of what was in the title tag? 80%? The truth is, many of the rewrites rely heavily on the content found in the title tag and reflect relatively minor revisions. Accordingly, roughly 40% of all rewrites (depending on device) are at least an 80% match to the title tag itself, according to data from Semrush. This means that the rewritten title link is at least an 80% content match to the original title tag. Furthermore, around 15% of desktop rewrites and nearly 20% of mobile rewrites reflect a 90% content match (or better) when compared to the actual title tag. However, that’s not to say that even a 20% or 10% revision of the title tag’s content is insignificant; it depends on what was changed. Also important to understand is Google’s tendency around brand name usage when the title link does not match the title tag. Here, there’s a bit of a gap between desktop and mobile: when the title tag does contain the brand name, Google tends to remove it from the desktop title link about 25% of the time. On mobile, however, despite the rewrite, Google removes the site’s name under 20% of the time.
This is most likely due to the fact that mobile title links wrap around to multiple lines and are therefore longer than desktop title links. As such, Google has more of an opportunity to leave the brand name in.

Where is Google getting title rewrites from?

To understand how often and to what extent Google is rewriting (or outright ignoring) your title tags, we should look at where Google is taking the rewrites from. This will be a very important part of the equation when we look to solve the mystery that is Google’s new infatuation with title rewrites. At the onset of Google’s new disposition towards titles, the search engine was disproportionately relying on the H1 heading tag. Data I collected in August of 2021 showed that when Google “rewrote” the title link, it would utilize the page’s H1 76% of the time. This means that, at the onset of the rewrites, over three-fourths of cases involved Google ignoring the title tag in favor of the H1. This number has been steadily decreasing since then and is an extremely important factor in understanding what these rewrites are really about. Specifically, Google has gone from relying on the H1 for its rewrites 76% of the time in August to 55.5% of the time as of October 2021, and even less as of January 2022, with under 50% of the rewrites pulling in the H1.

What influences Google’s title rewrites?

Why is Google spending time and resources rewriting titles for the SERP? Perhaps, if we can see when Google does and does not rewrite titles, we can glean some insight into the search engine’s reasons.

The impact of ranking position on title tag rewrites

Let’s start with ranking position. Are your rankings affected when Google rewrites a title? As shown in the chart above, the tendency to rewrite titles, while slightly different according to device, is consistent across ranking positions. This likely means that ranking position does not factor into the likelihood of Google rewriting or ignoring the title tag.
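The "80% content match" figures cited earlier imply some similarity measure between the title tag and the title link. Semrush's exact method isn't described here, so the sketch below is only my own rough approximation: the share of title-link words that also appear in the title tag.

```python
def title_match_ratio(title_tag, title_link):
    """Fraction of title-link words that also appear in the title tag
    (case-insensitive). A crude stand-in for a 'content match' score."""
    tag_tokens = set(title_tag.lower().split())
    link_tokens = title_link.lower().split()
    if not link_tokens:
        return 0.0
    shared = sum(1 for token in link_tokens if token in tag_tokens)
    return shared / len(link_tokens)

# A rewrite that only trims the brand name still scores as a full match
print(title_match_ratio("Best Meatloaf Recipe | Example Site", "Best Meatloaf Recipe"))  # → 1.0
```

A measure like this lets you audit your own pages: pull title links from the SERP, score them against your title tags, and flag anything well below 1.0 for review.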
The impact of title tag length on title tag rewrites

Perhaps the length of the title tag significantly influences when Google decides to go with a rewrite in the title link. Though you might suspect this to be the case, it is not. Across devices, whether the title tag contains between 1–7 words or 8–11 words, there is little variance in the chance that Google will rewrite a title. However, as a title tag grows to 12 words or beyond, the chances of a rewrite increase. This, to me, has far less to do with Google’s doubling down on rewriting title tags and far more to do with the fact that an absurdly long title tag naturally lends itself to being rewritten to some extent.

The impact of query length on title tag rewrites

One idea that I’ve seen floated around is that Google is rewriting titles so that it can implement language that more closely aligns with a particular query. This is not the case. If Google’s title rewrites were about query targeting, then queries with 5+ words would not be rewritten at the same rate as queries with 3–4 words. Logically, as a query lengthens, there are additional modifiers within it that Google would look to align the rewrite with. The fact that Google’s tendency to rewrite the title doesn’t change when query length significantly increases tells us that the rewrites are not primarily aimed at aligning the title link to the query itself.

The impact of query intent on title tag rewrites

Another intriguing theory is that Google is more likely to rewrite a title depending on the user intent reflected in the keyword. This, too, is not the case. Google is once again uniform in how it approaches title rewrites. Regardless of the intent reflected in the query, Google does not change the rate at which it uses the title tag as the title link on the SERP.
The implications of the data on title tag rewrites What I’ve been trying to highlight with the data above is not what Google is homing in on with its title tag rewrites, but rather the lack of any apparent focus. Rank position? Doesn’t matter. Title tag or keyword length? Doesn’t matter. Intent? The same—it does not matter. Google is behaving uniformly. Google has adopted an algorithmic method to custom create what it serves in the title link, but it’s doing so in a uniform manner. All of the things we, as SEOs, might consider to be focal points (i.e., intent or rankings) are not things that Google is considering when it comes to rewrites. That’s not to say everything is uniform across the entire web. For example, Google tends to rewrite 37% fewer title tags when the page represents a recipe. This is because the content found in recipe title tags doesn’t lend itself to rewriting. How many ways are there to really write “best meatloaf recipe?” Provided you don’t go wild with what you put into the title tag, the nature of the content can only appropriately be described in a few ways. But again, notice that position doesn’t matter here either. The unique characteristics of various verticals being what they are, the search engine has taken a broad stroke to custom create title link content for what seems to be the sake of creating custom title link content. The question now is, why does Google want to create custom title link content? Examples of Google’s title rewrites and their significance Before I get into why I think Google is heavily creating custom content for its title links, I want to run through some of the changes Google makes. It’s one thing to dig into a heap of data; it’s another thing entirely to read through the actual title tags to see the nature of the changes.
Analyzing actual examples will not only help us understand the content triggers associated with title tag rewrites, it will also get us closer to the reason why Google has rewritten titles so much more frequently since August 2021. With that, here are some themes I found when reading through thousands of title tag rewrites: Deeper geo-targeting Google will often attempt to offer a more targeted and specific location within the title link when compared with the content found in the title tag. For the keyword bmw 7 series dealer, Google added the specific area of “Beverly Hills” into the title link, along with the general area (in this case, Los Angeles). (Obviously, this all depends on where the search itself was conducted). Fewer local shenanigans Local SERPs are notorious for pages that attempt to overstuff the title tag (let alone their Google Business Profile name) with all sorts of “keywords.” From what I have observed, Google does try to limit this sort of irrelevant keyword stuffing in the title link by rewriting titles. Take the keyword divorce attorney: One page’s title tag read “Arizona Divorce Attorney Near You | Top Marriage Attorney | Contact Us Today.” Google replaced all of that irrelevant content with: “Experienced Arizona Divorce Lawyers Near You - Stewart…” Brand name for YMYL queries Another interesting pattern I saw is that Google tends to tack the site name onto title links for Your Money Your Life (YMYL) queries. This makes sense in that the publisher’s reputation speaks to the trustworthiness of the content. Less over-the-top marketing One of the benefits of the rewrites is that Google seems to be cutting down on the level of salesy-ness often found in the titles on the SERP. 
In this example, Kohl’s line in the title tag that reads “Upgrade Your Look With a Timeless Top” was trimmed down to a more palatable CTA: “Upgrade Your Look.” Google isn’t against brands using CTAs to drive clicks, but it does seem to be attempting to rein in the marketing speak a bit. Similarly, Google will, in almost all cases, subordinate the brand name in favor of what’s useful for users. For example, for the keyword dog bath, Google ignores Amazon placing its own brand name before the product within the title tag to produce a title link that reads “Dog Bath - Amazon.com.” Nuanced rewrites Contrary to what many SEOs have been saying, the changes Google makes with the title rewrites can be pretty sophisticated. Take the keyword us government passport renewal, for example. The title tag of one of the ranking pages reads: “Passport Application Acceptance Services” With such a title, you would likely expect the page to be an access point to renew your passport or to initially apply for your U.S. passport. You would also expect this page to be hosted on one of the U.S. government’s official websites. But, this is not the case. The page in this example belongs to the county of San Diego, California, acting as a proxy for the federal government. Google thought its users on the SERP should know that, while you might be expecting to land on the federal government’s site, you’re about to land on a local county’s site instead. So, Google rewrote the title tag to reflect this with a title link of “Passport Application Acceptance Services - County of San…” This example shows a sophisticated understanding of the context surrounding the content as well as what searchers are expecting. But, not all rewrites are of this caliber—many simply replace a dot with a dash (Google prefers dashes and will rewrite a dot to be a dash across the board). Despite Google often making small and seemingly insignificant changes, we can’t ignore the level of sophistication that does indeed exist.
This level of sophistication, along with Google’s ability to make meaningful, precise changes with its rewrites, speaks directly to why Google has taken on the enterprise of constructing so many rewrites. Why is Google rewriting so many title tags? Now, for the moment you’ve all been waiting for: Google is rewriting an ever-increasing number of title tags because…it can. It is doing this because it is interested in flexing some serious content-understanding muscle. Rewriting titles demonstrates a new level of understanding No, Google is not vain. Rather, Google is showing us (as well as itself) how well it can understand content. To what degree does it comprehend content? Google can understand it well enough to rewrite it. This reminds me of my time as a classroom teacher: It’s one thing to ask students to repeat something back to you. It’s another thing for them to be able to take what you showed them and create something new with it. Google is taking what we’re giving it in title tags and demonstrating a level of mastery by creating something new out of it. In other words, it’s one thing for Google to be able to understand content well enough to show quality results on the SERP. It’s another thing entirely for Google to understand the user’s query, the content on the page, the title tag, etc. to be able to recreate and retitle content. Google isn’t rewriting content in order to push up CTR at various ranking positions or to align the title link with the query to all sorts of absurd extents. This has nothing to do with you, your site, your page, your CTR—none of that. Google’s rewrites are about concretely demonstrating to itself how well it can understand content by being able to manipulate content. Replacing titles with H1s was just the start In fact, this is why (if you’ll recall) Google has taken a step back from relying on H1s for its rewrites. 
While Google initially used the H1 as the new title link 76% of the time, that number is again down to under 50% because Google never wanted to rely on the H1. Using the H1 doesn’t help Google to see if it can adequately rewrite content. Using the H1 to rewrite title tags was simply stage one of the objective. It was Google saying “Are we good at adequately replacing one snippet of content for another?” Once that was achieved, the next logical step (and the ultimate goal) was, “Can we rewrite content in order to replace what was there before?” Testing with the SERP as the proving ground Rewriting titles for the SERP is in fact the best way to gauge this as Google has a hard metric to measure it with—CTR. Google can measure and track how effective its rewrites are by comparing pre-title change and post-rewrite CTR. Rewriting titles offers Google a limited, controlled, and measurable way to dip its toe into the pool that is rewriting content. This is why Google showed no interest when SEOs asked for a way to override title rewrites and to implement the title tag instead. Why would it? If the entire point of the endeavor is having the perfect testing environment with which to rewrite content and measure results, why would Google give you a way to override that? How would Google enabling you to opt out help it determine just how good it is at rewriting content (which also speaks to how good it is at understanding content)? Simply put, Google rewriting title tags is less about the quality of title links or controlling the SERP and more about Google advancing its machine learning capabilities. Rewrites are the future of the SERP While the impact of Google rewriting a specific title on the SERP has limited implications, the notion of Google rewriting content is a serious moment in search. Although no one can predict what Google rewriting content may eventually mean, there is the potential for significant changes to what we now know the SERP to be. 
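Earlier, I noted that CTR gives Google a hard metric to grade its rewrites by, and that comparison is mechanically simple. A back-of-the-envelope sketch of a pre/post rewrite comparison; the daily `(clicks, impressions)` figures are made up, not real Search Console data:

```python
def ctr(clicks, impressions):
    """Clickthrough rate as a fraction; zero impressions yields zero."""
    return clicks / impressions if impressions else 0.0

def window_ctr(days):
    """Aggregate CTR over a list of (clicks, impressions) days."""
    total_clicks = sum(c for c, _ in days)
    total_imps = sum(i for _, i in days)
    return ctr(total_clicks, total_imps)

# Hypothetical daily figures around a title rewrite
pre_rewrite  = [(120, 4000), (95, 3800), (110, 4100)]
post_rewrite = [(150, 4050), (140, 3900), (160, 4200)]

before = window_ctr(pre_rewrite)
after = window_ctr(post_rewrite)
lift = (after - before) / before * 100
print(f"CTR before: {before:.2%}, after: {after:.2%}, lift: {lift:+.1f}%")
```

A real evaluation would control for seasonality and ranking changes between the two windows; the point here is only that the pre/post signal itself is cheap to compute.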
Some might speculate that Google rewriting content is an overstep (such was the sentiment when it briefly tested a featured snippet that merged content from multiple sources), but I don’t think this is the direction that it intends to take things. Whatever the case may be, Google jumping into the world of content creation (in a sense) almost certainly is a milestone to take note of. Mordy Oberstein - Head of SEO Branding, Wix Mordy is the Head of SEO Branding at Wix. Concurrently he also serves as a communications advisor for Semrush. Dedicated to SEO education, Mordy is one of the organizers of SEOchat and a popular industry author and speaker. Twitter | Linkedin

  • Google’s shifts from authority to content diversity on the SERP

    Author: Mordy Oberstein For a while now, I’ve considered Google to be a search engine with a strong bent on becoming what I think of as an “authority engine”—sure, it wants to provide results that you can visit to find the information you need, but at the same time, there’s a clear desire on Google’s part to become the go-to authority. Google wants to be the resource that people rely on not only to direct them to the information they’re seeking, but also to be the authority that actually provides that information. If you search for things like “what is the weather in nyc” or “yankee game score,” for example, you can find the answers without ever clicking on a single result. Google is not just a search engine—it’s an answer engine. To me, that’s ultimately why it’s the “authority engine.” But, I think that’s been changing recently and will change further. In this article, we’ll go over: Google’s transformation from result provider to knowledge authority Why Google became a knowledge authority How Google leveraged featured snippets for authority Google’s push to go beyond the authority dynamic What more content diversity says about Google and the web itself Google: From search engine to information provider to authority Fundamentally, a search engine is a facilitator. As a pure construct, a search engine does not provide you with any information, it merely leads you to sources that do. Sometime around 2015, Google changed that paradigm: In 2013, we saw the launch of a carousel presenting a listing of local establishments. In 2014, Google introduced the featured snippet, a box that appears at the top of the results presenting a snippet of information found on a web page, along with a link to that page. Then, in 2016, Google presented us with the direct answer, which does as it says—provides you with the direct answer to your question: In the years that followed, Google has only increased its investment in SERP features and the information they contain. 
Featured snippets started to take on multiple forms, including video featured snippets and a snippet that basically functioned as a direct answer. As time went on, Google’s Knowledge Graph vastly improved and all sorts of information became directly available in knowledge panels. Now, it’s to the point where these SERPs resemble what SEO veteran Dan Shure calls “micro-sites.” The point is, we have been living in an era where Google has turned the SERP and, subsequently, its own identity into something other than that of a facilitator. Google has become a very powerful provider of information. Google is no longer exclusively a search engine and it hasn’t been for a long time. It has become a knowledge provider and, in being a provider of information, it has become a knowledge authority. Why Google became a knowledge authority Many search marketers will tell you that Google started providing information directly on the SERP so as to keep users within its ecosystem. The prevailing theory is that Google wants to keep users on its properties, moving them from one set of results to the next, because it affords more opportunity for ad placements, which generate revenue for Google. Let’s run through an example: Say, I run a query for “yankee stadium.” I might (and did) get this: Notice the ads that dominate the left side of the results page. Now, if I scroll down a bit, I would see the “People also search for” feature at the bottom of the knowledge panel: I might then click on the option to scope out another New York stadium, Citi Field, only to move to a new SERP with new ads: The more information on the page, the more I stay on Google, the more opportunity to move me to another set of results, and the greater the chance I will eventually click on an ad. That’s the theory. Sure, this sort of construct will lead to more ad placement and more revenue, all other things being equal. 
What bothers me about this theory is that myopically engineering its user experience to drive engagement with search ads is not entirely in Google’s character. I’ve always found Google to play the long game. Take Google Search Console, for example: Instead of creating a data vacuum (that would surely be filled by third-party tools), Google offers free data that becomes the primary tool of all SEOs, thus allowing them to create better sites and content. That’s a long-term, “big thinking” play. Offering more entry points to more SERPs for the sake of more ad placement is not a long-term, “big thinking” strategy and, to me, isn’t how Google typically operates. So, why double down on providing information right on the SERP if not to keep users within its own ecosystem to generate more ad revenue? What’s the long-term play here? It’s authority. Google providing users with information directly makes it the authority and not the sites it would have otherwise facilitated. Providing others with knowledge—moving them from helplessness towards empowerment—is an extremely potent relationship. By sending you to other sites, all Google was doing was facilitating you feeling that powerful dynamic with whatever website you landed on. “Why shouldn’t we get in on that?” decision makers at Google might have thought. And they did. Google started to provide a slew of information, creating the association that it is directly the knowledge provider and the authority. That’s a very important association to create for a search engine. The entire idea of a search engine is that users trust them to provide a path to the most substantial information. What fosters that sense more than actually providing that sought-after information? In the business context, the logic was likely that users would be more inclined to return if they felt they could come to one place with expedited access to the information they were seeking, from a platform they trust as the purveyor of that information. 
Google decided to reinforce a far deeper and far more powerful latent association amongst its user base (i.e., as an authoritative knowledge provider) because doing so fosters a unique bond. This bond, in turn, creates and subsequently reinforces that latent notion that Google is where we should go for information. Google’s association with information (and not just information retrieval) urges users to seek out the platform for all of their information needs. The play here seems to be that the more users think of Google as the go-to source for information, the more they will return to the platform and the more ads Google can serve them. Google is a for-profit company. However, the move to show more information on the SERP itself (which may downgrade the urgency for clicks to websites) is not primarily about the immediate return on investment. It’s about Google creating a certain identity for itself so that, long-term, users will view the platform a certain way—all of which leads to increased use, which ultimately leads to more ad clicks. Google’s featured snippets have been a very important part of this construct. However, they are quickly moving away from being a part of the “pure authority paradigm” and perhaps it says a lot about the state of the world’s most popular search engine. How Google leveraged featured snippets for authority Google stores information about entities in its knowledge graph. This enables it to offer information without any connection to a URL. However, most of the information out there in the ether that is the web exists on web pages. This presents a bit of a problem for a platform looking to become the source of information. The solution? Featured snippets. Featured snippets enable Google to directly answer users’ queries while not actually owning the content. While still somewhat controversial, in many ways, it’s a win-win. 
Sites get their URL prominently displayed at the top of Google’s results which, in many cases, could positively impact clickthrough rate. In turn, Google gets to position itself as a knowledge authority by presenting a snippet of information on the SERP. How exactly does Google use the featured snippet to position itself as a knowledge authority if the content within it belongs to a specific website? For starters, the content belonging to a website and the content being perceived as belonging to a website are two different things. When content is displayed within the featured snippet, while the URL that hosts the content is present, it’s not exactly prominent. The content itself almost seems to exist separate from the URL, at least initially. Moreover, Google employs methods with which to directly answer the user’s question. One such method is bolding the most relevant text within the snippet: There is also a featured snippet format that is essentially a direct answer with an accentual snippet of content: For the record, I’m not saying Google is doing anything nefarious. Again, I think what you see here generally works for both Google and site owners. Moreover, the formats shown above give users what they want: quick and immediate access to information. But, featured snippets show Google moving beyond the authority dynamic It all sounds perfect: Google features your content prominently so that your sites earn more traffic. The search engine gets to position itself as the market leader, bringing in more searches and more potential ad revenue, and you get more clicks (in theory). It was all going so well until some folks spotted a few tests of the format of featured snippets. Google runs hundreds (if not thousands) of tests on what it shows in the SERP and how it shows it. What makes these limited tests noteworthy? The answer is diversification within the featured snippet.
Here’s the first test of featured snippets that got me thinking that things may be changing: Gone is the big bold lettering telling you the answer before you get to the snippet of content. Instead, there is a header that says “From the web.” Explicitly telling users that what they are about to read comes from sources across the web stands in sharp contrast to Google positioning itself as the author by using the featured snippet to directly answer the query. Moreover, if you start reading the actual snippet of content, not only do you see multiple URLs further diluting the featured snippets’ focus on authority, but the content itself addresses the query from different angles. Each section on the snippet (with its corresponding URL) is a unique review of a product. The content is not cohesive. It doesn’t all come together to form one flowing paragraph that represents the one true answer. This same concept is reflected in the second test of featured snippets that was discovered around the same time: In fact, in this instance, Google is explicitly sharing the “authority wealth” with a header below the main snippet that reads “Other sites say.” Coincidentally (or not), less than a month later Google was seen displaying a new feature termed the “More about” feature. Here again, Google presents a diverse set of content snippets attached to URLs. Seeing this live (not merely as a test to the SERP) made me think the sands have significantly shifted. This is interesting because the query that brings up the carousel (shown above) would be prime material for a featured snippet that rattles off a list of things you need to do in order to apply for a loan, much the way it does for most other queries of this nature, as shown below. Clearly, something has changed. For the record, it’s not as though the ability to focus on content and URL diversity within the featured snippet is a new development—Bing has been using this very model for its version of featured snippets for years. 
Furthermore, Google itself has been using this very model with its “Sources Across the Web” feature for a few years now. So, it’s not that Google couldn’t prioritize content diversity over content authority. Rather, it’s that it chose not to. This isn’t necessarily a good or bad thing—each construct has its own positives and negatives. What a shift towards more content diversity says about Google and the web itself Practically speaking, Google moving towards a more diverse showing of content and URLs within featured snippets could mean more potential traffic for sites. That is, at least, if you were not already the sole URL being shown within a specific snippet. More broadly, I think this shift represents how the web itself is maturing. Every time another CEO goes before Congress to discuss data privacy, more people become more skeptical about what’s out there in the digital sphere. Semrush data indicates that Google searches related to data privacy are up over 100% since 2017. This is an important part of the maturation of the web. Relying on a tech provider such as Google for the one true answer stands in contradistinction to this maturation process. User skepticism can be, and currently is, integral to a healthier web. While full-on authority may have been what garnered trust in the past, it’s my belief that Google realizes that there needs to be a stronger element of transparency in the mix. Again, this speaks to how we as online content consumers are “wising up” and pushing for a safer, more mature web. Google’s departure from positioning itself as an authority by presenting users with the “one true answer” speaks to how it, as the leader in search, sees the state of the web and the state of those who use the web. It’s a marked shift in what it means to be a healthy operator and leader within the digital space. Moreover, there’s increased pressure on Google to get it right.
Recently, an increasing number of major publishers have questioned Google’s ability to serve quality results (take this article from the New Yorker as just one example). Showing a more diverse set of content within its answers helps to portray Google as providing a better and more accurate content experience. Whereas in the past, Google may have been better served by providing “the” answer, today’s user is more receptive to having a more holistic and well-rounded set of content (and is fundamentally better served by it). For the record, I think Google is ahead of the curve. Since about 2018, it’s released a set of algorithm changes (referred to in the SEO industry as “core updates”) that I believe have injected new abilities to discern good content from bad. Meaning, Google has long been aware that user expectations around content are shifting and that it needs to move quickly to meet those expectations. What’s happened in the more recent past, at least as I see it, is that people have become rapidly aware that the content out there on the web needs to improve (again, something Google has realized in a substantial way since 2018). At this juncture, the awareness of the user base around the lack of quality content has outpaced Google’s ability to sift such content out of the results. Simply put, we’re far more aware of the lack of quality content on the web and are looking to Google to handle the problem without considering how far Google has come in this regard and without fully appreciating that much of the fault is on content creators, not just search engines. Google’s recently announced Helpful Content update echoes this sentiment, recommending that content creators evaluate whether “the content is primarily to attract people from search engines, rather than made for humans.” In any regard, Google providing a more well-rounded set of answers creates a sense of topical transparency and therefore quality.
Expect more content diversity on the SERP in the future Is Google going to kill off the featured snippet as we’ve known it? No, I don’t think so. Having one snippet of information can be quite useful both for how the search engine wants to position itself and to users looking for information (especially factual information). Sometimes you do just want a quick answer. But, there will be an increase in instances of multi-sourced and multi-perspective features on the SERP. The Google results will, inevitably, contain an increasing number of features that give users entry points to multiple sources of information around a given topic. Doing so helps optics. It also speaks to how Google’s algorithm around topics functions. Most importantly, doing so is simply good for people in search of information. *Disclaimer: Mordy Oberstein is not associated with Google and Google was not involved in the writing of this piece. Mordy Oberstein - Head of SEO Branding, Wix Mordy is the Head of SEO Branding at Wix. Concurrently he also serves as a communications advisor for Semrush. Dedicated to SEO education, Mordy is one of the organizers of SEOchat and a popular industry author and speaker. Twitter | Linkedin

  • How stable are Pinterest rankings and traffic? [Study]

Author: Mordy Oberstein Pinterest is an organic powerhouse. Each month, the millions of keywords it ranks for bring in over a billion site visits from Google. It’s no surprise that, for many, leveraging Pinterest to bring visitors to the images they're hosting on the social media platform is vital. This is why, more often than not, whenever a large Google algorithm update rolls out, some of the analysis that gets done will inevitably mention Pinterest and its organic market share. But, how much of a force is Pinterest really? While the domain is clearly a juggernaut, what does that mean for individual users hosting content on the platform? More specifically, what I want to know is this: how consistent are the rankings (and by extension, the organic traffic) of a specific Pinterest asset? The problem: Pinterest URL swapping on the SERP Before diving into the data, let me explain the problem: As mentioned, Pinterest garners a lot of traffic from Google. The issue is that, unless you’re Pinterest, you don’t really care about that per se. What you, as a creator on Pinterest, care about is how much traffic Google can drive to the specific assets that you host on Pinterest. At first glance, this doesn’t even seem to be a question. Pinterest pulls in an incredible amount of traffic from Google Search as, for many types of queries, the SERP is littered with Pinterest URLs. The problem, however, is this: What you’re looking at above is Google essentially swapping out different Pinterest URLs within the same ranking position vicinity. When I saw this, it made me wonder: how stable is a ranking Pinterest URL? How often is Google swapping out one Pinterest URL for another? Because when I started to dive in, what you see above seemed to be a pattern. That is, Google seems to give Pinterest a ranking slot on the SERP and oscillates between showing various Pinterest URLs within that slot.
So, I’ll ask the question again: how potent is Pinterest in terms of bringing in traffic via search to your specific assets if it seems that Google is relatively quick to swap out various Pinterest URLs? Pinterest URLs & Google ranking: Methodology and limitations The Semrush data team analyzed 1,487 keywords on desktop and another 1,425 keywords on mobile in order to see how often Google is swapping out Pinterest URLs on the SERP. Only keywords that displayed a Pinterest URL with an average rank of 10 or better were considered. The team then analyzed how many times one of these URLs for the given keywords was being swapped for another Pinterest URL. What, however, is the definition of a URL swap in this instance? If a specific Pinterest URL was ranking #7 for a keyword and then moved to rank #10, while a new Pinterest URL began ranking at position #3, is that a swap? What if a Pinterest URL was ranking #8 and then no longer ranked top 10 at all, only to have another Pinterest URL begin to rank at position #10—is this a swap? For the purposes of this study, anytime a Pinterest URL stopped ranking among the top 10 results on the SERP and another Pinterest URL started ranking top 10, it was considered to be a swap. Now, based on the patterns I’ve seen and as shown in the images above, generally speaking, Google gives a certain slot—or in some instances, slots—to Pinterest. The URLs that Google then swaps fall within a certain range of ranking positions. Thus, it makes sense to consider one Pinterest URL as being swapped for another, even if they are not at the same exact ranking position. However, as noted above, this study includes any instance of swapping, even if the two URLs fall in different ranking position “ranges.” This is simply a limitation to note. Also, approximately 1,400 keywords per device is not a small sample. At the same time, it is not as if a million keywords were analyzed. This, too, is something to consider.
Similarly, the data collection covered a period of 30 days. These days were chosen because, as a continuum, they reflected days of average volatility (so as to increase the accuracy of the data) but, all in all, a longer period could, in theory, yield different results. With that, let’s get to the data itself. How consistent are Pinterest URL rankings on the Google SERP? Just 43% of the keywords studied presented the same Pinterest URL on the desktop SERP over the entire course of the 30-day data period. Meaning, the other 57% of the time, Google is not using the same Pinterest URL on the SERP over the course of the month. On mobile, that share jumps up to a full 60%. Pinterest URL diversity on the SERP is the norm, which means you should, as a rule, expect your ranking Pinterest URLs to be replaced on the SERP at some point. In other words, volatility is the rule rather than the exception when it comes to specific Pinterest URLs ranking on the SERP (again, Pinterest as a domain is very consistent, but we’re concerned with specific creators here, not the platform). The question is, how volatile are specific Pinterest URLs on the Google SERP? To phrase it another way: How many unique Pinterest URLs is Google utilizing over the course of a month? Is your Pinterest pin or board and its URL sharing the SERP with just one other Pinterest URL? What’s the organic market share like for specific Pinterest URLs on the SERP? According to the data, Google swaps Pinterest URLs an average of six times per month and utilizes three unique Pinterest URLs when doing so. In other words, you can expect to share the SERP with two Pinterest URLs other than your own each month. What’s more, you can also expect your URL to be swapped an average of two times per month.
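The study's swap definition, and the per-keyword aggregates derived from it (swap count and unique URLs), can be expressed in a few lines given day-by-day top-10 snapshots. A sketch under my own reading of the definition; the daily data is invented for illustration:

```python
def count_swaps(daily_top10):
    """Count Pinterest URL swaps across consecutive days.

    daily_top10: list of sets, each holding the Pinterest URLs ranking
    top 10 that day for one keyword. A swap (per the study's definition,
    as I read it) is one URL leaving the top 10 while another enters.
    Returns (total swaps, unique Pinterest URLs observed).
    """
    swaps = 0
    seen = set(daily_top10[0])
    for prev, curr in zip(daily_top10, daily_top10[1:]):
        dropped = prev - curr
        entered = curr - prev
        swaps += min(len(dropped), len(entered))  # pair each exit with an entry
        seen |= curr
    return swaps, len(seen)

# Hypothetical observations for one keyword (3 sample days shown)
days = [
    {"pinterest.com/pin/A"},
    {"pinterest.com/pin/B"},                        # A out, B in -> 1 swap
    {"pinterest.com/pin/B", "pinterest.com/pin/C"}, # C in, nothing out -> 0 swaps
]
print(count_swaps(days))  # (1, 3)
```

Pairing each exit with an entry via `min(len(dropped), len(entered))` is my interpretation of the "one URL out, another in" rule; the actual Semrush analysis may tally swaps differently.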
For creators relying on organic traffic from their Pinterest uploads, that’s not exactly a picture of stability, and it stands in sharp contradistinction to our a priori understanding of Pinterest from a domain perspective.

When Google swaps Pinterest URLs: Patterns and observations

Big data is great, and the insight it affords can indeed be illuminating. Still, I typically find that there’s a level of nuance that can only be surfaced by looking at specific instances. With that in mind, let’s dive into some of the patterns I noticed while analyzing specific cases of Google swapping Pinterest URLs on the SERP.

Simultaneous consistency and volatility among Pinterest URLs on the Google SERP

While the data does show Google has a propensity to swap the Pinterest URLs it ranks on the SERP, this volatility does at times coincide with stability. Specifically, there is a pattern where Google will show one Pinterest URL consistently on the SERP within a position range for the entire course of a 30-day period (perhaps longer, but I only looked at a 30-day period). At the same time, Google may also rank a secondary Pinterest URL at a slightly lower ranking position. This is exactly the pattern seen in the example below:

The URL represented by the purple line consistently ranked on the SERP over the entire 30-day period analyzed. Below it, represented by the yellow, pink, and orange lines, was a secondary Pinterest slot on the SERP where Google oscillated between three different Pinterest URLs (or no secondary Pinterest URL at all, depending on the day).

Practically speaking, it is entirely possible to experience significant volatility while tracking one of your Pinterest URLs, while another Pinterest URL sees relative stability for the same keyword on the SERP. In terms of real numbers across the dataset we tracked, 50% of the time Google showed two Pinterest URLs on the SERP simultaneously.
There is overlap, and a good amount of it: While there are days when Google truly swaps one Pinterest URL with another, there are also days when Google might show both URLs, only to remove one of them a day or two later.

Search intent when Google ranks Pinterest pins and boards

It is possible that, even though your Pinterest URL for your particular pin is being swapped, the Pinterest URL that replaces yours also contains your pin. This is because Google doesn’t merely swap a Pinterest URL to a specific pin with another URL to a different pin. Rather, Google sometimes replaces a URL to a specific pin with a collection of pins (a Pinterest board).

For example, take the keyword mens ring ruby, which (as shown earlier in this article) reflected multiple instances of Google swapping Pinterest URLs. In one case, Google swaps a link from this specific pin:

To a collection of pins, as seen here:

It is possible that the specific pin shown previously can appear in the collection above. However, even if that were the case, a link to your specific pin is obviously of greater value.

Take the instance below, for example: The dominant Pinterest URL is to a board (you can tell by the URL structure, just for the record). There’s a secondary URL it tests out (reflected in the orange line), which is considerably less consistent than the board shown in purple.

The same can be seen in the rankings for the keyword combat workout: Yes, Google does experiment with an alternative Pinterest URL, but both reflect boards, not specific pins.

The same goes for the keyword wooden family tree, but in the inverse: Google experiments with multiple Pinterest URLs on this SERP, all of them pins, none of them boards:

For whatever reason, it seems Google sees the intent of the keyword as either being relevant for a specific Pinterest pin or, the opposite, that the user would be better served with a Pinterest board.
The types of keywords predisposed to more Pinterest URL swapping

Some keywords are subject to Google swapping two Pinterest URLs just once on the SERP each month, while some see Google swapping five or six URLs back and forth, perhaps ten times over the same period. Why? Why do some keywords see so much “Pinterest URL swapping” while others don’t?

It’s hard to determine a definitive reason here. In fact, it’s impossible to say unless Google itself released a statement as to why. However, there are some patterns within the dataset that may possibly explain why some keywords lend themselves to more Pinterest URL swapping than others. While I’m not privy to the exact thinking around what about each keyword lends itself to one intent over the other, it is interesting to see how specific Google is here.

The most notable trend, although it does not account for all instances, is that the more obscure the “item” represented in the keyword, the fewer swaps. For example, the following keywords saw either one or two Pinterest URL swaps:

Dollar tree decorations
Puppet makeup
Manor lake australian labradoodles
Laundry room storage
Screaming needle

I would imagine that the more obscure the reference, the less content with which to conduct the URL swapping. Conversely, the keywords below saw between 10 and 15 swaps:

World map watch
Silver bengal cat
Vintage windbreaker jacket
Brick paint colors
Old lady costume

Again, the more mainstream the item is, the more Pinterest content at Google’s disposal with which to execute the swaps (all other things being equal). Is this 100% why certain keywords exhibit less stability with their ranking Pinterest URLs? No, there are many instances within the dataset that contradict my point above. However, again, there does seem (at least to me) to be a pattern where more obscure sorts of keywords tend to exhibit less Pinterest URL swapping.
Pinterest URL consistency inside SERP features

Pinterest URLs can be a real factor inside of Google’s various SERP features. Similar to the analysis above, the Semrush team pulled some data related to Pinterest URL consistency within two prominent SERP features: featured snippets and image packs.

Pinterest URL consistency: Featured snippets

Believe it or not, Pinterest URLs are used in featured snippets. In the US alone, the domain has earned featured snippets for 9,400 keywords.

Within the smaller dataset we analyzed for this study, there were no featured snippets that contained a Pinterest URL for the entire 30-day period. (Again, that is a number to take with a grain of salt, as the dataset here is somewhat limited in that it reflects about 1,500 keywords and not all of them will generate a featured snippet.)

Still, when Pinterest URLs were used within the featured snippet, the swapping continued. When Google displayed a Pinterest URL within a featured snippet at least once over the 30-day period, the search engine utilized (on average) four other URLs over the same period (for a total of five different URLs seen within the featured snippet on average over the data period). However, not every URL Google swapped in these instances was from Pinterest. Of the five URLs Google used within these featured snippets over the 30-day period, 56% of them were Pinterest URLs.

So, while Google tends to give Pinterest a ranking slot (or two) on the SERP and oscillates between various Pinterest URLs in these slots, this is not the case for featured snippets—at least not to the same extent. With featured snippets, Google is not locked in to giving Pinterest (as a domain) the slot and merely swapping various Pinterest URLs. Rather, Google only replaces one Pinterest URL with another Pinterest URL just over half of the time.
For the record, on average, it would appear that each of the five URLs gets about two “spots” in the featured snippet, as we noticed that Google swapped the URLs 12 times over the 30-day data period. That is, the same five URLs (just over half of which were Pinterest URLs) constituted a total of 12 different URL swaps over the 30-day data period. Meaning, Google used a URL in the featured snippet, swapped it with another, and then reused the already displayed URL again at some point (as a rule).

Pinterest URL consistency: Image packs

As is to be expected, one of the more prominent places that Pinterest URLs can appear is within Google’s image pack. Accordingly, the Semrush team also pulled data on how often Google was swapping Pinterest URLs inside the image pack.

To start, the average image pack includes links to 13 URLs on desktop and 10 on mobile. Of those URLs, only 13% come from Pinterest on desktop and just 9% on mobile. Google seems to have swapped these Pinterest URLs 13 times on desktop and 15 times on mobile over the course of the 30 days.

Note: This doesn’t indicate whether Google is swapping Pinterest URLs within the SERP feature more often than it does for URLs from other domains.

Why so much swapping?

Why isn’t there a more consistent showing on the SERP for Pinterest URLs? Clearly, I cannot offer a definitive answer—I am not Google. All I can do is offer my best theory.

To me, this is all about the nature of images and intent. If you recall, Google, on average, executes six Pinterest URL swaps for keywords that display a Pinterest URL among the top 10 results. That number more than doubles when you look at the image pack, where Google executes 13 swaps (again, this is the number of total swaps, not unique URLs used for swapping). Moreover, while I don’t have specific numbers, the Semrush team did mention that image pack URLs are often moved around in terms of position and even entirely removed from the SERP feature.
To me, this tells a lot of the story. Google sees images as being “dynamic.” Whatever the reason, Google tends not to treat images in a static way on the SERP when possible. Personally, I think this is because there are so many varieties and variations to the images that reflect a given product or topic. Having a limited and fixed set of images to reflect the topic or product doesn’t align with the very nature of visual representation, which is often nuanced and extremely varied. Think about the images Google shows for “Batman”—if it went with the same five images for all eternity, that would not reflect the diverse ways the topic can be visually represented. This comes into play on the main SERP, as Google has limited space there to show images (as opposed to Image Search per se).

From a search intent point of view, Pinterest URLs are present to serve as access to images. They’re a way to provide users with an entry point to see an image that aligns with the search query they entered. If we think about a ranking Pinterest URL as an image on the SERP (instead of as an organic result), then you can make sense of why there is so much volatility: Google is treating the URLs within the organic results much the way it treats images in an image pack.

This might be why we generally don’t see the same pattern with Amazon. Google is not showing one specific Amazon product URL one day and a different one the next. Google simply shows a URL to a set of Amazon results—not so with Pinterest. In the chart below, while Amazon ranks with one consistent URL, Google swaps a variety of URLs to specific Pinterest pins over the course of the month:

Why? I think it’s because Google treats Pinterest URLs like images. And images need diversity, not stale, static, and therefore generic representation.

Pinterest rankings need qualification

Tracking rank sounds easy, but it’s not. Doing it in a way that makes good sense and doesn’t end up producing a bit of a vanity metric can be hard.
All the more so when trying to define the organic performance of your Pinterest pins on the SERP. Seeing that your pins or boards rank well at a given moment, based on what we’ve seen above, is not enough. In these cases, you simply can’t sit back and assume traffic is coming in because at a specific moment in time your Pinterest URLs rank well (not that you really ever should have such a mindset).

As we’ve seen, there is an unusual amount of volatility with Pinterest URLs on the SERP. Taking that into account when assessing performance, reporting, and certainly when predicting future performance is highly recommended.

Mordy Oberstein - Head of SEO Branding, Wix

Mordy is the Head of SEO Branding at Wix. Concurrently, he also serves as a communications advisor for Semrush. Dedicated to SEO education, Mordy is one of the organizers of SEOchat and a popular industry author and speaker.

Twitter | Linkedin

  • What AI content generators mean for the future of search

Author: Mordy Oberstein

How the web—and search engines in particular—handle the proliferation of AI-written content (as sparked by OpenAI’s breakthroughs with ChatGPT) is the central question facing SEOs and content creators at the moment. Foretelling how AI-written content will shape the future of search and SEO is multi-faceted, in that it includes both how search engines algorithmically handle that content and how search engines themselves will incorporate AI chat functionality into their ecosystems. Both of these issues are complex in their own way and, at this point, no one has all the answers. The best I can do is apply my SEO outlook—with respect to how search engines have evolved—to the situation at hand in order to describe what I see as Google’s inevitable reaction to AI-written content and AI chat technology.

Table of contents:

The problem AI-written content poses for the web
Google’s inevitable response to AI-written content
The potential emphasis on domain-level metrics
Concerns for SMBs and ranking in the era of AI-written content
How Google can combat the over-proliferation of AI content
AI chat technology on the SERP

The problem AI-written content poses for the web

In my opinion, the place to start this exploration is not within the scope of SEO. Rather, I think we should examine the (perhaps obvious) problem AI-written content presents to the web, as web content is the clay that search engines use to mold their result pages. Before anything else, it’s vital to understand just how powerful AI content generators are—and I don’t mean the power and proficiency of the technology per se. Rather, the power AI content generators have to solve a very serious pain point for most of the web: content is hard.

If you come from a content background like myself, it’s sometimes difficult to appreciate just how hard it is to create “high-quality content.” Content creation is really an art form.
In my humble opinion, creating strong content relies on both the ability to connect to the recesses of your persona(s) and the ability to then deliver that connection in a scaffolded, methodological, and engaging manner, all at the same time. Content requires profundity and the unique ability to distribute that profundity into digestible chunks. At the same time, content is the lifeblood of the web. Without content, there is no such thing as a website.

What a predicament for the average site owner: In a way, when a search engine like Google asks a site owner to “create good content,” it’s an unreasonable request. Being able to create a substantial series of content to fill a website is a unique skill. Just like the average person probably can’t change a car transmission, they also can’t create professional-level content. We get fooled into thinking this is a possibility because everyone can read and write. Being able to write and being able to create content are not one and the same. A site owner who first starts dipping their toes into content creation will quickly realize just how tough and time-consuming it is.

For the record, I’m not saying that the average site owner can’t create enough content to put forth a substantial website. What I am saying is that there is a ceiling here. In addition, the amount of time it takes to create content can be prohibitive for many site owners. So even if a site owner is a fabulous writer, what are the chances that they’ll have the time to create content? It’s a time-consuming process even for the best of us. (Parenthetically, and with much bias, this is one of the great advantages of a platform like Wix in that it frees up time to focus on content creation; this is why, regardless of my bias, the platform represents the future of the web in a certain regard.)

And now we arrive at an inflection point: AI content generators seemingly solve both of these pain points.
They certainly save time, thereby making the content creation process more accessible, and they ostensibly spin up pretty decent content to boot. The temptation is real. To the unsuspecting person, AI content generators open up a huge world of possibilities and cost savings. In other words, the pain and struggle of content creation are so significant, and the possible solution AI content generators present is so strong, that the web will inevitably become filled to capacity with AI-written content.

The problem, of course, is that AI-written content is not a panacea and is in many cases a gateway drug to the proliferation of thin, inaccurate, and unhelpful content. One could argue the web is already overfilled with such content. That’s why Google has done everything from releasing its “new” set of core updates that began in 2018 to the Product Review Updates in 2021 to the more recent Helpful Content Update. However, with the newfound capability and accessibility of AI content generators, there is going to be a proliferation of this sort of content unlike anything the internet has ever seen. It will be such a proliferation that Google will have to respond, because if it doesn’t, it faces criticism from discerning users over the irrelevance of its results. The question is, how will Google solve this inevitability?

Google’s inevitable response to AI-written content

Let me propose a wild scenario: What if every web page whose content answers the question what are snow tires? was created by AI? What if we went really wild with this hypothetical and said all of the content AI content generators spun up to answer this question was more or less the same? (Now that I put this into writing, the latter doesn’t seem that wild.)

In such a scenario, if someone went to Google what are snow tires?, how would Google know what page to rank if all of the content out there was of nearly identical quality? If all snippet-level content is equal, then what will rank at the top?
In a world that may very well be driven by AI-written content, this scenario (while hyperbolic) isn’t that far-fetched. How much value does human experience lend to snippet-level topics that have been answered across the web a thousand times over? What new insights are you going to add to a static topic like what are snow tires? that haven’t already been offered before? Snippet-level content has the potential to be a great fit for AI content generators, assuming the technology allows for topical accuracy.

So flash forward five years, when all of this content will (in theory) be written by AI—how does Google decide what to rank for the query what are snow tires? (or whatever snippet-level query) when all of the snippets are relatively the same?

AI-written content and the search engine’s emphasis on domain-level metrics

The problem I laid out above, to borrow an SEO cliche, is “old, not new.” The truth is, AI-written content amplifies the quality conundrum that already faces the web. There is a proliferation of mediocre content on the web today. The web is, unfortunately, full of fluff content that is more concerned with ranking, traffic, or whatever acquisitional metric than with helping its target audience. Google has the same problem with this content as it does with AI-written content. A less-than-stellar site owner can spin up a lot of snippet-level content without exerting a ton of effort as, again, the nature of this content isn’t exactly prolific.

It’s for this reason that “quality” has long been a domain-level metric for Google. (For the record, it’s a long-standing myth among SEOs that Google doesn’t rank sites and instead only ranks pages.) Meaning, if the pages on a site for snippet-level content are of “adequate” quality but the other pages that target deeper content needs are not, the performance of those snippet-level pages would be negatively impacted (all other things being equal).
This concept culminated with the advent of the Helpful Content Update, which, according to Google’s own documentation:

“…introduces a new site-wide signal that we consider among many other signals for ranking web pages. Our systems automatically identify content that seems to have little value, low-added value or is otherwise not particularly helpful to those doing searches.”

This issue really comes into focus within niche topics or once you begin to move past surface-level understanding. To me, this is why Google explicitly focuses on sites that don’t offer a level of nuance and depth in their content. When explaining what sites should avoid (within the context of the Helpful Content Update), the search engine asks content creators:

“Did you decide to enter some niche topic area without any real expertise, but instead mainly because you thought you'd get search traffic?”

Simply put, ranking snippet-level content (that doesn’t really vary from one site to the next) is contextualized by how the site handles content that should be very differentiated. The performance of snippet-level content that is easily churned out doesn’t exist in a vacuum, but is dependent on how well you handle more niche topics and subject matters that require more nuance and expertise. In other words, ranking is a semantic proposition. What you do on the site, as a whole, impacts page-level performance.

And it’s not just a matter of “quality” in the sense that the pages don’t present a negative user experience (i.e., intrusive interstitials or pages filled to the brim with ads). Quality is far more holistic than that and far more semantic than that. Quality, with regard to Google’s algorithms, very much overlaps with relevance. Google doesn’t consider it to be a quality experience if the user is bombarded with all sorts of topics that are not interrelated when navigating through a website (and rightly so).
Is it really a quality site or a quality experience if the user encounters a lack of topical cohesion across the site? A website should have an “identity,” and it should be very clear to the user what the function of the site is and what sort of information they might expect to find on it. Don’t take my word for it; here’s what Google again advises site owners to consider when looking to avoid being negatively impacted by the Helpful Content Update:

“Does your site have a primary purpose or focus? Are you producing lots of content on different topics in hopes that some of it might perform well in search results?”

Having a strong content focus (and the quality of that content itself) sends signals to Google about the entire domain. This is the answer to the AI-written content conundrum: Domain-level metrics help Google differentiate sites for ranking when the content at the page level is generic. Google will inevitably double down on these domain-level signals, as it already has with the Helpful Content Update.

To apply this to our original construct (where all of the snippet-level content answering what is a snow tire? is written by AI and is therefore relatively indistinguishable), ranking this content will involve looking not just at the content on the page, but at how the site deals with the topic across the board. If two pages have basically the same AI-written content about what a snow tire is, Google will be forced to look at the strength of the domain itself with regard to snow tires. Which site has a prolific set of content around snow tires? Which has in-depth, niche knowledge? Which site goes down the snow tire rabbit hole, and which site has a heap of snippetable AI-written content?

Parsing out the quality of the domain is how a search engine will function in spaces where AI-written content has legitimately made generic all of the content that answers a top-level query—which makes a great deal of sense. Don’t look at the query what is a snow tire?
simply as someone looking to get an answer to a specific question. Rather, zoom out. This person is looking for information about snow tires as a topic. Which site then makes the most sense for them to visit: a site that has a few pages of generic information about snow tires, or a site that is dedicated to diving into the wonderful world of tires?

Domain-level metrics also make sense without the problem AI-written content poses. All AI-written content does is make this approach an absolute necessity for search engines, forcing them to place the construct at the forefront of how they operate. In an era of the web that will be saturated with content that lacks a certain degree of differentiation (i.e., AI-written content), what you do on the other pages of your site will increasingly be as important as what you do on the page you want to rank. Google will, in my opinion, differentiate that which can’t be differentiated by redoubling its focus on domain quality.

My concerns for SMBs and ranking in the era of AI-written content

What worries me the most about the above-mentioned ranking construct (i.e., one that is heavily weighted on the strength of the domain overall) is that it might leave smaller sites in a bit of a pickle. A site competing for a snippet-level query with AI-written content (similar to all of the other AI-written content around that topic) will, according to what I’m proposing, be able to rely on the strength of its other content to rank. A large site with an efficient content team that has created all sorts of topical clusters related to the overarching topic should thrive, all other considerations being equal. However, a smaller site (typically run by a smaller business) does not have those resources. So while they may have strong content, they may lack quantity.
In a world where semantics rule the ranking day, such sites would (in theory) be at a disadvantage, as they simply would not be able to create the strong semantic signals needed to differentiate the pages that target snippet-level queries. One could argue that, given the current ecosystem, this is already a problem. While there might be something to that, if Google increases its emphasis on domain-level metrics, the problem will only increase exponentially—in theory.

Experience and language structure: How Google can combat the over-proliferation of AI content

What about cases where content is not predominantly generated by AI? Even if most snippet-level content is eventually spun up by AI content generators, that still leaves many content categories that are probably not best served by this method of content creation. How would Google go about handling AI-written content where the content demands more nuance and depth, and is a bit more “long tail” in nature?

Again, I don’t think search engines will be able to ignore this problem, as AI content generators “solve” an extreme pain point that will inevitably lead to mass (and most likely improper) usage. Google will have to figure out a way to “differentiate” AI-written content if it expects to keep user satisfaction at acceptable levels. The truth is, we may have already been given the answer.

In December 2022, Search Engine Journal’s Roger Montti reported on a research paper that points to Google being able to use machine learning to determine if content was written by AI (sort of like turning the machines on the machines). Of course, we don’t know for sure how (or even if) Google deploys this technology, but it does point to a logical construct: language structure can be analyzed to determine if an author is likely human or not.
This is fundamentally the basis of a plethora of tools on the market that analyze chunks of text to determine the likelihood that it was constructed by AI (Glenn Gabe has a nice list of the tools you can use to determine AI authorship). The language structures humans tend to use contrast sharply with the language structures employed by AI content generators. The schism between the two is so deep that a number of companies make a living analyzing the difference and letting you know what content has and hasn’t been written by humans.

This is precisely what a tool called GLTR did with a section of this very article below: Notice all of the words in purple and in red—these indicate the content was written by a human, which it was (me). Now compare that with something ChatGPT spun up about how Google will rank AI-written content: There’s a far lower ratio of red and purple wording, indicating that this was written by AI.

Language structure cannot be overemphasized when differentiating human-written content from AI-written content. Should you use AI to spin up a generic piece of “fluff” content, you are far more likely to create something that seems like it was not written by a human.

Going forward, I see Google differentiating low-quality content and AI-written content (which runs the risk of being low quality) by examining language, as that is exactly what machine learning algorithms do: profile language structures. Profiling language constructs is very much within Google's wheelhouse and is a potentially effective way to assess human authorship and quality overall. What’s more, this perfectly aligns with both the extra “E” (for experience) in E-E-A-T and Google’s guidance around the Product Review Updates, both of which focus on first-hand experience (as in, that which AI cannot provide). How can Google know if you have actual experience with something? One way is language structure.
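For illustration only: tools like GLTR ask a language model how predictable each token is, which requires a real model. The toy heuristic below is a crude stand-in (a common-word ratio of my own invention), not anything GLTR or Google actually uses; it merely shows the general idea of scoring text by how expected its word choices are:

```python
# Toy heuristic only: real detectors rank each token by a language
# model's predicted probability. A common-word ratio is a crude proxy
# invented here for illustration.
COMMON = {
    "the", "is", "a", "an", "of", "and", "to", "in", "that", "it",
    "for", "on", "with", "as", "this", "are", "be", "was", "by", "at",
}

def common_word_ratio(text):
    """Return the fraction of words drawn from a small high-frequency
    vocabulary. Higher ratios mean more 'predictable' word choices."""
    words = [w.strip(".,!?;:\"'()").lower() for w in text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    return sum(w in COMMON for w in words) / len(words)
```

A genuinely predictive detector would replace the fixed word set with per-token probabilities from a trained model; the point here is only the shape of the computation.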
Imagine you were reading reviews of the best vacuum cleaners on the market, and in describing these “best” vacuum cleaners, one page wrote, “Is great on carpet,” while another page wrote, “great on carpet but not for pet hair on carpet.” Which of these two pages was probably written by someone who actually used the darn thing? It’s obvious. It’s obvious to us, and it’s also not far-fetched to think that Google can parse the language structure of these two sentences to realize that the modification of the original statement (as in “but not for pet hair on carpet”) represents a more complex language structure, one more closely associated with text based on actual first-hand experience.

Aside from domain-level metrics, language structure analysis, to me, will play a vital role in Google determining if AI wrote the content and if the content is generally of sound quality.

The integration of AI chat technology into the SERP

Let’s talk a bit now about the other elephant in the room: the integration of AI chat technology into search engine results. Clearly, search engines will integrate AI chat experiences into their result pages. I say this because, from Bing to You.com, they already have. Google (at the time of writing this) has indicated that AI chat will be a part of its ecosystem with the announcement of Bard. The question is, what will these systems look like as they mature, and how will they impact the ecosystem? More succinctly, will AI chat on search engines be the end of all organic traffic?

Understandably, there’s been a lot of concern around the impact of AI chat experiences on organic clicks. If the search engine answers the query within an AI chat experience, what need will there be for clicks? There’s a lot to chew on here. Firstly, for top-level queries that have an immediate and clear answer, the ecosystem already prevents clicks with Direct Answers (as shown below). Is this Google “stealing” clicks? Personally, I don’t abide by this view.
While I do think there are things that Google can improve on to better the organic ecosystem, I don’t think abolishing Direct Answers is one of them (also, every ecosystem has areas for improvement, so don’t take my statements here as being overly critical in that way). I think the web has evolved to the point where users want to consume information in the most expeditious manner possible, and Direct Answers fill that need. To the extent that AI chat features within search prevent clicks, we need to consider this dynamic as well. Is the chat stealing clicks or simply aligning with the user’s desire to not have to click to begin with? If it’s the latter, our problem as SEOs is with people, not search engines.

However, because of how frequently users might engage with AI chat features, including organic links for the sake of citation is critical to the health of the web—both in terms of traffic incentives and in terms of topical accuracy. It’s vital that users know the source of the information presented by the AI chat feature so that they can verify its accuracy. I’ve seen many instances where these chat tools present out-of-date information. It’s really not that hard to find at this point, so including citations is key (what would be even better is if the providers pushed their tools to be more accurate, but hopefully that will come with time, as we are still in the infancy of this technology’s application).

Take this result from Neeva’s AI chat feature as an example: The result implies that the Yankees have a good defensive shortstop (the player who stands between second and third base). This was true…at the start of the 2022 season, as indicated in the first citation within the chat’s response: Fast forward to the end of the season and there were many concerns about one particular player’s defensive abilities: At least with citations, a user might be clued into the potential issues with the results (again, the better path would be for AI chat tools to evolve).
The point is that citations are very important for the health of the web, both because they contextualize the answer and because they enable a site to receive traffic. This is even a point that Bing acknowledged in its blog post outlining how its AI chat experience functions: “Prometheus is also able to integrate citations into sentences in the Chat answer so that users can easily click to access those sources and verify the information. Sending traffic to these sources is important for a healthy web ecosystem and remains one of our top Bing goals.” I’m actually very happy to hear Bing say that. I think a lot of the organic market share has to do with how the search engines decide to present their chat results. In Bing’s case, the AI chat feature sits to the right of the organic results (on desktop)—not above them. My eye initially sees the organic results, then the summary from the AI chat feature. You.com, for example, makes you move from the initial organic results to a specific tab and then places organic results to the right of the chat box. Search engines will need to be responsible in how they present their AI-produced content so as to maintain a healthy and functioning web. And again, a lot of that does not come from the functionality per se, but from how these search engines go about accenting the AI content with organic opportunities. As AI chat ecosystems evolve, more opportunities for clicks to sites might exist. Personally, I don’t think the novelty of these tools is in their function as a direct answer. For that, we already have Direct Answers and featured snippets. The novelty, to me at least, is in their ability to refine queries. Look at the conversation I had with You.com’s chat feature about pizza in NYC: Here, the lack of URLs within the chat was a major limitation to my overall user experience. I think the example above (i.e., query refinement) is where users will find chat tools more useful, and presenting relevant URLs will be critical. 
To be fair, there are organic results to the side, but I would have much preferred (and even expected) the chat to offer me two or three curated URLs for local pizza places that fit my criteria. Parenthetically, you can see how this format might wreak havoc on rank tracking (should URLs be placed at each stage of the chat). How valuable is ranking at the top position when there is a URL within a citation provided by the AI chat experience? Will rank trackers see those initial citations? Possibly, but as you refine the query with additional chat prompts as I did above, they certainly won’t be able to! AI chat integrated into the SERP could put a far greater emphasis on data sources like Search Console (where you can see the total impressions), and may make rank tracking within third-party tools less reliable than it currently is. So, does the integration of AI chat into the SERP mean the end of organic traffic? Probably not. Search engines seem to generally understand the need to incentivize content creation by pushing organic traffic and offering context to the user via citation and beyond. To again use Bing as an example, there seems to be plenty of opportunity to take notice of the organic results on the SERP below: My read on Bing is that it is using the chat functionality to accent search. I see the Bing SERP, just for example, as trying to use the chat feature to offer a more layered and well-rounded search experience—not to replace it. At a minimum, there are some early and encouraging signs that the search engines understand that they cannot leave organic traffic out of the equation. AI content generation: A pivotal moment for the web and SEO Over the course of my time in the SEO industry, I’ve seen all sorts of trends come and go. I’m still waiting for the dominance of voice search to materialize. AI content generators, however, are not a fad. 
The problems that they “solve” are too attractive and the technology too advanced to ever be put back in the bottle. Search engines, as we’ve already seen, are going to incorporate the technology into their ecosystems. Search engines are also going to have to adapt their algorithms accordingly so as to handle the impending wave of AI-written content. Whatever the outcome of all of this is, I can say one thing with total certainty—I cannot remember a more determinative moment for the web and for SEO. Mordy Oberstein - Head of SEO Branding, Wix Mordy is the Head of SEO Branding at Wix. He also serves as a communications advisor for Semrush. Dedicated to SEO education, Mordy is one of the organizers of SEOchat and a popular industry author and speaker. Twitter | Linkedin

  • Live AMA: Understanding Wix's high performance and CWV scores

    Have your questions answered in a live AMA with Wix’s Performance Tech Lead, Dan Shappir. Plus, take a deeper look into how Wix prioritizes performance and what this means for you and your clients. Transcript: Understanding Wix high performance and CWV scores Speakers Brett Haralson, Community Manager, Wix Dan Shappir, Performance Tech Lead, Wix 00:00 Brett: Hey, hello everybody and welcome to this week's Partners live AMA with Dan Shappir. Today we're going to be talking all about Core Web Vitals, understanding Wix's high score and performance. And let's just kind of jump into it. This is a 100% live AMA. So everything I'm about to ask is what you've submitted. Dan, welcome. Dan is the Performance Tech Lead here at Wix. Dan, everybody knows you as “Dan the Beast”, welcome. 00:27 Dan: Thank you very much. I'm so excited to be here and engage with the community. Looking forward to the questions, let’s see if they can stump me. 00:35 Brett: Yes, yes. Can you stump Dan? So by the way, for those of you who are joining us, here's kind of the flow, what you can expect. To sign up to this and register, you submitted some questions. I've got them all ready to go. However, I'm going to try to get some in chat. So as we're talking about things, if you want to ask Dan a question, please go ahead and do it. And I'll try to get to it towards the end. And also, there were a lot of questions that were submitted about, you know, what [is] CWV, what is SEO, and any tips? We did that webinar with Dan and I and Dikla from Google. It [was] a little while ago, but I added it to the description if you want to go back and refresh yourself. So with that being said, Dan, it's been a while since we talked. And I think the only thing I can do to show where we've come is this right here. It's this graph, Dan. 01:31 Dan: Yeah, it's a great graph. And it's important to note that this is not our graph. 
This is a graph that's actually hosted on HTTP Archive, which is this open source project sponsored by Google. And the data that's coming in, that's feeding this graph, is data collected by Google with their Chrome User Experience Report database. It's the same data that they then use for the SEO ranking boost that, you know, performance can now give. So this is not Wix data. This is, you can call it, objective data about Wix's performance as it's reflected by Google data. 02:16 Brett: So I think it's important to note too, that it's not Wix data, so I want to thank you for clarifying that. Wix has come really, really far. And I love this graph. And then I'm gonna jump into the questions. But I want to just spend two more seconds on this. I think it's really important to note that on this graph, if you go back to 2020, Wix is really at the bottom of the pack. Now, if you look at this, Wix is leading the pack. This is incredible, Dan. What the heck are y'all doing over there? 02:46 Dan: Yeah, I have to say that this has been a company-wide effort. I’d love to take credit for it, but really, hundreds of people at Wix have been working on this diligently. It's been designated as a top priority, strategic task across the entire Wix organization. And essentially, everybody in Wix R&D, Support, QA, Marketing, everybody has been engaged in pushing Wix’s performance up and up, you know. 03:17 Brett: So it's funny, because it's all the results. Alright. We said, you know, [with] the Partners, that was one of the pain points, we needed to be fast, we needed to load faster. And I remember sitting down with so many Partners and you, and [saying] this is our top priority at Wix. This will happen. I remember even executive levels at some of our roundtables saying that. It's so great to see this. I'm glad to be where we are. But we still have questions, Dan. Let's see. Let's see. 
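The field data behind that HTTP Archive graph is publicly queryable: Google exposes the same Chrome User Experience Report (CrUX) dataset through an API. Below is a minimal sketch of reading one Core Web Vitals metric out of a CrUX-style response; the endpoint name and JSON shape follow Google's published CrUX API (records:queryRecord), but the origin and every number are invented for illustration:

```python
# Sample response shaped like the Chrome UX Report (CrUX) API's
# records:queryRecord output. The origin and all numbers below are invented.
SAMPLE = {
    "record": {
        "key": {"origin": "https://www.example.com"},
        "metrics": {
            "largest_contentful_paint": {
                "histogram": [
                    {"start": 0, "end": 2500, "density": 0.82},   # the "good" bucket
                    {"start": 2500, "end": 4000, "density": 0.12},
                    {"start": 4000, "density": 0.06},
                ],
                "percentiles": {"p75": 2100},
            }
        },
    }
}

def summarize(metric: dict) -> tuple:
    """Return (share of 'good' experiences, 75th-percentile value)."""
    good = metric["histogram"][0]["density"]  # first bucket is the "good" range
    p75 = metric["percentiles"]["p75"]
    return good, p75

lcp = SAMPLE["record"]["metrics"]["largest_contentful_paint"]
good, p75 = summarize(lcp)
print(f"LCP: {good:.0%} of page loads good, p75 = {p75} ms")
```

The first histogram bucket corresponds to the metric's "good" threshold (for LCP, 0–2,500 ms); aggregating that share across all the origins on a platform is essentially what the graph discussed above plots.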
So I'm going into the question bank, and I'm gonna start pulling questions. If you have some, please go ahead and drop them. I'll try to get to them towards the end. So here we go. First up, Jose wants to know, “Do I need to republish my site to benefit from performance improvements?” Now I saw this a lot. Dan, is there something the Partners need to do to see something happen? Are there any backend widgets they need to work on? Or does it just happen? 04:09 Dan: Okay, so let's distinguish between, let's say, modifying your website to get the most bang for the buck. And as you said, in the webinar that we did with Google, we did provide a whole bunch of tips and suggestions of things that you can do to get the most performant website that you can on top of the Wix platform. That being said, in order to just benefit from the improvements that we're making, you don't have to do a thing. One of the great things about Wix is that, you know, we don't break your website, you don't need to update plugins, you don't need to update themes. You don't need to worry about security or scalability. We take care of all these things for you. And the same goes with performance. If somebody built a website on our platform eight years ago, and didn't touch it since then, didn't publish it or anything, it's much faster now than it's ever been. 05:06 Brett: I don't know if anybody, I mean. I really think if you were to look that up in Webster's, it would literally be defined as sorcery. I'm serious. That's incredible. That's really incredible. But I do have other questions, too. I think I'll touch on those that actually want to elaborate a little bit on that. But I'll circle back to that. So I'm tossing another one at you. 
Ray wants to know, “How can partners utilize these scores to help promote Wix to their clients?” And Wellington has a secondary follow-up question to that, “How can you correct the assumption that Wix is slow?” And you know, Dan, I'll chime in here for just a second. I understand that, you know, as a Partner, your creativity, your business, is building a web presence for a client. And a lot of clients have different conceptions of something, or they may have seen an ad on Facebook or Google or something, and they're interested in this site or this platform. A lot of Partners, I think, battle that—convincing their client to go a certain way, because it's what they love. What would you say about this to help Partners put a feather in their cap? 06:17 Dan: Well, first of all, let's start with the fact that, as the graph shows, if we go back two, three, four years, Wix was slow. You know, we did do a lot of work, we have come a long way. I can give examples of other places where Wix has substantially pushed the envelope forward, like SEO, or with accessibility, where, you know, we knew that we needed to up our game, and we were able to do that. And performance is yet another example of this. So if we look at that graph and compare then, let's say two years ago, if you built a site on Wix, or you built a site on WordPress, then with WordPress, you would have been three times more likely to have a good Core Web Vitals score. Now, with Wix in the US, you're twice as likely to [have] a good Core Web Vitals score than you are with WordPress. So there's a definite improvement and shift here. So if somebody says, you know, “I heard that Wix is slow,” well, the answer to that is your information is simply outdated. Wix has come a long way forward. And, you know, that's kind of the benefit of using a platform such as ours, that you get all these benefits and improvements over time, like we said before, without you needing to do anything. 
07:48 Brett: Who doesn't love that? Dan, “your information is outdated.” I want a shirt. I'm gonna brand that: me and the Partners are going to start wearing shirts that quote Dan the Beast, “Your information is outdated.” Wix is the GOAT. I saw that, by the way, that was great, who said that? Oh, gosh, it was great. We need more, we need more GOAT icons. Okay. So let me go to another question. I think you answered both of those. That was great. Thank you for that. Matt. I think this is Matt. This is a great, great question, “How can we view more detailed CWV metrics?” And more importantly, he wants to know, “Is it possible to import, export and share with clients?” I think this is a fantastic question, Dan. 08:28 Dan: Well, the great thing about Core Web Vitals and what Google [has] done is that they've kind of standardized the market around these metrics. And as a result of this, you can literally see these metrics in almost any performance measurement tool that you use. So currently, we don't yet show them in the Site Speed dashboard. And you know, you can take my use of the word yet as an indication of things to come. But you can definitely check them out in other sources. So for example, if you're interested in your own website, and if you have enough traffic, then you know, if you just go into the Google Search Console, you will see there is a Core Web Vitals tab in there. And you can actually get information about your Core Web Vitals for your own website within the Google Search Console. They will actually highlight, you know, which pages have good Core Web Vitals, which pages need improvement, and then you can kind of focus on those. So that's one place where you can see this information. 
Another place where you can see this information is in Google PageSpeed Insights, where you can literally put in any website, your own, your competitors, you know, like CNN, whatever, and if that website has sufficient traffic, you will see the Core Web Vitals information for that website. Now, unfortunately, PSI is kind of confusing in the way that the data is presented. A little bird at Google whispered in our ear that they're looking at revamping their user interface and hopefully making it clearer and more understandable. Because you kind of have the score at the top, which—it doesn't actually have to do with Core Web Vitals, it's actually based on lab data. 10:25 Brett: And I have a question about that, it’s queued up. So let's talk about that in just a second. Because that's interesting. I want to know about that. But Dan, I'm curious about what the Partners do. I know a lot of the SEO-specialized Partners have like a report that they show a lot of their clients to show how they're gaining local organic SEO traffic. Are any Partners doing anything with performance, are you sending this to—just drop it in the chat? I'm curious. So, Dan, what do you think? 10:53 Dan: Well, for sure. I mean, you know, for example, that graph that we showed at the beginning that we said is from HTTP Archive. That graph is available to anybody. We can, you know, share the link to that. And it's a really nice tool that the people at HTTP Archive have created, because you can filter and compare various CMS platforms or website builders or eCommerce platforms, and you can look at different geos. By the way, I highly recommend that you filter it for the particular geography that you're in. So for example, if you're selling in the States, and you want to compare to other platforms, then, you know, filter that graph to the States or the UK or wherever, because that is a better indication of what you can expect. 
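Dan's point about PSI being confusing comes down to its response carrying two different kinds of data: the headline score comes from a simulated lab run, while the Core Web Vitals assessment comes from real-user field data. The public PageSpeed Insights v5 API exposes both, under `lighthouseResult` and `loadingExperience` respectively. A minimal sketch of pulling the two apart; the endpoint and field names follow the published API, but the response below is heavily trimmed and the numbers are invented:

```python
from urllib.parse import urlencode

# Building the request URL; the endpoint is the public PageSpeed Insights
# v5 API. "https://www.example.com" is just a placeholder target.
def psi_request_url(page: str, strategy: str = "mobile") -> str:
    query = urlencode({"url": page, "strategy": strategy})
    return f"https://www.googleapis.com/pagespeedonline/v5/runPagespeed?{query}"

# A heavily trimmed, invented response. Note the two distinct sections:
SAMPLE = {
    "lighthouseResult": {                   # lab: one simulated session
        "categories": {"performance": {"score": 0.61}}
    },
    "loadingExperience": {                  # field: real-user CrUX data
        "overall_category": "FAST",
        "metrics": {
            "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 2050, "category": "FAST"}
        },
    },
}

lab_score = SAMPLE["lighthouseResult"]["categories"]["performance"]["score"]
field = SAMPLE["loadingExperience"]["overall_category"]
print(f"Lab (simulated) score: {lab_score:.0%}; field (real users): {field}")
```

A mediocre lab score next to "FAST" field data is exactly the combination Dan describes: the simulated run is deliberately pessimistic, while the field section reflects what actual visitors experienced.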
And then you can definitely just show that, you know, I'm going to build a website for you, and if I build it with this platform, it's that much more likely to get a good Core Web Vitals score than if, you know, you build it with some other platform. 12:04 Brett: Yeah, and again, here, I think it's a really great opportunity here for Partners to share some of the other sites that they've done and show those scores. And you know, so I think this is a great question. And I think every Partner can handle it a different way. But I think it's a good conversation for us to have as a community, Partners, so whatever you do, I'm curious. Okay. 12:26 Dan: Yeah, just one more comment on that. One of my favorite posts on our Community group in Facebook was this post where people started posting, you know, screenshots and grabs of their GTmetrix scores, and you know, boasting how far [they've come], like, "We had a C, and now it's an A", and it's all green and whatnot. So that, you know, I really enjoyed watching that conversation. It was really great. 12:53 Brett: That's great. A lot of Partners are actually doing this. And by the way, there are some really good questions that have gone into chat that I've taken note of, so we may actually get to stump the GOAT today. Okay. So let's—I'm gonna keep going. So let's go to another one. Alright. So here's a great one. How do I check current performance and measure the impact of site changes? Is there a way to see if I've made some changes, maybe some changes that we've talked about in the previous webinars, Dan, how do I know if my performance has shifted? That's a good question. 13:31 Dan: So you know, all the tools that we've mentioned are totally relevant to measure your performance at any point in time. One of the great things about the Site Speed dashboard—currently it just shows the Time to Interactive metric, but it definitely shows it over time. 
So you know, there's this nice graph in there where you can see how the changes that you're making impact your site, or likewise, you can measure different points in time. One of the problems with the Google tools is... actually, let me clarify that. If you use the Google Search Console, they use a moving average, you know, looking at a month back, but it's from today until a month back. In PageSpeed Insights, they only look at like month segments. So you need to take into account that changes that you make will not show, for example, in PageSpeed Insights in the field section for up to about a month. So be aware of that when you're trying to measure the changes that you're making. So either use like a lab score to see whether the score is going up or down (you know, we'll talk a little bit about PageSpeed Insights and how to consider that score in a bit, so I don't want to go too deeply into that right now). But I will say that it's really useful for seeing whether you're improving or regressing. You know, so forget about what the actual score is right now. Just compare it to a score that you had before, see whether it's higher, or whether it's lower. And that's a great way, and again, you can actually run it directly from within the Site Speed dashboard, you don't actually have to go to PageSpeed Insights. If you go to the Site Speed dashboard in your Wix dashboard, you can scroll down, and you can see your Lighthouse score for both desktop and mobile. And you can click Refresh to rerun it again and again. So you can check the impact of changes that you made. Now, what I usually recommend for people to do—so first of all, you know, one of the great features, one of the best features, in my opinion, that we have in Wix, is our Site History. So you can always make changes. And then if you don't like them, well, you can just revert to a previous version. 
You know, it's useful for performance. But it's also useful, just you know, in general, if you're testing out various changes. And now we also have the, what's it called, the Release Candidates feature within the Editor, which is an amazing feature: you can run A/B tests. Now you can't A/B test for performance, at least not yet. But— 16:17 Brett: Is that a not yet, Dan? Is that a not yet? Yeah, 16:20 Dan: We'll see. But you can use that mechanism. Or you can even really go old school, and you can either duplicate the page, or duplicate even the entire site. So for example, you can duplicate the page, make whatever changes you want, then, for example, use PageSpeed Insights to compare the score for this page and compare the score for that page. One more thing that I will say about Google's PageSpeed Insights: it's a known issue that scores within it fluctuate a lot. So if you're looking at the PSI score, I would recommend for you to essentially run it several times, like, I don't know, five times. And then take the average score, or the median score, something like that. And not just, you know, run it once and assume that whatever you get is the actual, like, absolute score that you have. 17:18 Brett: I hope everybody's taking notes. I'm pretty sure that there are some notepads smoking right now, there's so much heavy writing or typing, keyboards burning up. I think that whole segment just needs to be turned into a blog. Everything you just said needs to be a blog right there. 17:32 Dan: Yeah, that's probably gonna happen. That's incredibly good. Yeah, that's probably going to happen as well. 17:37 Brett: Okay, good. Good, because we need that. Alright, I've got another good question. Rhen wants to know, and this is kind of a double part here, “Why is my mobile PSI score low?” And Ari wants to know something similar, but specifically about Stores. 
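Dan's advice about fluctuating PSI scores (run the lab test several times, then compare medians rather than single runs) can be sketched in a few lines; all the scores below are invented example values out of 100:

```python
import statistics

# Lab scores fluctuate run to run, so compare medians of several runs
# rather than single measurements. All scores below are invented.
runs_before = [58, 64, 61, 55, 63]   # five PSI runs before a change
runs_after = [71, 68, 74, 70, 66]    # five PSI runs after the change

before = statistics.median(runs_before)
after = statistics.median(runs_after)
print(f"median before: {before}, median after: {after}, delta: {after - before:+}")
# → median before: 61, median after: 70, delta: +9
```

The median is less sensitive to a single outlier run than the mean, which is why it is the safer summary when a tool is known to be noisy.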
So maybe this is the same, or maybe they're different? I'll let you answer this. 17:58 Dan: So I'll start with the general one, about the mobile PSI score. So you know, when you put in your website, or anybody's website, inside PSI, and you press the Go button, it does two things. Again, as we previously explained, if you have sufficient traffic, it will actually go and retrieve your field data from that Google database. But in addition to that, it actually loads your site on a Google virtual machine somewhere in the cloud, and does a whole bunch of measurements on it. So it effectively does a single session, and just tries to measure the performance of that particular session. Actually, it does two sessions, one to measure desktop performance and one to measure mobile performance. In the case of mobile, Google are intentionally simulating a low-end device; the device that they're simulating in PageSpeed Insights is a Moto G4, a phone that was released at, like, the beginning of 2016. So it's over five years old. And they're using a simulated 3G network. So you know, our experience is that the vast majority of visitors to Wix websites have much better devices and connectivity than that. So it's not surprising. You know, sometimes people ask me, why do I see green Core Web Vitals, but I'm seeing, you know, a relatively low score in PageSpeed Insights, especially for mobile? Well, that's the reason. The reason is that your actual users, your actual visitors, probably have much better devices and much better connectivity than what Google is simulating. Now why is Google simulating such a low-end device? Well, because they want to be inclusive, because, you know, we're living in a global economy. 
They want you to think about potential customers in Africa, or in Southeast Asia, or wherever, where they might not have such powerful devices, or might have slower connectivity than what you might have. And, in fact, they've recently written a blog post (we can share a link to that as well, although it's a bit technical) about why there is a potentially significant discrepancy between their mobile lab scores, those simulated scores, and the actual field data. The important thing to note here is that the ranking boost within Google Search is just based on the field data. So the lab data that you're seeing in PageSpeed Insights has zero impact on the Google ranking algorithm. You can use it, you know, as an indication, like, I want to move up the score, so I'm making changes and I can see the score going up, because it will take time until these changes are reflected in the field data. But it's important to remember that this is only used as a tool to give you an indication of what a low-end device might experience when visiting your site. I hope this was clear—kind of a technical explanation. 21:26 Brett: I feel like every time I ask a question, you pull out a book and start reading. And then we close the book and go to the next one. It's like the library of Dan here. I don't know what's going on. So yeah, it makes perfect sense to me, that makes perfect sense to me. 21:41 Dan: Now, going back to the specific part about Stores. So there are a couple of points I wanted to make here. The first and important point is that, you know, in many ways, a store site, or a blog site, or an event site, or a fitness site, or a restaurant, or whatever. They're all just Wix sites, and most of the changes that we're making are essential infrastructure changes that impact every site, regardless of which Wix features it actually uses. That being said, you know, it's not possible to move the needle equally across the board. 
So some aspects of Wix might be, let's call it, further ahead in terms of performance than others. But we're not stopping, we're not holding [back]; you know, we'll talk about this later on, we keep on pushing forward. And our goal is to be, you know, the fastest, best option across the board. 22:45 Brett: Okay, we'll close that book. Let's open another one. So Daniel wants to know, “How well does Wix's performance scale with large databases and stores?” So is there like a breaking point where too much affects performance? Is there a sweet spot? 23:04 Dan: So we built Wix to scale; this whole change that we made with the introduction of dynamic pages and collections and stuff like that was implemented exactly for this purpose. You know, it used to be that if you wanted to have lots of items within your Wix site, you basically just needed to manually build lots and lots of pages. These days, that's not the way to go. You build a single dynamic page, you bind it to a collection, and off you go. And the great thing about that is that, you know, the mechanism doesn't really care how many items are in the collection in terms of the performance of that dynamic page. Because these are databases running on fast servers, they're built to scale; there's literally no problem. Every page is wholly independent of the other pages in the site. So the fact that, you know, you have one page which is heavy and another page which is lighter: the heavy page does not impact the lighter page. That being said, you know, sometimes you show a lot of content within a single page. So for example, you might have a product catalog, or a blog feed, or a gallery, or a repeater, or what have you. And in that case, if you decide to display a lot of items within that, let's say, catalog, that will result in a bigger page, and that page as a result will be heavier, and that will have an impact on performance. 
So usually, my recommendation is not to overdo it in terms of items on a page. You know, when reviewing websites, occasionally I see mobile pages that are 30, 50, even 100 screens long. And I, you know, I kind of ask myself, who expects their visitors to scroll through 100 screens on their mobile device to find the item that they're interested in? If that's your approach, you're creating a cognitive overload for your visitors and it's unlikely that they will scroll through that entire page. And that huge page has a performance cost. We are working on mitigating it, we've done some work, and we're doing more work to be able to handle bigger pages. But there are no free lunches. The more stuff you put on a page, you know, the more it will impact your performance. So generally speaking, in the context of, you know, having large databases: go wild and have as many items in your collection as you would like, but make sure not to try to overload your visitor with too many items on a single page. 26:01 Brett: It makes perfect sense to me, Dan, perfect sense. So for those of you who are just joining us, we are live, we're having a live AMA with Dan the man, the GOAT, the legend. And I'm taking questions that you have submitted, but if you have one that you want to ask, please drop it in chat. I've got a few more to go, and then I'm going to go to some of your live questions, and we're going to keep going. So great question, and thank you, Dan. So let's jump to this. And I think you sort of answered this, but let's maybe go a little more in-depth. You kind of touched on it, which is a perfect prelude to this question, so I'll ask it again: how does adding content or functionality to a page impact CWV? 26:50 Dan: Well, first, yeah, so as I said, there are no free lunches. 
The more stuff that you put on the page, the greater the impact on the page's performance. It's almost impossible to add stuff with zero impact. You know, like I said, we are doing all sorts of optimizations, like, for example, lazy loading images. So for example, [on] a Wix page, we initially load low-quality images that are then replaced with the final high-resolution images. The images that are below the fold, or, you know, outside the initial viewport that you need to scroll to get to, we only download when you start scrolling towards that section. So we don't download them upfront. In this way, we kind of try to mitigate the impact of adding more content to the page. But like I said, at the end of the day, the more stuff that you put in, the heavier the page becomes: the bigger the HTML, the more stuff there is to download. So you do need to take that into account. And also, as I said, there's also the concept of perceived performance, or the cognitive overhead: the more stuff that you put on the page, the greater the load is on your visitor to try to figure out what that page is about. So don't just think about performance in terms of how long it takes for the browser to load and display your content. Try to also think about how long it takes for the visitor to comprehend what you're showing them and understand what your website is about, you know, what is the primary message that you want to get across. Which brings me to an important point. It's a term that's familiar in marketing (I don't know how many of our listeners are familiar with it): the call-to-action, or CTA. It basically refers to that message or that action that you would like your visitors to perform. So for example, if it's a store, obviously what you want them to do is make a purchase; if, let's say, you're a fitness trainer, you may want them to book an appointment or something like that. 
So, anything that is [conducive] to your CTA, you know, has a place on that page. Anything that does not contribute to that CTA should probably be removed. It will improve your performance, it will reduce the cognitive overhead, and it will likely improve your conversion. And you know, sometimes I look at pages that are all messed up, and you know what happens there: somebody in the company wants to promote one thing and somebody else wants to promote another thing, so ultimately they just try to put everything in there, and at the end of the day, that's just a bad idea. You do need to try to figure out what your website is all about and try to focus on that. Another point that I would like to make is that not all components are created equal. You know, there are obviously some heavier things and some lighter things. So obviously a gallery is heavier than a single image. So when you're putting stuff, especially in the initial viewport, again, what is also known as above the fold, think about the stuff that you're putting in there. For example, I usually recommend for people to make sure that they have at least some text above the fold, not just images, not just galleries, not just videos, but also some text, because that text will usually appear faster, and it will provide meaningful content for the person who's visiting your website. You know, I kind of strayed off from the original question. 30:59 Brett: I like it. I hope people are taking notes. I mean, there's just so much knowledge. I kind of like it when you wander off a little. It's still very interesting, but relatable, right? It's related to what we’re talking about. Can we close that book? And can I go to the library and pull another one out? 31:17 Dan: Yeah, for sure. Go for it. You know, okay. 31:20 Brett: So, here's a good one. I'm watching the chat. There are a couple of questions. By the way, Patricia, your question I pulled and it's coming next. 
So hang tight on that one. What exactly is Wix doing for CWV for Wix Stores? Is it separate, Dan? Is the performance different for eCommerce sites versus regular ones? Earlier you said it's all the same. So I'll give you an opportunity to hit this nail on the head.

31:44 Dan: So it's kind of the same, but not exactly the same. As I said, all Wix sites share the same infrastructure, the same underlying technology, and the same core, let's call it code. And by the way, that's true whether you're using ADI, or the Wix Editor, or Editor X, whatever editor you used to build your website. It's all running on the same infrastructure and using the same core code to actually render the site. As a result, improvements that we are able to make within that infrastructure and within that core code impact every Wix website out there.

And by the way, I want to use this opportunity to give a huge shout out to what is known inside Wix as the Viewer Company. That's the team working on the core code that displays websites. They made a huge improvement in terms of performance; they've effectively rewritten that entire component from scratch. Much of the upward trend that you saw on the graph is a result of their work. It's amazing work that they've done. And as I said, that impacts every Wix website of any type, regardless of the functionality that it uses.

That being said, obviously there are also some elements within a Wix Stores website that are specific to Stores, like the shopping cart icon; you only get that if you've got the Stores functionality. Or you may add chat in a Store that you might not add, for example, in a Blog. And those things also have their own code. We are working to improve the performance of all of these components. As I said, some are further ahead than others. But obviously Stores are really important for us, and it's one area that we're focusing a lot of effort on in particular.
And as you saw when you looked at the graph, and I'll say it quietly, I think that one of the companies shown in that graph was Shopify. As you can see, they're the one just behind us now. They're also making improvements; they've also upped their game in terms of performance, so everybody's kind of doing it, with one exception. But anyway, we've managed, in the US for example, to actually pull ahead of them in terms of the performance of websites built on the platform, or more accurately stated, the percentage of websites built on our platform that get good Core Web Vitals versus the percentage of websites built on their platform that get good Core Web Vitals.

34:54 Brett: Yes, and I think it was either this week or last week, I saw a Partner drop an article that was specifically talking about how another platform is now trying to put together their own team, similar to what Wix has done. Because it's evident that the progress we've made in a year is incredible. And, and this is a question I'm gonna ask you in a minute, but I love how you're saying, we've done well, but there's still so much more for us to do. I just love that. So I'm gonna go back to the library, Dan. I'm gonna get another one. And then I've got a few that people have asked in the chat that are just outstanding; I want to do a little bit of overdrive and see if we can stump the man. Okay, here we go. So Patricia wants to know, "Is it better to upload WebP files to make the page faster?" And Gordon has a really close question to that: "What is the best format to use for fast and clear loading images, specifically ones that are extra large?"

36:02 Dan: Yeah.
So images are a very interesting topic, because on the one hand, it's really obvious to everybody that we want to have good, clear pictures on the website. But then when you start delving into this topic, there are a lot of technicalities, and it also turns out that there are a lot of myths. So first of all, I want to point out that in that webinar that we did, I discussed media in particular, so I highly recommend that people who are interested in this topic go back and check it out, beyond what we say in this AMA, because there's a lot of useful information there about what you can do to get the most out of it.

I also want to say that Wix has some of the best media people that I've ever encountered in the industry working for it. The Wix Media services are amazing: you get out-of-the-box functionality that you would need to purchase separately on other platforms. One of the things that we do is automatically optimize images for you. For example, when you crop and clip images, we only download the parts that are actually visible on the screen. So you can upload a huge image: say you want to show a portrait of yourself, but your favorite image is the one that you actually took on vacation, and there's a whole bunch of stuff all around. You can upload that huge image, then within the Editor just crop the part that you actually want to show, and you don't have to worry about it; we won't download all that stuff that's outside the cropped area. So that's one example of the optimizations that we automatically do for you.

Another optimization that we do for you is to automatically use modern and optimized image formats. WebP is one of them; we'll maybe discuss more of them if we have time to talk about the future plans that we have.
You can upload your images in standard JPEG or PNG formats, and we will automatically convert them to WebP for browsers that support it. So we actually recommend that you use the original format; don't convert to WebP yourself. There are some browsers out there that don't properly support WebP, and by uploading the original format, you enable us to use that on those older or less capable browsers, and then do the optimal conversion to WebP for browsers that actually do support that format. So you don't have to worry about WebP; we take care of that for you. And as another advantage, when a newer image format comes along that's even better than WebP, we will use that automatically, and again, you won't need to do anything. Just as an example, we talked before about old websites. A person who built their website six or seven years ago, before WebP was even out there? Well, they're now serving WebP automatically from their website, because we do this automatically for each and every Wix website out there. So that's one important note to make.

In terms of the format to upload, without going too much into details, it's generally preferable to use JPEGs over PNGs where possible. Sometimes you need PNGs because you need transparency; for example, maybe you're creating some sort of a parallax effect and you need that transparent background. But if you can make do without, then I would generally recommend using JPEGs. They result in smaller files, and they result in smaller WebP files: JPEGs that are converted into WebP are smaller than PNGs that are converted into WebP. So that is what I generally recommend using.

Oh, and do avoid GIFs if you can. People use animated GIFs; I prefer animated clips, just a looping video or something instead, because animated GIFs are huge.
They don't get converted into WebP, so it's just this GIF, and I've seen websites where a single GIF was like three times bigger than the rest of the website.

40:43 Brett: So Patricia, I hope you got all of that. I know you're out there, and I'm just curious how you feel about that response from Dan, because it blew my mind too. And Sam, awesome, thanks for the love, man. I agree, Dan and everybody at Wix are doing a really good job. But we don't stop there, and that kind of leads me into my next question. Before I jump into the questions from our Partners that are viewing, Dan, I'm gonna ask for just a moment of overdrive. One of the Partners actually asked this. Simon wants to know, "So what are the next updates for Wix Performance? What's on the horizon?" So we've come a long way, absolutely, but we're not stopping there, Dan. Can you tell us about these, air quotes, "future things"? What's next on the agenda?

41:33 Dan: So obviously, with all the required caveats about forward-looking statements and whatnot: we have plans, but then fate intervenes. That being said, we are definitely not stopping. One thing that I do want to note: if you look at that graphic, and if you can put it up again, you will see that in some months we move forward, and then it seems like we stop, and then we move forward again. So I can't promise that we will be able to move forward at the same rate in each and every month. But we have put systems in place that, first of all, are intended to prevent regression, so we don't expect to see ourselves ever going backward. And we do intend, and are continuing, to push forward. So overall, you will continue to see that graph keep on going up and up, for sure. And we do have a lot of stuff on our plate.
There are people at Wix, even right now, specifically working on performance-related advancements to our platform. To give an example of something that just recently rolled out, so it's already out there, but it came out so recently that it hasn't yet impacted that graph: support for HTTP/3. HTTP/3 is one of the latest and greatest web standards, really new, not yet widely used, and it improves the performance of downloading content from the web servers down to the browsers. We've rolled it out, so we use HTTP/3 where we can, and it can deliver content much faster. So that's an example of something that's already been deployed but is not yet impacting that graph that you showed before.

Something else that we're looking at, as I mentioned before, is support for newer media formats. WebP is currently the hotness that some websites are using. By the way, I'm sometimes surprised that so many websites aren't yet using WebP, because it's really widely supported. But really recently, a new format has come out called AVIF, which is supposed to be something like 20%, sometimes even 30%, smaller than WebP, and we're looking at it. This is something that we're currently investigating, and if we find that it actually delivers on its promise, and is actually able to reduce the size of the image downloads without adversely impacting quality, then we will automatically enable support for it. And again, you won't have to do anything.

Brett: Nobody has to do anything.

Dan: Yeah, you'll just start getting AVIF. Another thing that we're looking at is being smarter about how we do this gradual image display. We already have it, but currently it's either low-res or high-res, and we're looking at making it a more gradual kind of build-up to the final form.
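To make the format selection concrete: a server can inspect the browser's `Accept` request header, which modern browsers use to advertise AVIF and WebP support on image requests, and serve the most compact format the client understands, falling back to the original upload otherwise. A minimal sketch, illustrative only and not Wix's actual media pipeline:

```javascript
// Preferred formats, best (smallest) first. Older browsers simply won't
// list these in their Accept header and get the original upload.
const FORMAT_PREFERENCE = ["image/avif", "image/webp"];

// Pick the image format to serve for a given request.
function pickImageFormat(acceptHeader, originalFormat) {
  for (const format of FORMAT_PREFERENCE) {
    if (acceptHeader.includes(format)) return format;
  }
  return originalFormat; // fall back to the uploaded JPEG/PNG
}
```

For example, a current Chrome sends something like `image/avif,image/webp,image/*` for image requests and would get AVIF, while a browser advertising only `image/*` would get the original JPEG or PNG.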
Let me see, I've actually made a list of some of the things. So I'm—

45:08 Brett: I'm gonna jump in while you're doing this. I want to preface this, because I'm going to start bringing in questions from the Community about this. Rhen had a really good question kind of about that: "With these increases in the scores, do you anticipate future optimizations will be incremental? Or do you think there are going to be things that can make some huge jumps in the future?" And I don't know if that's what you're getting ready to show, or—

45:30 Dan: Yeah, well, the reality usually is that these things are incremental. If there were obvious ones that would make a huge change, then we would just go for it. We are working on some changes. You know, there are three Core Web Vitals; again, I won't go into too much detail, but we are looking at making some significant improvements on one of them. So you might see an occasional jump. But overall, this is going to be a gradual thing, if for no other reason than that there are so many different types of Wix websites out there. For example, there are some websites where the primary content is an image, and there are some websites where the primary content is text. So if we make an improvement in how quickly we are able to download and display an image, that benefits those sites, but not the ones where the primary content is textual. And that general graph that we showed was across all Wix websites. So we might make a change that would make a particular website suddenly really improve in terms of performance, but if you look at Wix as a whole, I expect more of a gradual improvement, to be honest.

Brett: That makes a lot of sense.

Dan: Yeah. So I did want to mention a few more things that we're looking at. We've introduced a Site Speed dashboard; that was definitely a version one.
We are looking at ways to make that dashboard better, provide more actionable metrics, and in general be more applicable when you're looking to improve your performance. So expect to see improvements there.

Oh, another really cool one. A lot of people use PSI, Google PageSpeed Insights, and all the recommendations there are really generic, and a lot of them are not really things that you, as a Wix website owner, can actually do anything with. For example, you might see recommendations such as "reduce the amount of JavaScript." Well, you don't really have control over the JavaScript; that's up to us. You can remove functionality from the page, which will likely reduce the amount of JavaScript that you're using, but short of that, you can't really keep your functionality and reduce the JavaScript. Well, guess what, we are working on it. We are working on significantly reducing the amount of JavaScript that we deliver to provide the current functionality, by essentially being smarter about identifying exactly which functionality each page is using, and only downloading what the page actually needs. This is a work in progress; it's not something that will likely happen overnight, it will happen gradually, and it's something that we will keep improving over time.

But going back to Google PageSpeed Insights, we are actually looking to integrate Wix-specific suggestions into Google PageSpeed Insights, so that when you put in a Wix website, it will identify that it's a Wix website, and in that recommendations and suggestions section, in addition to the generic ones, you will also get Wix-specific recommendations and suggestions for things that you can improve. I think that's a really cool thing that we are looking to do.
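The idea of downloading only what a page needs can be sketched as a manifest lookup: given the components a page actually uses and a map from components to script bundles, compute the minimal set of bundles to fetch. Component and bundle names below are purely illustrative, not Wix's real build artifacts:

```javascript
// Compute the minimal, deduplicated set of script bundles a page needs,
// given the components it uses and a component -> bundles manifest.
function bundlesForPage(pageComponents, manifest) {
  const needed = new Set();
  for (const component of pageComponents) {
    for (const bundle of manifest[component] ?? []) {
      needed.add(bundle); // Set deduplicates shared bundles
    }
  }
  return [...needed].sort(); // sorted for a stable, cacheable list
}
```

With a manifest like `{ gallery: ["gallery.js", "media-core.js"], chat: ["chat.js"], text: [] }`, a text-only page resolves to no extra bundles at all, while a page with a gallery pulls in only the gallery code and never `chat.js`.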
49:21 Brett: And for those of you watching, he's not reading this off of a script. It's incredible to me, Dan; you've probably forgotten more than I'll ever know in my life. Okay, I've got another one, and you touched on this a little bit, but I love when the Partners are interested, technically. So looking at the graphs, Wix improved FID significantly. Can you talk specifically about things that you did? You touched on this a little when you talked about the Viewer. Do you want to add anything, or talk about the Viewer again for this particular question? Because I thought this was incredibly interesting.

49:58 Dan: So just to clarify, FID is First Input Delay. When a visitor visits your site, it measures the first time that visitor interacts with the page in any way whatsoever, for example, clicking on a button or on a menu. Anything other than scroll and zoom; scroll and zoom don't count. Any actual interaction that requires the page to respond. The browser measures how quickly the web page responds to that first interaction, and that's the FID. And ideally, by the way, FID should be under 100 milliseconds, because according to research, that counts as an essentially instantaneous response. As you correctly stated, that's one of the main things that we improved. If we look at the graph, we really went from having really poor FID to being right up there, with almost perfect FID. And that has to do with the Viewer team I shouted out before; it's mostly a result of the work that they've done. We've shifted a lot of the computation that used to take place within the browser off to our own servers, so that instead of having to do a lot of heavy lifting using JavaScript inside the browser, we now do it on our fast servers.
So we offload this effort off of the visitor's device, and by offloading this processing, it frees up the device to more quickly respond to the visitor's interaction. So if you're asking specifically how that happened, well, that's a really short explanation of what we did.

52:01 Brett: Thank you. And by the way, there was a secondary question that was asked, and I want to grab this too; I think it's also a pretty good one. Wix has comparable performance to Shopify in the US, but not in other places. This is not to compare with Shopify, but more about the horizon: are there other geographies that you can maybe speak on, where Wix is working on increasing the performance? Is there anything you want to touch on there?

52:33 Dan: So, for sure. First of all, the reality is that some geographies will have better performance than others, if for no other reason than that mobile networks are better in some places than in others, or that the device the average person has in one country might be faster than the devices people have in other countries. And that's something over which we have no control, although, and here I don't actually want to go into the details, we are looking at ways to mitigate even that. That being said, there are definitely things that we can do and that we are doing. For example, way back when I joined Wix, we effectively had one data center in the US, which would serve the entire world. Now Wix has many data centers spread around the globe, which is obviously better for reliability and uptime, but it's also better for performance, because you will be served by a data center that's closer to you.
Beyond that, we are working with CDNs (Content Delivery Networks) to quickly deliver content: stuff like Fastly or Akamai, or Google has a CDN; there are various CDN providers out there. And one of the cool and unique things that we are doing is that we actually try to optimize the CDN per geography. A particular CDN might be better in the States, but another CDN might actually be better in India. So we actually measure the CDN performance that we are getting in particular geographies, and if we see that one CDN is better than the other, we will automatically switch. So yes, we are working hard to improve performance around the globe. I can again give a concrete example: performance in Australia has improved dramatically over the past months and years because of such changes that we have made in our infrastructure.

54:55 Brett: That's interesting, because somebody actually asked that: are these performance improvements only in the US? They're talking about clients that are in the UK, Australia, and New Zealand. So I guess what you just said sort of answers that question as well.

55:11 Dan: Well, yes, we improved around the globe. And by the way, we just showed the graph from the HTTP Archive website for the US. But go in there, like I said, we should provide the link, and select the UK instead, and you will see the same thing. You will see that the graph is going up and up, and that we are much better than most of our competitors, if not all.

55:41 Brett: And here's a great question that was a follow-up: what other resources are there, Dan? Are there other sites and tools? If you can give me a few of those links, I'll add them to the description, so the Partners can sort of—

55:54 Dan: Yeah.
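The per-geography CDN switching Dan describes can be sketched as picking, for each region, the provider with the lowest measured latency. The CDN names and numbers below are made up for illustration; the real selection logic is not public:

```javascript
// Given latency measurements like { region, cdn, medianLatencyMs },
// pick the fastest CDN per region.
function fastestCdnByRegion(measurements) {
  const best = {};
  for (const { region, cdn, medianLatencyMs } of measurements) {
    if (!best[region] || medianLatencyMs < best[region].medianLatencyMs) {
      best[region] = { cdn, medianLatencyMs };
    }
  }
  return best; // e.g. { US: { cdn: "cdn-a", ... }, IN: { cdn: "cdn-b", ... } }
}
```

Run continuously against fresh measurements, a routing layer could consult this table and switch a region's traffic whenever a different provider pulls ahead there.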
So we saw that HTTP Archive site; if you want to sell Wix as a platform, not a specific site, that's just a great research tool to use. Or you can use Google PageSpeed Insights or GTmetrix if you want to measure the performance of a particular website, even one that's not your own, to do comparisons. If you're looking at your own Core Web Vitals data, there's Google Search Console, the Core Web Vitals tab in it. And of course our own Site Speed dashboard, which you can use to look at performance data for your website on Wix.

56:37 Brett: So this has been incredible. And Dan, I just have to say, you really are a GOAT, the greatest of all time, man. It's incredible, because we ask you a question and you just amazingly explain it and go into so much detail. Like I said, there are keyboards smoking, and pencils and pads on fire from all the notes; we're definitely gonna have to dissect this. This has been absolutely incredible. So there's more to come; Wix isn't done. But I want to thank you, first off, for taking the time to come in and sit with us and answer our questions. This is such great content for our Partners. They love this, and I appreciate it.

57:18 Dan: You're very welcome. I enjoyed this a whole lot myself. As you know, I love engaging with the Community. By the way, I'm on Twitter, and you can hit me up there. I'm occasionally on Facebook, not much, but you can also try to drop a question there. And you can always contact Brett and our amazing Support team. One of the things that we've done in terms of performance is train a lot of our support people to be able to answer support questions related to performance. So it's not just me, not by a long shot.

57:57 Brett: And by the way, huge shout out to them. They've been in the chat, they've been answering questions. Amazing job.
I saw a lot of Partners comment on how great their interaction with Support was. 100% agree. Awesome team efforts all around, right?

Dan: Exactly.

Brett: Look, you're getting some shout outs. Michael wants everybody to follow you on Twitter, because you tweet about interesting stuff.

58:22 Dan: Yeah, it's Dan Shappir on Twitter, just so you know. So feel free.

58:27 Brett: Awesome. Dan, thanks a lot. And by the way, I do sometimes see comments in the Community where you write dissertations, and it just blows people's minds. Okay, so awesome. Thanks, y'all. I'll see y'all in the Community. If you're not in the Community, what are you doing? You've got to get in there with us. So, thanks, Dan. Thanks, Partners, and I'll see you all out there. Have a great day. Bye.
