
Core Web Vitals: Real user metrics vs. Lab data

Author: Jamie Indigo


Before May 2020, your site’s performance depended on who you asked. Different tools, platforms, and services each offered their own definition of what “fast” meant and how it was measured.


This led to a number of problematic “performance-enhancing” solutions.


Page taking too long to achieve DOMContentLoaded? Why not punch out a bunch of holes and load content in later! It’s not like actual humans will be stuck with the terrible headache that comes when a page suddenly wedges itself between a header and a footer.


Screenshots showing the loading sequence of the mobile version for https://www.hercrentals.com/equipment/category.html/surface-prep.html, with content loading sporadically, causing layout shifts.

Actually, it is like that.


When workarounds to subvert technical definitions of performance make the experience worse for actual humans, it fundamentally undermines the entire endeavor.


So we needed a unified, human-centric definition of performance. Enter Core Web Vitals.




What are Core Web Vitals?


Core Web Vitals were originally announced by the Chromium team in May 2020. Chromium is a free and open-source web browser project that powers major browsers like Chrome, Opera, and Microsoft Edge. It also powers the web rendering services used by Google Search and Bing Search.


The Chromium team studied business key performance indicators and how they relate to various performance metrics. The data pointed to a few key takeaways:


  • Users have an incredibly short attention span.

  • If a user can’t tell a page is loading, they leave.

  • If tapping a button doesn’t seem to work, the user leaves.

  • If a user accidentally taps the wrong button because everything suddenly shifted, they leave and never come back.


This led the team to craft the first iteration of the unified human-centric metrics we now refer to as Core Web Vitals (CWV).


01. Is it loading? — Largest Contentful Paint

02. Can I interact? — First Input Delay

03. Is it visually stable? — Cumulative Layout Shift


For more details on these specific metrics, jump ahead to the Core Web Vitals metrics defined section.

These changes became part of Google’s ranking systems as the Page Experience signal (more on this below).


Core Web Vitals: Real User Monitoring data vs. Lab data


As alluded to above, there are two types of data we can use to improve site performance and SEO outcomes (real user data that comes from the field and lab data). Before I explain the nuances of each, let’s look at why there needs to be two types of data in the first place.


In order to troubleshoot effectively, you need detailed data. In order to maintain user privacy, you need to limit the amount of detailed data you collect.


It’s a balancing act.


A graphic representing user privacy on one side and user data for marketing on the other side, with a slider located in the middle representing balance between the two considerations.

Each page load has a unique set of circumstances. A user trying to buy a new umbrella on their smartphone while waiting at a rainy metro station will have a much different experience than the dev sitting in the office telling you, “It works on my machine.”


The balance between protecting real user data and providing insights means that Core Web Vitals has two modes: the limited metrics available as Real User Monitoring (RUM) data and the more detailed metrics available as Lab data.


Real User Monitoring and Lab data are both types of CWV data—the differences come from how they are gathered and used.


Real User Monitoring data

Source: The Chrome User Experience Report (CrUX), which provides metrics for how real-world Chrome users experience page loads for a URL.


Aliases: Field data, CrUX data


Used in: Search Console's Page Experience Report, CrUX Dashboards

CrUX data is collected by Chrome and published on the second Tuesday of every month in a publicly accessible dataset in BigQuery, Google’s platform for managing and analyzing data at scale.


In order for a user’s page load metrics to be included in CrUX data, the user must:


  • Enable usage statistic reporting

  • Sync their browser history

  • Not have a sync passphrase set (so that Google can read the synced data)

  • Use a supported platform: desktop versions of Chrome (on Windows, macOS, ChromeOS, or Linux) or Chrome on Android (including native apps using Custom Tabs and WebAPKs).


This means that not all Chrome page loads are included. Some notable page loads left out of CrUX are:


  • Chrome on iOS

  • Native Android apps using WebView

  • Other Chromium browsers (like Microsoft Edge, for example)


In order for a page to appear in the dataset, it must:


  • Be publicly discoverable (i.e., crawlable and indexable, not hidden behind a login)

  • Receive enough distinct page loads to meet CrUX’s minimum popularity threshold*


*CrUX strips easily recognized fragments and parameters like UTMs from URLs and groups them together. If your site uses parameters to differentiate pages (i.e., ?productID=101) instead of unique URLs, this can result in the URLs being grouped together.


The benefits of Real User Monitoring Data

As with any methodology, RUM data has its strengths and potential weaknesses. Some of its more compelling advantages include:


  • Inclusion in the Helpful Content Update — The Page Experience Ranking Signal was folded into the Helpful Content update in April 2023.

  • Captures true real-world user experience — You test your site using Lighthouse in your browser or a technical SEO crawler and everything looks great! But then the latest batch of CrUX data comes out and it says your site is a dumpster fire. Testing from your office can’t emulate all the variables (viewport, device processing capacity, network connectivity) experienced by real users.

  • Enables correlation to business key performance indicators — The key to getting both stakeholder and developer buy-in is showing the results of your work. As you prioritize improvements, tie them to a quantifiable KPI and share the results!


The potential drawbacks of Real User Monitoring data

For all its advantages, RUM data isn’t perfect. Its drawbacks come primarily in the form of limitations:


  • RUM data may not be available for every page — If a page doesn't meet a minimum number of page loads, it’s omitted for user privacy.

  • Only a restricted set of metrics is available — Three metrics can only get you so far. CWVs are a simplified representation of your rendering strategy.

  • Limited debugging capabilities — In order to resolve a poor CWV score, you'll need to get under the hood. The big three metrics only give you a place to start, not the root cause.


RUM data availability

Real User Monitoring data pulled from the BigQuery dataset is available in multiple locations. These include:


  • Google Search Console — Aggregates sitewide performance, groups issues together based on behavior patterns, and provides example URLs.


A screenshot of the Google Search Console Page experience overview report, showing sitewide performance.

  • CrUX API — Allows you to query URL-level data for the most recent month’s dataset (see the query sketch below).

  • CrUX History API — Allows you to query the previous six months of historical CrUX trends.

  • PageSpeed Insights — Provides URL-level and origin summary data along with Lab data for the test page load. As an extra bonus, the CWV assessment for the URL is shareable via a link!


A screenshot of the PageSpeed Insights dashboard showing passing Core Web Vitals for a Wix support page about Core Web Vitals.
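
If you’d rather pull that RUM data programmatically, a CrUX API call looks roughly like the sketch below. It assumes Node 18+ (which has a global fetch); the API key and target URL are placeholders you’d supply yourself.

```typescript
// Minimal CrUX API query sketch. Get an API key via the Chrome UX Report API docs;
// CRUX_API_KEY and the URL below are placeholders.
async function queryCrux(url: string): Promise<void> {
  const apiKey = process.env.CRUX_API_KEY;
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        url,                  // swap for "origin" to get origin-level data instead
        formFactor: 'PHONE',  // PHONE, DESKTOP, TABLET, or omit for all form factors
      }),
    }
  );

  if (!response.ok) {
    // A 404 here usually means CrUX doesn't have enough samples for this URL.
    console.error('CrUX API error:', response.status, await response.text());
    return;
  }

  const { record } = await response.json();
  // record.metrics holds histograms and p75 values for LCP, CLS, INP, and more.
  console.log(JSON.stringify(record.metrics, null, 2));
}

queryCrux('https://www.example.com/some-page');
```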


You can also collect Core Web Vitals for every real user page load by running Google’s web-vitals library on your site. This tiny modular library enables measurement for all the Web Vitals metrics on real users in a way that accurately matches how they’re measured by Chrome and reported to other Google tools.
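A minimal sketch of wiring that up is below; it assumes a hypothetical /analytics endpoint that collects the beacons, and the helper names match recent versions of the web-vitals library.

```typescript
// npm install web-vitals
import { onCLS, onINP, onLCP } from 'web-vitals';

// "/analytics" is a placeholder for wherever you collect RUM beacons.
function sendToAnalytics(metric: { name: string; value: number; id: string }) {
  // sendBeacon survives tab closes and navigations better than fetch for exit metrics.
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

// Each callback fires when its metric is finalized (or updated) for the current page load.
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```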


Note: Wix site owners can view real user data via the Site Speed Dashboard, which shows your website’s Core Web Vitals as well as a performance estimation from Google’s PageSpeed Insights, even if your site doesn't meet GSC’s traffic thresholds.


What should I do if RUM data isn’t available for a URL?

Not all pages will have enough data to be included in the CrUX dataset. If you test a page and see “The Chrome User Experience Report does not have sufficient real-world speed data for this page,” don’t fret.


You have two options:


01. Test other URLs that use the same template/resources as the unavailable URL.

02. Use Lab data instead.


Lab Data

Source: Lighthouse, Google’s automated, open-source tool for improving web performance.


Aliases: Lighthouse data, Synthetic data


Used in: Debugging, QA


The benefits of Lab data

When you discover an issue affecting Real User Monitoring metrics, Lab data gives you additional, deeper insight. Lab data:


  • Is helpful for debugging performance issues — RUM data’s three high-level metrics can only get you so far. With lab data, you can dig deeper into key technical moments that affect the big picture.

  • Allows for end-to-end and deep visibility into the UX — Lighthouse allows you to test user flows. It utilizes Puppeteer to script page loads and trigger synthetic user interactions, capturing key insights during those interactions. This means that performance can be measured during page load and during interactions with the page.

  • Offers a reproducible testing and debugging environment — We can’t fix what we can't see. Lab data allows you to recreate issues affecting real users in a way that allows engineers to replicate and isolate the variables.


The potential drawbacks of Lab data

If you've ever raised an issue only to hear “it works on my machine,” then you’ve experienced the drawbacks of Lab data—it exists in a digital petri dish.


  • Might not capture real-world bottlenecks — The conditions of a local environment can’t emulate all the variables impacting real users (such as device usage or network connection).

  • Cannot correlate against real-world page KPIs — Each business has its own unique goals. Lab data alone can’t help improve ROI or be matched 1:1 with KPIs.

  • Can show tests passing on that one dev’s machine — No one likes their ticket being marked “will not do.” Much of the variability in your overall Performance score and metric values comes from the test environment rather than Lighthouse itself. Browser extensions, antivirus software, and A/B tests are just some of the reasons Lab data can fluctuate.


Additional metrics available in Lab data

In addition to the metrics comprising Core Web Vitals, Lab data also provides metrics helpful for diagnosing underperforming CWVs!

These include:


  • Time to First Byte (TTFB) — How long it takes your server to respond to a request for the page. Slow server response times are one possible cause for long page loads.

  • First Contentful Paint (FCP) — How long it takes the browser to render the first piece of DOM content after a user navigates to your page. Text, images, non-white <canvas> elements, and SVGs on your page are considered DOM content; anything inside an iframe isn’t included.

  • Speed Index — A calculated metric that shows how quickly the contents of a page are visibly populated.


Lab data availability

Lighthouse powers auditing tools across the internet, including:


  • PageSpeed Insights — As mentioned earlier, this tool allows you to query a single page.

  • PageSpeed Insights API — The programmatic version of the above, which allows you to pull Lab data in bulk.

  • Chrome DevTools — Built into Chrome, this panel allows you to audit a single page at a time.


A screenshot of the Lighthouse results for a web page in Chrome DevTools.

  • Node CLI — Allows you to programmatically audit pages using a headless version of Chrome.
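
If the Node route sounds appealing, here’s a rough sketch of auditing a page programmatically with Lighthouse and chrome-launcher. The target URL is a placeholder, and the exact import style can vary slightly between Lighthouse versions.

```typescript
// npm install lighthouse chrome-launcher
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,               // connect Lighthouse to the launched Chrome instance
    output: 'json',
    onlyCategories: ['performance'],
  });

  if (result) {
    // Audits are keyed by ID; numericValue is in milliseconds for timing metrics.
    console.log('LCP (ms):', result.lhr.audits['largest-contentful-paint'].numericValue);
    console.log('TBT (ms):', result.lhr.audits['total-blocking-time'].numericValue);
    console.log('CLS:', result.lhr.audits['cumulative-layout-shift'].numericValue);
  }
  await chrome.kill();
}

audit('https://www.example.com/');
```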


Core Web Vitals: Origin vs. URL vs. Platform


CWV is available for individual URLs, origins, and for platforms. Each level of data is uniquely suited for different purposes:


  • URL-level data is used for the Page Experience ranking signal. Use URL-level data when optimizing for rankings.

  • Origin-level data is an aggregate for all URLs on a given origin, which includes the scheme (e.g., https://) and hostname. An origin could be https://www.example.com or https://subdomain.example.com. If your site resolves without www, it could be https://example.com. Origin-level data is typically used for high-level monitoring (via the CrUX Dashboard) or competitive research (via the Chrome UX Report Compare Tool).

  • Technology-level data represents the aggregate metrics across sites using a specific technology platform.


The Core Web Vitals Technology Report chart, showing the percentage of origins having good CWV between January 2020 and January 2023, with Wix and Shopify having the highest percentages (~49%).

If you’re considering re-platforming your site and want to consider which of the major technologies could give you a competitive edge, HTTP Archive’s Core Web Vitals Technology Report looks at aggregated performance across 2,000 technologies.


Core Web Vitals metrics defined


Below are the CWV metrics that present-day SEOs are concerned with, but know that these metrics are designed to evolve. The Chrome team actively announces new experimental metrics and solicits feedback before making changes.


Largest Contentful Paint (LCP)

  • Represents: Is the page loading?

  • Goal: LCP < 2.5 seconds

  • Available as: RUM and Lab Data


Largest Contentful Paint (LCP) measures the time from when the page starts loading until the largest image or text block is rendered within the initial viewport.


A graphic showing that LCP must occur in under 2.5 seconds for a “good” score, between 2.5 and 4.0 seconds for a “needs improvement” score, or longer than 4.0 seconds for a “poor” score.
Source: Google.

For a good score, LCP must be 2.5 seconds or less.


The node [read as: element on page] representing the Largest Contentful Paint tends to follow a page template.


For example, say you have an eCommerce site that uses the same template for all product detail pages, and the product image is the largest visual element in the initial viewport. Any optimization to how that product image loads will benefit most of your product pages.
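
If you want to confirm which node is being reported as the LCP candidate on a given template, here’s a small sketch using the standard PerformanceObserver API that you can paste into the DevTools console:

```typescript
// largest-contentful-paint entries fire as bigger candidates render;
// the last entry before user input is the final LCP element.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // `element` isn't in TypeScript's base PerformanceEntry type, hence the widening.
    const lcp = entry as PerformanceEntry & { element?: Element };
    console.log(`LCP candidate at ${entry.startTime.toFixed(0)}ms:`, lcp.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```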


Cumulative Layout Shift (CLS)

  • Represents: Is the page visually stable?

  • Goal: CLS < 0.10

  • Available as: RUM and Lab Data


Cumulative Layout Shift (CLS) measures the total of all individual, unexpected layout shifts that occur over the entire lifespan of the page. An unexpected layout shift occurs any time a visible element changes its position without user interaction.


A graphic showing that cumulative layout shift must be under 0.1 for a “good” score, between 0.1 and 0.25 for a “needs improvement” score, or greater than 0.25 for a “poor” score.
Source: Google.

For a good score, CLS must be 0.1 or less.


Cumulative Layout Shift tends to trace back to a specific element. If you’re having trouble reproducing CLS issues for your site, it’s probably your cookie banner or a promotional prompt pushing content down the screen.
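
To catch the culprit in the act, here’s a small PerformanceObserver sketch that logs each unexpected shift and the elements that moved; paste it into the DevTools console and reload:

```typescript
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // layout-shift entries aren't fully typed in the base DOM lib, hence the widening.
    const shift = entry as PerformanceEntry & {
      value: number;
      hadRecentInput: boolean;
      sources?: Array<{ node?: Node }>;
    };
    // Shifts right after user input are expected and excluded from CLS.
    if (shift.hadRecentInput) continue;
    console.log(
      `Layout shift of ${shift.value.toFixed(4)} caused by:`,
      shift.sources?.map((source) => source.node)
    );
  }
}).observe({ type: 'layout-shift', buffered: true });
```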


If you’d like to learn more about the metrics, Wix’s Support Center offers additional details and insights.


First Input Delay (FID)

  • Represents: Can I interact with the page?

  • Goal: FID < 100ms

  • Available as: RUM

  • Ineffective on: Single Page Applications (SPAs)

  • Deprecation date: March 2024, to be replaced by Interaction to Next Paint (below)


First Input Delay (FID) measures the time from when a user first interacts with a page (e.g., by clicking on a link or button) until the browser is actually able to process that interaction.


A graphic showing that first input delay must be under 100ms for a “good” score, between 100 and 300ms for a “needs improvement” score, or longer than 300ms for a “poor” score.
Source: Google.

For a good score, FID must be 100 milliseconds or less.


One of the most common causes of a long First Input Delay is JavaScript keeping the browser’s main thread busy. The browser can’t respond to a user interaction because it’s too busy running scripts called by the page.
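
To see where that main-thread time goes, here’s a small sketch that logs long tasks (main-thread work lasting longer than 50ms) via PerformanceObserver; paste it into the DevTools console:

```typescript
// Long tasks are the usual reason the browser can't respond to input promptly.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log(
      `Long task: ${entry.duration.toFixed(0)}ms, starting at ${entry.startTime.toFixed(0)}ms`
    );
  }
}).observe({ type: 'longtask', buffered: true });
```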


Interaction to Next Paint (INP)

  • Represents: Can I interact with the page? v2.0

  • Goal: INP < 200ms

  • Available as: RUM

  • Promotion date: March 2024, to replace First Input Delay


Within a year of CWV’s launch, the need for a better responsiveness metric was clear. First Input Delay only captures the first interaction on a page, so its accuracy past the initial load, especially for single page applications, is dubious.


INP measures how long it takes for visual feedback to follow user input (think tapping a thumbnail to see a product image, typing information into a form, or clicking an “Add to Cart” button). The value reported is roughly the longest interaction latency encountered on the page.


A graphic showing that the interaction to next paint must be under 200ms for a “good” score, between 200 and 500ms for a “needs improvement” score, or longer than 500ms for a “poor” score.
Source: Google.

For a good score, INP must be 200ms or less.


Interactivity is primarily powered by JavaScript, though it can sometimes be handled with CSS alone. The JavaScript optimization concepts used for FID will remain relevant for its replacement.
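
As a rough illustration of one such concept, here’s a sketch of breaking long-running work into chunks and yielding back to the main thread between them so the browser can respond to input; items and processItem are placeholders for your own logic.

```typescript
// Resolve on the next macrotask, giving the browser a chance to paint and handle input.
function yieldToMain(): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks<T>(
  items: T[],
  processItem: (item: T) => void
): Promise<void> {
  let lastYield = performance.now();
  for (const item of items) {
    processItem(item);
    // Yield roughly every 50ms so no single task blocks input handling for long.
    if (performance.now() - lastYield > 50) {
      await yieldToMain();
      lastYield = performance.now();
    }
  }
}
```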


The differences between RUM and Lab metrics


The metrics comprising Core Web Vitals differ slightly depending on whether you’re using RUM or Lab data.


First Input Delay requires a real user interaction to be measured, and Lab data can’t currently replicate one accurately. Instead, Lab data uses Total Blocking Time, which has its own thresholds.

Metric                    | RUM Data | Lab Data
--------------------------|----------|---------
Largest Contentful Paint  | X        | X
First Input Delay         | X        |
Interaction to Next Paint | X        |
Total Blocking Time       |          | X
Cumulative Layout Shift   | X        | X


A practical SEO workflow for optimizing Core Web Vitals


It’s important to remember that optimizing for CWVs affects all your users—every medium, channel, and device will benefit. If you’re looking for where to start, here’s a quick framework as a jumping-off point:


01. Start with Google Search Console’s Core Web Vitals Report. You should always start with RUM data, since that reflects what real users are actually experiencing.


A screenshot of the Core Web Vitals report in Google Search Console, showing statuses for mobile and desktop URLs.

Google Search Console groups together pages with similar issues. As you click through the mobile or desktop reports and into a specific metric, you’ll be able to see a count of pages along with sample URLs and RUM data for them (if available).

02. Run PageSpeed Insights to join Lab data to the RUM data. Comparing Lab data with RUM data side by side provides a more detailed look at the mechanics causing the underperforming metric (a sketch of pulling both via the API follows after these steps).


A screenshot of the pagespeed insights dashboard for a page, showing both real user experience and a performance diagnosis using lab data.

Scroll down the page. Just below the render screenshots, you’ll find a helpful selector that lets you see which Opportunities and Diagnostics are relevant for the metric you’re targeting.


A screenshot of the opportunities section in PageSpeed Insights, showing two opportunities (to reduce unused JavaScript and unused CSS) and how much it’ll improve page load time.
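
As mentioned in step 02, you can pull both datasets for a URL in a single call to the PageSpeed Insights API. A minimal sketch, with the API key and URL as placeholders:

```typescript
// The PSI API works without a key for very light usage, but a key is recommended
// for anything automated.
async function pageSpeed(url: string, apiKey: string): Promise<void> {
  const endpoint = new URL('https://www.googleapis.com/pagespeedonline/v5/runPagespeed');
  endpoint.searchParams.set('url', url);
  endpoint.searchParams.set('strategy', 'mobile');
  endpoint.searchParams.set('category', 'performance');
  endpoint.searchParams.set('key', apiKey);

  const data = await (await fetch(endpoint)).json();

  // loadingExperience = field (CrUX/RUM) data; lighthouseResult = lab data for this run.
  console.log(
    'Field LCP (p75, ms):',
    data.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile
  );
  console.log(
    'Lab LCP (ms):',
    data.lighthouseResult?.audits?.['largest-contentful-paint']?.numericValue
  );
}

pageSpeed('https://www.example.com/', 'YOUR_API_KEY');
```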

Performance matters because it matters to humans


The same audience you’re trying to reach is constantly being bombarded by companies wanting the same thing from them. All these calls to action are taxing.


Providing an experience that delivers what it says on the tin, in an easy-to-use format, is key to keeping your users engaged and coming back.


As you work on Core Web Vitals, remember that these efforts benefit every user—regardless of the traffic’s referrer or channel.


 

Jamie Indigo

Jamie Indigo isn’t a robot but speaks bot. As a technical SEO, they study how search engines crawl, render, and index. They love to tame wild JavaScript and optimize rendering strategies. When not working, Jamie likes horror movies, graphic novels, and D&D. Twitter | Linkedin

