Google decided in 2020 that page experience matters for ranking, and gave us three numbers to summarise it. Those three numbers are the Core Web Vitals. The names are intimidating (LCP, INP, CLS) but each measures something a normal person would care about: did the page load fast enough, does it react when I click, does it jump around while I read.
This article explains what each one actually measures, what tends to break each one, and how to fix it. No buzzwords.
LCP: Largest Contentful Paint
What it measures: how many seconds it took for the largest visible element on screen to finish loading. That element is usually a hero image, a big headline, or a video poster.
The threshold: under 2.5 seconds is "good", 2.5 to 4 is "needs improvement", over 4 is "poor".
What tends to break it:
- A large hero image that is bigger than it needs to be (4 MB JPEG when 200 KB AVIF would do).
- A hero image loaded via lazy-loading. Lazy-loading is for images below the fold; if you lazy-load the hero, the browser waits before fetching it and LCP shoots up.
- A hero image loaded from a slow third-party CDN.
- The page being slow to start rendering at all (server slow, render-blocking JavaScript or CSS in the head).
- A web font that the browser waits for before painting the text that turns out to be the LCP element.
How to fix it:
- Compress the LCP image properly. AVIF or WebP at quality 70-80 is usually 1/5 the size of an unoptimised JPEG.
- Add `fetchpriority="high"` to the hero image tag. This tells the browser "fetch this before everything else".
- Make sure the LCP image is not a CSS background. CSS background images are discovered late by the browser; use a real `<img>` tag.
- Preload web fonts with `rel="preload"` if a heading is the LCP element.
- Serve from a CDN if your origin is far from your audience.
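Put together, the LCP fixes above look roughly like this in the page head and hero markup. This is a sketch: the file paths, font name, and dimensions are placeholders.

```html
<head>
  <!-- Preload the web font in case a heading is the LCP element -->
  <link rel="preload" href="/fonts/heading.woff2" as="font"
        type="font/woff2" crossorigin>
</head>
<body>
  <!-- A real <img>, properly compressed, fetched at high priority.
       Do NOT add loading="lazy" here: the hero is above the fold. -->
  <img src="/img/hero.avif" alt="Product hero"
       width="1200" height="600" fetchpriority="high">
</body>
```

The explicit `width` and `height` also reserve space, which helps CLS (see below).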
LCP is the easiest of the three to fix. 80% of the gain comes from doing the image right.
INP: Interaction to Next Paint (replaced FID in 2024)
What it measures: when a user clicks, taps, or types, how many milliseconds pass before the page visibly responds. It is measured across the whole session, and the worst event (or near-worst) is reported. Replaces the older First Input Delay metric, which only measured the first interaction.
The threshold: under 200 ms is "good", 200 to 500 is "needs improvement", over 500 is "poor".
What tends to break it:
- Heavy JavaScript running on the main thread that blocks event handlers from firing.
- Third-party scripts (analytics, ads, chat widgets, A/B testing) that hog the CPU when the user clicks.
- Event handlers that do too much synchronous work (parse 50 KB of JSON, run a layout-triggering DOM manipulation, sort a large array).
- React/Vue components that re-render most of the page on a small interaction.
How to fix it:
- Move expensive work off the main thread (Web Workers, `requestIdleCallback`).
- Defer or async third-party scripts that are not critical for the first interaction.
- Break long tasks into smaller chunks. Browsers measure tasks longer than 50 ms as "long tasks"; aim to keep all your work below that.
- Use `<button>` and `<a>` correctly so the browser handles the visual feedback for free; do not reimplement clickable areas with `<div onclick>` and 200 lines of JS.
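The "break long tasks into chunks" advice can be sketched as follows. `yieldToMain` and `processInChunks` are hypothetical helper names; the core trick is awaiting a zero-delay timeout between chunks so pending clicks get handled in between.

```javascript
// Wait for a macrotask, letting the browser run pending event
// handlers before the next chunk of work starts.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array without producing one long main-thread task.
async function processInChunks(items, handleItem, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handleItem(item));
    }
    await yieldToMain(); // keep each task under the 50 ms threshold
  }
  return results;
}
```

Pick the chunk size so each chunk finishes well under 50 ms on a slow device, not just on your laptop.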
INP is the metric that punishes WordPress sites overloaded with plugins. A site with 20 active plugins, half of which inject JS on every page, will struggle here. Audit, deactivate, replace.
CLS: Cumulative Layout Shift
What it measures: how much the page's content moves around after the initial render. Specifically: the sum of all unexpected layout shifts during the page lifecycle, weighted by how much content moved and by how far.
The threshold: under 0.1 is "good", 0.1 to 0.25 is "needs improvement", over 0.25 is "poor". The values are unitless: each shift is scored as a fraction of the viewport affected multiplied by the fraction of the viewport the content moved.
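To make the unitless score concrete: per the CLS definition, a single shift's score is its impact fraction times its distance fraction, and CLS sums the shifts in the worst short burst of the session. A toy calculation:

```javascript
// Score of a single layout shift.
// impactFraction: share of the viewport occupied by the shifted
//   elements (union of their before and after positions).
// distanceFraction: the largest move distance divided by the
//   viewport's largest dimension.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// A late banner pushes content covering 75% of the viewport down by
// 25% of the viewport height: 0.75 * 0.25 = 0.1875. One shift like
// this already blows past the 0.1 "good" threshold on its own.
```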
What tends to break it:
- Images without explicit `width` and `height` attributes. The browser does not know how much space to reserve, lays out the page without the image, then shifts everything when the image arrives.
- Web fonts loaded with `font-display: swap` and a fallback font that has very different metrics from the web font. The page renders in the fallback, then re-renders when the web font arrives, and everything shifts.
- Ads that load late and inject themselves into the page, pushing content down.
- A "cookie banner" or "newsletter modal" that appears 2 seconds after page load and pushes content down.
- Late-loading components that resize the page (carousels, embeds, dynamic widgets).
How to fix it:
- Always set `width` and `height` on `<img>`, `<video>`, `<iframe>`. Modern browsers use them to compute the aspect ratio and reserve space.
- Use `aspect-ratio` in CSS for containers whose ratio is known.
- For web fonts, use the `size-adjust` and `ascent-override` CSS descriptors to make the fallback font match the metrics of the real font, so the swap is invisible.
- Reserve space for ads with a fixed-height container, even if the ad has not loaded yet.
- Show banners and modals as overlays that do not push content, not as injected blocks.
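The font-metrics fix can be sketched as a tuned fallback `@font-face`. The font names and override percentages below are placeholders: the right values depend on the specific font pair and are usually found with a metrics tool or by trial and error.

```css
/* Hypothetical values: tune per font pair until the swap is invisible. */
@font-face {
  font-family: "Heading Fallback";
  src: local("Arial");
  size-adjust: 105%;
  ascent-override: 92%;
  descent-override: 24%;
}

h1 {
  font-family: "HeadingFont", "Heading Fallback", sans-serif;
}
```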
CLS is the easiest to make worse by accident. Adding one third-party widget is enough.
Where the data comes from
There are two sources for these numbers, and they tell you different things.
Lab data: a synthetic test run by tools like Lighthouse, PageSpeed Insights, WebPageTest. They simulate a user with a specific device and network, run the page, measure the metrics. Useful during development because they are repeatable.
Field data (RUM): real user monitoring. Google collects anonymised metrics from real Chrome users visiting your site (the Chrome User Experience Report, CrUX). The reported value is the 75th percentile over a rolling 28-day window. This is what Google uses as a ranking signal. PageSpeed Insights shows both lab and field data side by side.
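The 75th-percentile aggregation means roughly one slow visit in four is enough to fail a metric. A sketch of how such a percentile is computed over field samples (using the simple nearest-rank method; CrUX's exact interpolation may differ):

```javascript
// p-th percentile of a list of samples, nearest-rank method.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Four LCP samples in ms: three fast users do not hide one slow one
// for long. percentile([1200, 1400, 4800, 5200], 75) -> 4800,
// well past the 2.5 s "good" threshold.
```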
The trap: lab data is fast because the test runs from a clean room with controlled conditions. Field data is from real users with old phones, slow networks, browser extensions, junk in the cache. Field is what matters for SEO. Lab is useful but not authoritative.
If your lab Lighthouse score is 95 and your field LCP is 5 seconds, do not trust the 95. Real users are slow, and Google ranks based on what real users see.
A short order of operations
If you want to fix Core Web Vitals on a site that is currently failing them:
- Get field data from PageSpeed Insights or Search Console. You need to know what your real users see, not your laptop.
- Identify which of LCP/INP/CLS is failing. They have different fixes; do not waste time optimising one that is already green.
- Fix the biggest source of pain first. For LCP, the hero image. For INP, the heaviest third-party script. For CLS, the missing image dimensions.
- Re-measure. Wait a few days for field data to update.
- Iterate.
The other 200 tips you find online are useful but secondary. Get the three above right and most sites move from "poor" to "good" without further work.