Every fast website you visit is fast because of caching. Not better code, not faster databases, not more expensive servers. Caching. The same database query that takes 200 milliseconds the first time takes 0.1 milliseconds the second time, because somebody saved the result. Multiply that across hundreds of queries per page and the whole web makes sense.
Most performance problems are caching problems. Either a cache is missing where it should be, or a cache is hot when it shouldn't be (the user updated content and is still seeing the old version). To debug either case you need to know which caches exist, where they sit, and what each one is responsible for.
There are five main layers between your database and the visitor's eyes. We will walk through them from the database outward.
1. PHP OPcache (or its equivalent for other languages)
Lives inside the PHP interpreter on your web server. Caches compiled PHP bytecode, not data. Every time PHP reads a .php file, it has to parse it into an internal representation before executing it. OPcache stores that compiled form in shared memory so PHP skips the parsing on the next request.
You don't configure this per request. You enable it once in php.ini, give it 128 to 256 MB of memory, and it speeds up every PHP application on the server by 2 to 5 times. WordPress without OPcache runs at roughly half its real speed.
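A reasonable php.ini baseline looks something like this (the values are illustrative starting points, not tuned settings; all directives are real OPcache options):

```ini
; Typical OPcache starting values -- adjust to your server.
opcache.enable=1
opcache.memory_consumption=192        ; MB of shared memory for compiled bytecode
opcache.interned_strings_buffer=16    ; MB for deduplicated string storage
opcache.max_accelerated_files=10000   ; raise if you host many PHP files
opcache.validate_timestamps=1         ; recheck files on disk for changes...
opcache.revalidate_freq=60            ; ...at most once every 60 seconds
```

Setting validate_timestamps to 0 is faster still, but then new code never takes effect until you reset OPcache or restart PHP-FPM, which is exactly the deployment surprise described below.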
The same idea exists in other stacks: Python's __pycache__, Node's V8 code cache, Java's JIT compiler. Different mechanisms, same job: don't recompile the same code on every request.
You only think about OPcache when something goes wrong: deployments where new code doesn't take effect (you forgot to reset OPcache), memory pressure (the cache fills up and starts evicting things). On a healthy server you set it once and forget about it.
2. Object cache (Redis or Memcached)
Lives on the server, separate process from PHP. Caches arbitrary data, usually database query results, by key. WordPress, Drupal, Magento, Laravel, all support an object cache via Redis. The pattern looks like this:
$cache_key = 'user_posts_' . $user_id;
$result = $redis->get($cache_key);
if ($result === false) {    // phpredis returns false on a miss (Predis returns null)
    $result = $db->query('SELECT ...');
    $redis->set($cache_key, $result, 3600);    // keep for one hour
}
The first request runs the query, the next 3,600 seconds of requests skip it. On a typical WordPress homepage you save dozens of MySQL queries with this pattern alone.
Redis is the modern default. Memcached still works fine but Redis has more data structures and persistence. Both are 5 minutes to install. The plugin you put on top of WordPress (Redis Object Cache, W3 Total Cache, LiteSpeed Cache, Object Cache Pro) translates application calls into Redis calls.
The thing to know: object cache does not reduce the number of PHP processes that handle a request. PHP still wakes up, runs your code, hits Redis instead of MySQL. It is faster, but PHP-FPM is still working. To skip PHP entirely, you need the next layer.
3. Page cache (full page HTML stored as a file or in memory)
This is the layer that turns a 1,500 millisecond WordPress request into a 50 millisecond static file response. Once a page has been generated, you save the complete rendered HTML somewhere fast. The next visitor for that exact URL gets the saved HTML directly, with no PHP, no database, no nothing.
Two ways to implement it on a typical Linux server:
File-based: WP Rocket, WP Super Cache, LiteSpeed Cache. Generated HTML is written to wp-content/cache/. Your web server serves it as if it were a static file.
Server-level: nginx FastCGI cache or nginx + Redis (the srcache_nginx module that WordOps uses by default). The HTML is stored in nginx's own memory or in Redis, and nginx serves it without ever calling PHP.
The file-based version is simpler and works on shared hosting. The server-level version is faster, has lower memory footprint per request, and survives PHP being broken (your homepage still loads if PHP-FPM crashes).
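A minimal sketch of the server-level variant with nginx's FastCGI cache. The zone name, socket path, and cookie pattern are assumptions for a WordPress-style setup; the directives themselves are standard ngx_http_fastcgi_module configuration:

```nginx
# Cache zone: 100 MB of keys, entries idle for 60 minutes are dropped.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;

server {
    set $skip_cache 0;
    # Never serve cached pages to logged-in users or for POST requests.
    if ($http_cookie ~* "wordpress_logged_in") { set $skip_cache 1; }
    if ($request_method = POST)                { set $skip_cache 1; }

    location ~ \.php$ {
        fastcgi_cache WORDPRESS;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 302 10m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;
        # Exposes HIT/MISS/BYPASS so you can verify the cache is working.
        add_header X-FastCGI-Cache $upstream_cache_status;
    }
}
```

The X-FastCGI-Cache header is the debugging hook: a page that always shows MISS or BYPASS is the "page cache is always missing" symptom described later.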
The hard part is invalidation. When you publish a post, the page cache for the homepage, the category page, the archive, the tag pages, the RSS feed, all need to expire. Plugins handle most of this. They do not always handle it perfectly. The classic bug is "I changed the price and customers see the old price for 24 hours" because the page cache was not flushed when the price field was updated.
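One way to sidestep enumerating every affected URL is key versioning: each cached key embeds a group version number, and publishing bumps the version so every old key becomes unreachable at once. A minimal sketch, using a plain array in place of Redis (cache_get, cache_set, and cache_incr here map to the phpredis calls $redis->get(), $redis->set(), and $redis->incr()):

```php
<?php
// Group invalidation by key versioning -- a sketch, not a plugin's actual code.
$store = [];

function cache_get(array &$s, string $k) { return $s[$k] ?? false; }
function cache_set(array &$s, string $k, $v): void { $s[$k] = $v; }
function cache_incr(array &$s, string $k): int { return $s[$k] = (int)($s[$k] ?? 0) + 1; }

// Every cached page key embeds the current version of its group.
function page_key(array &$s, string $group, string $url): string {
    $v = cache_get($s, "ver_$group");
    if ($v === false) { $v = cache_incr($s, "ver_$group"); }
    return "page_{$group}_v{$v}_" . md5($url);
}

$k1 = page_key($store, 'blog', '/');
cache_set($store, $k1, '<html>old homepage</html>');

// Publishing a post bumps the group version once; every key that embeds the
// old version becomes unreachable, with no per-URL purge list to maintain.
cache_incr($store, 'ver_blog');

$k2 = page_key($store, 'blog', '/');  // new key: the next request misses and regenerates
```

The stale entries are never deleted, only orphaned; Redis's TTL or eviction policy reclaims them. That trade (wasted memory for instant, complete invalidation) is why the pattern is popular.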
If your site is dynamic (logged-in users, shopping carts, personalised content) you cannot cache the whole page. You either cache fragments, or you cache only for anonymous visitors and let logged-in users hit PHP.
4. CDN (content delivery network)
A CDN is a network of servers spread across the world that sit between your origin server and your visitors. The visitor in Berlin connects to a CDN node in Frankfurt instead of to your server at Hetzner in Helsinki. The CDN either has the content cached and serves it locally, or it goes back to your server and stores the answer for next time.
What a CDN actually accelerates:
- Static assets (images, CSS, JS, fonts, video). Always. By default. This is the easy 80% of the gain.
- HTML pages, but only if you configure it. Cloudflare's default for HTML is cf-cache-status: DYNAMIC, meaning "we don't cache HTML unless you tell us to". You enable it with a Cache Rule (or the older Page Rules) on your domain.
Without HTML caching at the CDN, every page request still hits your origin. With HTML caching at the CDN, the request never even reaches your server for cached pages, which means your server's load is divided by 10 or 20 on a busy site.
The complication is the same as page cache: invalidation. When you publish, you need to purge the CDN. Cloudflare and Fastly have APIs and plugins. BunnyCDN has a per-URL purge button. Some CDNs require you to wait for the TTL to expire, which is not great if your TTL is 24 hours.
A second thing CDNs give you: TLS termination at the edge. Your origin can serve plain HTTP to the CDN over a private connection, and the CDN handles HTTPS to the visitor. This saves CPU on your origin and lets you offload security headers, redirects, rate limiting to the edge.
5. Browser cache
The visitor's own browser stores files locally. The browser asks your server (or the CDN) "do you have a newer version of style.css than the one I cached on Tuesday?" and the server answers either "no, use yours" (304 Not Modified, almost free) or "yes, here it is" (200 OK with the new file).
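The decision behind that exchange is simple enough to sketch. Here it is as a function, assuming strong ETag comparison only (real servers also handle If-Modified-Since and weak validators):

```php
<?php
// The conditional-request decision: 304 when the client's cached copy
// is still current, 200 with a full body otherwise.
function respond(string $current_etag, ?string $if_none_match): int {
    if ($if_none_match !== null && $if_none_match === $current_etag) {
        return 304; // Not Modified: "no, use yours", the body is skipped
    }
    return 200;     // "yes, here it is", full response with the new file
}
```

A 304 still costs a round trip, which is why the immutable directive below exists: for hashed filenames even the check is skipped.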
You control browser cache through HTTP headers:
- Cache-Control: public, max-age=31536000, immutable for assets with a hash in the filename (style.abc123.css). Cache for a year, never check.
- Cache-Control: public, max-age=3600, must-revalidate for HTML and assets without versioning. Cache for an hour, then check.
- Cache-Control: no-store for personal/sensitive data. Never cache.
- ETag and Last-Modified headers let the browser do conditional requests.
Most static-asset performance gains in real projects come from this layer. A returning visitor with everything cached locally loads your site in 200 milliseconds because the browser only re-fetches the HTML, not the 800 KB of CSS, JS, fonts, images. A first-time visitor pays the full cost.
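As nginx configuration, that header strategy might look like this (the location patterns are illustrative; a real setup would match exactly the filenames your build tool fingerprints):

```nginx
# Fingerprinted assets (style.abc123.css): cache for a year, never revalidate.
location ~* \.[0-9a-f]{6,}\.(css|js|woff2)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}

# Unversioned HTML: cache briefly, then revalidate.
location ~* \.html$ {
    add_header Cache-Control "public, max-age=3600, must-revalidate";
}
```

The split only works if your build actually renames assets on every change; immutable on a file you later overwrite in place is the "customers see the old version" bug in browser-cache form.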
How they stack
In order, from the visitor's request to your database:
- The browser checks its local cache. Hit, done.
- The browser asks the CDN. Hit, done.
- The CDN asks your origin's page cache. Hit, returned without waking PHP.
- PHP wakes up, runs your code, asks Redis for cached query results.
- Redis miss, MySQL is queried, result saved in Redis, code runs, HTML returned.
- HTML stored in the page cache (layer 3), forwarded to the CDN (layer 4), which stores it, forwarded to the browser (layer 5), which stores it.
A first request hits all five layers and is slow. The second identical request from the same browser is served from the browser cache (layer 5) and is instant. A request from a different visitor in a different city is served from the CDN (layer 4) and is fast. The closer to the visitor the cache is, the faster the request.
What this gives you
When something is slow, you don't ask "is the database slow?". You ask "which cache is missing?". The answer tells you what to fix:
- TTFB is 1.5 seconds and never goes down: page cache is missing or always missing (often a cookie or query string is excluding the page from cache). Investigate srcache_fetch_status, cf-cache-status, or your plugin's debug log.
- TTFB drops on the second hit but rises again later: cache is being evicted, often because Redis or the nginx cache zone is undersized.
- Fast for new pages, slow for changed pages: the cache exists but is not being purged when content changes.
- Fast in your country, slow in others: the CDN is not active for HTML, or your origin is not behind it for that route.
The five caches above answer 95% of the performance questions you will hit on a typical PHP site. The remaining 5% is application-level inefficiency (slow queries, N+1, third-party scripts), which is a different conversation.