Most "I lost everything" stories are not because somebody had no backup. They are because the backup was on the same machine that broke, or the same provider that locked the account, or the same disk the ransomware encrypted. The 3-2-1 rule is the simplest framework that prevents all those failures.

The rule

Three copies of the data, on two different types of media, one copy off-site.

That is it. Memorising it takes 10 seconds. The rest of this article is just unpacking each number.

"Three copies"

The original counts as one. The two backups count as two and three. So:

  • The live data on your production server (copy 1)
  • A backup somewhere on the same server or same provider (copy 2)
  • A backup somewhere completely separate (copy 3)

Why three and not two: when the primary fails and you go to restore, the backup might fail too. Drive corruption, a half-finished snapshot, expired retention. With two independent backups, the chance that both are unusable on the day you need them is small enough to ignore: if each one has, say, a 5% chance of being bad, the chance of both being bad at once is 0.25%.

"On two different types of media"

This rule was written when "media" meant tape vs disk vs CD. Today it translates to "two technologies that fail for different reasons". A useful modern reading:

  • A block-level snapshot at the host (Hetzner snapshots, DigitalOcean snapshots) and a file-level archive somewhere else (tarball, restic, Borg, Backblaze B2). A snapshot survives a deleted file but lives and dies with the provider; a file archive survives the provider but is slower to restore.
  • A database dump (mysqldump, pg_dump) and a filesystem snapshot. The dump is human-readable and portable. The snapshot is fast but tied to the storage layer.
  • Hot storage (a backup you can access immediately) and cold storage (a backup that takes hours to retrieve). Cold storage is cheap, immune to ransomware that targets hot copies, and useful for long retention.

The point is not the literal "two media". The point is: if one whole category of backup fails, is there another category that survives independently of it?
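As a rough illustration of the first pairing in the list above, here is a minimal sketch, assuming the hcloud CLI is configured for your project and a restic repository already exists on B2; the server name, bucket and path are placeholders:

    # Category 1: block-level snapshot at the provider (fast rollback,
    # but lives in the same account as the server).
    hcloud server create-image --type snapshot --description "nightly" my-server

    # Category 2: file-level archive at a different provider (survives the
    # account, slower to restore). Credentials come from B2_ACCOUNT_ID,
    # B2_ACCOUNT_KEY and RESTIC_PASSWORD in the environment.
    restic -r b2:my-backup-bucket:vps1 backup /var/www

If the snapshot category is gone (account locked, provider down), the file archive category is still there, and the other way around.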

"One copy off-site"

Off-site means a different physical location, ideally a different provider. The reason is not just fire or earthquake. It is account-level disaster:

  • Your hosting provider locks your account because of a billing problem.
  • Your AWS root user gets compromised and somebody deletes everything, including the backups in the same account.
  • Your provider goes bankrupt and the data centre is sealed for six months.
  • Ransomware on your laptop reaches the connected backup target and encrypts it too.

If your only backup is on the same provider as your live data, none of those scenarios are recoverable.

Off-site does not need to be expensive. A 100 GB Backblaze B2 bucket is around 0.50 euro per month. A Hetzner Storage Box in a different location from your VPS is 3 euro per month for 1 TB. The cost objection is rarely real.

Concrete setups

The rule is abstract, so here is what it looks like in three common cases.

Single WordPress site on a VPS. Live database and files on a Hetzner CX22 in Helsinki (copy 1). Daily snapshot taken via the Hetzner control panel, 7 retained (copy 2, same provider). Nightly restic archive of /var/www and a mysqldump pushed to Backblaze B2 in EU Central, 30 days retained (copy 3, different provider, different region). About 1 euro a month total for the off-site, set up in 30 minutes once, then never touched.
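A minimal sketch of that nightly job, assuming the restic repository has already been initialised against B2 and that MySQL credentials live in /root/.my.cnf; the bucket name, database name and paths are placeholders:

    #!/usr/bin/env bash
    # Nightly off-site backup: database dump plus file archive, pushed to
    # Backblaze B2 via restic.
    set -euo pipefail

    export B2_ACCOUNT_ID="..."                        # B2 application key ID (placeholder)
    export B2_ACCOUNT_KEY="..."                       # B2 application key (placeholder)
    export RESTIC_REPOSITORY="b2:my-backup-bucket:vps1"
    export RESTIC_PASSWORD_FILE="/root/.restic-password"

    # Dump the database first so the dump and the file archive are from the same night.
    mysqldump --single-transaction wordpress | gzip > /var/backups/db.sql.gz

    # Archive the site files and the dump; restic deduplicates unchanged data.
    restic backup /var/www /var/backups/db.sql.gz

    # Enforce the 30-day retention mentioned above.
    restic forget --keep-within 30d --prune

Run it from cron at a quiet hour and copy 3 takes care of itself.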

Agency hosting 20 sites on one VPS. Same logic but the off-site is a Borg repo on a Hetzner Storage Box at a different DC than the VPS. Borg deduplicates, so 20 WordPress installations cost roughly the storage of one large one. Plus weekly verification: a small script that runs borg check and emails you if anything is wrong.
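A sketch of that weekly check, assuming the Borg repository sits on a Storage Box reachable over SSH and a working mail command exists on the VPS; the repository URL, passphrase file and address are placeholders:

    #!/usr/bin/env bash
    # Weekly integrity check of the off-site Borg repository; emails only on failure.
    set -uo pipefail

    export BORG_REPO="ssh://u123456@u123456.your-storagebox.de:23/./backups"   # placeholder
    export BORG_PASSCOMMAND="cat /root/.borg-passphrase"

    if ! output=$(borg check --verify-data 2>&1); then
        echo "$output" | mail -s "borg check FAILED on $(hostname)" you@example.com
    fi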

E-commerce site, real money flowing. Same as above plus: nightly point-in-time database backup with binary logs (so you can recover to any timestamp, not only nightly snapshots), monthly full-restore drill on a staging box (the only way to know your backups actually work), one cold-storage copy per quarter pushed to a different provider entirely (AWS Glacier or Wasabi), retention of 1 year minimum.
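For the point-in-time part, a rough sketch of what that can look like with MySQL, assuming binary logging is enabled and the nightly dump records its binlog position (for example by dumping with --master-data=2); the database name, file names and timestamps are illustrative:

    # In my.cnf, keep binary logs so the nightly dump can be rolled forward:
    #   [mysqld]
    #   log_bin = /var/log/mysql/mysql-bin
    #   binlog_expire_logs_seconds = 604800    # keep 7 days of logs

    # Recovery to a timestamp: restore last night's dump, then replay the
    # binlog file named in the dump header, stopping just before the bad
    # event (say a stray DELETE at 14:32).
    gunzip < db_20250115.sql.gz | mysql shop
    mysqlbinlog --stop-datetime="2025-01-15 14:31:00" \
        /var/log/mysql/mysql-bin.000042 | mysql shop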

The part everybody skips: testing the restore

A backup that has never been restored is a hope, not a backup. The first time you really need it is the worst time to discover it does not work.

Once a quarter, do this:

  1. Spin up an empty VPS or a local VM.
  2. Pull the most recent off-site backup.
  3. Restore the database, restore the files, point a temporary DNS at it.
  4. Open the site in a browser. Does it load? Are last week's posts there? Does the database integrity check pass?

If yes, the backup is real. If no, fix the broken pipeline now, when nothing is on fire.
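A bare-bones version of that drill, assuming the restic-plus-mysqldump setup from the single-site example above and MySQL credentials on the test box; the repository, paths and database name are placeholders:

    #!/usr/bin/env bash
    # Quarterly restore drill on a throwaway VM: pull the latest off-site
    # snapshot, load the database, run a basic integrity check.
    set -euo pipefail

    export RESTIC_REPOSITORY="b2:my-backup-bucket:vps1"   # same repo as the backup job
    export RESTIC_PASSWORD_FILE="/root/.restic-password"

    restic restore latest --target /srv/restore-test

    mysqladmin create wordpress_test
    gunzip < /srv/restore-test/var/backups/db.sql.gz | mysql wordpress_test
    mysqlcheck --databases wordpress_test

    echo "Now point a test vhost or hosts-file entry at /srv/restore-test/var/www and click around."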

Do this once and you will be in the top 10% of small operators on backup discipline. Do it never and you join the majority who think they have backups.

What I do not consider a backup

  • A RAID array. RAID survives a disk failure, not a deleted file, not ransomware, not "I overwrote the wrong row". RAID is uptime, not backup.
  • A snapshot at the same provider, only. Useful for fast rollback. Not a backup against account loss.
  • A copy on a USB stick that lives next to the laptop. Off-site means different location, not different drawer.
  • "The host has a backup". Without testing the restore yourself you do not know what they keep, for how long, in what format, and what the SLA is for getting it back. The day you ask is the worst day to find out.

A practical starting point

If you are reading this and have nothing structured today, start with the smallest version (a sketch of the whole thing follows the list):

  1. Write a 10-line script that runs mysqldump | gzip > /tmp/db_$(date +%Y%m%d).sql.gz and a tar of /var/www.
  2. Push both to a different provider (Backblaze B2 with the b2 command-line tool is the simplest; rclone to any S3-compatible target also works).
  3. Cron it nightly.
  4. Set a calendar reminder for 30 days from now to do a test restore.
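Put together, steps 1 to 3 might look something like this. A minimal sketch, assuming an rclone remote named b2 already points at your bucket and mydb is your database; everything else is a placeholder:

    #!/usr/bin/env bash
    # Minimal nightly backup: database dump + file archive, pushed off-site with rclone.
    set -euo pipefail

    STAMP=$(date +%Y%m%d)

    mysqldump --single-transaction mydb | gzip > /tmp/db_${STAMP}.sql.gz
    tar czf /tmp/www_${STAMP}.tar.gz /var/www

    rclone copy /tmp/db_${STAMP}.sql.gz  b2:my-backup-bucket/nightly/
    rclone copy /tmp/www_${STAMP}.tar.gz b2:my-backup-bucket/nightly/

    rm /tmp/db_${STAMP}.sql.gz /tmp/www_${STAMP}.tar.gz

    # Step 3, the cron entry (crontab -e), nightly at 03:30:
    #   30 3 * * * /usr/local/bin/nightly-backup.sh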

Total cost: 30 minutes of setup, 0.50-2 euro a month. You go from "no backup" to "compliant with 3-2-1" in one evening. The rest is refinement.