Unraid Docker appdata backup plan

Docker appdata holds the state of every container on the server — Plex databases, Sonarr/Radarr configs, Home Assistant data, Nextcloud sessions. Most operators have appdata on a Use cache: Only share, which keeps it fast but also means it lives outside parity protection. Without a separate backup, a cache disk failure takes every container's state with it.

Best for: Unraid operators running Docker containers with state worth keeping (Plex, *arr stack, Home Assistant, Nextcloud, databases, dashboards).

Why appdata needs its own backup

  • Appdata typically lives on the cache pool with Use cache: Only — Mover never migrates it to the array, so parity never covers it.
  • A single-disk cache pool failure means total loss of every container's state: Plex watch history, *arr download history, Home Assistant automations, password manager databases.
  • Even on a redundant cache pool (BTRFS/ZFS mirror), you still have no protection against ransomware, accidental delete, or a container that corrupts its own database.
  • Most container backup tools are appdata-aware and handle the stop-backup-start sequence safely; manual rsync without stopping the container risks copying a database mid-write.
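As a minimal shell sketch of that stop-copy-start sequence (the container list and <backup-share> are placeholders; substitute your own database-bearing containers and backup share):

# Stop each database-bearing container, copy its appdata, then restart it.
# Container names here are examples only; build the list from your own server.
for c in plex sonarr homeassistant; do
  docker stop "$c"
  rsync -a "/mnt/user/appdata/$c/" "/mnt/user/<backup-share>/$c/"
  docker start "$c"
done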

What to back up and what to skip

  • Back up: /mnt/user/appdata (full tree) including container configs, databases, and any state-on-disk that took time to set up.
  • Skip: container temporary files, transcode scratch directories, and download/seedboxes if the data can be re-downloaded; these inflate backup size without value.
  • Pay special attention to: PostgreSQL/MariaDB/SQLite databases (need consistent dump or container-stopped copy), Home Assistant .storage directory, Plex's Plug-in Support/Databases folder.
  • Document which containers MUST be stopped before backup (anything with a live database) and which can be backed up running (stateless containers, static configs).
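One way to encode the skip list is rsync excludes. A sketch, assuming typical directory names for scratch data (your containers may use different paths), to be run only while database containers are stopped:

# Copy the appdata tree but skip regenerable scratch data.
# Exclude patterns are examples; verify them against your own containers.
rsync -a \
  --exclude='*/transcode/' \
  --exclude='*/Transcode/' \
  --exclude='*/cache/' \
  --exclude='*/logs/' \
  /mnt/user/appdata/ "/mnt/user/<backup-share>/appdata/"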

Tool options

  • CA Backup / Restore Appdata plugin (Community Applications): the standard Unraid-native option. Stops containers, snapshots appdata, restarts containers, writes to a destination share. Configurable schedule and retention.
  • Borg or restic: scripted backups with deduplication and encryption; better for offsite or cross-server backups. Requires you to handle the container stop/start sequence yourself.
  • rclone: for pushing backups to cloud storage (Backblaze B2, S3, Wasabi) after the initial appdata copy. Often combined with Borg/restic.
  • Avoid: raw rsync against running containers with databases — likely produces an inconsistent copy that won't restore cleanly.
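For the Borg/restic route, a minimal restic sketch against Backblaze B2 (bucket name is a placeholder; assumes B2_ACCOUNT_ID, B2_ACCOUNT_KEY, and RESTIC_PASSWORD are exported, and that database containers are stopped first):

# One-time: create the encrypted, deduplicated repository.
restic -r b2:<bucket>:appdata init
# Each run: back up appdata, then apply retention and prune old data.
restic -r b2:<bucket>:appdata backup /mnt/user/appdata
restic -r b2:<bucket>:appdata forget --keep-daily 14 --keep-weekly 4 --prune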

Restore drill — required, not optional

  • Once a month, restore one container's appdata to a temporary location and start a copy of the container pointing at the restored data. Verify it boots, reads its database, and looks like the original.
  • Document the restore steps for the most critical container (Home Assistant or Plex usually) so the procedure is known before you actually need it.
  • If a restore fails, that backup is not a backup. Fix the backup tool or the procedure before moving on.

Retention and offsite

  • Daily snapshots for 7-14 days is a reasonable baseline; some operators add weekly snapshots for another month.
  • At least one copy should live off the cache pool — ideally on the array (Use cache: Yes share for backups) AND on an offsite/cloud destination.
  • Ransomware protection: prefer cloud destinations with object-lock or versioning rather than mirror-style sync that propagates encryption.
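A sketch of the non-propagating pattern with rclone (remote and bucket names are placeholders; versioning or object lock is enabled on the bucket itself, not in this command):

# 'copy' never deletes on the destination; with bucket versioning enabled,
# overwritten files keep their prior versions, so an encrypted local tree
# cannot silently destroy older good copies.
rclone copy /mnt/user/<backup-share>/appdata b2remote:<bucket>/appdata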
Operator snapshot

Evidence first
First proof

Locate the appdata share and confirm its Use cache setting.

Screen to open

Shares > appdata (Use cache row)

Expected signal

Shares > appdata shows Use cache: Only (or Yes with a clear rationale).

Stop boundary

Stop before pointing the offsite copy at the same account/MFA device as the local backup.

Layer path

1. Docker appdata holds container state (databases, configs, history) and almost always lives on the cache pool with Use cache: Only — outside parity protection.
2. Parity does not back up appdata: it covers single-disk failure on the array, not the cache pool, and not against accidental delete, ransomware, or a container corrupting its own database.
3. Database-bearing containers (Plex, *arr stack, Home Assistant, Nextcloud, dashboards) need to be stopped before backup OR backed up via a tool that handles the stop/start cycle.
4. A backup that has never been restored is a hope, not a backup; monthly restore drills convert hopes into evidence.
Step-by-step runbook

Start here. Do each check in order, compare it to the expected result, and stop when the evidence explains the failure or the safe stop point applies.

1. Map appdata and identify state-bearing containers

Check: List every running container, its appdata path, and whether it holds a live database.

Expected result: You have a written list: stateless containers (safe to rsync running) vs database containers (must be stopped or use a tool that handles stop/start).

If not: Without this map, the backup procedure is inconsistent.
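Two hedged starting points for building the map (SQLite file extensions are a common marker of database state, not an exhaustive test):

# List every running container by name.
docker ps --format '{{.Names}}'
# Find likely live databases under appdata; naming conventions vary.
find /mnt/user/appdata -maxdepth 3 \( -name '*.db' -o -name '*.sqlite*' \)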

2. Install or configure CA Backup / Restore Appdata

Check: Community Applications > install CA Backup / Restore Appdata. Set source = appdata share, destination = a backup share on the array (Use cache: Yes recommended), schedule daily, retention 7-14 daily + 4 weekly.

Expected result: First scheduled run completes successfully and lists every container's appdata size.

If not: If the run errors on specific containers, those containers need stop/start config in the plugin's Advanced View.

3. Add an offsite layer

Check: Configure rclone or Borg to push the backup share to cloud storage (Backblaze B2, S3, Wasabi) or to a rotated external drive. Use credentials separate from the local backup.

Expected result: Offsite copy completes; cloud destination has the data.

If not: Without an offsite copy, ransomware or a house-level loss (fire, theft) takes every copy.

Safe stop: Stop before pointing the offsite copy at the same account/MFA device as the local backup.

4. Run the first restore drill

Check: Pick the most critical container (Home Assistant or Plex usually). Restore its appdata to a temp path, start a second Docker container instance with that path mounted, verify it boots with the right data.

Expected result: Restored container is functionally identical to the original.

If not: If the drill fails, fix the cause before scheduling regular backups.

Safe stop: Stop before declaring backups working without a drill.
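A sketch of the drill for a hypothetical Sonarr container (image, port mapping, and restore path are assumptions; mirror your real container's template, with only the appdata path overridden):

# Start a throwaway instance against the restored copy, on a spare port.
docker run -d --name sonarr-drill \
  -p 8990:8989 \
  -v /mnt/user/appdata-restore-test/sonarr:/config \
  lscr.io/linuxserver/sonarr:latest
# Verify the library and history in the UI, then clean up.
docker stop sonarr-drill && docker rm sonarr-drill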

5. Document and schedule recurring drills

Check: Write down the restore steps and add a monthly calendar reminder to repeat the drill. Update the document when containers change.

Expected result: Procedure exists in written form; next drill is on the calendar.

If not: Without a documented procedure, restore knowledge lives only in one person's head.

Decision tree

If: Container holds a live database (Plex, *arr stack, Home Assistant, Nextcloud, any *SQL).

Then: Inconsistent-copy risk if backed up while running.

Action: Use CA Backup / Restore Appdata (handles stop/start automatically) OR stop the container manually before any rsync/snapshot/Borg job.

If: Container is stateless (reverse proxy, static config service).

Then: Safe to back up while running.

Action: Skip the stop/start step; rsync or Borg directly. Document which containers are in this category.

If: Cache pool is a single disk (no redundancy).

Then: A single disk failure is total loss of every container's state without a separate backup.

Action: Backup is non-negotiable: at least one local copy off the cache pool plus an offsite/cloud copy.

Safe stop: Stop adding new containers until the backup is in place.

If: Cache pool is a BTRFS/ZFS mirror (redundant).

Then: Protected against a single SSD failure, NOT against ransomware, accidental delete, or container DB corruption.

Action: Still need a separate backup; mirror is not a backup.

If: Restore drill failed (restored container won't start or shows wrong data).

Then: The backup tool, config, or destination has a problem.

Action: Treat this as a backup outage; do not move on. Common causes: stop/start not handling locks, permissions changed on copy, file path differs on restore.

Safe stop: Stop before adding more containers or changing schedule until the drill passes.

Evidence table

Symptom: Plex starts fresh with no watch history after restore.
Evidence to collect: Restored to a temp path and started a second Plex container against it; watch history is empty.
Likely layer: Database file copied while the server was running and ended up partial.
Next action: Switch to CA Backup / Restore Appdata, which stops Plex during the copy.

Symptom: Sonarr/Radarr restored but doesn't recognize the previous library.
Evidence to collect: Restored container shows an empty library.
Likely layer: SQLite database copied mid-write.
Next action: Stop the *arr container before the copy, or use the tool's native backup export instead of a file copy.

Symptom: Backup destination filling up faster than expected.
Evidence to collect: Backup share growing by GB/day; no retention policy applied.
Likely layer: Retention not configured; backups accumulating forever.
Next action: Set CA Backup retention to 7-14 daily + 4 weekly so older versions are auto-pruned.

Symptom: Backup completes but the restore drill produces an unbootable container.
Evidence to collect: Tool logs show success; restored container errors on start.
Likely layer: Permission or path mismatch on restore, or container image version drift.
Next action: Document the restore path; pin container image versions next to the backup config.
Commands and settings paths

Appdata location and cache setting

Shares > appdata (Use cache row)

Where: In the Unraid web UI.

Expected: Share is Use cache: Only and mount path is /mnt/user/appdata.

Failure means: Other settings (No / Yes / Prefer) produce performance and reliability problems for containers.

Safe next step: Set to Only after stopping containers and running Mover; restart containers after.

CA Backup / Restore Appdata install status

Apps > installed > search 'CA Backup'

Where: In the Unraid web UI's Community Applications interface.

Expected: Plugin is installed and has run at least once.

Failure means: If not installed, install it and configure: source = appdata, destination = a backup share on the array (Use cache: Yes), schedule daily.

Safe next step: Verify the first run in the plugin's log; do not move on until that succeeds.

Manual stop-copy-start for one container (verification)

docker stop <container> && rsync -a /mnt/user/appdata/<container>/ /mnt/user/<backup-share>/test/ && docker start <container>

Where: In an Unraid Terminal SSH session.

Expected: Container stops, rsync completes without errors, container restarts and runs normally.

Failure means: If rsync errors mid-copy or container fails to restart, the appdata path or permissions are misconfigured.

Safe next step: Use this only as a verification step; production backups should use the plugin, not raw rsync.

Restore drill

Restore an appdata folder to a temp path and start a second container instance against it.

Where: In Unraid > Docker (Add Container with same image, override appdata path to the restored copy).

Expected: Second container boots, reads its database, shows the same data as the original.

Failure means: Failure here is the backup-not-working signal. Investigate before trusting future backups.

Safe next step: Fix the root cause (stop/start config, permissions, path mismatch) and rerun the drill before scheduling additional backups.

Hardware and platform boundary

Change only when

  • Upgrade to a redundant cache pool (BTRFS/ZFS mirror) after restore-drill evidence shows the backup chain is reliable; until then, capacity to actually back up is more important than mirror redundancy.

Evidence that matters

  • SSD endurance for the appdata write workload, BTRFS/ZFS pool support, backup destination capacity (array space for the backup share), and a clear offsite target.

Evidence that does not matter

  • Faster cache SSDs do not help the backup chain; the bottleneck is restore-drill discipline.

Avoid

  • Avoid relying on a redundant cache pool 'as backup' — it protects against one SSD failure, nothing else.

Last reviewed

2026-05-07 · Reviewed by HomeTechOps. Reviewed for Unraid Docker appdata backup planning using cache-pool location framing, container-state classification, CA Backup / Restore Appdata as the Unraid-native option, offsite layer requirement, and monthly restore-drill discipline.
