Unraid parity, cache pool, and Mover setup
Unraid is not RAID. Understanding what the array, parity, cache pool, and Mover actually do — and which workload belongs on each — is the single biggest predictor of whether your data stays safe and your apps stay fast.
Best for: New Unraid operators planning their first server, and existing operators who realize they put the wrong workload on the wrong layer.
The four storage layers in Unraid
- Array (data disks): individually formatted disks pooled into one namespace, with each disk's files readable on its own. Writes are slow because parity must be updated alongside every write.
- Parity disk: not a backup. Reconstructs one failed data disk by XORing the parity with all remaining data disks (see the sketch after this list); dual parity adds a second, differently computed parity disk so two simultaneous failures are survivable. Must be at least as large as the largest data disk.
- Cache pool: separate SSD or NVMe disks (typically in a BTRFS or ZFS pool) that absorb writes at native SSD speed. Lives outside the parity-protected array.
- Mover: scheduled task (Settings > Scheduler) that migrates files between the cache pool and the array. It only moves files whose share's Use cache setting says they belong on the other layer.
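To make the parity bullet concrete, here is a minimal Python sketch of single-parity reconstruction. It illustrates the XOR math only; Unraid's driver does the equivalent work in the kernel across whole physical disks.

```python
# Minimal sketch: how single parity rebuilds a lost disk via XOR.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

disk1 = b"\x01\x02\x03"
disk2 = b"\x04\x05\x06"
disk3 = b"\x07\x08\x09"
parity = xor_blocks([disk1, disk2, disk3])

# disk2 dies: XOR the parity with the surviving disks to rebuild it.
rebuilt = xor_blocks([parity, disk1, disk3])
assert rebuilt == disk2  # byte-identical to the lost disk's contents
```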
Share cache settings: what each option really means
- Use cache disk: No — writes go directly to the array. Slow but parity-protected from the first write. Right for archive shares with no daily change.
- Use cache disk: Yes — writes land on cache first; Mover migrates them to the array later. Fast writes, eventually parity-protected. Right for most general-purpose shares (Documents, Media imports).
- Use cache disk: Only — files stay on the cache pool permanently; Mover never touches them. Right for Docker appdata, VM images, active databases — anything that needs SSD speed and suffers under the array's parity-write penalty.
- Use cache disk: Prefer — files live on the cache when space allows, and Mover migrates in the opposite direction (array → cache) on its next run. Right for shares you want on cache but that have overflowed onto the array.
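As a quick reference, each setting reduces to two facts: where new writes land and which direction Mover moves. A small sketch of that behavior, assuming the classic setting labels (newer Unraid releases phrase the same choices as primary/secondary storage):

```python
# Sketch: the four Use cache settings as (where new writes land, what Mover does).

CACHE_SETTINGS = {
    "No":     {"writes_land": "array", "mover": "ignores share"},
    "Yes":    {"writes_land": "cache", "mover": "cache -> array"},
    "Only":   {"writes_land": "cache", "mover": "ignores share"},
    "Prefer": {"writes_land": "cache", "mover": "array -> cache"},
}

for setting, behavior in CACHE_SETTINGS.items():
    print(f"{setting:6}: writes land on {behavior['writes_land']}; Mover {behavior['mover']}")
```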
Which workload belongs where
- Long-term storage of large files (media library, archived photos, backups received from other devices): Use cache: Yes — fast writes, files land on array overnight, parity protects them.
- Docker appdata (databases, container state, app config): Use cache: Only — these files are written to constantly and the parity overhead is brutal. Back them up separately.
- VM disk images (vdisk1.img): Use cache: Only — same reason as Docker, plus VMs need low-latency IO.
- Plex transcoding temp directory: Use cache: Only or point to /tmp (RAM) — transcoding writes are heavy and short-lived; the array is the wrong place.
- Backups received from Windows/Mac/laptops: Use cache: Yes — receive at SSD speed during the backup window, Mover migrates to array later.
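A hedged sketch of this mapping as a planning aid; every share name and current value below is a hypothetical example, so substitute what your own Shares page shows.

```python
# Sketch: compare each share's current Use cache setting against the intended
# one from the workload list above. All names and values are examples.

INTENDED = {
    "appdata": "Only",  # Docker container state and databases
    "domains": "Only",  # VM disk images
    "media":   "Yes",   # imports land on cache; Mover migrates overnight
    "backups": "Yes",   # receive at SSD speed, parity-protect later
    "archive": "No",    # rarely written; skip the cache entirely
}

CURRENT = {  # what the Shares page shows today (example values)
    "appdata": "Yes",
    "domains": "Only",
    "media":   "Yes",
    "backups": "No",
    "archive": "No",
}

for share, want in INTENDED.items():
    have = CURRENT.get(share, "<missing>")
    flag = "OK " if have == want else "FIX"
    print(f"{flag} {share:8} current={have:6} intended={want}")
```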
Parity vs backup: the rule that matters
- Parity protects against ONE disk failing (or two with dual parity). If two disks fail with single parity, the data on those disks is gone; the surviving disks remain individually readable.
- Parity does not protect against accidental delete, ransomware, fire/theft/flood, or a corrupting bug in your write path.
- A NAS without an independent backup of its own is not a safe backup target for other devices. See the 3-2-1 backup guide for the operator-grade plan.
Step-by-step runbook
Start here. Do each check in order, compare it to the expected result, and stop when the evidence explains the failure or the safe stop point applies.
Map your current state before changing anything
Check: Open Main and Shares; record each disk's slot/role and each share's Use cache setting.
Expected result: You have a written snapshot of array, parity, cache, and per-share cache settings.
If not: If you cannot answer 'which share is on cache: Only and which is on Yes', do not proceed to changes yet.
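If you prefer to take the snapshot from the console rather than the web UI, a sketch along these lines reads the per-share config files directly. It assumes the classic Unraid 6.x layout (/boot/config/shares/*.cfg with a shareUseCache key); verify the path and key name on your release before relying on it.

```python
# Sketch: snapshot every share's cache setting straight from the flash drive.
from pathlib import Path
import re

SHARE_CFG_DIR = Path("/boot/config/shares")  # assumed classic 6.x location

for cfg in sorted(SHARE_CFG_DIR.glob("*.cfg")):
    match = re.search(r'shareUseCache="(\w+)"', cfg.read_text())
    setting = match.group(1) if match else "unset (defaults apply)"
    print(f"{cfg.stem:20} Use cache: {setting}")
```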
Decide intended layer per share
Check: Apply the workload rule: Docker appdata / VM / DB = Only; backups & imports = Yes; archive = No; cache-prefer = Prefer.
Expected result: Each share has a written intended setting next to its current setting.
If not: If a share doesn't fit any pattern, lean toward Yes (write to cache, Mover migrates) — it's the safest default.
Set one share at a time
Check: Change the most-impactful mis-set share first (usually Docker appdata to Only). Stop affected containers, set the share to Prefer, run Mover so array-resident files migrate onto the cache, then set Only and restart containers.
Expected result: The share's content lives on the intended layer; Mover does not move it on the next run.
If not: If Mover leaves files stranded on the array or the cache during the migration, an open file is holding them; stop whatever container or VM has it open and run Mover again.
Safe stop: Stop before mass-changing every share at once; one-at-a-time gives a reversible feedback loop.
Verify with Mover Now + system log
Check: Trigger Mover manually after each change and read the System Log for the Mover lines for that share.
Expected result: Log shows Mover start and finish lines; with Mover logging enabled (Settings > Scheduler), it also lists each file moved or skipped.
If not: Skipped files due to open handles usually mean a container or VM is still using them.
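A minimal sketch for pulling those Mover lines out of the log, assuming Unraid's default syslog location and that Mover logging is enabled under Settings > Scheduler; exact message wording varies by release.

```python
# Sketch: print every Mover-related line from the system log.
LOG = "/var/log/syslog"  # assumed default location

with open(LOG, errors="replace") as f:
    for line in f:
        if "mover" in line.lower():
            print(line.rstrip())
```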
Right-size cache pool against daily change
Check: Estimate daily change (backups + appdata growth + temp downloads). Cache pool capacity should be 2-3x that.
Expected result: Cache pool has headroom across normal Mover cycles without running near full.
If not: If daily change > 50% of cache, add another cache disk or upgrade to a larger SSD before more shares hit cache.
Safe stop: Stop before formatting a single-disk cache pool to convert to multi-disk without a verified backup of cache contents.
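A back-of-envelope sketch of the sizing rule from this step; every figure is a placeholder to swap for your own estimates.

```python
# Sketch: rough cache-pool sizing against estimated daily change (2-3x rule).

daily_change_gb = {
    "laptop backups": 60,
    "appdata growth": 5,
    "temp downloads": 40,
}
cache_pool_gb = 500  # current pool size; replace with yours

total = sum(daily_change_gb.values())
print(f"Estimated daily change: {total} GB")
print(f"Recommended pool size : {2 * total}-{3 * total} GB")
if total > cache_pool_gb / 2:
    print("Daily change exceeds half the pool: add or upgrade a cache disk.")
else:
    print(f"{cache_pool_gb} GB pool has headroom across normal Mover cycles.")
```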
Decision tree
If: Share holds Docker appdata, VM images, or active databases.
Then: These workloads need SSD speed and constant writes; the array's parity-write penalty is the wrong fit.
Action: Set Use cache: Only on that share. Back up the appdata share separately because parity does not cover it.
If: Share receives backups, media imports, or large file copies during business hours.
Then: Fast writes matter during the window; long-term storage doesn't need SSD.
Action: Set Use cache: Yes. Mover migrates files to array overnight; parity protects from the next morning.
If: Share is pure archival storage (rarely written, often read).
Then: Cache speed adds no value; cache wear and Mover work add cost.
Action: Set Use cache: No. Writes go straight to array, parity-protected from the first write.
If: Cache pool is filling up faster than Mover empties it.
Then: Either daily change exceeds cache size or a share is mis-set to Yes when it should be Only/No.
Action: Audit Use cache per share, then right-size the cache pool. Follow the cache-pool-full-mover-not-running page for live triage.
If: Considering dual parity.
Then: Dual parity doubles disk-failure tolerance at the cost of one usable disk slot.
Action: Worth it when total array exceeds ~50TB or rebuild times exceed ~24h on your hardware; not worth it for small arrays with current backups.
Evidence table
| Symptom | Evidence to collect | Likely layer | Next action |
|---|---|---|---|
| Docker containers slow to start or save state slowly. | Docker tab shows containers with appdata pointing into /mnt/user/<share> where that share is Use cache: Yes or No. | Appdata on parity-protected array | Stop containers, change appdata share to Use cache: Only, run Mover to flush cache, restart containers. |
| Receiving backups takes much longer than expected for the network. | Shares page shows the backup share at Use cache: No, so writes hit the parity-calculation path. | Backup share mis-set to No | Change share to Use cache: Yes; new backups land on SSD, Mover migrates overnight. |
| Plex/media playback is fine but Plex appdata writes (metadata, thumbnails) are slow. | Plex appdata share on array. | Plex appdata mis-located | Move Plex appdata share to Use cache: Only and back it up independently. |
| Array won't start after adding a larger data disk. | Unraid error: 'Parity disk too small'. | Parity capacity less than new data disk | Replace parity disk with one at least as large as the new data disk; rebuild parity before adding the larger data disk. |
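The last table row reduces to a one-line capacity rule, sketched here for clarity; all sizes are examples.

```python
# Sketch: parity must be at least as large as the largest data disk.

def parity_ok(parity_tb: float, data_disks_tb: list[float]) -> bool:
    """True if the parity disk can protect every data disk."""
    return parity_tb >= max(data_disks_tb)

print(parity_ok(12, [8, 8, 10]))      # True: 12TB parity covers a 10TB disk
print(parity_ok(12, [8, 8, 10, 14]))  # False: upgrade parity before adding 14TB
```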
Commands and settings paths
Storage layer overview
Main page (top of Unraid web UI)
Where: In the Unraid web UI under Main.
Expected: Array section shows parity + data disks; cache pool section shows cache disks separately; each has used/total capacity bars.
Failure means: If you cannot identify which disks are array vs cache, share-cache decisions cannot be made correctly.
Safe next step: Capture a screenshot of Main and label each section before changing any share setting.
Per-share cache configuration
Shares > <share name> > Use cache disk
Where: In the Unraid web UI for each user share.
Expected: Each share has an explicit Use cache value (No / Yes / Only / Prefer) matching the workload table.
Failure means: If shares are inconsistent or set to defaults that don't match intent, files end up on the wrong layer.
Safe next step: Change one share at a time and let Mover redistribute on the next run before changing another.
Mover schedule
Settings > Scheduler > Mover Settings
Where: In the Unraid web UI under Settings.
Expected: Schedule is set to a recurring time (commonly daily at 04:00), not disabled.
Failure means: A disabled or never-firing schedule means cache: Yes shares never migrate.
Safe next step: Set to daily at a low-write window and verify the next run completes in the System Log.
Manual Mover run for verification
Main > Mover Now (or Settings > Scheduler > Mover Now depending on Unraid version)
Where: In the Unraid web UI after share config changes.
Expected: Mover runs immediately; the System Log records each share it touched and any skipped files.
Failure means: If Mover reports zero moved files when you expected migration, a share is set to Only or files are open.
Safe next step: Read the Mover log lines before assuming the migration succeeded.
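A small sketch for that verification: list anything still sitting on the cache for a share you expected to drain. It assumes the standard /mnt/cache mount point; the share name is a placeholder.

```python
# Sketch: find files Mover left behind on the cache for one share.
from pathlib import Path

share = "backups"  # a Use cache: Yes share that should have drained
leftovers = [p for p in Path("/mnt/cache", share).rglob("*") if p.is_file()]

if leftovers:
    print(f"{len(leftovers)} file(s) still on cache for '{share}':")
    for p in leftovers[:10]:
        print(" ", p)  # open handles on these usually explain the skip
else:
    print(f"Cache is clear for '{share}'; migration completed.")
```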
Hardware and platform boundary
Change only when
- Buy a larger cache SSD when daily change rate exceeds half the current cache pool size, not as a precaution.
Evidence that matters
- Cache SSD endurance (TBW) for the write workload, BTRFS/ZFS pool support if you want pool redundancy, parity disk capacity matching the largest data disk, and dual-parity threshold (~50TB array).
Evidence that does not matter
- Cache SSD speed beyond ~500MB/s sequential rarely matters for home NAS workloads; the network is usually the bottleneck before the SSD.
Avoid
- Avoid mixing cache SSDs of different sizes in a single BTRFS RAID1 pool; usable capacity is limited by the smaller disk, so the larger disk's extra space is wasted.
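To put the endurance evidence above into numbers, a back-of-envelope sketch; both figures are examples to replace with your drive's spec-sheet TBW rating and your measured daily change.

```python
# Sketch: years of SSD life at the cache's write rate, against the TBW rating.
tbw_rating_tb = 600    # drive's rated terabytes-written (spec sheet)
daily_writes_gb = 105  # everything that lands on cache per day

years = (tbw_rating_tb * 1000) / daily_writes_gb / 365
print(f"~{years:.0f} years to reach the TBW rating at this write rate")
```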
Last reviewed
2026-05-07 · Reviewed by HomeTechOps. Reviewed for Unraid storage-layer foundation using the four-layer model (array / parity / cache / Mover), the four Use cache settings with workload mapping, share configuration triage, and parity-is-not-backup boundary.
Source-backed checks
HomeTechOps turns official docs and conservative safety rules into a shorter runbook. These links are the source trail for the page direction.