NAS
Unraid cache pool full or Mover not running
If the cache pool fills up, writes either slow to array speed or fail outright. The fix is almost never deleting files — it is understanding why Mover did not migrate them, fixing the actual cause, and then running Mover.
Best for: Unraid operators seeing 'Cache full' warnings, slow writes after fast SSD writes used to work, or Mover that completes without moving anything.
Confirm the symptom before changing anything
- Open Main > Cache Devices and capture used vs total capacity for each cache disk in the pool. Screenshot or write it down.
- Open Shares and list every share with Use cache: Yes — these are the shares Mover should be migrating from cache to array.
- Open Tools > System Log and search for 'mover' around the last scheduled Mover time. Mover logs each share it touched and whether it skipped files.
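The same baseline can be captured from the Unraid Terminal. A minimal sketch, assuming the default cache mount at /mnt/cache and the syslog at /var/log/syslog (named pools mount at /mnt/&lt;poolname&gt; instead):

```shell
# Minimal sketch: capacity plus recent Mover log lines for a cache pool.
# Assumes /mnt/cache and /var/log/syslog; adjust both for your setup.
cache_snapshot() {
  pool="${1:-/mnt/cache}"
  log="${2:-/var/log/syslog}"
  [ -d "$pool" ] && df -h "$pool"                      # used vs total capacity
  [ -r "$log" ] && grep -i mover "$log" | tail -n 20   # recent Mover entries
  return 0
}
cache_snapshot   # pass a pool path and a log path to override the defaults
```

Save the output alongside your screenshots so you can compare before and after the Mover run.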
The four reasons Mover does not free cache space
- Open files: Mover skips files that are open (Docker containers, VMs, active SMB sessions). The file stays on cache until the holder releases it.
- Wrong cache setting: shares with Use cache: Only stay on cache permanently — Mover ignores them by design. Confirm the share's setting matches your intent.
- Mover schedule not enabled or set to a window that never fires: check Settings > Scheduler > Mover Settings.
- Mover Tuning plugin thresholds: if installed, Mover only runs when the cache pool is over a certain percent full. Confirm thresholds aren't blocking the run.
Safe recovery sequence
- Stop Docker containers and VMs that have appdata or vdisks on the cache pool you want freed (Docker tab > Stop, VMs tab > Stop). This releases open files.
- Run Mover manually from Main > Mover Now (or Settings > Scheduler depending on Unraid version). Watch the log.
- If Mover still does not free space, check each Use cache: Yes share's content folder for files that should not be there (e.g., temp downloads, build artifacts).
- Once cache has headroom, restart Docker/VMs. They should resume normally.
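The sequence above can be scripted for repeat incidents. This is a hedged sketch: the container names are placeholders, and the mover binary path (/usr/local/sbin/mover, which backs Main > Mover Now on recent releases) should be verified on your Unraid version.

```shell
# Hedged sketch: stop the identified holders, trigger Mover, check the log.
free_cache() {
  for c in "$@"; do
    docker stop "$c"                # releases files the container holds open
  done
  /usr/local/sbin/mover start       # assumed path; older releases use plain 'mover'
  grep -i mover /var/log/syslog | tail -n 20   # re-run this to watch moves/skips
}
# free_cache plex qbittorrent      # container names here are hypothetical
```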
Long-term fix: right-size the cache pool
- Estimate daily change rate: backups received per day + Docker appdata growth + temporary downloads. Sum in GB/day.
- Cache pool capacity should be at least 2-3x the daily change rate so a missed Mover run does not fill it.
- If daily change exceeds half the pool capacity, add another cache disk (BTRFS pool) or move to a larger SSD.
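The sizing rule reduces to simple arithmetic; a sketch with illustrative numbers (not measurements):

```shell
# Pure-arithmetic sketch of the right-size rule; GB figures are illustrative.
recommended_cache_gb() {            # $1 = daily change in GB
  echo $(( $1 * 3 ))                # 3x daily change, the upper bound above
}
needs_expansion() {                 # $1 = daily change GB, $2 = pool size GB
  [ $(( $1 * 2 )) -gt "$2" ] && echo yes || echo no
}
recommended_cache_gb 150            # prints 450
needs_expansion 150 250             # prints yes: 150 GB/day > half of 250 GB
```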
Layer path
Step-by-step runbook
Start here. Do each check in order, compare it to the expected result, and stop when the evidence explains the failure or the safe stop point applies.
Capture state
Check: run the "Cache capacity snapshot" command below and screenshot the result; review Shares (Use cache column) and Tools > System Log (recent mover entries).
Expected result: You have baseline capacity, per-share cache settings, and Mover log evidence.
If not: If diagnostics are missing, do not delete or move files manually.
Identify open-file blockers
Check: Run the "Open-file identification via lsof" command below in Terminal or SSH; map each PID to its Docker container (e.g. via /proc/&lt;PID&gt;/cgroup, or by comparing docker inspect --format '{{.State.Pid}}' output).
Expected result: You have a named list of which container/VM is holding which file.
If not: If lsof shows files held by core Unraid processes, do not kill them.
Release the holders, run Mover, watch the log
Check: Stop the identified Docker containers and VMs that hold the files. Trigger Main > Mover Now. Watch Tools > System Log live.
Expected result: Mover migrates the previously-held files and the log shows successful moves.
If not: If files still don't move, recheck Use cache for that share — it may be set to Only.
Safe stop: Stop before mass-stopping containers; only stop the ones holding files for the share you're trying to free.
Confirm cache headroom and restart workloads
Check: After Mover completes, check the "Cache capacity snapshot" command below for freed capacity. Restart the stopped Docker containers and VMs.
Expected result: Cache has headroom and containers/VMs come back healthy.
If not: If containers fail to start, check their appdata path didn't change.
Long-term: right-size against daily change
Check: Estimate daily change (backups + appdata + downloads). If it exceeds half your cache pool, plan capacity addition.
Expected result: Cache pool runs at <60% across normal cycles.
If not: If daily change is genuinely large, a single Mover cycle won't keep up — schedule Mover more frequently or expand cache.
Safe stop: Stop before disabling parity 'to speed up Mover' — Mover writes go to the array's parity-protected path; disabling parity drops protection.
Decision tree
If: Mover log says 'in use' or 'skipped' for files you expected to move.
Then: Open file handles from Docker, VMs, or active SMB sessions are blocking those files.
Action: Stop the holding container/VM (Docker tab > Stop or VMs > Stop), trigger Mover Now, restart after migration.
If: Mover log says it ran but moved zero files from cache.
Then: Either no shares are set to Use cache: Yes, or the Yes-shares' files have not yet aged past Mover's minimum file age (if configured).
Action: Audit each share's Use cache and Mover age thresholds in Settings.
If: Mover Tuning plugin installed and Mover is not running on schedule.
Then: Plugin thresholds (e.g., 'only run if cache > 70% full') may be blocking the run.
Action: Open Settings > Mover Tuning and review thresholds; lower or disable temporarily to confirm.
If: Cache fills within hours of every Mover run.
Then: Daily change exceeds Mover's window or cache capacity.
Action: Right-size: add a cache disk, or switch some Yes-shares to No (write directly to the array) if SSD speed is not actually needed.
Safe stop: Stop before deleting cache files manually as a 'fix' — files may belong to active workloads.
If: Cache disk is failing (SMART warning on cache).
Then: Replacing a single-disk cache pool is destructive to its contents.
Action: Back up appdata + any Use cache: Only share contents before replacing cache disk; convert to redundant pool if possible.
Safe stop: Stop before swapping a failing cache disk while the array is started.
Evidence table
| Symptom | Evidence to collect | Likely layer | Next action |
|---|---|---|---|
| Cache pool 95%+ full, Mover ran but moved nothing. | System Log shows mover run with 'in use' skips on appdata files. | Docker containers holding files | Stop the relevant containers, run Mover Now, restart after migration completes. |
| Cache pool fills overnight after every backup window. | Backup share with Use cache: Yes receives more data per night than Mover moves the following day. | Cache undersized for daily change | Add a second cache disk in a BTRFS pool or move to a larger single SSD. |
| Mover schedule shows 'never' in Settings. | Mover Settings > Schedule is set to Disabled or to a manual-only mode. | Mover disabled | Set to daily at a low-write window; verify the next run completes in the log. |
| Specific Yes-share never empties. | Mover log skips that share's files repeatedly with 'in use'. | Persistent open-file holder (Plex/qBittorrent/database container) on that share | Identify the holder, decide whether the share should be Use cache: Only instead, and reconfigure. |
Commands and settings paths
Cache capacity snapshot
Main > Cache Devices
Where: In the Unraid web UI.
Expected: Each cache disk shows current used and total capacity, plus pool-level summary.
Failure means: If you cannot tell which disks are in the cache pool, the share-level investigation cannot start.
Safe next step: Save baseline numbers before any change; compare after Mover run.
Mover log review
Tools > System Log (search 'mover')
Where: In the Unraid web UI.
Expected: Log shows last Mover run, per-share processing, and skip reasons.
Failure means: Without the log, the root cause is invisible.
Safe next step: Save log excerpt for the last 2-3 Mover runs as evidence.
Open-file identification via lsof
lsof /mnt/cache/<share>
Where: In an SSH session or the Unraid Terminal.
Expected: Lists processes holding files in that share open.
Failure means: Without this, you guess which container is the blocker.
Safe next step: Map each PID to its Docker container (e.g. via /proc/&lt;PID&gt;/cgroup, or by comparing docker inspect --format '{{.State.Pid}}' output) to identify the holder cleanly.
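The PID-to-container mapping can be read from each PID's cgroup path rather than eyeballed. A sketch, using appdata as an example share and assuming a Docker cgroup naming scheme that embeds the 64-character container ID (cgroup v1 and v2 formats differ, so adjust the pattern if needed):

```shell
# List unique PIDs holding files under a directory (lsof header row skipped).
holders() {
  lsof +D "$1" 2>/dev/null | awk 'NR>1 {print $2}' | sort -u
}
# Map each holder PID to a Docker container ID via its cgroup path.
for pid in $(holders /mnt/cache/appdata); do
  cid="$(grep -o '[0-9a-f]\{64\}' "/proc/$pid/cgroup" 2>/dev/null | head -n 1)"
  echo "PID $pid -> ${cid:-not a container}"
done
```

Match the printed ID against docker ps --no-trunc to name the container before stopping it.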
Manual Mover trigger
Main > Mover Now (or Settings > Scheduler > Mover Now)
Where: In the Unraid web UI after stopping any holding containers.
Expected: Mover runs immediately and the System Log records each file moved or skipped.
Failure means: If Mover Now does nothing visible, the schedule may still be holding the run; check Mover Tuning thresholds.
Safe next step: Wait for Mover Now to fully complete before declaring the cache freed.
Hardware and platform boundary
Change only when
- Add a second cache disk in a BTRFS or ZFS pool, or upgrade to a larger SSD, only after the audit shows the cause is genuine capacity rather than a mis-set share or an open-file blocker.
Evidence that matters
- Cache SSD endurance (TBW rating) for the write workload, pool filesystem support (BTRFS/ZFS), and matching SSD sizes within the pool.
Evidence that does not matter
- Sequential SSD speed beyond ~500 MB/s rarely matters; the bottleneck is usually the network or Mover's per-file overhead.
Avoid
- Avoid running cache near 100% as a normal state — write amplification on SSDs increases sharply above ~85% and shortens disk life.
Last reviewed
2026-05-07 · Reviewed by HomeTechOps. Reviewed for Unraid cache-pool-full triage using capacity snapshot, Mover log evidence, lsof open-file identification, share-cache audit, and the right-size rule based on daily change rate.
Source-backed checks
HomeTechOps turns official docs and conservative safety rules into a shorter runbook. These links are the source trail for this page.