Tired of your apps waking up all your drives just to list a directory? Common examples are media library scans (Plex/Jellyfin) and NVR retention workflows. PolicyFS unifies multiple disks under one mountpoint, routes reads and writes by explicit rules, and keeps archive disk metadata in a local index so scans don't needlessly spin up sleeping drives.
Features
Route reads and writes by path pattern: one rule for `library/**`, another for everything else.
Write policies (`first_found`, `most_free`, `least_free`) choose a write target.
Path-preserving mode prefers targets where the parent directory already exists, reducing file scattering across disks.
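As an illustration, a two-rule split could look like the following, using the `routing_rules` fields from the Configuration section; the exact rule-ordering semantics (first match wins) are an assumption here:

```yaml
routing_rules:
  - match: "library/**"        # media library lands on the archive pool
    targets: [hdd1, hdd2]
    write_policy: most_free
  - match: "**"                # everything else goes to the fast tier
    targets: [ssd1]
    write_policy: first_found
```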
For archive disks, PolicyFS can keep metadata-heavy scans from touching sleeping drives unnecessarily. You can run maintenance jobs during a scheduled window (for example, overnight) when waking disks is acceptable.
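The idea behind the metadata index can be sketched in a few lines of Python. This is not PolicyFS's actual schema, just an illustration of why a SQLite-backed index lets directory listings complete without any I/O to the backing disk:

```python
import sqlite3

def build_index(conn, entries):
    # Illustrative schema: one row per file, keyed by parent directory.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS files (parent TEXT, name TEXT, size INTEGER)"
    )
    conn.executemany("INSERT INTO files VALUES (?, ?, ?)", entries)

def list_dir(conn, parent):
    # Served entirely from SQLite; the backing HDD can stay asleep.
    return [
        (name, size)
        for name, size in conn.execute(
            "SELECT name, size FROM files WHERE parent = ? ORDER BY name",
            (parent,),
        )
    ]

conn = sqlite3.connect(":memory:")
build_index(conn, [("/library", "a.mkv", 700), ("/library", "b.mkv", 900)])
print(list_dir(conn, "/library"))  # → [('a.mkv', 700), ('b.mkv', 900)]
```

The real index must also be refreshed when the disk contents change, which is what the scheduled maintenance window is for.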
PolicyFS includes maintenance jobs for tiered storage: move colder files to an archive tier, apply queued changes, and refresh metadata. This works well with systemd timers and a simple “maintenance window” model.
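A systemd timer for the overnight window might look like this sketch; the unit name and schedule are hypothetical, so adapt them to the units shipped in the package:

```ini
# /etc/systemd/system/pfs-maintain.timer (hypothetical unit name)
[Unit]
Description=Run PolicyFS maintenance during the overnight window

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

With `Persistent=true`, a missed window (machine asleep at 03:00) runs on the next boot rather than being skipped.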
Workflow
Expose a single mountpoint and keep your directory layout stable as your storage grows.
Use explicit routing rules so new files land on SSDs (or your preferred fast tier).
Move colder files to archive disks and apply queued changes during a maintenance window.
When configured, metadata-heavy scans can avoid touching sleeping archive disks unnecessarily.
Use cases
Media servers are a common example, but PolicyFS is also useful anywhere you want one merged path across many disks, explicit placement rules, and a predictable maintenance window.
Keep a single path for your library while scaling storage over time. Reduce the “scan wakes every disk” problem by separating normal access from scheduled maintenance.
Write new footage to a fast tier, then migrate older recordings to an archive tier on a schedule without changing paths.
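An NVR layout under the same configuration schema might look like this; paths and IDs here are hypothetical:

```yaml
# Hypothetical NVR mount: new recordings land on the fast tier,
# the archive HDD is indexed so retention scans need not wake it.
mounts:
  nvr:
    mountpoint: /mnt/pfs/nvr
    storage_paths:
      - id: ssd1
        path: /mnt/nvme/footage
      - id: hdd1
        path: /mnt/hdd1/nvr
        indexed: true
    routing_rules:
      - match: "**"
        targets: [ssd1, hdd1]
        write_policy: first_found   # prefer the SSD while it has space
```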
Comparison
mergerfs is a mature FUSE union filesystem with broad POSIX coverage and a large install base. PolicyFS (pfs) is narrower, focusing on explicit storage placement and metadata behavior. If you need maximum compatibility or support for complex workloads, mergerfs is likely a better fit.
| Feature | pfs | mergerfs |
|---|---|---|
| FUSE-based storage pooling | ✓ | ✓ |
| Path-pattern routing rules | ✓ | — |
| Multiple write target policies | ✓ | ✓ |
| Optional metadata index (reduced HDD wakeups) | ✓ | — |
| Queued changes (applied later) | ✓ | — |
| Built-in tiered storage mover | ✓ | — |
| POSIX feature coverage | partial | ✓ |
| Maturity & ecosystem | newer | established |
pfs trades POSIX breadth for explicit, inspectable behavior. mergerfs covers more edge cases and has a longer track record.
Configuration
A minimal two-tier setup: SSD as a write cache, HDDs as indexed archive storage.
```yaml
# /etc/pfs/pfs.yaml
mounts:
  media:
    mountpoint: /mnt/pfs/media      # single path your apps use
    storage_paths:
      - id: ssd1
        path: /mnt/nvme/cache       # new files land here first
      - id: hdd1
        path: /mnt/hdd1/media
        indexed: true               # metadata cached in SQLite; disk stays asleep
      - id: hdd2
        path: /mnt/hdd2/media
        indexed: true
    routing_rules:
      - match: "**"
        targets: [ssd1, hdd1, hdd2]
        write_policy: most_free     # new files go to the disk with most free space
```
Install the .deb package from GitHub Releases; it includes systemd units for the maintenance jobs.