



ZFS Plugin For OMV7




The ZFS plugin makes it easy for users to take advantage of ZFS, streamlining installation and making ZFS' most important features available within Openmediavault's GUI.

ZFS (the Zettabyte File System) is a high-performance, scalable file system and logical volume manager designed by Sun Microsystems, which is now part of Oracle. It was originally developed for the Solaris operating system and is the granddaddy of COW (Copy On Write) filesystems. ZFS has since been ported to other platforms, including Linux and FreeBSD. Having been under constant development since its creation for Sun's Solaris servers in 2001, ZFS is very mature.

Currently, OpenZFS on Linux is sponsored by Lawrence Livermore National Laboratory. It is well funded and will be fully supported into the foreseeable future and, likely, beyond.

Following are some key features and characteristics of ZFS:

  • Data Integrity: ZFS uses a copy-on-write mechanism and checksums for all data and metadata, ensuring that any corruption can be detected and corrected.
  • Snapshots and Clones: ZFS allows for the creation of snapshots, which are read-only copies of the file system at a specific point in time. Clones are writable copies of snapshots, enabling efficient data management and backup.
  • Pooled Storage: ZFS combines the concepts of file systems and volume management, allowing multiple file systems to share the same storage pool. This simplifies storage management and improves efficiency.
  • Scalability: ZFS is designed to handle large amounts of data, making it suitable for enterprise-level storage solutions. It can manage petabytes of data and supports very large file systems.
  • RAID Functionality: ZFS includes built-in RAID capabilities, allowing users to configure redundancy and improve data availability without the need for separate hardware RAID controllers.
  • Compression and Deduplication: ZFS supports data compression and deduplication, which can save storage space and improve performance.
  • Self-Healing: When ZFS detects data corruption, it can automatically repair the affected data using redundant copies, enhancing data reliability.
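Many of these features are exposed as simple pool or filesystem properties. As a brief illustration only (the pool name tank below is a placeholder, and the plugin's GUI described later handles these settings for you), properties such as compression can be inspected and set from the command line:

zpool list                          # list the pools ZFS currently knows about
zfs get compression,checksum tank   # show the compression and checksum properties of a pool named "tank"
zfs set compression=lz4 tank        # enable lz4 compression; child filesystems inherit it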

More detailed information on capabilities and limitations is available → here.

  Note
While the above external resource is informative, creating a ZFS pool on the command line, in accordance with external references, is not recommended. This document will walk users through a ZFS installation process that is compatible with the ZFS plugin and Openmediavault.

  • PuTTY is a prerequisite for installing OMV-Extras and for working with ZFS on the command line.
    PuTTY is an SSH client, installable on a Windows client, that allows users to connect to their server and get on the server's command line. Installation and use guidance for PuTTY can be found →here.
  • OMV-Extras is a prerequisite for installing the kernel plugin. Installation and use guidance, for the OMV-Extras plugin can be found →here.
  • The Kernel Plugin is required for ZFS. After OMV-Extras is installed, on OMV's left side menu bar go to System, Plugins. Find, select, and install the openmediavault-kernel 7.x.x plugin.

With a focus on getting new users started:
Most of the documentation at OMV-Extras.org is written with a focus on “How-To” do a specific task. Further, in most cases, topics on this site are geared toward beginners. While getting to a running ZFS installation can be laid out in a “How-To” format, ZFS and its RAID equivalents are NOT beginner topics. Accordingly, this document will support the “How-To” route along with explanations (very brief in nature) to inform beginners of ZFS basics and to prevent users from straying too far from reasonable norms.

As the “How-To” path is laid out, overview explanations of key concepts are provided along with links to more extended information. For beginners and others who have had little to no exposure to ZFS, taking a few minutes to read and understand ZFS related concepts will increase understanding and dispel some of the myths and mysticism related to this unique file system.

TL;DR links allow intermediate or expert users to jump straight to installation steps.

There are a great many misunderstandings with regard to ZFS. This section will go over a few of them:

(TL;DR - send me to → Kernels and Their Impact.)

“I heard that ZFS requires massive amounts of RAM” or “ZFS requires a strong CPU”.
While all use cases are not the same, for the sake of this discussion, we'll assume that users reading this document are NOT Corporate or Datacenter Admins. The assumption will be that readers are home server users, server admins for small businesses, or other entities with 25 or fewer users. In other words, when compared to Enterprise level network traffic, we're talking about relatively “light usage”.

The author of this document, in personal experience with running ZFS, has set up a 4TB pool on a host with 4GB of RAM and an older Atom processor (read, “a weak CPU”). File server performance for a few users, along with streaming data, was fine. Memory might become an issue only if “dedup” (deduplication of data) is turned ON. (This is an Enterprise feature that is OFF by default.) In most home or small business use cases, ZFS' CPU requirements are modest.

“ECC RAM is required to run ZFS”.
As is the case with most file server and NAS installations, ECC is desirable but not required. ECC is designed to correct randomly “flipped bits” in RAM, notionally caused by cosmic rays. While flipped RAM bits could cause an errored disk write, a more likely outcome would be a kernel or application error. Data stored and checksummed, on a spinning hard drive or an SSD, is another matter altogether. Correcting storage media errors is a task that ZFS handles well.

“ZFS is eating all of my RAM!”
Actually, this is a good thing. If memory is unused and ZFS needs RAM for housekeeping chores (a scrub, for example) or for a copying operation, ZFS will use existing RAM to facilitate and speed up I/O. Further, ZFS will hold that RAM until another process requests it, at which point ZFS will release RAM to the requesting process. Assuming that a reasonable amount of RAM has been provisioned (4GB or more), even if most of a ZFS server's RAM “appears” to be in use, there's nothing to worry about.
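For those who want to see this for themselves, the ARC (ZFS' adaptive read cache) can be inspected, and optionally capped, from the command line. This is a minimal sketch; the 4GiB limit below is only an example value, the file name is a common convention, and a reboot is needed for the module setting to take effect:

awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats   # current ARC size and its ceiling, in bytes

# Optional (as root): cap the ARC at 4GiB via the zfs_arc_max module parameter
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf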

ZFS is licensed under the CDDL, which is a “free, open source” license. Due to a perceived (but never tested in a Court of Law) licensing conflict, Debian Linux does not build ZFS kernel modules into its kernels by default. This is more of a “legal fiction” than anything else, in that the OpenZFS license is simply another of several “free, open source” licenses, not unlike Debian's GNU General Public License and a handful of other free OSS licenses contained within the Debian distros (as outlined →here). Since Openmediavault is based on Debian Linux, ZFS is provided as a plugin to prevent any issue that may result from a licensing conflict.

In the final analysis, for the end user, this free license “wrangling” is a non-issue.

TL;DR: “I'll choose my kernel”. Send me to → Installation.

Openmediavault installs with the Debian Backports kernel by default. The standard Debian kernel is available, but must be selected. A third option, the Proxmox kernel, is installable via the Kernel plugin.
The following are the pros and cons of each kernel for ZFS:

The Debian Backports Kernel

The Backports Kernel (OMV's default kernel) is used primarily for its support of the latest hardware and, along similar lines, the latest software packages. The issue with the backports kernel, where ZFS is concerned, is that a kernel upgrade may be offered for installation that does not yet have matching ZFS packages in its repos. (This has happened to the author of this doc.) After such a backports kernel upgrade, a ZFS pool may appear to “disappear”. The pool still exists, but “fixing the issue” requires booting into an older kernel to see the existing pool until the new kernel's ZFS repository packages “catch up”. For this reason alone, the backports kernel is not recommended.

The Standard Debian Kernel

The Standard Debian Kernel (selectable) can be used for ZFS. However, since ZFS kernel modules are not installed in the Debian kernel by default, they must be built by the ZFS plugin when it is installed. While this process works, building the modules is a long process that requires continuous access to online repos. Accordingly, the potential for a build error exists. For this reason, while the Standard Kernel is very usable for ZFS, it is not ideal.

The Proxmox Kernel

The Proxmox Kernel is an Ubuntu-based kernel that has ZFS modules prebuilt and compiled in by default. However, the Kernel plugin is required to install the Proxmox kernel. Among its other useful features, the kernel plugin can pull and install a Proxmox kernel and make it the default kernel when booting. As Proxmox kernel upgrades become available and are performed, the kernel's repos will always have the required packages to support ZFS. Further, since the Proxmox kernel is financially supported by the Proxmox virtualization project, the kernel is exhaustively tested with ZFS modules installed before it's made available to the public. The bottom line: using the Proxmox kernel decreases the possibility of an installation error and guarantees ZFS support through kernel upgrades, while increasing overall server reliability.

Kernels for ZFS Support - The Bottom line


ZFS with the backports Debian kernel: This is a bad idea. Problems are possible with each backports kernel update.
ZFS with the standard Debian kernel: This combination will work, but it's not ideal.
ZFS with the Proxmox kernel: This is the best case scenario for ZFS.

To get started with ZFS, and to create an easy installation path to a stable server, some preliminary setup and settings adjustments are recommended.

First, bring the openmediavault server (hereafter known as OMV) up-to-date by applying all pending updates:

Under System, Update Management, Updates, click the Install Updates button. Confirm and Yes.

(Depending on the number of updates pending, this may take some time. If the installation is new with several updates pending, it may take more than one update session to fully update the server.)

At this point, a user choice is required:



As previously mentioned, in preparing to install ZFS, disabling backports kernels is highly recommended.

Under System, OMV-Extras, click on Disable backports
(This may take a few minutes to complete. When End of Line appears, click Close to finish.)



Since the above process changes software repositories, click on apt clean repos and apt clean.

While it's not absolutely necessary, to ensure that the standard Debian kernel and its repos are aligned, consider rebooting the server.

When complete, skip the following and proceed directly to Installing the ZFS Plugin.



Under System, Kernel, select the download Proxmox icon and select a kernel.

(While this selection is the user's choice, the oldest kernel may result in an avoidable kernel upgrade in the near future, while the newest kernel will not be as well tested in field conditions. The “middle of the road” kernel is recommended.)



The dialog box will recommend rebooting to complete the installation of the Proxmox kernel.
Reboot now.

After the reboot is complete, under System, Update Management, Updates, check for updates.

It is likely that Proxmox related updates will be available. Install these updates.





Under System, Kernel, take note of the kernels available.



The kernel ending with -pve is the Proxmox kernel and it is now the default.
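To double-check from the command line (entirely optional), the running kernel and the installed kernel packages can be listed; exact package names vary by release:

uname -r                                                        # the running kernel should end in -pve
dpkg --list | grep -E 'linux-image|pve-kernel|proxmox-kernel'   # installed kernel packages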



Optional: Non Proxmox kernels can be removed.

TL;DR proceed to → Installing the ZFS plugin.

Removing non-Proxmox kernels is recommended because, when Openmediavault is updated, the remaining Debian kernels will be updated as well. These updates will also add unnecessary entries for the newer Debian kernels to the grub bootloader. While rare, grub/kernel updates occasionally do not go well. Issues with unused Debian kernels and their grub updates can be prevented by removing non-Proxmox kernels.

To remove non-Proxmox kernels:
Under System, Kernel, click on the Proxmox icon and select Remove non-Proxmox Kernels from the menu.



When the popup dialog box displays END OF LINE, click the Close button.
Reboot


After the reboot, under System, Kernel:
Only Proxmox kernels should be displayed (ending with -pve), along with memory testing utilities and other utilities that may have been previously installed.



Under System, Plugins, scroll all the way to the bottom. Highlight openmediavault-zfs 7.X.X and click the down arrow to install.

The installation pop up will proceed until END OF LINE appears. At that point, click the Close button.
In most cases, the GUI will reset, which changes the left side menu, adding ZFS to the Storage pop-down.
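Optionally, the installation can be verified from the command line; version numbers will vary with the release installed:

zfs version          # reports the userland tools and the kernel module version
lsmod | grep zfs     # confirms the ZFS kernel module is loaded (it may load on first use)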


Pre-Pool Creation - General Info

- A ZFS “Pool” is made up of one or more “VDEVs”. More detailed information on VDEVs can be found → here. It's recommended that new ZFS users review the various VDEV types, as the selections made during the following steps will have an impact on future Pool maintenance and expansion.
- During the creation of a “Pool”, in accordance with user selections, the installation process creates and adds the first VDEV automatically.
- A Pool can have one or more VDEVs, and a new VDEV can be added to an existing pool at any time, increasing the pool's size (see the example below).
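This document uses the GUI for pool management but, purely as an illustration of the concept, expanding a pool by adding a second mirror VDEV boils down to an operation like the following (the pool name tank and the device names are placeholders; remember that adding a VDEV is permanent):

zpool add tank mirror /dev/sdc /dev/sdd   # add a second mirror VDEV (two new disks) to the pool "tank"
zpool status tank                         # review the pool layout afterwards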



Under Storage, ZFS, Pools, click on the create (+) icon.





The "Create" window will pop up. Note that making selections on this page will create a pool and the first "VDEV".

Name:
The user's choice. However, limit the name to letters and numbers, with a reasonable length.

Pool type:

The choices are: Basic, Mirror, RAID-Z1, RAID-Z2, RAID-Z3.
For more details on these selections, see → VDEV types.


Devices:
When clicked, a list of available drives will pop up.

To see and select drives in this field, they must be wiped under Storage, Disks. Generally, a Quick wipe will do.

Mountpoint:
If left blank (recommended), the pool will be mounted, by name, at the root of the filesystem tree (e.g., a pool named ZFS1 mounts at /ZFS1).

Device alias:
This will be used to name devices (hard drives). Leaving at default is recommended.

Force creation:
Checking this box should not be necessary and is not recommended.

Set ashift: Checking this box is HIGHLY recommended. Checking the box sets ashift to 12, which matches the 4K sector size of most current spinning hard drives and SSDs. (More information on ashift is available → here.)

If the box is not checked, the default ashift value passed to the pool will be 0. As noted in the previous link, this will not be good for performance.

Compression:
Optional. However, checking this box may save some drive/pool space. When this box is checked, lz4 is the offered default. LZ4 is a good overall choice for speed and efficiency, with little to no performance penalty.



In the following example, a mirror was chosen for disk redundancy and default automatic error correction.
(While mentioned in VDEVs, this → external reference explains why mirrors are a good choice.)

Once selections have been made, click Save.
Confirm the Pending Change.

If a WARNING dialog pops up containing "invalid VDEV specification", it may be necessary to check the Force creation box.
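Once the pool has been created, the settings chosen above can be reviewed from the command line (substitute the actual pool name for ZFS1):

zpool status ZFS1          # overall pool layout and health
zpool get ashift ZFS1      # should report 12 if the Set ashift box was checked
zfs get compression ZFS1   # shows the compression setting on the pool's root filesystem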


TL;DR Take me to → Adding Filesystems.

While it’s possible to create Linux folders directly at the root of a ZFS pool, creating dedicated ZFS filesystems offers many advantages.

ZFS filesystems are logical containers within the pool that can have their own assignable properties, such as compression, quotas, and more. These properties can be set individually or inherited from the parent pool.

One of the most powerful features of using ZFS filesystems is that each filesystem can have its own set of snapshots, making backups and versioning much more flexible and granular.

As a best practice, ZFS filesystems can be used to organize and separate different types of data — such as Documents, Music, Pictures, and Videos. This allows you to fine-tune settings (e.g., compression for documents, record size for video files) and manage snapshots independently for each data type, making your NAS more efficient and easier to maintain.
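The GUI steps that follow create these filesystems; for reference only, the equivalent command-line operations (using the hypothetical pool name ZFS1 and example filesystem names) look like this:

zfs create ZFS1/Documents            # create a filesystem for documents
zfs create ZFS1/Videos               # create a filesystem for video files
zfs set recordsize=1M ZFS1/Videos    # example of per-filesystem tuning for large media files
zfs list                             # list filesystems and their mountpoints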



Under Storage, ZFS, Pools:

Highlight the Pool and click the "+" (add) icon.





In the Add dialog box, give the Filesystem a name, and click Add.




Confirm the Pending Change.

Repeat as needed for the filesystems necessary for the use case.



Openmediavault has a built-in scrub scheduled for the second Sunday of every month. Generally speaking, a monthly Pool scrub is all that is needed.
Scrub results will be available under Storage, ZFS, Pools. Highlight the Pool, click the Tools Icon and select Details from the pop-down.

Using the commands below and a scheduled task (found under System, Scheduled Tasks), it's possible to schedule additional scrubs, cancel a scrub or check a scrub's status.



(Where ZFS1 appears in the following, substitute the name of the user's Pool.)

zpool scrub ZFS1          # start a scrub of the pool
zpool scrub -s ZFS1       # stop a scrub in progress
zpool status -v ZFS1      # check pool status, including scrub progress and results


A ZFS snapshot is a read-only, point-in-time copy of a ZFS file system or volume. It captures the exact state of data at the moment the snapshot is taken, without duplicating the actual data. This means snapshot creation is very fast and uses very little additional space initially.

Snapshots are useful for:

  • Recovering files that were accidentally deleted or modified.
  • Creating consistent backups.
  • Safely testing changes or upgrades.


Since snapshots are read-only, they cannot be modified, which helps ensure data integrity. You can later clone a snapshot, or roll back to it, if needed. Snapshot clones and rollbacks make the data contained in ZFS filesystems highly resistant to ransomware and other data-altering viruses.
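The GUI steps below take a snapshot; for reference, the snapshot, rollback, and clone operations map onto a handful of commands (the filesystem and snapshot names here are examples only):

zfs snapshot ZFS1/Documents@before-upgrade                        # take a snapshot of a filesystem
zfs rollback ZFS1/Documents@before-upgrade                        # roll the filesystem back to the snapshot (discards later changes)
zfs clone ZFS1/Documents@before-upgrade ZFS1/Documents-restored   # or create a writable clone of the snapshot instead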

Taking A Snapshot


A manual Snapshot of a Pool or an individual filesystem can be taken under Storage, ZFS, Pools:
- Highlight the Pool or an individual filesystem.
- Click the “+” (add) icon.
- From the popdown, select + Add filesystem|snap|volume.
- In the Add filesystem, snapshot, or volume… dialog box, in the Type field, select Snapshot. Finally, click Add.

Note the name of the Snapshot. In addition to being attributed to the "Parent" ZFS1/Documents filesystem, it's identified by the year, month, day and the time taken.
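Snapshots, including those taken from the GUI, can also be listed from the command line, and each filesystem's snapshots are browsable read-only under its hidden .zfs/snapshot directory (the path below is an example):

zfs list -t snapshot                 # list all snapshots and the space they consume
ls /ZFS1/Documents/.zfs/snapshot     # browse a filesystem's snapshots directly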



While taking an occasional manual snapshot is worthwhile, the snapshotting process can be fully automated using zfs-auto-snapshot. A document describing what zfs-auto-snapshot is, how to install it and set it up is available → here.




General:
As noted before, when creating the first Pool a new VDEV is created within the new pool in accordance with the user's selections.
There are a few rules concerning Pools and VDEVs.

First: If a VDEV is lost, the Pool is lost. There is no recovery from this situation.
Second: Disk redundancy is at the VDEV level. If redundancy is not used, the failure of one disk will result in the failure of the Pool.
Therefore, while possible, it is not advisable to expand a Pool that already has a RAID VDEV by adding a single Basic volume, because a Basic volume has no redundancy.

  • VDEVs are made up of physical block devices, i.e., storage drive(s).
  • A Pool can be expanded by adding new VDEVs. However, the addition of a VDEV, to a Pool, is PERMANENT. A VDEV cannot be removed from a Pool.
  • Once created, a VDEV cannot be modified; a Basic volume will always remain a Basic volume. A mirror (RAID1 equivalent) will remain a mirror. RAID-Z1 (a RAID5 equivalent) will remain RAID-Z1. RAID-Z1 cannot be upgraded to RAID-Z2 (a RAID6 equivalent), etc.
  • If a single VDEV is lost, in a multi-VDEV Pool, the entire Pool is lost.
  • Disk redundancy is at the VDEV level. Accordingly, it makes no sense to add a Basic disk VDEV, to a RAID level VDEV, to expand a Pool. Per the rules, a VDEV can't be removed and if the Basic (single disk) fails, the entire Pool will be lost.


  • Basic:

Basic is a single disk volume. A single “Basic” disk is fine for basic storage. A scrub will reveal data integrity errors but, using default attributes, data errors will not be automatically corrected. However, if the filesystem attribute copies=2 is set, that filesystem residing on a Basic volume will autocorrect data errors. (The cost associated with 2 copies of all files is that it uses twice the disk space.) As noted earlier, using a Basic volume to expand a Pool with a RAID VDEV is a bad idea.

  • Mirror:

Also known as a Zmirror, a ZFS mirror is a RAID1 equivalent. A mirror requires a 2 disk minimum. Since there are always at least 2 copies of all files in a mirror, data integrity scrubs automatically correct data errors. Further, it's worth noting that more than 2 disks can be added to a single mirror; adding a third disk creates a 3-way mirror and a third copy of all files. While the cost is the loss of hard drive space, multiple drives in a mirror configuration provide maximum data integrity and safety. For the reasons stated in this → reference, using VDEV(s) comprised of one or more mirrors should be considered and is recommended.

For RAID-Z implementations, it is generally recommended to run an “odd” number of drives.

  • RAID-Z1: With one striped parity disk, this is the equivalent of RAID5. (RAID-Z1 requires 3 disks minimum. A rule of thumb maximum would be 7 drives.)
  • RAID-Z2: With two striped parity disks, this is the equivalent of RAID6. (RAID-Z2 requires 4 disks minimum. A rule of thumb maximum would be 11 drives )
  • RAID-Z3: With three striped parity disks, RAID-Z3 has no traditional RAID equivalent but it could, notionally, be called RAID7. (RAID-Z3 requires 5 disks minimum. A rule of thumb maximum would be 15 drives.)




To reiterate; if any one VDEV in a Pool is lost, the entire pool is lost.
Accordingly, if redundancy is to be used it must be set up at the VDEV level.

Consider the following illustration:

A pool, with a VDEV of one disk (Basic), is lost if the disk fails. That's straightforward.
A pool with a mirror VDEV will survive if one disk in the mirror is lost.
A pool with a RAID-Z1 VDEV, also, will survive if one disk in the array is lost.

While it will work, mixing dissimilar VDEVs in a Pool, in the manner illustrated, is extremely bad practice. As previously mentioned, once a VDEV is added to a pool, it can't be removed. Therefore, the Pool as illustrated will be lost if there's a problem with the Basic drive because the single drive cannot be replaced.




The Recommended Pool / VDEV to Start Using ZFS

Pools made up of mirrors offer the most features and advantages. Disk redundancy is taken care of and, since there are two copies of all files, bit-rot damage and other silent damage to files is self-healing. If needed, another mirror (a pair of disks) can be added to expand the pool at any time.

Data Integrity is one of ZFS' hallmark features. In many file systems, silent corruption goes unnoticed and unchecked, largely because they have no mechanism to detect “bit-rot”. There are numerous causes of bit-rot, including the inevitable degradation of magnetic media with age, SSDs with bits flipped by cosmic rays, and other scenarios. Another consideration is the way hard drives and SSDs fail. Contrary to popular belief, storage media does not fail instantly, like flipping a light switch. Often drives fail slowly and silently, irreversibly corrupting data. Without a filesystem that actively monitors the health of stored data, data corruption may not be discovered until well after sensitive data is irretrievably lost. And since corruption is often silent and may occur over an extended period, corrupted files may be copied to a backup device as well.

Conversely, by using file checksums and “scrubs”, ZFS actively monitors the health of user data. ZFS scrubs may detect issues with a hard drive well before SMART stats indicate a developing drive issue. Keeping the primary server's data clean ensures that backup copies of data are clean as well.

How ZFS Data Integrity Works

When a file is copied into a ZFS Pool, the file is parsed and assigned a checksum. A checksum may also be referred to as a “hash”. Every file will have a unique checksum. If a file is modified, a new checksum is calculated and assigned to the file. In this manner, the exact state of the file is tracked as it was when it was created or last modified. It's important to note that if the file is altered in any other respect, even at the bit level, it will no longer match its checksum. Detecting changes that did not occur as a result of a filesystem write is what data integrity is all about.
A housekeeping chore called a “scrub” can be used to compare file checksums against actual file content. Scrubs should be run periodically to ensure that data is clean and to serve as a warning of potential degradation of storage media.

In the following illustration:
- The file in the Pool on the left matches its checksum. If all files match their checksums, a scrub would report:
scan: scrub repaired 0B in <Length of Time required for the scan> with 0 errors on <Day, Date, Time, Year>
- The file in the Pool on the right does not match its checksum. Since the Pool has no redundancy, this file might be reported as a “checksum error” and/or an “unrecoverable error”.

Automatic File Restoration

Full safety, with automatic file restoration, is available “if” two copies of the same file exist.

Consider the following illustration:

- On the left:
When a file is created, a ZFS mirror (RAID1 equivalent) creates two identical copies of the file, one on each hard drive, and assigns them identical checksums.
- In the middle:
A scrub found that one of two previously identical files no longer matched its checksum.
- On the right:
ZFS will automatically remove the corrupted file and restore it from the valid copy.


Automatic file restoration, with silent corruption protection is available under two conditions:

- When using Zmirrors (where there are two copies of all files).
- In all other VDEVs where Basic Volumes or RAID-ZX is used, AND where filesystems have the copies=2 feature enabled.

Consider the following:
In cases where filesystems are using the copies=2 feature, automatic file restoration works the same as it would with Zmirrors where 2 file copies exist natively.


Practical considerations:
- While copies=2 can be used with a Basic volume, it should be noted that in the event of a disk failure both file copies would be lost. However, if data integrity and automatic restoration is used at the primary server, data on a backup server would be clean.
- RAID-Z implementations reconstruct errored data a bit differently. While some data reconstruction is possible, using parity calculations, RAID-Z does not provide for restoration of silent errors. While RAID-Z provides disk redundancy, copies=2 would be required to provide for maximum data protection and file restoration.
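Where the copies=2 approach is used, it is a single per-filesystem property, shown here with example names; note that it only applies to data written after the property is set, and that it doubles the space those files consume:

zfs set copies=2 ZFS1/Documents   # store two copies of every block written to this filesystem from now on
zfs get copies ZFS1/Documents     # verify the setting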

Expanding a VDEV:

Spare drives can be added to RAID VDEVs, but they cannot (currently) be used to expand the VDEV. However, Zmirrors and RAID-ZX arrays can be “upgraded” for increased size by replacing each of the array's drives, one by one, with larger drives, allowing each resilver to complete before moving on. The pool property that supports this is “autoexpand” (check that it is enabled on the pool). However, it should be noted that significant RISK is involved in replacing and resilvering numerous drives, especially if they are old. Before beginning such a process, ensuring the server's backup is up-to-date is highly recommended.
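For reference, and with the same caution about risk, the in-place upgrade described above boils down to operations like the following (pool and device names are placeholders, and each resilver must finish before the next drive is swapped):

zpool get autoexpand ZFS1              # confirm whether automatic expansion is enabled
zpool set autoexpand=on ZFS1           # enable it if necessary
zpool replace ZFS1 /dev/sdb /dev/sde   # replace one member drive with a larger one
zpool status ZFS1                      # watch resilver progress; repeat for the remaining drives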
