The ZFS plugin makes it easy for users to take advantage of ZFS, with a straightforward installation and with ZFS's more important features made available within Openmediavault's GUI.
ZFS (the Zettabyte File System) is the Granddaddy of COW (Copy On Write) filesystems. Having been under constant development since its creation for Sun's Solaris operating system in 2001, ZFS is very mature. Currently, OpenZFS on Linux is sponsored by Lawrence Livermore National Laboratory. In short, ZFS on Linux is very well funded and will be fully supported into the foreseeable future and, likely, beyond.
Among ZFS's several features, the more important are data integrity and repair, and data restoration (via snapshots).
More detailed information on capabilities and limits is available → here.
Note
While the above external resource is informative, creating a ZFS pool on the command line, in accordance with external references, is not recommended. This document will walk users through a ZFS installation process that is compatible with the ZFS plugin and Openmediavault.
With a focus on getting new users started:
Most of the documentation at OMV-Extras.org is written with a focus on “How-To” do a specific task. Further, in most cases, topics on this site are geared toward beginners. While getting to a running ZFS installation can be laid out in a “How-To” format, ZFS and its RAID equivalents are NOT beginner topics. Accordingly, this document will take the “How-To” route, along with very brief explanations to inform beginners of ZFS basics and to keep users from straying too far from reasonable norms when setting up ZFS.
If users are only interested in an A to B path for setting up ZFS, “TL;DR” links will bypass suggested reading and go straight to setup steps.
There are a great many misunderstandings with regard to ZFS. This section will go over a few.
(TL;DR - send me to → The Installation Process.)
“I heard that ZFS requires massive amounts of RAM” or “ZFS requires a strong CPU”.
While all use cases are not the same, for the sake of this discussion, we'll assume that users reading this document are NOT corporate or datacenter admins. The assumption will be that readers are home server users or server admins for small businesses or other entities with 25 users or less. In other words, when compared to Enterprise level network traffic, we're talking about relatively “light usage”.
The author of this document, in personal experience with running ZFS, has set up a 4TB pool on a host with 4GB of RAM and an older Atom processor (read, a weak CPU). Performance for a few users, along with streaming data, was fine. Memory becomes an issue if “dedup” (deduplication of data) is turned ON. This feature is OFF by default. Similarly, without numerous concurrent streams, without transcoding, and without numerous Docker containers or KVM virtual machines, ZFS's CPU requirements are relatively modest.
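For reference, whether deduplication is enabled can be confirmed from the command line once a pool exists; “tank” below is just a placeholder pool name:

```
# Show the dedup property for the pool's root dataset (the default is "off").
zfs get dedup tank
```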
“ZFS is eating all of my RAM!”
Actually, this is a good thing. If memory is unused and ZFS needs RAM for a housekeeping chore (a scrub, for example) or for a copying operation, ZFS will use existing RAM to facilitate and speed up I/O. Further, ZFS will hold the same RAM until another process needs it. At that point, ZFS will release RAM to the requesting process.
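For those curious how much memory the ARC (ZFS's adaptive read cache) is actually holding, it can be inspected from the command line; a minimal sketch, assuming the ZFS utilities are installed:

```
# Summarize ARC usage (current size, target size, hit rates).
arc_summary

# Or read the raw kernel statistics directly: current ARC size and its upper limit.
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
```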
ZFS is licensed under the CDDL, which is a “free” open source license. Due to a perceived (but never tested in a court of law) licensing conflict, Debian Linux does not build ZFS kernel modules into its kernels by default. This is more of a “legal fiction” than anything else, in that the OpenZFS license is simply another version of several “free open source” licenses, not unlike Debian's GNU General Public License and a handful of other free OSS licenses contained within the Debian distro (as outlined → here). Since Openmediavault is based on Debian Linux, ZFS is provided as a plugin to prevent any issue that may result from a licensing conflict.
In the final analysis, for the end user, this free license “wrangling” is a non-issue.
Openmediavault installs with the Debian backports kernel by default. The standard Debian kernel is available, but must be selected. A third option, the Proxmox kernel, is installable via the Kernel plugin.
Following are the pros and cons of each kernel, where ZFS is concerned:
The Backports Kernel (OMV's default kernel) is used primarily for its support of the latest hardware and, along similar lines, for the latest software packages. The issue with the backports kernel, where ZFS is concerned, is that it's possible for a kernel upgrade to be offered for installation before ZFS packages exist for it in the repos. (This has happened to the author of this doc.) After a backports kernel upgrade, this may result in a ZFS pool “disappearing”. The pool still exists but “fixing the issue” requires booting into an older kernel, to see the existing pool, until the new kernel's ZFS repository packages “catch up”. For this reason alone, the backports kernel is not recommended.
The Standard Debian Kernel (selectable) can be used for ZFS. However, since ZFS kernel modules are not included in the Debian kernel by default, they must be built by the ZFS plugin. While this process works, building the modules is a long process that requires continuous access to online repos. Accordingly, the potential for a build error exists. For this reason, while the Standard Kernel is very usable for ZFS, it is not ideal.
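If the standard Debian kernel is used anyway, the module build can be checked after the plugin finishes; a minimal check, assuming the module is built through DKMS, as is typical with the stock kernel:

```
# List DKMS modules and their build status; the zfs module should report
# "installed" for the running kernel.
dkms status
```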
The Proxmox Kernel is an Ubuntu-based kernel that has ZFS modules prebuilt and compiled in by default. However, the Kernel plugin is required to install the Proxmox kernel. Among its other useful features, the Kernel plugin can pull and install a Proxmox kernel and make it the default kernel when booting. As Proxmox kernel upgrades become available and are performed, the kernel's repos will always have the required packages to support ZFS. Further, since the Proxmox kernel is financially supported by the Proxmox virtualization project, the kernel is exhaustively tested with ZFS modules installed before it's made available to the public. The bottom line: using the Proxmox kernel decreases the possibility of an installation error and guarantees ZFS support through kernel upgrades, while increasing overall server reliability.
- ZFS with the backports Debian kernel: this is a bad idea. Problems are possible with each backports kernel update.
- ZFS with the standard Debian kernel: this combination will work, but it's not ideal.
- ZFS with the Proxmox kernel: this is the best case scenario for ZFS.
To get started with ZFS and to create an easy installation path to the most stable server possible, some preliminary setup, settings and adjustments are recommended.
First, bring the server up-to-date by applying all pending updates:
Under System, Update Management, Updates, click the Install Updates button. Confirm and Yes.
(Depending on the number of updates pending, this may take some time. If the installation is new with several updates pending, it may take more than one update session to fully update the server.)
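For those comfortable with a terminal, the same result can be reached with Openmediavault's own update helper; a minimal alternative, assuming a recent OMV release that ships the omv-upgrade wrapper:

```
# Refresh package lists and apply all pending updates (equivalent to the GUI update).
sudo omv-upgrade
```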
At this point, a user choice must be made:
As previously mentioned, in preparing to install ZFS, disabling the backports kernel is highly recommended.
Under System, OMV-Extras, click on Disable backports
(This may take a few minutes to complete. At the end of the process, End of Line will appear. Clicking the Close button will finish the process.)
Since the above process changes software repositories, click on apt clean repos and apt clean.
While it's not absolutely necessary, to ensure that the standard Debian kernel and its repos are aligned, consider rebooting the server.
When complete, skip the following and proceed to Installing the ZFS Plugin.
Under System, Kernel, click the Proxmox download icon and select a kernel.
(While this selection is the user's choice, the oldest kernel may result in an avoidable upgrade in the near future, while the newest kernel is not as well tested in the field.)
The dialog box will recommend rebooting to complete the installation of the Proxmox kernel. Reboot now.
After the reboot is complete, under System, Update Management, Updates, check for updates.
It is likely that Proxmox related updates will be available. Install these updates.
Under System, Kernel, take note of the kernels available.
The kernel ending with -pve is the Proxmox kernel and it is now the default.
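To confirm which kernel is actually running, the following can be used from a terminal; a -pve suffix indicates the Proxmox kernel is active:

```
# Print the running kernel release; it should end in -pve after the reboot.
uname -r
```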
Optional: Non Proxmox kernels can be removed.
TL;DR proceed to → Installing the ZFS plugin.
Removing non-Proxmox kernels is recommended because, when Openmediavault is updated, the remaining Debian kernels will be updated as well. These updates will also add unnecessary entries for the newer Debian kernels to the grub bootloader. While rare, grub/kernel updates occasionally do not go well. Issues with unused Debian kernels and their grub updates can be prevented by removing non-Proxmox kernels.
To remove non-proxmox kernels:
Under System, Kernel, click on the Proxmox icon and select Remove non-Proxmox Kernels from the menu.
When the popup dialog displays END OF LINE, click the Close button.
Reboot.
Under System, Kernel:
Only Proxmox kernels should be displayed (ending with -pve), along with memory testing utilities and other utilities that may have been previously installed.
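If desired, the result can also be double-checked from the command line; a rough sketch (kernel package naming differs between Proxmox kernel generations, so the pattern below is intentionally broad):

```
# List installed kernel image packages; only Proxmox (pve) kernels should remain.
dpkg -l | grep -E 'linux-image|pve-kernel|proxmox-kernel'
```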
Under System, Plugins, scroll all the way to the bottom. Highlight openmediavault-zfs 7.X.X and click the down arrow to install.
The installation pop up will proceed until END OF LINE appears. Click the Close button.
In most cases, the GUI will reset, which changes the left side menu, adding ZFS to the Storage pop down.
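As a quick sanity check that the ZFS userland tools and the kernel module were installed and agree, the version of each can be printed; a minimal check, assuming the plugin installation completed cleanly:

```
# Print the ZFS userland and kernel module versions; both should be reported.
zfs version
```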
- A ZFS Pool is made up of one or more “VDEVs”.
- A VDEV can be a single disk (a basic volume) or a collection of disks in a RAID-like format (RAID1, RAID5, etc.).
Not to worry - the installation process creates the Pool and adds the first VDEV automatically.
- A pool can have multiple VDEVs, AND, a new VDEV can be added to an existing pool, increasing its size.
- If a VDEV is lost, the Pool is lost. There is no recovery from this situation.
- Disk redundancy is at the VDEV level. With a mirror as the first VDEV, adding a second VDEV made up of a single, non-redundant disk means the loss of that one disk would take the entire pool with it.
Therefore, while possible, it wouldn't be advisable to expand a Pool with a RAID VDEV, using a single Basic volume (a basic volume has no redundancy).
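For reference, once a pool has been created its VDEV layout can be reviewed at any time from the command line; a sketch assuming a pool named “tank” (substitute the real pool name):

```
# Show pool health and the VDEV structure (mirror-0, raidz2-0, etc.).
zpool status tank

# Show capacity and usage broken out per VDEV.
zpool list -v tank
```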
At this point, users should have some idea of what they're looking for.
Under Storage, ZFS, Pools, click on the create (+) icon.
The “Create” window will pop up. Note that making selections on this page will create a pool and the first “VDEV”. (More on VDEVs later.)
The Name* field:
The user's choice. However, limit the name to letters and numbers with a reasonable length.
Pool type: The choices are as follows.
Devices: To see and select drives in this field, they must be wiped under Storage, Disks. Generally, a Quick wipe will do.
As noted before, creating the first Pool also creates a new VDEV within it. There are a few rules concerning Pools and VDEVs.
Spare drives can be added to VDEVs, but they cannot (currently) be used to expand the VDEV. However, mirrors or RAID-ZX arrays can be “upgraded” for size by failing and replacing each array drive, one-by-one, with larger drives. The pool property that supports this is autoexpand, and it's on by default. However, it should be noted that RISK is involved in failing, replacing, and resilvering numerous drives, especially if they are old.
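Because the drive-by-drive upgrade described above depends on autoexpand, it is worth verifying the property on the pool (and enabling it if needed); “tank” below is a placeholder pool name:

```
# Check whether the pool will grow automatically once every drive in a VDEV
# has been replaced with a larger one.
zpool get autoexpand tank

# Enable the property if it is reported as "off".
zpool set autoexpand=on tank
```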
The various possible VDEV types are noted below:
Basic is a single disk volume. A single “Basic” disk is fine for basic storage. A scrub will reveal data integrity errors but, using default attributes, data errors will not be automatically corrected. However, if the filesystem attribute copies=2 is set, a filesystem residing on a Basic volume will autocorrect data errors. (The cost associated with 2 copies of all files is that twice the disk space is used.) As noted earlier, using a Basic volume to expand a Pool with a RAID VDEV is a bad idea.
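If extra protection on a Basic volume is wanted, the copies property is set per filesystem (dataset); “tank/data” below is a hypothetical dataset name used only for illustration:

```
# Store two copies of every block newly written to this dataset
# (roughly doubles the space used by new data).
zfs set copies=2 tank/data

# Confirm the setting.
zfs get copies tank/data
```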
Mirror: Also known as a Zmirror. A Zmirror is a RAID1 equivalent. A mirror requires a 2 disk minimum. In that there are always at least 2 copies of all files in a Mirror, data integrity scrubs automatically correct data errors.
More than 2 disks can be added to a single Mirror. Adding more than 2 disks creates additional mirrored copies, meaning more than 2 copies of all files. This increases data integrity and safety.
RAID-Z2: With two striped parity disks, this is the equivalent of RAID6. (RAID-Z2 requires 4 disks minimum. A rule of thumb maximum would be 7 drives.)
RAID-Z3: With three striped parity disks, RAID-Z3 has no RAID equivalent but it could, notionally, be called RAID7. (RAID-Z3 requires 5 disks minimum. A rule of thumb maximum would be 9 drives.)
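As a rough sizing example (ignoring metadata and padding overhead), usable RAID-Z space is approximately the number of data disks (total disks minus parity disks) multiplied by the size of the smallest disk. Six 4TB drives in RAID-Z2 would therefore yield roughly (6 - 2) × 4TB = 16TB of usable space, while the same six drives in RAID-Z3 would yield roughly (6 - 3) × 4TB = 12TB.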