Incomplete Draft Document
Not for public use
{{ :underconstruction.jpg?400 |}} \\
ZFS Plugin For OMV7
{{ :omvextras_logo4.jpg?400 |}}

====== ZFS Plugin For OMV7 ======

===== Summary =====

The ZFS plugin makes it easy for users to take advantage of ZFS, with an easy installation and with ZFS's more important features made available within Openmediavault's GUI.

===== What is ZFS? =====

**ZFS** (the **Z**ettabyte **F**ile **S**ystem) is the granddaddy of **COW** (**C**opy **O**n **W**rite) filesystems. Having been under constant development since its creation for Sun's Solaris in 2001, ZFS is very mature. Currently, [[https://zfsonlinux.org/|OpenZFS on Linux]] is sponsored by [[https://computing.llnl.gov/projects/openzfs|Lawrence Livermore National Laboratory]]. In short, ZFS on Linux is very well funded and will be fully supported into the foreseeable future and, likely, beyond.

The more important of ZFS's several features are as follows:
  * It can be used with "**Basic Volumes**" (a single disk) or for **pooled storage** (with built-in logical volume management).
  * Zmirrors, RAID-Z, and other implementations of ZFS are the functional equivalents of legacy RAID, without legacy RAID drawbacks such as the "write hole" and silent data corruption.
  * **Snapshots** (these create file, folder, filesystem, and volume histories).
  * Data **integrity verification** is automated.
  * When using zmirrors, or when using the copies=2 feature in other ZFS implementations, **data repair is automatic** after integrity verification during a scrub.

Data integrity and repair, and data restoration (via snapshots), are among ZFS's more important features. More detailed information on capabilities and limits is available -> [[https://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/|here]].
  Note
While the above external resource is informative, creating a ZFS pool on the command line, in accordance with external references, is not recommended. This document will walk users through a ZFS installation process that is compatible with the ZFS plugin and Openmediavault.
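While pool //creation// should be left to the plugin's GUI, read-only status commands are safe to run and illustrate the automated integrity checks described above. A minimal sketch, run from the command line (see PuTTY under Prerequisites, below), assuming a hypothetical pool named "tank":

<code bash>
# Show the pool's layout, health, and any read/write/checksum errors
zpool status tank

# Start a scrub: ZFS reads every block and verifies its checksum.
# On redundant VDEV's (mirror, RAID-Z), damaged blocks are repaired automatically.
zpool scrub tank

# Re-check status to watch the scrub's progress and results
zpool status tank
</code>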
===== Prerequisites =====

  * PuTTY is a prerequisite for installing OMV-Extras.\\ [[https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html|PuTTY]] is an SSH client, installable on a Windows client, that will allow users to connect to their server and get on the server's command line. Installation and use guidance for PuTTY can be found -> [[https://wiki.omv-extras.org/doku.php?id=omv7:utilities_maint_backup#putty|here]].
  * [[https://wiki.omv-extras.org/doku.php?id=misc_docs:omv_extras|OMV-Extras]] is a prerequisite for installing the kernel plugin. Installation and use guidance for the OMV-Extras plugin can be found -> [[https://wiki.omv-extras.org/doku.php?id=misc_docs:omv_extras|here]].
  * The **Kernel Plugin** is required for ZFS. After OMV-Extras is installed, on OMV's left side menu bar go to **System**, **Plugins**. Find, select, and install the **openmediavault-kernel** 7.x.x plugin.

===== Foreword =====

With a focus on getting new users started: most of the documentation at OMV-Extras.org is written with a focus on "How-To" do a specific task and, in most cases, topics on this site are geared toward beginners. While getting to a running ZFS installation can be laid out in a "How-To" format, ZFS and its RAID equivalents are NOT beginner topics. Accordingly, this document takes the "How-To" route along with explanations (very brief in nature) to inform beginners of ZFS basics and to keep users from straying too far from reasonable norms when setting up ZFS.\\ \\ If users are only interested in an A-to-B path for setting up ZFS, "**TL;DR**" links will bypass suggested reading and go straight to setup steps.

===== ZFS - General =====

There are a great many misunderstandings with regard to ZFS. This section will go over a few.\\ (TL;DR - send me to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#installation|The Installation Process]].)

==== ZFS - The Memory Myth ====

"I heard that ZFS requires massive amounts of RAM" or "ZFS requires a strong CPU."\\ While all use cases are not the same, for the sake of this discussion we'll assume that users reading this document are NOT corporate or datacenter admins. The assumption will be that readers are home server users, or server admins for small businesses or other entities with 25 users or less. In other words, when compared to Enterprise-level network traffic, we're talking about relatively "light usage".\\ \\ The author of this document, in personal experience with running ZFS, has set up a 4TB pool on a host with 4GB of RAM and an older Atom processor (read: a weak CPU). Performance for a few users, along with streaming data, was fine. Memory becomes an issue if "dedup" (deduplication of data) is turned ON; this feature is OFF by default. Similarly, without numerous concurrent streams, without transcoding, and without numerous Docker containers or KVM virtual machines, ZFS's CPU requirements are relatively modest.\\ \\ "ZFS is eating all of my RAM!"\\ Actually, this is a good thing. If memory is otherwise unused and ZFS needs RAM for a housekeeping chore (a scrub, for example) or for a copying operation, ZFS will use existing RAM to facilitate and speed up I/O. Further, ZFS will hold that RAM until another process needs it, at which point ZFS will release it to the requesting process.\\ \\
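The RAM ZFS borrows for caching is the "ARC" (Adaptive Replacement Cache). For those who want to see, or limit, what the ARC is doing, a minimal sketch follows. The 4GiB cap is an arbitrary example, not a recommendation:

<code bash>
# Current ARC size, in bytes
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats

# A fuller report (the arc_summary tool ships with the ZFS utilities)
arc_summary

# Optional: cap the ARC at 4GiB (4294967296 bytes), effective at runtime
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots via a module option
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
</code>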
==== The "Licensing" Issue ====

ZFS is licensed under [[https://opensource.org/license/CDDL-1.0|CDDL]], which is a "**free**" __open source__ license. Due to a perceived (but never tested in a court of law) licensing conflict, Debian Linux does not build ZFS kernel modules into its kernels by default. This is more of a "legal fiction" than anything else, in that the OpenZFS license is simply another of several "free open source" licenses, not unlike Debian's [[https://www.gnu.org/licenses/gpl-3.0.html|GNU General Public License]] and a handful of other free OSS licenses contained within the Debian distro (as outlined -> [[https://www.debian.org/legal/licenses/|here]]). Since Openmediavault is based on Debian Linux, ZFS is provided as a plugin to prevent any issue that might result from a licensing conflict.\\ \\ In the final analysis, for the end user, this free-license "wrangling" is a non-issue.\\

==== Kernels and Their Impact ====

Openmediavault installs with the Debian backports kernel by default. The standard Debian kernel is available, but must be selected. A third option, the Proxmox kernel, is installable via the kernel plugin.\\ Following are the pros and cons of each kernel where ZFS is concerned:\\

=== The Debian Backports Kernel ===

The backports kernel (OMV's default kernel) is used primarily for its support of the latest hardware and, along similar lines, the latest software packages. The issue with the backports kernel, where ZFS is concerned, is that a kernel upgrade may be offered for installation before matching ZFS packages exist in its repos. (This has happened to the author of this doc.) After a backports kernel upgrade, this may result in a ZFS pool "disappearing". The pool still exists but "fixing the issue" requires booting into an older kernel, to see the existing pool, until the new kernel's ZFS repository packages "catch up". For this reason alone, the backports kernel is __not recommended__.

=== The Standard Debian Kernel ===

The standard Debian kernel (selectable) can be used for ZFS. However, since ZFS kernel modules are not installed in the Debian kernel by default, they must be built by the ZFS plugin. While this process works, building the modules is a long process that requires continuous access to online repos; accordingly, the potential for a build error exists. For this reason, while the standard kernel is very usable for ZFS, it is not ideal.

=== The Proxmox Kernel ===

The Proxmox kernel is an Ubuntu-based kernel that has ZFS modules prebuilt and compiled in by default. The **Kernel plugin** is required to install the Proxmox kernel. Among the plugin's other useful features, it can pull and install a Proxmox kernel and make it the default kernel when booting. As Proxmox kernel upgrades become available and are performed, the kernel's repos will always have the required packages to support ZFS. Further, since the Proxmox kernel is financially supported by the [[https://www.proxmox.com/en/|Proxmox Virtualization project]], the kernel is exhaustively tested with ZFS modules installed before it's made available to the public. Bottom line: using the Proxmox kernel decreases the possibility of an installation error and guarantees ZFS support through kernel upgrades, while increasing overall server reliability.\\ \\
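Whichever kernel is chosen, a few commands will show what is actually running and whether a matching ZFS module exists. A minimal sketch (output will vary by system):

<code bash>
# The running kernel; a Proxmox kernel's version string ends in "-pve"
uname -r

# The ZFS userland and kernel module versions, if the module is loadable
zfs version

# With the standard Debian kernel, the module is built via DKMS;
# this shows whether a build exists for the running kernel
dkms status
</code>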
=== Kernels for ZFS Support - The Bottom Line ===
\\
ZFS with the backports Debian kernel - this is a bad idea. Problems are possible with each backports kernel update.\\ ZFS with the standard Debian kernel - this combination will work, but it's not ideal.\\ ZFS with the Proxmox kernel - this is the best-case scenario for ZFS.\\ \\

===== Installation =====

To get started with ZFS, and to create an easy installation path to the most stable server possible, some preliminary setup, settings, and adjustments are recommended.\\ \\ First, bring the server up to date by applying all pending updates: under **System**, **Update Management**, **Updates**, click the **Install Updates** button, then **Confirm** and **Yes**.\\ (Depending on the number of updates pending, this may take some time. If the installation is new, with several updates pending, it may take more than one update session to fully update the server.)\\ \\ At this point, a user choice must be made:\\
  * **If the standard Debian kernel is to be used**, proceed with [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#disable_backports|Disable Backports, directly below]].\\
  * **If the Proxmox kernel is to be used** (recommended), skip to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#install_the_proxmox_kernel|Install The Proxmox Kernel]].\\
\\
----

==== Disable Backports ====

As previously mentioned, in preparing to install ZFS, disabling the backports kernel is highly recommended.\\ \\ Under **System**, **OMV-Extras**, click on **Disable backports**.\\ (This may take a few minutes to complete. At the end of the process, **END OF LINE** will appear. Clicking the **Close** button will finish the process.)\\ \\ {{ :omv7:omv7_plugins:zfs-01.jpg?nolink&600 |}} \\ \\ Since the above process changes software repositories, click on **apt clean repos** and **apt clean**.\\ While it's not absolutely necessary, consider rebooting the server to ensure that the standard Debian kernel and its repos are aligned.\\ \\ When complete, skip the following and proceed to [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#installing_the_zfs_plugin|Installing the ZFS Plugin]].\\ \\
----

==== Install the Proxmox Kernel ====
\\
Under **System**, **Kernel**, click the **Proxmox** (download) icon and select a **kernel**.\\ (While this selection is the user's choice, the oldest kernel may result in an avoidable upgrade in the near future, while the newest kernel is not as well tested in field conditions.)\\ \\ {{ :omv7:omv7_plugins:zfs-02.jpg?nolink&600 |}} \\ The dialog box will recommend rebooting to complete the installation of the Proxmox kernel. Reboot now.\\ \\ After the reboot is complete, under **System**, **Update Management**, **Updates**, check for updates.\\ It is likely that Proxmox-related updates will be available. Install these updates.\\ \\
----
\\ Under **System**, **Kernel**, take note of the kernels available.\\ \\ {{ :omv7:omv7_plugins:zfs-02.1.jpg?nolink&600 |}} \\ The kernel ending with **-pve** is the Proxmox kernel and it is now the default.\\
----
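To confirm, from the command line, that the server actually booted into the Proxmox kernel and that its prebuilt ZFS module is present, a quick sketch (the version string in the comment is only an example):

<code bash>
# Should end in "-pve", e.g. "6.8.12-4-pve"
uname -r

# Prints the path of the ZFS kernel module available for the running kernel
modinfo -n zfs
</code>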
\\ **Optional:** Non-Proxmox kernels can be removed.\\ \\ TL;DR - proceed to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#installing_the_zfs_plugin|Installing the ZFS Plugin]].\\ \\ Removing non-Proxmox kernels is recommended in that, when Openmediavault is updated, the remaining Debian kernels will be updated as well. These updates will also update the grub bootloader with unnecessary entries for the newer Debian kernels. While rare, occasionally grub/kernel updates do not go well. Issues with unused Debian kernels and their grub updates can be prevented by removing non-Proxmox kernels.

To remove non-Proxmox kernels: under **System**, **Kernel**, click on the **Proxmox** icon and select **Remove non-Proxmox Kernels** from the menu.\\ \\ {{ :docs_in_draft:zfs-02.2.jpg?nolink&600 |}} \\ When the popup dialog displays **END OF LINE**, click the **Close** button.\\ \\ **Reboot**.\\ \\ Under **System**, **Kernel**: only Proxmox kernels should be displayed (ending with **-pve**), along with memory-testing utilities and other utilities that may have been previously installed.\\
----

==== Installing the ZFS Plugin ====

Under **System**, **Plugins**, scroll all the way to the bottom. Highlight **openmediavault-zfs** 7.X.X and click the down arrow to install.\\ {{ :omv7:omv7_plugins:zfs-03.jpg?nolink&600 |}} The installation popup will proceed until **END OF LINE** appears. Click the **Close** button.\\ In most cases, the GUI will reset, which changes the left side menu, adding **ZFS** to the **Storage** drop-down.\\
----

==== Creating a Pool ====

=== General Info ===

== ZFS Terms ==
\\
- A ZFS Pool is made up of one or more "VDEV's".\\ - A VDEV can be a single disk (a Basic volume) or a collection of disks in a RAID-like format (RAID1, RAID5, etc.).\\ **Not to worry - the installation process creates the Pool and adds the first VDEV automatically.**\\ - A Pool can have multiple VDEV's, AND a new VDEV can be added to an existing Pool, increasing its size.\\ - If a VDEV is lost, the Pool is lost. There is no recovery from this situation.\\ - Disk redundancy is at the VDEV level. With a mirror as the first VDEV, any VDEV added later should provide similar redundancy. Therefore, while possible, it wouldn't be advisable to expand a Pool containing a RAID VDEV with a single Basic volume (a Basic volume has no redundancy).\\

== RAID Levels, etc. ==

At this point, users should have some idea of what they're looking for.\\ Under **Storage**, **ZFS**, **Pools**, click on the create (**+**) icon.\\ \\ {{ :docs_in_draft:zfs-04.jpg?nolink&600 |}} \\
----
\\ The "Create" window will pop up. Note that making selections on this page will create a Pool and the first VDEV. (More on VDEV's later.) {{ :docs_in_draft:zfs-05.jpg?nolink&600 |}} The **Name*** field:\\ The user's choice. However, limit the name to letters and numbers, with a reasonable length.\\ \\ **Pool type**: The choices are described under **ZFS Pools and VDEV's**, below.\\ \\ **Devices**: To see and select drives in this field, they must be wiped under **Storage**, **Disks**. Generally, a **Quick** wipe will do.\\ \\

===== ZFS Pools and VDEV's =====

As noted before, creating the first Pool also creates a new VDEV within that Pool. There are a few rules concerning Pools and VDEV's:
  * A Pool can be expanded by adding __new__ VDEV's. However, the addition of a VDEV to a Pool is PERMANENT - a VDEV __can not__ be removed from a Pool.
  * A VDEV, once created, can not be changed. A Basic volume will remain a Basic volume. A mirror (RAID1 equivalent) will remain a mirror. RAID-Z1 (a RAID5 equivalent) can not be upgraded to RAID-Z2 (a RAID6 equivalent), etc.
  * Disk redundancy is at the VDEV level.

Note: If a single VDEV is lost, in a multi-VDEV Pool, the entire Pool is lost. Accordingly, it makes no sense to add a Basic disk VDEV to a Pool with a RAID-level VDEV. Per the rules, a VDEV can't be removed and, if the Basic (single disk) VDEV fails, the entire Pool will be lost.\\ \\ Spare drives can be added to VDEV's, but they can not (currently) be used to expand a VDEV. However, mirrors and RAID-Z arrays can be "upgraded" in size by replacing each of the array's drives, one by one, with larger drives, resilvering after each replacement. The pool property that supports this is "autoexpand", which is off by default in OpenZFS and must be enabled first (see the sketch after the list below). However, it should be noted that RISK is involved in failing, replacing, and resilvering numerous drives, especially if they are old.

The possible VDEV types are noted below:
  * **Basic**: A single disk volume. A single "Basic" disk is fine for basic storage. A scrub will reveal data integrity errors but, with default attributes, data errors will not be automatically corrected. However, if the filesystem attribute copies=2 is set (see the sketch after this list), a filesystem residing on a Basic volume will auto-correct data errors. (The cost associated with 2 copies of all files is that twice the disk space is used.) As noted earlier, using a Basic volume to expand a Pool with a RAID VDEV is a bad idea.
  * **Mirror**: Also known as a Zmirror; a RAID1 equivalent. A mirror requires a 2 disk minimum. Since there are always at least 2 copies of all files in a mirror, data integrity scrubs automatically correct data errors.\\ More than 2 disks can be added to a single mirror. Adding more than 2 disks creates additional mirrored copies of all files, which increases data integrity and safety.
  * **RAID-Z1**: With one striped parity disk, this is the equivalent of RAID5. (RAID-Z1 requires 3 disks minimum. A rule-of-thumb maximum would be 5 drives.)
  * **RAID-Z2**: With two striped parity disks, this is the equivalent of RAID6. (RAID-Z2 requires 4 disks minimum. A rule-of-thumb maximum would be 7 drives.)
  * **RAID-Z3**: With three striped parity disks, RAID-Z3 has no legacy RAID equivalent but could, notionally, be called RAID7. (RAID-Z3 requires 5 disks minimum. A rule-of-thumb maximum would be 9 drives.)
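The two attributes mentioned above can be inspected and set from the command line once a pool exists. A minimal sketch, assuming a hypothetical pool named "tank" with a dataset "tank/data"; device names are examples only:

<code bash>
# Keep two copies of every block on a Basic (single disk) volume so
# scrubs can self-heal. Applies only to data written AFTER it is set,
# and doubles the space used by that filesystem.
zfs set copies=2 tank/data
zfs get copies tank/data

# Check, then enable, autoexpand before replacing drives with larger ones
zpool get autoexpand tank
zpool set autoexpand=on tank

# Replace one drive at a time and let each resilver finish
zpool replace tank sda sdf
zpool status tank   # watch resilver progress
</code>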
===== Configuration =====

===== Source Code =====

-> [[https://github.com/OpenMediaVault-Plugin-Developers/openmediavault-zfs|Source Code]]