\\
<html><center><strong>ZFS Plugin For OMV7</strong></center></html>

{{ :omvextras_logo4.jpg?400 |}}
===== Summary =====

The ZFS plugin makes it easy for users to take advantage of ZFS, streamlining installation and making ZFS' most important features available within Openmediavault's GUI.

===== What is ZFS? =====

**ZFS** (the **Z**ettabyte **F**ile **S**ystem) is a high-performance, scalable file system and logical volume manager designed by Sun Microsystems, which is now part of Oracle. It was originally developed for the Solaris operating system and it's the granddaddy of **COW** (**C**opy **O**n **W**rite) filesystems. ZFS has since been ported to other platforms, including Linux and FreeBSD. Having been under constant development since its creation for the Sun Solaris server in 2001, ZFS is very mature.

Currently, [[https://zfsonlinux.org/|OPENZFS on Linux]] is sponsored by [[https://computing.llnl.gov/projects/openzfs|Lawrence Livermore Labs]]. It is very well-funded and will be fully supported into the foreseeable future and, likely, beyond.

==== ZFS Features ====

With a focus on getting new users started:\\
Most of the documentation at OMV-Extras.org is written with a focus on "How-To" do a specific task. Further, in most cases, topics on this site are geared toward beginners. While getting to a running ZFS installation can be laid out in a "How-To" format, ZFS and its RAID equivalents are NOT beginner topics. Accordingly, this document will support the "How-To" route along with explanations (very brief in nature) to inform beginners of ZFS basics and to prevent users from straying too far from reasonable norms.\\
\\
As the "How-To" path is laid out, overview explanations of key concepts are provided along with links to more extended information. For beginners and others who have had little to no exposure to ZFS, taking a few minutes to read and understand ZFS-related concepts will increase understanding and dispel some of the myths and mysticism related to this unique file system.\\
\\
**TL;DR** links allow intermediate or expert users to jump straight to the installation steps.


===== ZFS - General =====

There are a great many misunderstandings with regard to ZFS. This section will go over a few of them:

(**TL;DR** - send me to -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#kernels_and_their_impact|Kernels and Their Impact]].)
==== ZFS - The Memory Myth ====

**"I heard that ZFS requires massive amounts of RAM" or "ZFS requires a strong CPU".**\\
While all use cases are not the same, for the sake of this discussion, we'll assume that users reading this document are NOT Corporate or Datacenter Admins. The assumption will be that readers are home server users, server admins for small businesses, or other entities that have 25 users or fewer. In other words, when compared to Enterprise level network traffic, we're talking about relatively "light usage".\\
\\
The author of this document, in personal experience with running ZFS, has set up a 4TB pool on a host with 4GB of RAM and an older Atom processor (read, "a weak CPU"). File server performance for a few users, along with streaming data, was fine. Memory might become an issue only if "dedup" (deduplication of data) is turned ON. (This is an Enterprise feature that is __OFF__ by default.) In most home or small business use cases, ZFS' CPU requirements are modest.\\
\\
**"ECC RAM is required to run ZFS".**\\
As is the case with most file server and NAS installations, ECC is desirable but not required. ECC is designed to correct randomly "flipped bits" in RAM, notionally caused by cosmic rays. While flipped RAM bits could cause an errored disk write, a more likely outcome would be a kernel or application error. Data stored and checksummed, on a spinning hard drive or an SSD, is another matter altogether. Correcting storage media errors is a task that ZFS handles well.\\
\\
**"ZFS is eating all of my RAM!"**\\
Actually, this is a good thing. If memory is unused and ZFS needs RAM for housekeeping chores (a scrub for example) or for a copying operation, ZFS will use existing RAM to facilitate and speed up I/O. Further, ZFS will hold the same RAM until another process requests RAM. At that point ZFS will release RAM to the requesting process. Assuming that a reasonable amount of RAM has been provisioned (4GB or more), even if most of a ZFS server's RAM "appears" to be in use, there's nothing to worry about.\\
\\

==== The "Licensing" Issue ==== | ==== The "Licensing" Issue ==== |
| |
ZFS is licensed under [[https://opensource.org/license/CDDL-1.0|CDDL]] which is a "**free**" __open source__ license. Due to a perceived (but never tested in a Court of Law) licensing conflict, Debian Linux does not build ZFS kernel modules into their kernels by default. This is more of a "legal fiction", than anything else, in that the OpenZFS license is simply another version of several "free open source" licenses, not unlike Debian's [[https://www.gnu.org/licenses/gpl-3.0.html|GNU General Public license]] and a hand full of other free OSS licenses contained within the Debian distro's (as outlined ->[[https://www.debian.org/legal/licenses/|here]]). Since Openmediavault is based on Debian Linux, ZFS is provided as a plugin to prevent any issue that may result from a licensing conflict.\\ | ZFS is licensed under [[https://opensource.org/license/CDDL-1.0|CDDL]] which is a "**free**" __open source__ license. Due to a perceived (but never tested in a Court of Law) licensing conflict, Debian Linux does not build ZFS kernel modules into their kernels by default. This is more of a "legal fiction", than anything else, in that the OpenZFS license is simply another version of several "free open source" licenses, not unlike Debian's [[https://www.gnu.org/licenses/gpl-3.0.html|GNU General Public license]] and a handful of other free OSS licenses contained within the Debian distros (as outlined ->[[https://www.debian.org/legal/licenses/|here]]). Since Openmediavault is based on Debian Linux, ZFS is provided as a plugin to prevent any issue that may result from a licensing conflict.\\ |
\\ | \\ |
In the final analysis, for the end user, this free license "wrangling" is a non-issue.\\ | In the final analysis, for the end user, this free license "wrangling" is a non-issue.\\ |
==== Kernels and Their Impact ==== | ==== Kernels and Their Impact ==== |
| |
**TL;DR: "I'll chose my kernel"**. Send me to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#installation|Installation]].\\ | **TL;DR: "I'll choose my kernel"**. Send me to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#installation|Installation]].\\ |
\\ | \\ |
Openmediavault installs with the Debian Backports kernel, by default. The standard Debian kernel is available, but must be selected. A third option, the Proxmox kernel, is installable VIA the kernel plugin.\\ | Openmediavault installs with the Debian Backports kernel, by default. The standard Debian kernel is available, but must be selected. A third option, the Proxmox kernel, is installable VIA the kernel plugin.\\ |
Where ZFS is concerned, following are the pro's and con's of each kernel:\\ | The following are the pros and cons of each kernel for ZFS\\ |
| |
=== The Debian Backports Kernel ===
The Backports Kernel (OMV's default kernel) is used, primarily, for its support of the latest hardware and, along similar lines, for the latest software packages. The issue with the backports kernel, where ZFS is concerned, is that it's possible to have a kernel upgrade offered for installation that does not have ZFS packages in its repos. (This has happened to the author of this doc.) After a backports kernel upgrade, this may result in a ZFS pool "disappearing". The pool still exists but "fixing the issue" requires booting into an older kernel, to see the existing pool, until the new kernel's ZFS repository packages "catch up". For this reason alone, the backports kernel is __not recommended__.

=== The Standard Debian Kernel ===
The Standard Debian Kernel (selectable) can be used for ZFS. However, since ZFS kernel modules are not installed in the Debian kernel by default, they must be built by the ZFS plugin when it is installed. While this process works, building the modules is a long process that requires continuous access to online repos. Accordingly, the potential for a build error exists. For this reason, while the Standard Kernel is very usable for ZFS, it is not ideal.

=== The Proxmox Kernel ===
The Proxmox Kernel is an Ubuntu-based kernel that has ZFS modules prebuilt and compiled into the kernel by default. However, the **Kernel plugin** is required to install the Proxmox Kernel. Among the other useful features available, the kernel plugin can pull and install a Proxmox kernel, and can make it the default kernel when booting. As Proxmox kernel upgrades become available and are performed, the repos for the kernel will always have the required packages to support ZFS. Further, since the Proxmox kernel is financially supported by the [[https://www.proxmox.com/en/|Proxmox Virtualization project]], the kernel is exhaustively tested with ZFS modules installed, before it's made available to the public. The bottom line: using the Proxmox kernel decreases the possibility of an installation error and guarantees ZFS support through kernel upgrades, while increasing overall server reliability.\\
\\
=== Kernels for ZFS Support - The Bottom line ===
===== Installation =====

To get started with ZFS and to create an easy installation path to a stable server, some preliminary setup, settings and adjustments are recommended.\\
\\
First, bring the openmediavault server (hereafter known as **OMV**) up-to-date by applying all pending updates:\\
\\
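Updates can be applied in the GUI under **System**, **Update Management**. For those working over SSH, the following is a minimal command line sketch that accomplishes the same thing (openmediavault also ships an ''omv-upgrade'' helper that wraps apt):\\
<code>
# Refresh the package lists and apply all pending updates
sudo apt update
sudo apt dist-upgrade

# Alternatively, openmediavault's own update helper
sudo omv-upgrade
</code>
\\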
At this point, a user choice is required:\\
* **If the Standard Debian kernel is to be used**, proceed with [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#disable_backports_kernels|Disable Backports directly below]].\\

* **If the Proxmox kernel is to be used** (recommended), skip to -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#install_the_proxmox_kernel|Install The Proxmox Kernel]].\\
==== Disable Backports Kernels: ====

As previously mentioned, in preparing to install ZFS, disabling backports kernels is highly recommended.\\
\\
Under **System**, **OMV-Extras**, click on **Disable backports**\\
(This may take a few minutes to complete. When **End of Line** appears, click **Close** to finish.)\\
\\
{{ :omv7:omv7_plugins:zfs-01.jpg?nolink&600 |}}
Since the above process changes software repositories, click on **apt clean repos** and **apt clean**.\\
\\
While it's not absolutely necessary, to ensure that the standard Debian kernel and its repos are aligned, consider rebooting the server.\\
\\
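After the reboot, the kernel that is actually running can be confirmed from the command line. This is a quick, optional check (the version string will vary by system):\\
<code>
# Print the running kernel; a standard Debian kernel will not carry
# a "bpo" (backports) or "-pve" (Proxmox) tag in its version string
uname -r
</code>
\\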
When complete, skip the following and proceed directly to [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#installing_the_zfs_plugin|Installing the ZFS Plugin]].\\
\\
----
==== Install The Proxmox Kernel ====

<html><center>Under <b>System</b>, <b>Kernel</b>, select the download <b>Proxmox</b> icon and select a <b>kernel</b>.</center></html>
\\
<html><center>(While this selection is the user's choice, the oldest kernel may result in an avoidable kernel upgrade in the near future, while the newest kernel will not be as well tested in field conditions. <b>The "middle of the road" kernel is recommended</b>.)</center></html>
\\
\\
**Optional:** Non-Proxmox kernels can be removed.\\
\\
**TL;DR** proceed to -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#installing_the_zfs_plugin|Installing the ZFS plugin]].\\
\\
Removing non-Proxmox kernels is recommended because, when Openmediavault is updated, the remaining Debian kernels will be updated as well. These updates will also update the grub bootloader with unnecessary entries for the newer Debian kernels. While rare, occasionally, grub/kernel updates do not go well. Issues with unused Debian kernels and their grub updates can be prevented by removing non-Proxmox kernels.\\
{{ :omv7:omv7_plugins:zfs-02.2.jpg?nolink&600 |}}\\
\\
<html><center>When the popup dialog box displays <b>END OF LINE</b>, click the <b>Close</b> button.</center></html>
<html><center><b>Reboot</b></center></html>
\\
\\
<html><center>After the reboot, under <b>System</b>, <b>Kernel</b>:</center></html>
<html><center>Only Proxmox kernels should be displayed (ending with <b>-pve</b>), along with memory testing utilities and other utilities that may have been previously installed.</center></html>

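The same check can be made from the command line. This is an optional sketch; exact package names vary with the Proxmox kernel series installed:\\
<code>
# The running kernel should end in -pve
uname -r

# List any kernel image packages that remain installed
dpkg --list | grep -E 'linux-image|pve-kernel|proxmox-kernel'
</code>
\\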
==== Installing the ZFS Plugin ====

Under **System**, **Plugins**, scroll all the way to the bottom. Highlight **openmediavault-zfs 7.X.X** and click the down arrow to install.\\

{{ :omv7:omv7_plugins:zfs-03.jpg?nolink&600 |}}

The installation pop-up will proceed until **END OF LINE** appears. At that point, click the **Close** button.\\
In most cases, the GUI will reset, which changes the left side menu, adding **ZFS** to the **Storage** pop-down.\\
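\\
Once the plugin is installed, it can be confirmed from the command line that the ZFS kernel module and userland tools are in place. A quick, optional check:\\
<code>
# Report the OpenZFS version of both the userland tools and the kernel module
zfs version

# Confirm the zfs kernel module is loaded
lsmod | grep zfs
</code>
\\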
----

- A ZFS "**Pool**" is made up of one or more "**VDEVs**". More detailed information on VDEVs can be found -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#vdev_types|here]]. It's recommended that new ZFS users review the various VDEV types as the selections made during the following steps will have an impact on future Pool maintenance and expansions.\\
- During the creation of a "Pool", in accordance with user selections, the installation process creates and adds the first VDEV automatically.\\
- A Pool can have one or more VDEVs and a new VDEV can be added to an existing pool, at any time, increasing the pool's size.\\

----
**Pool type**:\\

The choices are: **Basic**, **Mirror**, **RAID-Z1**, **RAID-Z2**, **RAID-Z3**\\
For more details on these selections, see -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#vdev_types|VDEV types]].\\

\\
\\
**Set ashift**:
Checking this box is HIGHLY recommended. Checking the box will set ashift to 12, which matches the 4K sector size of most current spinning hard drives and SSDs. (More information on ashift is available -> [[https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/|here]].)\\
\\
If the box is not checked, the default ashift value passed to the pool will be 0. As noted in the previous link, this will not be good for performance.\\
\\
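For reference, the selections above correspond roughly to a single ''zpool create'' command. The following is only a sketch of what the plugin does behind the scenes; the pool name and disk paths are examples (in practice, stable /dev/disk/by-id/ paths are preferable to /dev/sdX names):\\
<code>
# A mirrored pool named "ZFS1" built from two disks, with ashift=12
zpool create -o ashift=12 ZFS1 mirror /dev/sdb /dev/sdc

# Verify the layout and the ashift value actually in use
zpool status ZFS1
zpool get ashift ZFS1
</code>
\\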
<html><center>In the following example, a mirror was chosen for disk redundancy and default automatic error correction.</center></html>
<html><center>(As mentioned in the VDEV section, this -> <a href="https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/">external reference</a> explains why mirrors are a good choice.)</center></html>


<html><center>Confirm the Pending Change.</center></html>
\\
<html><center>If a <b>WARNING</b> dialog pops up containing "<b>invalid VDEV specification</b>", it may be necessary to check the <b>Force creation</b> box.
</center></html>
----
==== Working With ZFS FileSystems ====

**TL;DR** Take me to -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#adding_filesystems|Adding Filesystems]].\\
\\
While it’s possible to create Linux folders directly at the root of a ZFS pool, creating dedicated ZFS filesystems offers many advantages.\\
\\
ZFS filesystems are logical containers within the pool that can have their own assignable properties, such as compression, quotas, and more. These properties can be set individually or inherited from the parent pool.\\
\\
One of the most powerful features of using ZFS filesystems is that each filesystem can have its own set of snapshots, making backups and versioning much more flexible and granular.\\
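\\
As a point of reference, the same ideas can be expressed with the ''zfs'' command from the CLI. This is a minimal sketch; the pool and filesystem names are examples only:\\
<code>
# Create a filesystem inside the pool
zfs create ZFS1/media

# Properties can be set per filesystem or inherited from the parent
zfs set compression=lz4 ZFS1/media
zfs set quota=500G ZFS1/media

# Review the properties currently in effect
zfs get compression,quota ZFS1/media
</code>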
----
==== Adding Filesystems ====
\\
<html><center>In the <b>Add</b> dialog box, under <b>Type</b> select <b>Filesystem</b>, give the Filesystem a name, and click <b>Add</b>.</center></html>
\\
\\
==== Commands for the CLI: ====
\\
(Where ZFS1 is found in the following, substitute the name of the user's Pool.)\\
\\
''zpool scrub ZFS1''\\
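The scrub command above starts a pool-wide data integrity check (see Data Integrity below). A few companion commands are worth knowing; this is a short sketch and the pool name is, again, an example:\\
<code>
# Show pool health, scrub progress/results, and any per-device errors
zpool status -v ZFS1

# Show pool capacity and space usage
zpool list ZFS1

# Show the filesystems in the pool and their space usage
zfs list -r ZFS1
</code>
\\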
==== ZFS Snapshots ====
\\
A ZFS snapshot is a point-in-time, read-only copy of a filesystem. Among other things, snapshots are useful for:\\
* Recovering earlier versions of files that have been changed or deleted.
* Safely testing changes or upgrades.
\\
Since snapshots are read-only, they cannot be modified, which helps ensure data integrity. You can later clone or roll back to a snapshot if needed. Snapshot cloning or rollbacks make the data contained in ZFS filesystems impervious to ransomware and other data altering viruses.\\
\\
=== Taking A Snapshot ===
{{ :omv7:omv7_plugins:zfs-10.jpg?nolink&700 |}}
\\
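The equivalent operations are available from the command line as well. A minimal sketch; the filesystem and snapshot names are examples:\\
<code>
# Take a snapshot of a filesystem (the text after @ is a free-form label)
zfs snapshot ZFS1/media@before-upgrade

# List existing snapshots
zfs list -t snapshot

# Roll the filesystem back to the snapshot (discards changes made since)
zfs rollback ZFS1/media@before-upgrade
</code>
\\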
----
==== Automating Snapshots ====

While taking an occasional manual snapshot is worthwhile, the snapshotting process can be fully automated using ''zfs-auto-snapshot''. A document describing what zfs-auto-snapshot is, how to install it and set it up is available -> [[https://wiki.omv-extras.org/doku.php?id=misc_docs:auto_zfs_snapshots|here]].\\
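\\
For reference, on Debian-based systems the tool is typically available as a package, so installation is a single command (the linked guide covers configuration in detail):\\
<code>
# Install the zfs-auto-snapshot package from the Debian repositories
sudo apt install zfs-auto-snapshot
</code>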
\\
----


===== Core ZFS Concepts Explained =====

==== ZFS Pools and VDEVs ====

**General:**\\
As noted before, when creating the first Pool, a new VDEV is created within the new pool in accordance with the user's selections.\\
There are a few rules concerning Pools and VDEVs:\\

First: If a VDEV is lost, the Pool is lost. There is no recovery from this situation.\\
Second: Disk redundancy is at the VDEV level. If redundancy is not used, the failure of one disk will result in the failure of the Pool.\\
Therefore, while possible, it wouldn't be advisable to expand a Pool that has an existing RAID VDEV with a single Basic volume, because a Basic volume has no redundancy.\\

* VDEVs are made up of physical block devices, i.e., storage drive(s).
* A Pool can be expanded by adding __new__ VDEVs. However, the addition of a VDEV, to a Pool, is PERMANENT. A VDEV __cannot__ be removed from a Pool.
* Once created, a VDEV cannot be modified; a Basic volume will always remain a Basic volume. A mirror (RAID1 equivalent) will remain a mirror. RAID-Z1 (a RAID5 equivalent) will remain RAID-Z1. RAID-Z1 cannot be upgraded to RAID-Z2 (a RAID6 equivalent), etc.
* If a single VDEV is lost, in a multi-VDEV Pool, the entire Pool is lost.
* Disk redundancy is at the VDEV level. Accordingly, it makes __no sense__ to add a Basic disk VDEV, to a RAID level VDEV, to expand a Pool. Per the rules, a VDEV can't be removed and if the Basic (single disk) fails, the entire Pool will be lost.\\
=== VDEV Types ===
* **Basic**:
Basic is a single disk volume. A single "Basic" disk is fine for basic storage. A scrub will reveal data integrity errors but, using default attributes, data errors will not be automatically corrected. However, if the filesystem attribute copies=2 is set, a filesystem residing on a Basic volume will autocorrect data errors. (The cost associated with 2 copies of all files is that it uses twice the disk space.) As noted earlier, using a Basic volume to expand a Pool with a RAID VDEV is a bad idea.

* **Mirror**:
Also known as a Zmirror. A Zmirror is a RAID1 equivalent. A mirror requires a 2 disk minimum. In that there are always at least __2 copies__ of all files in a Mirror, data integrity scrubs automatically correct data errors. Further, it's worth noting that more than 2 disks can be added to a single Mirror; doing so creates a 3-way (or wider) mirror, with a copy of every file on each disk. While the cost is the loss of hard drive space, multiple drives in a mirror configuration provide **maximum** data integrity and safety. For the reasons stated in this -> [[https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/|reference]], using VDEV(s) comprised of one or more mirrors should be considered and is recommended.\\

For RAID-Z implementations, it is generally recommended to run an "**odd**" number of drives.

* **RAID-Z1**: With one striped parity disk, this is the equivalent of RAID5. (RAID-Z1 requires 3 disks minimum. A rule of thumb maximum would be 7 drives.)\\
\\
<html><center><span style="font-size:125%;">To reiterate: if any one VDEV in a Pool is lost, the entire pool is lost.</span></center></html>
<html><center><span style="font-size:125%;">Accordingly, if redundancy is to be used it must be set up at the VDEV level.</span></center></html>
\\
<html><center><span style="font-size:125%;">Consider the following illustration:</span></center></html>
<html><center><span style="font-size:100%;">A pool with a RAID-Z1 VDEV will also survive if one disk in the array is lost.</span></center></html>
\\
While it will work, mixing dissimilar VDEVs in a Pool, in the manner illustrated, is extremely bad practice. As previously mentioned, once a VDEV is added to a pool, it can't be removed. Therefore, the Pool as illustrated will be lost if there's a problem with the Basic drive, because the single Basic drive has no redundancy.\\

----
{{ :omv7:omv7_plugins:zfs-recommended-pool.jpg?nolink&500 |}}

Pools made up of mirrors offer the most features and advantages. Disk redundancy is taken care of and, since there are two copies of all files, bit-rot damage and other silent damage to files is self-healing. If needed, another mirror (a pair of disks) can be added to expand the pool at any time.
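
Expanding a pool built from mirrors is a single CLI operation. This is a sketch only; the pool name and disk paths are examples:\\
<code>
# Add a second mirror VDEV (two more disks) to an existing pool,
# increasing its capacity; remember that added VDEVs cannot be removed later
zpool add ZFS1 mirror /dev/sdd /dev/sde
</code>
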
==== Data Integrity ====

Data Integrity is one of ZFS' hallmark features. In many file systems, silent corruption goes unnoticed and unchecked, largely because they have no mechanism to detect "bit-rot". There are numerous reasons for bit-rot, having to do with the inevitable degradation of magnetic media due to age, SSDs that have flipped bits from cosmic rays, and other scenarios. Another consideration is the way hard drives and SSDs fail. Contrary to popular belief, storage media does not fail instantly, like flipping a light switch. Often they fail slowly and silently, irreversibly corrupting data. Without a filesystem that actively monitors the health of stored data, data corruption may not be discovered until well after sensitive data is irretrievably lost. Again, since corruption is often silent and may occur over an extended period, copying corrupted files to a backup device is a possibility as well.\\

Conversely, by using file checksums and "scrubs", ZFS actively monitors the health of user data. ZFS scrubs may detect issues with a hard drive well //**before**// SMART stats indicate a developing drive issue. Keeping the primary server's data clean ensures that backup copies of data are clean as well.


=== How ZFS Data Integrity Works ===

When a file is copied into a ZFS Pool, the file is parsed and assigned a checksum. A checksum may also be referred to as a "hash". Every file will have a unique checksum. If a file is modified, a new checksum is calculated and assigned to the file. In this manner, the exact state of the file is tracked as it was when it was created or modified. It's important to note that, with or without modifications by the filesystem, if the file is altered in any respect, even at the bit level, the checksum will change. Changes that __did not__ occur as a result of a filesystem write are what data integrity is all about.\\
A housekeeping chore called a "scrub" can be used to compare file checksums against actual file content. Scrubs should be run periodically to ensure that data is clean and to serve as a warning of potential degradation of storage media.\\
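Scrubs can be started manually with ''zpool scrub'' (see the CLI commands above) or scheduled, for example from **System**, **Scheduled Tasks** in the OMV GUI or with a cron entry. The following is only an illustrative sketch; the schedule, file name and pool name are examples:\\
<code>
# /etc/cron.d/zfs-scrub  --  run a scrub on pool "ZFS1" at 02:00
# on the first day of every month
0 2 1 * * root /sbin/zpool scrub ZFS1
</code>
\\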

In the following illustration:\\
- The file in the Pool on the left matches its checksum. If __all__ files match their checksums, a scrub would report:\\
'' scan: scrub repaired 0B in <Length of Time required for the scan> with 0 errors on <Day, Date, Time, Year>''\\
- The file in the Pool on the right does not match its checksum. Since the Pool has no redundancy, this file might be reported as a "checksum error" and / or an "unrecoverable error".\\
\\

{{ :omv7:omv7_plugins:zfs-integrity-1.jpg?nolink&300 |}}

=== Automatic File Restoration ===

**Full safety, with automatic file restoration, is available "if" two copies of the same file exist.**\\
- **On the left**:\\
When a file is created, a ZFS mirror (RAID1 equivalent) creates two identical copies of the file, one on each hard drive, and assigns them identical checksums.\\
- **In the middle**:\\
A scrub found that one of two previously identical files no longer matched its checksum.\\
- **On the right**:\\
ZFS will automatically remove the corrupted file and restore it from the valid copy.

{{ :omv7:omv7_plugins:zfs-integrity-2.jpg?nolink&600 |}}

Automatic restoration is possible whenever two copies of every file exist:\\
- When using Zmirrors (where there are two copies of all files).\\
- In all other VDEVs where Basic Volumes or RAID-ZX is used, __AND__ where filesystems have the **copies=2** feature enabled.\\

Consider the following **practical considerations**:\\
- While copies=2 can be used with a Basic volume, it should be noted that in the event of a disk failure both file copies would be lost. However, if data integrity and automatic restoration is used at the primary server, data on a backup server would be clean.\\
- RAID-Z implementations reconstruct errored data a bit differently. While some data reconstruction is possible, using parity calculations, RAID-Z does not provide for restoration of //silent// errors. While RAID-Z provides __disk__ redundancy, copies=2 would be required to provide for __maximum__ data protection and file restoration.\\
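\\
Where copies=2 is wanted, it is set per filesystem from the CLI. A minimal sketch; the pool and filesystem names are examples, and the setting only applies to data written after it is enabled:\\
<code>
# Store two copies of every block in this filesystem (doubles its space usage)
zfs set copies=2 ZFS1/important

# Confirm the property
zfs get copies ZFS1/important
</code>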
\\
----
\\
==== Additional Reading ====

The following link features Aaron Toponce's excellent [[https://tadeubento.com/2024/aarons-zfs-guide/|ZFS guide]].
\\
For a deep dive into ZFS and its internals -> [[https://openzfs.github.io/openzfs-docs/|OpenZFS Documentation]].\\
\\

**Expanding a VDEV:**\\
\\
Spare drives can be added to RAID VDEVs but they cannot (currently) be used to expand the VDEV. However, Zmirrors or RAID-ZX arrays can be "upgraded" for increased size by replacing each of the array's drives, one-by-one, with larger drives, allowing the resilver to complete after each swap. The pool property that supports this is "**autoexpand**"; confirm that it is enabled on the pool before starting. However, it should be noted that significant **RISK** is involved in failing, replacing, and resilvering numerous drives, especially if they are old. Before beginning such a process, ensuring the server's backup is up-to-date is highly recommended.\\
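\\
A sketch of the drive-by-drive upgrade from the CLI; the pool name and device paths are examples, and each replacement must fully resilver before the next drive is touched:\\
<code>
# Confirm (or enable) automatic expansion once all drives are larger
zpool get autoexpand ZFS1
zpool set autoexpand=on ZFS1

# Replace one drive with its larger successor, then watch the resilver
zpool replace ZFS1 /dev/disk/by-id/ata-OLDDRIVE /dev/disk/by-id/ata-NEWDRIVE
zpool status ZFS1
</code>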