===== What is ZFS? =====

**ZFS** (the **Z**ettabyte **F**ile **S**ystem) is a high-performance, scalable file system and logical volume manager designed by Sun Microsystems (now part of Oracle). It was originally developed for the Solaris operating system and is the Granddaddy of **COW** (**C**opy **O**n **W**rite) filesystems. ZFS has since been ported to other platforms, including Linux and FreeBSD. Having been under constant development since its creation for the Sun Solaris server in 2001, ZFS is very mature.

Currently, [[https://zfsonlinux.org/|OpenZFS on Linux]] is sponsored by [[https://computing.llnl.gov/projects/openzfs|Lawrence Livermore Labs]]. It is very well funded and will be fully supported into the foreseeable future and, likely, beyond.
==== ZFS Features ====

Following are some key features and characteristics of ZFS:

  - **Data Integrity**: ZFS uses a copy-on-write mechanism and checksums for all data and metadata, ensuring that any corruption can be detected and corrected.
  - **Snapshots and Clones**: ZFS allows for the creation of snapshots, which are read-only copies of the file system at a specific point in time. Clones are writable copies of snapshots, enabling efficient data management and backup.
  - **Pooled Storage**: ZFS combines the concepts of file systems and volume management, allowing multiple file systems to share the same storage pool. This simplifies storage management and improves efficiency.
  - **Scalability**: ZFS is designed to handle large amounts of data, making it suitable for enterprise-level storage solutions. It can manage petabytes of data and supports very large file systems.
  - **RAID Functionality**: ZFS includes built-in RAID capabilities, allowing users to configure redundancy and improve data availability without the need for separate hardware RAID controllers.
  - **Compression and Deduplication**: ZFS supports data compression and deduplication, which can save storage space and improve performance.
  - **Self-Healing**: When ZFS detects data corruption, it can automatically repair the affected data using redundant copies, enhancing data reliability.

More detailed information on capabilities and limitations is available -> [[https://www.howtogeek.com/175159/an-introduction-to-the-z-file-system-zfs-for-linux/|here]].

<html>
<table>
<tr>
<td style="background-color:#E6FEFF;height:25px;width:380px;">
While the above external resource is informative, creating a ZFS pool on the command line, in accordance with external references, is not recommended. This document will walk users through a ZFS installation process that is compatible with the ZFS plugin and Openmediavault.
</td>
</tr>
</table>
</html>
===== Prerequisites =====

  * PuTTY is a prerequisite for installing OMV-Extras and for working with ZFS on the command line.\\ [[https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html|PuTTY]] is an SSH client that will allow users to connect to their server, from a Windows client, to get on the server's command line. PuTTY is installable on a Windows client. Installation and use guidance, for PuTTY, can be found ->[[https://wiki.omv-extras.org/doku.php?id=omv7:utilities_maint_backup#putty|here]].
  * [[https://wiki.omv-extras.org/doku.php?id=misc_docs:omv_extras|OMV-Extras]] is a prerequisite for installing the kernel plugin. Installation and use guidance, for the OMV-Extras plugin, can be found ->[[https://wiki.omv-extras.org/doku.php?id=misc_docs:omv_extras|here]].
  * The **Kernel Plugin** is **required** for ZFS. After OMV-Extras is installed, on OMV's left side menu bar go to **System**, **Plugins**. Find, select, and install the **openmediavault-kernel** 7.x.x plugin.

===== Foreword =====

With a focus on getting new users started:\\
Most of the documentation at OMV-Extras.org is written with a focus on "How-To" do a specific task. Further, in most cases, topics on this site are geared toward beginners. While getting to a running ZFS installation can be laid out in a "How-To" format, ZFS and its RAID equivalents are NOT beginner topics. Accordingly, this document follows the "How-To" route along with brief explanations, to inform beginners of ZFS basics and to keep users from straying too far from reasonable norms while setting up.\\
\\
As the "How-To" path is laid out, links to overview explanations of key concepts are provided. For beginners and others who have had little to no exposure to ZFS, taking a few minutes to read and understand ZFS related concepts will increase understanding and dispel some of the myths and mysticism related to this unique file system.\\

===== ZFS - General =====

There are a great many misunderstandings with regard to ZFS. This section will go over a few of them.

(TL;DR - send me to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#kernels_and_their_impact|Kernels and Their Impact]].)

==== ZFS - The Memory Myth ====

"I heard that ZFS requires massive amounts of RAM" or "ZFS requires a strong CPU".\\
While all use cases are not the same, for the sake of this discussion we'll assume that users reading this document are NOT Corporate or Datacenter Admins. The assumption will be that readers are home server users, server admins for small businesses, or other entities that have 25 users or less. In other words, when compared to Enterprise level network traffic, we're talking about relatively "light usage".\\
\\
The author of this document, in personal experience with running ZFS, has set up a 4TB pool on a host with 4GB of RAM and an older Atom processor (read, a weak CPU). Performance for a few users, along with streaming data, was fine. Memory might become an issue only if "dedup" (deduplication of data) is turned ON. (This is an Enterprise feature that is __OFF__ by default.) In most home or small business use cases, ZFS' memory and CPU requirements are modest.\\
\\
"ECC RAM is required to run ZFS."\\
As is the case with most file server and NAS installations, ECC is desirable but not required. ECC is geared toward correcting randomly "flipped bits" in RAM, notionally caused by cosmic rays. While a flipped RAM bit could cause an errored disk write, a more likely outcome would be a kernel or application error. Data stored and checksummed on a spinning hard drive or an SSD is another matter altogether. Correcting storage media errors is a task that ZFS handles well.\\
\\
"ZFS is eating all of my RAM!"\\
Actually, this is a good thing. If memory is unused and ZFS needs RAM for a housekeeping chore (a scrub, for example) or for a copying operation, ZFS will use existing RAM to facilitate and speed up I/O. Further, ZFS will hold that RAM until another process needs it, at which point ZFS will release it to the requesting process. If all of a ZFS server's RAM appears to be in use, there's nothing to worry about.\\
\\
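For users who want to see, from the command line, how much RAM the ZFS cache (the "ARC") is actually holding, and to optionally cap it, something like the following can be used. This is only a sketch; the 4GB cap is an example value and most home users will never need to set one.

<code>
# Show the current ARC size and its configured maximum (values are in bytes).
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

# Optional: cap the ARC at (for example) 4GB via a module parameter.
# The change takes effect after the initramfs is rebuilt and the server is rebooted.
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
</code>
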
==== The "Licensing" Issue ====

ZFS is licensed under [[https://opensource.org/license/CDDL-1.0|CDDL]], which is a "**free**" __open source__ license. Due to a perceived (but never tested in a Court of Law) licensing conflict, Debian Linux does not build ZFS kernel modules into their kernels by default. This is more of a "legal fiction" than anything else, in that the OpenZFS license is simply another of several "free open source" licenses, not unlike Debian's [[https://www.gnu.org/licenses/gpl-3.0.html|GNU General Public License]] and a handful of other free OSS licenses contained within the Debian distro (as outlined ->[[https://www.debian.org/legal/licenses/|here]]). Since Openmediavault is based on Debian Linux, ZFS is provided as a plugin to prevent any issue that may result from a licensing conflict.\\
\\
In the final analysis, for the end user, this free license "wrangling" is a non-issue.\\

==== Kernels and Their Impact ====

Openmediavault installs with the Debian Backports kernel by default. The standard Debian kernel is available, but must be selected. A third option, the Proxmox kernel, is installable via the kernel plugin.\\
Where ZFS is concerned, following are the pros and cons of each kernel:\\

=== The Debian Backports Kernel ===
The Backports Kernel (OMV's default kernel) is used primarily for its support of the latest hardware and, along similar lines, the latest software packages. The issue with the backports kernel, where ZFS is concerned, is that it's possible to be offered a kernel upgrade that does not yet have matching ZFS packages in its repos. (This has happened to the author of this doc.) After such a backports kernel upgrade, a ZFS pool may "disappear". The pool still exists but "fixing the issue" requires booting into an older kernel, to see the existing pool, until the new kernel's ZFS repository packages "catch up". For this reason alone, the backports kernel is __not recommended__.

=== The Standard Debian Kernel ===
The Standard Debian Kernel (selectable) can be used for ZFS. However, since ZFS kernel modules are not included in the Debian kernel by default, they must be built by the ZFS plugin when it is installed. While this process works, building the modules is lengthy and requires continuous access to online repos. Accordingly, the potential for a build error exists. For this reason, while the Standard Kernel is very usable for ZFS, it is not ideal.

=== The Proxmox Kernel ===
The Proxmox Kernel is an Ubuntu-based kernel that has ZFS modules prebuilt and compiled into the kernel by default. However, the **Kernel plugin** is required to install the Proxmox Kernel. Among its other useful features, the kernel plugin can pull and install a Proxmox kernel, and can make it the default kernel when booting. As Proxmox kernel upgrades become available and are performed, the repos for the kernel will always have the required packages to support ZFS. Further, since the Proxmox kernel is financially supported by the [[https://www.proxmox.com/en/|Proxmox Virtualization project]], the kernel is exhaustively tested, with ZFS modules installed, before it's made available to the public. Bottom line, using the Proxmox kernel decreases the possibility of an installation error and guarantees ZFS support through kernel upgrades, while increasing overall server reliability.\\
\\
=== Kernels for ZFS Support - The Bottom Line ===
\\
ZFS with the backports Debian kernel - this is a bad idea. Problems are possible with each backports kernel update.\\
ZFS with the standard Debian kernel - this combination will work, but it's not ideal.\\
ZFS with the Proxmox kernel - this is the best case scenario for ZFS.\\
\\
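Whichever kernel is chosen, it is easy to confirm from the command line (via PuTTY/SSH) which kernel is actually running and whether ZFS is available to it. A quick sketch, usable once ZFS has been installed:

<code>
# Show the running kernel. A Proxmox kernel name ends in -pve.
uname -r

# Report the installed OpenZFS userland and kernel module versions.
zfs version

# Alternative check: confirm a ZFS kernel module exists for the running kernel.
modinfo -F version zfs
</code>
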
===== Installation =====

To get started with ZFS, and to create an easy installation path to the most stable server possible, some preliminary setup, settings and adjustments are recommended.\\
\\
First, bring the server up-to-date by applying all pending updates:\\
Under **System**, **Update Management**, **Updates**, click the **Install Updates** button. **Confirm** and **Yes**.\\
(Depending on the number of updates pending, this may take some time. If the installation is new with several updates pending, it may take more than one update session to fully update the server.)\\
\\
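For users who prefer the command line, the same updates can generally be applied over SSH with apt, as sketched below. (Recent Openmediavault releases also provide an ''omv-upgrade'' helper script; if present, it is the preferred command-line method.) The web interface route above remains the supported path.

<code>
# Refresh the package lists and apply all pending updates.
apt update
apt dist-upgrade

# Reboot if a new kernel was installed.
reboot
</code>
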
At this point, a user choice must be made:\\

  * **If the Standard Debian kernel is to be used**, proceed with [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#disable_backports|Disable Backports, directly below]].\\
  * **If the Proxmox kernel is to be used** (recommended), skip to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#install_the_proxmox_kernel|Install The Proxmox Kernel]].\\
\\
----

==== Disable Backports Kernels: ====

As previously mentioned, in preparing to install ZFS, disabling backports kernels is highly recommended.\\
\\
Under **System**, **OMV-Extras**, click on **Disable backports**.\\
(This may take a few minutes to complete. At the end of the process, **END OF LINE** will appear. Clicking the **Close** button will finish the process.)\\
\\
{{ :omv7:omv7_plugins:zfs-01.jpg?nolink&600 |}}
\\
\\
Since the above process changes software repositories, click on **apt clean repos** and **apt clean**.\\
While it's not absolutely necessary, to ensure that the standard Debian kernel and its repos are aligned, consider rebooting the server.\\
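To confirm from the command line that backports repositories are no longer being offered to apt, a quick check like the following can be used (a sketch; repository file names vary between installations):

<code>
# List any apt sources that still reference Debian backports.
# No output means backports are no longer active.
grep -ri backports /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

# Refresh the package lists against the remaining repositories.
apt update
</code>
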
\\
When complete, skip the following and proceed to [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#installing_the_zfs_plugin|Installing the ZFS Plugin]].\\
\\
----

==== Install the Proxmox kernel: ====
\\
Under **System**, **Kernel**, click the download **Proxmox** icon and select a **kernel**.\\
(While this selection is the user's choice, the oldest kernel may result in an avoidable upgrade in the near future, while the newest kernel is not as well tested under field conditions.)\\
\\
{{ :omv7:omv7_plugins:zfs-02.jpg?nolink&600 |}}
\\
The dialog box will recommend rebooting to complete the installation of the Proxmox kernel. Reboot now.\\
\\
After the reboot is complete, under **System**, **Update Management**, **Updates**, check for updates.\\
It is likely that Proxmox related updates will be available. Install these updates.\\
\\
----
\\
Under **System**, **Kernel**, take note of the kernels available.\\
\\
{{ :omv7:omv7_plugins:zfs-02.1.jpg?nolink&600 |}}

\\
The kernel ending with **-pve** is the Proxmox kernel and it is now the default.\\
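The same check can be made from the command line. After the reboot, the running kernel should report a ''-pve'' suffix, and the ZFS kernel module ships prebuilt with it (a sketch):

<code>
# The running kernel; a Proxmox kernel name ends in -pve.
uname -r

# The ZFS module is included with the Proxmox kernel; modinfo should report its version.
modinfo -F version zfs
</code>
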
----
\\
**Optional:** Non-Proxmox kernels can be removed.\\
\\
TL;DR proceed to -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#installing_the_zfs_plugin|Installing the ZFS plugin]].\\
\\
Removing non-Proxmox kernels is recommended in that, when Openmediavault is updated, the remaining Debian kernels will be updated as well. These updates will also update the grub bootloader with unnecessary entries for the newer Debian kernels. While rare, occasionally, grub/kernel updates do not go well. Issues with unused Debian kernels and their grub updates can be prevented by removing non-Proxmox kernels.

To remove non-Proxmox kernels:\\
Under **System**, **Kernel**, click on the **Proxmox** icon and select **Remove non-Proxmox Kernels** from the menu.\\
\\
{{ :docs_in_draft:zfs-02.2.jpg?nolink&600 |}}
\\
When the popup dialog displays **END OF LINE**, click the **Close** button.\\
\\
**Reboot**.\\
\\
Under **System**, **Kernel**:\\
Only Proxmox kernels should be displayed (ending with **-pve**), along with memory testing utilities and other utilities that may have been previously installed.\\

----

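For those who want to double-check from the command line which kernel packages remain installed, something like the following can be used (a sketch; exact package names vary between Debian and Proxmox kernel releases):

<code>
# List installed kernel packages (Debian kernels: linux-image-*,
# Proxmox kernels: pve-kernel-* or proxmox-kernel-*).
dpkg -l | grep -Ei 'linux-image|pve-kernel|proxmox-kernel'

# Show the kernel currently running.
uname -r
</code>
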
==== Installing the ZFS Plugin ====

Under **System**, **Plugins**, scroll all the way to the bottom. Highlight **openmediavault-zfs** 7.X.X and click the down arrow to install.\\

{{ :omv7:omv7_plugins:zfs-03.jpg?nolink&600 |}}

The installation popup will proceed until **END OF LINE** appears. Click the **Close** button.\\
In most cases, the GUI will reset, which changes the left side menu, adding **ZFS** to the **Storage** drop-down.\\
----

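Once the plugin is installed, the ZFS command-line tools are available as well. A quick sanity check from SSH (a sketch; at this point no pools exist yet):

<code>
# Report the installed OpenZFS userland and kernel module versions.
zfs version

# With no pools created yet, this should simply report "no pools available".
zpool status
</code>
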
==== Creating a Pool ====

=== General Info ===

== ZFS terms ==
\\
- A ZFS Pool is made up of one or more "VDEV's".\\
- A VDEV can be a single disk (a Basic volume) or a collection of disks in a RAID-like format (RAID1, RAID5, etc.).\\
**Not to worry - the installation process creates the Pool and adds the first VDEV automatically.**\\
- A Pool can have multiple VDEV's, AND a new VDEV can be added to an existing Pool, increasing its size.\\
- If a VDEV is lost, the Pool is lost. There is no recovery from this situation.\\
- Disk redundancy is at the VDEV level. With a mirror as the first VDEV, any VDEV added later should offer the same level of redundancy.\\
Therefore, while possible, it wouldn't be advisable to expand a Pool that has a RAID VDEV using a single Basic volume (a Basic volume has no redundancy).\\

At this point, users should have some idea of the type of Pool they're looking for. (The available RAID levels are covered in more detail below, under **VDEV Types**.)
\\
Under **Storage**, **ZFS**, **Pools**, click on the create (**+**) icon.\\
\\
{{ :docs_in_draft:zfs-04.jpg?nolink&600 |}}
\\
----
\\
The "Create" window will pop up. Note that making selections on this page will create a Pool and its first "VDEV". (More on VDEV's later.)

{{ :docs_in_draft:zfs-05.jpg?nolink&600 |}}

The **Name*** field:\\
The user's choice. However, limit the name to letters and numbers, of a reasonable length.\\
\\
**Pool type**: The choices (Basic, Mirror, RAID-Z1/2/3) are described under **VDEV Types** below.\\
\\
**Devices**: To see and select drives in this field, they must first be wiped under **Storage**, **Disks**. Generally, a **Quick** wipe will do.\\
\\
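Once the Pool has been created in the web interface, it can also be inspected from the command line. The Pool name ''tank'' below is only an example; substitute the name chosen above (a sketch):

<code>
# Show the pool, its vdev layout, and the health of each member disk.
zpool status tank

# Show overall size, allocated space, and free space.
zpool list tank

# Show the ZFS filesystems (datasets) in the pool and their mountpoints.
zfs list -r tank
</code>
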
===== ZFS Pools and VDEV's =====

**General:**\\
As noted before, when creating the first Pool, a new VDEV is created within the new Pool in accordance with the user's selections.\\
There are a few rules concerning Pools and VDEV's:\\

  * VDEV's are made up of physical block devices, i.e., storage drive(s).
  * A Pool can be expanded by adding __new__ VDEV's. However, the addition of a VDEV, to a Pool, is PERMANENT. As a rule, a VDEV __can not__ be removed from a Pool.
  * A VDEV, once created, cannot be changed. A Basic volume will remain a Basic volume. A mirror (RAID1 equivalent) will remain a mirror. RAID-Z1 (a RAID5 equivalent) will remain RAID-Z1. RAID-Z1 cannot be upgraded to RAID-Z2 (a RAID6 equivalent), etc.
  * If a single VDEV is lost, in a multi-VDEV Pool, the entire Pool is lost.
  * Disk redundancy is at the VDEV level. Accordingly, it makes __no sense__ to add a Basic disk VDEV to a RAID level VDEV to expand a Pool. Per the rules, a VDEV can't be removed, and if the Basic (single disk) VDEV fails, the entire Pool will be lost.\\
\\
==== VDEV Types ====

{{ :omv7:omv7_plugins:zfs-pools_vdev_types.jpg?nolink&600 |}}

  * **Basic**: Basic is a single disk volume. A single "Basic" disk is fine for basic storage. A scrub will reveal data integrity errors but, using default attributes, data errors will not be automatically corrected. However, if the filesystem attribute copies=2 is set, a filesystem residing on a Basic volume will auto-correct data errors. (The cost associated with 2 copies of all files is that it uses twice the disk space.) As noted earlier, using a Basic volume to expand a Pool with a RAID VDEV is a bad idea.

  * **Mirror**: Also known as a Zmirror, a RAID1 equivalent. A mirror requires a 2 disk minimum. In that there are always at least 2 copies of all files in a Mirror, data integrity scrubs automatically correct data errors. More than 2 disks can be added to a single Mirror; adding more than 2 disks creates additional mirrored copies, i.e., more than 2 copies of all files. While the cost is the loss of hard drive space, multiple drives in a mirror configuration provide for maximum data integrity and safety.\\

For RAID-Z implementations, it is generally recommended to run an "**odd**" number of drives.

  * **RAID-Z1**: With one disk's worth of striped parity, this is the equivalent of RAID5. (RAID-Z1 requires 3 disks minimum. A rule of thumb maximum would be 7 drives.)\\
  * **RAID-Z2**: With two disks' worth of striped parity, this is the equivalent of RAID6. (RAID-Z2 requires 4 disks minimum. A rule of thumb maximum would be 11 drives.)\\
  * **RAID-Z3**: With three disks' worth of striped parity, RAID-Z3 has no legacy RAID equivalent but it could, notionally, be called RAID7. (RAID-Z3 requires 5 disks minimum. A rule of thumb maximum would be 15 drives.)\\

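For illustration only, the following sketch shows how each VDEV type maps to ZFS's own command-line syntax. As stated earlier, creating pools on the command line is __not__ the recommended path for this document (use the plugin); the pool name ''tank'' and the ''/dev/sdX'' device names are placeholders.

<code>
# Basic - a single disk vdev (no redundancy):
#   zpool create tank /dev/sdb

# Mirror (Zmirror) - two or more disks, each holding a full copy of the data:
#   zpool create tank mirror /dev/sdb /dev/sdc

# RAID-Z1 / Z2 / Z3 - striped data with one, two or three disks of parity:
#   zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
#   zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
#   zpool create tank raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
</code>
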
==== VDEV Disk Redundancy ====

\\
<html><center><span style="font-size:125%;">To reiterate; if any one VDEV in a Pool is lost, the entire Pool is lost.</span></center></html>
<html><center><span style="font-size:125%;">Accordingly, if redundancy is to be used, it must be set up at the VDEV level.</span></center></html>
\\
<html><center><span style="font-size:125%;">Consider the following illustration:</span></center></html>

{{ :omv7:omv7_plugins:zfs-pools_vdev_redundancy.jpg?nolink&500 |}}

<html><center><span style="font-size:100%;">A pool, with a VDEV of one disk (Basic), is lost if the disk fails. That's straightforward.</span></center></html>
<html><center><span style="font-size:100%;">A pool with a mirror VDEV will survive if one disk in the mirror is lost.</span></center></html>
<html><center><span style="font-size:100%;">A pool with a RAID-Z1 VDEV, also, will survive if one disk in the array is lost.</span></center></html>
\\
While it will work, mixing dissimilar VDEV's in a Pool, in the manner illustrated, is extremely bad practice. As has already been stated, once a VDEV is added to a Pool, it can't be removed. The Pool as illustrated will therefore be lost if there's a problem with the Basic drive, because that single drive has no redundancy to recover from.\\

----

<html><center><span style="font-size:125%;">The Recommended Pool / VDEV to Start Using ZFS</span></center></html>

{{ :omv7:omv7_plugins:zfs-recommended-pool.jpg?nolink&500 |}}

Pools made up of mirrors offer the most features and advantages. Disk redundancy is taken care of and, since there are two copies of all files, bit-rot damage to files is self-healing. If needed, another mirror (pair of disks) can be added to expand the Pool at any time.

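As a command-line illustration of the point above, adding a second pair of disks to a mirror Pool looks like the sketch below. ''tank'' and the device names are placeholders, and because adding a VDEV is permanent, the dry-run form (''-n'') is worth using first.

<code>
# Preview the change without modifying the pool (-n = dry run).
zpool add -n tank mirror /dev/sdd /dev/sde

# Add a second mirror vdev to the pool, increasing its capacity.
zpool add tank mirror /dev/sdd /dev/sde

# Confirm the new vdev appears alongside the original mirror.
zpool status tank
</code>
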
===== Data Integrity =====

Data integrity is one of ZFS' hallmark features. In many file systems, silent corruption goes unnoticed and unchecked, largely because they have no mechanism to detect "bit-rot". There are numerous causes of bit-rot, from the inevitable degradation of magnetic media due to age, to SSD's with bits flipped by cosmic rays, among other scenarios. Another consideration is the way hard drives and SSD's fail. Contrary to popular belief, storage media does not fail instantly, like flipping a light switch. Often it fails slowly and silently, irreversibly corrupting data. Without a filesystem that actively monitors the health of stored data, the discovery of data corruption may come well after sensitive data is irretrievably lost. Again, since corruption is often silent and may occur over an extended period, the potential for copying corrupted files to a backup device is a possibility as well.\\

Conversely, by using file checksums and "scrubs", ZFS actively monitors the health of users' data. ZFS scrubs may detect issues with a hard drive well //**before**// SMART stats provide warnings of a developing drive issue. Keeping the primary server's data clean ensures that backup copies of data are clean as well.\\

==== How ZFS Data Integrity Works ====

When a file is copied into a ZFS Pool, the file is parsed and assigned a checksum. A checksum may also be referred to as a "hash". Every file will have a unique checksum. If a file is modified, a new checksum is calculated and assigned to the file. In this manner, the exact state of the file is tracked, as it was when it was created or modified. It's important to note that if the file is altered in any respect, even at the bit level, the checksum will change. Changes that __did not__ occur as a result of a filesystem write are what data integrity is all about.\\
A housekeeping chore called a "scrub" can be used to compare file checksums against actual file content. Scrubs should be run periodically, to ensure that data is clean and to serve as a warning of potential degradation of storage media.\\

In the following illustration:\\
- The file in the Pool on the left matches its checksum. If __all__ files match their checksums, a scrub would report '' scan: scrub repaired 0B in <Length of Time required for the scan> with 0 errors on <Day, Date, Time, Year>''\\
- The file in the Pool on the right does not match its checksum. Since the Pool has no redundancy, this file might be reported as a "checksum error" and / or an "unrecoverable error".\\
\\

{{ :omv7:omv7_plugins:zfs-integrity-1.jpg?nolink&200 |}}

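Scrubs can be started and checked from the web interface or from the command line. A sketch, with ''tank'' standing in for the actual Pool name:

<code>
# Start a scrub. It runs in the background and the pool remains usable.
zpool scrub tank

# Check progress and results; the "scan:" line reports repaired bytes and errors.
zpool status tank
</code>

Running scrubs on a regular schedule (monthly, for example) is common practice.
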
==== Automatic File Restoration ====

File restoration is automatic "if" two copies of the same file exist (or, more generally, if ZFS has redundancy to restore from). Consider the following illustration:

- **On the left**:\\
When a file is created, a ZFS mirror (RAID1 equivalent) creates two identical copies of the file, one on each hard drive, and assigns them identical checksums.\\
- **In the middle**:\\
A scrub took place, where one of two files that were previously identical no longer matches its checksum.\\
- **On the right**:\\
ZFS will automatically delete the errored file and copy the known good file to the 2nd disk.\\

{{ :omv7:omv7_plugins:zfs-integrity-2.jpg?nolink&600 |}}

----

Automatic file restoration is available when ZFS has redundancy to repair from:\\
- When using Zmirrors (where there are two copies of all files) or RAID-ZX VDEV's (where parity allows damaged blocks to be rebuilt).\\
- On Basic volumes, where filesystems have the **copies=2** feature active.\\

Consider the following:\\
In cases where filesystems are using the copies=2 feature, automatic file restoration works the same as it would with Zmirrors, where 2 file copies exist natively.\\
\\
{{ :omv7:omv7_plugins:zfs-integrity-3.jpg?nolink&600 |}}

----

Practical considerations:\\
- While copies=2 can be used with a Basic volume, it should be noted that, in the event of a disk failure, both file copies would be lost. However, if data integrity and automatic restoration are used at the primary server, data on a backup server would be clean.\\
- RAID-Z implementations provide __disk__ redundancy and can also repair checksum errors from parity during a scrub, so copies=2 is not required for automatic file restoration (though it can still be set for extra protection).\\

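The ''copies=2'' feature mentioned above is a per-filesystem property, and can be checked or set from the command line. A sketch, with ''tank/documents'' as a placeholder dataset name; note that the setting only applies to data written __after__ it is set:

<code>
# Show the current setting (the default is 1).
zfs get copies tank/documents

# Keep two copies of every block in this filesystem, at the cost of double the disk space.
zfs set copies=2 tank/documents
</code>
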
===== Configuration =====

===== Final Notes =====

Expanding a VDEV:\\
\\
Spare drives can be added to RAID VDEV's, but they cannot (currently) be used to expand the VDEV. However, Zmirrors or RAID-ZX arrays can be "upgraded" for size by failing and replacing each of the array's drives, one by one, with larger drives. The pool feature that supports the resulting size increase is "**autoexpand**", which is __off__ by default and must be enabled on the pool. It should be noted that significant RISK is involved in failing, replacing, and resilvering numerous drives, especially if they are old. Before beginning such a process, ensuring the server's backup is up-to-date is highly recommended.\\

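As a rough command-line sketch of the drive-by-drive upgrade described above (''tank'' and the device names are placeholders; make sure backups are current before attempting this):

<code>
# Enable automatic expansion so the pool grows once every disk in the vdev is replaced.
zpool get autoexpand tank
zpool set autoexpand=on tank

# Replace one member disk with a larger one, then wait for the resilver to finish
# before moving on to the next disk. Repeat for every disk in the vdev.
zpool replace tank /dev/sdb /dev/sdf
zpool status tank
</code>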