While all use cases are not the same, for the sake of this discussion, we'll assume that users reading this document are NOT Corporate or Datacenter Admins. The assumption will be that readers are home server users, server admins for small businesses, or other entities that have 25 users or less. In other words, when compared to Enterprise level network traffic, we're talking about relatively "light usage".\\
\\
The author of this document, in personal experience with running ZFS, has set up a 4TB pool on a host with 4GB of RAM and an older Atom processor (read, "a weak CPU"). File server performance for a few users, along with streaming data, was fine. Memory might become an issue only if "dedup" (deduplication of data) is turned ON. (This is an Enterprise feature that is __OFF__ by default.) In most home or small business use cases, ZFS' CPU requirements are modest.\\
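For those who want to confirm this later from a shell, a minimal check, assuming a pool named ''tank'' already exists and commands are run as root:
<code bash>
# Deduplication is off by default; confirm it for a pool named "tank" (placeholder name)
zfs get dedup tank
# Show the pool's size, allocation and overall health at a glance
zpool list tank
</code>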
\\
**"ECC RAM is required to run ZFS".**\\
\\
At this point, a user choice is required:\\
* **If the Standard Debian kernel is to be used**, proceed with [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#disable_backports_kernels|Disable Backports directly below]].\\

* **If the Proxmox kernel is to be used** (recommended), skip to -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#install_the_proxmox_kernel|Install The Proxmox Kernel]].\\
While it's not absolutely necessary, consider rebooting the server to ensure that the standard Debian kernel and its repos are aligned.\\
\\
When complete, skip the following and proceed directly to [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#installing_the_zfs_plugin|Installing the ZFS Plugin]].\\
\\
----
**Optional:** Non-Proxmox kernels can be removed.\\
\\
**TL;DR** proceed to -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#installing_the_zfs_plugin|Installing the ZFS plugin]].\\
\\
Removing non-Proxmox kernels is recommended because, when openmediavault is updated, the remaining Debian kernels will be updated as well. These updates also add unnecessary entries for the newer Debian kernels to the grub bootloader. While rare, grub/kernel updates occasionally do not go well. Issues with unused Debian kernels and their grub updates can be prevented by removing non-Proxmox kernels.\\
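For reference, installed kernel packages can be reviewed from a shell before or after removal. This is only a sketch; the exact package names will differ on each system:
<code bash>
# List installed kernels; Debian kernels are "linux-image-*", Proxmox kernels are "pve-kernel-*" / "proxmox-kernel-*"
dpkg --list | grep -E 'linux-image|pve-kernel|proxmox-kernel'
# Example only: remove an unused Debian kernel (substitute the exact package name reported above)
apt-get purge linux-image-6.1.0-18-amd64
</code>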
\\
<html><center>After the reboot, under <b>System</b>, <b>Kernel</b>:</center></html>
<html><center>Only Proxmox kernels should be displayed (ending with <b>-pve</b>), along with memory testing utilities and other utilities that may have been previously installed.</center></html>
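The running kernel can also be checked from a shell; the version string below is only an example:
<code bash>
# A Proxmox kernel reports a version ending in "-pve", e.g. 6.8.12-4-pve
uname -r
</code>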
- A ZFS "**Pool**" is made up of one or more "**VDEV**s". More detailed information on VDEVs can be found -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#vdev_types|here]]. It's recommended that new ZFS users review the various VDEV types, as the selections made during the following steps will have an impact on future Pool maintenance and expansions.\\
- During the creation of a "Pool", in accordance with user selections, the installation process creates and adds the first VDEV automatically.\\
- A Pool can have one or more VDEVs, and a new VDEV can be added to an existing pool at any time, increasing the pool's size (see the command-line sketch below).\\
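Although the plugin handles all of this through the GUI, the equivalent command-line operations may help illustrate the Pool/VDEV relationship. This is only a sketch; the pool name ''tank'' and the disk paths are placeholders:
<code bash>
# Create a pool named "tank" with a single mirror VDEV (two disks)
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
# Later, grow the pool by adding a second mirror VDEV (two more disks)
zpool add tank mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
# Review the layout: both VDEVs are listed under the pool
zpool status tank
</code>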
The choices are: **Basic**, **Mirror**, **RAID-Z1**, **RAID-Z2**, **RAID-Z3**.\\
For more details on these selections, see -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#vdev_types|VDEV types]].\\
\\
<html><center>Confirm the Pending Change.</center></html>
\\
<html><center>If a <b>WARNING</b> dialog pops up containing "<b>invalid VDEV specification</b>", it may be necessary to check the <b>Force creation</b> box.
</center></html>
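This warning comes from ZFS itself; at the command line the equivalent override is the ''-f'' flag. A hedged example with placeholder device paths:
<code bash>
# ZFS refuses mixed disk sizes or replication levels without -f; force creation only after confirming the warning is benign
zpool create -f tank raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
</code>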
----
==== Working With ZFS FileSystems ====
**TL;DR** Take me to -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#adding_filesystems|Adding Filesystems]].\\
\\
While it’s possible to create Linux folders directly at the root of a ZFS pool, creating dedicated ZFS filesystems offers many advantages.\\
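As a rough command-line illustration of those advantages (for example, per-filesystem properties, quotas and snapshots), with ''tank'' and the dataset names as placeholders:
<code bash>
# Create a dedicated filesystem (dataset) instead of a plain folder at the pool root
zfs create tank/media
# Properties can then be set per filesystem...
zfs set compression=lz4 tank/media
zfs set quota=500G tank/media
# ...and snapshots can be taken per filesystem
zfs snapshot tank/media@before-cleanup
zfs list -t all -r tank
</code>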
----
\\
<html><center>In the <b>Add</b> dialog box, under <b>Type</b> select <b>Filesystem</b>, give the Filesystem a name, and click <b>Add</b>.</center></html>
\\
\\
* **Basic**: A single disk volume. A single "Basic" disk is fine for basic storage. A scrub will reveal data integrity errors but, using default attributes, data errors will not be automatically corrected. However, if the filesystem attribute copies=2 is set, a filesystem residing on a Basic volume will autocorrect data errors. (The cost associated with 2 copies of all files is that it uses twice the disk space.) As noted earlier, using a Basic volume to expand a Pool with a RAID VDEV is a bad idea. (See the command-line sketch after this list.)

* **Mirror**: Also known as a Zmirror. A Zmirror is a RAID1 equivalent. A mirror requires a 2 disk minimum. Since there are always at least __2 copies__ of all files in a Mirror, data integrity scrubs automatically correct data errors. Further, it's worth noting that more than 2 disks can be added to a single Mirror. Adding more than 2 disks creates additional disk mirrors and more than 2 copies of all files. While the cost is the loss of hard drive space, multiple drives in a mirror configuration provide for **maximum** data integrity and safety. For the reasons stated in this -> [[https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/|reference]], using VDEV(s) comprised of one or more mirrors should be considered and is recommended.\\
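A minimal sketch of the two points above, assuming a pool named ''tank'', a filesystem ''tank/docs'', and placeholder disk paths:
<code bash>
# Basic volume: keep two copies of every block in this filesystem (uses twice the space)
zfs set copies=2 tank/docs
zfs get copies tank/docs
# Mirror: attach a third disk to an existing two-way mirror, making it a three-way mirror
zpool attach tank /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK3
zpool status tank
</code>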
For RAID-Z implementations, it is generally recommended to run an "**odd**" number of drives (for example, three or five drives in a RAID-Z1; see the sketch below).
- While copies=2 can be used with a Basic volume, it should be noted that in the event of a disk failure both file copies would be lost. However, if data integrity and automatic restoration are used at the primary server, data on a backup server would be clean.\\
- RAID-Z implementations reconstruct errored data a bit differently. While some data reconstruction is possible, using parity calculations, RAID-Z does not provide for restoration of //silent// errors. While RAID-Z provides __disk__ redundancy, copies=2 would be required to provide for __maximum__ data protection and file restoration.\\
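A hedged example of a RAID-Z1 pool built with an odd number of disks, followed by a data-integrity scrub; the pool name and device paths are placeholders:
<code bash>
# Three-disk RAID-Z1 VDEV (single parity)
zpool create tank raidz1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
# Walk all data and verify checksums; repairs are made where the redundancy allows
zpool scrub tank
# Check scrub progress and any errors found
zpool status -v tank
</code>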
\\
----
\\
==== Additional Reading ====

The following link features Aaron Toponce's excellent [[https://tadeubento.com/2024/aarons-zfs-guide/|ZFS guide]].\\
\\
For a deep dive into ZFS and its internals -> [[https://openzfs.github.io/openzfs-docs/|OpenZFS Documentation]].\\
\\