- A ZFS "**Pool**" is made up of one or more "**VDEV**'s". More detailed information on VDEVs can be found -> [[https://wiki.omv-extras.org/doku.php?id=docs_in_draft:zfs#vdev_types|here]]. It's recommended that new ZFS users review the various VDEV types as the selections made during the following steps will have an impact on future Pool maintenance and expansions.\\ | - A ZFS "**Pool**" is made up of one or more "**VDEV**'s". More detailed information on VDEVs can be found -> [[https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:zfs#vdev_types|here]]. It's recommended that new ZFS users review the various VDEV types as the selections made during the following steps will have an impact on future Pool maintenance and expansions.\\ |
- During the creation of a "Pool", in accordance with user selections, the installation process creates and adds the first VDEV automatically.\\ | - During the creation of a "Pool", in accordance with user selections, the installation process creates and adds the first VDEV automatically.\\ |
- A Pool can have one or more VDEVs and a new VDEV can be added to an existing pool, at any time, increasing the pool's size.\\ | - A Pool can have one or more VDEVs and a new VDEV can be added to an existing pool, at any time, increasing the pool's size.\\ |
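As a rough sketch only, outside of the plugin's GUI workflow, the following shows how a second VDEV could be added to an existing pool from the command line. The pool name "tank" and the disk paths are placeholders; substitute the actual pool name and the /dev/disk/by-id paths of the new disks.
<code>
# Add a new mirror VDEV (two disks) to the existing pool "tank".
# "tank" and the disk paths below are placeholder examples.
zpool add tank mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Confirm that the new VDEV is now part of the pool.
zpool status tank
</code>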
----
\\
<html><center>In the <b>Add</b> dialog box, under <b>Type</b> select <b>Filesystem</b>, give the Filesystem a name, and click <b>Add</b>.</center></html>
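For reference only, the equivalent operation from the command line would be roughly the following; the pool name "tank" and the filesystem name "data" are placeholders.
<code>
# Create a new ZFS filesystem (dataset) named "data" inside the pool "tank".
zfs create tank/data

# List the pool's filesystems to confirm the new dataset exists.
zfs list -r tank
</code>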
\\
\\
- While copies=2 can be used with a Basic volume, it should be noted that in the event of a disk failure both file copies would be lost. However, if data integrity and automatic restoration are used at the primary server, data on a backup server would be clean.\\
- RAID-Z implementations reconstruct errored data a bit differently. While some data reconstruction is possible, using parity calculations, RAID-Z does not provide for restoration of //silent// errors. While RAID-Z provides __disk__ redundancy, copies=2 would be required to provide for __maximum__ data protection and file restoration. (A command line sketch of setting copies=2 follows below.)\\
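As a command line sketch, assuming a pool named "tank" with a filesystem named "data" (placeholder names), copies=2 could be enabled as shown below. Note that the property only affects data written after it is set.
<code>
# Store two copies of every data block written to this filesystem.
zfs set copies=2 tank/data

# Verify the current value of the copies property.
zfs get copies tank/data
</code>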
\\
----
\\
==== Additional Reading ====

The following link features Aaron Toponce's excellent [[https://tadeubento.com/2024/aarons-zfs-guide/|ZFS guide]].
\\
For a deep dive into ZFS and its internals -> [[https://openzfs.github.io/openzfs-docs/|OpenZFS Documentation]].\\
\\