====== Automated ZFS Snapshots for Openmediavault ======
\\
\\
**For ZFS users:**\\
This guide will show how to set up and take advantage of one of ZFS' most valuable features for restoration:
**zfs-auto-snapshot**, which takes and rotates snapshots automatically, at set time intervals.
While intended primarily for openmediavault, the process may apply to other Debian-based platforms as well.
----
**Tested – December 31st, 2020: With openmediavault 5 and Debian 10 (Buster).**\\
Tested, prior, with openmediavault 4 and Debian 9.
----
==== General ====
Given the design and function of a CoW (copy on write) filesystem, ZFS gives users the ability to “capture” the state of their file system at a given moment in time and preserve it using snapshots.
Having the ability to “roll back” the pool, individual filesystems in the pool, or retrieve individual files from previous snapshots has obvious advantages.
{{ ::zfs2.jpg?700 |}}
----
\\
**In the interests of clarity:**\\
A ZFS “filesystem” is interchangeable with a standard Linux folder at the root of the parent pool and is navigable, on the command line, in the same manner.
\\
----
===== Customizing Snapshot jobs =====
By default, all snapshot time intervals are set to “**true**”.
\\
zfs set com.sun:auto-snapshot:frequent=true Rocky\\
zfs set com.sun:auto-snapshot:hourly=true Rocky\\
zfs set com.sun:auto-snapshot:daily=true Rocky\\
zfs set com.sun:auto-snapshot:weekly=true Rocky\\
zfs set com.sun:auto-snapshot:monthly=true Rocky\\
Again, in the example above, **true** is assumed on all lines.
To disable an interval, set it to **false**. For example, ''zfs set com.sun:auto-snapshot:frequent=false Rocky'' would turn off “frequent” snapshots for the pool.
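As a rough sketch (the pool name “Rocky” is taken from this guide's examples; substitute your own pool name), disabling two of the intervals at the pool level and verifying the result could look like this:

```shell
# Disable the "frequent" and "hourly" snapshot jobs for the pool "Rocky".
# (Pool name is an assumption drawn from the examples in this guide.)
zfs set com.sun:auto-snapshot:frequent=false Rocky
zfs set com.sun:auto-snapshot:hourly=false Rocky

# Verify the properties. Intervals never explicitly set will show "-",
# which zfs-auto-snapshot treats as "true" (the default).
zfs get com.sun:auto-snapshot:frequent,com.sun:auto-snapshot:hourly Rocky
```

Because child filesystems inherit ZFS user properties, a value set on the pool also applies to every child filesystem that has no local setting of its own.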
----
=== Filesystem Snapshots ===
The following command lines can be used to selectively turn snapshots on or off for an individual filesystem.
zfs set com.sun:auto-snapshot=true Rocky/filesystem_name\\
\\
zfs set com.sun:auto-snapshot:frequent=true Rocky/filesystem_name\\
zfs set com.sun:auto-snapshot:hourly=true Rocky/filesystem_name\\
zfs set com.sun:auto-snapshot:daily=true Rocky/filesystem_name\\
zfs set com.sun:auto-snapshot:weekly=true Rocky/filesystem_name\\
zfs set com.sun:auto-snapshot:monthly=true Rocky/filesystem_name\\
\\
Again, **true** is assumed in all lines, by default. To disable the unneeded intervals, the following commands on the Command Line Interface (CLI) would be required:
\\
''zfs set com.sun:auto-snapshot:frequent=false Rocky/filesystem_name''\\
''zfs set com.sun:auto-snapshot:hourly=false Rocky/filesystem_name''\\
''zfs set com.sun:auto-snapshot:weekly=false Rocky/filesystem_name''\\
''zfs set com.sun:auto-snapshot:monthly=false Rocky/filesystem_name''\\
Repeat this process for all child filesystems attached to the parent pool. Again, all intervals are “true” by default.
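To confirm what the parent pool and every child filesystem will actually do, the properties can be listed recursively. A sketch, assuming the pool “Rocky” and a hypothetical child filesystem named “filesystem_name”:

```shell
# List the daily-interval property for the pool and all children.
# The SOURCE column shows whether each value is set locally or inherited.
zfs get -r com.sun:auto-snapshot:daily Rocky

# A local setting on a child can be cleared, so it inherits from the
# pool again:
zfs inherit com.sun:auto-snapshot:daily Rocky/filesystem_name
```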
==== Looking at and Organizing Snapshots ====

All snapshots taken by zfs-auto-snapshot are viewable in the OMV GUI. OMV provides a tab for looking at and sorting snapshots under Storage, ZFS, Snapshots.\\
\\
On the CLI, ''zfs list -t snapshot'' will display all existing snapshots.
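A few command-line variations for listing snapshots may be useful (the pool name “Rocky” is again an assumption from this guide's examples):

```shell
# All snapshots on the system:
zfs list -t snapshot

# Only snapshots of the pool "Rocky" and its children, sorted by
# creation time, with size and creation date shown:
zfs list -r -t snapshot -o name,used,creation -s creation Rocky

# Only the daily snapshots taken by zfs-auto-snapshot:
zfs list -t snapshot -o name | grep zfs-auto-snap_daily
```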
----

While rolling back a file system or the parent pool is relatively easy, if going significantly back in time (beyond the most recent snapshot) the roll back feature will not work in the OMV GUI. However, a roll back can be done with the following command line:
(In this example, the roll back would be done to the parent pool “Rocky”.)
''zfs rollback -r Rocky@snapshot_name''
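Putting the roll back together as a sketch, assuming the parent pool “Rocky”; the snapshot name below only illustrates zfs-auto-snapshot's naming scheme and is not a literal value:

```shell
# 1. Find the snapshot to roll back to:
zfs list -r -t snapshot -o name,creation Rocky

# 2. Roll back. The -r flag destroys every snapshot more recent than
#    the target, and is required when going back beyond the most
#    recent snapshot.
zfs rollback -r Rocky@zfs-auto-snap_daily-2020-12-31-0525
```

Because ''-r'' destroys the intervening snapshots, the listing step is worth doing carefully before committing to a roll back.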
''zfs set snapdir=visible Rocky''
\\
\\
When the hidden directory is made visible, with the set visible command above, it will appear in the associated SMB share.\\
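A sketch of toggling the snapshot directory's visibility (the pool name “Rocky” is assumed):

```shell
# Check the current setting; the ZFS default is "hidden":
zfs get snapdir Rocky

# Make the .zfs directory visible at the root of the filesystem:
zfs set snapdir=visible Rocky

# Re-hide it when finished browsing snapshots:
zfs set snapdir=hidden Rocky
```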
----

{{ ::zfs3.jpg?600 |}}

----
\\
Under **.zfs**, the **snapshot** directory is found.
\\
Inside each snapshot folder exists the exact state of the filesystem, as it existed on the date shown.

The contents of any snapshot can be copied from the snapshot to replace or overwrite any part of, or all of, the current top level file system.
For businesses and users, going back to a time before malware existed on a share, this feature can be utilized as a ransomware or virus “killer”.
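As a sketch of such a selective restore, with entirely hypothetical paths and snapshot name:

```shell
# The .zfs directory exists at the root of each filesystem even while
# hidden. Restore a single file from a past daily snapshot:
cp -a /Rocky/filesystem_name/.zfs/snapshot/zfs-auto-snap_daily-2020-12-31-0525/important.doc \
      /Rocky/filesystem_name/important.doc
```

Snapshot contents are read-only, so copies can only go from the snapshot into the live filesystem, never the reverse.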
**Caution**:\\
The above will make all of the current filesystem's snapshots visible.
If the snapshot directory is visible and an rsync job or any other backup type runs, pulling files from the share, the destination will be **OVERRUN** with multiple versions of the same files and folders.
Equally important is the need for any copy, paste or replace operations, from past snapshots to the current file system, to be “//one way//”: from the snapshot to the current file system.
----

===== Notes: =====

1. For maximum snapshot flexibility, store data in child filesystems rather than in folders at the root of the pool.
\\
2. Think of the roll back feature as going back in time, in the ZFS “time line”. When rolling back to a specific date/time snapshot, ZFS destroys all snapshots (all file changes, additions and deletions) between the present state and the past state of the filesystem.
\\
The practical implications are:\\
  - In order to not lose file changes, deletions, etc., it's better to roll back the shortest interval possible.
  - By extension, it's better to roll back a single filesystem (the ZFS equivalent of a root folder) than it is to roll back an entire pool without child filesystems.
  - Due to their minimal overall impact, selective restorations of files and folders are preferred, and are “best practice”, to prevent the loss of file versions.
  - Roll backs are a disaster recovery option that, if applied, should be as narrow as possible (a single filesystem) and limited to the shortest time interval possible.
\\
3. If a particular file system experiences a high rate of file turnover or file versioning (databases, etc.), snapshot retention periods may need to be shortened to prevent excessive use of disk space.
\\
4. While it is an advanced technique, it is possible to create a file system “clone” from a past snapshot, without rolling back. The clone could be used as a source of folders and files as they existed when the snapshot was taken, without sacrificing the snapshots in between.
\\
5. While zfs-auto-snapshot creates considerable flexibility in file, folder, filesystem, and pool restorations, it is not a substitute for real backup.
\\
6. If more advanced features are needed, such as offloading snapshots to an external host, ZnapZend is a more appropriate solution.
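The clone technique mentioned in note 4 could be sketched as follows (the dataset and snapshot names are hypothetical):

```shell
# Create a writable clone of a past snapshot. The snapshot itself, and
# all snapshots between it and the present, remain untouched.
zfs clone Rocky/filesystem_name@zfs-auto-snap_daily-2020-12-31-0525 Rocky/restore_clone

# Copy any needed files out of the clone's mountpoint, then remove it:
zfs destroy Rocky/restore_clone
```

Note that a clone depends on its origin snapshot; the snapshot cannot be destroyed while the clone exists, which is another reason to remove the clone when finished.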
\\
===== Additional Information =====

==== zfs-auto-snapshot source ====
https://
\\
A comprehensive and well written ZFS reference:\\
https://
\\
==== ZFS Video Tutorial ====
Part 1: https://
Part 2: https://
  * Due to a difference in repositories, the installation portion of the videos may not apply to openmediavault.
  * While this tutorial is informative, it is not required for following this guide.