====== Automated ZFS Snapshots for openmediavault 5 ======
\\
\\
This document can be converted to a PDF file, in the user's language of choice, on Windows, Macs and popular Linux desktop platforms. Simply select the printer icon on the upper right corner of this web page. When prompted at the client, select “print to PDF”, then name and save the file.
\\
\\
===== Automated ZFS Snapshots =====
\\
**For ZFS users:**\\
This guide will show how to set up and take advantage of one of ZFS' most valuable features for restoration: automated snapshots.

While intended primarily for openmediavault, this guide is applicable to other Debian-based platforms running ZFS.
----

**Tested – December 31st, 2020: with openmediavault 5 and Debian 10 (Buster).**\\
Tested, prior, with openmediavault 4 and Debian 9.

----
\\
==== General ====

Given the design and function of a CoW (copy-on-write) filesystem, ZFS gives users the ability to “capture” the state of their filesystem at a given moment in time and preserve it using snapshots.
Having the ability to “roll back” the pool or individual filesystems in the pool, or to retrieve individual files from previous snapshots, has obvious advantages.
\\
An excellent overview of how ZFS snapshots work is available on Youtube.
----

----

==== Install zfs-auto-snapshot ====
\\
From the command line as root, copy and paste the following: \\
\\
''…'' \\
\\
''…'' \\
\\
If the output of the above is “''…''”, install unzip: \\
''…'' \\
Then rerun the unzip command: \\
''…'' \\
\\
''…'' \\
\\
''…''
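The exact commands did not survive in this revision. A typical install of zfs-auto-snapshot from the upstream GitHub archive looks like the following sketch; the URL and directory name are assumptions based on the zfsonlinux/zfs-auto-snapshot project layout:

```shell
# Fetch the zfs-auto-snapshot sources (URL assumed from the upstream
# zfsonlinux/zfs-auto-snapshot GitHub project)
wget https://github.com/zfsonlinux/zfs-auto-snapshot/archive/master.zip

# Unpack the archive; if this reports "unzip: command not found",
# install unzip (apt-get install unzip) and rerun this command
unzip master.zip

# Install the script and its cron entries
cd zfs-auto-snapshot-master
make install
```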

----

By default, the install will set up scripts that run the following snapshot jobs separately on the parent pool and on all individual child filesystems.\\
\\
**frequent** snapshots run every 15 minutes, keeping 4 snapshots \\
**hourly** snapshots run every hour, keeping 24 snapshots \\
**daily** snapshots run every day, keeping 31 snapshots \\
**weekly** snapshots run every week, keeping 7 snapshots \\
**monthly** snapshots run every month, keeping 12 snapshots \\
\\
With default settings, all of these jobs can preserve previous states of the pool and child filesystem(s) for up to a year. \\
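The schedule above can be confirmed on disk after the install. The paths below are assumptions based on the upstream Makefile, which places the 15-minute job in /etc/cron.d and the others in Debian's periodic cron directories:

```shell
# 15-minute "frequent" job (a cron.d table entry)
cat /etc/cron.d/zfs-auto-snapshot

# hourly/daily/weekly/monthly jobs (plain scripts run by run-parts)
ls -l /etc/cron.hourly/zfs-auto-snapshot \
      /etc/cron.daily/zfs-auto-snapshot \
      /etc/cron.weekly/zfs-auto-snapshot \
      /etc/cron.monthly/zfs-auto-snapshot
```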
\\
----
\\
\\
A “pool snapshot” captures the state of the parent pool, in this example “Rocky”, together with all of its child filesystems.
\\
**In the interests of clarity:**
A ZFS “filesystem” is interchangeable with a standard Linux folder at the root of the pool and is navigable, on the command line, in the same manner.
\\
----
==== Customizing Snapshot jobs ====

By default all snapshot time intervals are set to “**true**”.
----

=== Pool Snapshots ===

The following command lines can be used to selectively turn snapshot intervals **ON** or **OFF** for the **pool** named **Rocky**.
\\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
\\
Time intervals where a snapshot is desired are set to **true**.
In this example, with all intervals up to “weekly” set to “true”, the pool “Rocky” will have a snapshot recorded 4 times an hour (4 total), once per hour (24 total), once a day (31 total), and once a week (7 total).
Again, in the example above, **true** is assumed on all lines.

''…''
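The truncated lines above use zfs-auto-snapshot's ''com.sun:auto-snapshot'' property scheme. A sketch matching the example, with everything up to “weekly” enabled and “monthly” disabled (the interval values shown are assumptions chosen to match the four intervals described):

```shell
# Take frequent, hourly, daily, and weekly snapshots of pool "Rocky"
zfs set com.sun:auto-snapshot:frequent=true Rocky
zfs set com.sun:auto-snapshot:hourly=true Rocky
zfs set com.sun:auto-snapshot:daily=true Rocky
zfs set com.sun:auto-snapshot:weekly=true Rocky
# Skip monthly snapshots, matching the example's four intervals
zfs set com.sun:auto-snapshot:monthly=false Rocky

# Confirm the properties that were set on the pool
zfs get all Rocky | grep com.sun:auto-snapshot
```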

----

=== Filesystem Snapshots ===

The following command lines can be used to selectively turn snapshots on or off for an individual **filesystem**: \\
''zfs set com.sun:…'' \\
\\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
''zfs set com.sun:…'' \\
\\
Again, **true** is assumed in all lines, by default.

Because **true** is the default for all intervals, disabling the unneeded intervals requires the following commands on the Command Line Interface (CLI):\\
\\
''…'' \\
''…'' \\
''…'' \\
''…''

Repeat this process for all child filesystems attached to the parent pool. Again, all intervals are “true” by default.
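As a sketch of the disable step, assuming a hypothetical child filesystem ''Rocky/Documents'' where only daily and weekly snapshots are wanted:

```shell
# Disable the unneeded intervals on the child filesystem only;
# daily and weekly remain at their default of "true"
zfs set com.sun:auto-snapshot:frequent=false Rocky/Documents
zfs set com.sun:auto-snapshot:hourly=false Rocky/Documents
zfs set com.sun:auto-snapshot:monthly=false Rocky/Documents
```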
----

==== Looking at and Organizing Snapshots ====

All snapshots taken by zfs-auto-snapshot are viewable in the OMV GUI. OMV provides a tab for looking at and sorting snapshots under Storage, ZFS, Snapshots.\\
\\
On the CLI, ''…'' will list all snapshots.
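The listing command was cut off in this revision; on a stock ZFS install it is typically:

```shell
# List every snapshot on the system
zfs list -t snapshot

# Limit the listing to one pool and its children (example pool name)
zfs list -t snapshot -r Rocky
```

Because zfs-auto-snapshot timestamps are zero-padded, the snapshot names sort chronologically.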

----

==== Rolling Back ====

While rolling back a filesystem or the parent pool is relatively easy, if going significantly back in time (beyond the most recent snapshot) the roll back feature will not work in the OMV GUI. However, a roll back can be done with the following command line:

(In this example, the roll back would be done to the entire parent pool “Rocky”.)

''…''

(In the following example, the roll back would be done to the “Rocky/…” child filesystem.)

''…''

Using the exact name of the snapshot provided in the OMV GUI (examples above) and the above rollback command with the **-r** switch, the pool or child filesystems can be rolled back to the date / time of the specified snapshot.
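The rollback commands were truncated in this revision. A sketch of both forms follows; the snapshot name is an example in zfs-auto-snapshot's naming pattern, and ''Rocky/Documents'' is a hypothetical child filesystem:

```shell
# -r destroys any snapshots more recent than the rollback target,
# so copy out anything needed from them first.

# Roll the entire parent pool "Rocky" back:
zfs rollback -r Rocky@zfs-auto-snap_daily-2020-12-25-0525

# Roll back only a single child filesystem:
zfs rollback -r Rocky/Documents@zfs-auto-snap_daily-2020-12-25-0525
```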
----

==== Individual filesystem, folder, or file recovery ====

The following command should be used with caution.

''…''

Turning the above command off is done as follows:

''…''
\\
\\
When the hidden directory is made visible, with the set visible command above, it will appear in the associated SMB share.\\
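The property behind this is ''snapdir''. A sketch for the example pool (''visible'' exposes the hidden **.zfs** directory, ''hidden'' — the default — re-conceals it):

```shell
# Expose the hidden .zfs directory at the root of each filesystem
zfs set snapdir=visible Rocky

# Re-hide it when the recovery work is finished (the default setting)
zfs set snapdir=hidden Rocky
```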
----
\\
Under **.zfs**, the **snapshot** directory is found.

Inside each of the snapshot folders exists the exact state of the filesystem, as it existed on the date shown.

The contents of any snapshot can be copied from the snapshot to replace or overwrite any part of, or all of, the current top level filesystem.

For businesses and users, by going back to a time before malware existed on a share, this feature can be utilized as a ransomware or virus “killer”.

**Caution**:
The above will make all of the current filesystem's snapshots visible.
If the snapshot directory is visible and an rsync job or any other backup type runs, pulling files from the share, the destination will be **overrun** with multiple versions of the same files and folders.
Equally important is the need for any copy, paste or replace operations, from past snapshots to the current filesystem, to be “//one way//”.

Again, the command to re-hide the snapshot folder is:

''…''

----
==== Notes: ====

1. For maximum snapshot flexibility, organize data into several child filesystems under the pool, rather than storing everything at the root of the pool.
\\
2. Think of the roll back feature as going back in time, in the ZFS “time line”. When rolling back to a specific date/time snapshot, ZFS destroys all snapshots (all file changes, additions and deletions) between the present state and the past state of the filesystem.
\\
The practical implications are:
  * In order to not lose file changes, deletions, etc., it's better to roll back the shortest interval possible.
  * By extension, it's better to roll back a single filesystem (the ZFS equivalent of a root folder), than it is to roll back an entire pool without child filesystems.
  * Due to the minimal overall impact, to prevent the loss of file versions, etc., selective restoration(s) of files and folders is preferred and is “best practice”.
  * Roll backs are a disaster recovery option that, if applied, should be as narrow as possible (a single filesystem) and limited to the shortest time interval possible.
\\
3. If a particular filesystem experiences a high rate of file turnover or file versioning (databases, etc.), snapshot retention periods may need to be shortened to prevent excessive use of disk space.
\\
4. While it is an advanced technique, it is possible to create a filesystem “clone” from a past snapshot, without rolling back. The clone could be used as a source of folders and files as they existed when the snapshot was taken, without sacrificing the snapshots in between.
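As a sketch of the clone technique in note 4, with a hypothetical child filesystem, an example snapshot name, and a hypothetical clone name:

```shell
# Create a writable clone of a past snapshot; the live filesystem and
# all intermediate snapshots are left untouched
zfs clone Rocky/Documents@zfs-auto-snap_weekly-2020-12-20-0447 Rocky/Documents_restore

# Copy what is needed out of Rocky/Documents_restore, then remove it
zfs destroy Rocky/Documents_restore
```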
\\
5. While zfs-auto-snapshot creates considerable flexibility in file, folder, filesystem, and pool restorations, snapshots are not a substitute for full, independent backup.
\\
6. If more advanced features are needed, such as offloading snapshots to an external host, see http://… .
\\
=== Additional Information: ===

zfs-auto-snapshot source: \\
https://… \\
\\
A comprehensive and well written ZFS reference: \\
https://… \\
\\
ZFS Video Tutorial: \\
Part 1: https://… \\
Part 2: https://… \\
  * Due to a difference in repositories, the installation portion of the video may not apply directly to openmediavault.
  * While this tutorial is informative, following it exactly is not required for the snapshot setup in this guide.