ZFS

ZFS is an advanced filesystem created by Sun Microsystems (now owned by Oracle) and released for OpenSolaris in November 2005.

Features of ZFS include: pooled storage (an integrated volume management system known as a "zpool"), copy-on-write snapshots, data integrity verification and automatic repair (scrubbing), RAID-Z, a maximum 16 exabyte file size, a maximum of 256×10¹⁵ zettabytes of storage, and no limit on the number of filesystems (datasets) or files [1]. ZFS is licensed under the Common Development and Distribution License (CDDL).

Described as "the last word in filesystems", ZFS is stable, fast, secure, and future-proof. Because it is licensed under the CDDL, which is incompatible with the GPL, it is not possible for ZFS to be distributed along with the Linux kernel. This requirement, however, does not prevent third-party developers from developing and distributing a native Linux kernel module, as is the case with OpenZFS (previously known as ZFS on Linux (ZOL)).

ZOL is a project funded by Lawrence Livermore National Laboratory to develop a native Linux kernel module for its massive storage requirements and supercomputers.

Note:

Due to the potential legal incompatibility between the CDDL license of the ZFS code and the GPL of the Linux kernel ([2], CDDL-GPL, ZFS in Linux), ZFS development is not supported by the kernel.

As a consequence:

  • The ZFSonLinux project must keep up with Linux kernel versions. After a stable ZFSonLinux release is made, it is packaged by the Arch ZFS maintainers.
  • This situation sometimes locks down the normal rolling-update process because of unsatisfied dependencies, as the new kernel version proposed by the update is not yet supported by ZFSonLinux.

Installation

General

Warning: Unless you use the DKMS versions of these packages, the ZFS and SPL kernel modules are tied to a specific kernel version. No kernel updates are possible until updated packages are uploaded to the AUR or the archzfs repository.
Tip: You can downgrade the Linux version to the one in the archzfs repository if your current kernel is newer.

Install from the archzfs repository or the Arch User Repository:

According to their own descriptions, these branches depend on the zfs-utils package.
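As a minimal sketch (assuming the archzfs repository has been enabled in pacman.conf and that the prebuilt zfs-linux package matches the installed kernel), installation and loading the module might look like:

# pacman -Syu zfs-linux zfs-utils
# modprobe zfs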

Test the installation by issuing zpool status on the command line. If an "insmod" error is produced, try depmod -a.

Root on ZFS

See Install Arch Linux on ZFS.

DKMS

To automatically rebuild the ZFS modules on every kernel upgrade, users can use DKMS.

Note: When installing dkms, see Dynamic Kernel Module Support.

Install zfs-dkmsAUR or zfs-dkms-gitAUR.

Tip: Add an IgnorePkg entry to pacman.conf to prevent these packages from upgrading when doing a regular update.
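For example, a sketch of the corresponding pacman.conf entry (adjust the package names to the ZFS packages you actually installed):

/etc/pacman.conf
[options]
IgnorePkg = zfs-dkms zfs-utils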

Experimenting with ZFS

Users wishing to experiment with ZFS on virtual block devices (known in ZFS terms as VDEVs), which can be simple files like ~/zfs0.img, ~/zfs1.img, etc., with no possibility of real data loss, can see the Experimenting with ZFS article. It covers common tasks such as building a RAIDZ array, deliberately corrupting data and recovering it, and snapshotting datasets.

Configuration

ZFS is considered a "zero administration" filesystem by its developers; therefore, configuring ZFS is very easy. Configuration is done primarily with two commands: zfs and zpool.

Automatic start

For ZFS to live up to its "zero administration" name, zfs-import-cache.service must be enabled to import the pools and zfs-mount.service must be enabled to mount the filesystems available in the pools. A benefit of this is that it is not necessary to mount ZFS filesystems in /etc/fstab, because zfs-import-cache.service automatically imports the pools based on the /etc/zfs/zpool.cache file.

Run the following command for each pool you want automatically imported by zfs-import-cache.service:

# zpool set cachefile=/etc/zfs/zpool.cache <pool>
Note: Since version 0.6.5.8 of OpenZFS the ZFS service unit files have changed, and you have to explicitly enable any ZFS services you want to run. See ArchZFS issue 72 for more information.

Enable the relevant service (zfs-import-cache.service) and target (zfs-import.target) so that the pools are automatically imported at boot time, as shown below:
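A minimal sketch of the corresponding command (unit names as shipped by the OpenZFS packages):

# systemctl enable zfs-import-cache.service zfs-import.target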

To mount the ZFS filesystems, you have two choices: using zfs-mount.service or using zfs-mount-generator.

Using zfs-mount.service

To automatically mount ZFS filesystems at boot, enable zfs-mount.service and zfs.target.

Note: This does not work with a separate /var dataset, because it cannot be mounted early enough. Use the zfs-mount-generator approach instead. See OpenZFS issue #3768 for more information.

Using zfs-mount-generator

You can also use zfs-mount-generator to create systemd mount units for your ZFS filesystems. systemd will then mount the filesystems based on the mount units without relying on zfs-mount.service. To do so:

  1. Create the /etc/zfs/zfs-list.cache directory.
  2. Enable the required ZFS Event Daemon (ZED) script (called a ZEDLET) to create a list of mountable ZFS filesystems. (This link is created automatically if you are using OpenZFS >= 2.0.0.)
    # ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
  3. Enable zfs.target, and enable and start the ZFS Event Daemon (zfs-zed.service). This service is responsible for running the script from the previous step.
  4. Create an empty file named after your pool in the /etc/zfs/zfs-list.cache directory. The ZEDLET will only update the list of filesystems if the file for the pool already exists.
    # touch /etc/zfs/zfs-list.cache/<pool-name>
  5. Check the contents of /etc/zfs/zfs-list.cache/<pool-name>. If the file is empty, make sure that zfs-zed.service is running and change the canmount property of one of your ZFS filesystems by running:
    zfs set canmount=off zroot/fs1
    This change causes ZFS to raise an event which is captured by ZED, which in turn runs the ZEDLET to update the file in /etc/zfs/zfs-list.cache. Once the file in /etc/zfs/zfs-list.cache has been updated, you can change the canmount property of the filesystem back by running:
    zfs set canmount=on zroot/fs1

You need to add a file for every ZFS pool in your system to the /etc/zfs/zfs-list.cache directory. Make sure the pools are imported by enabling zfs-import-cache.service and zfs-import.target as described above.
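Putting the steps above together, a condensed sketch for a hypothetical pool named zroot (paths as shipped by current OpenZFS packages):

# mkdir -p /etc/zfs/zfs-list.cache
# ln -s /usr/lib/zfs/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
# systemctl enable zfs.target
# systemctl enable --now zfs-zed.service
# touch /etc/zfs/zfs-list.cache/zroot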

Storage pools

It is not necessary to partition the drives before creating the ZFS filesystem. It is recommended to point ZFS at an entire disk (e.g. /dev/sdx rather than a single partition such as /dev/sdx1); this will automatically create a GPT (GUID Partition Table) and add an 8 MB reserved partition at the end of the disk for legacy boot loaders. However, you can specify a partition or a file within an existing filesystem if you want to create multiple volumes with different redundancy properties.

Note: If any of the drives in the pool have previously been used in a software RAID array, you should first clean up any old RAID configuration information.
Warning: For Advanced Format disks with a 4 KB sector size, an ashift value of 12 is recommended for best performance. For compatibility with legacy systems, Advanced Format disks emulate a 512-byte sector size, which can lead ZFS to pick a suboptimal ashift value. Once a pool has been created, the only way to change its ashift value is to recreate the pool. Note that an ashift value of 12 also slightly reduces the available capacity. See the OpenZFS FAQ: Performance Considerations, Advanced Format Disks, and ZFS and Advanced Format disks.

Identifying disks

OpenZFS recommends using device IDs when creating ZFS storage pools of fewer than 10 devices [3]. Use Persistent block device naming#by-id and by-path to identify the list of drives to be used for the ZFS pool.

The disk IDs should look similar to the following:

$ ls -lh /dev/disk/by-id/
lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc
lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde
lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd
lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb
Warning: If you create zpools using device names (e.g. /dev/sda, /dev/sdb, ...), ZFS might intermittently fail to detect the zpools at boot time.

Using GPT labels

Disk labels and UUIDs can also be used for ZFS mounts by using GPT partitions. ZFS drives have labels, but Linux is unable to read them at boot. Unlike MBR partitions, GPT partitions directly support both UUIDs and labels, independent of the format inside the partition. Partitioning, rather than using the whole disk, offers two additional advantages for ZFS: the operating system will not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and, if desired, you can easily over-provision SSDs and slightly over-provision mechanical drives to ensure that the zpool can accept replacement drives whose sector counts differ slightly from those in your mirror. This way, the same tools and techniques you already use can be employed to configure and control ZFS at no extra cost.

Use gdisk to partition all or part of a drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use the gdisk command "c" to label them. Some reasons you might prefer labels over UUIDs: labels are easy to control, labels can make the purpose of each disk immediately apparent, and labels are shorter and easier to type, all of which are advantages when a server is down and under load. GPT partition labels have plenty of space and can store most international characters (zhwp:GUID_Partition_Table#Partition_entries), allowing large data pools to be labeled in an organized fashion.
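As an illustrative sketch, sgdisk (from the gptfdisk package) can create and label such a partition non-interactively; the partition number, type code and label here are only examples:

# sgdisk --new=1:0:0 --typecode=1:BF01 --change-name=1:zfsdata1 /dev/sdX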

Drives partitioned with GPT have labels and UUIDs that look like this:

$ ls -l /dev/disk/by-partlabel
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1
$ ls -l /dev/disk/by-partuuid
lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1
lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1
Tip: To minimize typing and copy/paste errors, set a local variable for the target PARTUUID: $ UUID=$(lsblk --noheadings --output PARTUUID /dev/sdXY)

Creating ZFS pools

To create a ZFS pool:

# zpool create -f -m <mount> <pool> [raidz(2|3)|mirror] <ids>
Tip: You may want to read #Advanced Format disks first, as it is recommended to set ashift at pool creation time.
  • create: subcommand to create the pool.
  • -m: the mount point of the pool. If this is not specified, the pool will be mounted to /<pool>.
  • pool: the name of the pool.
  • raidz(2|3)|mirror: the type of virtual device (vdev) created from the pool of devices. RAID-Z is single-disk parity, RAID-Z2 is 2-disk parity and RAID-Z3 is 3-disk parity, comparable to RAID 5 and RAID 6. Also available is mirror, which is similar to RAID 1 or RAID 10 but is not constrained to just two devices. If no type is specified, each device is added as its own vdev, which is similar to RAID 0. After creation, a device can be attached to each single-disk vdev to convert it into a mirror, which is useful for migrating data.
  • ids: the IDs of the drives or partitions to include in the pool.

Create a pool with a single RAID-Z vdev:

# zpool create -f -m /mnt/data bigdata \
               raidz \
                  ata-ST3000DM001-9YN166_S1F0KDGY \
                  ata-ST3000DM001-9YN166_S1F0JKRR \
                  ata-ST3000DM001-9YN166_S1F0KBP8 \
                  ata-ST3000DM001-9YN166_S1F0JTM1

Create a pool with two mirror vdevs:

# zpool create -f -m /mnt/data bigdata \
               mirror \
                  ata-ST3000DM001-9YN166_S1F0KDGY \
                  ata-ST3000DM001-9YN166_S1F0JKRR \
               mirror \
                  ata-ST3000DM001-9YN166_S1F0KBP8 \
                  ata-ST3000DM001-9YN166_S1F0JTM1

Advanced Format disks

At pool creation, ashift=12 should always be used, except with SSDs that have 8 KiB sectors, where ashift=13 is correct. A vdev of 512-byte disks using 4 KiB sectors will not experience performance issues, but a 4 KiB disk using 512-byte sectors will. Since ashift cannot be changed after pool creation, even a pool with only 512-byte disks should use ashift=12, because those disks may later need to be replaced with 4 KiB disks or the pool may be expanded by adding a vdev composed of 4 KiB disks. Because correct detection of 4 KiB disks is not reliable, -o ashift=12 should always be specified during pool creation. See the OpenZFS FAQ for more details.

Tip: Use blockdev(8) (part of util-linux) to print the physical sector size reported by the device's ioctls: blockdev --getpbsz /dev/sdXY as the root user.

Create a pool with ashift=12 and a single RAID-Z vdev:

# zpool create -f -o ashift=12 -m /mnt/data bigdata \
               raidz \
                  ata-ST3000DM001-9YN166_S1F0KDGY \
                  ata-ST3000DM001-9YN166_S1F0JKRR \
                  ata-ST3000DM001-9YN166_S1F0KBP8 \
                  ata-ST3000DM001-9YN166_S1F0JTM1

GRUB-compatible pool creation

Note: This section frequently goes out of date with updates to GRUB and ZFS. Consult the manual pages for the most up-to-date information. Source: /usr/share/zfs/compatibility.d/grub2 and man zpool-features

By default, zpool create enables all features on a pool. If /boot resides on ZFS and you use GRUB, you must enable only the features supported by GRUB, otherwise GRUB will not be able to read the pool. GRUB 2.02 supports the read-write features lz4_compress, hole_birth, embedded_data, extensible_dataset, and large_blocks; this does not cover all the features of OpenZFS 0.8.0 and higher, so the unsupported ones must be left disabled. Passing the -d argument to zpool create disables all features by default, so the features to enable can then be named explicitly.

You can create a pool with only the compatible features enabled:

# zpool create -d -o feature@allocation_classes=enabled \
                  -o feature@async_destroy=enabled      \
                  -o feature@bookmarks=enabled          \
                  -o feature@embedded_data=enabled      \
                  -o feature@empty_bpobj=enabled        \
                  -o feature@enabled_txg=enabled        \
                  -o feature@extensible_dataset=enabled \
                  -o feature@filesystem_limits=enabled  \
                  -o feature@hole_birth=enabled         \
                  -o feature@large_blocks=enabled       \
                  -o feature@lz4_compress=enabled       \
                  -o feature@project_quota=enabled      \
                  -o feature@resilver_defer=enabled     \
                  -o feature@spacemap_histogram=enabled \
                  -o feature@spacemap_v2=enabled        \
                  -o feature@userobj_accounting=enabled \
                  -o feature@zpool_checkpoint=enabled   \
                  $POOL_NAME $VDEVS

Verifying pool status

If the command is successful, there will be no output. Using the mount command will show that the pool is mounted. Using zpool status will show that the pool has been created:

# zpool status -v
  pool: bigdata
 state: ONLINE
 scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        bigdata                                    ONLINE       0     0     0
          -0                                       ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KDGY-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JKRR-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KBP8-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JTM1-part1  ONLINE       0     0     0

errors: No known data errors

At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.

Importing a pool created by id

Eventually a pool may fail to auto mount and you need to import to bring your pool back. Take care to avoid the most obvious solution.

Warning: Do not run zpool import pool! This will import your pools using /dev/sd?, which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine.

Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with:

# zpool import -d /dev/disk/by-id bigdata
# zpool import -d /dev/disk/by-partlabel bigdata
# zpool import -d /dev/disk/by-partuuid bigdata
Note: Use the -l flag when importing a pool that contains encrypted datasets' keys, e.g.:
# zpool import -l -d /dev/disk/by-id bigdata

Finally check the state of the pool:

# zpool status -v bigdata

Destroy a storage pool

ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device.

Warning: This command destroys any data contained in the pool and/or dataset.

To destroy the pool:

# zpool destroy <pool>

To destroy a dataset:

# zfs destroy <pool>/<dataset>

And now when checking the status:

# zpool status
no pools available

Exporting a storage pool

If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid in the archiso is different from the one in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the -f argument, but this is considered bad form.

Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding zfs_force=1 to the kernel boot parameters (which is not ideal). See #On boot the zfs pool does not mount stating: "pool may be in use from other system".

To export a pool:

# zpool export <pool>

Extending an existing zpool

A device (a partition or a disk) can be added to an existing zpool:

# zpool add <pool> <device-id>

To import a pool which consists of multiple devices:

# zpool import -d <device-id-1> -d <device-id-2> <pool>

or simply:

# zpool import -d /dev/disk/by-id/ <pool>

Renaming a zpool

Renaming a zpool that is already created is accomplished in 2 steps:

# zpool export oldname
# zpool import oldname newname

Setting a different mount point

The mount point for a given zpool can be moved at will with one command:

# zfs set mountpoint=/foo/bar poolname

Upgrade zpools

When using a newer zfs module, zpools may display an upgrade indication:

$ zpool status -v
pool: bigdata
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done, the pool may no longer be accessible by software that does not support the features. See zpool-features(5) for details.
Note:
  • Lower version zfs modules will not be able to import a zpool of a higher version.
  • When dealing with important data, one may want to create a backup prior to running a zpool upgrade.

To upgrade the version of zpool bigdata:

# zpool upgrade bigdata

To upgrade the version of all zpools:

# zpool upgrade -a

Creating datasets

Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:

# zfs create <nameofzpool>/<nameofdataset>

It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:

# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory>

To see all the commands available in ZFS, see zfs(8) or zpool(8).

Native encryption

ZFS offers the following supported encryption options: aes-128-ccm, aes-192-ccm, aes-256-ccm, aes-128-gcm, aes-192-gcm and aes-256-gcm. When encryption is set to on, aes-256-gcm will be used.

The following keyformats are supported: passphrase, raw, hex.

One can also specify or increase the default number of PBKDF2 iterations used with a passphrase via -o pbkdf2iters <n>, although this may increase decryption time.
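For example, a sketch (1000000 is an arbitrary illustrative value):

# zfs create -o encryption=on -o keyformat=passphrase -o pbkdf2iters=1000000 <nameofzpool>/<nameofdataset>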

Note:
  • Native ZFS encryption has been made available in the stable 0.8.0 release or newer. Previously it was only available in development versions provided by packages like zfs-linux-gitAUR, zfs-dkms-gitAUR or other development builds. Users who were only using the development versions for the native encryption may now switch to the stable releases if they wish.
  • The default encryption suite was changed from aes-256-ccm to aes-256-gcm in the 0.8.4 release.
  • To import a pool with keys, one needs to specify the -l flag, without this flag encrypted datasets will be left unavailable until the keys are loaded. See #Importing a pool created by id.

To create a dataset including native encryption with a passphrase, use:

# zfs create -o encryption=on -o keyformat=passphrase <nameofzpool>/<nameofdataset>

To use a key instead of using a passphrase:

# dd if=/dev/random of=/path/to/key bs=1 count=32
# zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///path/to/key <nameofzpool>/<nameofdataset>

To verify the key location:

# zfs get keylocation <nameofzpool>/<nameofdataset>

To change the key location:

# zfs set keylocation=file:///path/to/key <nameofzpool>/<nameofdataset>

You can also manually load the keys by using one of the following commands:

# zfs load-key <nameofzpool>/<nameofdataset> # load key for a specific dataset
# zfs load-key -a # load all keys
# zfs load-key -r zpool/dataset # load all keys in a dataset

To mount the created encrypted dataset:

# zfs mount <nameofzpool>/<nameofdataset>

Unlock/Mount at boot time: systemd

It is possible to automatically unlock a pool dataset on boot time by using a systemd unit. For example create the following service to unlock any specific dataset:

/etc/systemd/system/zfs-load-key@.service
[Unit]
Description=Load %I encryption keys
Before=systemd-user-sessions.service zfs-mount.service
After=zfs-import.target
Requires=zfs-import.target
DefaultDependencies=no

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/bash -c 'until (systemd-ask-password "Encrypted ZFS password for %I" --no-tty | zfs load-key %I); do echo "Try again!"; done'

[Install]
WantedBy=zfs-mount.service

Enable/start the service for each encrypted dataset (e.g. zfs-load-key@pool0-dataset0.service). Note the use of -, which is an escaped / in systemd unit definitions. See systemd-escape(1) for more info.
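A sketch for a hypothetical dataset pool0/dataset0, using systemd-escape to derive the instance name:

$ systemd-escape pool0/dataset0
pool0-dataset0
# systemctl enable zfs-load-key@pool0-dataset0.service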

Note: The Before=systemd-user-sessions.service ensures that systemd-ask-password is invoked before the local IO devices are handed over to the desktop environment.

An alternative is to load all possible keys:

/etc/systemd/system/zfs-load-key.service
[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

Enable/start zfs-load-key.service.

Unlock at login time: PAM

If you are not encrypting the root volume, but only the home volume or a user-specific volume, another idea is to wait until login to decrypt it. The advantages of this method are that the system boots uninterrupted, and that when the user logs in, the same password can be used both to authenticate and to decrypt the home volume, so that the password is only entered once.

First set the mountpoint to legacy to avoid having it mounted by zfs mount -a:

zfs set mountpoint=legacy zroot/data/home

Ensure that it is in /etc/fstab so that mount /home will work:

/etc/fstab
zroot/data/home         /home           zfs             rw,xattr,posixacl,noauto        0 0

On a single-user system, with only one /home volume having the same encryption password as the user's password, it can be decrypted at login as follows: first create /sbin/mount-zfs-homedir

/sbin/mount-zfs-homedir
#!/bin/bash

# simplified from https://talldanestale.dk/2020/04/06/zfs-and-homedir-encryption/

set -eu

# Password is given to us via stdin, save it in a variable for later
PASS=$(cat -)

VOLNAME="zroot/data/home"

# Unlock and mount the volume
zfs load-key "$VOLNAME" <<< "$PASS" || true # ignore errors so the PAM stack does not fail
zfs mount "$VOLNAME" || true # ignore errors

Do not forget to chmod a+x /sbin/mount-zfs-homedir; then get PAM to run it by adding the following line to /etc/pam.d/system-auth:

/etc/pam.d/system-auth
auth       optional                    pam_exec.so          expose_authtok /sbin/mount-zfs-homedir

Now it will transparently decrypt and mount the /home volume when you log in anywhere: on the console, via ssh, etc. A caveat is that since your ~/.ssh directory is not mounted, if you log in via ssh, you must use the default password authentication the first time rather than relying on ~/.ssh/authorized_keys.

If you want to have separate volumes for each user, each encrypted with the user's password, try the linked method.

Swap volume

ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the getconf PAGESIZE command (the default on x86_64 is 4 KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.

Create an 8 GiB ZFS volume:

# zfs create -V 8G -b $(getconf PAGESIZE) -o compression=zle \
              -o logbias=throughput -o sync=always \
              -o primarycache=metadata -o secondarycache=none \
              -o com.sun:auto-snapshot=false <pool>/swap

Prepare it as swap partition:

# mkswap -f /dev/zvol/<pool>/swap
# swapon /dev/zvol/<pool>/swap

To make it permanent, edit /etc/fstab. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.

Add a line to /etc/fstab:

/dev/zvol/<pool>/swap none swap discard 0 0

Access Control Lists

To use ACL on a dataset:

# zfs set acltype=posixacl <nameofzpool>/<nameofdataset>
# zfs set xattr=sa <nameofzpool>/<nameofdataset>

Setting xattr is recommended for performance reasons [4].

It may be preferable to enable ACL on the zpool as datasets will inherit the ACL parameters. Setting aclinherit=passthrough may be wanted as the default mode is restricted [5]; however, it is worth noting that aclinherit does not affect POSIX ACLs [6]:

# zfs set aclinherit=passthrough <nameofzpool>
# zfs set acltype=posixacl <nameofzpool>
# zfs set xattr=sa <nameofzpool>

Databases

ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128 KiB, which means it will dynamically allocate blocks of any size from 512 B to 128 KiB depending on the size of the file being written. This can often help fragmentation and file access, at the cost that ZFS has to allocate new 128 KiB blocks each time only a few bytes are written.

Note: The factual accuracy of this section is disputed: at least MariaDB uses a default page size of 16 KiB. Check your specific DBMS before setting this value. (Discussed in Talk:ZFS)

Most RDBMSes work in 8 KiB-sized blocks by default. Although the block size is tunable for MySQL/MariaDB, PostgreSQL, and Oracle Database, all three of them use an 8 KiB block size by default. For both performance concerns and keeping snapshot differences to a minimum (which is helpful for backup purposes), it is usually desirable to tune ZFS to accommodate the databases, using a command such as:

# zfs set recordsize=8K <pool>/postgres

These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:

Note: L2ARC requires primarycache to function, because it is fed with data evicted from primarycache. If you intend to use the L2ARC, do not set the option below, otherwise no actual data will be cached on L2ARC.
# zfs set primarycache=metadata <pool>/postgres

If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits data twice to the data disks, which can severely impact performance. You can tell ZFS to prefer not to use the ZIL, in which case data is only committed to the file system once. However, doing so on non-solid-state storage (e.g. HDDs) can result in decreased read performance due to fragmentation (OpenZFS Wiki); with mechanical hard drives, please consider using a dedicated SSD as a SLOG rather than setting the option below. In addition, setting this for non-database file systems, or for pools with configured log devices, can also negatively impact the performance, so beware:

# zfs set logbias=throughput <pool>/postgres

These can also be done at file system creation time, for example:

# zfs create -o recordsize=8K \
             -o primarycache=metadata \
             -o mountpoint=/var/lib/postgres \
             -o logbias=throughput \
              <pool>/postgres

Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily hurt ZFS's performance by setting these on a general-purpose file system such as your /home directory.

/tmp

If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with fsync or O_SYNC) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does not affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.

# zfs set sync=disabled <pool>/tmp

Additionally, for security purposes, you may want to disable setuid and devices on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:

# zfs set setuid=off <pool>/tmp
# zfs set devices=off <pool>/tmp

Combining all of these for a create command would be as follows:

# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp

Please also note that if you want /tmp on ZFS, you will need to mask (disable) systemd's automatic tmpfs-backed /tmp (tmp.mount), else ZFS will be unable to mount your dataset at boot time or import time.
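A minimal sketch of masking the unit and mounting the dataset in its place:

# systemctl mask tmp.mount
# zfs mount <pool>/tmp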

Transmitting snapshots with ZFS Send and ZFS Recv

It is possible to pipe ZFS snapshots to an arbitrary target by pairing zfs send and zfs recv. This is done through standard output, which allows the data to be sent to any file, device, across the network, or manipulated mid-stream by incorporating additional programs in the pipe.

Below are examples of common scenarios:

Basic ZFS Send

First, let's create a snapshot of some ZFS filesystem:

# zfs snapshot zpool0/archive/books@snap

Now let's send the snapshot to a new location on a different zpool

# zfs send -v zpool0/archive/books@snap | zfs recv zpool4/library

The contents of zpool0/archive/books@snap are now live at zpool4/library

Tip: See man zfs-send and man zfs-recv for details on flags.
To and from files

First, let's create a snapshot of some ZFS filesystem:

# zfs snapshot zpool0/archive/books@snap

Write the snapshot to a gzip-compressed file:

# zfs send zpool0/archive/books@snap | gzip > /tmp/mybooks.gz
Warning: Make sure to run zfs send with the -w flag if you wish to preserve encryption during the send.

Now restore the snapshot from the file:

# zcat /tmp/mybooks.gz | zfs recv -F zpool0/archive/books

Send over ssh

First, let's create a snapshot of some ZFS filesystem:

# zfs snapshot zpool1/filestore@snap

Next we pipe our "send" traffic over an ssh session running "recv":

# zfs send -v zpool1/filestore@snap | ssh $HOST zfs recv coldstore/backups

The -v flag prints information about the datastream being generated. If you are using a passphrase or passkey, you will be prompted to enter it.

Incremental Backups

You may wish to update a previously sent ZFS filesystem without retransmitting all of the data over again. Alternatively, it may be necessary to keep a filesystem online during a lengthy transfer, and it is now time to send the writes that were made since the initial snapshot.

First, let's create a snapshot of some ZFS filesystem:

# zfs snapshot zpool1/filestore@initial

Next we pipe our "send" traffic over an ssh session running "recv":

# zfs send -v -R zpool1/filestore@initial | ssh $HOST zfs recv coldstore/backups

Once changes are written, make another snapshot:

# zfs snapshot zpool1/filestore@snap2

The following will send the differences that exist locally between zpool1/filestore@initial and zpool1/filestore@snap2 and create an additional snapshot for the remote filesystem coldstore/backups:

# zfs send -v -R -i zpool1/filestore@initial zpool1/filestore@snap2 | ssh $HOST zfs recv coldstore/backups

Now both zpool1/filestore and coldstore/backups have the @initial and @snap2 snapshots.

On the remote host, you may now roll back to the latest snapshot so that it becomes the active filesystem:

# zfs rollback coldstore/backups@snap2

Tuning

General

Many parameters are available to further tune ZFS pools and datasets.

Note: Apart from quotas and reservations, all settable properties inherit their values from the parent dataset.

To retrieve the current parameter state of a ZFS pool:

# zfs get all <pool>

To retrieve the parameter state of a specific dataset:

# zfs get all <pool>/<dataset>

To disable access time (atime), which is enabled by default:

# zfs set atime=off <pool>

To disable access time (atime) on a specific dataset:

# zfs set atime=off <pool>/<dataset>

Instead of turning atime off completely, you may use relatime. This brings the default ext4/XFS atime semantics to ZFS, where the access time is only updated if the modification time or change time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property only takes effect while atime is on:

# zfs set atime=on <pool>
# zfs set relatime=on <pool>

Compression is transparent compression of data. ZFS supports several different compression algorithms, with lz4 currently being the default. gzip is a good choice for data that is not written frequently and is highly compressible. See the OpenZFS Wiki for more information.

To enable compression:

# zfs set compression=on <pool>

To reset a property of a pool and/or dataset to its default state, use zfs inherit:

# zfs inherit -rS atime <pool>
# zfs inherit -rS atime <pool>/<dataset>
Note: Using the -r flag will recursively reset all the datasets of the pool.

Scrubbing

Whenever data is read and ZFS encounters an error, it is silently repaired when possible, rewritten back to disk and logged so you can obtain an overview of errors on your pools. There is no fsck or equivalent tool for ZFS. Instead, ZFS supports a feature known as scrubbing. This traverses through all the data in a pool and verifies that all blocks can be read.

To scrub a pool:

# zpool scrub <pool>

To cancel a running scrub:

# zpool scrub -s <pool>

How often should I do this?

From the Oracle blog post Disk Scrub - Why and When?:

This question is challenging for Support to answer, because as always the true answer is "It Depends". So before I offer a general guideline, here are a few tips to help you create an answer more tailored to your use pattern.
  • What is the expiration of your oldest backup? You should probably scrub your data at least as often as your oldest tapes expire so that you have a known-good restore point.
  • How often are you experiencing disk failures? While the recruitment of a hot-spare disk invokes a "resilver" -- a targeted scrub of just the VDEV which lost a disk -- you should probably scrub at least as often as you experience disk failures on average in your specific environment.
  • How often is the oldest piece of data on your disk read? You should scrub occasionally to prevent very old, very stale data from experiencing bit-rot and dying without you knowing it.
If any of your answers to the above are "I do not know", the general guideline is: you should probably be scrubbing your zpool at least once per month. It is a schedule that works well for most use cases, provides enough time for scrubs to complete before starting up again on all but the busiest & most heavily-loaded systems, and even on very large zpools (192+ disks) should complete fairly often between disk failures.

In the ZFS Administration Guide by Aaron Toponce, he advises to scrub consumer disks once a week.

Start with a service or timer

Note: Starting with OpenZFS 2.1.3, weekly and monthly systemd timers/services are included. To use them, enable/start zfs-scrub-weekly@pool-to-scrub.timer or zfs-scrub-monthly@pool-to-scrub.timer for the desired pool.

Using a systemd timer/service it is possible to automatically scrub pools.

To perform scrubbing monthly on a particular pool:

/etc/systemd/system/zfs-scrub@.timer
[Unit]
Description=Monthly zpool scrub on %i

[Timer]
OnCalendar=monthly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=multi-user.target
/etc/systemd/system/zfs-scrub@.service
[Unit]
Description=zpool scrub on %i

[Service]
Nice=19
IOSchedulingClass=idle
KillSignal=SIGINT
ExecStart=/usr/bin/zpool scrub %i

[Install]
WantedBy=multi-user.target

Enable/start zfs-scrub@pool-to-scrub.timer unit for monthly scrubbing the specified zpool.
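For example, for the bigdata pool used earlier on this page:

# systemctl enable --now zfs-scrub@bigdata.timer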

Enabling TRIM

To quickly query your vdevs' TRIM support, you can include trimming information in zpool status with -t.

$ zpool status -t tank
pool: tank
 state: ONLINE
  scan: none requested
 config:

	NAME                                     STATE     READ WRITE CKSUM
	tank                                     ONLINE       0     0     0
	  ata-ST31000524AS_5RP4SSNR-part1        ONLINE       0     0     0  (trim unsupported)
	  ata-CT480BX500SSD1_2134A59B933D-part1  ONLINE       0     0     0  (untrimmed)

errors: No known data errors

ZFS is capable of trimming supported vdevs either on-demand or periodically via the autotrim property.

Manually performing a TRIM operation on a zpool:

 # zpool trim <zpool>

Enabling periodic trimming on all supported vdevs in a pool:

 # zpool set autotrim=on <zpool>
Note: Because of how the automatic TRIM and a full zpool trim differ in their operation, it can make sense to run a manual trim occasionally.

To perform a full zpool trim monthly on a particular pool using a systemd timer/service:

/etc/systemd/system/zfs-trim@.timer
[Unit]
Description=Monthly zpool trim on %i

[Timer]
OnCalendar=monthly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=multi-user.target
/etc/systemd/system/zfs-trim@.service
[Unit]
Description=zpool trim on %i

[Service]
Nice=19
IOSchedulingClass=idle
KillSignal=SIGINT
ExecStart=/usr/bin/zpool trim %i

[Install]
WantedBy=multi-user.target

Enable/start zfs-trim@pool-to-trim.timer unit for monthly trimming of the specified zpool.

SSD Caching

You can add SSD devices as a write intent log (external ZIL or SLOG) and also as a layer 2 adaptive replacement cache (L2ARC). The process to add them is very similar to adding a new VDEV.

All of the below references to device-id are the IDs from /dev/disk/by-id/*.

SLOG

To add a mirrored SLOG:

 # zpool add <pool> log mirror <device-id-1> <device-id-2>

Or to add a single device SLOG (unsafe):

 # zpool add <pool> log <device-id>

Because the SLOG device stores data that has not been written to the pool, it is important to use devices that can finish writes when power is lost. It is also important to use redundancy, since a device failure can cause data loss. In addition, the SLOG is only used for sync writes, so may not provide any performance improvement.

L2ARC

To add L2ARC:

# zpool add <pool> cache <device-id>

L2ARC is only a read cache, so redundancy is unnecessary. Since ZFS version 2.0.0, L2ARC is persisted across reboots.[7]

L2ARC is generally only useful in workloads where the amount of hot data is bigger than system memory, but small enough to fit into L2ARC. The L2ARC is indexed by the ARC in system memory, consuming 70 bytes per record (default 128KiB). Thus, the equation for RAM usage is:

(L2ARC size) / (recordsize) * 70 bytes
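For example, a 256 GiB L2ARC indexed at the default 128 KiB recordsize costs about 256 GiB / 128 KiB × 70 bytes = 2,097,152 × 70 bytes ≈ 140 MiB of ARC, while the same L2ARC holding 8 KiB records would consume roughly 2.2 GiB.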

Because of this, L2ARC can, in certain workloads, harm performance as it takes memory away from ARC.

ZVOLs

ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default volblocksize for ZVOLs is already 8 KiB. If possible, it is best to align any partitions contained in a ZVOL to your volblocksize (current versions of fdisk and gdisk by default automatically align at 1 MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the volblocksize to accommodate the data inside the ZVOL as necessary (though 8 KiB tends to be a good value for most file systems, even when using 4 KiB blocks on that level).

RAIDZ and Advanced Format physical disks

Each block of a ZVOL gets its own parity disks, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the recordsize to 16k or 32k can help reduce this footprint drastically.

See OpenZFS issue #1807 for details.

I/O Scheduler

While ZFS is expected to work well with modern schedulers including deadline, mq-deadline, noop, and none, experimenting with manually setting the I/O scheduler on ZFS disks may yield performance gains.

Troubleshooting

Creating a zpool fails

If the following error occurs then it can be fixed.

# the kernel failed to rescan the partition table: 16
# cannot label 'sdc': try using parted(8) and then provide a specific slice: -1

One reason this can occur is because ZFS expects pool creation to take less than 1 second[8][9]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.

# parted /dev/sda rm 1
# parted /dev/sda rm 2
# dd if=/dev/zero of=/dev/sda bs=512 count=1
# zpool labelclear /dev/sda

A brute force creation can be attempted over and over again, and with some luck the zpool creation will take less than 1 second. One cause of creation slowdown can be slow burst reads/writes on a drive. By reading from the disk in parallel with the zpool creation, it may be possible to increase burst speeds.

# dd if=/dev/sda of=/dev/null

This can be done with multiple drives by saving the above command for each drive to a file on separate lines and running

# cat $FILE | parallel

Then run ZPool creation at the same time.

ZFS is using too much RAM

By default, ZFS caches file operations (ARC) using up to two-thirds of the available system memory on the host. To adjust the ARC size, add the following to the kernel parameters list:

zfs.zfs_arc_max=536870912 # (for 512MiB)

In case the default value of zfs_arc_min (1/32 of system memory) is higher than the specified zfs_arc_max, you also need to add the following to the kernel parameters list:

zfs.zfs_arc_min=268435456 # (for 256MiB, needs to be lower than zfs.zfs_arc_max)
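The same limits can also be adjusted at runtime through the module parameters under /sys; a sketch using the 512 MiB value from above:

# echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max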

For a more detailed description, as well as other configuration options, see Gentoo:ZFS#ARC.

Does not contain an EFI label

The following error will occur when attempting to create a zfs filesystem:

/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition

The way to overcome this is to use -f with the zpool create command.

No hostid found

An error that occurs at boot, with the following lines appearing before the initscript output:

ZFS: No hostid found on kernel command line or /etc/hostid.

This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. One is to place the spl hostid in the kernel parameters in the boot loader, for example by adding spl.spl_hostid=0x00bab10c.

The other solution is to make sure that there is a hostid in /etc/hostid and then regenerate the initramfs image, which will copy the hostid into it.
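A sketch of that second approach, using the zgenhostid(8) utility shipped with OpenZFS and regenerating the initramfs with mkinitcpio:

# zgenhostid $(hostid)
# mkinitcpio -P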

Pool cannot be found while booting from SAS/SCSI devices

In case you are booting a SAS/SCSI based system, you might occasionally get boot problems where the pool you are trying to boot from cannot be found. A likely reason for this is that your devices are initialized too late in the process, which means that ZFS cannot find any devices at the time it tries to assemble your pool.

In this case you should force the scsi driver to wait for devices to come online before continuing. You can do this by putting this into /etc/modprobe.d/zfs.conf:

/etc/modprobe.d/zfs.conf
options scsi_mod scan=sync

Afterwards, regenerate the initramfs.

This works because the zfs hook will copy the file at /etc/modprobe.d/zfs.conf into the initcpio which will then be used at build time.

On boot the zfs pool does not mount stating: "pool may be in use from other system"

Unexported pool

If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See #Emergency chroot repair with archzfs.

Once inside the chroot environment, load the ZFS module and force import the zpool,

# zpool import -a -f

now export the pool:

# zpool export <pool>

To see the available pools, use:

# zpool status

It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See Re: Howto zpool import/export automatically? - msg#00227.

If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then regenerate the initramfs in normally booted system.

Incorrect hostid

Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.

Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses ZFS. Manually tell ZFS the correct number; once the hostid is consistent across reboots, the zpool will mount correctly.

Boot using zfs_force and write down the hostid. The value below is just an example.

$ hostid
0a0af0f8

This number has to be added to the kernel parameters as spl.spl_hostid=0x0a0af0f8. Another solution is writing the hostid inside the initramfs image; see the installation guide explanation about this.

Users can always ignore the check by adding zfs_force=1 to the kernel parameters, but it is not advisable as a permanent solution.

Devices have different sector alignment

Once a drive has become faulted it should be replaced A.S.A.P. with an identical drive.

# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f

but in this instance, the following error is produced:

cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment

ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use ashift=12, but the faulted disk is using a different ashift (probably ashift=9) and this causes the resulting error.
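A quick sketch for checking the ashift values of the existing pool (this reads the cached pool configuration, so it assumes the pool appears in /etc/zfs/zpool.cache):

# zdb | grep ashift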

For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See OpenZFS FAQ: Performance Considerations and ZFS and Advanced Format disks.

Use zdb to find the ashift of the zpool, then use the -o argument to set the ashift of the replacement drive:

# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f

Check the zpool status for confirmation:

# zpool status -v
pool: bigdata
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Jun 16 11:16:28 2014
    10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go
    2.57G resilvered, 0.17% done
config:

        NAME                                   STATE     READ WRITE CKSUM
        bigdata                                DEGRADED     0     0     0
        raidz1-0                               DEGRADED     0     0     0
            replacing-0                        OFFLINE      0     0     0
            ata-ST3000DM001-9YN166_S1F0KDGY    OFFLINE      0     0     0
            ata-ST3000DM001-1CH166_W1F478BD    ONLINE       0     0     0  (resilvering)
            ata-ST3000DM001-9YN166_S1F0JKRR    ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KBP8    ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JTM1    ONLINE       0     0     0

errors: No known data errors

Pool resilvering stuck/restarting/slow?

According to the ZFSonLinux GitHub, this is a known issue since 2012 with ZFS-ZED, which causes the resilvering process to constantly restart, sometimes get stuck, and be generally slow for some hardware. The simplest mitigation is to stop zfs-zed.service until the resilver completes.

Fix slow boot caused by failed import of unavailable pools in the initramfs zpool.cache

Your boot time can be significantly impacted if you update your initramfs (e.g. when doing a kernel update) while you have additional but non-permanently attached pools imported, because these pools will get added to your initramfs zpool.cache and ZFS will attempt to import these extra pools on every boot, regardless of whether you have exported them and removed them from your regular zpool.cache.

If you notice ZFS trying to import unavailable pools at boot, first run:

$ zdb -C

To check your zpool.cache for pools you do not want imported at boot. If this command is showing (a) additional, currently unavailable pool(s), run:

# zpool set cachefile=/etc/zfs/zpool.cache zroot

To clear the zpool.cache of any pools other than the pool named zroot. Sometimes there is no need to refresh your zpool.cache, but instead all you need to do is regenerate the initramfs.

Tips and tricks

Create an Archiso image with ZFS support

Follow the Archiso steps for creating a fully functional Arch Linux live CD/DVD/USB image. To include ZFS support in the image, you can either build your choice of PKGBUILDs from the AUR or include prebuilt packages from one of the unofficial user repositories.

Using self-built ZFS packages from the AUR

Build the ZFS packages you want by following the normal procedures. If you are unsure, zfs-dkmsAUR and zfs-utilsAUR are likely to be compatible with the widest range of other modifications to the Archiso image you may wish to perform. Proceed to set up a custom local repository. Include the resulting repository in the Pacman configuration of your new profile.

Include the built packages in the list of packages to be installed. The example below presumes you want to include only the zfs-dkmsAUR and zfs-utilsAUR packages.

packages.x86_64
...
zfs-dkms
zfs-utils

If you include any DKMS packages, make sure you also include headers for any kernels you are including in the ISO (linux-headers for the default kernel).

Using the archzfs unofficial user repository

Add the archzfs unofficial user repository to pacman.conf in your new Archiso profile.

Add the archzfs-linux group to the list of packages to be installed (the archzfs repository provides packages for the x86_64 architecture only).

packages.x86_64
...
archzfs-linux
Note: If you later have problems running modprobe zfs, you should include linux-headers in packages.x86_64.

Finishing up

Regardless of where you source your ZFS packages from, you should finish by building the ISO.

Automatic snapshots

zrepl

The zreplAUR package from the AUR provides a ZFS automatic replication service, which could also be used as a snapshotting service much like snapper.

For details on how to configure the zrepl daemon, see the zrepl documentation. The configuration file should be located at /etc/zrepl/zrepl.yml. Then, run zrepl configcheck to make sure that the syntax of the config file is correct. Finally, enable zrepl.service.

sanoid

sanoidAUR is a policy-driven tool for taking snapshots. Sanoid also includes syncoid, which is for replicating snapshots. It comes with systemd services and a timer.

Sanoid only prunes snapshots on the local system. To prune snapshots on the remote system, run sanoid there as well with prune options. Either use the --prune-snapshots command line option or use the --cron command line option together with the autoprune = yes and autosnap = no configuration options.

ZFS Automatic Snapshot Service for Linux

Note: zfs-auto-snapshot-gitAUR has not seen any updates since 2019, and its functionality is extremely limited. You are advised to switch to a newer tool like zreplAUR.

The zfs-auto-snapshot-gitAUR package from AUR provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).

To prevent a dataset from being snapshotted at all, set com.sun:auto-snapshot=false on it. Likewise, finer-grained control is possible per label: for example, to keep no monthly snapshots of a dataset, set com.sun:auto-snapshot:monthly=false.
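For example:

# zfs set com.sun:auto-snapshot=false <pool>/<dataset>
# zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset>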

Note: zfs-auto-snapshot-git will not create snapshots during scrubbing. It is possible to override this by editing the provided systemd unit and removing --skip-scrub from the ExecStart line. The consequences of doing so are not known, someone please edit.

Once the package has been installed, enable and start the selected timers (zfs-auto-snapshot-{frequent,daily,weekly,monthly}.timer).

Creating a share

ZFS has support for creating shares by NFS or SMB.

NFS

Make sure NFS has been installed/configured; note that there is no need to edit the /etc/exports file. For sharing over NFS, the services nfs-server.service and zfs-share.service should be started.

To make a pool available on the network:

# zfs set sharenfs=on nameofzpool

To make a dataset available on the network:

# zfs set sharenfs=on nameofzpool/nameofdataset

To enable read/write access for specific IP range(s):

# zfs set sharenfs="rw=@192.168.1.100/24,rw=@10.0.0.0/24" nameofzpool/nameofdataset

To check if the dataset is exported successfully:

# showmount -e `hostname`
Export list for hostname:
/path/of/dataset 192.168.1.100/24

To view the current loaded exports state in more detail, use:

# exportfs -v
/path/of/dataset
    192.168.1.100/24(sync,wdelay,hide,no_subtree_check,mountpoint,sec=sys,rw,secure,no_root_squash,no_all_squash)

To view the current NFS share list by ZFS:

# zfs get sharenfs

SMB

When sharing through SMB, using usershares in /etc/samba/smb.conf will allow ZFS to setup and create the shares. See Samba#Enable Usershares for details.

/etc/samba/smb.conf
[global]
    usershare path = /var/lib/samba/usershares
    usershare max shares = 100
    usershare allow guests = yes
    usershare owner only = no

Create and set permissions on the user directory as root:

# mkdir /var/lib/samba/usershares
# chmod +t /var/lib/samba/usershares

To make a pool available on the network:

# zfs set sharesmb=on nameofzpool

To make a dataset available on the network:

# zfs set sharesmb=on nameofzpool/nameofdataset

To check if the dataset is exported successfully:

# smbclient -L localhost -U%
        Sharename       Type      Comment
        ---------       ----      -------
        IPC$            IPC       IPC Service (SMB Server Name)
        nameofzpool_nameofdataset        Disk      Comment: path/of/dataset
SMB1 disabled -- no workgroup available

To view the current SMB share list by ZFS:

# zfs get sharesmb

Encryption in ZFS using dm-crypt

Before OpenZFS version 0.8.0, ZFS did not support encryption directly (See #Native encryption). Instead, zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.

dm-crypt, possibly via LUKS, creates devices in /dev/mapper and their names are fixed, so you just need to change the zpool create commands to point to those names. The idea is to configure the system to create the /dev/mapper block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.

For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:

# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \
             --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc
# zpool create zroot /dev/mapper/enc

In the case of a root filesystem pool, the mkinitcpio.conf HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:

HOOKS="... keyboard encrypt zfs ..."

Since the /dev/mapper/enc name is fixed no import errors will occur.

Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.

ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, which makes compression ineffective, and even the same input produces different output (thanks to salting), which makes deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use eCryptfs on it.

For example to have an encrypted home: (the two passwords, encryption and login, must be the same)

# zfs create -o compression=off -o dedup=off -o mountpoint=/home/<username> <zpool>/<username>
# useradd -m <username>
# passwd <username>
# ecryptfs-migrate-home -u <username>
<log in user and complete the procedure with ecryptfs-unwrap-passphrase>

Emergency chroot repair with archzfs

To get into the ZFS filesystem from live system for maintenance, there are two options:

  1. Build custom archiso with ZFS as described in #Create an Archiso image with ZFS support.
  2. Boot the latest official archiso and bring up the network. Then enable archzfs repository inside the live system as usual, sync the pacman package database and install the archzfs-archiso-linux package.

To start the recovery, load the ZFS kernel modules:

# modprobe zfs

Import the pool:

# zpool import -a -R /mnt

Mount the boot partitions (if any):

# mount /dev/sda2 /mnt/boot
# mount /dev/sda1 /mnt/boot/efi

Chroot into the ZFS filesystem:

# arch-chroot /mnt /bin/bash

Check the kernel version:

# pacman -Qi linux
# uname -r

uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:

# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux but using the matching kernel modules directory name under the chroot's /lib/modules)

This will load the correct kernel modules for the kernel version installed in the chroot installation.

Regenerate the initramfs. There should be no errors.

Bind mount

Here a bind mount from /mnt/zfspool to /srv/nfs4/music is created. The configuration ensures that the zfs pool is ready before the bind mount is created.

fstab

See systemd.mount for more information on how systemd converts fstab into mount unit files with systemd-fstab-generator.

/etc/fstab
/mnt/zfspool		/srv/nfs4/music		none	bind,defaults,nofail,x-systemd.requires=zfs-mount.service	0 0

Monitoring / Mailing on Events

See ZED: The ZFS Event Daemon for more information.

An email forwarder, such as S-nail, is required to accomplish this. Test it to be sure it is working correctly.

Uncomment the following in the configuration file:

/etc/zfs/zed.d/zed.rc
 ZED_EMAIL_ADDR="root"
 ZED_EMAIL_PROG="mailx"
 ZED_NOTIFY_VERBOSE=0
 ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"

Update 'root' in ZED_EMAIL_ADDR="root" to the email address you want to receive notifications at.

If you are keeping your mailrc in your home directory, you can tell mail to get it from there by setting MAILRC:

/etc/zfs/zed.d/zed.rc
export MAILRC=/home/<user>/.mailrc

This works because ZED sources this file, so mailx sees this environment variable.

If you want to receive an email no matter the state of your pool, you will want to set ZED_NOTIFY_VERBOSE=1. You will need to do this temporarily to test.

Start and enable zfs-zed.service.

With ZED_NOTIFY_VERBOSE=1, you can test by running a scrub as root: zpool scrub <pool-name>.

Wrap shell commands in pre & post snapshots

Since it is so cheap to make a snapshot, we can use this as a measure of security for sensitive commands such as system and package upgrades. If we make a snapshot before, and one after, we can later diff these snapshots to find out what changed on the filesystem after the command executed. Furthermore we can also rollback in case the outcome was not desired.

znp

E.g.:

# zfs snapshot -r zroot@pre
# pacman -Syu
# zfs snapshot -r zroot@post
# zfs diff zroot@pre zroot@post 
# zfs rollback zroot@pre

A utility that automates the creation of pre and post snapshots around a shell command is znp.

E.g.:

# znp pacman -Syu
# znp find / -name "something*" -delete

and you would get snapshots created before and after the supplied command, and also output of the commands logged to file for future reference so we know what command created the diff seen in a pair of pre/post snapshots.

Remote unlocking of ZFS encrypted root

As of PR #261, archzfs supports SSH unlocking of natively-encrypted ZFS datasets. This section describes how to use this feature, and is largely based on dm-crypt/Specialties#Remote unlocking (hooks: netconf, dropbear, tinyssh, ppp).

  1. Install mkinitcpio-netconf to provide hooks for setting up early user space networking.
  2. Choose an SSH server to use in early user space. The options are mkinitcpio-tinyssh or mkinitcpio-dropbear, and are mutually exclusive.
    1. If using mkinitcpio-tinyssh, it is also recommended to install tinyssh or tinyssh-convert-gitAUR. This tool converts an existing OpenSSH hostkey to the TinySSH key format, preserving the key fingerprint and avoiding connection warnings. The TinySSH and Dropbear mkinitcpio install scripts will automatically convert existing hostkeys when generating a new initcpio image.
  3. Decide whether to use an existing OpenSSH key or generate a new one (recommended) for the host that will be connecting to and unlocking the encrypted ZFS machine. Copy the public key into /etc/tinyssh/root_key or /etc/dropbear/root_key. When generating the initcpio image, this file will be added to authorized_keys for the root user and is only valid in the initrd environment.
  4. Add the ip= kernel parameter to your boot loader configuration. The ip string is highly configurable. A simple DHCP example is shown below.
    ip=:::::eth0:dhcp
  5. Edit /etc/mkinitcpio.conf to include the netconf, dropbear or tinyssh, and zfsencryptssh hooks before the zfs hook:
    HOOKS=(... netconf <tinyssh>|<dropbear> zfsencryptssh zfs ...)
  6. Regenerate the initramfs.
  7. Reboot and try it out!

Changing the SSH server port

By default, mkinitcpio-tinyssh and mkinitcpio-dropbear listen on port 22. You may wish to change this.

For TinySSH, copy /usr/lib/initcpio/hooks/tinyssh to /etc/initcpio/hooks/tinyssh, and find/modify the following line in the run_hook() function:

/etc/initcpio/hooks/tinyssh
/usr/bin/tcpserver -HRDl0 0.0.0.0 <new_port> /usr/sbin/tinysshd -v /etc/tinyssh/sshkeydir &

For Dropbear, copy /usr/lib/initcpio/hooks/dropbear to /etc/initcpio/hooks/dropbear, and find/modify the following line in the run_hook() function:

/etc/initcpio/hooks/dropbear
 /usr/sbin/dropbear -E -s -j -k -p <new_port>

Regenerate the initramfs.

Unlocking from a Windows machine using PuTTY/Plink

First, we need to use puttygen.exe to import and convert the OpenSSH key generated earlier into PuTTY's .ppk private key format. Let us call it zfs_unlock.ppk for this example.

The mkinitcpio-netconf process above does not set up a shell (nor do we need one). However, because there is no shell, PuTTY will immediately close after a successful connection. This can be disabled in the PuTTY SSH configuration (Connection > SSH > [X] Do not start a shell or command at all), but it still does not allow us to see stdout or enter the encryption passphrase. Instead, we use plink.exe with the following parameters:

plink.exe -ssh -l root -i c:\path\to\zfs_unlock.ppk <hostname>

The plink command can be put into a batch script for ease of use.

See also