Too Long; Didn't Read

- collection of technical matters compressed down to their essentials -

version: 25J13a
 tldr@c6y.eu

August 4, 2024

I created an encrypted ZPool with "zpool create -O encryption=on -O keyformat=passphrase -O atime=off -O xattr=sa -O compression=lz4 pool /dev/sda". After exporting it, I am not able to import it again with "zpool import -l /dev/sda" or "zpool import -l pool". Why is this not working?

To get access to an encrypted ZPool created on a whole disk, import from /dev/sda1 instead of /dev/sda. When given the entire device, ZFS partitions it into /dev/sda1 (the data partition) and /dev/sda9 (a small reserved partition). Although "zpool import -s" shows the ZPool, it cannot be imported, neither with "zpool import -l pool" nor with "zpool import -l /dev/sda". But pointing the import at /dev/sda1 works:

zpool import -d /dev/sda1 -l pool
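
To see the partitions ZFS created, lsblk helps (this check is an addition to the original note):

lsblk -o NAME,SIZE,FSTYPE /dev/sda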

Usually, though, it is sufficient to attach the block device, check with "zpool list" that the pool is imported, and then run "zfs mount". Be sure to load the encryption key first with:

zfs load-key -a
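
The complete sequence, as a minimal sketch assuming the pool is named "pool" and sits on /dev/sda1:

zpool import -d /dev/sda1 pool   # find and import the pool from the partition
zfs load-key -a                  # prompts for the passphrase
zfs mount -a                     # mount all datasets of the pool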
November 30, 2024

How do I create an SMB share with Samba that can be used for Apple's TimeMachine?

Use the following parameters in /etc/samba/smb.conf to enable an SMB share (called "tm" in the example below) for TimeMachine use:

[tm]
# the directory must be writable by the user below
path = /path/to/directory/owned/by/nobody/
valid users = nobody
browseable = yes
read only = no
spotlight = yes
# fruit provides the macOS compatibility layer; catia maps characters that
# are illegal on SMB, streams_xattr stores resource forks as xattrs
vfs objects = catia fruit streams_xattr
fruit:time machine = yes
Adjust the path and the user as needed.
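
To verify the result, the standard Samba tools help (this check is an addition to the original note; the service name may differ per distribution):

testparm -s                  # parse smb.conf and report errors
systemctl restart smbd       # activate the new share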
December 11, 2024

How to determine optimal compression levels when using ZFS or BTRFS with compression?

Determine the compression speed for various compression levels by compressing random data, which is mostly incompressible. This yields the worst-case compression effort. Testing the worst case can be done like this:

dd if=/dev/urandom bs=1048576 count=128 | zstd -1 >/dev/null

This reads 128 MiB (128 × 1,048,576 bytes) of random data and compresses it with zstd at level 1 (the -1 parameter). dd shows the throughput at the end. Example:

# dd if=/dev/urandom bs=1048576 count=128 | zstd -1 >/dev/null
128+0 records in
128+0 records out
134217728 bytes (134 MB, 128 MiB) copied, 3.34504 s, 40.1 MB/s

Repeat this for higher compression levels until dd reports a speed below that of your underlying device. To determine the speed of the underlying device, do something like this:

dd if=/dev/loop0 of=/dev/null bs=1048576 count=512

This reads 512 MiB from the device /dev/loop0 and then shows the average speed. Example:

# dd if=/dev/loop0 bs=1048576 count=512 of=/dev/null
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 3.78592 s, 142 MB/s
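
Repeated reads from the same device may be served from the page cache, which inflates the result. GNU dd can bypass the cache with direct I/O (this refinement is an addition to the original note):

dd if=/dev/loop0 of=/dev/null bs=1048576 count=512 iflag=direct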

When the compression speed falls below the speed of the underlying device, reduce the compression level. That way you use the maximum compression that does not reduce I/O speed.

Make sure to determine the compression speed while your system is doing its typical work. Otherwise you might choose a compression level that is too high and ends up reducing I/O speed, because compression takes longer when there is processing load on the system.

The examples above show that even the fastest compression is slower than the disk I/O. This is because the tests were performed while the system was under extreme load; in such a case it is advisable to use the fastest compression.

Sometimes zstd compression level 2 or 3 is faster than level 1, so make sure to measure all compression levels.
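
A small shell loop, as a sketch building on the commands above (not part of the original note), measures several levels in one run; dd prints the throughput for each level:

for level in 1 2 3 4 5 6 7 8 9; do
    echo "zstd level $level:"
    dd if=/dev/urandom bs=1048576 count=128 | zstd -"$level" >/dev/null
done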

April 8, 2025

When to use BTRFS and when ZFS?

Both filesystems have advantages and disadvantages. Take the following into consideration when selecting which one to use:

category                        BTRFS                                 ZFS
compression speed               uses all available cores              uses all but one core
frees blocks on the underlying  yes                                   no
filesystem (only relevant when
using files as backend)
encryption                      separate encryption backend needed    built-in
supports LZ4                    no                                    yes
copy contents to another host   btrfs send/receive copies diffs       zfs send/receive copies diffs
                                between snapshots                     between snapshots (sketch below)

Simple rule for home use: If you do not need encryption, use BTRFS, otherwise use ZFS.
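
Both send/receive mechanisms transfer only the difference between two snapshots. A minimal sketch for the ZFS side (pool, dataset, and host names are placeholders; the receiving dataset must already hold the older snapshot):

zfs snapshot pool/data@monday
# ... the dataset changes ...
zfs snapshot pool/data@tuesday
# ship only the blocks that changed between the two snapshots
zfs send -i pool/data@monday pool/data@tuesday | ssh otherhost zfs receive tank/data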

May 22, 2025

How do I avoid recurring retransmission of files to OneDrive when using RClone?

Use the following parameters for RClone so that already transmitted files are not retransmitted (presumably the default file name encoding makes some names differ between the local and the remote side, so the affected files look new on every run):

rclone sync -vv --track-renames --onedrive-encoding None /from/here to:/there/
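
A preview run with --dry-run (a standard RClone flag; this step is an addition to the original note) shows what would be transferred without changing anything:

rclone sync -vv --dry-run --track-renames --onedrive-encoding None /from/here to:/there/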
October 13, 2025

How to get around the incompatibility of RClone with iCloud?

RClone stopped working with iCloud sometime in 2025, and there seems to be no fix: version 1.71.1 fails, and falling back to version 1.69.0 does not help either. However, iCloud Drive can be reached via a local directory:
/Users/*USERNAME*/Library/Mobile Documents/com~apple~CloudDocs/
Setting up a remote of type "local" and using it as the backing remote of another remote makes the sync work again.

Example:

This is an excerpt from an rclone.conf file that works:

[ilocal]
type = local

[icrypt]
type = crypt
remote = ilocal:/Users/*USERNAME*/Library/Mobile Documents/com~apple~CloudDocs/crypt
filename_encryption = off
directory_name_encryption = false
password = *VERYLONGPASSWORD*
password2 = *VERYLONGPASSWORD*

Make sure to replace all parameters enclosed in a pair of stars. Note that rclone expects the password entries in its obscured format, so create them with "rclone config" or "rclone obscure" instead of pasting plain text.
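
With this configuration in place, a sync into iCloud Drive might look like this (the source path is a placeholder):

rclone sync -vv /from/here icrypt:/there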