I've reached the stage of attaching it over the network.
ZFS:
zpool create -o ashift=12 -O compression=lz4 -O atime=off -O recordsize=64k nvme /dev/nvme0n1 /dev/nvme1n1 /dev/nvme3n1 -f
zfs create -s -V 2.7T -o volblocksize=64k -o compression=lz4 nvme/iser
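In case it helps anyone reproduce this, the pool and zvol properties can be double-checked after creation (a sketch, assuming the pool/zvol names from above; property names as in OpenZFS):

```shell
# Verify the pool was created with the intended ashift.
zpool get ashift nvme

# Verify the dataset-level settings applied at pool creation.
zfs get compression,atime,recordsize nvme

# Verify the zvol; a sparse (-s) volume should show refreservation=none.
zfs get volsize,volblocksize,compression,refreservation nvme/iser

# The zvol appears as a block device here:
ls -l /dev/zvol/nvme/iser
```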
fio -name=rndw4k16 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 --runtime=100 --time_based=1 -numjobs=4 -bs=64k -iodepth=32 -rw=randwrite -filename=/dev/mapper/mpat
WRITE: bw=916MiB/s (960MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=54.1GiB (58.0GB), run=60460-60460msec
mdadm:
fio -name=rndw4k16 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 --runtime=100 --time_based=1 -numjobs=1 -bs=64k -iodepth=32 -rw=randwrite -filename=/dev/mapper/mpathd
WRITE: bw=1748MiB/s (1833MB/s), 1748MiB/s-1748MiB/s (1833MB/s-1833MB/s), io=121GiB (130GB), run=71062-71062msec
Yes, I like iSER better than plain iSCSI.
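For reference, a rough sketch of how a zvol like this can be exported over iSER with LIO/targetcli. The IQN and portal address below are placeholders, not the setup actually used here; `enable_iser` requires an RDMA-capable NIC and iSER support in the kernel target:

```shell
# Sketch: export the zvol through LIO with iSER enabled.
# IQN and IP are placeholders - substitute your own.

# Block backstore on top of the zvol:
targetcli /backstores/block create name=iser0 dev=/dev/zvol/nvme/iser

# iSCSI target (targetcli normally auto-creates tpg1 with a 0.0.0.0:3260 portal):
targetcli /iscsi create iqn.2003-01.org.linux-iscsi.storage:iser0

# Map the backstore as a LUN:
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.storage:iser0/tpg1/luns \
    create /backstores/block/iser0

# Switch the portal from plain iSCSI to iSER:
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.storage:iser0/tpg1/portals/0.0.0.0:3260 \
    enable_iser boolean=true

targetcli saveconfig
```

On the initiator side the connection is then made with `iscsiadm ... --op update -n iface.transport_name -v iser` (or a dedicated iser iface) instead of the default tcp transport.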
mdadm - 40G (single link) - fio iodepth=32, bs=1M & 64k
___________
fio -name=rndw4k16 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 --runtime=10 --time_based=1 -numjobs=1 -bs=1M -iodepth=32 -rw=randread -filename=/dev/sdb
randREAD: bw=4617MiB/s (4841MB/s), 4617MiB/s-4617MiB/s (4841MB/s-4841MB/s), io=45.1GiB (48.4GB), run=10007-10007msec
___________
fio -name=rndw4k16 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 --runtime=10 --time_based=1 -numjobs=1 -bs=1M -iodepth=32 -rw=randwrite -filename=/dev/sdb
randWRITE: bw=4452MiB/s (4669MB/s), 4452MiB/s-4452MiB/s (4669MB/s-4669MB/s), io=43.5GiB (46.7GB), run=10008-10008msec
___________
fio -name=rndw4k16 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 --runtime=10 --time_based=1 -numjobs=1 -bs=64k -iodepth=32 -rw=randread -filename=/dev/sdb
randREAD: bw=2826MiB/s (2963MB/s), 2826MiB/s-2826MiB/s (2963MB/s-2963MB/s), io=27.6GiB (29.6GB), run=10001-10001msec
___________
fio -name=rndw4k16 -ioengine=libaio -direct=1 -buffered=0 -invalidate=1 --runtime=10 --time_based=1 -numjobs=1 -bs=64k -iodepth=32 -rw=randwrite -filename=/dev/sdb
randWRITE: bw=2526MiB/s (2649MB/s), 2526MiB/s-2526MiB/s (2649MB/s-2649MB/s), io=24.7GiB (26.5GB), run=10001-10001msec
___________
Now I'm waiting for the io_direct patch for ZFS even more.
However, multipath doesn't give significantly better results, even with round-robin & rr_min_io=1 (512k blocks):
one path - READ: bw=4637MiB/s (4862MB/s), 4637MiB/s-4637MiB/s (4862MB/s-4862MB/s), io=45.3GiB (48.6GB), run=10004-10004msec
two paths - READ: bw=5117MiB/s (5366MB/s), 5117MiB/s-5117MiB/s (5366MB/s-5366MB/s), io=49.0GiB (53.7GB), run=10004-10004msec
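For completeness, the multipath policy described above corresponds roughly to this /etc/multipath.conf fragment (a sketch, assuming an LIO target so vendor "LIO-ORG"; on kernels using blk-mq, `rr_min_io_rq` is the knob that actually takes effect rather than the legacy `rr_min_io`):

```
devices {
    device {
        vendor "LIO-ORG"
        product ".*"
        # Put all paths in one group so I/O is spread across both links:
        path_grouping_policy multibus
        # Switch paths on every request:
        path_selector "round-robin 0"
        rr_min_io 1
        rr_min_io_rq 1
    }
}
```

With a single 40G link already close to saturation at ~4.6GiB/s, the modest gain from a second path suggests the bottleneck has moved off the wire (CPU, interrupt handling, or the backing devices) rather than a multipath misconfiguration.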