The basis of this was taken from the tutorial here. I noticed a typo in the example, which has been fixed. By adding some SSDs to your ZFS pool you can greatly increase the speed of your drives. The L2ARC drives will automatically cache frequently read data, while the ZIL drives log your synchronous writes. It's a good idea to mirror the ZIL so that a single drive failure can't cause data loss.
The following commands partition the SSDs with an 8G partition to be used for the ZIL, and the remaining drive space to be used for L2ARC:
gpart create -s gpt mfid10
gpart create -s gpt mfid11
gpart add -t freebsd-zfs -b 2048 -a 4k -l log0 -s 8G mfid10
gpart add -t freebsd-zfs -b 2048 -a 4k -l log1 -s 8G mfid11
gpart add -t freebsd-zfs -a 4k -l cache0 mfid10
gpart add -t freebsd-zfs -a 4k -l cache1 mfid11
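Before adding the partitions to the pool, it's worth confirming the layout and labels took effect. Something like the following should show the 8G log partition and the larger cache partition on each SSD (device names here match the example above; yours will differ):

# Show the partition table and GPT labels on each SSD.
# The -l flag prints the labels (log0/cache0 etc.) instead of raw partition names.
gpart show -l mfid10
gpart show -l mfid11

If the labels look right, the partitions will also appear under /dev/gpt/, which is what the zpool commands below reference.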
Your devices will probably be different. My server had a RAID card that didn't support pass-through mode, so I added each drive as its own single-disk RAID0 volume, which still lets ZFS use it.
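On FreeBSD, cards handled by the mfi(4) driver can be managed with mfiutil. As a sketch, creating a single-drive RAID0 volume might look like this (the drive number 10 is an assumption; check mfiutil show drives for yours):

# List physical drives attached to the controller to find the drive number.
mfiutil show drives

# Create a single-disk RAID0 volume from drive 10 so ZFS can see it.
mfiutil create raid0 10

Your controller may use a different utility (e.g. mrsasutil or the vendor's tool), so treat this as illustrative rather than exact.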
Now add the drives to the pool, mirroring the ZIL and striping the cache across both SSDs:
zpool add zroot log mirror gpt/log0 gpt/log1
zpool add zroot cache gpt/cache0 gpt/cache1
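Once the log and cache vdevs are attached, you can watch them doing work. zpool iostat with -v breaks out per-vdev statistics, so you can see allocations and I/O hitting the new SSDs:

# Per-vdev I/O statistics, refreshed every 5 seconds.
# The logs and cache sections should show activity under write/read load.
zpool iostat -v zroot 5

The cache devices start empty and warm up over time as reads come in, so don't expect immediate allocation on the L2ARC partitions.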
My new zpool setup:
root@backup1:~/.ssh # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        zroot           ONLINE       0     0     0
          raidz2-0      ONLINE       0     0     0
            mfid0p3     ONLINE       0     0     0
            mfid1p3     ONLINE       0     0     0
            mfid2p3     ONLINE       0     0     0
            mfid3p3     ONLINE       0     0     0
            mfid4p3     ONLINE       0     0     0
            mfid5p3     ONLINE       0     0     0
            mfid6p3     ONLINE       0     0     0
            mfid7p3     ONLINE       0     0     0
            mfid8p3     ONLINE       0     0     0
            mfid9p3     ONLINE       0     0     0
        logs
          mirror-1      ONLINE       0     0     0
            gpt/log0    ONLINE       0     0     0
            gpt/log1    ONLINE       0     0     0
        cache
          gpt/cache0    ONLINE       0     0     0
          gpt/cache1    ONLINE       0     0     0

errors: No known data errors
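To see whether the L2ARC is actually earning its keep, FreeBSD exposes the ARC statistics via sysctl. A quick way to gauge the cache hit rate after the pool has been running for a while:

# L2ARC hit/miss counters from the ZFS kstat sysctls.
sysctl kstat.zfs.misc.arcstats.l2_hits
sysctl kstat.zfs.misc.arcstats.l2_misses

# How much data the L2ARC currently holds.
sysctl kstat.zfs.misc.arcstats.l2_size

A high miss count right after adding the cache is normal; the ratio should improve as the working set gets cached.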