I'm running a server with a raidz pool of 4 SAS drives plus a single NVMe SSD (SAMSUNG MZVLB512HAJQ). The NVMe drive is split into 3 partitions:
# zpool status
  scan: none requested

        NAME        STATE     READ WRITE CKSUM
        nvme        ONLINE       0     0     0
        ...
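For completeness, here is a sketch of how one can look at the full layout; nvd0 is an assumed FreeBSD device name for the NVMe drive, so substitute whatever your system actually uses:

    # List per-vdev capacity and layout of every pool
    zpool list -v
    # Show the partition table of the NVMe drive; the device name
    # "nvd0" is an assumption, adjust it to your actual device
    gpart show nvd0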
The actual capacity of my filesystem is smaller than what I expected/calculated, and I'd like to better understand why. I suspect this may have something to do with some facet of ZFS like ashift or recordsize that I'm forgetting to account for. Please note that I do not believe this is related...
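In case it helps, this is how I've been inspecting the two settings I suspect; the pool name tank is just a placeholder. If I understand raidz allocation correctly, with raidz1 over 4 disks at ashift=12, a 128 KiB record needs 32 data sectors plus ceil(32/3) = 11 parity sectors = 43 sectors, padded up to a multiple of nparity+1 = 2, i.e. 44 sectors. The effective data fraction is then 32/44, roughly 72.7% instead of the nominal 75%, which would already account for some of the missing space.

    # Print the cached pool config; ashift is listed per vdev
    # ("tank" is a placeholder pool name)
    zdb -C tank | grep ashift
    # Show the dataset recordsize (default 128K)
    zfs get recordsize tank
    # Compare raw pool size with usable space after parity and reservations
    zpool list tank
    zfs list tank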
After moving from 10.1 to 10.3 a couple of days ago, I noticed the following oddities when looking at zpool status:
Spare drives no longer show their labels; instead they appear as a diskid entry containing their serial number:
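For reference, here is a sketch of how one can check whether GEOM's disk_ident labels are behind the renaming; the sysctl itself is standard FreeBSD, but that it explains the behavior after this particular upgrade is my assumption:

    # Check whether diskid labels are enabled (1 = enabled)
    sysctl kern.geom.label.disk_ident.enable
    # If so, disabling them at boot brings back the original names;
    # whether that is appropriate for your setup is an assumption
    echo 'kern.geom.label.disk_ident.enable="0"' >> /boot/loader.conf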