[zfs] Multiple ashift values for ZIL on SSD

I decided to test the "ZIL on SSD" thing. I placed a 2GB partition on my GPT-formatted (gpart -a 4k) 60GB Corsair Force 3 SSD, ran these commands, then rebooted:

# gnop create -S 4K ada1p9
# zpool add mypool log ada1p9.nop

My root is on ZFS, so the zpool add above results in an error:
Code:
cannot add to 'mypool': root pool can not have multiple vdevs or separate logs
The work-around is to clear bootfs before adding the log device:
# zpool set bootfs="" mypool
After the log device has been added, you can restore the value:
# zpool set bootfs=mypool mypool
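
So the full sequence ends up being (ada1p9 from my setup; the 4K gnop device is what makes the log vdev come up with ashift=12):
Code:
# zpool set bootfs="" mypool
# gnop create -S 4K ada1p9
# zpool add mypool log ada1p9.nop
# zpool set bootfs=mypool mypool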

The system comes back up, but I get a strange ashift result:
# zdb mypool | grep ashift
Code:
ashift: 9
ashift: 12
ashift: 9
ashift: 12
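
To figure out which ashift belongs to which vdev, dumping the cached pool configuration should list each child vdev with its path and its ashift (going by the man page here, I have not dug through the output yet):
Code:
# zdb -C mypool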

Details:
1. I only have 1 zpool
2. I also tried the whole procedure after booting from a USB installation of my kernel/world, with a zpool export mypool before the reboot - the result was the same.
3. The zdb command stays busy after printing the output above and I have to <ctrl-c> out of it.
4. I know that the log should be given the entire device, but that is only so the SSD and its on-board controller do not get drowned by the I/O stream. In my case that is not an issue, so a partitioned log is usable (the partition itself was created roughly as sketched below).
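
For reference, the log partition was created along these lines (a sketch; the -i 9 index matches ada1p9 above, but the exact invocation is illustrative, not my literal shell history):
Code:
# gpart add -t freebsd-zfs -a 4k -s 2G -i 9 ada1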

What do you think the multiple ashifts mean? I assume it's because of my little "workaround".
 
Good call. My zdb output appears to have a number of corrupt entries (my fault for not properly destroying some test zpools).
Unfortunately, the zdb manual talks more about "status reporting" than about "error correction". Back to the books...
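
One thing I may try (assuming the stale entries live in the cache file, /boot/zfs/zpool.cache being the FreeBSD default location): move the old cache aside, regenerate it, and re-check the ashift output:
Code:
# mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak
# zpool set cachefile=/boot/zfs/zpool.cache mypool
# zdb mypool | grep ashift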
 