Switching ZFS HDDs from 512e mode to 4K mode with an existing ZFS pool?

So I got some 512e/4Kn drives and forgot to switch the logical mode to 4K before I copied my data over from my old drives to the new ones (via zfs send/recv), so the drives are still in 512e mode. I already checked, and ashift=12 on the pool. What would happen if I were to send the "SET SECTOR CONFIGURATION EXT" command to swap them to 4K mode? At a guess, I'd want to do this with the ZFS pool exported so nothing can write to it, but otherwise, when I import it back, would everything be fine?
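For the record, here's how I checked what the drives report (ada0 is just an example; on a 512e drive the output looks something like this):

Code:
camcontrol identify ada0 | grep "sector size"
sector size           logical 512, physical 4096, offset 0
diskinfo -v ada0 | grep -E "sectorsize|stripesize"
	512          	# sectorsize
	4096         	# stripesize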
 
I already checked, and ashift=12 on the pool.

Last time I checked, 2^12 was 4096 (4K), and IIRC vfs.zfs.min_auto_ashift=12 has been the default for several years now. So the pool was already created with a 4K block size, and ZFS doesn't care if the disk firmware lies (as it always does) and reports 512b sectors. ...
 
Last time I checked, 2^12 was 4096 (4K), and IIRC vfs.zfs.min_auto_ashift=12 has been the default for several years now. So the pool was already created with a 4K block size, and ZFS doesn't care if the disk firmware lies (as it always does) and reports 512b sectors. ...
This information is not right. I have installed FreeBSD on my system many times, so I know that if I don't choose to force a 4K sector size during the install, my zpool will have ashift 9. Unless we run different versions of FreeBSD and the default setting differs.
 
On 12.1, ZFS happily stuffed a 512b disk into a 4K pool without that ashift set via sysctl, and then promptly threw a giant fit when I tried to drop a vdev out of a non-RAID pool with "invalid config; all top-level vdevs must have the same sector size and not be raidz".

Investigating the problem with zdb -C showed the stripesize set to... zero. Sounds like a very "defaulty" value.

I would recommend always putting vfs.zfs.min_auto_ashift=12 (for 4K) into /etc/sysctl.conf and rebooting before ever creating the first ZFS pool.
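Something like this, before the first pool ever gets created (tank and the disks are placeholders):

Code:
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf
sysctl vfs.zfs.min_auto_ashift=12    # also takes effect immediately
zpool create tank mirror ada0 ada1
zdb -C tank | grep ashift            # should report ashift: 12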

But this thread is not about ashift.

The original post is about sending a SATA command to the drive to change it from 512e to 4K native. As to whether this will blow up all the data on the drive: I would back up all your data with the expectation that it will, export the pool, and then find out. If the pool is already 4K, this should only affect low-level transfer operations.
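A minimal sketch of that backup, assuming a second pool named backup with enough space (pool and snapshot names are placeholders):

Code:
zfs snapshot -r tank@pre-4kn
zfs send -R tank@pre-4kn | zfs receive -uF backup/tank
zpool export tank    # nothing can touch the drives while the pool is exported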
 
This information is not right. I have installed FreeBSD on my system many times, so I know that if I don't choose to force a 4K sector size during the install, my zpool will have ashift 9. Unless we run different versions of FreeBSD and the default setting differs.

Now that you mention it: I switched that option on without much thinking about it a long time ago; that's why all but my oldest installs have vfs.zfs.min_auto_ashift=12 in their /etc/sysctl.conf.
But given that almost all drives nowadays are 4K anyway and 512b is rather exotic today, it might be worth filing a PR to (finally) make ashift=12 the default. Especially since it won't hurt performance on 512b drives as badly as 512b operations hurt 4K drives.
 
Let's run a little test here... spins up a FreeBSD 12.1 VM

Code:
kldload zfs
sysctl vfs.zfs.min_auto_ashift
vfs.zfs.min_auto_ashift: 9

whoops
 
Now, the previous install was UFS, which is why I had to load the kernel module. I just reinstalled it and did ZFS-on-root, and bsdinstall has this "Force 4K sectors?" option, which the UI sets to YES by default. Let's see what that does.

Code:
sysctl vfs.zfs.min_auto_ashift
vfs.zfs.min_auto_ashift: 12

hmm

Code:
tail -n 1 /etc/sysctl.conf
vfs.zfs.min_auto_ashift=12

Ah, of course. So the default is 9, and if you do ZFS-on-root, bsdinstall will set the sysctl to 12 for 4K for you, which is why it appears to be the default, but it isn't, really.

This default is set in /sys/contrib/openzfs/module/zfs/vdev.c:

Code:
zfs_vdev_min_auto_ashift = ASHIFT_MIN

and that constant is defined in /sys/contrib/openzfs/include/sys/spa.h:

Code:
ASHIFT_MIN = 9
 
So in /etc/sysctl.conf I have
Code:
vfs.zfs.min_auto_ashift=12

And using zdb -C, the pool has ashift: 12

But with the drive reporting 512b logical and 4K physical, will FreeBSD/ZFS be doing 512b or 4K operations under the hood? And should I back up, run the operation, and see what's what?

And yes msplsh, I should have done this _before_ creating the pool, but I completely forgot and was eager to just get this thing up and running.
 
But with the drive reporting 512b logical and 4K physical, will FreeBSD/ZFS be doing 512b or 4K operations under the hood?
I presume that if the drive says it's in 512e mode and ZFS asks for 4K, FreeBSD's SATA driver chops the request into 512b operations, the drive physically fetches it as 4K and feeds it back to the driver in little 512-byte bites, which the driver then throws back in ZFS's face as 4K.

I suggest backing everything up, because I've never issued a SATA command to force the sector mode before, so I don't know what happens or what could happen.
 
I presume that if the drive says it's in 512e mode and ZFS asks for 4K, FreeBSD's SATA driver chops the request into 512b operations, the drive physically fetches it as 4K and feeds it back to the driver in little 512-byte bites, which the driver then throws back in ZFS's face as 4K.

I suggest backing everything up, because I've never issued a SATA command to force the sector mode before, so I don't know what happens or what could happen.
That's kinda what I would think too.

Does anyone know if there's a way to look at the underlying SATA driver to see whether it's chopping things up and stitching them back together? It seems like even if ZFS is sending/requesting 4K at a time, the underlying commands would still be in 512b chunks, which is wasteful and probably hurts performance.
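One way I can think of to peek, assuming the DTrace io provider behaves as documented: histogram the size of each I/O request as it enters the disk layer. If ZFS is really issuing 4K requests, the buckets should pile up at 4096, and any 512b chopping would have to happen below that, in the controller or the drive itself.

Code:
kldload dtraceall
dtrace -n 'io:::start { @["I/O size in bytes"] = quantize(args[0]->b_bcount); }'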

Hm... via geom, I'm seeing the output below. Not sure that tells me anything new, but it certainly supports the idea that the driver under the hood is chopping up/stitching together the 4K that ZFS sends into 512b.

Code:
   Sectorsize: 512
   Stripesize: 4096
 
If you're concerned about it, just back up the data and do the operation. The worst that could happen is that you have to reconstruct the pool.
 
Probably what I'll do. I'll have to rig up something to use some of my old drives to have at least some redundancy and enough storage space to fit the data. I'll figure it out and give it a try.
 
From the Book "FreeBSD Mastery: ZFS" by Michael W. Lucas:
If ZFS uses a 4K sector size on a disk with 512-byte sectors, the disk hardware breaks up the access requests into physical sector sizes, at very little performance cost. While using a larger sector size does not impact performance, it does reduce space efficiency when you’re storing many small files. If you have a whole bunch of 1 KB files, each occupies a single sector.
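(To put rough numbers on the space point, mine, not the book's: 10,000 files of 1 KB each are about 10 MB of data, but on an ashift=12 pool each file occupies a full 4 KB sector, so they consume roughly 40 MB. With 512b sectors each file would fill exactly two sectors with no waste.)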

So I would assume 4K requests are passed through the whole software stack down to the drive. Depending on how dumb the drive's firmware is (and firmware can be incredibly dumb!), it may still break them up into 512b chunks and re-assemble them when actually accessing the platters.
Given that you probably won't see any performance difference beyond measurement tolerance, and the pool is already correctly set up with a 4K sector size, I wouldn't bother...
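If you do want to convince yourself either way, a crude before/after comparison could be as simple as the built-in diskinfo benchmark on an idle drive (ada0 is a placeholder):

Code:
diskinfo -ct ada0    # -c measures command overhead, -t measures seek/transfer rates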

If you absolutely insist on fiddling with the firmware of the drives and you are running mirror vdevs, just try changing the sector size on a single drive and see what happens. If it nukes all the data, that drive gets thrown out of the pool, but you still have the other half of the mirror and can resilver the wiped drive. You could then proceed this way, one drive at a time, until all drives are in 4Kn mode.
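Sketched out with placeholder names (pool tank, disk ada1), leaving the actual sector-size switch to whatever vendor tool supports your drives:

Code:
zpool offline tank ada1
# ... switch ada1 to 4Kn here with the vendor tool of your choice ...
zpool online tank ada1     # if the data somehow survived, a quick resilver catches it up
zpool replace tank ada1    # if the drive comes back blank or faulted, rebuild it instead
zpool status tank          # wait for the resilver to finish before touching the next disk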
 