ZFS Block size error on single disk in a mirror-0 pool

Hi,

I receive this error in a mirror-0 zpool, but only on one disk. Most of the websites/forums I searched describe the case where all disks in a pool are mis-configured.

Considering that only one disk is mis-configured in this situation, is there any chance that I can rectify the issue without destroying and recreating the pool?

Code:
  pool: alldata
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: resilvered 89.3M in 0h1m with 0 errors on Sun Apr 26 00:16:22 2015
config:

        NAME          STATE     READ WRITE CKSUM
        alldata       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            ada2      ONLINE       0     0     0  block size: 512B configured, 4096B native
            ada0      ONLINE       0     0     0

errors: No known data errors

Thanks and regards.
 
ada2 is an "advanced format" disk that has 4k sectors, but the pool* is configured to write in 512B blocks. It's possible that the other disk is also 4k; FreeBSD can only tell if the disk reports a 4k stripe size, or is in FreeBSD's list of known 4k disks.
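If you want to check whether ada0 is also a 4k drive, diskinfo on FreeBSD will show what the disk itself reports; something like the below should do (a drive that emulates 512-byte sectors will usually still report a 4096-byte stripe size):
Code:
diskinfo -v /dev/ada0 | grep -E 'sectorsize|stripesize'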

Unfortunately you can't change the block size (ashift) in ZFS without recreating the pool. You'll need to copy the data off, set the minimum ZFS ashift with the command below, then create a new mirror.
Code:
sysctl vfs.zfs.min_auto_ashift=12
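That sysctl only applies to the running system; if you do decide to set it, you'd presumably also want it in /etc/sysctl.conf so it survives a reboot:
Code:
# /etc/sysctl.conf
vfs.zfs.min_auto_ashift=12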
Edit: Having thought about it a bit more, it should be fairly easy to recreate the pool in your case, as mirrors can be broken. Detach the 4k disk from the pool and create a new pool on that single disk. If ZFS is already telling you it's a 4k disk, you shouldn't even need to set that sysctl; it should use 4k automatically. Once the pool is created, check zpool status just to make sure you don't see the same warning. Then you can use zfs send/recv to migrate the data over. Finally, destroy the old pool and attach the old disk to the new pool.

Code:
zpool detach alldata ada2
zpool create newpool ada2
# confirm the new pool is OK, possibly using zdb -l /dev/ada2 to double-check ashift=12
# send data over to the new pool by creating snapshots and using zfs send/recv
zpool destroy alldata
zpool attach newpool ada2 ada0
Just make sure you use attach in that last command and not add. If you add the second disk, you'll end up with a two disk stripe and have a problem that's a lot harder to fix.
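For the "confirm the new pool is OK" step above, something along these lines should be enough (assuming the label is readable on the raw device, which it normally is when the pool was created on the whole disk):
Code:
zpool status newpool
zdb -l /dev/ada2 | grep ashift    # expect ashift: 12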

*Before anyone says it, yes, the ashift is actually configured at the vdev level, but that's fairly irrelevant here. Even in a multi-vdev pool you can't remove a vdev, so the outcome is the same.
 
Just make sure you use attach in that last command and not add. If you add the second disk, you'll end up with a two disk stripe and have a problem that's a lot harder to fix.

Been there, done that, so I will be triple careful on this.

# send data over to the new pool by creating snapshots and using zfs send/recv

To make things complete, can you please also include the commands for this?

I do refer to the alldata pool in several programs. Is it possible to rename the newpool after destroying the alldata pool?

Thanks,
 
I've put a sample command to send a single ZFS dataset (filesystem) below. There used to be a complication if you had data in the "root" filesystem, as a full zfs recv doesn't work if the destination dataset already exists. I'm not sure if that's still an issue as I never store data in the root. If you have a lot of datasets, there are also recursive options you can use to snapshot and send all of them in one go. Unfortunately I never use those myself, so check the manual before relying on them; a rough, untested sketch follows the example below.
Code:
zfs snapshot alldata/dataset1@migrate
zfs send alldata/dataset1@migrate | zfs recv newpool/dataset1
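For the record, the recursive variant would be roughly as below; treat it as a sketch and check zfs(8) for the exact receive flags before using it (-r/-R operate on the dataset and everything below it, -F lets recv overwrite the existing empty destination, and -u stops the received filesystems from being mounted straight away):
Code:
zfs snapshot -r alldata@migrate
zfs send -R alldata@migrate | zfs recv -Fu newpool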
You can rename a pool as below. You may prefer to rename the existing pool first, then create the new pool with the correct name from the start; a rough sketch of that order follows the commands below.
Code:
zpool export alldata
zpool import alldata newname
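If you go the rename-first route so that the new pool carries the alldata name from the start, the order would be roughly as follows (my sketch; olddata is just a temporary name, adjust to taste):
Code:
zpool detach alldata ada2        # break the mirror
zpool export alldata
zpool import alldata olddata     # old pool now has a temporary name
zpool create alldata ada2        # new 4k pool gets the name your programs expect
# ... zfs send/recv the datasets from olddata to alldata ...
zpool destroy olddata
zpool attach alldata ada2 ada0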
 