ZFS non-native block size

I have been getting an odd warning since upgrading to FreeBSD 10.0, and I'm not sure what to make of it.

Code:
status: One or more devices are configured to use a non-native block size. Expect reduced performance.

This happened to the 3 newest disks. Since the update (and this warning) I experience disk failures. Within 10 minutes, one of the disks will fail. The server starts spitting out a lot of errors and eventually freezes.

The gmirror is also behaving oddly: two failures in one hour. So there is definitely something very wrong, but I don't know what, let alone how to fix it.

Could someone explain to me what is going on and how I should fix this?

Thanks!
 
It looks like some devices in the pool are set to use a 512b sector size when they are actually 'Advanced Format' drives with 4k sectors, or vice versa.
What disks are displaying this problem, and what does the output of zdb -C poolname look like?
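In case it helps narrow things down: the line to look for in the zdb -C output is the ashift of each vdev. A minimal sketch, assuming a hypothetical pool name "tank" (substitute your own) -- the sector size the pool was built for is simply two raised to the ashift:

```shell
# Hypothetical pool name "tank" assumed; substitute your own pool.
# zdb -C tank | grep ashift
#   ashift: 9   -> pool laid out for 512-byte sectors
#   ashift: 12  -> pool laid out for 4096-byte (4K) sectors
# The implied sector size is 2^ashift:
echo $((1 << 9))    # 512
echo $((1 << 12))   # 4096
```

If the drive reports 4K physical sectors but the vdev has ashift: 9, you get exactly the "non-native block size" warning you are seeing.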

I wouldn't expect this to cause the disks to fail, though. People have been dealing with this 512b/4k mismatch for years and it has only ever shown up as a performance issue; it hasn't caused disks to start failing. What errors actually start appearing when one of the drives is failing?
 
zdb -C is spitting out a lot of info. But nothing seems out of the ordinary. Anything specific I should look for?

Two drives are giving checksum errors when I check with zpool status -v. Those are errors, right? It looks like any write action results in lots of I/O errors and freezes the server. For now I have turned it off, because I'm afraid of damaging the data beyond repair. First I want to know what's going on.

I hooked up the 2 problem drives to another PC to check the SMART logs. No errors whatsoever, and the self-tests also complete without errors. So I don't think the drives themselves are the problem.
 
Not a lot of information to go on. Which disk controller (brand, model, and driver) are you using? Same questions about the disks.
 
I was most interested in knowing which disks they are, so we can determine whether they are AF or not, and what ashift the vdevs in the pool use.
Was it all functioning correctly before the upgrade and has any hardware changed?
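To check the AF question yourself, here is a rough sketch assuming a hypothetical disk ada0 (device names and the exact diskinfo output format will vary, so verify against the man pages):

```shell
# Hypothetical device ada0 assumed; substitute your own disk.
# diskinfo -v /dev/ada0
#   512    # sectorsize  (logical)
#   4096   # stripesize  (physical) -> an Advanced Format drive
#
# When (re)creating a pool on AF drives, one long-standing workaround is the
# gnop trick: present the disk with 4K sectors so ZFS picks ashift=12.
# gnop create -S 4096 /dev/ada0
# zpool create tank /dev/ada0.nop
#
# Why the mismatch hurts: with ashift=9 on a 4K drive, a single logical
# write can force the drive into a read-modify-write cycle, because
echo $((4096 / 512))   # 8 logical sectors share one physical sector
```

This only explains the performance warning, though; as said above, it shouldn't make drives fail outright.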
 
I'm not sure everything was functioning correctly before the upgrade. Actually, I don't think it was. Last week I had one drive fail out of the blue. This was a USB drive (yes, I know it's freaky, but I have a few USB drives added to my zpool), so I dismissed it as a glitch. But now it seems the entire on-board USB controller is borked. When I put all the drives on a PCI-e USB controller, everything works fine. I guess the upgrade was the push over the edge.

The strange behaviour of the gmirror can be explained by the server suddenly freezing: it simply had to resync after that. So it was related, but the disks themselves seem fine.

Anyway, I'm not taking any chances. A new motherboard is on the way.
 