ZFS: High fragmentation after zfs send/recv

I have just done a zfs send/recv of an entire pool. The purpose is to move to vdevs based on 4K sectors: the existing pool uses ashift=9 and the new pool uses ashift=12. Since it's not possible to change the ashift of an existing vdev, I detached one drive from every mirrored vdev, created a new pool with the same number of vdevs, and then did a send/recv from the existing pool to the new one. But while the FRAG property of the existing pool is only 2%, the FRAG of the new pool is 40%. The filesystem data mostly uses lz4 compression, so I used zfs send -R -e -p.
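In case it helps, this is roughly the procedure, written out with placeholder pool and disk names (tank for the old pool, tank4k for the new one, da1/da3 for the detached drives); my actual commands differed in names only:

```sh
# Free one drive from each mirrored vdev of the old pool.
zpool detach tank da1
zpool detach tank da3

# Make sure the new vdevs come up with ashift=12.
# FreeBSD: set the minimum auto-ashift before creating the pool.
sysctl vfs.zfs.min_auto_ashift=12
# (On ZFS on Linux/OpenZFS the equivalent is zpool create -o ashift=12.)

# New pool with the same number of vdevs, single-disk for now; the
# remaining old drives can be zpool attach'ed later to re-form the mirrors.
zpool create tank4k da1 da3

# Snapshot everything and replicate it. -R sends the whole tree,
# -p preserves properties (so compression=lz4 carries over), and
# -e uses embedded block data where the pool feature allows it.
zfs snapshot -r tank@migrate
zfs send -R -e -p tank@migrate | zfs recv -F tank4k
```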

Why is the fragmentation so high? The filesystems are supposed to be exactly the same, and I thought I read somewhere that send/recv was a good way to remove fragmentation, yet for me it went from 2% to 40%, a twentyfold increase.

I get the same result (40% fragmentation) with just zfs send -R, except that the pool size increases because the compression is lost.
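For reference, the numbers I'm quoting come straight from the FRAG column of zpool list; with the placeholder pool names from above, the comparison looks like this:

```sh
# FRAG is ZFS's free-space fragmentation metric, not file fragmentation.
zpool list -o name,size,alloc,free,frag,cap tank tank4k

# On OpenZFS, zdb can dump the per-metaslab free-space detail that the
# FRAG figure is derived from (read-only, but best run on an idle pool):
zdb -mm tank4k
```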
 
Thanks for the info bthomson. It's possible that the old pool's figure is inaccurate, since that pool was created and populated on an older zpool version. The send/recv was done on my data pool, which has very few snapshots. Also, I did a full scrub on the sending (old) pool and didn't notice its FRAG figure change.

Something tells me it's something else. The main difference seems to be going from 512-byte to 4K sectors.
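To be sure the sector size is really the only variable, the ashift of every vdev can be read back from the pool configuration (placeholder pool names again):

```sh
# zdb -C prints the cached pool config, one ashift line per vdev.
zdb -C tank | grep ashift    # expect: ashift: 9  (512-byte allocations)
zdb -C tank4k | grep ashift  # expect: ashift: 12 (4K allocations)
```

If I understand the FRAG metric correctly, it only scores the size distribution of free segments, weighted so that small segments count as more fragmented, so the same free space sliced into 4K-granule leftovers could score much worse than it did on the 512-byte pool.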
 
Thanks User23, that really clears up my concern. My pool values are probably normal then. I do wonder, though, whether that amount of undersized free blocks is wasted space. I never go over 90% capacity on my pools, and I don't notice any major performance drop when going past 80%. I'm curious how fragmentation in ZFS compares with NTFS. When I was a Windows sysadmin, years ago now, NTFS fragmentation was a real PITA. I started using ZFS in production on Solaris 10, long before it was available on FreeBSD, and I always thought it held up well as far as fragmentation goes. Hearing now that COW is inherently bad for fragmentation confuses me.
 