Solved: Sending and receiving a thin-provisioned volume - target gets volsize as actual size

EDIT: I misjudged what the problem was from the start. The array has a non-optimal block size; see the answers below.

I am replacing the main zpool in my FreeBSD 10.3-STABLE NAS with a new one, and am therefore transferring all volumes from the old pool to the new one using zfs send | zfs recv.

I get (for me) unexpected behaviour when transferring a thinly provisioned zvol (hosting an NTFS volume) this way: the resulting zvol on the new pool takes up roughly the maximum size specified at creation with -V (the volsize property), not the actual size of the data as given by the used, referenced, etc. properties. This is unlike previous occasions when I have backed up this volume.

Here is what I did together with listings of zfs properties, with the relevant differences highlighted:

(Transfer from store2 -> store1, vol name = win8d)

# zfs get [...] store2/win8d [ORIGINAL]
NAME          PROPERTY           VALUE  SOURCE
store2/win8d  used               187G   -
store2/win8d  referenced         187G   -
store2/win8d  volsize            250G   local
store2/win8d  usedbydataset      187G   -
store2/win8d  logicalused        187G   -
store2/win8d  logicalreferenced  187G   -

# zfs send -p store2/win8d@moving | zfs recv -v store1/win8d

# zfs get [...] store1/win8d [NEW COPY]
NAME          PROPERTY           VALUE  SOURCE
store1/win8d  used               249G   -       <--
store1/win8d  referenced         249G   -       <--
store1/win8d  volsize            250G   local
store1/win8d  usedbydataset      249G   -       <--
store1/win8d  logicalused        187G   -
store1/win8d  logicalreferenced  187G   -

# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
[...]
store1/win8d  249G  5,58T  249G   -       <--
[...]
store2/win8d  187G  376G   187G   -


I expected the resulting zvol to have all size properties except volsize at around 187G, not 249G. So why does the size on disk end up being almost equal to the volsize?

Originally the thinly provisioned volume was created on Linux with "-s -V 250G" and subsequently populated with NTFS and about 187G of data. I sent it to the FreeBSD machine using zfs send/recv over SSH, and it was received in the expected fashion with all relevant properties intact (the version sitting on store2 above is the result of that transfer).
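From memory, that original round looked roughly like this ("oldpool", the host name "nas" and the snapshot name are placeholders):

Code:
(on the Linux box)
# zfs create -s -V 250G oldpool/win8d
(... NTFS created on /dev/zvol/oldpool/win8d and ~187G of data written ...)
# zfs snapshot oldpool/win8d@xfer
# zfs send oldpool/win8d@xfer | ssh nas zfs recv store2/win8d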

Could it be that zfs send works differently on FreeBSD vs. Linux, and/or am I missing a crucial switch to zfs send?

(I tried -p, -R and no flags at all, always with the same result.)
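To be concrete, the attempts were along these lines (destroying the previous copy in between; I may not recall the exact invocations):

Code:
# zfs send    store2/win8d@moving | zfs recv store1/win8d
# zfs send -p store2/win8d@moving | zfs recv store1/win8d
# zfs send -R store2/win8d@moving | zfs recv store1/win8d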
 
I think the problem may be unrelated to send/receive itself. Please tell us about the configuration of your old and new ZFS pools, and the volblocksize of your zvol.
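Something along these lines should show the relevant bits (the zdb output format can differ a little between versions):

Code:
# zpool status store1 store2
# zfs get volblocksize,compression store1/win8d store2/win8d
# zdb -C store1 | grep ashift
# zdb -C store2 | grep ashift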
 
I believe you are right. I tried moving a volume containing the ports collection back and forth between the two pools: it grew from 1.32G to 2.27G when sent from store2 (old) to store1 (the bigger, newer pool), and then shrank back to its previous size when sent the other way. I did not try sending the win8d zvol back, but I assume the same thing would happen.

So it seems it was just coincidental that the new size of the copied NTFS zvol ended up so close to the set volsize (which is really only set so that Windows would not be confused). I don't know whether Windows would consider this disk full, or what happens if I start adding data to it (for example over iSCSI).

Code:
# zfs send store1/ports@moving | zfs recv store2/ports

# zfs list
store1/ports                2,27G  5,58T  2,27G  /usr/ports
store1/win8d                249G  5,58T   249G -
store2/ports                1,32G   374G  1,32G  /store2/ports
store2/win8d                187G   376G   187G -

# zfs send store1/ports@tmp | zfs recv store2/porttmp

# zfs list
store1/ports                2,27G  5,58T  2,27G  /usr/ports
store2/ports                1,32G   374G  1,32G  /store2/ports
store2/porttmp              1,32G   374G  1,32G  /store2/porttmp

store1 = 3x4 TB raidz (new)
store2 = 2x2 TB mirror (old)

compression=on on both pools.

I cannot list a full comparison of zfs properties etc. as store2 has been taken out of the machine and I don't have physical access to it right now. However, it seems reasonable that there are differences in the intrinsic properties of the two pools that account for the difference in listed size. The zvol on the bigger pool (which I can access via SSH) has volblocksize=4K, if that tells you anything. But I still find the observed inflation factor quite large, especially for an NTFS zvol whose files are not known to ZFS.
 
Thanks Eric, that probably explains it. Then it seems that I will have to re-build my array at some point. Basically the same volumes have lived on at least two raidz* pools without me seeing this phenomenon before. I don't remember setting the block size in the past, but maybe I did, probably after reading some rule-of-thumb advice without caring to find out what it meant :)
 
I'd say your biggest problem is the 4K block size; it is too small. ZFS technically allows it, and there are scenarios where it is viable, but the zvol default is 8K, and even that has been found to create bottlenecks on sequential reads. That said, depending on your workload, mirrors can indeed be preferable to RAIDZ for their higher IOPS, not to mention better space efficiency.
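For what it's worth, volblocksize is fixed at creation time, so changing it means creating a new zvol and copying the data into it rather than adjusting the existing one; a rough sketch (16K is only an example value, and the _new/_old names are placeholders):

Code:
# zfs create -s -V 250G -o volblocksize=16K store1/win8d_new
# dd if=/dev/zvol/store1/win8d of=/dev/zvol/store1/win8d_new bs=1M
(with compression=on the all-zero runs should not be allocated, so the copy stays thin)
# zfs rename store1/win8d store1/win8d_old
# zfs rename store1/win8d_new store1/win8d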
 
You likely had a pool with 512-byte (ashift=9) sectors previously, and 4K (ashift=12) sectors now; the allocation constraints (a multiple of n+1 sectors for RAIDZn) are in units of sectors, so the effect is much smaller with ashift=9 at the same volblocksize.
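As a rough back-of-the-envelope check (assuming store1 is a 3-disk RAIDZ1 with ashift=12 and volblocksize=4K, and that RAIDZ space is booked at the vdev's ideal 2/3 data ratio):

Code:
each 4K block   = 1 data sector + 1 parity sector = 8K raw
reported space  ~ 8K x 2/3                        ~ 5.3K
inflation       ~ 5.3K / 4K = 1.33   ->   187G x 1.33 ~ 249G

same 4K block at ashift=9:
                = 8 data sectors + 4 parity sectors = 6K raw
reported space  = 6K x 2/3                          = 4K   (no visible inflation)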
 