EDIT: I misjudged what the problem was from the start. The array has a non-optimal block size; see the answers below.
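For anyone hitting the same symptom: the numbers to compare are the zvol's volblocksize and the ashift of the new pool's vdevs. Something along these lines shows both (the grep is just a quick way to pull ashift out of the config dump; the output will obviously differ on another setup):

# zfs get volblocksize store1/win8d
# zdb -C store1 | grep ashift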
I am replacing the main zpool in my FreeBSD 10.3-STABLE NAS with a new one, and am therefore transferring all volumes from the old pool to the new one using zfs send | zfs recv.
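The per-volume pattern is just a snapshot followed by a plain send/recv pipe, e.g. for the volume in question (the @moving snapshot name is the one I used for this migration):

# zfs snapshot store2/win8d@moving
# zfs send -p store2/win8d@moving | zfs recv -v store1/win8d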
I get (for me) unexpected behaviour when transferring a thinly provisioned zvol (hosting an NTFS volume) this way: the resulting zvol on the new pool takes up the maximum size specified at creation with -V (property: volsize), not the actual size of the data as reported by properties such as used and referenced. This is unlike previous occasions when I have backed up this volume.
Here is what I did together with listings of zfs properties, with the relevant differences highlighted:
(Transfer from store2 -> store1, vol name = win8d)
# zfs get [...] store2/win8d   [ORIGINAL]
NAME          PROPERTY           VALUE  SOURCE
store2/win8d  used               187G   -
store2/win8d  referenced         187G   -
store2/win8d  volsize            250G   local
store2/win8d  usedbydataset      187G   -
store2/win8d  logicalused        187G   -
store2/win8d  logicalreferenced  187G   -
# zfs send -p store2/win8d@moving | zfs recv -v store1/win8d
# zfs get [...] store1/win8d   [NEW COPY]
NAME          PROPERTY           VALUE         SOURCE
store1/win8d  used               [B]249G[/B]   -
store1/win8d  referenced         [B]249G[/B]   -
store1/win8d  volsize            250G          local
store1/win8d  usedbydataset      [B]249G[/B]   -
store1/win8d  logicalused        187G          -
store1/win8d  logicalreferenced  187G          -
# zfs list
NAME          USED         AVAIL  REFER        MOUNTPOINT
[...]
store1/win8d  [B]249G[/B]  5,58T  [B]249G[/B]  -
[...]
store2/win8d  187G         376G   187G         -
I expected that the resulting zvol would have all size properties except volsize at 187G, not 249G. So why does the size on disk end up almost equal to the volsize?
Originally the thinly provisioned volume was created on Linux with "-s -V 250G" and subsequently populated with NTFS and 187 GB of data. I sent it to the FreeBSD machine using zfs send / recv over SSH, and it was received in the expected fashion with all relevant properties intact (the version sitting on store2 above is the result of that transfer).
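From memory, the original creation and transfer looked roughly like this; the Linux-side pool name "tank", the snapshot name and the "nas" hostname are placeholders, not the exact names I used:

# zfs create -s -V 250G tank/win8d
# zfs snapshot tank/win8d@backup
# zfs send -p tank/win8d@backup | ssh nas zfs recv -v store2/win8d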
Could it be that zfs send works differently on FreeBSD vs. Linux, and/or am I missing a crucial switch to zfs send?
(I tried -p, -R, and no flags at all, all with the same result.)
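Concretely, these were the variants, all ending up with 249G used on store1:

# zfs send -p store2/win8d@moving | zfs recv -v store1/win8d
# zfs send -R store2/win8d@moving | zfs recv -v store1/win8d
# zfs send store2/win8d@moving | zfs recv -v store1/win8d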