> ... What if I want to block copy ZVOL -> ZVOL?

There does not seem to be any, see (specifically) this comment: https://cgit.freebsd.org/src/tree/sys/contrib/openzfs/lib/libzutil/zutil_nicenum.c#n113.
I see.. well... I kinda have to meditate on that... You see, I used Linux LVM for a long time and used to clone volumes (block devices) over the net. For that you have to know the exact size, otherwise it doesn't fit on the disk. So when I have an LVM volume it provides the size, and I can create a ZVOL with that particular size. But now I don't have that data anymore on FreeBSD, except in the "zfs history" command... What if I want to block copy ZVOL -> ZVOL?
I'd probably have to use fdisk or something like that... but! it gives a slightly smaller size than what was in zfs history.
> zfs send (which involves a lot of additional metadata, e.g. for snapshots)

Oh! Fine! I didn't read 'Display numbers in parsable (exact) values.' correctly - I only saw the 'parsable', not the 'exact'. Thanks a lot!

zfs get/set/list and zpool get/set (likely other subcommands as well) all have the -p option to display raw values, which could be useful here.
That's not how size and physical space allocation work with ZFS... If you dump a dataset/zvol to a file, its size may vary depending on compression and the amount of metadata that dataset brings along (which also varies depending on e.g. stripe-/blocksize). By far the worst relation between 'physical' and actual dataset/zvol size is data on raidz pools - the huge amount of padding and parity data introduces a significant difference between the size "on zfs" and the "physical" size if you were to dump a dataset/zvol to a file; let alone the data actually moved by zfs send (which involves a lot of additional metadata, e.g. for snapshots).
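If you want a rough idea of how large a send stream would actually be before transferring anything, zfs send has a dry-run mode. A minimal sketch (pool/zvol name is made up):

```shell
# -n = dry run (send nothing), -v = verbose; the output includes an
# estimated stream size, which generally differs from the volsize
# for the reasons above. tank/myvol is a placeholder name.
zfs snapshot tank/myvol@estimate
zfs send -nv tank/myvol@estimate
```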
> zpool get -p -H size | awk '{size=$3/1024^2; print $1, $2, size "MB"}'

No, I think I need the volsize property, not the size that you've used. volsize/1024/1024 matches the size I've used with LVM. It solves the problem of the topic.
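A small sketch of that volsize lookup as a reusable helper, assuming ZFS native encryption aside and a hypothetical dataset name:

```shell
# Print a zvol's exact volsize in MiB - the same number you'd hand to LVM.
# -H strips headers, -p prints the exact byte value instead of "10G".
volsize_mib() {
    bytes=$(zfs get -Hp -o value volsize "$1") || return 1
    echo "$((bytes / 1024 / 1024)) MiB"
}
# volsize_mib tank/myvol    # tank/myvol is a placeholder dataset name
```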
You could just clone the zvol if it is OK that the clones will depend on the original (snapshot). For e.g. VMs that are cloned from a master image this is the preferred way, and it saves a lot of physical space.

I haven't tried it yet like I did with LVM. Suppose I want two ZVOLs, to send data from one to another - how should I allocate the second? I have to do some tests first...
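The clone workflow described above, as a sketch (pool and dataset names are made up):

```shell
# Snapshot the master zvol, then create a clone that shares its blocks.
zfs snapshot tank/master@gold
zfs clone tank/master@gold tank/vm1
# tank/vm1 is writable immediately but depends on tank/master@gold;
# only changed blocks consume new space. 'zfs promote tank/vm1' would
# reverse the dependency later if needed.
```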
> zfs send/recv doesn't look attractive as it doesn't send deltas. Am I right?

Of course you can send incrementals. That's the main point of snapshots, which are always used for zfs send|recv - otherwise you wouldn't be able to work with a dataset while it is being sent, or you would need some (bad) workaround like a temporary buffer for all the deltas that accumulate while the transfer is running (-> that's what LVM does).
> You could just clone the zvol if it is OK that the clones will depend on the original (snapshot).
> zfs send/recv doesn't look attractive as it doesn't send deltas, I believe.

How do you mean? When you send the incremental update from snapshot fs@a to fs@b, it certainly only sends the new changes (deltas), not the whole fs@b. (@a needs to exist at the recv location for this to work, clearly.)
> What if it doesn't fit?

You know the size and reservation of the zvol. Also, you should always leave enough headroom on a zpool for housekeeping, metadata etc. - a rough rule of thumb is that at ~80% allocated space you should extend the pool.
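That rule of thumb can be sketched as a small check; the 80% threshold is the rough value from above and the pool name is a placeholder:

```shell
# Return success (0) when a pool's allocated capacity is at or above 80%.
# 'zpool list -H -o capacity' prints a bare value like "42%".
pool_needs_space() {
    cap=$(zpool list -H -o capacity "$1" | tr -d '%')
    [ "$cap" -ge 80 ]
}
# if pool_needs_space tank; then echo "time to extend the pool"; fi
```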
> Hmmm.. Let's say we have two identical zvols of about 300 GB. One has changed. We want to send the diff to the other. So you're saying... we can send a 1 GB delta? What will that look like?

Take a new snapshot and send the delta between the last synced and the new snapshot...
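As a sketch, assuming both zvols already share a common snapshot @sync1 (all pool/dataset/snapshot names here are made up):

```shell
# Both zvols were identical as of tank/vol0@sync1.
zfs snapshot tank/vol0@sync2                  # capture the ~1 GB of changes
zfs send -i tank/vol0@sync1 tank/vol0@sync2 | \
    zfs receive -F tank/vol1                  # only the delta crosses the pipe
# For a remote target, insert ssh:
#   zfs send -i ... | ssh host zfs receive -F tank/vol1
```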
> How do you mean? When you send the incremental update from snapshot fs@a to fs@b, it certainly only sends the new changes (deltas), not the whole fs@b.

I'm new here. I may be assuming wrong.
> you know the size and reservation of the zvol.
zfs get -p all zpool/path/to/zvol
> Only general stuff in there. I still need to google whether it's possible to send an encrypted zvol anyway.

Something wrong with zfs send? My educated guess is that it won't care - just have zfs receive ready on the other end.
> zfs send -w ... will send the encrypted stream (won't be decryptable on the backup system unless it has access to the keys.)

I just don't know yet. I haven't had a chance to try it before.
Assuming you're talking about zfs encryption (used with zfs-load-key), and not some other layer or encryption of the communication pipe, zfs send -w ... will send the encrypted stream (it won't be decryptable on the backup system unless it has access to the keys). See zfs-send(8) for more details and discussion.
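A sketch of such a raw send; the host and dataset names are placeholders:

```shell
# -w (raw) sends the blocks as stored on disk, i.e. still encrypted;
# the receiving pool never sees plaintext and does not need the keys.
zfs snapshot tank/secretvol@backup1
zfs send -w tank/secretvol@backup1 | ssh backuphost zfs receive tank/backup/secretvol
# To actually read the data on backuphost later, load the keys there:
#   zfs load-key tank/backup/secretvol
```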
And zfs-load-key serves both the volume and its snapshots at the same time while the keys are loaded into memory? I think this will work just fine, thank you.
By default, ZFS volumes are created with a refreservation pre-calculated to account for metadata overhead (including raidz[123] overhead); this is the same as setting refreservation=auto. The only time you might actually care about checking space beforehand is if you're using refreservation=none - but if you're combining these two things, you've designed your system in a very wrong way.

As long as you stick to the default refreservation (which is sensible), ZFS won't allow you to fill your volume more than it possibly can, other datasets won't write so much data that the volume could not be filled, and you cannot receive a volume onto a new pool where it wouldn't fit. Setting refreservation=none can violate all of these principles, and that's why you should avoid doing that.

I didn't know ZFS was that friggin' smart!
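To see the two modes side by side (pool name hypothetical), compare a default volume with a sparse one:

```shell
# Default: refreservation is auto-sized above the 10G volsize
# to cover metadata (and raidz) overhead.
zfs create -V 10G tank/vol-safe
zfs get -H -o value refreservation tank/vol-safe    # larger than 10G

# Sparse volume (-s): refreservation=none, no space is held back,
# so writes can fail late when the pool fills up.
zfs create -s -V 10G tank/vol-sparse
zfs get -H -o value refreservation tank/vol-sparse  # prints "none"
```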