ZFS incremental send not doing anything

So, after finishing a multi-week transfer of a 50TB pool to a new server, I created a new snapshot and started the "zfs send -I" for it.

I started with "zfs send -nP -R -I ..." to get a size, then started the send (using pv to give me status/ETA/etc.). It started as I'd expected, but then just stopped making progress.
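For what it's worth, here's roughly how I derived $size from the dry run. The exact `-nP` output format (and whether it goes to stdout or stderr) depends on the OpenZFS version, so this is a sketch, demonstrated against canned sample output rather than a live pool:

```shell
# Sketch: parse the total stream size out of `zfs send -nP` output for pv -s.
# In real use it would be something like (2>&1 because some versions print
# the summary on stderr):
#   size=$(zfs send -nP -R -I tank@mirror_base tank@mirror_base_2 2>&1 \
#          | awk '/^size/ {print $2}')
# Canned sample lines standing in for the dry-run output (format assumed):
sample="$(printf 'incremental\ttank@mirror_base\ttank@mirror_base_2\t123456\nsize\t123456')"
size=$(printf '%s\n' "$sample" | awk '/^size/ {print $2}')
echo "$size"
```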

My receiver is running:
nc -6l 5001 | sudo zfs receive -Fduvs tank
and my sender is running:
sudo zfs send -R -I tank@mirror_base tank@mirror_base_2 | pv -s $size -ptebarT | nc -6 -N receiver 5001

pv shows 603 KiB sent. That was 14 hours ago when I started it; since then, nothing.
The receiver printed many lines of output when it started seeing the transfer, removing the snapshots that had been deleted on the source. 10+ of:
Code:
attempting destroy tank/volume1@daily-2026-01-24_03.37.00--1w
success
Followed by:
Code:
receiving incremental stream of tank@mirror_base_2 into tank@mirror_base_2
received 312B stream in 0.14 seconds (2.14K/sec)
receiving incremental stream of tank/volume1@monthly-2026-02-01_05.30.00--6m into tank/volume1@monthly-2026-02-01_05.30.00--6m

Have I done something wrong? The receiver's drives are idle, so I don't think it's doing anything, and no bytes are being sent from the sender.

Any idea what could be causing this behavior? Thank you.
 
Simple things first: are your tools (pv) all up to date? An old version of pv had a bug that would frequently hang on a zfs send stream.

Consider reworking your approach to avoid -F on the recv. The big concern is that an accidental filesystem deletion on the source (we are human) can quickly wipe out the matching backup, if you're unlucky about the timing and when you realize the mistake.
 
The tools are all fairly up to date, and these are the same systems I used to do the initial transfers which worked alright. Let me confirm that the pv on the source machine (FreeBSD 13.5) is up to date...
Oh, okay, that was out of date then. It was `1.8.10`, and `pkg upgrade` brings it up to `1.9.31`.

Yeah, I had the same thought about `-F`, but assumed it wouldn't matter. These filesystems have never been mounted on the destination system, so it shouldn't matter at all.

If I interrupt the failed transfer, which seems to have already deleted some old snapshots, will it confuse things when a retry tries to delete them again? Any other changes I should make, or should `zfs receive -duvs tank` be right?

Now having found out (and edited above, hence the time-slip) that pv was out of date, should I try the same command again, with -F?
I guess the most important question is: do I need to do anything to clean up the partial incremental transfer that began...
 
iirc we've had this same behavior when the target pool was created with the wrong compatibility options.
In my case, the target pool was created _by_ the zfs send -R. So, I suspect that's not the problem. Unless the zfs send on 13.5 would confuse the zfs receive on 15.0.
 
that is not how we understand zfs recv to work — we understand it to require an existing pool to receive into. accordingly, last time we did this, our first task was to create the zpool itself, and the zfs send/recv pair populated the pool from the snapshots. When we created the pool on a newer version of freebsd, without using zpool create -o compatibility=$APPROPRIATE_VALUE, the send/recv task would stall out like you're describing, and we had to zpool destroy the pool and recreate it with the right compatibility set.
 
Hmm. You're right, I'm sorry. Of course the pool has to be created on the destination, zfs (send or receive) can't know how to line up vdevs.
My zpool create looks to have been zpool create -m none -o ashift=12 tank [...]. But as noted, after that, it was able to receive the send -R of the whole pool (until it was interrupted), and then of filesystems within the pool after that.
Would the send/receive stall _after_ it was initially working if there was that issue? I wouldn't think so, but I'm not familiar with compatibility options.
 
well, another problem without using the compatibility option is that when you finish the send and try to boot, the bootloader will load the kernel because it doesn't care about parsing the featureset, and then the kernel will fail to mount the root filesystem because it can't parse the featureset. ask us how we know this. ;)

in your shoes we'd zpool destroy tank; zpool create -m none -o ashift=12 -o compatibility=openzfs-2.1-freebsd ... and try again — per git logs, 13.5 tracks OpenZFS 2.1.
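to pick a value for compatibility=, the feature-set files shipped with openzfs can be listed — any filename there is a valid value. the path below is the documented default install location; it may differ on some systems:

```shell
# list the compatibility feature sets shipped with openzfs; each filename
# is a valid value for `zpool create -o compatibility=...`
# (path per the openzfs docs; adjust if your install puts it elsewhere)
compat=$(ls /usr/share/zfs/compatibility.d/ 2>/dev/null)
[ -n "$compat" ] || compat="no compatibility.d found on this system"
echo "$compat"
```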
 
And I would suggest rolling back each filesystem on the receive side to the @mirror_base snapshot and discarding any partial receive, given that you've hardly transferred any data. That will be much easier overall than completing the partial transfer and then finishing individual filesystems piece by piece.
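A sketch of what that rollback pass could look like. The dataset names here are illustrative (taken from this thread), and the commands are only printed rather than executed, so review the list before running anything for real:

```shell
# Sketch: print (not run) a rollback of every dataset to @mirror_base, plus
# abandoning any saved resumable-receive state with `zfs receive -A`
# (relevant because the receive used -s). In practice the dataset list
# would come from:
#   datasets=$(zfs list -H -o name -r tank)
datasets="tank tank/volume1"   # illustrative names only
cmds=""
for ds in $datasets; do
    cmds="${cmds}zfs rollback -r ${ds}@mirror_base
"
done
# drop the partial receive state on the dataset that was mid-stream
cmds="${cmds}zfs receive -A tank/volume1
"
printf '%s' "$cmds"
```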

If you choose to still use -F, you don't need to worry about the snapshots it already deleted; it primarily makes sure that the set of remaining ones matches. But, again, don't use -F. There are exceedingly few situations where "-F" is the right choice.
 