ZFS FreeBSD 8.4: move zfs pool to new disks

Hi, all

I have had FreeBSD 8.4-STABLE running fine for years. I need exactly this version for a legacy application which cannot be moved to another version.
So an upgrade is not an option for me.

I have no option to set up a new server. All activity has to be done remotely on the existing server.

I have a raidz1 zpool on that system:
Code:
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0
            gpt/disk3  ONLINE       0     0     0

errors: No known data errors

All existing HDDs are too old and should be replaced.

I would like to migrate this pool to a new ZFS mirror pool, which I can create from two new SSDs.
The server has only 4 SATA ports available.

I detached one HDD and attached two new SSDs. The running zroot pool is now in a DEGRADED state.
I was able to create new GPT partitions on the SSDs and a ZFS mirror pool from them.

Code:
config:

        NAME                     STATE     READ WRITE CKSUM
        zroot                    DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            7048945465148507392  UNAVAIL      0     0     0  was /dev/gpt/disk1
            gpt/disk2            ONLINE       0     0     0
            gpt/disk3            ONLINE       0     0     0

errors: No known data errors

  pool: zroot-2021
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        zroot-2021          ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            gpt/disk1-2021  ONLINE       0     0     0
            gpt/disk2-2021  ONLINE       0     0     0

errors: No known data errors

I started a send/recv process, which got stuck:
Code:
zfs send -R zroot@b2021 | zfs recv -u zroot-2021

Could you please help me migrate to the new ZFS pool remotely?
I can ask for on-site support if needed, but I would have to provide a detailed manual of what to do.

Thanks in advance.
 
Well, if an upgrade is not an option, then you are out of luck.
This is a support forum for FreeBSD versions 12 and 13.
You are using an operating system that went EOL over 6 years ago.
A legacy app isn't a good reason.
 
If you don't have lots of snapshots, clones, etc., you can rsync / tar / etc.
You should have left the original pool as it was and created the second pool with one disk,
then, if successful, added the second SSD to mirror it.
Try to serialize / send to /dev/null and see if it completes without error.
If that works, you can try various tactics to complete the transfer.
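For the /dev/null test, something like this should be enough (the snapshot name testsend is just a placeholder); piping through dd gives you a byte count at the end:
Code:
# take a fresh recursive snapshot of the source pool
zfs snapshot -r zroot@testsend
# serialize the whole pool and throw the stream away; dd reports bytes copied
zfs send -R zroot@testsend | dd of=/dev/null bs=1m
# a clean exit with a byte count roughly matching the pool's ALLOC is a good sign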
 
Sometimes it helps to kill the offending send process (kill pid for SIGTERM, or kill -9 pid if it won't die) and re-start it. I think that ZFS is smart enough to just pick up where it was interrupted, and continue without missing a beat. A brute-force tactic would be to reboot the send server and try again.
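If you try that, the sequence would be roughly (the pid is whatever pgrep reports, a placeholder here):
Code:
pgrep -lf "zfs send"     # find the stuck send process
kill <pid>               # try SIGTERM first; kill -9 <pid> if it hangs around
# then start the same zfs send | zfs recv pipeline again from the top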
 
Probably too late, but there is this: zfs recv has the parameter -s:

-s If the receive is interrupted, save the partially received state,
rather than deleting it. Interruption may be due to premature
termination of the stream (e.g. due to network failure or failure
of the remote system if the stream is being read over a network
connection), a checksum error in the stream, termination of the zfs
receive process, or unclean shutdown of the system.

The receive can be resumed with a stream generated by zfs send -t
token, where the token is the value of the receive_resume_token
property of the filesystem or volume which is received into.

To use this flag, the storage pool must have the extensible_dataset
feature enabled. See zpool-features(5) for details on ZFS feature
flags.
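That flag only exists on ZFS versions with feature flags (extensible_dataset), so it is almost certainly not available on the 8.4 pools in this thread, but for reference the resumable flow looks roughly like this:
Code:
# receive with -s so an interrupted stream leaves a resumable partial state
zfs send -R zroot@b2021 | zfs recv -s -u -F zroot-2021

# after an interruption, read the resume token from the target
zfs get -H -o value receive_resume_token zroot-2021

# restart the transfer from where it stopped (<token> is the value printed above)
zfs send -t <token> | zfs recv -s -u zroot-2021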
 
OP says there are only 4 SATA ports available, and the original pool is raidz1-0.
If your intent is to have the mirror bootable, that's going to be "interesting".

I'd start by putting the HDD back in and making sure the existing pool recovers.
That would leave you with room for just one of the SSDs.
Blow away any partitioning and any ZFS info that may be hanging around on that SSD; this may require dd'ing a few MB at either end of the disk.
Partition the SSD, making sure the partitions are aligned (easiest to use 1M alignment), with a freebsd-boot or efi partition big enough and a freebsd-zfs partition big enough for all the data.
I don't feel comfortable with snapshotting the whole zpool and trying to send | recv it, so I would do individual datasets from the original (rough sketch below).
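A rough sketch of the wipe and the per-dataset copy, assuming the SSD shows up as ada6 and the new pool is called newpool (both placeholders):
Code:
# blow away old partitioning / ZFS labels on the SSD
gpart destroy -F ada6
dd if=/dev/zero of=/dev/ada6 bs=1m count=4       # zero the first few MB
# (diskinfo ada6 gives the media size if you also want to zero the tail end)

# after partitioning and creating the new pool, copy dataset by dataset
zfs snapshot zroot/ftp@copy1
zfs send zroot/ftp@copy1 | zfs recv -u newpool/ftp
zfs snapshot zroot/tmp@copy1
zfs send zroot/tmp@copy1 | zfs recv -u newpool/tmp
# ...and so on for each dataset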

I have a question:
You say 8.4: did that have ZFS support? The earliest I remember is FreeBSD 9. ETA: OK, it looks like 8.4 did have ZFS support.
The status message on zroot-2021 is interesting: "...The pool is formatted using a legacy on-disk format." Is this disk being reused from another system? If you are running on the 8.4 system and did the gpart and zpool create steps there, I would not expect to see that.
If one builds a custom kernel on FreeBSD 12, can it support running 8.x binaries?
 
When you compile a FreeBSD 13 kernel you can choose,
Code:
options    COMPAT_FREEBSD4     # Compatible with FreeBSD4
options    COMPAT_FREEBSD5     # Compatible with FreeBSD5
options    COMPAT_FREEBSD6     # Compatible with FreeBSD6
options    COMPAT_FREEBSD7     # Compatible with FreeBSD7
Some things might continue to run.
 
Your ZFS version is very early. Most people were not booting off ZFS back then because there were "issues" (they had a UFS root partition).

I agree strongly with mer. Put your original RAIDZ1 disks back in place, and re-silver. It's too dangerous to proceed any other way.

Can you get extra disk capacity by deploying external media, e.g. eSATA disks, USB disks, or even thumb drives? Will the system boot from USB media?

Will your application tolerate down time? If so, how much? How much data in the zroot pool? On each disk?
 
If you inherited this situation, I would politely point out to management that it's parlous... CYA!

In terms of options to escape, I think it's reasonably possible. Here is one way that I think may work:

Remove one of the two new disks, and replace the original stripe member.

Re-silver the existing stripe with zpool-replace(8). Make sure it's fully redundant before proceeding.
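With the status shown earlier, that step could look roughly like this (assuming the original disk, or its replacement, again carries the gpt/disk1 label):
Code:
# if the original HDD comes back intact, onlining it may be enough
zpool online zroot gpt/disk1

# otherwise, replace the UNAVAIL member with the re-partitioned disk
zpool replace zroot gpt/disk1 gpt/disk1

zpool status zroot     # wait until the resilver completes before continuing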

Partition the new disk. It would need to look something like this (start partitions 2 and 3 on megabyte boundaries; all free space in partition 3):
Code:
[sherman.399] # gpart show ada0
=>       40  488397088  ada0  GPT  (233G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   33554432     2  freebsd-swap  (16G)
   33556480  454840320     3  freebsd-zfs  (217G)
  488396800        328        - free -  (164K)
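A sketch of gpart commands that would produce a layout like that (ada0 and the sizes are only examples; adapt to the real device):
Code:
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart add -t freebsd-swap -a 1m -s 16g ada0
gpart add -t freebsd-zfs  -a 1m ada0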
Create a new zroot2 pool on the new disk:
Code:
[sherman.400] # zpool status zroot2
  pool: zroot2
 state: ONLINE
  scan: scrub repaired 0B in 00:01:31 with 0 errors on Sun Nov 21 03:03:25 2021
config:

    NAME        STATE     READ WRITE CKSUM
    zroot2       ONLINE       0     0     0
        ada0p3   ONLINE       0     0     0
Check and set the pool options for zroot2 with zpool-get(8) and zpool-set(8).
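For example (what to set depends entirely on what the old pool uses):
Code:
zpool get all zroot      # note anything non-default on the old pool
zpool get all zroot2
zpool set autoreplace=on zroot2     # example property only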

You could then use zfs send with zfs receive to copy a snapshot of the existing zroot pool to the zroot2 pool on the new disk. Then fix up the boot partition on the new disk.
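Roughly like this (the snapshot name copy1 is a placeholder; the boot partition index follows the example layout above):
Code:
# full copy of the live pool into the new pool
zfs snapshot -r zroot@copy1
zfs send -R zroot@copy1 | zfs recv -u -F zroot2

# write the GPT ZFS boot blocks into partition 1 of the new disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0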

At this point you commence an outage and will need console access. Test boot single-user from the new disk; if it looks good, reboot single-user from the old disks, create a new zroot snapshot, and incrementally copy it to zroot2. zroot and zroot2 now have exactly the same data.
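The incremental copy would look something like this (reusing the placeholder snapshot names from above):
Code:
# in single-user mode, booted from the old disks
zfs snapshot -r zroot@copy2
zfs send -R -i zroot@copy1 zroot@copy2 | zfs recv -u -F zroot2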

Make the changes required to boot from, and swap on, the new disk. Shutdown, pull out the old disks, and add the second new disk.
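What exactly needs changing depends on how the 8.4 system boots from ZFS, but it will be something along these lines (dataset and partition names are placeholders; the files live on the copied zroot2 file systems, so mount those to edit them):
Code:
# tell the boot blocks which dataset to boot from
zpool set bootfs=zroot2 zroot2      # or zroot2/<root-dataset> if / lives deeper

# in the copy's /boot/loader.conf:
#   vfs.root.mountfrom="zfs:zroot2"
# in the copy's /etc/fstab, point swap at the new disk:
#   /dev/ada0p2   none   swap   sw   0  0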

Boot multi-user with the new zroot. The outage is over.

It should then be possible to mirror the single new disk.
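Once the old disks are out and the second SSD is in, partition it exactly like the first one (same gpart commands as above), install the boot blocks, and attach it (device names assumed):
Code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
zpool attach zroot2 ada0p3 ada1p3
zpool status zroot2        # wait for the resilver to finish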

Depending on the risk profile, I would be tempted to trial the whole thing on a virtual machine. Certainly you should prepare a detailed plan and air it here.

A significant risk is that your version of ZFS is so old that it may not behave according to the current manuals...

You would still have a spare disk slot, and could deploy that to perform a FreeBSD upgrade, at a later date.
 
Hi again!
Sorry for the delay.

Thank you very much for all your ideas.
Migration to virtualisation is not an option. Our application depends on the hardware, so that would be even more difficult.

Now a remote engineer was able to remove one HDD from the zroot (raidz1-0) pool and add two SSDs for the new mirror pool.

Ok.

I can see both ZFS pools:

Code:
# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: resilvered 2,21M in 0h0m with 0 errors on Tue Jan  1 00:01:09 2002
config:

        NAME                     STATE     READ WRITE CKSUM
        zroot                    DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            gpt/disk1            ONLINE       0     0     0
            gpt/disk2            ONLINE       0     0     0
            4328278847211220960  UNAVAIL      0     0     0  was /dev/gpt/disk3

errors: No known data errors

  pool: zroot-2021
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        zroot-2021          ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            gpt/disk1-2021  ONLINE       0     0     0
            gpt/disk2-2021  ONLINE       0     0     0

errors: No known data errors

I deleted all existing ZFS snapshots and created a new recursive one for all file systems:
Code:
zfs snapshot -r zroot@b2021

Then I ran the following:
Code:
zfs send -R zroot@b2021 | zfs recv -v -u -F zroot-2021

I saw some activity and then it got stuck:
Code:
# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       24,8G  1,33T      9     58   557K   163K
zroot-2021   222M   464G      0     13  1,20K   778K
----------  -----  -----  -----  -----  -----  -----
zroot       24,8G  1,33T      0      0      0      0
zroot-2021   222M   464G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
zroot       24,8G  1,33T      0      0      0      0
zroot-2021   222M   464G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
...

After 10-15 minutes: the same, no progress.

zfs list output:
Code:
# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
zroot                           16,5G   891G   941M  /zroot
zroot-2021                       222M   457G    31K  /mnt/zroot-2021
zroot/ftp                        204M  9,80G   204M  /ftp
zroot/ports-old                 1,39G   891G  1,39G  /usr/ports
zroot/tmp                        291M   891G   291M  /tmp
...

As you can see, zroot-2021 is there, but its used size is nowhere near what zroot shows.

My current gpart configuration is:
Code:
# gpart show -lp
=>       34  976773101    ad4  GPT  (465G)
         34        128  ad4p1  (null)  (64k)
        162    4194304  ad4p2  swap1  (2.0G)
    4194466  972578669  ad4p3  disk1  (463G)

=>       34  976773101    ad5  GPT  (465G)
         34        128  ad5p1  (null)  (64k)
        162    4194304  ad5p2  swap2  (2.0G)
    4194466  972578669  ad5p3  disk2  (463G)

=>       34  976773101    ad6  GPT  (465G)
         34          6         - free -  (3.0k)
         40        216  ad6p1  boot0  (108k)
        256  976772872  ad6p2  disk1-2021  (465G)
  976773128          7         - free -  (3.5k)

=>       34  976773101    ad7  GPT  (465G)
         34          6         - free -  (3.0k)
         40        216  ad7p1  boot0  (108k)
        256  976772872  ad7p2  disk2-2021  (465G)
  976773128          7         - free -  (3.5k)
#

How do I send all the ZFS data to the new pool, and how do I modify the new pool so it boots correctly once the former pool is removed from the server?
 
If one builds a custom kernel in FreeBSD12, can it support running 8.x binaries?
When you compile a FreeBSD 13 kernel you can choose,
COMPAT_FREEBSD4 up to COMPAT_FREEBSD12 is already included in the GENERIC kernel. If you need to run that binary straight on the host then you're also going to need misc/compat8x, misc/compat9x, etc, all the way up to misc/compat12x.

COMPAT_FREEBSD10 and misc/compat10x is for running 10 binaries on 11. COMPAT_FREEBSD11 and misc/compat11x is for running 11 binaries on 12. So if you need to run 8 binaries on 13, for example, you will need COMPAT_FREEBSD8, COMPAT_FREEBSD9, COMPAT_FREEBSD10, COMPAT_FREEBSD11 and COMPAT_FREEBSD12. You would also need all the compatNx (with N from 8 up to 12) libraries.
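So the userland side, on a new enough host, would be roughly:
Code:
pkg install misc/compat8x misc/compat9x misc/compat10x misc/compat11x misc/compat12x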
 
How to send all ZFS data to new pool and how to modify new pool to set it bootable correctly once former pool will be removed from server?
What is the original ask? The current HDDs are old and need to be replaced. Sounds like in #11 you have the original raidz1 with two of the HDDs replaced with SSDs. Is that correct?
Why not simply replace the last HDD with an SSD and leave things alone? That is the simplest thing to do; don't concern yourself with trying to change the configuration of vdevs and such.
It's safer than trying to remotely figure out new boot devices and their configuration.

I would replace the final HDD with an SSD, then start snapshotting the datasets and sending them to an external device to give you a backup of the data.
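A sketch of that, with placeholder labels and an assumed external pool called extpool (a plain file on external storage works too):
Code:
# replace the missing/old raidz1 member with an SSD partition
zpool replace zroot gpt/disk3 gpt/disk3-ssd
zpool status zroot                         # let the resilver finish

# then keep a backup stream on external media
zfs snapshot -r zroot@backup1
zfs send -R zroot@backup1 | zfs recv -u -F extpool/zroot-backup
# or, without a second pool:
zfs send -R zroot@backup1 > /mnt/usb/zroot-backup1.zfs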

SirDice in #12 tells me that FreeBSD 12 should be able to run 8.x binaries if you pkg install a few things. That should give you an upgrade path.
 
Yes, I replaced one HDD with an SSD in the system, but not in the ZFS pool.
I did it that way because I would like to move the existing raidz pool to a new mirror; that is why I need two free SATA ports.
Replacing two of the HDDs with the two SSDs could be an option in my case. Thanks. But I will need some clarification from you:
1. If an SSD is smaller, let's say 1 MB smaller, how will ZFS manage it?
2. The last (third) HDD will be left in the raidz pool and will be the slowest point, correct? But later it could be replaced by a third SSD without any issues?

P.S. Two attempts failed when the new mirror pool's size reached 4.93 GB. It seems to me something is wrong with my old pool. Running a scrub now.

Thanks.
 
The ZFS pool size will typically adjust to the smallest device. If you have, say, raidz1-0 with two 4 TB drives and one 2 TB drive, the overall pool will act as if it has three 2 TB drives. 1 MB is typically noise on modern disk sizes.
The slowest device in the pool will drive the overall read/write times. It also depends on the configuration of the vdev. A mirror typically has reads complete from the fastest device but writes complete from the slowest device. RAID configs where everything is striped across all devices tend to be driven by the slowest.

Scrubbing is definitely a good step. Exporting and then importing is also not a bad thing, because it forces/flushes consistency.
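For the new (non-root) pool that is just:
Code:
zpool export zroot-2021
zpool import zroot-2021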
 
Running scrub.
Without the redundancy provided by the third spindle for the RAIDZ1 set, the scrub will be hamstrung.
I can't see anything obviously wrong with the send/receive. So the situation is worrying. I sincerely hope that it turns out well.
I'd certainly be checking the backups. Thinking about a rescue plan for the application data would be in order.
 