ZFS: Change zfs mirror pool to a mirrored vdev in another pool

Good morning FreeBSD community.
I have a question about ZFS data handling and couldn't find anything on the Internet (or maybe I just searched poorly).

My task is the following:
I have two separate ZFS mirror pools with data on them. For testing purposes, I set up a test environment with loopback (md) devices.
Code:
  pool: zfs_r1
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zfs_r1      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            md0     ONLINE       0     0     0
            md1     ONLINE       0     0     0

  pool: zfs_r2
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zfs_r2      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            md2     ONLINE       0     0     0
            md3     ONLINE       0     0     0

I want to connect them to one data pool with two mirror vdevs, looking like
Code:
  pool: zfs_r1
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zfs_r1      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            md0     ONLINE       0     0     0
            md1     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            md2     ONLINE       0     0     0
            md3     ONLINE       0     0     0

By detaching md3 from zfs_r2 and adding it to zfs_r1, I lose my data.
Is there a way of "moving" the md2/md3 mirror so it becomes a vdev in another pool?

Thank you for your help and your recommendations.
Kind regards, Martin.
 
By detaching md3 from zfs_r2 and adding it to zfs_r1, I lose my data.
The zfs_r2 pool has a single vdev which is a mirror of md2 and md3.
Detaching md3 will degrade the pool, but it will still be fully functional.
You can expand the zfs_r1 pool by adding a second vdev.
It can be ANY sort of vdev. There are no restrictions, except those imposed by common sense.
I suggest you test the following:
  1. detach md3 from pool zfs_r2;
  2. add md3 as an additional vdev to pool zfs_r1;
  3. copy the data in pool zfs_r2 to pool zfs_r1;
  4. destroy pool zfs_r2; and
  5. attach md2 to pool zfs_r1 device md3 (changing the md3 vdev from a single disk to a mirror).
The zfs_r1 pool is likely to be "unbalanced", but it should be in the configuration you want.
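The five steps above can be sketched as commands, using the pool and device names from this thread. Note that `zpool add` will normally warn about a mismatched replication level when you add a single disk to a pool whose existing vdev is a mirror, so `-f` is needed for step 2 (test this on the md devices first, not on real data):

```shell
# 1. Detach md3 from zfs_r2 (the pool keeps running, degraded)
zpool detach zfs_r2 md3

# 2. Add md3 as a second, single-disk vdev to zfs_r1;
#    -f overrides the "mismatched replication level" warning
zpool add -f zfs_r1 md3

# 3. Copy the data across (plain cp shown; zfs send/recv also works)
cp -a /zfs_r2/. /zfs_r1/

# 4. Destroy the now-empty source pool
zpool destroy zfs_r2

# 5. Attach md2 to the md3 vdev, turning it into a mirror again,
#    then wait for the resilver to finish
zpool attach zfs_r1 md3 md2
zpool status zfs_r1
```

Between steps 2 and 5 the md3 vdev has no redundancy, and losing it would fault the whole pool, so keep zfs_r2 intact until the copy is verified.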
 
Oh great, thank you. That was more or less what I did so far. I detached md3 from zfs_r2 and added md3 as a vdev to zfs_r1...
When I had a look at zfs_r1, the data from md3 was missing. Because md2/md3 were mirrored, I assumed the data would still be present once md3 joined zfs_r1. Obviously a fallacy.

I am an absolute beginner with ZFS, so my question probably seemed a bit stupid.
I'll check copying the data from zfs_r2 to zfs_r1, destroying and attaching md2 to zfs_r1.

What do you mean with unbalanced? You mean vdevs with different sizes (2x1TB and 2x4TB)?
Or what does unbalanced in the ZFS terminology mean?
 
What do you mean with unbalanced?
Fully functional, but slower than it could be because the data will not be striped evenly across the two mirrors -- because you populated the mirrors serially, rather than in parallel. ZFS will rectify this with time if the pool has active file deletions and creations.
 
Fully functional, but slower than it could be because the data will not be striped evenly across the two mirrors -- because you populated the mirrors serially, rather than in parallel. ZFS will rectify this with time if the pool has active file deletions and creations.
Thank you for the more detailed information. With "populating in parallel" I guess you mean creating two mirror vdevs at once, like:
Code:
zpool create zfs_r1 mirror /dev/md0 /dev/md1 mirror /dev/md2 /dev/md3
 
You can’t merge pools. You can use send/recv to move filesystems (optionally with snapshot histories) between pools.

Thank you. Unfortunately, it didn't really help because I am at the very beginning with FreeBSD and ZFS. I read somewhere on the web that taking snapshots and using send/recv would work. At least for me, it didn't. It might have been wrong usage or the wrong procedure.

Could you perhaps tell me in more detail what you meant by "send/recv to move file systems"?

As gpw928 mentioned, I copied the data with the standard cp command. Is there a better way than cp? At the moment I am only juggling random test data, but when moving a production pool it would be nice to be fast and not lose data.
 
Could you perhaps tell me in more detail what you meant by "send/recv to move file systems"?

There are plenty of how-tos available from googling “zfs send recv”. Here’s one: https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html

Send/recv will allow you to create an exact replica of a ZFS filesystem’s state (potentially with child filesystems and snapshot histories) in a new location — on the same pool, or on a different pool, or even on a pool on a different host.
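A minimal sketch of that, using the pool names from this thread (the snapshot name `@migrate` and the target dataset `zfs_r1/old_r2` are placeholders):

```shell
# Take a recursive snapshot of everything in zfs_r2
zfs snapshot -r zfs_r2@migrate

# Replicate the whole pool -- child datasets, properties,
# and snapshot history -- into a new dataset on zfs_r1
zfs send -R zfs_r2@migrate | zfs receive zfs_r1/old_r2

# Only after verifying the copy:
# zpool destroy zfs_r2
```

Unlike cp, this preserves dataset properties and snapshots, and the stream is checksummed end to end.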

Note that pool and filesystem are not synonymous. The pool is the management layer that describes and deals with the underlying storage (typically physical disks). The filesystems are a POSIX-compatible (mountable) interface for storing data on a pool. (In legacy terms: the pool is like configuring a RAID system, and the filesystem is similar to partitioning the RAID, formatting those partitions with filesystems, and mounting them, but much more flexible and feature-laden, because ZFS manages the whole stack from the filesystem layer down to the bytes on a device.)
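To make the distinction concrete, a minimal sketch (the device names `da0`/`da1` and the dataset layout are made up for illustration):

```shell
# Pool layer: group two disks into a mirrored pool called "tank"
zpool create tank mirror da0 da1

# Filesystem layer: create mountable datasets on that pool,
# each with its own properties
zfs create tank/home
zfs set compression=lz4 tank/home

# One pool, many filesystems
zfs list -r tank
```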
 