ZFS questions

I know next to nothing about ZFS, and the more I read about it the more my head hurts, so I thought I'd pose a few questions to the venerable people who read these forums.

How does import/export work? Can I, for example, export an existing pool to a new device simply by creating a new pool on that device, and can I export via SSH?
 
It looks like I need to use send and receive if I want to use SSH.

import/export is for pools that are local to each other.
 
If I have two ZFS devices, both with a zpool name of zroot, and I run zpool list, should they both show up, and how can I tell which one I have actually booted from?
 
I recommend getting one, two, or better yet several additional storage drives. Any functioning HDD/SSD not currently in use will do; you may find some in your spare parts box in the attic, or get a few for a couple of bucks at a second-hand shop or a garage sale.
Create some zpools with those, write some random data to them (e.g. cp your /home/), and simply do some experiments: import, export, zfs send, snapshots, resilvering, even kill one drive within a mirror or raidz pool... play with it!
The practical experience you gather is worth more than hundreds of pages of text.
zfs(8) zpool(8)
 
I have a number of ZFS systems but they all use zroot as the pool name. I didn't realise this might have caused a problem until now.

If I have two devices connected is it possible to change the pool name of the system I haven't booted from?
 
I have just created a new 50GB partition on the disk I booted from and created a zpool called ztest. I have another disk attached which has a bootable ZFS system with a zpool called zroot.

How do I show that both pools exist?

If I run zfs list I only see ztest which I've just created.
 
Use the zpool(8) command. Example:
Code:
root@kg-f2:~ # zpool status
  pool: z2
 state: ONLINE
  scan: scrub repaired 0B in 09:11:45 with 0 errors on Sun Mar 15 12:12:52 2026
config:

    NAME        STATE     READ WRITE CKSUM
    z2          ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        ada0p1  ONLINE       0     0     0
        ada1p1  ONLINE       0     0     0
        ada2p1  ONLINE       0     0     0
        ada3p1  ONLINE       0     0     0
        ada4p1  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:02:30 with 0 errors on Sun Mar 15 03:03:55 2026
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        ada5p3  ONLINE       0     0     0
        ada6p3  ONLINE       0     0     0

errors: No known data errors
 
FreeBSD Mastery: ZFS and FreeBSD Mastery: Advanced ZFS by Michael W. Lucas and Allan Jude are still among the best books for starting to understand ZFS.

Example:
zpool export is roughly equivalent to cleanly unmounting a filesystem. Anything not yet flushed to the device gets flushed, everything is made consistent, and all datasets on that pool are no longer available.

zpool import is roughly equivalent to mounting a filesystem; and since a pool may have more than one dataset, all datasets that have canmount=on are mounted at whatever mountpoints they have defined.
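As a concrete round trip of the above (the pool name tank is just an example; run as root):

```shell
# Cleanly detach the pool: flush everything, unmount all datasets,
# and mark the pool as exportable
zpool export tank

# With no pool name, list pools that are available for import
zpool import

# Import the pool again; all datasets with canmount=on are mounted
# at their defined mountpoints
zpool import tank
```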
 
If I have two devices connected is it possible to change the pool name of the system I haven't booted from?
I am not entirely certain what you want to achieve, but focusing just on this part, 'is it possible to change the pool name?' the answer is yes.

Eg. A Linux host has ZFS on Root installed on rpool; change it to zroot:

Boot the host using a recovery disk or live CD/flash drive with the same OS and the same ZFS version. This is a special case, as you cannot export the root pool you booted from; booting from a live CD means booting a different filesystem.
Code:
zpool export rpool
zpool import rpool zroot
zpool export zroot
Reboot host without the live CD

Eg. Renaming an additional pool not used for booting to make it clear what it is used for (in this case Podman containers). No need to boot from a live CD.

Code:
zpool export zdata3
zpool import zdata3 podman
zpool export podman
zpool import podman
 
I have a number of ZFS systems but they all use zroot as the pool name. I didn't realise this might have caused a problem until now.

If I have two devices connected is it possible to change the pool name of the system I haven't booted from?
Yes, and it is annoying. For this reason I use a different pool name, like "server1" or "workstation2", for each system I manage.

zpool import

also shows the GUIDs of the pools available for import. You can import a pool by its GUID, assigning it a different name:

zpool import 1234567890000000 my_new_pool

Often it is also necessary to mount the automounted datasets under a different root, to avoid conflicts, or to not mount them at all. There are options for this, listed in the manpage.
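For example, importing the foreign pool by GUID without clashing with the running system's mountpoints (the GUID and the names here are placeholders):

```shell
# Mount everything under an alternate root instead of the real mountpoints
zpool import -R /mnt 1234567890000000 xroot

# Or instead: import without mounting any datasets at all
zpool import -N 1234567890000000 xroot
```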
 
How do I show that both pools exist?

If I run zfs list I only see ztest which I've just created.
zpool list shows the available pools.

zfs list shows the datasets of already imported pools.

These are basic concepts, so it is better if you (re)read some introduction to ZFS, because otherwise there will never be an end to the possible questions and answers.

Sadly, even after learning the basic concepts, there are many corner cases of ZFS that can puzzle you. For example, I fought a lot with the auto-mounting of datasets, and with how the order of mounting can hide existing content underneath the mountpoints.
 
It looks like I need to use send and receive if I want to use SSH.

import/export is for pools that are local to each other.
There are a few options.
With ssh you can set up an IP tunnel and use iSCSI to connect to remote drives just as if they were local.
Then you can import zpools as if they were local.
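A rough sketch of the tunnel idea, assuming the remote host already exports the disk as an iSCSI target (the host, user, and target names here are all made up):

```shell
# Forward the iSCSI port (3260) from the remote host to localhost
ssh -N -L 3260:localhost:3260 user@remotehost &

# Attach the tunnelled target; the remote disk shows up as a local da(4) device
iscsictl -A -p 127.0.0.1 -t iqn.2026-03.org.example:target0

# Now the pool on that disk can be imported as if it were local
zpool import somepool
```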

The other option is zfs send over ssh: send from one pool and receive on the other end, recreating the pool there.
 
zpool list -v
Check the partition size and the zpool size.
Create a new partition and pool; see my previous posts, which explain this in detail.
Create a snapshot of the source.
Then zfs send | zfs receive; note that zfs send and zfs receive have many options, not always easy.
man zfs-send
man zfs-receive
Eg
Code:
zfs send -Rwv SOURCE/SSD@nowx | pv | zfs receive -Fdu DESTINATION/SSD
 
I recommend getting one, two, or better yet several additional storage drives. Any functioning HDD/SSD not currently in use will do; you may find some in your spare parts box in the attic, or get a few for a couple of bucks at a second-hand shop or a garage sale.
Create some zpools with those, write some random data to them (e.g. cp your /home/), and simply do some experiments: import, export, zfs send, snapshots, resilvering, even kill one drive within a mirror or raidz pool... play with it!
I agree with this but I'd also like to point out that this can be done for free and with much less physical hassle in a VM with virtual disks.
 
zpool list -v
Check the partition size and the zpool size.
Create a new partition and pool; see my previous posts, which explain this in detail.
Create a snapshot of the source.
Then zfs send | zfs receive; note that zfs send and zfs receive have many options, not always easy.
man zfs-send
man zfs-receive
Eg
Code:
zfs send -Rwv SOURCE/SSD@nowx | pv | zfs receive -Fdu DESTINATION/SSD
I don't know where I'm supposed to run this from.

And I don't understand the parameters. You say 'explaining in detail' but then 'many options, not always easy'. For someone who knows zero about ZFS that sounds error-prone.
 
this can be done for free and with much less physical hassle in a VM
That's right. Many things can. That's one major point of VMs: doing things without the needed hardware, and with no risk of really breaking it physically. (The other point is to run several different things on one machine at the same time instead of having multiple machines.)
But for that you already have to be pretty versed in how to set up, handle, and deal with VMs first.

Apart from that, it's good to have at least some experience with real hardware anyway, so you can compare and rate what your VMs do against what happens on real hardware. After all, it's a simulation.
Simulations can portray reality closely enough for a certain purpose, but they can never respect every parameter reality brings, which are not seldom of the kind: "F#c4! I'd never have thought of that in my wildest dreams!"😁
Simulations can only be useful tools for those who can assess the differences between reality and virtuality, which requires enough experience with reality. For those who cannot, believing they can trust the simulation blindly so that real-world tests are not needed at all, simulation is a trap. Sooner or later they find themselves confronted with situations they cannot explain and problems they cannot solve, caused by parameters of reality the simulation did not respect.

After all, a redundant zpool (mirror or raidz) only makes real sense for practical usage when it's built from real physical drives. Having, for example, a raidz3 zpool consisting of several virtual drives that all exist as files on one single physical drive only makes sense when you want to examine or test something (which is the case here), but it gives no more safety against hardware failure than the drive they are stored on offers. If that drive flares off, everything on it is toast too, no matter how many redundant virtual drive files were stored on it.
 
I don't know where I'm supposed to run this from.

And I don't understand the parameters. You say 'explaining in detail' but then 'many options, not always easy'. For someone who knows zero about ZFS that sounds error-prone.
You run this from any FreeBSD environment where the two pools are visible.
The command I gave you has SOURCE and DESTINATION; that much is clear. So you can only destroy DESTINATION :)
 
If I have a pool on one disk how do I create a new pool on another disk and copy over the contents?
Page 55 of FreeBSD Mastery: ZFS has a very simple set of commands that show you how to create a new zpool on a partition.
Chapters 0 through 3 have very good explanations of the different terminology used in ZFS (things like VDEV, SLOG, ZIL, pool, dataset).

You can also create a zpool using a "whole device" (the way Solaris/Illumos does).
A common recommendation is to use labels, because labels are consistent but device numbering may not be.

Assume your new device shows up as /dev/da37 and you are dedicating the whole thing to a zpool. Here is how I would do it (obviously all commands run as root from a terminal window or console):

First create partitions and gpt label:
gpart create -s gpt da37
gpart add -a 1m -l myzfsthing -t freebsd-zfs da37

Now create the new zpool:
zpool create datastuff gpt/myzfsthing

Now you have a new zpool named "datastuff" on the gpt partition labelled "myzfsthing"

zpool list should show the new pool "datastuff" and its vdev "gpt/myzfsthing"

zpools are interesting but datasets on them are more interesting.
To create a new dataset on zpool datastuff:
zfs create datastuff/myfirstdataset

zfs list
should now show that.

As for copying, there are about a million different ways to do that, depending on what exactly you are trying to accomplish.

zfs send | zfs receive is one way (there are probably half a billion examples a google search away)
tar cf - -C /source . | tar xf - -C /destination is another
standard cp -R works too

Honestly there are lots of references for doing zfs basics around, a bunch on this forum, a bunch over at klara systems, heck even the old Solaris/Oracle ones can be useful.
 
Lets say you have a zpool SOURCE and a zpool DESTINATION.
Code:
zfs snapshot -r SOURCE@migration_snap
zfs send -R SOURCE@migration_snap | zfs receive -F -u DESTINATION

It will recreate all datasets from the source on the destination, keeping all properties; the receive is done with the force and overwrite flags.
All mountpoints remain the same.
DESTINATION will be identical to SOURCE@migration_snap
 
I think I have changed the pool name on one device from zroot to something else, say xroot. How do I access it from my boot device?
 
If I have two ZFS devices, both with a zpool name of zroot, and I run zpool list, should they both show up, and how can I tell which one I have actually booted from?
Can you verify these assumptions were true before the rename?

1. You have a FreeBSD machine with two hard drives.
2. You installed FreeBSD with ZFS on Root via the installer on one hard drive; this created a pool called zroot on that drive.
3. You plugged-in/connected a second hard drive that had already been installed with FreeBSD/ZFS and now you have two zpools with the same name.

The disk that you boot from will be set in the BIOS or UEFI. The zpool on that disk containing the root filesystem will be the one that boots.
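To see at runtime which pool you actually booted from, a couple of quick checks help (zroot here is just the usual default name):

```shell
# The filesystem mounted on / names the boot pool,
# e.g. "zroot/ROOT/default on / (zfs, ...)"
mount | head -1

# The bootfs property of the pool names the dataset the loader boots
zpool get bootfs zroot

# Two pools with the same name are still distinguished by their GUIDs
zpool get guid zroot
```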

Have you got anything on either of these disks that you want to keep?

Are the disks the same make, model, size or are they different?
 