Solved ZFS/GELI Reboot Failure

spmzt

Developer
Hi,
On a freshly installed FreeBSD 13 system with ZFS/GELI, when I reboot or power on the machine I get the following error after entering the storage passphrase.
Code:
GELI Passphrase for disk0p4:

Calculating GELI Decryption Key for disk0p4: 2224665 iterations...
....
zio_read error: 45
zio_read error: 45
zio_read error: 45
zio_read error: 45
...
ZFS: i/o error - all block copies unavailable
zio_read error: 45
ZFS: i/o error - all block copies unavailable
ERROR: error loading module 'config' from file '/boot/lua/config.lua':
E        /boot/lua/config.lua:1: malformed number near '8q'.
E
E
Type '?'...
OK
But when I explicitly choose the boot entry from the UEFI menu, the system boots without any problems.
Any ideas?
 
After a successful boot:

efibootmgr -v
zpool status
geom part show
freebsd-version -kru
uname -aKU

Please share the outputs from those commands.

Can you describe the hardware?

Hard disk drive? And so on. Thanks.
 
Code:
~# efibootmgr -v
Boot to FW : false
BootCurrent: 0010
Timeout    : 0 seconds
BootOrder  : 0010, 000B, 000F, 0013, 000E, 000A, 000C, 0014, 0000, 0001, 0002, 0003, 0004, 0005, 0006, 0007, 0008, 0009, 0011, 000D
+Boot0010* FreeBSD HD(1,GPT,becdd594-67d2-11ec-9fef-dc4a3e6975b1,0x28,0x82000)/File(\efi\freebsd\loader.efi)
                      ada0p1:/efi/freebsd/loader.efi /boot/efi//efi/freebsd/loader.efi
 Boot000B* Generic USB3.0-CRW 29203008282014000 PciRoot(0x0)/Pci(0x14,0x0)/USB(0x17,0x0)
 Boot000F* hp DVDRW DU8A6SH  PciRoot(0x0)/Pci(0x17,0x0)/Sata(0x2,0x0,0x0)
 Boot0013* ST500DM002-1BD142  PciRoot(0x0)/Pci(0x17,0x0)/Sata(0x1,0x0,0x0)
 Boot000E* HUAWEI :  BBS(CDROM,HUAWEI : ,0x500)/PciRoot(0x0)/Pci(0x14,0x0)
 Boot000A* hp DVDRW DU8A6SH :  BBS(CDROM,hp DVDRW DU8A6SH : ,0x400)/PciRoot(0x0)/Pci(0x17,0x0)/Sata(0x2,0x0,0x0)
 Boot000C* CT240BX500SSD1 :  BBS(HD,CT240BX500SSD1 : ,0x400)/PciRoot(0x0)/Pci(0x17,0x0)/Sata(0x0,0x0,0x0)
 Boot0014* ST500DM002-1BD142 :  BBS(HD,ST500DM002-1BD142 : ,0x400)/PciRoot(0x0)/Pci(0x17,0x0)/Sata(0x1,0x0,0x0)
 Boot0000  Startup Menu Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0001  System Information Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0002  Bios Setup Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0003  3rd Party Option ROM Management Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0004  System Diagnostics Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0005  System Diagnostics Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0006  System Diagnostics Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0007  System Diagnostics Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0008  Boot Menu Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0009  HP Recovery Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot0011  Network Boot Fv(a881d567-6cb0-4eee-8435-2e72d33e45b5)/FvFile(9d8243e8-8381-453d-aceb-c350ee7757ca)
 Boot000D  HUAWEI PciRoot(0x0)/Pci(0x14,0x0)/USB(0xa,0x1)

# zpool status
  pool: zroot
 state: ONLINE
config:

        NAME          STATE     READ WRITE CKSUM
        zroot         ONLINE       0     0     0
          ada0p4.eli  ONLINE       0     0     0
          ada1p4.eli  ONLINE       0     0     0

# geom part show
=>       40  468862048  ada0  GPT  (224G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  464132096     4  freebsd-zfs  (221G)
  468860928       1160        - free -  (580K)

=>       40  976773088  ada1  GPT  (466G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  972044288     4  freebsd-zfs  (464G)
  976773120          8        - free -  (4.0K)

# freebsd-version -kru
13.0-RELEASE
13.0-RELEASE
13.0-RELEASE

# uname -aKU
FreeBSD hostname 13.0-RELEASE FreeBSD 13.0-RELEASE #0 releng/13.0-n244733-ea31abc261f: Fri Apr  9 02:00:00 UTC 2021     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64 1300139 1300139
 
Is that two separate installations of FreeBSD (one per disk), both encrypted but with different encryption keys?

Code:
# freebsd-version -kru
13.0-RELEASE
13.0-RELEASE
13.0-RELEASE

Outdated, so (when you can) update the system. I don't expect it to resolve the boot issue; just standard advice.
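For reference, a minimal sketch of the usual update path with freebsd-update(8); the target release below is only an example, pick whichever release is currently supported:
Code:
# patch the installed release to its latest patch level
freebsd-update fetch install
# or upgrade to a newer, supported release (13.2-RELEASE is only an example)
freebsd-update -r 13.2-RELEASE upgrade
freebsd-update install
shutdown -r now
# after the reboot, run the install step again to finish the userland update
freebsd-update install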
 
No. During installation (bsdinstall) I just added both drives as a stripe to ZFS and enabled encryption.
 
Can you share the output of diskinfo ada0? My first guess would be that those disks have native 4k sectors; you can't legacy-boot from such disks.
 
Code:
# diskinfo ada0 ada1
ada0 512 240057409536 468862128 0 0 465141 16 63
ada1 512 500107862016 976773168 4096 0 969021 16 63
 
That's not it, then; the sector size is reported as 512 (ada1 is 512e).
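For reference, the unlabeled columns of that diskinfo(8) output are, annotating the ada1 line from the post above:
Code:
# ada1  512  500107862016  976773168  4096  0  969021  16  63
#        |         |            |       |   |    \__ firmware geometry (cylinders/heads/sectors)
#        |         |            |       |   \__ stripe offset
#        |         |            |       \__ stripe size (4096 => 512e: 4K physical, 512 logical)
#        |         |            \__ media size in sectors
#        |         \__ media size in bytes
#        \__ logical sector size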
I first thought you had those disks in a mirror; it took me a few reads of the output to see that you're striping them together. Is that intended?

Anyway, I tried this in a VM. With only a small 20G disk everything was fine; when I added a 1TB disk to the stripe I got these errors, as if 1TB were simply too much for it to handle.
 
Hello,
I think I have the same, or at least a comparable, problem.
It happens on a virtual server that I've rented.
The server comes with a 1.5 TB (virtual) disk, and I rented additional space (also 1.5 TB) that is available as a second virtual disk device.
Using the installer's guided Root-on-ZFS option, I striped the two disks together into one encrypted ZFS pool.
After one successful boot right after the installation, all subsequent boots fail with an error (I cannot copy & paste text from my virtual console, hence the screenshot):

1736930024996.png



The problem seems to be reproducible: the same behavior occurred after a second fresh installation.
The first boot succeeds; all following boots end in the error mentioned above.
So I was able to extract some information:

efibootmgr returns:
1736930154662.png


geom part show returns:
1736930198124.png


diskinfo returns:
1736930244707.png


zpool status returns:
1736930285059.png



If I do an installation using only one of the two disks, it seems to work; at least I was able to do three successful boots in a row. :)

This happens on FreeBSD 14.2 AMD64.
I've read in another thread here that there are issues with disks larger than 2 TB and that a UEFI/BIOS setting was necessary to handle that.
Since my disks are only 1.5 TB each, and since this is a virtual server rented from an ISP (QEMU-based, I think), I have no control over any such settings anyway.

Any idea how I can get this to run? I really want one pool with that amount of free storage, since I want to use the machine for remote backups and also for hosting my own Nextcloud instance.
(If you are wondering: I really only want to stripe the disks rather than build a redundant ZFS pool, because a) I have a physical NAS at home with a redundant ZFS pool for my primary backup; this virtual server will just be a remote backup holding the latest snapshot of my data, and b) the virtual disks I get from the ISP are already backed by a RAID.)

Maybe a workaround (I haven't tried it yet) would be to rent a third virtual disk of, say, 100 GB, use that for the zroot pool, and then add my two large disks as a striped second pool, zdata.
But since I don't really understand what the underlying problem is, I'm not sure whether it only arises because I boot from that pool, or whether it could also happen when I import the zdata pool after booting.

Any help is welcome!

Kind regards & thanks,
Fool
 
I was not able to replicate your issue. I did replicate your conditions, installed 14.2 and did several reboots:
Code:
root@fbsd14:~ # diskinfo vtbd0
vtbd0    512    1649267441664    3221225472    0    0    3195660    16    63
root@fbsd14:~ # diskinfo vtbd1
vtbd1    512    1649267441664    3221225472    0    0    3195660    16    63
root@fbsd14:~ #

root@fbsd14:~ # gpart show
=>        40  3221225392  vtbd0  GPT  (1.5T)
          40      532480      1  efi  (260M)
      532520        1024      2  freebsd-boot  (512K)
      533544         984         - free -  (492K)
      534528     4194304      3  freebsd-swap  (2.0G)
     4728832  3216494592      4  freebsd-zfs  (1.5T)
  3221223424        2008         - free -  (1.0M)

=>        40  3221225392  vtbd1  GPT  (1.5T)
          40      532480      1  efi  (260M)
      532520        1024      2  freebsd-boot  (512K)
      533544         984         - free -  (492K)
      534528     4194304      3  freebsd-swap  (2.0G)
     4728832  3216494592      4  freebsd-zfs  (1.5T)
  3221223424        2008         - free -  (1.0M)


root@fbsd14:~ # zpool status
  pool: rpool
 state: ONLINE
config:

    NAME           STATE     READ WRITE CKSUM
    rpool          ONLINE       0     0     0
      vtbd0p4.eli  ONLINE       0     0     0
      vtbd1p4.eli  ONLINE       0     0     0

errors: No known data errors
root@fbsd14:~ #

root@fbsd14:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  2.98T   968M  2.98T        -         -     0%     0%  1.00x    ONLINE  -
root@fbsd14:~ #

root@fbsd14:~ # last|grep boot
boot time                                  Wed Jan 15 10:59
boot time                                  Wed Jan 15 10:47
boot time                                  Wed Jan 15 10:45
boot time                                  Wed Jan 15 10:43
root@fbsd14:~ #

Note that your disks are not actually the same size. While this shouldn't matter in a stripe, it is odd; I'd expect the disks to be exactly the same size (especially since they are virtual).

Is it really necessary to stripe virtual disks? Doesn't your provider offer a better option with a single bigger disk, e.g. a premium disk (higher throughput, etc.)?
 
Hi,
thanks for your response!
A virtual server with a larger disk comes, by default, with more CPU cores, more RAM and of course a higher monthly price. Price-wise it is cheaper to rent additional space as a second disk than to rent the 'larger' virtual server. I can of course contact support and ask whether they could provide the disk space I need as a single virtual disk, but I think chances are high they won't.
The disks don't match in size because I can freely choose the size of my additional storage (the second device) in GB; I selected 1500 GB. I could delete that space and request new space with somewhat more or fewer GB to match the default disk; at least I hope I could get disks of the same size that way.
But, as you said, to my knowledge that shouldn't be an issue for striping disks...

Do you know what the actual problem is? What does zio_read error: 45 really mean? I haven't found an explanation yet along the lines of "error code 45 means xyz".
Do you think it is related to booting from that zpool? And what do you think of my workaround of having a small zroot on one disk and a second large data pool striped across these existing disks?

Kind regards,
Fool
 
You can look up the error numbers in errno(2).
If this is about the money, then the stripe solution is justified.

This error happens in the early loader stage (stand/), when the loader tries to read from ZFS. It is related to booting from that pool; the read appears to fail on config.lua. Why? Impossible (for me) to tell right now.
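If you want to translate the number yourself, a quick way to look up an errno value on an installed system (the number is simply the one from the zio_read message):
Code:
# look up error number 45 in the system's errno table
grep -w 45 /usr/include/sys/errno.h
# on FreeBSD this should match EOPNOTSUPP ("Operation not supported")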

Are you doing something that triggers this that you haven't mentioned?
I did another test with different disk sizes and still can't trigger the problem:

Code:
root@fbsd14:~ # diskinfo vtbd0
vtbd0    512    1605286977024    3135326127    0    0    3110442    16    63
root@fbsd14:~ # diskinfo vtbd1
vtbd1    512    1594291860480    3113851290    0    0    3089138    16    63
root@fbsd14:~ #

root@fbsd14:~ # gpart show
=>        40  3135326048  vtbd0  GPT  (1.5T)
          40      532480      1  efi  (260M)
      532520        1024      2  freebsd-boot  (512K)
      533544         984         - free -  (492K)
      534528     4194304      3  freebsd-swap  (2.0G)
     4728832  3130595328      4  freebsd-zfs  (1.5T)
  3135324160        1928         - free -  (964K)

=>        40  3113851216  vtbd1  GPT  (1.4T)
          40      532480      1  efi  (260M)
      532520        1024      2  freebsd-boot  (512K)
      533544         984         - free -  (492K)
      534528     4194304      3  freebsd-swap  (2.0G)
     4728832  3109122048      4  freebsd-zfs  (1.4T)
  3113850880         376         - free -  (188K)


root@fbsd14:~ # zpool status
  pool: rpool
 state: ONLINE
config:

    NAME           STATE     READ WRITE CKSUM
    rpool          ONLINE       0     0     0
      vtbd0p4.eli  ONLINE       0     0     0
      vtbd1p4.eli  ONLINE       0     0     0

errors: No known data errors
root@fbsd14:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  2.90T   810M  2.90T        -         -     0%     0%  1.00x    ONLINE  -
root@fbsd14:~ #

root@fbsd14:~ # last | grep boot
boot time                                  Wed Jan 15 15:13
boot time                                  Wed Jan 15 15:12
boot time                                  Wed Jan 15 15:11
root@fbsd14:~ #

Having a separate rpool for the system and a data pool for the actual data is good design; it is something I'd do. In a cloud environment it also makes it easier to swap out the system: you take the data disks and attach them to a different VM. So yes, I would go this way.
I would avoid it only if the additional disk costs too much money. 100G for the system disk is more than you need; you can live with far less.
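A rough sketch of that layout from the running system; the device names, GELI parameters and pool name are only examples, adjust them to your actual partitioning:
Code:
# initialise GELI on the two data partitions (prompts for a passphrase)
geli init -s 4096 /dev/vtbd1p1
geli init -s 4096 /dev/vtbd2p1
# attach them, which creates the corresponding .eli providers
geli attach /dev/vtbd1p1
geli attach /dev/vtbd2p1
# create the striped data pool on the decrypted providers
zpool create zdata /dev/vtbd1p1.eli /dev/vtbd2p1.eli
# unlike the root pool, these providers are attached after boot: either attach
# them by hand before importing zdata, or list them in rc.conf(5) (geli_devices)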
 
I couldn't replicate the error either. It's not exactly the same setup: the storage devices are dynamically allocated rather than fixed, running 14.2-RELEASE in a VirtualBox VM, two disks in a stripe, unequal disk sizes of around 1.5T (2TB in a second test).

Perhaps the disk size of the Root-on-ZFS partition is the issue in your case.

Test the setup with a smaller zroot partition: in the "ZFS Configuration" menu, choose a large swap size, e.g. 1400g. This will result in Root-on-ZFS partitions of roughly 100g each. After the installation has finished, still in the installation session, delete the swap partitions and create 8G swap on both disks, leaving the rest free. If the test results are satisfactory, the remaining free space can be used for the encrypted data pool later.
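A hedged sketch of that post-install step, run from the installer's shell; it assumes the guided layout shown earlier (swap as partition index 3, no swap mirroring or encryption) and the vtbd device names:
Code:
# if swap is already enabled, turn it off first
swapoff -a
# delete the oversized swap partitions on both disks
gpart delete -i 3 vtbd0
gpart delete -i 3 vtbd1
# recreate small 8G swap partitions, leaving the rest of the space unallocated
gpart add -t freebsd-swap -s 8G -i 3 vtbd0
gpart add -t freebsd-swap -s 8G -i 3 vtbd1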

Eventually, move (dd(1)) the Root-on-ZFS partitions from the end of the disks to just behind swap, after first creating new zroot partitions on both disks with the exact size of the originals as shown by gpart.

If the problem persists, and the provider doesn't offer a better option with a single bigger disk
Maybe a workaround (I haven't tried it yet) would be to rent a third virtual disk of, say, 100 GB, use that for the zroot pool, and then add my two large disks as a striped second pool, zdata.
then, alternatively, instead of renting a third 100G disk, remove one of the ~100G zroot partitions from the pool (leaving zroot on a single disk) and use the whole of the freed disk for the data pool.

Also, on a one-disk system, the huge-swap-space method, followed by deleting and recreating smaller partitions, can be used to separate the root file system partition from a data partition while still using the installer's guided Root-on-ZFS menu. Instead of setting everything up manually, only the data pool has to be created by hand.
 
The original poster (spmzt) has shown some of the messages they observed and one message is of particular importance:

Calculating GELI Decryption Key for disk0p4: 2224665 iterations...

There is a message for decrypting one of the disks only. The other disk likely remains encrypted.

When one disk is accessible to ZFS and the other one isn't, reading the files necessary for booting becomes a game of chance. If all blocks that ZFS tries to read are on the first disk, you are lucky. If at least one block is not there, you get an all block copies unavailable error for that block.

The original poster (spmzt) removed the second disk from the pool and the problem went away. This is consistent with encryption being misconfigured on the second disk.

I reproduced the problem successfully in a VM by removing the flags BOOT (-b) and GELIBOOT (-g) from the second geli provider.
The output of the loader command lsdev -v illustrated the problem nicely.
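For anyone who wants to reproduce or undo that state, the flags can be cleared and restored with geli(8); the provider name below is only an example:
Code:
# clear the BOOT and GELIBOOT flags on the second provider (reproduces the failure)
geli configure -B -G ada1p4
# set them again
geli configure -b -g ada1p4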
 
Hello,
thanks for the additional info. I might go with the solution of renting another additional virtual disk for zroot and then having a second pool for the data. The additional cost for a small extra virtual disk is manageable. ;-)

But checking the BOOT and GELIBOOT flags sounds interesting. I guess I'll do another installation and check how they are set.
I'm not deep into the boot process, but it sounds like I can see whether these flags are set for a disk / geli provider with
lsdev -v?
I guess the BOOT and GELIBOOT flags should be set for both disks / geli providers? If that is not the case, might that be the reason?
How can I set these flags?

Thanks!
 
Hello,
thanks for the additional info. I might go with the solution of renting another additional virtual disk for zroot and then having a second pool for the data. The additional cost for a small extra virtual disk is manageable. ;-)
Your original two disks will work perfectly without adding a third one if the geli encryption is configured properly. For me, the installer did configure the encryption properly.

But checking the BOOT and GELIBOOT flags sounds interesting. I guess I'll do another installation and check how they are set.
I'm not deep into the boot process, but it sounds like I can see whether these flags are set for a disk / geli provider with
lsdev -v?
I guess the BOOT and GELIBOOT flags should be set for both disks / geli providers? If that is not the case, might that be the reason?
How can I set these flags?

The output of the loader command lsdev -v when encryption on the second disk is misconfigured looks like this:
zpool_seen_by_loader.png

It is immediately obvious where you need to focus your attention.

The flags BOOT and GELIBOOT should be set for all geli providers that ZFS will need to access during bootstrap. You can manually set the flags with geli configure -b -g <provider>.
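A minimal sketch of checking and, if needed, setting them from the running system; the provider names follow the earlier outputs, adjust them to your layout:
Code:
# inspect the attached providers' metadata and look at the Flags: line
geli list vtbd0p4.eli | grep -i flags
geli list vtbd1p4.eli | grep -i flags
# add BOOT and GELIBOOT on the underlying partitions if either flag is missing
geli configure -b -g vtbd0p4
geli configure -b -g vtbd1p4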
 
Hi,

unfortunately I'm still struggling with this.
I did a fresh install and, as usual, I can shut down the virtual server after the install, remove the FreeBSD ISO from the virtual DVD drive, and boot successfully the first time.
The two virtual disks I striped with the guided ZFS installer are named vtbd0 and vtbd1:

Bildschirmfoto vom 2025-01-17 10-03-43.png


After this first successful boot the zpool is fine; both disks are online and neither is degraded.

If I check /dev I also see the two eli providers vtbd0p4.eli and vtbd1p4.eli. I assume that vtbd0p2 and vtbd1p2 are the freebsd-boot partitions, *p3 swap, and so on, right?
(I was wondering why all these partitions exist on both virtual disks. I selected "mirror swap" in the guided ZFS installer, so for swap it makes sense, but why for the efi and freebsd-boot partitions?)

Bildschirmfoto vom 2025-01-17 10-02-08.png


Now I checked vtbd0p4.eli and vtbd1p4.eli to see whether the BOOT and GELIBOOT flags are set; that is the case for both disks:

Bildschirmfoto vom 2025-01-17 10-05-29.png

Bildschirmfoto vom 2025-01-17 10-05-47.png


But if I reboot now, the zio_read error 45 problem occurs, and if I execute lsdev -v I see that vtbd1p4 is not decrypted / available:

Bildschirmfoto vom 2025-01-17 10-10-51.png


Do I also need to check whether the BOOT flag is set on the other partitions (like freebsd-boot)? (GELIBOOT wouldn't make sense there, I guess, since they are not encrypted?)
How can I check those partitions for the flags? Since they are not encrypted / not eli providers, I can't check them with geli list.
I tried geom but I'm not sure about the class. (I tried part, but that didn't work.)

Can you tell me which partitions I need to check (and with what command), and how to set the BOOT flag there if needed?
(geli configure -b -g only works for geli providers, I assume, not for other partitions? I have, by the way, run it on both providers just to be sure, but it didn't help. And, as I said, the BOOT and GELIBOOT flags were already set after the installation.)

Since my problem is reproducible, I can provide more output if needed.

Thank you very much so far!

Kind regards,
Fool
 
Make sure that the second disk, disk1, is not actually detached from the VM when it is powered on; lsdev confirms that disk is not there.
 
Hi,
How do you know that the disk is not attached when the VM is powered on?
Would it be listed as disk1 in the "disk devices" section of lsdev if it were available but not decrypted?

If that really is the problem, I'm not sure what I can do about it besides writing to my ISP's support. :(
I rented this VM and can manage it via the control panel the ISP provides, but as you can imagine I only have limited options there.

Thanks & kind regards,
Fool
 
How do you know that the disk is not attached when the VM is powered on?
lsdev showed us.

Yes, it would show up like this (using my second example, where the VM has disks of slightly different sizes):
Code:
OK lsdev
cd devices:
    cd0:    0 blocks (no media)
disk devices:
    disk0:    3135326127 X 512 blocks
      disk0p1: EFI
      disk0p2: FreeBSD boot
      disk0p3: FreeBSD swap
      disk0p4: FreeBSD ZFS
    disk1:    3113851290 X 512 blocks
      disk1p1: EFI
      disk1p2: FreeBSD boot
      disk1p3: FreeBSD swap
      disk1p4: FreeBSD ZFS
http: (unknown)
net devices:
    net0:
zfs devices:
    zfs:rpool
OK
 
Just as an update: I contacted my ISP's support but haven't received a response yet about the issue that one of my virtual disks does not seem to be available at boot time.
I'm now using my workaround with a third virtual disk that holds zroot, while the other two disks are striped into a zdata pool.
That solution works and is fine for me. In fact, when switching to another virtual server in the future, it might even make it easier to move my data over. :)

Thanks for all your help here!
 
I had the exact same issue here with libvirt (KVM/QEMU).
Capture d’écran du 2025-03-19 10-28-41.png

Capture d’écran du 2025-03-19 10-33-02.png


The problem was that my drives were attached as IDE instead of VirtIO, and the second one wasn't enabled in the boot order.
Strangely, it worked fine in BIOS-only mode with both disks on IDE and only one of them in the boot order.
Capture d’écran du 2025-03-19 11-50-13.png

sh:
# efibootmgr -v
Boot to FW : false
BootCurrent: 0001
Timeout    : 3 seconds
BootOrder  : 0001, 0003, 0000, 0002
+Boot0001* UEFI Misc Device PciRoot(0x0)/Pci(0xb,0x0)
 Boot0003* UEFI Misc Device 2 PciRoot(0x0)/Pci(0xc,0x0)
 Boot0000* UiApp Fv(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
 Boot0002* EFI Internal Shell Fv(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(7c04a583-9e3e-4f1c-ad65-e05268d0b4d1)

# gpart show
=>      34  29427133  vtbd0  GPT  (14G)
        34       306      1  freebsd-boot  (153K)
       340     66584      2  efi  (33M)
     66924   2097152      3  freebsd-swap  (1.0G)
   2164076  27263091      4  freebsd-zfs  (13G)

=>      40  14679984  vtbd1  GPT  (7.0G)
        40  14679984      1  freebsd-zfs  (7.0G)
# zpool status
  pool: vulture
 state: ONLINE
  scan: resilvered 4.03G in 00:00:26 with 0 errors on Mon Jun  3 15:38:10 2024
config:

    NAME        STATE     READ WRITE CKSUM
    vulture     ONLINE       0     0     0
      vtbd0p4   ONLINE       0     0     0
      vtbd1p1   ONLINE       0     0     0

errors: No known data errors
# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
vulture    19G  16.6G  2.36G        -         -    59%    87%  1.00x    ONLINE  -
 