Solved FreeBSD 9 GEOM / ZFS bug?

I am trying to create a 1 MiB-aligned GPT partition for a root zpool in VirtualBox with the FreeBSD-9.0-RC1-amd64 Live CD:

Code:
# gpart create -s gpt ada0
ada0 created

# gpart add -t freebsd-zfs -b 2048 ada0
ada0p1 added

# zpool create tank ada0p1
cannot mount '/tank': failed to create mountpoint
(this happens because the Live CD root is not writable, but it does not matter here)

# zpool export tank
# zpool import tank
cannot import 'tank': one or more devices is currently unavailable

# zpool import
  pool: tank
    id: 102288411352558724
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        tank                   FAULTED corrupted data
          7405121539160497590  UNAVAIL corrupted data

It seems GEOM writes something to the ZFS partition, thus corrupting it?
I also tested this with an MBR partitioning scheme, with the same result.

Importing the pool with FreeBSD-8.2 works fine.

Any thoughts?
 
Ah, I misunderstood. I thought you were trying to create a 1 MB filesystem.
 
I found a partial solution to this. You'll have to leave some empty space (at least about 500 sectors) after the freebsd-zfs partition to be able to export and import the zpool successfully.

For example:
Code:
# gpart destroy -F ada0
ada0 destroyed

# gpart create -s gpt ada0
ada0 created

# camcontrol identify ada0 | grep LBA48
LBA48 supported       488282112 sectors

The figure above is my disk size in 512-byte sectors.
From it we can calculate a partition that leaves 1 MiB (2048 sectors) of empty space at both the start and the end:
Partition size: 488282112 - 2048 - 2048 = 488278016 sectors.
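The arithmetic can be sketched in plain sh (the sector count comes from the camcontrol output above):

```shell
# Partition size that leaves 1 MiB (2048 sectors of 512 bytes) free at
# both the start and the end of the disk.
DISK_SECTORS=488282112   # from "camcontrol identify" above
ALIGN=2048               # 1 MiB in 512-byte sectors
PART_SIZE=$((DISK_SECTORS - 2 * ALIGN))
echo "$PART_SIZE"        # 488278016
```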

Code:
# gpart add -t freebsd-zfs -b2048 -s488278016 ada0
ada0p1 added

# gpart show
=>       34  488282045  ada0  GPT  (232G)
         34       2014        - free -  (1M)
       2048  488278016     1  freebsd-zfs  (232G)
  488280064       2015        - free -  (1M)

As we can see above, GPT reserves 34 sectors at the start of the disk (the protective MBR plus the primary header and table) and 33 at the end (the backup table and header), as it should.

Finally we create a zpool and try to export & import it back.
Code:
# zpool create media ada0p1
# zpool export media
# zpool import media
No error messages, success!

...BUT FreeBSD-8 does not need such empty space at the end of the disk.

The configuration I presented in the first post works perfectly fine there.

But a zpool on a GPT partition that extends to the very end of the disk is completely unimportable on FreeBSD-9!
 
Works here using real hardware.
Code:
root@whitezone ~ # gpart show da2
=>       34  976773097  da2  GPT  (465G)
         34       2014       - free -  (1M)
       2048  976771083    1  freebsd-zfs  (465G)

root@whitezone ~ #

Code:
root@whitezone ~ # uname -a
FreeBSD whitezone.joesgarage 9.0-RC1 FreeBSD 9.0-RC1 #0: Mon Oct 17 16:01:16 EEST 2011     root@whitezone.joesgarage:/usr/obj/usr/src/sys/GENERIC  amd64

After creating single disk pool, zpool export, zpool import:

Code:
root@whitezone ~ # zpool status tank
  pool: tank
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          da2p1     ONLINE       0     0     0

errors: No known data errors
root@whitezone ~ #

Maybe this is something particular to VirtualBox?
 
mix_room said:
One question is why you are using ZFS with partitions. Why not just access the disks directly?

I am trying to create a root filesystem on ZFS. It does not work with ZFS on a whole disk. The complete partition setup on FreeBSD-8 looks like this (export+import does not work with this setup on FreeBSD-9):
Code:
# gpart show
=>       34  488282045  ada0  GPT  (232G)
         34          6        - free -  (3.0k)
         40       2008     1  freebsd-boot  (1M)
       2048    8386560     2  freebsd-swap  (4G)
    8388608  479893471     3  freebsd-zfs  (228G)

The boot partition is aligned to 4096 bytes and the other partitions are aligned to 1 MiB.
I have simplified the other examples by leaving out the boot and swap partitions.
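The alignment claims can be checked with shell arithmetic: a partition starting at sector S is aligned to N bytes when (S * 512) % N is zero. The start sectors are taken from the gpart output above:

```shell
# A partition is aligned to N bytes when (start_sector * 512) % N == 0.
echo $(( 40      * 512 % 4096 ))     # freebsd-boot vs 4096 B -> 0
echo $(( 2048    * 512 % 1048576 ))  # freebsd-swap vs 1 MiB  -> 0
echo $(( 8388608 * 512 % 1048576 ))  # freebsd-zfs  vs 1 MiB  -> 0
```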

kpa said:
Works here using real hardware.
Maybe this is something particular to VirtualBox?

Thanks for testing this.

I have also tested this on real hardware, but with a smaller disk. There I was able to reproduce the import failure.

It seems this may have something to do with particular offsets; I'll have to research this more.
 
Here are some partition setups I tried with real hardware:

disk: 250GB WDC WD2500JS-55NCB1
Code:
# gpart show ada0
=>       34  488397101  ada0  GPT  (232G)
         34          6        - free -  (3.0k)
         40       2008     1  freebsd-boot  (1M)
       2048    8386560     2  freebsd-swap  (4G)
    8388608  480008527     3  freebsd-zfs  (228G)
Cannot import zpool!
Code:
# gpart show ada0
=>       34  488397101  ada0  GPT  (232G)
         34          6        - free -  (3.0k)
         40       2008     1  freebsd-boot  (1M)
       2048    8386560     2  freebsd-swap  (4G)
    8388608  480008192     3  freebsd-zfs  (228G)
  488396800        335        - free -  (167k)
Cannot import zpool!
Code:
# gpart show ada0
=>       34  488397101  ada0  GPT  (232G)
         34          6        - free -  (3.0k)
         40       2008     1  freebsd-boot  (1M)
       2048    8386560     2  freebsd-swap  (4G)
    8388608  480006144     3  freebsd-zfs  (228G)
  488394752       2383        - free -  (1.2M)
zpool import works fine.


disk: 2TB SAMSUNG HD204UI
Code:
# gpart add -t freebsd-zfs -b2048 ada1
ada1p1 added
# gpart show ada1
=>        34  3907029101  ada1  GPT  (1.8T)
          34        2014        - free -  (1M)
        2048  3907027080     1  freebsd-zfs  (1.8T)
  3907029128           7        - free -  (3.5k)
(gpart seems to automatically leave some empty space at the end here)
Cannot import zpool!
Code:
# gpart add -t freebsd-zfs -b2048 -s3907027087 ada1
ada1p1 added
# gpart show ada1
=>        34  3907029101  ada1  GPT  (1.8T)
          34        2014        - free -  (1M)
        2048  3907027087     1  freebsd-zfs  (1.8T)
Cannot import zpool!
 
If you shrink your freebsd-boot partition to 512 KB, and then create your freebsd-zfs partition starting at block 2048 (1M), I bet you'll be able to access everything correctly.

The FreeBSD boot loader has issues with large boot partitions. 128 KB is the "standard" size for the boot partition. 512 KB gives you room to grow in the future. Anything over that can cause issues. Anything over 1 MB will fail.

Code:
# gpart create -s GPT ada1
# gpart add -t freebsd-boot -s 256 ada1
# gpart add -t freebsd-zfs -b 2048 ada1
# gpart show ada1
=>       34  125045357  ada1  GPT  (59G)
         34        256     1  freebsd-boot  (128k)
        290       1758        - free -  (879k)
       2048   33554432     2  freebsd-zfs  (16G)
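gpart sizes given without a unit suffix are in 512-byte sectors, so 1 KiB is 2 sectors. A small helper (hypothetical, for illustration only) shows the conversions used above:

```shell
# Hypothetical helper: convert KiB to 512-byte sectors (1 KiB = 2 sectors).
kib_to_sectors() { echo $(( $1 * 2 )); }
kib_to_sectors 128   # 256  sectors (the freebsd-boot partition above)
kib_to_sectors 512   # 1024 sectors (the recommended room-to-grow size)
```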
 
phoenix said:
If you shrink your freebsd-boot partition to 512 KB, and then create your freebsd-zfs partition starting at block 2048 (1M), I bet you'll be able to access everything correctly.

The FreeBSD boot loader has issues with large boot partitions. 128 KB is the "standard" size for the boot partition. 512 KB gives you room to grow in the future. Anything over that can cause issues. Anything over 1 MB will fail.

Yes, I found this out the hard way: I got a "Boot loader too large" message when trying to boot.
Also, the new gpart(8) man page's BOOTSTRAPPING section says: "The freebsd-boot partition should be smaller than 545 KB".


Back to the topic:
It seems this problem with zpool import happens only when using the Live CD option of the installer. This may have to do with the fact that /boot/zfs/zpool.cache cannot be created on a read-only filesystem.


I also found that, to reproduce the zpool import failure, at least the first 545 sectors of the device must be empty beforehand:
Code:
# dd if=/dev/zero of=/dev/da0 count=545

# gpart create -s gpt da0
da0 created
# gpart add -t freebsd-zfs -b2048 da0
da0p1 added

# zpool create media da0p1
# zpool export media
# zpool import media
cannot import 'media': one or more devices is currently unavailable


But if you first create a zpool on the whole disk, destroy it, and then create a zpool inside a freebsd-zfs partition, importing works:
Code:
# zpool create media da0
# zpool destroy media
Code:
# gpart create -s gpt da0
da0 created
# gpart add -t freebsd-zfs -b2048 da0
da0p1 added
Code:
# zpool create media da0p1
# zpool export media
# zpool import media
# zpool status media
  pool: media
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        media       ONLINE       0     0     0
          da0p1     ONLINE       0     0     0

errors: No known data errors


Why is this?
 
This is in VirtualBox. Whenever I create a zpool starting at a block that is a multiple of 2048, the pool cannot be imported again.

Code:
# gpart create -s gpt ada2
ada2 created
# gpart add -s 512k -t freebsd-boot ada2
ada2p1 added
# gpart add -b 2048 -t freebsd-zfs ada2
ada2p2 added
# zpool create tank2 /dev/ada2p2
# zpool export tank2
# zpool import tank2
cannot import 'tank2': one or more devices is currently unavailable
# gpart show ada2
=>     34  4194237  ada2  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058      990        - free -  (495k)
     2048  4192223     2  freebsd-zfs  (2G)
Cannot import zpool!



Code:
# gpart create -s gpt ada3
ada3 created
# gpart add -s 512k -t freebsd-boot ada3
ada3p1 added
# gpart add -t freebsd-zfs ada3
ada3p2 added
# zpool create tank3 /dev/ada3p2
# zpool export tank3
# zpool import tank3
# gpart show ada3
=>     34  4194237  ada3  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058  4193213     2  freebsd-zfs  (2G)
zpool import works fine.



Code:
# gpart create -s gpt ada4
ada4 created
# gpart add -s 512k -t freebsd-boot ada4
ada4p1 added
# gpart add -b 1059 -t freebsd-zfs ada4
ada4p2 added
# zpool create tank4 /dev/ada4p2
# zpool export tank4
# zpool import tank4
# gpart show ada4
=>     34  4194237  ada4  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058        1        - free -  (512B)
     1059  4193212     2  freebsd-zfs  (2G)
zpool import works fine.



Code:
# gpart create -s gpt ada5
ada5 created
# gpart add -s 512k -t freebsd-boot ada5
ada5p1 added
# gpart add -b 2047 -t freebsd-zfs ada5
ada5p2 added
# zpool create tank5 /dev/ada5p2
# zpool export tank5
# zpool import tank5
# gpart show ada5
=>     34  4194237  ada5  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058      989        - free -  (494k)
     2047  4192224     2  freebsd-zfs  (2G)
zpool import works fine.


Code:
# gpart add -s 512k -t freebsd-boot ada6
ada6p1 added
# gpart add -b 2049 -t freebsd-zfs ada6
ada6p2 added
# zpool create tank6 /dev/ada6p2
# zpool export tank6
# zpool import tank6
# gpart show ada6
=>     34  4194237  ada6  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058      991        - free -  (495k)
     2049  4192222     2  freebsd-zfs  (2G)
zpool import works fine.


Code:
# gpart create -s gpt ada7
ada7 created
# gpart add -s 512k -t freebsd-boot ada7
ada7p1 added
# gpart add -b 4096 -t freebsd-zfs ada7
ada7p2 added
# zpool create tank7 /dev/ada7p2
# zpool export tank7
# zpool import tank7
cannot import 'tank7': one or more devices is currently unavailable
# gpart show ada7
=>     34  4194237  ada7  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058     3038        - free -  (1.5M)
     4096  4190175     2  freebsd-zfs  (2G)
Cannot import zpool!


Code:
# gpart create -s gpt ada8
ada8 created
# gpart add -s 512k -t freebsd-boot ada8
ada8p1 added
# gpart add -b 6144 -t freebsd-zfs ada8
ada8p2 added
# zpool create tank8 /dev/ada8p2
# zpool export tank8
# zpool import tank8
cannot import 'tank8': one or more devices is currently unavailable
# gpart show ada8
=>     34  4194237  ada8  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058     5086        - free -  (2.5M)
     6144  4188127     2  freebsd-zfs  (2G)
Cannot import zpool!


Code:
# gpart create -s gpt ada9
ada9 created
# gpart add -s 512k -t freebsd-boot ada9
ada9p1 added
# gpart add -b 6148 -t freebsd-zfs ada9
ada9p2 added
# zpool create tank9 /dev/ada9p2
# zpool export tank9
# zpool import tank9
# gpart show ada9
=>     34  4194237  ada9  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058     5090        - free -  (2.5M)
     6148  4188123     2  freebsd-zfs  (2G)
zpool import works fine.
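A pattern emerges from the experiments above: the imports that fail are exactly the ones whose freebsd-zfs partition starts at a multiple of 2048 sectors. A quick sketch checking each tested start block against that pattern:

```shell
# Start blocks tested above; in the experiments, import failed exactly
# when the start block was a multiple of 2048 (1 MiB alignment).
for B in 1058 1059 2047 2048 2049 4096 6144 6148; do
  if [ $(( B % 2048 )) -eq 0 ]; then
    echo "$B: multiple of 2048 (import failed)"
  else
    echo "$B: not a multiple (import worked)"
  fi
done
```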
 
I have problems reproducing this bug with yesterday's 9-STABLE. The bug might have been fixed (or is better hidden :).

Code:
# uname -a
FreeBSD freebsd21a 9.0-RC1 FreeBSD 9.0-RC1 #0: Thu Nov 10 03:08:49 CET 2011     ski@freebsd21a:/usr/obj/usr/src/sys/GENERIC  amd64
# gpart create -s gpt ada2
ada2 created
# gpart add -s 512k -t freebsd-boot ada2
ada2p1 added
# gpart add -b 2048 -t freebsd-zfs ada2
ada2p2 added
# zpool create tank2 /dev/ada2p2
# zpool export tank2
# zpool import tank2
cannot import 'tank2': one or more devices is currently unavailable
# gpart show ada2
=>     34  4194237  ada2  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058      990        - free -  (495k)
     2048  4192223     2  freebsd-zfs  (2G)
# zpool status tank2
cannot open 'tank2': no such pool
#
Cannot import zpool on 9.0-RC1!

Code:
# cat /root/stable\-supfile
*default host=cvsup.ch.freebsd.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_9
*default delete use-rel-suffix
*default compress
src-all
# uname -a
FreeBSD freebsd21b 9.0-RC2 FreeBSD 9.0-RC2 #0: Thu Nov 10 02:54:19 CET 2011     ski@freebsd21b:/usr/obj/usr/src/sys/GENERIC  amd64
# gpart create -s gpt ada2
ada2 created
# gpart add -s 512k -t freebsd-boot ada2
ada2p1 added
# gpart add -b 2048 -t freebsd-zfs ada2
ada2p2 added
# zpool create tank2 /dev/ada2p2
# zpool export tank2
# zpool import tank2
# gpart show ada2
=>     34  4194237  ada2  GPT  (2.0G)
       34     1024     1  freebsd-boot  (512k)
     1058      990        - free -  (495k)
     2048  4192223     2  freebsd-zfs  (2G)

# zpool status tank2
  pool: tank2
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank2       ONLINE       0     0     0
          ada2p2    ONLINE       0     0     0

errors: No known data errors
#
zpool import works fine with 9-STABLE.
 
skirmess said:
I have problems reproducing this bug with yesterday's 9-STABLE. The bug might have been fixed (or is better hidden :).

I can confirm this. It seems this bug is fixed in RC2.
 