Migrate ZFS to FreeBSD 9 from OpenIndiana

I am trying to move my ZFS file server from OpenIndiana (Illumos) to FreeBSD 9. I am new to FreeBSD, so some fundamentals may be escaping me at the moment. The current issue I am running into is that FreeBSD picks up my zpool as degraded; it seems to think there is a missing device inside my vdev. My immediate thought is simply that FreeBSD may be expecting GPT partition headers versus whatever OpenIndiana used to create the original zpool.
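(If it helps with the comparison, I can dump the on-disk ZFS labels from the OpenIndiana side with zdb before exporting; the device path below is just one member as an example.)

Code:
root@gear:~# zdb -l /dev/rdsk/c3t8d0s0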

Any information pertaining to the conflict I am running into would be appreciated, better yet from someone who has done this exact migration. Thanks in advance.
 
Here is my process of moving the zpool:


Code:
root@gear:~# uname -a
SunOS gear 5.11 oi_151a i86pc i386 i86pc Solaris

Code:
root@gear:~# zpool status

  pool: tank
 state: ONLINE
  scan: scrub repaired 1.50K in 5h25m with 0 errors on Fri Mar  9 04:50:10 2012
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c3t8d0  ONLINE       0     0     0
            c3t6d0  ONLINE       0     0     0
            c3t5d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0

errors: No known data errors

Code:
root@gear:~# zpool export tank
root@gear:~#

Now the drives as presented to FreeBSD 9:

Code:
cog# uname -a
FreeBSD cog 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan  3 07:46:30 UTC 2012     root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC  amd64

Code:
cog# zpool import
  pool: tank
    id: 10795692000504601357
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          da0       ONLINE

  pool: tank
    id: 14196293768778797782
 state: DEGRADED
status: One or more devices are missing from the system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-2Q
config:

        tank                      DEGRADED
          raidz1-0                DEGRADED
            11553247276612425822  UNAVAIL  cannot open
            da5p1                 ONLINE
            da4p1                 ONLINE
            da3p1                 ONLINE
            da1p1                 ONLINE
            da2p1                 ONLINE
cog#

The above may lead you to believe I have 6 disks in this raidz, but the truth is there are only 5. Re-importing the pool back into OpenIndiana gives me zero trouble. I want to leave OpenIndiana because the level of activity and the user base are worrisome. FreeBSD seems to be much more vibrant.
 
Can you post the output of ls -l /dev/da* or camcontrol devlist?

Also look into /var/run/dmesg.boot and /var/log/messages.
Do you see any errors or anything strange related to your missing disk device da0?
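
Something along these lines should show whether the kernel even sees the missing disk (adjust da0 if the missing member shows up under a different name):

# ls -l /dev/da*
# camcontrol devlist
# grep -i da0 /var/run/dmesg.boot /var/log/messages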
 
Can you elaborate on how you only have 5? From your OpenIndiana output it looks like you have 6.

As far as GPT is concerned, my understanding is that what Solaris calls "EFI" labels is the same thing as "GPT" in the BSD world. By default, if you give Solaris 10 or 11 a "full disk" (e.g. c0t0d0), it will put an EFI label on it for you and place its data in one of those "slices" (that's the p1 you are seeing in FreeBSD).

It seems like, for some reason, one of them isn't coming over cleanly. I wonder if you can import the degraded pool and then rebuild the one "bad" member.
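
If it comes to that, the rough sequence would be to import the pool degraded and then replace the missing member by its numeric GUID (the GUID below is taken from your output; da6p1 is only a guess at what the sixth disk will show up as):

# zpool import tank
# zpool replace tank 11553247276612425822 da6p1

If the disk is actually present and just wasn't detected at import time, onlining it by that same GUID with zpool online may be all that's needed.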
 
On the FreeBSD side, use gpart(8) to check the validity of the GPT headers on each of the disks:
View general info on all disks:
# gpart show

Then, view specifics on each disk (change da0 below to each of the devices listed above):
# gpart show -l da0

Make sure they are all viewable and look right/the same.

And try using -d on the import command, as that will force ZFS to check every single disk device listed under /dev for ZFS metadata, instead of only checking devices it knows about:
# zpool import -d /dev tank
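
Once it imports with all six members, a scrub is a cheap way to confirm every disk is actually readable end to end:

# zpool scrub tank
# zpool status -v tank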
 
I apologize for the delay in responding. I had to rebuild my server(s) over the weekend, swapping motherboards and such. Secondly, I apologize for stating that I only had 5 drives and not 6. That was false; I cannot explain why it went through my head when I was typing the post. I do in fact have six 2 TB drives in my zpool. Now, back on track.

Code:
cog# gpart show
=>       34  125829053  da0  GPT  (60G)
         34        128    1  freebsd-boot  (64k)
        162  119537536    2  freebsd-ufs  (57G)
  119537698    6291388    3  freebsd-swap  (3G)
  125829086          1       - free -  (512B)

=>        34  3907029101  da1  GPT  (1.8T)
          34         222       - free -  (111k)
         256  3907012495    1  !6a898cc3-1dd2-11b2-99a6-080020736631  (1.8T)
  3907012751       16384    9  !6a945a3b-1dd2-11b2-99a6-080020736631  (8.0M)

...

da1-6 are consistent.

Code:
cog# gpart show -l da1
=>        34  3907029101  da1  GPT  (1.8T)
          34         222       - free -  (111k)
         256  3907012495    1  zfs  (1.8T)
  3907012751       16384    9  (null)  (8.0M)

...

again, da1-6 are consistent.

Since the server rebuild:

Code:
cog# zpool import
  pool: tank
    id: 14196293768778797782
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          raidz1-0  ONLINE
            da1p1   ONLINE
            da2p1   ONLINE
            da3p1   ONLINE
            da6p1   ONLINE
            da4p1   ONLINE
            da5p1   ONLINE

Code:
cog# zpool import -d /dev tank
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.

Code:
cog# zpool status tank
  pool: tank
 state: ONLINE
 scan: resilvered 19.5K in 0h0m with 0 errors on Sat Mar 10 15:23:12 2012
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da1p1   ONLINE       0     0     0
            da2p1   ONLINE       0     0     0
            da3p1   ONLINE       0     0     0
            da6p1   ONLINE       0     0     0
            da4p1   ONLINE       0     0     0
            da5p1   ONLINE       0     0     0

errors: No known data errors

Code:
cog# ls /tank
.$EXTEND        exports         storage         vms


As long as the gpart info looks okay to you all (remember, I'm brand new to BSD), it looks like a simple server rebuild did something. It still doesn't make sense, however, as the original post was based on a completely fresh install of FreeBSD 9, and so are these results.
 
Sounds like one of the connections or ports was just flaky enough for FreeBSD to not detect the disk right.
 
phoenix said:
Sounds like one of the connections or ports was just flaky enough for FreeBSD to not detect the disk right.

That would be a fine theory, but in both cases OpenIndiana and FreeBSD were virtualized on the exact same hardware.

What I did do differently was use # zpool import -d /dev tank as you suggested earlier, so that may have been the key. Thank you.
 
Okay, here we go. This is what originally had me post this thread, but I couldn't reproduce it until now. What follows are my steps: importing the zpool, trying to navigate through a few folders, and finally the kernel panic.

Code:
[root@cog ~]# zpool status
no pools available
[root@cog ~]# zpool import
  pool: tank
    id: 14196293768778797782
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          raidz1-0  ONLINE
            da1p1   ONLINE
            da2p1   ONLINE
            da3p1   ONLINE
            da6p1   ONLINE
            da4p1   ONLINE
            da5p1   ONLINE
[root@cog ~]# zpool import -d /dev tank
[root@cog ~]# zpool status tank
  pool: tank
 state: ONLINE
 scan: resilvered 19.5K in 0h0m with 0 errors on Sat Mar 10 15:23:12 2012
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da1p1   ONLINE       0     0     0
            da2p1   ONLINE       0     0     0
            da3p1   ONLINE       0     0     0
            da6p1   ONLINE       0     0     0
            da4p1   ONLINE       0     0     0
            da5p1   ONLINE       0     0     0

errors: No known data errors
[root@cog ~]# cd /tank
[root@cog /tank]# ls
.$EXTEND        exports         storage         vms
[root@cog /tank]# cd storage/
[root@cog /tank/storage]# ls
.$EXTEND        downloads       home            projects
backups         fromtempzfs     media           software
[root@cog /tank/storage]# cd home

Instant reboot. Panic log:

Code:
panic: avl_find() succeeded inside avl_add()
cpuid = 0
KDB: stack backtrace:
#0 0xffffffff808680fe at kdb_backtrace+0x5e
#1 0xffffffff80832cb7 at panic+0x187
#2 0xffffffff81412f4b at avl_add+0x4b
#3 0xffffffff8148d5d8 at zfs_fuid_table_load+0x198
#4 0xffffffff8148d83c at zfs_fuid_init+0x12c
#5 0xffffffff8148d917 at zfs_fuid_find_by_idx+0xc7
#6 0xffffffff8148d959 at zfs_fuid_map_id+0x19
#7 0xffffffff8148d976 at zfs_groupmember+0x16
#8 0xffffffff814a1b96 at zfs_zaccess_aces_check+0x196
#9 0xffffffff814a1fa6 at zfs_zaccess+0xc6
#10 0xffffffff814bbf21 at zfs_freebsd_getattr+0x1c1
#11 0xffffffff808d3620 at vn_stat+0xb0
#12 0xffffffff808cae39 at kern_statat_vnhook+0xf9
#13 0xffffffff808cafb5 at kern_statat+0x15
#14 0xffffffff808cb15a at sys_stat+0x2a
#15 0xffffffff80b17cf0 at amd64_syscall+0x450
#16 0xffffffff80b03427 at Xfast_syscall+0xf7
Uptime: 6m5s


Any ideas? When I bring the pool back to OpenIndiana, everything works fine.
 