Solved: Building 11.2-RELEASE from source with root on ZFS

hanzer

Member

Thanks: 7
Messages: 97

#1
Hi,

I have an 11.2-RELEASE system with root on a 3-disk ZFS RAIDZ. I just did buildworld, buildkernel, and installkernel, and after rebooting into the new kernel it fails to start with something like:

Code:
Solaris: NOTICE: Cannot find the pool label for 'zroot'
Mounting from zfs:zroot/ROOT/default failed with error 5.

My /etc/make.conf is fairly simple, and the kernel configuration is just GENERIC with some unnecessary RAID and network card drivers removed. This build was just the first iteration through the process.

Does anyone have any idea why ZFS would stop working after building from source? Since this is a root-on-ZFS system, might it make sense to build ZFS into the kernel (if that is even possible)?
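On that last point: FreeBSD can compile ZFS statically into the kernel via the ZFS kernel option. A sketch of a custom config (MYKERNEL is a hypothetical name, not from this thread):

```
# MYKERNEL -- hypothetical custom config; "include GENERIC" keeps all
# stock devices and only layers changes on top.
include GENERIC
ident   MYKERNEL

# Build ZFS into the kernel instead of loading zfs.ko via loader.conf.
options ZFS
```

One advantage of the include-GENERIC style is that drivers are removed with explicit nodevice lines, which makes it harder to accidentally comment out one the boot disks depend on.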
 

Eric A. Borisch

Well-Known Member

Thanks: 215
Messages: 352

#2
So when you boot into the working system, what does zpool list -v zroot look like? (And are you sure the drivers needed to access the disks were not removed from the kernel?) Is zfs_load="YES" in /boot/loader.conf?
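One way to check this before trimming a config (a sketch; the paths assume a stock source tree, and MYKERNEL is a placeholder for the custom config name):

```sh
# Which driver attached the disks? Lines like "ada0 at ahcich0" mean
# the ahci driver is in use.
grep -E '^ada[0-9]+ at ' /var/run/dmesg.boot

# Is that driver still present in the trimmed config?
grep -E '^device[[:space:]]+(ahci|ata|scbus|ada)' /usr/src/sys/amd64/conf/MYKERNEL
```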
 
OP

hanzer

Member

Thanks: 7
Messages: 97

#3
/etc/rc.conf has the line zfs_enable="YES"
and /boot/loader.conf has the line zfs_load="YES"
and /etc/sysctl.conf has the lines vfs.zfs.min_auto_ashift=12 and vfs.usermount=1

And if the original kernel is booted, zpool list -v zroot says:
Code:
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot       5.44T  3.92T  1.51T        -         -     1%    72%  1.00x  ONLINE  -
  raidz1    5.44T  3.92T  1.51T        -         -     1%    72%
    ada0p3      -      -      -        -         -      -      -
    ada1p3      -      -      -        -         -      -      -
    ada2p3      -      -      -        -         -      -      -
 
OP

hanzer

Member

Thanks: 7
Messages: 97

#4
... are you sure the drivers that are needed to access the disk were not removed from the kernel?...
Oh no! That might actually have happened. After removing support for floppy drives I moved right on through the ATA controllers, happily commenting out lines. Rebuilding now...
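For anyone following along, these are the GENERIC storage lines a root-on-ZFS SATA system typically still needs (a sketch; check your own dmesg before trimming anything):

```
# Storage support from GENERIC that SATA disks (adaX) depend on --
# commenting these out produces exactly this "cannot mount root" failure.
device  ahci    # AHCI SATA controllers
device  ata     # Legacy ATA/SATA controllers
device  scbus   # SCSI bus layer (base for disk access via CAM)
device  ada     # ATA/SATA direct-access devices (the adaX disks)
device  pass    # Passthrough (used by camcontrol and friends)
```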
 
OP

hanzer

Member

Thanks: 7
Messages: 97

#6
A quick update for posterity: the raidz1 performance seemed poor, so I added another hard drive to the machine and reinstalled the system as a striped pair of mirrors ("RAID 10"). zpool list -v zroot now shows:
Code:
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot       3.62T   201G  3.43T        -         -     0%     5%  1.00x  ONLINE  -
  mirror    1.81T   100G  1.71T        -         -     0%     5%
    ada0p3      -      -      -        -         -      -      -
    ada1p3      -      -      -        -         -      -      -
  mirror    1.81T   101G  1.71T        -         -     0%     5%
    ada2p3      -      -      -        -         -      -      -
    ada3p3      -      -      -        -         -      -      -
Also, compression is turned off with zfs set compression=off zroot. Performance seems much better now.
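For posterity, a striped-mirror layout like the one above can be created in one command at install time (a sketch; the pool name and adaXp3 partitions follow this thread, so adjust for your own partitioning):

```sh
# Two mirror vdevs in one pool; ZFS stripes writes across all
# top-level vdevs automatically, giving the "RAID 10" behavior.
zpool create -o altroot=/mnt zroot \
    mirror ada0p3 ada1p3 \
    mirror ada2p3 ada3p3
```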
 

kpa

Beastie's Twin

Thanks: 1,791
Messages: 6,305

#7
ZFS performance scales with the number of vdevs in the pool: as you noticed, moving from one raidz1 vdev to two mirror vdevs immediately gave better performance. Mirrors are the best compromise when performance matters; raidz vdevs are better suited to maximizing redundancy and capacity.

I really think you should give compression another chance; it has been shown that performance is almost always better with compression on, especially with a cheap algorithm like lz4.
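A quick, low-risk way to revisit this (a sketch using the pool name from this thread; changing the property only affects newly written data):

```sh
# Re-enable cheap compression, then check later how well data compresses.
zfs set compression=lz4 zroot
zfs get compression,compressratio zroot
```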
 