Solved: nullfs not mounting during boot due to failing disk in ZFS raidz

I had a brief power outage over the weekend (no UPS), and after booting back up I was dropped into the recovery shell. dmesg showed errors about the zfs pool not being available when trying to mount a couple of nullfs entries in fstab. Sure enough, the pool was not mounted while in the recovery shell.

It turns out smartctl -a shows that one of the drives in the pool is failing. However, after commenting out the nullfs entries in fstab, the system booted successfully and the pool was mounted without any further manual intervention. Once booted, I uncommented the lines in fstab, ran mount -a, and the nullfs mounts came up fine.
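
For reference, the nullfs entries in my fstab look roughly like this (the paths are just placeholders, my actual datasets differ):
Code:
# example only - the source directory lives on a dataset in the raidz pool
/tank/data /usr/local/www/data nullfs rw 0 0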

I'm just trying to understand what causes this behaviour. It seems that the failing disk delays the import and mounting of the zpool, so the datasets aren't available yet when the nullfs entries in fstab are processed, which causes those mounts to fail.
 
Dear sand_man,
as far as I know things like that can happen when the underlying file system is not ready yet at the point where fstab is processed. As a countermeasure you can add the late option to the affected lines in /etc/fstab. Please see the example below, which is taken from the fstab file of one of my jails.
Code:
/var/cache/pkg /usr/jails/fox/var/cache/pkg nullfs rw,late 0 0
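As far as I know, the late option defers those entries to the mountlate stage of the boot sequence, which runs after the ZFS datasets have been mounted, so the source directories exist by then. Applied to a host fstab entry whose source sits on a ZFS pool, it would look something like this (paths are only an example, adjust to your layout):
Code:
# example only - defer this nullfs mount until the ZFS pool is mounted
/tank/data /usr/local/www/data nullfs rw,late 0 0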
 