Adding PCI SATA controller causes device node change

Hi,
I'm trying to add a PCI SATA card to my machine (it already has one onboard). However, when I insert the card, I think because it has a lower IRQ than the onboard one (14 for the PCI card vs. 20 for the onboard), it gets probed first during boot and shifts the device nodes of my disks (the disk [connected to the onboard controller] that used to be at /dev/ad6 is now at /dev/ad12, etc...).

Normally, it wouldn't be a big deal to drop into single user and change the entries in the /etc/fstab, but I've got a zpool that includes a slice on the current ad6 and all of ad4, and I don't know how to non-destructively change where ZFS looks for the disks that belong to the zpool.

I've tried using device.hints to get the onboard controller picked up first and the PCI card second, but to no avail. So, if anyone has a way to either a.) change where ZFS thinks the components of the zpool are (/dev/ad6s2 and /dev/ad4 to /dev/ad12s2 and /dev/ad8), or b.) force the onboard controller to get picked up first, I'd appreciate it.

Thanks!
 
I ran into exactly the same issue earlier this week, until I read around and discovered glabel.

The basic purpose of labels is simply to name a hard drive to your liking and have FreeBSD "follow" that labeled drive regardless of which I/O or JBOD controller it sits on, without worrying about FreeBSD giving the drive a different device node than it had on the previous boot or after new hardware is added.

In my example, I am in the process of labelling all 16 of my drives. I chose labels that combine the drive manufacturer (Western Digital, Seagate, or Hitachi) and the last 5 characters of the serial number:

Code:
$ zpool status
  pool: TANK
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 3h27m, 75.17% done, 1h8m to go
config:

	NAME                  STATE     READ WRITE CKSUM
	TANK                  ONLINE       0     0     0
	  raidz2              ONLINE       0     0     0
	    label/HI_1NJKE    ONLINE       0     0     0  1.84G resilvered
	    label/WD_28896    ONLINE       0     0     0  1.84G resilvered
	    label/WD_13920    ONLINE       0     0     0  1.84G resilvered
	    label/WD_00658    ONLINE       0     0     0  1.84G resilvered
	    label/WD_01756    ONLINE       0     0     0  1.84G resilvered
	    replacing         ONLINE       0     0     0
	      ad20            ONLINE       0     0     0
	      label/TB_SPARE  ONLINE       0     0     0  259G resilvered
	    da0               ONLINE       0     0     0  1.84G resilvered
	    da1               ONLINE       0     0     0  1.84G resilvered
	    da2               ONLINE       0     0     0  1.84G resilvered
	    da4               ONLINE       0     0     0  1.84G resilvered
	    da3               ONLINE       0     0     0  1.84G resilvered

errors: No known data errors

  pool: iSCSI
 state: ONLINE
 scrub: none requested
config:

	NAME                STATE     READ WRITE CKSUM
	iSCSI               ONLINE       0     0     0
	  raidz1            ONLINE       0     0     0
	    label/SG_4J5LF  ONLINE       0     0     0
	    label/SG_4J4VH  ONLINE       0     0     0
	    label/SG_32XBH  ONLINE       0     0     0
	    label/SG_4J68R  ONLINE       0     0     0
	    label/WD_90436  ONLINE       0     0     0

errors: No known data errors
$

Just to give you an idea of how it looks.
The point is that if you want to keep your drives from being reordered, the most convenient way to do so is to label them.

Here is the manpage for glabel(8)
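
For reference, the labelling itself is a single command. A minimal sketch, where the label name and device node are just placeholders from my own setup:

Code:
# write a label onto the drive; it then shows up as /dev/label/WD_28896
glabel label WD_28896 ad8

From then on you refer to the disk as label/WD_28896 (in zpool commands, /etc/fstab, etc.) and it resolves to the same physical drive no matter which controller or port it lands on after a reboot.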
 
Yeah, I considered that, but wasn't sure if a.) it works with ZFS, or b.) I could do it after the zpool was already created pointing to the device nodes, and re-point it. You've answered a (thank you!), any thoughts on b?
 
DMXRoid said:
Yeah, I considered that, but wasn't sure if a.) it works with ZFS, or b.) I could do it after the zpool was already created pointing to the device nodes, and re-point it. You've answered a (thank you!), any thoughts on b?

If you look at my TANK zpool above, you'll notice a "TB_SPARE" drive. All the drives in that zpool are 1TB drives, and I happened to have a spare 1TB drive lying around to use as a substitute while I label my drives. It is connected via USB.

The procedure I'm using is the following (a rough command sketch follows the list):
1 - Perform ' zpool replace ' on the unlabelled drive with the spare drive
2 - Briefly zero out the unlabelled drive (~5 - 10 seconds). [Without doing this, you might get messages stating that the glabel metadata is not written in the correct place on the drive, or something to that effect.]
3 - Label the drive using glabel [glabel label NEWLABEL devicenode -- i.e.: glabel label MyHardDrive ad10]
4 - Perform ' zpool replace ' on the spare drive with the newly labelled drive [make sure to use the drive label, not the device node, when adding it back into the pool]
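
Roughly, that works out to the commands below for one drive. This is just a sketch using names from my own pool -- TANK, ad10, TB_SPARE, and WD_28896 are examples, so substitute your own pool, device nodes, and labels, and let each resilver finish before moving on:

Code:
# 1. move the data off the unlabelled drive and onto the spare (wait for the resilver)
zpool replace TANK ad10 label/TB_SPARE
# 2. briefly zero the start of the now-free drive to clear out stale metadata
dd if=/dev/zero of=/dev/ad10 bs=1m count=1024
# 3. write the label onto the drive
glabel label WD_28896 ad10
# 4. swap the spare back out for the freshly labelled drive (another resilver)
zpool replace TANK label/TB_SPARE label/WD_28896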

Just to give you a heads-up: my zpool has 11 drives, so it will be resilvered 22 times (device node -> spare drive, then spare drive -> labelled drive). It takes time, and does create extra wear on the drives with all the extra reading/writing... then again, with RAID who cares?

Not sure what else to suggest if you don't happen to have another spare drive lying around (of similar or greater size).
 