Changing disk node names in zpool

Hello,

Having moved away from Illumos as my go-to OS for NFS storage, I decided to give FreeBSD a go. So far, nothing but good things. In order to test stability and performance I created a large zpool consisting of 13 vdevs, each made up of 10 drives in RAIDZ2 (I know the math, this is just a test), plus two hot spares, for a total of 132 drives. The drives are Seagate capacity SAS drives in three Supermicro 44-drive JBODs connected to a pair of LSI 9300-8E controllers.

When I created the pool I used the drive node names reported during boot (da6 through da137). So far, so good. After some testing I exported the pool and imported it again. After the next reboot I noticed that 116 of the drive node names had changed to the diskid/DISK-Z4D2VH880000R552TRAV format. That was a pleasant surprise, but the fact that 16 of the drives are still using the daXX format bothers me a bit. Since then the system has gone through two more export/import cycles and a number of reboots, with no change.

When I check the /dev/diskid directory I do, indeed, find only 116 entries.

I read a post recommending that I export the pool and, on import, force the use of devices from /dev/diskid only. I'm a little hesitant, though, because I'm over a week into my test cycle and I'd rather not start from scratch (beginning with the recreation of my 250 TB test set).

Any insight would be much appreciated.

Wim
 
Hi,

WOW. Impressive pool.

I always use a labeling strategy with gpart(8). It's safe and works well. For example: data-1-sces3-3tb-Z1Y0P0DK (data-1 marks the first vdev of a pool named data, on a Seagate Constellation ES3 3 TB drive with serial number Z1Y0P0DK). Nice and easy. If that drive fails, it takes me just a minute to get it removed and replaced. Creating pools is easy too: zpool create *some options* data raidz2 /dev/gpt/data-1*
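
In case it helps, here is a rough sketch of that workflow on a single drive (the da6 device node is just an example; the label follows the same naming scheme as above):

Code:
# gpart create -s gpt da6                                     # put an empty GPT on the drive
# gpart add -t freebsd-zfs -l data-1-sces3-3tb-Z1Y0P0DK da6   # one ZFS partition carrying the label
# zpool create *some options* data raidz2 /dev/gpt/data-1-*   # build the vdev from the labels

The label shows up under /dev/gpt/ as soon as the partition is created, and glabel status (or gpart show -l) gives you the mapping back to the daXX device when you need to track down the physical drive.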
 
Code:
# zpool export <poolname>
# zpool import -d /dev/diskid <poolname>
# zpool export <poolname>
# zpool import <poolname>

That should get you a pool that always imports using the DiskID entries from /dev/diskid. However, unless you have those DiskIDs listed on the outside of each drive bay, I'd recommend using disk labels (GPT labels, preferably) and naming the disks after their physical location in the rack (jbodX-colY-rowZ) or something along those lines. It's much easier to get someone to "replace disk A5 in JBOD 3" than to get them to "replace disk 12345678901234567890". :)
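
For illustration, replacing a failed disk with a location-labeled one might look something like this (the pool name, the old device, the da12 node, and the slot label are all placeholders):

Code:
# gpart create -s gpt da12                           # new drive, as the kernel reported it
# gpart add -t freebsd-zfs -l jbod3-col1-row5 da12   # label it after its physical slot
# zpool replace <poolname> <old-device> gpt/jbod3-col1-row5

From then on zpool status lists the disk as gpt/jbod3-col1-row5, so its physical location is right there in the output.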
 