[Solved] Drive location information

Hi all,

I've got a home server with 12 hard drive slots. I've mapped out on paper the location of all of the hard drives I've inserted, what their serial numbers are, and some additional details.
Next I would like to label each of the drives with information like <location>-<serial> e.g. a3-WD-WMAWZ1371366.

I know I can do this by hand, but I would like to automate it to remove human error - at home I fear I may be sloppy with labelling replaced disks, or make a typo. I'm looking to use camcontrol(8) to grab the serial number (roughly as in the sketch below), but I'm not sure of a reliable way to figure out where the drive physically is.
If I map out the device name (e.g. da0), would that be enough? I seem to recall that device names can change depending on the order the drives are probed at boot?
I've previously mapped out the scbus/target/lun numbers, but these appear to have changed over time even though there have been no hardware changes (the original mappings took place under 12.0-RELEASE a few years ago, and I've recently got the server out of storage and put a fresh install of FreeBSD 13.1-RELEASE on it, if that matters).
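For reference, the serial-number half seems straightforward; something like this rough sketch is what I had in mind (the bay/location part is exactly the bit I don't know how to automate):

#!/bin/sh
# Sketch: print "<device> <serial>" for every da(4) disk.
# camcontrol inquiry -S prints only the serial number of a SCSI/SAS disk;
# pairing it with a bay name would still need a hand-maintained table.
for dev in /dev/da*; do
    dev=${dev#/dev/}
    case $dev in *p[0-9]*|*s[0-9]*) continue ;; esac   # skip partition nodes
    echo "$dev $(camcontrol inquiry "$dev" -S)"
done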

Is there a reliable way in FreeBSD to map some information to the physical location of a drive in the chassis?
 
Depends on the chassis; do you have any /dev/ses* entries? If you do, sesutil(8)'s map command will show you what is connected where, and also has the ability to turn on indicator/fault lights.
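For example (da5 below is just a stand-in for whichever disk you care about):

sesutil map             # list every enclosure slot and which daN/passN device sits in it
sesutil locate da5 on   # blink the locate LED on the slot holding da5
sesutil locate da5 off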

I'm surprised the mappings changed without hardware changes; those are usually quite static if no hardware is moved. (USB excluded.)
 
Personally I glabel all the disks and don't worry about their physical location.

As long as you reference the disks by the labels you create, you can move them all around, or the boot-time probe order can even change; it won't matter.
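For example, reusing the naming scheme from the first post (the label name is just the OP's example; note that glabel(8) stores its metadata in the provider's last sector, so label the disk before putting data on it):

glabel label a3-WD-WMAWZ1371366 /dev/da0
# the disk now also appears as /dev/label/a3-WD-WMAWZ1371366, and that name
# follows the disk around regardless of the order it is probed in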

If you can't blink the bay lights, then a sticker with the serial number on the physical disk should suffice.
 
And, if you use ZFS, none of this matters anyway.
True. But yanking out the wrong drive can be quite disastrous. Happened once, ZFS RAID 10 (striped set of mirrors), 1 broken disk, I took out the working mirror drive instead of the broken one. Poof. And it's gone.... Thankfully it happened on a not-so-important server. I was less thrilled about the situation than my boss.
 
Depends on the chassis
That's exactly it. How are your disks connected?

Extreme case #1: You have a motherboard with 12 SATA ports (or enough SATA cards in PCI slots). You just hand-wire them with 12 cables to 12 physical disk locations. There is no way for the computer to know which cable goes where, and what the physical layout of the disks is.

Extreme case #2: You are using a commercial disk enclosure, connected via SAS. The enclosure contains both a SAS expander and an enclosure controller (those are called SES devices, and that's not a spelling error). The firmware in the enclosure controller has been carefully built by a quality vendor to report sensible disk location names and provide disk presence detection. The enclosure has indicator lights for each disk slot, and can perhaps even control whether disks get power. You can use sesutil(8) to communicate with the enclosure controller, find out which disk is where, whether the disk is plugged in and powered up, and turn lights on and off.
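In that case the enclosure controller can be queried and driven directly, along these lines (da5 is again just a placeholder):

sesutil status        # report the overall status of the enclosure(s)
sesutil fault da5 on  # light the fault LED on the slot holding da5
sesutil fault da5 off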

As leebrown said, using paper labels attached to the disks is a very good starting point, and a backup mechanism for when any computerized system fails.

As SirDice said: The #1 source of data loss in some production environments is humans pulling the wrong disk out. At a previous employer, there were actual measurements confirming this.
 
Ah yes, I knew I probably should have added more details!

The server is a Supermicro X9DRi-LN4F+, four disks are connected to a SAS port on the motherboard, the other eight are connected via an LSI 9200-8i HBA in IT mode (i.e. no RAID). I do have a single SES device (/dev/ses0), so in case of a failure I can probably flag a disk.
True. But yanking out the wrong drive can be quite disastrous. Happened once, ZFS RAID 10 (striped set of mirrors), 1 broken disk, I took out the working mirror drive instead of the broken one. Poof. And it's gone.... Thankfully it happened on a not-so-important server. I was less thrilled about the situation than my boss.
This is my main worry - the main data zpool layout will be 4 mirrored pairs, so pulling the wrong disk isn't going to be fun! Having the location of a degraded device in the zpool status output would help reduce some of my own potential for error.
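If I go the glabel route suggested above and build the mirrors from the labelled providers, the bay-serial names should show up directly in zpool status; roughly (pool name and label names below are just placeholders):

zpool create tank \
    mirror label/a1-SERIAL1 label/a2-SERIAL2 \
    mirror label/a3-SERIAL3 label/a4-SERIAL4 \
    mirror label/b1-SERIAL5 label/b2-SERIAL6 \
    mirror label/b3-SERIAL7 label/b4-SERIAL8
# a degraded vdev then shows up as e.g. label/a3-SERIAL3 instead of da5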

As leebrown said, using paper labels attached to the disks is a very good starting point, and a backup mechanism for when any computerized system fails.

As SirDice said: The #1 source of data loss in some production environments is humans pulling the wrong disk out. At a previous employer, there were actual measurements confirming this.
It's been some time since I inserted the disks; it might be that I already labelled each one. There isn't much room on the front of each drive, plus the front of the drive is vented and is where the majority of the cool air comes in - so I wouldn't want to block that up too much.

It's been a few years since I've dealt with hardware at work, and part of my job now is trying to limit human error as much as possible, hence my thought to automate the labelling. But to be honest, if I label each drive and have to shut the server down for replacements so I can pull each drive in turn, that isn't the end of the world for a home server!
 
I have 8 disks in a server and 8 in an external SAS chassis, and I just use paper labels on the sleds, stickers on the drives, and record the serial numbers, because as my notes say: "camcontrol devlist: Note these targets DO NOT LINE UP WITH THE DRIVE BAY NUMBERS".
 
... the other eight are connected via an LSI 9200-8i HBA in IT mode (i.e. no RAID). I do have a single SES device (/dev/ses0), so in case of a failure I can probably flag a disk.
You probably want to practice, experiment and write down instructions BEFORE the failure.
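For example, a dry run along these lines is worth doing while everything is still healthy (hypothetical pool "tank" with da5 in one of the mirrors; substitute your own names):

sesutil locate da5 on     # check that the expected bay actually lights up
zpool offline tank da5    # take it out of its mirror
# pull the disk, compare the serial on the sticker, push it back in, then:
zpool online tank da5     # and let the resilver finish
sesutil locate da5 off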

Your SES device is most likely the backplane of the cage that holds those 8 disks, if there is such a thing. If you have the version of the LSI card with 8 individual connectors, or with octopus cables, then the SES device is probably not very useful, since you're back to identifying individual cables.

But to be honest, if I label each drive and have to shut the server down to do replacements so I can pull each drive in turn, for a home server that isn't the end of the world!
Absolutely. My home server is even worse: it theoretically has hot-swap drive bays, but the way it is placed (on a bookshelf in the basement), those bays end up at the back, flush against a wall. And to pull it away from the wall you first have to uncable it, which (obviously) means turning it off. This is why I like to have paper tape labels attached directly to the disk: for any maintenance, I'll be taking things apart anyway.

On the other hand, for a professional system, with uptime requirements and a real expectation that drives will be swapped hot: Different story.
 