ZFS: how do I find the numeric ID of a drive?

And how do I set it up so it uses a UUID or something?
I hate it when one drive goes offline, da1 becomes da0, and it causes me havoc.
I have 2 drives thinking they are the same zpool!
:(
 
Under Linux I used to use blkid to show the UUID; it also supported mounting by device label, so even if the UUID or device name changed it still got mounted correctly (assuming it was set up that way). I'm a relative newb to FreeBSD and don't know what the equivalent might be, but perhaps that's a starting point to search from.
....
A quick search revealed this: Thread 31078
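For what it's worth, these look like the closest FreeBSD counterparts to blkid (untested by me; ada0 is just an example disk):
Code:
# glabel status                      # list all label providers (gpt/..., gptid/..., diskid/...)
# gpart show -l ada0                 # show the GPT partition labels on a disk
# gpart list ada0 | grep rawuuid     # per-partition UUIDs, roughly what blkid prints on Linux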
 
And how do I set it up so it uses ... or something?

I think the question you are really asking is: How do I identify my block devices (whole disks or partitions on the disk), so I don't get the ambiguity caused by the OS renumbering devices depending on which devices were found in hardware (which makes da0 into da1)?

My solution is: never use whole devices (da0), only partitions on those devices, even if a physical device holds only one partition. I then use GPT partitioning, which allows giving "names" (as strings) to partitions. For example, with the two commands gpart add -t freebsd-zfs -l name_one ... da0 and gpart add -t freebsd-zfs -l name_two ... da0 I can create two partitions that are clearly distinguishable by their names. That name is visible in gpart show -l da0, and I can use the device name /dev/gpt/name_one in mount commands (for traditional file systems) and in zpool attach commands for ZFS.
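A minimal sketch of that workflow (disk name, label names, and sizes are placeholders):
Code:
# gpart create -s gpt da0                                # new GPT table on the (example) disk
# gpart add -t freebsd-zfs -a 1M -s 100G -l name_one da0
# gpart add -t freebsd-zfs -a 1M -l name_two da0
# gpart show -l da0                                      # verify the labels
# zpool create tank /dev/gpt/name_one                    # or: zpool attach tank <existing-dev> /dev/gpt/name_two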

I have 2 drives thinking they are the same zpool!
It is difficult to understand how that could have happened. ZFS (and other modern file systems) puts its own identifying information on a block device when it is added to the file system (which happens, for example, when you issue the zpool attach command), so in theory ZFS should always know how to identify a block device when it finds it. I saw your thread with two disks both thinking that they are the same, and I haven't had time to dig through all the information to figure out what went wrong. In theory, unless you use tools like dd to copy the content of one block device to another, this should not happen. And that should be independent of whether the human who manages the system uses da0, da1, ... or explicit GPT partition names for their work.
 
Followup question (not specific to the situation of the OP, but in general about ZFS):

Say I take a block device (whether it is /dev/da0 or /dev/gpt/name_one makes no difference), and add it to a zpool. Now that device is "taken over" by ZFS, which happens by ZFS putting some identifying information into the data blocks of that device. Now I take another block device (another whole disk, or another partition), of the same size or larger, and copy the bits from the first block device to the second one, for example using dd. What will ZFS do when it finds the second copy? Will it think that it is the same entity as the first one? What will happen when ZFS finds two copies of the same disk at the same time?

I can express the same question in a different way: When ZFS finds a disk which it thinks it owns (it is part of a zpool), how does it identify that disk? Does it go solely by things that are written onto that device? Or does it use other information specific to the hardware or the partitioning scheme to identify (for example the Inquiry command for SCSI disks, Identify command for SATA disks, UUIDs or GUIDs if available from the partitioning scheme)? Can someone point me to some documentation about how ZFS identifies disks?
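Not an answer, but the on-disk part of it can at least be inspected: zdb(8) dumps the vdev label that ZFS writes onto each member device, including the pool name, the pool GUID, and the vdev's own GUID (the device name here is just an example):
Code:
# zdb -l /dev/gpt/name_one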
 
My solution is: Never use whole devices (da0), only partitions on those devices, even if a physical device holds only one partition. And then I use gpt partitioning, which allows giving "names" (as strings) to partitions: for example, with the two commands

Exactly this!

Use GPT labels and set these sysctls accordingly (via /boot/loader.conf):
Code:
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.geom.label.gpt.enable="1"
Now the providers are primarily identified by their GPT labels. Adding them to your pool by these labels will also make them show up with their labels when calling zpool status:
Code:
# zpool status
  pool: stor1
 state: ONLINE
  scan: scrub repaired 0 in 2h14m with 0 errors on Mon Sep 18 18:42:32 2017
config:

        NAME                    STATE     READ WRITE CKSUM
        stor1                   ONLINE       0     0     0
          raidz1-0              ONLINE       0     0     0
            label/cca255027d59  ONLINE       0     0     0
            label/cca255027f11  ONLINE       0     0     0
            label/cca2550291e5  ONLINE       0     0     0
        logs
          mirror-1              ONLINE       0     0     0
            gpt/slog-IN896a9    ONLINE       0     0     0
            gpt/slog-IN89db9    ONLINE       0     0     0
        cache
          gpt/l2arc-IN896a9     ONLINE       0     0     0
          gpt/l2arc-IN89db9     ONLINE       0     0     0

Use GPT labels corresponding to the physical position of the drive, the SAS address, (part of) the GUID or serial number of the drive (the "lunid" in geom terms), or any combination of these. E.g. the GPT label "E1D4-HG-cca255027f10" would refer to the 4th disk in enclosure 1, which is an HGST with LUN ID "cca255027f10".
Whatever you use, make sure to put a corresponding printed label (or sharpie on insulating tape...) on the drive caddy so you can identify a drive when standing in front of the machine.
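If a disk is already partitioned, the label can also be changed in place with gpart modify; a hypothetical example matching that scheme (partition index and disk are made up):
Code:
# gpart modify -i 1 -l E1D4-HG-cca255027f10 da3    # relabel partition 1 on da3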

Your best friends when trying to "find" a drive are sesutil(8), mptutil(8) or the LSI/Avago/WhoeverBuysThemNext proprietary sysutils/sas2ircu for SAS drives/controllers; camcontrol(8) or geom(8) for drives in general; and gpart(8) when looking for partitions and their labels.
You won't need all of them - some provide overlapping information. Try what works best for you.
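For example, if the drives sit behind an SES-capable enclosure, sesutil(8) can blink the locate LED of a suspect drive (the device name is just an example):
Code:
# sesutil map                    # show enclosure slots and the devices in them
# sesutil locate da15 on         # turn the locate LED on
# sesutil locate da15 off        # ...and off again when done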
 
And how do I set it up so it uses a UUID or something?
I hate it when one drive goes offline, da1 becomes da0, and it causes me havoc.
I have 2 drives thinking they are the same zpool!
:(

That is expected. Don't you keep documentation of which HDD UUIDs belong to which ZFS pool? Each HDD is carefully recorded in my spreadsheets before it goes into a server (drive bay, zpool name, type, disk size, manufacturer, model number, serial number, as well as the current device label if you are using /dev/da*, which is expected to change in case of a drive failure).


I think the question you are really asking is: How do I identify my block devices (whole disks or partitions on the disk), so I don't get the ambiguity caused by the OS renumbering devices depending on which devices were found in hardware (which makes da0 into da1)?
Well put. This should probably be the first choice:


Code:
root@hera:~ # geom disk list
Geom name: ada0
Providers:
1. Name: ada0
   Mediasize: 64023257088 (60G)
   Sectorsize: 512
   Mode: r2w2e4
   descr: TS64GSSD370S
   ident: D646741862
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: ada1
Providers:
1. Name: ada1
   Mediasize: 64023257088 (60G)
   Sectorsize: 512
   Mode: r2w2e4
   descr: TS64GSSD370S
   ident: D646741935
   rotationrate: 0
   fwsectors: 63
   fwheads: 16

Geom name: ada2
Providers:
1. Name: ada2
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   descr: WDC WD40EFRX-68N32N0
   lunid: 50014ee263e2f181
   ident: WD-WCC7K4TFR4Z3
   rotationrate: 5400
   fwsectors: 63
   fwheads: 16

Geom name: ada3
Providers:
1. Name: ada3
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   descr: WDC WD40EFRX-68N32N0
   lunid: 50014ee2b93fc0c7
   ident: WD-WCC7K5HLXRH7
   rotationrate: 5400
   fwsectors: 63
   fwheads: 16

Or, even more elementary:

Code:
root@hera:~ # sysctl -a | grep DISK
kern.geom.conftxt: 0 DISK ada3 4000787030016 512 hd 16 sc 63
0 DISK ada2 4000787030016 512 hd 16 sc 63
0 DISK ada1 64023257088 512 hd 16 sc 63
0 DISK ada0 64023257088 512 hd 16 sc 63
z0xfffff8000a3c2700 [shape=box,label="DISK\nada3\nr#1"];
z0xfffff8000a3c2a00 [shape=box,label="DISK\nada2\nr#1"];
z0xfffff8000a3c2d00 [shape=box,label="DISK\nada1\nr#1"];
z0xfffff8000a7a2800 [shape=box,label="DISK\nada0\nr#1"];
    <name>DISK</name>

Or maybe you just need to learn how to read dmesg:

Code:
root@hera:~ # cat /var/run/dmesg.boot | grep ada
ada0 at ahcich2 bus 0 scbus2 target 0 lun 0
ada0: <TS64GSSD370S P1225CA> ACS-2 ATA SATA 3.x device
ada0: Serial Number D646741862
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 1024bytes)
ada0: Command Queueing enabled
ada0: 61057MB (125045424 512 byte sectors)
ada1 at ahcich3 bus 0 scbus3 target 0 lun 0
ada1: <TS64GSSD370S P1225CA> ACS-2 ATA SATA 3.x device
ada1: Serial Number D646741935
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 1024bytes)
ada1: Command Queueing enabled
ada1: 61057MB (125045424 512 byte sectors)
ada2 at ahcich4 bus 0 scbus4 target 0 lun 0
ada2: <WDC WD40EFRX-68N32N0 82.00A82> ACS-3 ATA SATA 3.x device
ada2: Serial Number WD-WCC7K4TFR4Z3
ada2: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 3815447MB (7814037168 512 byte sectors)
ada2: quirks=0x1<4K>
ada3 at ahcich5 bus 0 scbus5 target 0 lun 0
ada3: <WDC WD40EFRX-68N32N0 82.00A82> ACS-3 ATA SATA 3.x device
ada3: Serial Number WD-WCC7K5HLXRH7
ada3: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 3815447MB (7814037168 512 byte sectors)
ada3: quirks=0x1<4K>

Or use camcontrol to list the HDDs on first boot:


Code:
root@hera:~ # camcontrol devlist
<TS64GSSD370S P1225CA>             at scbus2 target 0 lun 0 (pass0,ada0)
<TS64GSSD370S P1225CA>             at scbus3 target 0 lun 0 (pass1,ada1)
<WDC WD40EFRX-68N32N0 82.00A82>    at scbus4 target 0 lun 0 (pass2,ada2)
<WDC WD40EFRX-68N32N0 82.00A82>    at scbus5 target 0 lun 0 (pass3,ada3)

Find out more about them and create documentation:


Code:
root@hera:~ # camcontrol identify ada0
pass0: <TS64GSSD370S P1225CA> ACS-2 ATA SATA 3.x device
pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 1024bytes)

protocol              ATA/ATAPI-9 SATA 3.x
device model          TS64GSSD370S
firmware revision     P1225CA
serial number         D646741862
cylinders             16383
heads                 16
sectors/track         63
sector size           logical 512, physical 512, offset 0
LBA supported         125045424 sectors
LBA48 supported       125045424 sectors
PIO supported         PIO4
DMA supported         WDMA2 UDMA6
media RPM             non-rotating

Feature                      Support  Enabled   Value           Vendor
read ahead                     yes      yes
write cache                    yes      yes
flush cache                    yes      yes
overlap                        no
Tagged Command Queuing (TCQ)   no       no
Native Command Queuing (NCQ)   yes              32 tags
NCQ Queue Management           no
NCQ Streaming                  no
Receive & Send FPDMA Queued    no
SMART                          yes      yes
microcode download             yes      yes
security                       yes      no
power management               yes      yes
advanced power management      no       no
automatic acoustic management  yes      no      0/0x00  0/0x00
media status notification      no       no
power-up in Standby            no       no
write-read-verify              no       no
unload                         no       no
general purpose logging        yes      yes
free-fall                      no       no
Data Set Management (DSM/TRIM) yes
DSM - max 512byte blocks       yes              8
DSM - deterministic read       yes              zeroed
Host Protected Area (HPA)      yes      no      125045424/125045424
HPA - Security


Finally, if you want to protect your data you will need to monitor the physical devices with SMART, so you might as well learn how to use it.



Code:
root@hera:~ # smartctl -i /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-RELEASE-p1 amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     SiliconMotion based SSDs
Device Model:     TS64GSSD370S
Serial Number:    D646741862
Firmware Version: P1225CA
User Capacity:    64,023,257,088 bytes [64.0 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Tue Sep 19 18:15:16 2017 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
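To monitor continuously rather than by hand, smartd(8) from the same smartmontools package can be enabled; a minimal sketch (the config path and test schedule follow the port's defaults and should be adjusted to taste):
Code:
# sysrc smartd_enable=YES
# cat /usr/local/etc/smartd.conf
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m root@localhost
# service smartd start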

My solution is: Never use whole devices (da0), only partitions on those devices, even if a physical device holds only one partition. And then I use gpt partitioning, which allows giving "names" (as strings) to partitions: for example, with the two commands gpart add -t freebsd-zfs -l name_one ... da0 and gpart add -t freebsd-zfs -l name_two ... da0 I can create two partitions that are clearly distinguishable by their names, that name is visible in gpart list -l da0, and I can use the device names /dev/gpt/name_one in mount commands (for traditional file systems) and in zpool attach commands for ZFS.

Dude, that is really bad advice! Generally speaking, storage drives (the ones you don't boot from) should not contain any partitions, in spite of the claim that, unlike Solaris, FreeBSD doesn't penalize you for using partitions:

Disk - The most basic type of vdev is a standard block device. This can be an entire disk (such as /dev/ada0 or /dev/da0) or a partition (/dev/ada0p3). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.

However, replacing failed HDDs which contain partitions in a ZFS pool is more involved than replacing disks which don't contain partitions. Finally,
one should always use disk UIDs in /etc/fstab, as on my desktop:

Code:
predrag@oko$ more /etc/fstab
7ccf1c164f5347b4.b none swap sw
7ccf1c164f5347b4.a / ffs rw 1 1
7ccf1c164f5347b4.l /home ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.d /tmp ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.f /usr ffs rw,nodev 1 2
7ccf1c164f5347b4.g /usr/X11R6 ffs rw,nodev 1 2
7ccf1c164f5347b4.h /usr/local ffs rw,nodev,wxallowed 1 2
7ccf1c164f5347b4.j /usr/obj ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.k /usr/ports ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.i /usr/src ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.e /var ffs rw,nodev,nosuid 1 2

For people who, in spite of my advice, decide to use partitions on data drives, the glabel command is a must. These are my boot drives:

Code:
root@hera:~ # glabel list
Geom name: ada0p1
Providers:
1. Name: gpt/gptboot0
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 1024
   length: 524288
   index: 0
Consumers:
1. Name: ada0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0

Geom name: ada1p1
Providers:
1. Name: gpt/gptboot1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   secoffset: 0
   offset: 0
   seclength: 1024
   length: 524288
   index: 0
Consumers:
1. Name: ada1p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0

 
Dude, that is really bad advice! Generally speaking, storage drives (the ones you don't boot from) should not contain any partitions, in spite of the claim that, unlike Solaris, FreeBSD doesn't penalize you for using partitions

The major problem with using whole disks is that the number of bean counters working at the various disk vendors varies greatly, so some make their disks smaller than others (i.e. actual disk size is inversely proportional to the number of bean counters).
But seriously: it's not unlikely to end up with 3 (slightly) different disk sizes when buying 3 supposedly "4TB" disks from 3 different vendors. Using partitions and just leaving a few dozen MB unused can spare you some headache when a disk needs (urgent) replacement. OTOH: disks are often replaced with bigger ones, so this might not be a valid point for all environments.
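A rough sketch of that, with made-up names and sizes (the -s argument deliberately stops a little short of the end of a "4TB" disk):
Code:
# gpart create -s gpt da2
# gpart add -t freebsd-zfs -a 1M -s 3724G -l stor1-d2 da2   # leaves a couple of GB of slack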

The ability to use GPT labels is more of a convenient side effect of using partition tables.
BTW: do geom labels still interfere with multipath labels? I haven't tried it recently, but I wasn't able to get this working ~2 years ago on 10.3. So that might be another show-stopper for using bare disks...

However, replacing failed HDDs which contain partitions in a ZFS pool is more involved than replacing disks which don't contain partitions.
Adding a single-partition GPT table is more or less a no-brainer and could be easily scripted if you have to replace disks on a daily basis.
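Something along these lines would do it (purely a sketch; the disk, label, and pool arguments are placeholders, and the gpart destroy is intentionally destructive):
Code:
#!/bin/sh
# usage: ./replace-disk.sh da15 jbod1-a5 stor1
disk=$1; label=$2; pool=$3
gpart destroy -F "${disk}" 2>/dev/null    # wipe any stale partition table
gpart create -s gpt "${disk}"
gpart add -t freebsd-zfs -a 1M -l "${label}" "${disk}"
zpool replace "${pool}" "gpt/${label}"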
 
The peace of mind that comes from having human-readable labels in zpool, geom, and dmesg output easily beats the extra 10 seconds it takes to run:
Code:
# gpart create -s gpt da15
# gpart add -t freebsd-zfs -a 1M -l jbod1-a5 da15
# zpool replace poolname gpt/jbod1-a5

UUIDs, disk IDs, and other long strings of hex may be useful for computers, but they aren't always useful for humans. Especially when a disk with UUID X dies completely, in such a way that you can't query it at all, there's no way to identify it without pulling it from the system. Which one do you pull? Sure, you could put a 56-character ID label onto each drive bay, but then you have to read each and every one trying to find the correct one, as they won't be organised or sorted in any way, which would be a royal pain on a system with 90+ drive bays in it.

Labelling the drive itself with its location in the chassis is much easier to work with, compared to labelling the drive bay with the ID of the disk. At least for me. :) Other labelling methods work "better" for other people and/or other situations. :D
 