My solution is: never use whole devices (da0), only partitions on those devices, even if a physical device holds only one partition. I then use GPT partitioning, which allows giving "names" (as strings) to partitions. For example, with the two commands

gpart add -t freebsd-zfs -l name_one ... da0
gpart add -t freebsd-zfs -l name_two ... da0

I can create two partitions that are clearly distinguishable by their names. That name is visible in gpart list -l da0, and I can use the device names /dev/gpt/name_one in mount commands (for traditional file systems) and in zpool attach commands for ZFS.

It is difficult to understand how your situation can have happened. Because ZFS (and other modern file systems) puts its own identifying information on the block device when it is added to the file system (which happens, for example, when you issue the zpool attach command), in theory ZFS should always know how to identify a block device when it finds it. I saw your thread with two disks both thinking that they are the same, and I haven't had time to dig through all the information to figure out what went wrong. In theory, unless you use tools like dd to copy the content of one block device to another, this should not happen. And that should be independent of whether the human who manages the system uses da0, da1, ... or explicit GPT partition names for their work.

But suppose someone did use dd to make such a copy. What will ZFS do when it finds the second copy? Will it think that it is the same entity as the first one? What will happen when ZFS finds two copies of the same disk at the same time? You can inspect the identifying labels ZFS wrote on a device with zdb -l /dev/da0.
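Put together, a minimal end-to-end sketch of this scheme might look as follows (the disk names, the pool name tank, and the bay0/bay1 labels are all invented for illustration):

```shell
# One GPT table and one labeled ZFS partition per disk; pick
# labels that identify the physical drive (serial, bay, ...).
gpart create -s gpt da0
gpart add -t freebsd-zfs -a 1M -l bay0 da0
gpart create -s gpt da1
gpart add -t freebsd-zfs -a 1M -l bay1 da1

# Build the pool from the stable /dev/gpt/* names, never da0/da1:
zpool create tank mirror gpt/bay0 gpt/bay1

# The labels survive any renumbering and can be checked with:
gpart list -l da0
```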
On my system, only the explicit GPT labels are enabled; the other GEOM label flavors are switched off:

kern.geom.label.disk_ident.enable: 0
kern.geom.label.gptid.enable: 0
kern.geom.label.gpt.enable: 1
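Assuming a stock FreeBSD install, these are boot-time tunables, so the usual place to set them persistently is /boot/loader.conf. A sketch mirroring the values above:

```shell
# /boot/loader.conf -- create only the explicit GPT labels
# (/dev/gpt/*) and hide the redundant label flavors
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.geom.label.gpt.enable="1"
```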
Here is my zpool status:

# zpool status
pool: stor1
state: ONLINE
scan: scrub repaired 0 in 2h14m with 0 errors on Mon Sep 18 18:42:32 2017
config:
        NAME                    STATE     READ WRITE CKSUM
        stor1                   ONLINE       0     0     0
          raidz1-0              ONLINE       0     0     0
            label/cca255027d59  ONLINE       0     0     0
            label/cca255027f11  ONLINE       0     0     0
            label/cca2550291e5  ONLINE       0     0     0
        logs
          mirror-1              ONLINE       0     0     0
            gpt/slog-IN896a9    ONLINE       0     0     0
            gpt/slog-IN89db9    ONLINE       0     0     0
        cache
          gpt/l2arc-IN896a9     ONLINE       0     0     0
          gpt/l2arc-IN89db9     ONLINE       0     0     0
And how do I set it up so it uses a UUID or something?
I hate it when one drive goes offline and da1 becomes da0 and causes me havoc.
I have 2 drives thinking they are the same zpool!
Well put. This should probably be the first choice.

I think the question you are really asking is: How do I identify my block devices (whole disks, or partitions on a disk), so I don't get the ambiguity caused by the OS renumbering devices depending on which devices were found in hardware (which turns da0 into da1)?
root@hera:~ # geom disk list
Geom name: ada0
Providers:
1. Name: ada0
Mediasize: 64023257088 (60G)
Sectorsize: 512
Mode: r2w2e4
descr: TS64GSSD370S
ident: D646741862
rotationrate: 0
fwsectors: 63
fwheads: 16
Geom name: ada1
Providers:
1. Name: ada1
Mediasize: 64023257088 (60G)
Sectorsize: 512
Mode: r2w2e4
descr: TS64GSSD370S
ident: D646741935
rotationrate: 0
fwsectors: 63
fwheads: 16
Geom name: ada2
Providers:
1. Name: ada2
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e1
descr: WDC WD40EFRX-68N32N0
lunid: 50014ee263e2f181
ident: WD-WCC7K4TFR4Z3
rotationrate: 5400
fwsectors: 63
fwheads: 16
Geom name: ada3
Providers:
1. Name: ada3
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e1
descr: WDC WD40EFRX-68N32N0
lunid: 50014ee2b93fc0c7
ident: WD-WCC7K5HLXRH7
rotationrate: 5400
fwsectors: 63
fwheads: 16
root@hera:~ # sysctl -a | grep DISK
kern.geom.conftxt: 0 DISK ada3 4000787030016 512 hd 16 sc 63
0 DISK ada2 4000787030016 512 hd 16 sc 63
0 DISK ada1 64023257088 512 hd 16 sc 63
0 DISK ada0 64023257088 512 hd 16 sc 63
z0xfffff8000a3c2700 [shape=box,label="DISK\nada3\nr#1"];
z0xfffff8000a3c2a00 [shape=box,label="DISK\nada2\nr#1"];
z0xfffff8000a3c2d00 [shape=box,label="DISK\nada1\nr#1"];
z0xfffff8000a7a2800 [shape=box,label="DISK\nada0\nr#1"];
<name>DISK</name>
root@hera:~ # cat /var/run/dmesg.boot | grep ada
ada0 at ahcich2 bus 0 scbus2 target 0 lun 0
ada0: <TS64GSSD370S P1225CA> ACS-2 ATA SATA 3.x device
ada0: Serial Number D646741862
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 1024bytes)
ada0: Command Queueing enabled
ada0: 61057MB (125045424 512 byte sectors)
ada1 at ahcich3 bus 0 scbus3 target 0 lun 0
ada1: <TS64GSSD370S P1225CA> ACS-2 ATA SATA 3.x device
ada1: Serial Number D646741935
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 1024bytes)
ada1: Command Queueing enabled
ada1: 61057MB (125045424 512 byte sectors)
ada2 at ahcich4 bus 0 scbus4 target 0 lun 0
ada2: <WDC WD40EFRX-68N32N0 82.00A82> ACS-3 ATA SATA 3.x device
ada2: Serial Number WD-WCC7K4TFR4Z3
ada2: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 3815447MB (7814037168 512 byte sectors)
ada2: quirks=0x1<4K>
ada3 at ahcich5 bus 0 scbus5 target 0 lun 0
ada3: <WDC WD40EFRX-68N32N0 82.00A82> ACS-3 ATA SATA 3.x device
ada3: Serial Number WD-WCC7K5HLXRH7
ada3: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 3815447MB (7814037168 512 byte sectors)
ada3: quirks=0x1<4K>
You can also use camcontrol to list the HDDs found at boot:

root@hera:~ # camcontrol devlist
<TS64GSSD370S P1225CA> at scbus2 target 0 lun 0 (pass0,ada0)
<TS64GSSD370S P1225CA> at scbus3 target 0 lun 0 (pass1,ada1)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus4 target 0 lun 0 (pass2,ada2)
<WDC WD40EFRX-68N32N0 82.00A82> at scbus5 target 0 lun 0 (pass3,ada3)
root@hera:~ # camcontrol identify ada0
pass0: <TS64GSSD370S P1225CA> ACS-2 ATA SATA 3.x device
pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 1024bytes)
protocol ATA/ATAPI-9 SATA 3.x
device model TS64GSSD370S
firmware revision P1225CA
serial number D646741862
cylinders 16383
heads 16
sectors/track 63
sector size logical 512, physical 512, offset 0
LBA supported 125045424 sectors
LBA48 supported 125045424 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6
media RPM non-rotating
Feature Support Enabled Value Vendor
read ahead yes yes
write cache yes yes
flush cache yes yes
overlap no
Tagged Command Queuing (TCQ) no no
Native Command Queuing (NCQ) yes 32 tags
NCQ Queue Management no
NCQ Streaming no
Receive & Send FPDMA Queued no
SMART yes yes
microcode download yes yes
security yes no
power management yes yes
advanced power management no no
automatic acoustic management yes no 0/0x00 0/0x00
media status notification no no
power-up in Standby no no
write-read-verify no no
unload no no
general purpose logging yes yes
free-fall no no
Data Set Management (DSM/TRIM) yes
DSM - max 512byte blocks yes 8
DSM - deterministic read yes zeroed
Host Protected Area (HPA) yes no 125045424/125045424
HPA - Security
root@hera:~ # smartctl -i /dev/ada0
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.1-RELEASE-p1 amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: SiliconMotion based SSDs
Device Model: TS64GSSD370S
Serial Number: D646741862
Firmware Version: P1225CA
User Capacity: 64,023,257,088 bytes [64.0 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Tue Sep 19 18:15:16 2017 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Disk - The most basic type of vdev is a standard block device. This can be an entire disk (such as /dev/ada0 or /dev/da0) or a partition (/dev/ada0p3). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
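A short sketch contrasting the two forms the Handbook mentions (pool, device, and label names are invented for illustration; the two zpool create lines are alternatives, not a sequence):

```shell
# Alternative 1: whole-disk vdev. Valid on FreeBSD with no
# performance penalty, but the vdev is listed as "ada2" and
# follows the kernel's renumbering.
zpool create tank ada2

# Alternative 2: a labeled GPT partition as the vdev. Same
# performance, but zpool status shows the stable name gpt/tank-d0.
gpart create -s gpt ada2
gpart add -t freebsd-zfs -a 1M -l tank-d0 ada2
zpool create tank gpt/tank-d0
```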
predrag@oko$ more /etc/fstab
7ccf1c164f5347b4.b none swap sw
7ccf1c164f5347b4.a / ffs rw 1 1
7ccf1c164f5347b4.l /home ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.d /tmp ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.f /usr ffs rw,nodev 1 2
7ccf1c164f5347b4.g /usr/X11R6 ffs rw,nodev 1 2
7ccf1c164f5347b4.h /usr/local ffs rw,nodev,wxallowed 1 2
7ccf1c164f5347b4.j /usr/obj ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.k /usr/ports ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.i /usr/src ffs rw,nodev,nosuid 1 2
7ccf1c164f5347b4.e /var ffs rw,nodev,nosuid 1 2
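That fstab is OpenBSD-style: the 7ccf1c164f5347b4 prefix is the disklabel UID (DUID) of the drive, and the trailing letter selects the partition. As a sketch (sd0 is an example drive name), the DUID can be read back with disklabel:

```shell
# Show the DUID that entries like 7ccf1c164f5347b4.a refer to
disklabel sd0 | grep duid
```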
glabel is a must. These are my boot drives:

root@hera:~ # glabel list
Geom name: ada0p1
Providers:
1. Name: gpt/gptboot0
Mediasize: 524288 (512K)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 20480
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 1024
length: 524288
index: 0
Consumers:
1. Name: ada0p1
Mediasize: 524288 (512K)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 20480
Mode: r0w0e0
Geom name: ada1p1
Providers:
1. Name: gpt/gptboot1
Mediasize: 524288 (512K)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 20480
Mode: r0w0e0
secoffset: 0
offset: 0
seclength: 1024
length: 524288
index: 0
Consumers:
1. Name: ada1p1
Mediasize: 524288 (512K)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 20480
Mode: r0w0e0
Dude, that is really bad advice! Generally speaking, storage drives (the ones which are not used to boot from) should not contain any partitions, in spite of the claim that, unlike Solaris, FreeBSD doesn't penalize you for using partitions.
Adding a single-partition GPT table is more or less a no-brainer and could easily be scripted if you have to replace disks on a daily basis. However, replacing failed HDDs which contain partitions in a ZFS pool is more involved than replacing disks which don't contain partitions.
# gpart create -s gpt da15
# gpart add -t freebsd-zfs -a 1M -l jbod1-a5 da15
# zpool replace poolname gpt/jbod1-a5
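The three replacement steps above are easy to wrap in a small function, as a sketch (poolname, da15, and jbod1-a5 are the example names from above; the DRY_RUN switch is a convenience added here, not part of any FreeBSD tool):

```shell
#!/bin/sh
# Sketch: script the single-partition disk replacement shown above.
# Set DRY_RUN=1 to print the commands instead of executing them.
replace_disk() {
    pool=$1
    disk=$2
    label=$3
    for cmd in \
        "gpart create -s gpt $disk" \
        "gpart add -t freebsd-zfs -a 1M -l $label $disk" \
        "zpool replace $pool gpt/$label"
    do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "$cmd"
        else
            $cmd || return 1    # stop on the first failing step
        fi
    done
}
```

For example, `DRY_RUN=1 replace_disk poolname da15 jbod1-a5` prints the same three commands as the transcript above.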