MB change and Zpool-mirror

Hello

Is it possible to change the motherboard and still have access to an existing zpool mirror?

At the moment there are 3 disks: ada0 is the system disk, and ada1 and ada2 form the zpool mirror. I guess ada0 is connected to sata0, while ada1 and ada2 are connected to sata3 and sata4. The new motherboard has 6 SATA ports, but I will still connect the disks to sata0 and sata3+4.

Will the zpool be detected automatically, or will there be problems detecting it because of the new hardware?


regards
schwedenmann
 
I recently did this. I have two drives mirrored for boot/os, then two more drives mirrored for /home.
If I remember correctly, I just made sure that the drives were plugged into the corresponding SATA ports on the new motherboard, and no further configuration was needed.
 
I'll toss in my agreement; it is also a good reason to use labels of some kind when creating the zpool.
I'm not sure exactly what would happen if you swapped the connections (say, old ada0 onto new ada3's port), but as SirDice implies, the zpool exists at a higher level: my understanding is that GEOM and ZFS look at all the devices and reassemble the pool from its members as needed.
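If you want to check which labels (if any) your existing providers already carry, something along these lines should work (the device name is just a placeholder):

    gpart show -l ada1        # GPT partition labels on that disk, if any
    glabel status             # all GEOM label providers (gpt/, gptid/, diskid/...)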
 
Order doesn't matter either, except maybe for the first disk, as that one typically carries the boot code, which is obviously needed to boot the system. But other than that, ZFS isn't going to care; it'll find each drive based on the metadata that's stored on the disk itself. You can shuffle them around, attach each disk to a different controller, and ZFS will still find everything. It might take a little longer the first time, but this won't be a problem.

The only thing you need to watch out for is disks that have been set up as a single-drive RAID0 volume, as these depend on metadata that's been put on the drive by the RAID controller. But as long as these are "real" JBOD disks, this shouldn't be an issue.
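Normally the pool just shows up again at boot, but in case it doesn't, a manual import is all it should take. A minimal sketch, using the pool name "data" from this thread:

    zpool import            # scans all providers and lists importable pools
    zpool import data       # imports the pool named "data", wherever its disks ended up
                            # (may need -f if the pool was never cleanly exported)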
 
Wow... on paper one would not think a ZFS pool is tied to a chipset on the board.
But... if "he chose poorly" were to apply, I would cry a whole lot if that pool were not fully backed up.
 
ZFS doesn't care about the naming of the drives, it has everything it needs in the metadata of the specific provider and uses UUIDs internally.
You can change kern.geom.label.disk_ident.enable, kern.geom.label.gptid.enable and kern.geom.label.gpt.enable as you like to switch between the representations shown in, e.g., zpool status output (e.g. from device names like ada0 to GPT labels). The pool will still work as intended, just with new "names" for the providers. The same goes for randomly switching the drives around, e.g. by rearranging them in the backplane - ZFS simply doesn't care.
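Roughly like this in /boot/loader.conf (these are loader tunables, so a reboot is needed for them to take effect; which representation you actually see also depends on which labels exist on the disks):

    # hide diskid/ and gptid/ aliases, prefer GPT labels (gpt/...) in zpool status
    kern.geom.label.disk_ident.enable="0"
    kern.geom.label.gptid.enable="0"
    kern.geom.label.gpt.enable="1"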
 
ZFS doesn't care about the naming of the drives, it has everything it needs in the metadata of the specific provider and uses UUIDs internally.
What happens if I have "cloned" one disk to a different disk using dd(1)? Maybe even before I have set up a pool. From my understanding the disks should have identical UUIDs after the "cloning". Or is there a disk-specific UUID stored in flash or EEPROM which cannot be overwritten?
 
My understanding:
dd reads disk blocks (on spinning disks or SSDs), so anything that is written in a disk block gets duplicated.
If you use something like geom, camcontrol or smartctl, you are getting information unique to the device itself. Is it stored in flash or EEPROM? I don't know, but most likely.
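A rough way to see the difference (ada0 is just an example device): these report identity that lives in the device itself, while dd only ever copies what sits in the data blocks.

    geom disk list ada0          # ident/serial as seen by GEOM
    camcontrol identify ada0     # ATA identify data, including the serial number
    smartctl -i /dev/ada0        # needs sysutils/smartmontools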
 
What happens if I have "cloned" one disk to a different disk using dd(1)? Maybe even before I have set up a pool. From my understanding the disks should have identical UUIDs after the "cloning". Or is there a disk-specific UUID stored in flash or EEPROM which cannot be overwritten?
This should work just fine, even if you dd to a completely different disk type (e.g. NVMe). As said: ZFS only cares about its own metadata, not some arbitrary name or the simplified representation the GEOM layer provides to us meatsacks who can't really work with UUIDs...
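A rough sketch of such a clone, with hypothetical device names (ada2 as the old mirror member, nvd0 as the new disk). Double-check the names before running anything; dd to the wrong target destroys data, and the original member should be detached afterwards so the pool doesn't see two copies of the same member.

    zpool export data                                    # stop using the pool first
    dd if=/dev/ada2 of=/dev/nvd0 bs=1m status=progress   # copy everything, block for block
    zpool import data                                    # ZFS finds the member by its on-disk metadata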
 
Thanks for the answers, so the change of motherboard should work.

Here is some info:
The pool:

zpool status
  pool: data
 state: ONLINE
  scan: resilvered 766G in 03:45:48 with 0 errors on Sun Oct 28 14:35:30 2018
config:

        NAME                        STATE     READ WRITE CKSUM
        data                        ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            ada1p1                  ONLINE       0     0     0
            diskid/DISK-ZFN18C7Zp1  ONLINE       0     0     0

errors: No known data errors
The FreeBSD version is:

freebsdserver# freebsd-version
13.1-RELEASE-p5

This will be updated to release 14 as soon as possible.

I have only one more question. As you can see in zpool status, one disk has a partition, while the other has no partition, only a GPT partition table. What is best for a ZFS pool: a disk with one partition, or without?

thanks
schwedenmann
 
What is best for a ZFS pool: a disk with one partition, or without?
My opinion: I like partitions; others like no partitions (whole disk). Why do I like partitions? Consistency. Not all 1 TB drives are created equal (the same size); with partitions you can guarantee that every provider in a vdev is the same size.
There may also be advantages if you are using SSDs: partitions mean you are leaving some of the disk unused, which means the firmware can have an easier time erasing and remapping. It may be theoretical, it may be real.
I ran across this advice shortly after it was written; it made a lot of sense to me and I still follow it today.
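For what it's worth, a sketch of how that could look with GPT labels and partitions deliberately left a bit smaller than the raw disk. The sizes and label names (disk0, disk1) are made up here for a pair of roughly 1 TB drives:

    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -a 1m -s 930g -l disk0 ada1
    gpart create -s gpt ada2
    gpart add -t freebsd-zfs -a 1m -s 930g -l disk1 ada2
    # build the mirror from the GPT labels rather than the device names
    zpool create data mirror gpt/disk0 gpt/disk1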

 