Solved GELI RAIDZ-2 HDD replacement failed - system does not boot anymore

Hi there,

thanks for reading and thanks for any useful help & tips in advance.

I usually get quite far by searching and reading on my own, since nearly every problem I run into somebody else has already had and solutions can be found, but not this time...

I wanted to build a NAS for our LAN (media share, backups).
So I used an HP medium tower that was taken out of service and had fallen into my hands, placed four identical 500 GB HDDs into it and installed FreeBSD 11.2.
My ideas were:
- place the whole system completely within the RAID, not just the data pool, so the OS is also RAID-protected if one (or two) disks fail
- ZFS seems to have two advantages for me: failed disks are easy to replace, and the storage volume can be grown to any size by adding/replacing disks
- encrypt the whole system

So in the installation menu I chose:
a) Guided ZFS RAIDZ-2 - using all four disks completely - GPT
b) encrypt the disks (GELI)

That gave me the following partition scheme on each disk:
...disk-id ... GPT (466GB)
1 efi (200M)
2 freebsd-boot (512K)
- free - (492K)
3 freebsd-zfs (2.0G)
4 freebsd-swap (8.0G)
5 freebsd-zfs (456G)
- free - (4.0K)

which gave me two pools, each made of four partitions:
adaXp3 -> bootpool
and
adaXp5.eli -> mypool (those are the encrypted ones, as I understand it)
df shows me an overall HDD capacity of 846G (>500G, about 1.6 times as much - that's nice, but I don't really get the number, other than that RAIDZ-2 apparently does not simply mean plain four-fold redundancy.)
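(If I understand RAIDZ-2 right, a rough back-of-the-envelope check - purely my assumption - would be: usable space ≈ (4 disks - 2 parity) x 456G per data partition ≈ 912G raw, and the 846G that df reports is in that ballpark once ZFS metadata and reservation overhead are subtracted.)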

However, everything was working nicely so far.

Then I tried to replace the four (not failed) 500 GB HDDs with four identical 1 TB HDDs - as a test, to figure out how to handle HDD replacements within ZFS and, of course, to enlarge the capacity.

For that I attached each new 1 TB HDD, one at a time, to the remaining free SATA port 5
and copied the partition scheme of its associated old HDD onto it, since I understand zpool replace cannot use a naked blank disk as a target; the disk needs to be partitioned and available in the system.
So each new 1 TB HDD had exactly the same partition scheme as the former 500 GB one - except for more free space behind p5 (approx. 500G).
Then I did the zpool replace, and after that procedure the new disk was physically moved to the same SATA port as the former HDD its partition scheme had been copied from.
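(For reference, one way to clone a GPT scheme on FreeBSD - just a sketch, assuming ada3 is the old disk and ada4 the new, empty one - is:
# gpart backup ada3 | gpart restore -F ada4
gpart restore -F wipes whatever scheme is already on the target; since the new disk is bigger, the extra ~500G simply stays as free space behind p5.)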

But before the physical replacement I started the zpool replacement for each HDD:
# zpool replace bootpool ada3p3 ada4p3
# zpool replace mypool ada3p5 ada4p5
and since all disks are boot disks, as I understand it, I also copied the boot code each time, as the zpool message said:
" Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0"
So I did:
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4
(Maybe that was a mistake. Or the mistake was that I should have waited until resilvering was done before doing that.)
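Looking back, for the encrypted pool the new partition would probably have needed its own GELI setup before the replace, so the pool gets rebuilt onto an encrypted provider instead of the raw partition. A rough sketch only - the keyfile path and the cipher parameters are my assumptions about the installer defaults; geli dump ada3p5 would show the real values:
# geli init -b -e AES-XTS -l 256 -s 4096 -K /boot/encryption.key ada4p5
# geli attach -k /boot/encryption.key ada4p5
# zpool replace mypool ada3p5.eli ada4p5.eli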

Then I waited until zpool status showed me the system was no longer resilvering, shut down the PC, replaced the drive physically, rebooted and repeated the same procedure with the next of the remaining three HDDs:
ada2, ada1, ada0 - in this order. (But the order should not have any effect, or does it?)
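(What I watched for each time was - roughly - the "scan:" line of
# zpool status mypool
which reads "resilver in progress ..." while rebuilding and something like "resilvered ... with 0 errors on <date>" once it is done.)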

I was a bit perplexed about two things.
After each step, when I looked at zpool status, the partition names had changed:
old disk ada3p5.eli -> new disk ada3p5
(So the encryption was dropped, or rather the data was copied but the new partition was not encrypted yet (the zfs/zpool job is done independently of GELI)?)
and after # gpart bootcode ...
ada3p5 -> /gpt/disk3 (or something similar. I think # gpart bootcode was a mistake, but I don't know why.)
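(To see which names/labels actually belong to which partitions, one can check for example
# gpart show -l ada3
# glabel status
gpart show -l lists the GPT labels per partition, and glabel status lists all label providers the system currently knows about.)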

However,
if the four new 1 TB HDDs failed to come up as a running system, I would have had no trouble at all - just look for the mistakes I made and try again.
If anything fails, I can always fall back on my 'old' running system consisting of the four 500G HDDs - I thought. WRONG!

The system is not booting anymore:
Not even - and that's the really annoying part for me - from the former four 500 GB HDDs, after I physically put them back 1:1 at their former SATA ports!
"gptzfsboot: No ZFS pools located, can't boot"

All the zfs/zpool related material I find on the internet deals either with a single zpool only or with GELI encryption alone.
But I have two zpools, and one of them consists of four encrypted partitions.

So, here are my questions:
To my understanding, copying anything to another place should not affect the source.
zpool replace is obviously not a simple copy routine but also affects the source? How? Why?
What do I not understand?

Does the data still exist on the former 500 GB HDDs, and is there a chance to get access to it again?

When I boot into a live system (FreeBSD 11.2 from a USB stick) I can see all four disks with their partitioning scheme - but I cannot mount them, because they are parts of a zfs/zpool and encrypted - as far as I get it.
But what do I need to do first:
get access to the zpool(s) and then take care of the encryption,
or vice versa, deal with the encryption first and then get access to the zpool?
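A rough sketch of what I assume the order would have to be: import the unencrypted bootpool first, since it holds the GELI keyfile, then attach each encrypted provider, then import the data pool. Pool names as above; the keyfile location is my assumption about the installer layout:
# zpool import -f -R /tmp/bp bootpool
# geli attach -k /tmp/bp/bootpool/boot/encryption.key ada0p5
(repeated for p5 of every disk; the passphrase is asked for)
# zpool import -f -R /mnt mypool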

Thanks for reading, understanding and for any useful help

Profighost
 
Well, after learning a bit more about this stuff I think I understand the following points:
The replace command detached the old partitions - they are in some "passive/excluded" state; so the data may still be available, but it could be hard to get the system back running as it was before, since I would not only have to reattach the partitions to their former pool (which has to be done from an external live system anyhow...) but additionally combine that with the RAIDZ plus the encryption at the same time...
However, since I still have the data backed up on other HDDs, I suffer no real data loss, so it would be more efficient and quicker to set up a new system...

Furthermore I've learned that RAID may not offer me the data-loss protection I hoped for, and that ZFS works a bit differently than I thought it does.
I have to get a better understanding of how ZFS puts together drives, vdevs, pools, etc.,
and maybe I'd better work with snapshots, mirrors and "traditional" backups anyway instead of relying on RAIDZ2 on ZFS alone.

I have to rethink the topology for the next shot and take a closer look before I decide.

Thanks for reading and publishing, but I think this one can be removed :)

Thanks

Profighost
 
So I figured out my problem:
when doing
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
it is very useful to write the bootcode(s) not only onto the correct drive (adaX) but also into the correct boot partition:
-i X (!)
So in my example above (1st post) I should have written
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada4
because ada4p2 is the boot partition, not p1...

No wonder the system does not boot anymore if the boot partition contains no bootcode, duh!
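(A quick way to double-check the index before writing anything:
# gpart show ada4
lists every partition with its index and type, so the freebsd-boot one - and the efi one, for that matter - can be identified first.)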

RTFMaui! ...and understand it! :-D
 
I'm still facing problems with this sh...-it.
My idea was to set up a 4x500GB RAIDZ2 ZFS filesystem and then grow it to 4x3TB.
Why not set up 4x3TB directly?
To see how to handle disk replacements and how the system behaves - to learn.
If I simply set up the system, which would take me 3-4 hours plus copying the data, and then rely on it, I'm pretty sure that when the first disk fails in a couple of years and I have absolutely no experience whatsoever, a data loss would be far more likely than if I had already done things, gathered experience and notes, and could handle the problem calmly and without panicking because I had already done some disk replacements.
And I want to be able to rely on the fact that it can be done.

So, setting up RAIDZ2 ZFS on 4x500 GB with guided disk usage during installation works fine and gives me the partition scheme shown in my first post. (By the way, is anybody actually reading this? If not - simply close and delete it, please.)
Since I understood RAIDZ2 roughly corresponds to RAID 6 - four disks are combined for faster reads/writes plus data redundancy at the cost of about half the available storage - and it withstands a failure of two disks, I detach two disks and see what happens.
And yes, no matter which two disks I detach, the system boots, comes up and looks as if nothing had happened - besides, of course, zpool status reporting a degraded pool with the two missing disks/partitions.
So my next idea was:
if this is so, why not replace the disk(s) by "simulating" a disk failure? Because if one or two disks fail, I have to replace the disks anyway - and there shall be no data loss.
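(In hindsight, a gentler way to simulate a failure without pulling cables would probably have been to offline a device, e.g.
# zpool offline mypool ada3p5.eli
and # zpool online mypool ada3p5.eli to bring it back - device names as in my first post.)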
So I unplugged two of the 500GB HDDs, attached two 3TB ones and rebooted. Everything's fine so far.
Then I copied the partition scheme onto the new HDDs, replaced the partitions in the pool, waited until resilvering was done, copied the bootcodes, shut down the PC, detached the remaining two 500GB HDDs, attached the second pair of 3TB ones and tried to reboot...
It does not work. The system does not come up anymore.
That's what I do not understand, what really irritates me.
I cannot figure out: am I doing something wrong, or is it something about ZFS/RAIDZ2 that I simply do not understand yet?

All I can make out is that there must be a difference between the disks, or in how their replacement has to be handled.
However, that is not what I want.
If one or - worst case - two disks fail, I want to be sure I can simply replace them and the system comes back up as if nothing had happened.

So after all, I figured it doesn't matter whether I understand ZFS correctly or not; it simply does not help me with what I want. For my current needs it only complicates my system, so I do not need it - at least not now, for this job.

However, I rethought what I actually want:
Must:
- I want additional storage space available within our LAN (NFS/Samba)
- it shall be our backup storage pool, so there must be no data loss if a certain number of physical storage devices (HDDs) fail (aging/wear)
Nice:
- failed disks can be replaced
- the storage volume shall be growable, expandable by adding new disks or replacing disks with higher capacity
Don't care:
- file system or partition scheme, as long as it provides readable/writeable data via LAN (NFS/Samba) to other machines running FreeBSD, MacOS, Windows or Linux.

So I think it would be best for my task to reduce it to the tools that already do the job:
graid, geom
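Only a sketch with made-up names, and gmirror is just one of the GEOM classes, but a plain mirror for example would look something like:
# gmirror load
# gmirror label -v gm0 ada0 ada1
# newfs /dev/mirror/gm0
# mount /dev/mirror/gm0 /storage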
 
SOLVED - at least the boot problem.
After reconsidering it again I gave ZFS another shot, and now it runs the way I wanted.
The problem was neither ZFS nor GPT but (U)EFI.
UEFI => more security, less boot
After copying/creating the partition table according to https://wiki.freebsd.org/UEFI I used newfs_msdos to create a filesystem on adaXp1 (see the partition table above) and copied loader.efi onto it.
That was the crucial missing piece explaining why my replaced HDDs weren't bootable.
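For reference, the steps looked roughly like this - assuming ada0p1 is the 200M efi partition, repeated for every disk; copying loader.efi to the standard fallback path EFI/BOOT/BOOTX64.EFI is my reading of the wiki:
# newfs_msdos /dev/ada0p1
# mount -t msdosfs /dev/ada0p1 /mnt
# mkdir -p /mnt/EFI/BOOT
# cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.EFI
# umount /mnt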
 