Deleted member 43773
Guest
Hi there,
thanks for reading, and thanks in advance for any useful help and tips.
I usually get quite far on my own by searching and reading, since nearly every problem I run into has already been solved by somebody else, but this time I am stuck.
I wanted to build a NAS for our LAN (Media share, Backups).
So I took an HP medium tower that had been taken out of service and fell into my hands, put four identical 500 GB HDDs into it, and installed FreeBSD 11.2.
My ideas were:
- put the whole system inside the RAID, not just the data pool, so the OS is also protected if one (or two) disks fail.
- ZFS seemed to have two advantages for me: failed disks are easy to replace, and the pool can be grown to any size by adding or replacing disks.
- encrypt the whole system.
So in the installation menu I chose:
a) Guided ZFS RAIDZ-2 - using all four disks completely - GPT
b) encrypt the disks (GELI)
That gave me the following partition scheme on each disk:
...disk-id ... GPT (466GB)
1 efi (200M)
2 freebsd-boot (512K)
- free - (492K)
3 freebsd-zfs (2.0G)
4 freebsd-swap (8.0G)
5 freebsd-zfs (456G)
- free - (4.0K)
which gave me two pools, each made of four partitions:
adaXp3 -> bootpool
adaXp5.eli -> mypool (these are the encrypted ones, as I understand it)
df shows me an overall capacity of 846G, much more than a single disk's 500G (about 1.6 times). That's nice, but I only partly understand it: apparently RAID-Z2 does not mean plain fourfold redundancy, but keeps two disks' worth of parity, so four disks give roughly two disks' worth of usable space.
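My rough understanding of the arithmetic, as a back-of-the-envelope shell calculation (partition size taken from the scheme above; that the gap down to the 846G reported by df is metadata and reservations is an assumption on my part):

```shell
# RAID-Z2 stores two parity blocks per stripe, i.e. two disks' worth of
# parity in a four-disk vdev, so usable space is (disks - parity) * size.
disks=4
parity=2
per_disk_g=456       # size of the freebsd-zfs p5 partition on each disk
usable_g=$(( (disks - parity) * per_disk_g ))
echo "${usable_g}G"  # 912G raw; df reports somewhat less (846G) after overhead
```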
However, everything was running fine so far.
Then I tried to replace the four (still healthy) 500 GB HDDs with four identical 1 TB HDDs, both as an exercise in handling disk replacement under ZFS and, of course, to enlarge the capacity.
For that I attached each new 1 TB HDD, one at a time, to the remaining free SATA port 5 and copied the partition scheme of the old HDD onto it, since as I understand it zpool replace cannot target a naked blank disk; the target has to be present in the system.
So each new 1 TB HDD had exactly the same partition scheme as the former 500 GB one, except for roughly 500G of free space after p5.
Only after zpool replace had finished did I physically move the new disk to the SATA port of the old HDD its partition scheme was copied from.
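For the record, the partition copying itself can be done with gpart's backup/restore pair; the device names here match my setup (old disk ada3, new disk on port 5 showing up as ada4):

```shell
# Replay the GPT of the old disk onto the new one.
# -F forces restore to destroy any scheme already present on the target.
# Note: this copies only the partition table, not partition contents
# (e.g. the efi partition's files).
gpart backup ada3 | gpart restore -F ada4
```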
So, before each physical swap, I ran the replacement for each HDD:
#zpool replace bootpool ada3p3 ada4p3
#zpool replace mypool ada3p5 ada4p5
and since all disks are boot disks, as I understand it, I also installed the boot code each time, as the zpool message suggested:
" Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0"
So I did:
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4
(Maybe that was a mistake. Or the mistake was that I should have waited until resilvering was done before doing that.)
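One thing I notice only now while writing this: the hint in the zpool message assumes the freebsd-boot partition is index 1, but in my scheme above index 1 is the efi partition and freebsd-boot is index 2. So perhaps the command should have been:

```shell
# freebsd-boot is p2 in my layout, so -i 2 rather than the -i 1 from the hint
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada4
```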
Then I waited until zpool status showed that resilvering had finished, shut down the PC, swapped the drive physically, rebooted, and repeated the same procedure with the next of the remaining three HDDs:
ada2, ada1, ada0, in that order. (But the order shouldn't matter, should it?)
I was a bit perplexed by two things.
After each step, when I looked at zpool status, the device names had changed:
old disk ada3p5.eli -> new disk ada3p5
(So the data was copied, but the new partition was not encrypted yet? Apparently zfs/zpool did its job independently of GELI and the encryption layer was dropped?)
and after # gpart bootcode ...:
ada3p5 -> /gpt/disk3 (or something similar. I think gpart bootcode was a mistake, but I don't know why.)
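In hindsight I suspect the encrypted pool wanted a GELI provider as the replacement target, so the .eli layer would have had to be created on the new partition first. A hypothetical corrected sequence for one disk (the key file location is the installer default as far as I know; please correct me if I am wrong):

```shell
# Create and attach the GELI layer on the new partition, then replace with
# the .eli device so the new pool member stays encrypted.
geli init -b -e AES-XTS -l 256 -K /boot/encryption.key /dev/ada4p5
geli attach -k /boot/encryption.key /dev/ada4p5
zpool replace mypool ada3p5.eli ada4p5.eli
```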
However, even if the four new 1 TB HDDs failed to come up as a running system, I thought I was in no trouble at all: I could look for the mistakes I made and try again, because I could always fall back on my 'old' running system of four 500G HDDs. WRONG!
The system does not boot anymore.
Not even - and that's the really annoying part for me - from the former four 500 GB HDDs after I put them back 1:1 on their former SATA ports!
"gptzfsboot: No ZFS pools located, can't boot"
Everything zfs/zpool-related that I find on the internet deals either with a single zpool or with GELI encryption alone.
But I have two zpools, and one of them consists of four encrypted partitions.
So, here are my questions:
To my understanding, copying something to another place should not affect the source.
zpool replace is obviously not a simple copy routine, but also affects the source? How? Why?
What am I not understanding?
Does the data still exist on the former 500 GB HDDs, and is there a chance to get access to it again?
When I boot a live system (FreeBSD 11.2 from a USB stick) I can see all four disks with their partitioning scheme, but I cannot mount them, because they are part of a zfs pool and encrypted, as far as I understand.
But what do I need to do first:
get access to the zpool(s) and then take care of the encryption,
or vice versa, deal with the encryption first and then get access to the zpool?
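My current guess at the order, sketched from the live USB system (pool names are mine from above; the exact path of the key file under the imported bootpool is an assumption):

```shell
# 1) import the unencrypted bootpool to reach the GELI key file,
# 2) attach the GELI layer of every disk, 3) import the encrypted pool.
zpool import -f -R /tmp/bp bootpool
geli attach -k /tmp/bp/boot/encryption.key /dev/ada0p5   # repeat for ada1p5..ada3p5
zpool import -f -R /mnt mypool
```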
Thanks for reading, understanding and for any useful help
Profighost