FreeBSD just destroyed my UEFI

  • Thread starter Deleted member 63539
Final answer:

FreeBSD didn't destroy my UEFI

Linux didn't arbitrarily/automatically flash or update my BIOS without asking me

What really caused all of this is a corrupted GPT partition table, which this BIOS doesn't like.

There is no way to completely clear the signature of a zpool.

I'm very careful.

I imported the zpool from the FreeBSD live memstick and ran zpool destroy on it. It didn't work.

I used gpart destroy -F ada1. It didn't work.

I used gpt destroy da1 on DragonFly. It didn't work.

How do I know it didn't work? Open GParted on Linux, and it still shows the old partition table of the previous FreeBSD installation, even though I have overwritten that installation with both DragonFly and OpenBSD. An orange-colored ZFS label is still shown there.

dd if=/dev/zero of=/dev/sdb bs=1M count=256 didn't work. The corrupted GPT partition table and ZFS label were still there.

So what worked?

Only dd if=/dev/zero of=/dev/sdb bs=10M. Yes, letting it zero-fill the SSD completely is what cleared out the corrupted GPT partition table and ZFS label!

I fear this ZFS too much! My current FreeBSD installation is on UFS2. No ZFS, please.
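Zero-filling the whole disk works, but it is slow. A sketch of a faster approach (not from the thread): ZFS stores two of its four labels at the end of the device, and GPT keeps a backup table in the last sectors, so zeroing a few MiB at each end is usually enough. A 64 MiB image file stands in for /dev/sdb here (an assumption), so the commands can be tried without touching real hardware.

```shell
# Wipe BOTH ends of the "disk" instead of all of it. The image file is a
# stand-in for the real device node (e.g. /dev/sdb), which is an assumption.
DISK=disk.img
dd if=/dev/urandom of="$DISK" bs=1M count=64 2>/dev/null    # stand-in "dirty" disk
SIZE_MB=$(( $(wc -c < "$DISK") / 1048576 ))                 # size in MiB
# front: primary GPT + first two ZFS labels
dd if=/dev/zero of="$DISK" bs=1M count=16 conv=notrunc 2>/dev/null
# back: backup GPT + last two ZFS labels
dd if=/dev/zero of="$DISK" bs=1M count=16 seek=$(( SIZE_MB - 16 )) conv=notrunc 2>/dev/null
sync
```

On a real disk you would drop the image-creation line and get the size from the device instead, e.g. with blockdev --getsize64 on Linux or diskinfo on FreeBSD.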
 
Remember after you do the:

# dd if=/dev/zero of=/dev/sdb bs=1M count=256

Also do a:

# sync

Just to be safe. Otherwise it might still be in a buffer by the time an installer analyses the disks.
 
I did these dd runs from a live Linux USB, so I restarted immediately after dd was done. And no, only completely zero-filling the disk with dd could destroy the ZFS signature. It was time-consuming trial and error, so it took me many days to arrive at the final answer.
 
Set sysctl kern.geom.part.check_integrity=0 before doing gpart destroy -F?
Don't know. I have spent hours dd-ing the disks, so I do not want to do that again. Maybe a tutorial in the Howto section about how to completely clear the ZFS signature and GPT partition table is needed.
 
That GEOM won't let you overwrite an active partition table is clearly documented: RTFM dd(1), BUGS section, and RTFM geom(4), DIAGNOSTICS; i.e. sysctl kern.geom.debugflags=0x10 (allow foot-shooting).
 
No. I did the dd on Linux, from a live Linux system on my USB stick. The only commands run on FreeBSD were zpool destroy and gpart destroy -F, both on the live FreeBSD memstick.
 
In an (admittedly) sad way, I have to agree with the OP:
My two fileservers running FreeBSD 12.1 are on UFS, no ZFS.
It just wasn't worth the hassle (at least for me).
 
Err, ZFS is not suitable for a test machine like mine that usually gets reinstalled with many OSes, but it's reasonable for a fileserver, though 😐
 

To be perfectly honest, I personally haven't even tried ZFS. On a fileserver I might do it, but on a desktop I just fail to see what problems those "advanced filesystems" would fix for me, so I just stick to the beaten path of UFS2 on FreeBSD or Ext/XFS on Linux. Some day maybe I'll give it a chance, but I am not in a hurry.
 
Err, ZFS is not suitable for a test machine like mine that usually gets reinstalled with many OSes, but it's reasonable for a fileserver, though 😐
True, but my "problem" was/is that I have to export a Gluster volume via Samba (2x2 disks on both servers), and the hassle with ZFS just wasn't worth it (albeit I got it working in a first "incarnation").
 
To be perfectly honest, I personally haven't even tried ZFS. On a fileserver I might do it, but on a desktop I just fail to see what problems those "advanced filesystems" would fix for me, so I just stick to the beaten path of UFS2 on FreeBSD or Ext/XFS on Linux. Some day maybe I'll give it a chance, but I am not in a hurry.
  • instant snapshots (e.g. every 15 minutes with one of the many utilities in the ports tree)
  • boot environments
  • enhanced data integrity along the path from storage media to application
  • advanced management, esp. for jails & VMs
 
I am not saying it doesn't provide any useful or interesting features. I am basically being lazy, because there is nothing that really pressures me into using it. Also, the memory requirement somewhat scares me.

I once tried Btrfs, which is somewhat of a Linux wannabe version of ZFS, and it did little more than confuse me; it also quickly got into a state that was pretty hard to recover from. Don't get me wrong, I very much expect ZFS to be of way higher quality, but I am still somewhat reluctant, since for as long as I can remember traditional filesystems have just worked for me.
 
On an old laptop I was happy with UFS on gjournal(8) (article), with the gsched(8) I/O scheduler inserted (rc script). The two are independent of each other; you can have both or pick just one. gjournal(8) requires setup during install, though.

I have to admit that FreeBSD's storage/disk concepts are still a bit of a mystery to me, but some setup involving gjournal is very likely what I will end up using (unless it turns out soft updates really make journaling redundant; I've read this somewhere, but it kind of seems too good to be true). I'll probably also set up some mirroring, but I am not really sure about the exact approach yet. I am somewhat drawn to ccd(4), though, as it seems reasonably easy and also somewhat compatible with Linux's software RAID (not that I think I'll really need that, but it's a nice touch).
 
dd if=/dev/zero of=/dev/sdb bs=1M count=256 didn't work. The corrupted GPT partition table and ZFS label were still there.

So what worked?

Only dd if=/dev/zero of=/dev/sdb bs=10M. Yes, letting it zero-fill the SSD completely is what cleared out the corrupted GPT partition table and ZFS label!

Note that a second (backup) copy of the GPT is stored at the end of the disk. This may be why just cleaning the first couple of sectors did not work.
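This can be checked directly: the backup GPT header occupies the last sector of the disk, and its first 8 bytes are the ASCII signature "EFI PART". A small sketch, using an image file as a stand-in for the real device (the device name and size are assumptions):

```shell
# The backup GPT header lives in the LAST sector; a wipe that only covers the
# first 256 MiB of a large disk never reaches it. A 1 MiB image file stands in
# for the real disk here.
DISK=gpt.img
SECTOR=512
dd if=/dev/zero of="$DISK" bs=$SECTOR count=2048 2>/dev/null
LAST=$(( $(wc -c < "$DISK") / SECTOR - 1 ))
# simulate a leftover backup GPT header in the final sector
printf 'EFI PART' | dd of="$DISK" bs=$SECTOR seek=$LAST conv=notrunc 2>/dev/null
# inspect the last sector to see whether the signature survived a front-only wipe:
dd if="$DISK" bs=$SECTOR skip=$LAST count=1 2>/dev/null | head -c 8
# prints: EFI PART
```

If that signature is present after a partial wipe, partition editors will still "see" the old table; zeroing the last sectors (or zero-filling the whole disk, as the OP did) removes it.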
 