Solved: Hard drive GPT gets corrupted after exporting the pool

Hi. I have a hard drive called ada0. It does get mounted on boot, but it doesn't show up in gpart show. It has a ZFS pool on it called wdc; when I export this pool, the disk does show up in gpart show, but ada0 is marked CORRUPT. I tried gpart recover ada0 twice; it works, but the disk goes back to the corrupted state as soon as I export the pool again. While trying to fix it I also got an informational message saying "The primary GPT table is corrupt".
Code:
# dmesg | grep GEOM
GEOM: ada0: the primary GPT table is corrupt or invalid.
GEOM: ada0: using the secondary instead -- recovery strongly advised.
Code:
=>        40  1953525088  ada0  GPT  (932G) [CORRUPT]
          40  1953525088     1  freebsd-zfs  (932G)

I booted a Windows 10 ISO and saw that something was wrong with my hard drive; in Windows' partitioning section it looked as if the disk had been formatted. That's how I noticed it.
 
My guess is that you first created a GPT scheme on the disk and added a freebsd-zfs partition (ada0p1), but when you created the ZFS pool you gave it the entire disk (ada0) instead of the partition ada0p1, which corrupted the GPT table at the start of the disk. Check the zpool history to see how the pool was created; if it was created on ada0 instead of ada0p1, don't try to recover the GPT partition table, as it isn't needed. Just use the entire disk as it is now.
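For example (with the pool name wdc from this thread), zpool status shows which device backs the vdev and zpool history shows the exact create command:
Code:
# zpool status wdc
# zpool history wdc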
 
If the whole disk was added to a ZFS pool then there shouldn't be a partition table at all. Both ZFS and the GPT partition table write some data at the end of the disk, so they keep overwriting each other's metadata (that's why the partition table is marked "CORRUPT").

Either ignore it, or remove the partition table completely.
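If you go the removal route, something along these lines should do it; this is only a sketch, so make sure the data exists somewhere else first and that ada0 really is the disk you mean:
Code:
# zpool export wdc
# gpart destroy -F ada0
The -F flag is needed because gpart refuses to destroy a table that still contains partitions.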
 
You guys hit the jackpot! Thank you very much.
Code:
History for 'wdc':
2023-12-18.15:58:20 zpool create wdc /dev/ada0
 
I didn't look up what ZFS holds in the first sectors (LBA0-LBA3); it may not be critical, but if the disk is still empty and you don't have to back up/restore the data, I would suggest recreating the pool.
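For reference, one way to recreate it cleanly is to put the pool on a freebsd-zfs partition instead of the raw disk, so GPT and ZFS no longer overwrite each other; this is only a sketch, and the GPT label wdc0 is just an example name:
Code:
# zpool destroy wdc
# gpart destroy -F ada0
# gpart create -s gpt ada0
# gpart add -t freebsd-zfs -a 1m -l wdc0 ada0
# zpool create wdc /dev/gpt/wdc0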
 
ada0 has some data on it, but the same data is on another drive too, so I recreated the pool and will start copying the data back to ada0.
 
Copying is done. Also, I had to remove the ZFS label on /dev/ada0 with zpool labelclear -f /dev/ada0, otherwise when I tried to import the pool I got the message below.
Code:
# zpool import wdc
cannot import 'wdc': more than one matching pool
import by numeric ID instead
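For reference, the other way around that message is to let zpool import list the importable pools with their numeric IDs and then import the one you want by ID (the <numeric-id> below is a placeholder for whatever the listing shows):
Code:
# zpool import
# zpool import <numeric-id>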
 