I think I did it.
Code:
Every 3.0s: zpool status                                     galba: Thu Oct 24 21:35:36 2024

  pool: data
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Thu Oct 24 21:34:12 2024
        223G scanned at 2.65G/s, 12.1G issued at 147M/s, 2.97T total
        0B repaired, 0.40% done, 05:51:18 to go
config:

        NAME                                             STATE     READ WRITE CKSUM
        data                                             ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            md-uuid-93c000f1:64b4b6c9:5dd48d17:338fd75c  ONLINE       0     0    35
            ata-WDC_WD80EFPX-68C4ZN0_WD-RD1ALJSD         ONLINE       0     0    40

errors: 1 data errors, use '-v' for a list
For anyone who has the same problem as me, I'll include some useful commands.
My mdadm array superblock is gone after a restart. What to do?
These commands assume you created the RAID on whole devices (e.g. /dev/sdx), not on partitions (e.g. /dev/sdx1).
I had a RAID0, so I had to make sure I used the same device order as when I originally created the array, but no harm is done if you don't get it right on the first try.
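Before recreating anything, you can confirm the superblock really is gone. A quick check looks something like this (the device names here match my setup, adjust for yours):
Code:
cat /proc/mdstat                     # the array no longer shows up here
mdadm --examine /dev/sde /dev/sdf    # should report that no md superblock is detected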
This is how to recreate a RAID0 array after the superblock has been lost:
Code:
mdadm --create /dev/md0 --level 0 --raid-devices=2 /dev/sdf /dev/sde --assume-clean
--assume-clean is very important, because it keeps mdadm from overwriting any existing data.
After this, my old ZFS partitions came back!
Code:
root@galba:~# lsblk
NAME       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda             8:0  0   32G  0 disk
├─sda1          8:1  0   31G  0 part  /
├─sda2          8:2  0    1K  0 part
└─sda5          8:5  0  975M  0 part  [SWAP]
sdb            8:16  0   64G  0 disk
└─sdb1         8:17  0   64G  0 part
sdc            8:32  0  7.3T  0 disk
├─sdc1         8:33  0  7.3T  0 part
└─sdc9         8:41  0    8M  0 part
sdd            8:48  0  7.3T  0 disk
├─sdd1         8:49  0  7.3T  0 part
└─sdd9         8:57  0    8M  0 part
sde            8:64  0  3.6T  0 disk
└─md0           9:0  0  7.3T  0 raid0
  ├─md0p1    259:0  0  7.3T  0 part
  └─md0p9    259:1  0    8M  0 part
sdf            8:80  0  3.6T  0 disk
└─md0           9:0  0  7.3T  0 raid0
  ├─md0p1    259:0  0  7.3T  0 part
  └─md0p9    259:1  0    8M  0 part
I had to run the mdadm --create twice, because I didn't get the order of the disks right on the first try.
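If that happens, the retry is just a stop and another create with the devices swapped. Roughly like this, assuming the same two disks as above:
Code:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level 0 --raid-devices=2 /dev/sde /dev/sdf --assume-clean   # same command, other order, still --assume-clean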
The disk was still UNAVAIL, so I had to bring back the old UUID.
Running zpool import didn't do anything.
This brings back the old UUID:
Code:
mdadm --assemble --update=uuid --uuid=93c000f1:64b4b6c9:5dd48d17:338fd75c /dev/md0
After that, the disk was REMOVED instead of UNAVAIL!
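You can double-check that the array really picked up the old UUID with mdadm --detail, for example:
Code:
mdadm --detail /dev/md0 | grep -i uuid   # should show 93c000f1:64b4b6c9:5dd48d17:338fd75c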
Now it's important to update mdadm.conf if you haven't (I didn't have to, because the UUID matched) and to update the initramfs!
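For reference, on a Debian/Ubuntu-style system that step looks roughly like this (check the appended ARRAY line before keeping it; other distros may use dracut instead):
Code:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append the ARRAY line for /dev/md0
update-initramfs -u                              # rebuild the initramfs so the array assembles at boot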
After that I rebooted and voilà! I successfully brought back the md array.
So I detached the DEGRADED drive and am currently running a zpool scrub. It's twice as fast now, maybe thanks to the RAID0, but I think the old drive was slowing it down.
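The detach and the scrub are plain zpool commands. A sketch with a hypothetical device name (use the real name of the drive you're removing, as shown by zpool status):
Code:
zpool detach data ata-OLD_DRIVE_EXAMPLE   # hypothetical device name, take yours from 'zpool status'
zpool scrub data
Current scrub progress: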
Code:
root@galba:~# zpool status
  pool: data
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Thu Oct 24 21:34:12 2024
        452G scanned at 544M/s, 161G issued at 194M/s, 2.97T total
        0B repaired, 5.31% done, 04:13:00 to go
config:

        NAME                                             STATE     READ WRITE CKSUM
        data                                             ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            md-uuid-93c000f1:64b4b6c9:5dd48d17:338fd75c  ONLINE       0     0    35
            ata-WDC_WD80EFPX-68C4ZN0_WD-RD1ALJSD         ONLINE       0     0    40

errors: 1 data errors, use '-v' for a list