Short version: I think I messed up the slice or partition table on one of my storage arrays, and I'd like to fix it or at least recover the data.
Long version:
I'll preface this by saying I'm pretty much a Unix newbie, so I may have missed something obvious and, if you have any help to offer, I'll probably need you to be as explicit as possible.
I have a file server with multiple RAID 5 arrays that ran FreeBSD 6.0 for years without issue. Recently, however, I attempted to set up several different Linux distros on a spare drive. I couldn't get the drivers for one of my two RAID controllers to work under any of the distros and versions I tried, and in the process of one of the installs I managed to write GRUB to my FreeBSD system drive. That meant I could no longer boot the old system, so I decided I might as well install FreeBSD 7.2 on the spare drive.
The install went fine and I was pleasantly surprised to see that 7.2 supported both of my RAID controllers without any additional configuration. However, the older controller, a RocketRAID 464, had two RAID 5 arrays connected to it and I was only able to mount one of them.
Upon investigating /dev, I found that the array having the issue showed up only as ar0, while for the working array on the same card /dev contained ar1, ar1s1, ar1s1c, and ar1s1d. This leads me to believe the slice was somehow damaged. The array was formatted with ufs2, as was everything else.
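If I understand bsdlabel(8) correctly, a read-only check like this would at least show whether any BSD label survives on the raw device (ar0 being my problem array); I'd welcome confirmation that it's safe to run:
Code:
bsdlabel /dev/ar0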
I can think of two possible causes. The first is that something happened during one of the various installs. The other possibility is that the array was damaged when I rebuilt it: when rebooting to perform the first Linux install, I got an error message from the RR 464's BIOS saying the array I'm currently having an issue with was missing a drive. The missing drive was still being detected in the card's BIOS, so after the first Linux distro I tried wouldn't support the card, I had it rebuild the array overnight. The card indicated the process was successful, but I had no way to verify this in any OS until I installed FreeBSD 7.2 and found the array wouldn't mount.
So, with the history out of the way, these are the diagnostics I've run thus far:
fdisk /dev/ar0 yields:
Code:
******* Working on device /dev/ar0 *******
parameters extracted from in-core disklabel are:
cylinders=61031 heads=255 sectors/track=63 (16065 blks/cyl)
Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=61031 heads=255 sectors/track=63 (16065 blks/cyl)
fdisk: invalid fdisk partition table found
Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 165 (0xa5),(FreeBSD/NetBSD/386BSD)
start 63, size 980462952 (478741 Meg), flag 80 (active)
beg: cyl 0/ head 1/ sector 1;
end: cyl 614/ head 254/ sector 63
The data for partition 2 is:
<UNUSED>
The data for partition 3 is:
<UNUSED>
The data for partition 4 is:
<UNUSED>
The data for partition 1 seems correct, but the message about "invalid fdisk partition table" is worrying.
Based on this thread, I also tried scan_ffs /dev/ar0, which gave this output:
Code:
ufs2 at 63 size 245115738 mount /storage time Thu Mar 16 22:14:48 2006
ufs1 at 907144059 size 2880 mount /mnt time Thu Nov 3 03:49:18 2005
ufs1 at 907146947 size 2880 mount /mnt time Thu Nov 3 03:49:19 2005
ufs1 at 907149827 size 2880 mount /mnt time Thu Nov 3 03:49:17 2005
scan_ffs: read: Input/output error
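One detail that gives me some hope: scan_ffs reports a ufs2 filesystem starting at sector 63, the same offset fdisk shows for the start of partition 1. If the data is intact and only the slice table is gone, my (quite possibly wrong) reading of gnop(8) is that I could present the device shifted by that offset (63 × 512 = 32256 bytes) and attempt a read-only mount without writing anything:
Code:
gnop create -o 32256 /dev/ar0        # pass-through device starting 63 sectors in
mount -t ufs -o ro /dev/ar0.nop /mnt # read-only, so hopefully harmless
Is that a sane thing to try, or am I misunderstanding how gnop works?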
Beyond that, I'm at a loss as to how to proceed, and any help or suggestions you can provide would be greatly appreciated. Some folks in the thread linked above recommended testdisk, which sounds applicable, but one poster mentioned it may have issues with ufs, which makes me hesitant to try it.
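Whatever the fix turns out to be, I assume the sensible first step is to back up the start of the array, since that's where the slice table and label would live; as far as I know this dd only reads from ar0 (the output path is just my own choice):
Code:
dd if=/dev/ar0 of=/root/ar0-first-sectors.img bs=512 count=1024
And if the answer really is just to rewrite the slice table, am I right that it would go roughly like this? I've pieced this together from man pages and other forum posts, so please correct me before I break anything further:
Code:
sysctl kern.geom.debugflags=16   # as I understand it, allows writes to the MBR of a live disk
fdisk -u /dev/ar0                # interactively re-enter the slice table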