UFS 2 partitions, one on top of the other

I'm migrating from an old FreeBSD box to a new one. To save a little money I had planned to back up the data and reuse the hard drives in the new box. I used an 8TB WD My Book drive for the external backup drive. It came with an NTFS partition. I used dd if=/dev/zero of=/dev/da0 count=2 and then created a new UFS filesystem with newfs. I did not remove the previous NTFS partition before creating the new UFS filesystem.

After the migration I plugged in the external drive and couldn't mount it. When I checked, gpart was showing only an NTFS partition. The messages log shows that when I plugged it in it said the secondary GPT is corrupt or invalid. I tried using the TestDisk recovery software. It shows both a UFS and an NTFS partition with the same start and end numbers on that disk. I'm looking into whether that can correct the issue.

Does anyone know of any way I can mount the ufs partition on the disk?
 
Know for sure? No.

Here is what you did. At the beginning of the disk is a partition table. Let's assume that it is a modern GPT partition table; those things are a few dozen KB long. There is a second, backup copy of the partition table at the end of the disk (that's part of the GPT specification). Using dd, you severely damaged the partition table by overwriting the first 1KB of it: the dd command you used wrote two blocks, each 512 bytes long (that's the default block size in dd). At this point, you have a "broken" disk. By broken I mean that the two copies of the partition table, at the beginning and end, are different.
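
If you want to see how much of the primary copy actually survived, a read-only look at the first sectors (assuming the drive still shows up as da0, as in your post) would tell you whether the protective MBR at LBA 0 and the GPT header at LBA 1 are now zeros, and whether the partition entries that follow them are still intact:

dd if=/dev/da0 bs=512 count=34 2>/dev/null | hexdump -C | less

That reads the first 34 sectors: the protective MBR, the GPT header, and the 32 sectors that normally hold the partition entries. An intact header starts with the text "EFI PART".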

Then you created a UFS file system. How did you do that? Did you partition the disk first (for example using gpart)? And what device did you create the file system on, /dev/da0 (which would be the whole disk, including the partition table), or /dev/da0p1 (which would be the first partition)? If it is the former, then the UFS file system has now completely blown away the first partition table. If it is the latter, then I wonder what partition table was used to define /dev/da0p1, since the one at the beginning of the disk is already damaged.
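
For comparison, if the disk had been partitioned first, the sequence on FreeBSD would look roughly like this (do not run this now; it writes a new partition table and filesystem and destroys whatever is on the disk, and the device name is just the one from your post):

gpart create -s gpt da0
gpart add -t freebsd-ufs da0
newfs -U /dev/da0p1
mount -t ufs /dev/da0p1 /mnt

In that flow the filesystem lives inside da0p1, safely after the primary GPT, instead of starting at sector 0 of the raw disk.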

Then you rebooted. I have no idea how FreeBSD reacts when the two copies of the partition table are different. But what's worse: there are some helpful BIOSes which verify that each copy of the partition table is uncorrupted (the one at the beginning was at this point at least damaged, perhaps completely blown away), and if they find one good copy and one corrupted copy, they "helpfully" (ha ha) overwrite the damaged one with the good one. If you have a UFS file system on /dev/da0, and the BIOS "helpfully fixed" the partition table, then you now have a UFS file system with the very first bit damaged by an old partition table having been dropped on top of it, like a piano falling out of a window in a cartoon. Even if your BIOS didn't make things worse, you still have something broken, and it's hard to guess how tools react to broken on-disk data.
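
A low-risk way to check which device actually holds a readable UFS superblock (again assuming the disk is da0, per your post) is to ask dumpfs to print the superblock from both the raw disk and the partition; whichever one produces sensible output instead of an error is where newfs actually put the filesystem:

dumpfs /dev/da0 | head
dumpfs /dev/da0p1 | head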

Then you ran TestDisk. I have no idea what it does. The fact that it shows two contradictory things demonstrates that your disk is pretty confused right now.

Before we can really help, we'll need a lot more information, both about what exactly you did (ideally you have records of which commands you ran), and some output from programs such as gpart to show what the current state of affairs is.
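
For example, the output of these (all read-only, with the device name from your post) would be a good start:

gpart show da0
gpart list da0
gpart backup da0

gpart backup just prints the table it can currently read in a compact form; it doesn't change anything on disk.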

Honestly, I fear that repairing this after the fact will be so difficult as to be practically impossible, but maybe the damage is only slight.
 
I noticed a typo I made in my post. The dd command was against da1, not da0. After I ran dd I ran newfs -m 0 /dev/da1. That created /dev/da1p1. I was then able to mount da1p1 (mount -t ufs /dev/da1p1 /backups/usb_drive) and write files to it with no issues. Below is the output of show and list from gpart.

root@nas001:~ # gpart show da1
=>           34  15628052413  da1  GPT  (7.3T) [CORRUPT]
             34         2014       - free -  (1.0M)
           2048  15628048384    1  ms-basic-data  (7.3T)
    15628050432         2015       - free -  (1.0M)

root@nas001:~ # gpart list da1
Geom name: da1
modified: false
state: CORRUPT
fwheads: 255
fwsectors: 63
last: 15628052446
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 8001560772608 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 8546235e-6e95-40f7-810c-b3d2cfb00b2a
   rawtype: ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
   label: easystore
   length: 8001560772608
   offset: 1048576
   type: ms-basic-data
   index: 1
   end: 15628050431
   start: 2048
Consumers:
1. Name: da1
   Mediasize: 8001562869760 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0


Not sure if it would be helpful, but here's some of the output from the TestDisk log.

Disk /dev/sdb - 8001 GB / 7452 GiB - CHS 972801 255 63
UFS 2 - Little Endian 0 0 1 972801 70 5 15628052480 [/backups/usb_drive]
UFS2 blocksize=4096, 8001 GB / 7452 GiB
UFS 2 - Little Endian 0 1 2 972801 71 6 15628052480
UFS2 blocksize=4096, 8001 GB / 7452 GiB
UFS 2 - Little Endian 0 1 50 972801 71 54 15628052480 [/backups/usb_drive]
UFS2 blocksize=4096, 8001 GB / 7452 GiB
UFS 2 - Little Endian 0 2 51 972801 72 55 15628052480
UFS2 blocksize=4096, 8001 GB / 7452 GiB
NTFS at 0/32/33
filesystem size 15628048384
sectors_per_cluster 8
mft_lcn 786432
mftmirr_lcn 2
clusters_per_mft_record -10
clusters_per_index_record 1
NTFS part_offset=1048576, part_size=8001560772608, sector_size=512
NTFS partition cannot be added (part_offset<part_size).
NTFS at 0/32/33
filesystem size 15628048384
sectors_per_cluster 8
mft_lcn 786432
mftmirr_lcn 2
clusters_per_mft_record -10
clusters_per_index_record 1
NTFS 0 32 33 972801 37 36 15628048384
NTFS, blocksize=4096, 8001 GB / 7452 GiB
UFS 2 - Little Endian 79 209 3 972881 24 7 15628052480
UFS2 blocksize=4096, 8001 GB / 7452 GiB
 
I noticed a typo I made in my post.
No matter ... as long as we're consistent now.

After I ran dd I ran newfs -m 0 /dev/da1. That created /dev/da1p1.
Strange. I didn't know that newfs on a whole disk would create a partition table. The documentation doesn't say anything about that.

Below is the output of show and list from gpart.
Which shows only a Microsoft partition. The "corrupt" part in the output means that it didn't like one of the copies of the GPT (probably the one at the beginning) and is using the other. You could now use "gpart recover" to clear that CORRUPT state ... but that would probably only make things worse, since the only partition table gpart can read is the one in which the UFS file system is missing.
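
For the record, the command being discussed is just this (don't run it yet, for the reason above; it rewrites whichever GPT copy gpart thinks is bad from the one it thinks is good):

gpart recover da1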

I don't understand the output from TestDisk at all.

At this point, the only idea that crosses my mind is to read up on the internal format of GPT partition tables, manually (using a disk editor) figure out whether anything useful can be found in the two copies, and try to put them back together by hand. That is many hours of work for an expert. Perhaps someone else has better ideas.
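
If anyone does want to go down that road, the two GPT headers can at least be pulled off the disk read-only for inspection. Going by the Mediasize in the gpart output above (8001562869760 bytes, i.e. 15628052480 sectors of 512 bytes), the primary header lives at LBA 1 and the backup header at the very last LBA, 15628052479:

dd if=/dev/da1 bs=512 skip=1 count=1 | hexdump -C
dd if=/dev/da1 bs=512 skip=15628052479 count=1 | hexdump -C

A valid header starts with "EFI PART"; the 32 sectors after the primary header (and before the backup header) hold the actual partition entries.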
 
Because you didn't partition the disk, just created a filesystem on it, the system used the backup GPT partition table. That's why you got a p1: it's a holdover from the original partition table. But your UFS filesystem overlaps on top of it, so it's not fully accessible (the start of the filesystem doesn't match the start of the partition, so all file offsets will be wrong).
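
To put numbers on that, using the gpart output above: da1p1 starts at LBA 2048, which is 2048 * 512 = 1048576 bytes into the disk (that's the "offset: 1048576" line). So anything newfs wrote at some byte offset X of da1 is read back from byte offset X - 1048576 when you go through da1p1; the whole filesystem is effectively shifted by 1 MiB.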

At this point, the only way to "fix" it is to completely repartition it from scratch and do it properly. Any data you have on it is pretty much useless. (Unless you're going to spend dozens of hours trying to recover it using some very low-level tools.)
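
If and when you do give up on the data, "properly" would look roughly like this (this wipes the partition table, so only after you've given up; the 1 MiB alignment is just a habit, not a requirement):

gpart destroy -F da1
gpart create -s gpt da1
gpart add -t freebsd-ufs -a 1m da1
newfs -U /dev/da1p1
mount -t ufs /dev/da1p1 /backups/usb_drive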

Edit: you might get lucky by mounting da1 directly, as that's where the UFS filesystem starts. But if you copied data to it while it was mounted through da1p1, you may get corrupted data.
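
That would be something like this, read-only so nothing else gets written to the disk while you look around (mount point reused from the earlier post):

mount -t ufs -o ro /dev/da1 /backups/usb_drive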

Better off starting from scratch and doing it properly.
 