UFS: Cannot mount UFS partition from eSATA dock, but ZFS is OK.

I have a machine running FreeBSD 13:

Code:
# uname -a
FreeBSD guarneri.mannynkapy.net 13.0-RELEASE FreeBSD 13.0-RELEASE #0 releng/13.0-n244733-ea31abc261f: Thu Apr 29 21:07:57 PDT 2021
#

with an eSATA PCIe card and two external eSATA docks. The system was initially set up as entirely ZFS; the eSATA card and docks were added later.

One dock contains a disk with a ZFS file system that mounts just fine. The other contains an identical disk with a GPT partition containing a UFS file system that won't mount, with the error message:

"No such file or directory".

1) The drive is recognized at boot (relevant lines from /var/log/messages):

Code:
Aug  8 15:29:32 guarneri kernel: ada4 at ahcich1 bus 0 scbus1 target 0 lun 0
Aug  8 15:29:32 guarneri kernel: ada4: <WDC WD30EFRX-68EUZN0 80.00A80> ACS-2 ATA SATA 3.x device
Aug  8 15:29:32 guarneri kernel: ada4: Serial Number WD-WCC4NFLDUCV6
Aug  8 15:29:32 guarneri kernel: ada4: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
Aug  8 15:29:32 guarneri kernel: ada4: Command Queueing enabled
Aug  8 15:29:32 guarneri kernel: ada4: 2861588MB (5860533168 512 byte sectors)
Aug  8 15:29:32 guarneri kernel: ada4: quirks=0x1<4K>

2) I can see the device nodes for the drive and its partition with the 'ls' command:

Code:
# ls -l /dev/ada4*
crw-rw----  1 root  operator  0x93 Aug  8 15:28 /dev/ada4
crw-rw----  1 root  operator  0x9f Aug  8 15:28 /dev/ada4p1
#

3) I can see the GPT partition table with the 'gpart' command:

Code:
# gpart list ada4
Geom name: ada4
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada4p1
   Mediasize: 3000592941056 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,a04e2460-8a45-11e5-918a-0001c0171347,0x28,0x15d50a360)
   rawuuid: a04e2460-8a45-11e5-918a-0001c0171347
   rawtype: 516e7cb6-6ecf-11d6-8ff8-00022d09712b
   label: Data1
   length: 3000592941056
   offset: 20480
   type: freebsd-ufs
   index: 1
   end: 5860533127
   start: 40
Consumers:
1. Name: ada4
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

#

4) I can read blocks from the partition with the 'dd' command:

Code:
# dd if=/dev/ada4p1 of=/dev/null bs=1m count=4
4+0 records in
4+0 records out
4194304 bytes transferred in 0.011330 secs (370209428 bytes/sec)
#

5) I have a regular directory to use as a mount point:

Code:
# ls -ld /data/1
drwxr-xr-x  2 root  wheel  2 Aug  7 17:08 /data/1
#

6) But even with all of the above, I can't mount the partition:

Code:
# mount -t ufs /dev/ada4p1 /data/1
mount: /dev/ada4p1: No such file or directory
#

Can anybody suggest what I might be missing?
 
No, that error shows it's not a UFS-formatted partition. Even though the partition type is freebsd-ufs, that doesn't mean it's been formatted with UFS. It's bad form, but there's nothing stopping you from taking a freebsd-ufs partition and formatting it with FAT32, for example. The question now is: what is the actual filesystem on it?
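
One quick way to check: fstyp(8), which is in the base system on any recent FreeBSD, probes a device for known filesystems regardless of what the partition type claims. On a genuine UFS partition it prints ufs; here I'd expect it to recognize nothing:

Code:
# fstyp /dev/ada4p1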

Let's assume you actually used ZFS on there; what does zdb -l /dev/ada4p1 output?
 
Good news, and thank you for the lead! The partition itself was uninformative:

Code:
# zdb -l /dev/ada4p1
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
#

ZFS is the only likely alternative to UFS, so I decided to try the whole disk:

Code:
# zdb -l /dev/ada4
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
    version: 5000
    name: 'pool_data_4'
    state: 1
    txg: 3589145
    pool_guid: 9372444553212849935
    hostid: 1427974294
    hostname: 'aristarchus.mannynkapy.net'
    top_guid: 5813426613154126946
    guid: 5813426613154126946
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 5813426613154126946
        path: '/dev/ada0'
        whole_disk: 1
        metaslab_array: 37
        metaslab_shift: 34
        ashift: 12
        asize: 3000588042240
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 1 2 3 
#

My apologies to all for wasting your time. I cannot get at the original server that held the disk to see how it was mounted there (it is in an area under an evacuation order due to wildfire danger), and I had some startup issues with the eSATA hardware, so my first try at zpool import did not identify the pool. After I got the eSATA working I tried guessing from the info provided by gpart list ada4, but I did not know it could be a ZFS filesystem when gpart indicated UFS.

A simple zpool import pool_data_4 succeeded!
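
For anyone who finds this later, the sequence that worked was roughly the following (discovery output abridged and reconstructed, not a verbatim capture): zpool import with no arguments scans attached devices and lists importable pools, and importing by name does the rest:

Code:
# zpool import
   pool: pool_data_4
     id: 9372444553212849935
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 [...]
# zpool import pool_data_4
#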

Thanks again for your help and instruction!
 
"My apologies to all for wasting your time."
If you learned something, then it wasn't a waste.


"...but I did not know it could be a ZFS filesystem when gpart indicated UFS."
I suspect this disk used to have a partition table on it, with a regular UFS filesystem. But at some point the whole disk was used in a ZFS pool without wiping the old partition table. The partition table survived; the data inside those partitions, however, did not.

Something to keep in mind next time you want to use a whole disk for ZFS: make sure any existing partition table is completely wiped (gpart destroy ...) before adding it to the pool.
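
For example, a minimal (and destructive!) sketch; the -F flag forces destruction even if the table still contains partitions, so be sure the disk really is expendable:

Code:
# gpart destroy -F ada4
ada4 destroyed
#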
 
It's been in four machines that I know of, if you count the current eSATA hookup as "in". The gpart destroy is another good tip, thanks! (The ZFS docs make it sound like ZFS will take care of the whole disk, but obviously it doesn't.)

Is there a way to wrap this thread up? Mark it solved, edit the top comment, or something else?
 
Speaking of tips: putting a GPT partition table on any disk is a good idea; it prevents screwups. This immediately implies not using a whole disk for ZFS, but only a partition (even if that partition is nearly as big as the whole disk). And putting human-readable labels on the partitions is a GREAT idea. If the partition table had said that ada4p1 was named "zfs_elephant_pictures", you could have saved some drama. Somewhere here is a thread about "Best practice for using disks or partitions for ZFS" or a similar title, which discusses some good ideas.
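
A minimal sketch of that practice, reusing the hypothetical label from above (the 1m alignment and the pool name tank are arbitrary placeholders, not from this thread):

Code:
# gpart create -s gpt ada4
# gpart add -t freebsd-zfs -a 1m -l zfs_elephant_pictures ada4
# zpool create tank /dev/gpt/zfs_elephant_pictures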
 
I would imagine that it's good practice to clean a disk out completely BEFORE adding it to a ZFS pool. As in: back up the important files, and format the disk to match the pool you're adding it to. Yeah, it's some additional work, but I'd rather have a clean, matching disk to work with than try to attach a disk 'as-is' and be faced with hard-to-debug surprises.
 