UFS FreeBSD 9.1 cannot boot

Hello,

I have an old FreeBSD 9.1 that cannot boot. Yes, I know that is very old, but it was working very well until now.

I need help, please.

I got some warnings:

GEOM: da0: media size does not match label.
WARNING: da0a: expected rawoffset 0, found 63

The boot stops with this error:

Trying to mount root from ufs:/dev/da0s1a [rw]
g_vfs_done():da0s1a[READ(offset=210565021696, length=16384)] error = 5
Mount from ufs:/dev/da0s1a failed with error 5.

The server runs under ESXi and I wanted to expand the filesystem, so I powered off the VM, expanded the disk in ESXi and powered it back on ... and now it can't boot.

Any idea please ?

Thanks!
 
FreeBSD 9.1 has been end-of-life since December 2014 and is no longer supported.
Topics about unsupported FreeBSD versions

I got some warnings:
GEOM: da0: media size does not match label.
WARNING: da0a: expected rawoffset 0, found 63
and I tried to expand the filesystem
Your partition table is screwed up.
 
SirDice Yes, yes, yes... I know it's not supported, but it was working very well with no problems.

About the partition table, I only resized the image in ESXi, not in FreeBSD.

I managed to boot from a CD and gpart returns this (I can't copy&paste, so I typed it out):

Code:
# gpart show
=>         0  2097152000  da0  BSD  (1.0T)
           0          63       - free -  (32K)
          63   733699200    1  freebsd-ufs  (350G)
   733699263  1363452737       - free -  (650G)
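As a sanity check (assuming 512-byte sectors, as gpart reports here), those numbers do add up:

```shell
# 63-sector gap + 350G partition: free space should start at sector 733699263
echo $((63 + 733699200))                   # -> 733699263
# 733699200 sectors of UFS in GiB (integer division truncates 349.87 to 349)
echo $((733699200 * 512 / 1073741824))     # -> 349, i.e. the "350G" gpart shows
# gap + partition + free space should equal the new disk size in sectors
echo $((63 + 733699200 + 1363452737))      # -> 2097152000, the "1.0T"
```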


The additional space is the new storage. But I haven't touched the partition table yet.

How is it possible that the boot failed when I only grew the storage and didn't touch the partition table?

Please, any help is really appreciated.
 
This is what bsdlabel returns:

Code:
# bsdlabel /dev/da0
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a: 733699200      63    4.2BSD        0     0       0
  c: 734003200       0    unused        0     0          # "raw" part, don't edit
 
And fdisk returns:

Code:
sysid 165 (0xa5), (FreeBSD/NetBSD/386BSD)
    start 63, size 319436397 (155974 Meg), flag 0


Is this the error? fdisk shows size 319436397 (155974 Meg). Shouldn't this be 733699200?
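If I do the math (assuming 512-byte sectors), the fdisk slice and the bsdlabel partition clearly disagree, and the read that failed at boot lands past the end of the slice, which would explain error 5 (EIO):

```shell
# MBR slice size from fdisk, in MiB
echo $((319436397 * 512 / 1048576))     # -> 155974, matching "155974 Meg"
# bsdlabel 'a' partition size, in MiB
echo $((733699200 * 512 / 1048576))     # -> 358251, i.e. ~350 GiB
# The failing read was at byte offset 210565021696:
echo $((210565021696 / 512))            # -> sector 411259808, beyond sector 319436397
```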
 
This appears to be a "dangerously dedicated" disk. You have no partition table at all, only a bsdlabel; that's probably part of the problem. According to the boot messages there should be an MBR with at least one slice (ufs:/dev/da0s1a). Either not everything was written out, or the resizing messed up the partition tables. As this is VMware, did you make a snapshot before you expanded the disk? I would just restore the snapshot; that's the easiest solution.
 
Finally got the server to boot correctly. For some reason (probably because it's "dangerously dedicated", as you said), the boot device is now /dev/da0a.

I modified fstab to point root at that device and it boots. This is the gpart output now on the running system:

Code:
# gpart show
=>        0  319436397  da0cs1  BSD  (152G)
          0  311036440       1  freebsd-ufs  (148G)
  311036440    8388517       2  freebsd-swap  (4G)
  319424957      11440          - free -  (5.6M)

=>        0  419424957  da0cs1c  BSD  (200G)
          0  311036440        1  freebsd-ufs  (148G)
  311036440    8388517        2  freebsd-swap  (4G)
  319424957  100000000           - free -  (47G)

I don't even understand those numbers and slices.

Code:
# df -h
Filesystem                                                   Size    Used   Avail Capacity  Mounted on
/dev/da0a                                                    338G    310G    923M   100%    /
devfs                                                        1.0k    1.0k      0B   100%    /dev

I don't know exactly what happened. I only expanded the disk in VMware; I haven't touched the partitions, labels .... This server was running for several years without rebooting. So the screwed-up partition table or labels have been there for a long time.

I couldn't take a snapshot first, because you can't expand the disk image while snapshots exist.

I'm really lost, but at least the server is up again ... but out of disk space.
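In case it helps anyone later: a rough sketch of how the remaining space might be claimed on a bsdlabel-only disk like this. This is untested here and only an assumption that gpart can resize the BSD label in place on 9.x; back up first, and run from rescue media with the filesystem unmounted (growfs on 9.x cannot grow a mounted filesystem):

```shell
# New 'a' partition size if it takes the whole disk minus the 63-sector gap:
echo $((2097152000 - 63))          # -> 2097151937 sectors

# Then, roughly:
#   gpart resize -i 1 da0          # grow partition 'a' (index 1) in the BSD label
#   fsck -y /dev/da0a              # make sure the filesystem is clean
#   growfs /dev/da0a               # grow UFS into the new space
```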
 
This server was running for several years without rebooting.
A good reason to regularly (re)boot servers is to find issues like these. I never understood this contempt for rebooting servers. Yes, you shouldn't need to do it, but it's also a good way to test if the changes you made to the system were actually correct and the system is still able to boot properly, services are correctly started, etc. There's always a risk of a power outage and you want things to work properly when the power is restored. Besides that, an uptime of several months or even years also means somebody hasn't been installing their security updates either. That's just bad practice these days.

I don't know what this server is running, but I still recommend building a new server with a supported version and migrating the data off it. Then keep it regularly updated. Keeping the status quo (never patch, never update) because someone is afraid it might break the application is how we managed servers 30 years ago; times have changed. Time to join the rest of us in the 21st century.
 