Resizing a UFS volume

I'm using FreeBSD 8.3.

I have hardware RAID, and I chose it over ZFS mostly for the ability to grow and rebuild my RAID6 setup, which ZFS cannot do within the same array according to the documentation.

I started with five 1.5 TB drives in RAID6, which FreeNAS sees as a single 4 TB disk. I created a UFS volume on it and filled it with data. Then I added six more drives and rebuilt the array. That gave me a 12 TB disk, but my volume was still 4 TB, which is to be expected since I still had to extend the partition and the file system, something that apparently is not commonly done.

Some of you might be critical of the choices above, but what I'm really looking for here is what I should do now, rather than what I should have done.

Here's what I did after reading posts like this one:
http://bsdbased.com/2009/11/30/grow-freebsd-ufs-filesystem-on-vmware-hdds
  • I booted into single-user mode
  • I ran some checks with gpart, which told me the partition table was corrupted
  • I recovered it with gpart: # gpart recover mfid2
  • Then I resized the partition: # gpart resize -i2 mfid2
  • Ran the check again and now had a full 12 TB partition
  • Tried to extend the UFS filesystem with # growfs mfid2 (the commands are repeated as a single block below), which returned the error:
    Code:
    superblock not recognized
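
To make this easier to follow, here are those commands as a single block:
Code:
# gpart recover mfid2        (recover the corrupted table)
# gpart resize -i2 mfid2     (grow partition 2 into the new space)
# growfs mfid2               (fails: superblock not recognized)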

That's where I am now: the volume does not mount anymore. The data on the disk cannot conveniently be backed up because of its size. I did back up what could not be replaced, so recovering it is not ultra-critical, but it would be very nice and would save me a lot of time.

Does anyone have an idea of how I can recover from this sorry state, and how to grow the volume properly? I have room for eight more disks, and even if I have to say goodbye to my data and start from scratch, I will want to grow again in the future.

Thanks for any light you guys can shed.
P
 
FreeNAS may do some things by default that are different than ordinary FreeBSD.

The output of gpart show could be helpful. mfid2 appears to be a full device with a GPT partition scheme but no partitions. newfs(8)ing the whole device would overwrite one or both of the partition tables, explaining the "GPT corrupted" error. But then gpart recover would "repair" that, overwriting part of the filesystem, either the first or last blocks. And that would explain why growfs(8) has a problem.

If you're going to partition the array, put the filesystems inside the partitions. Those will have device nodes like mfid2p1, mfid2p2, and so on.
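
For a future rebuild, a minimal sketch of that layout on a fresh array would look something like this (it wipes whatever is on the device, and the 2 GB swap is only an example):
Code:
# gpart create -s gpt mfid2                (fresh GPT on the array)
# gpart add -t freebsd-swap -s 2G mfid2    (optional swap partition -> mfid2p1)
# gpart add -t freebsd-ufs mfid2           (remaining space -> mfid2p2)
# newfs -U /dev/mfid2p2                    (filesystem goes inside the partition, not on mfid2 itself)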

I don't use growfs(8). It's probably reasonably safe, but it does not produce the same end result as starting with a large filesystem. But that doesn't really apply, because for this many disks I would use ZFS. ZFS can grow, depending on how the disks are arranged. There are people here with experience with mid- and large-sized arrays who can advise the best way to set that up.
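
For illustration, growing a pool by adding a second raidz2 vdev looks roughly like this (the pool name and daN devices are placeholders, not your mfid array):
Code:
# zpool create tank raidz2 da0 da1 da2 da3 da4     (initial five-disk raidz2 pool)
# zpool add tank raidz2 da5 da6 da7 da8 da9 da10   (second raidz2 vdev; the pool grows immediately)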
 
Thanks for that. Indeed my use of growfs was obviously wrong. I did:
Code:
# growfs /dev/mfid2p2
growfs: we are not growing (1097375467 -> 71949547)

Now that's strange. A couple of things: my system is a 64-bit AMD machine, and the disk was full. I read there were some bugs a while back with 64-bit systems, and that you need free space in the first cylinder, but the numbers above don't make sense to me.

Code:
# df /dev/mfid2p2
df: mkdtemp("/tmp/df.68gUro") failed: Read-only file system

I set up FreeNAS on a USB drive, which might explain that, but when I do:
Code:
# diskinfo /dev/mfid2p2
/dev/mfid2p2    512    13488844881408    26345400159    0    2147549184    1639925    255    63

And:
Code:
# fsck /dev/mfid2p2
fsck: Could not determine file system type
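
From the man pages, I gather the generic fsck wrapper simply cannot work out the type for a device that is not listed in fstab, so the UFS checker can be run directly, and it can also be pointed at a backup superblock if the primary one is damaged. Something like this is what I have in mind, read-only first (the backup locations would come from newfs -N and assume the original newfs used default parameters):
Code:
# fsck_ffs -n /dev/mfid2p2                     (read-only check with the UFS checker directly)
# newfs -N /dev/mfid2p2                        (-N only prints the layout, including backup superblock locations)
# fsck_ffs -n -b <backup-block> /dev/mfid2p2   (retry against one of those backups)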

But gpart itself seems happy with the table:
Code:
# gpart show mfid2
=>         34  26349594557  mfid2  GPT  (12T)
           34           94         - free -  (47K)
          128      4194304      1  freebsd-swap  (2.0G)
      4194432  26345400159      2  freebsd-ufs  (12T)
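
Before touching anything, though, I want to see what size the superblock on p2 actually thinks the filesystem is, since growfs evidently managed to read one; dumpfs(8) should show it and is read-only:
Code:
# dumpfs /dev/mfid2p2 | head -n 5    (superblock fields, including the filesystem size)
# gpart show mfid2                   (partition size to compare against)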
 