Other How to shrink / reduce / decrease partition size without losing data with "gpart resize"?

I've searched about resizing partitions and only found about increasing size.

Since gpart(8) is the default tool on FreeBSD, I would like to know how to do it with gpart resize.

I experimented with shrinking freebsd-zfs and ntfs partitions on VirtualBox and found two things I don't like about gpart(8) (correct me if I'm wrong):
  1. Shrinking corrupts the disk, with no warning at all. Expanding back to the original size can fix the disk.
  2. It allows shrinking/resizing even when there is not enough free space, again with no warning.
Maybe I'm not using the right tool. If gpart(8) can't do that, which tool in FreeBSD can be used for that purpose?

These are the commands and outputs (ntfsfix can't fix the error):

Code:
root@FBSD:~ # gpart show -p
=>      40  20971440    ada0  GPT  (10G)
        40    532480  ada0p1  efi  (260M)
    532520      2008          - free -  (1.0M)
    534528   4194304  ada0p2  freebsd-swap  (2.0G)
   4728832   7854040  ada0p3  freebsd-zfs  (3.7G)
  12582872   8388608          - free -  (4.0G)

root@FBSD:~ # gpart add -s 2g -t ms-basic-data ada0
ada0p4 added
root@FBSD:~ # mkntfs --quick /dev/ada0p4
The partition start sector was not specified for /dev/ada0p4 and it could not be obtained automatically.  It has been set to 0.
The number of sectors per track was not specified for /dev/ada0p4 and it could not be obtained automatically.  It has been set to 0.
The number of heads was not specified for /dev/ada0p4 and it could not be obtained automatically.  It has been set to 0.
Cluster size has been automatically set to 4096 bytes.
To boot from a device, Windows needs the 'partition start sector', the 'sectors per track' and the 'number of heads' to be set.
Windows will not be able to boot from this device.
Creating NTFS volume structures.
mkntfs completed successfully. Have a nice day.
root@FBSD:~ # ntfs-3g /dev/ada0p4 /mnt/
root@FBSD:~ # touch /mnt/test.txt
root@FBSD:~ # ls /mnt/test.txt
/mnt/test.txt
root@FBSD:~ # umount /mnt/
root@FBSD:~ # gpart resize -i 4 -s 1g ada0
ada0p4 resized
root@FBSD:~ # gpart show -p
=>      40  20971440    ada0  GPT  (10G)
        40    532480  ada0p1  efi  (260M)
    532520      2008          - free -  (1.0M)
    534528   4194304  ada0p2  freebsd-swap  (2.0G)
   4728832   7854040  ada0p3  freebsd-zfs  (3.7G)
  12582872   2097152  ada0p4  ms-basic-data  (1.0G)
  14680024   6291456          - free -  (3.0G)

root@FBSD:~ # ntfs-3g /dev/ada0p4 /mnt/
Failed to read last sector (4194302): Invalid argument
HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,
   or it was not setup correctly (e.g. by not using mdadm --build ...),
   or a wrong device is tried to be mounted,
   or the partition table is corrupt (partition is smaller than NTFS),
   or the NTFS boot sector is corrupt (NTFS size is not valid).
Failed to mount '/dev/ada0p4': Invalid argument
The device '/dev/ada0p4' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
 
  1. You're using gpart(8), which is part of the FreeBSD base system, NOT sysutils/gpart, which is an entirely different tool; check the port's website.
  2. gpart(8) only works on partitions (creating or modifying partition tables in different formats). To shrink a partition that's actually used by a filesystem, you first need a filesystem that supports size changes, and then you must shrink that filesystem before you shrink its partition.
  3. Modifying partition tables is a potentially dangerous operation; it's expected that a sysadmin is aware of that.
 
Btw, if you're thinking about tools like the ancient "Partition Magic": these have knowledge about the filesystems they operate on and include their own code to shrink the filesystems (which isn't a trivial thing to do; filesystems tend to use all the space available to them, so it involves a lot of moving data and updating metadata structures on disk).

Such features are out of scope for gpart(8); it really only deals with partitions.
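For the NTFS example in the OP, a minimal sketch of that order, assuming the ntfsresize utility from the ntfs-3g/ntfsprogs suite is installed (the sizes are just examples):

Code:
ntfsresize --no-action --size 1G /dev/ada0p4   # dry run: check the filesystem can shrink to 1G
ntfsresize --size 1G /dev/ada0p4               # shrink the NTFS filesystem itself
gpart resize -i 4 -s 1g ada0                   # only then shrink the partition to match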
 
  1. You're using gpart(8), which is part of the FreeBSD base system, NOT sysutils/gpart, which is an entirely different tool; check the port's website.
  2. gpart(8) only works on partitions (creating or modifying partition tables in different formats). To shrink a partition that's actually used by a filesystem, you first need a filesystem that supports size changes, and then you must shrink that filesystem before you shrink its partition.
  3. Modifying partition tables is a potentially dangerous operation; it's expected that a sysadmin is aware of that.
  1. Thanks for pointing that out. Fixed in the post.
  2. I didn't know how it works under the hood. Thanks.
  3. I think it'd be better if there were a notice or warning, as in fdisk or GParted.
What FreeBSD tool would you recommend for that purpose?

I mean, I've multi-booted with other OSes and have access to other tools but I'd like to stick to FreeBSD as much as I can.
 
Can't help you with that, sorry. I never looked for a solution because I never had this problem. FreeBSD is the only OS I have running bare metal, and using ZFS everywhere kind of solves any "resize problem" for me (even with sparse zvols for virtual machines when the guest supports TRIM).

Edit: Not sure there is a tool that can resize a partition used by ZFS. To my knowledge, the only way would be to export the whole pool and recreate/restore it with zfs send/recv. But maybe someone on here knows better.
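Roughly, the send/recv route would look like this (the pool names and the snapshot name are only placeholders):

Code:
zfs snapshot -r zroot@migrate                     # snapshot everything recursively
zfs send -R zroot@migrate | zfs recv -F newpool   # replicate into a pool created on the smaller partition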
 
Shrinking a partition with a filesystem on it is not a good thing to try. Metadata is often stored in specific places so the tools can easily find it. The first block of the partition and the last or next to last block are typical places.
Filesystems may also have multiple copies of metadata stored, again first and last blocks are typical spots. First and last blocks of a filesystem may not be the same as the first and last blocks of a partition.

The output that you posted in the OP kind of leads you to that.
When you first created ada0p4, the gpart metadata recorded where it started and that it was 2G. Then you created a filesystem on it; the internal metadata of the NTFS filesystem has references to where it starts and stops.
The gpart resize command updates the partition table, but the NTFS metadata still has references to the old sizes, so it can't be mounted. Perhaps the NTFS equivalent of a UFS fsck would fix it, I don't know.
If, after the gpart resize, you reran mkntfs on ada0p4, you would probably be able to mount the partition, but yes, you would lose your data.

Think about the order of operations:
create partition
create filesystem on new partition

To resize it downwards, the obvious order would be:
modify the size of the filesystem (this step may or may not lose data)
modify the size of the partition

For growing, the order needs to be:
increase the size of the partition
grow the filesystem to fill the partition

Growing a partition tends to be safer because you are likely moving the end that doesn't have anything written on it. Some filesystems have commands that will grow a filesystem after you've grown the partition (UFS has growfs).
If one needs to resize a partition downwards, the assumption is that all data will be lost, so do a backup first, then resize the partition and recreate the filesystem.
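A minimal sketch of that grow sequence on UFS (the partition index and device are only examples):

Code:
gpart resize -i 2 ada0    # grow the partition into the free space following it
growfs /dev/ada0p2        # then grow the UFS filesystem to fill the larger partition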
 
gpart knows nothing about the filesystem layout. The type of a partition (the -t option when you add a partition) merely tells other tools what may be sitting there. It is like the ethertype field in an ethernet packet, or the protocol field in an IP header: it merely tells the OS what higher-level protocol code should be run. Similarly, you still need a filesystem-specific utility to shrink the filesystem. I don't know if there is an open source program that can do this, but you might try this newfangled thing called websearch (some people call it googling) to find one. Anyway, if you can shrink the space used by a FS, you need to do it *before* shrinking the partition.
 
This was the best tool ever.
But conceptually limited. As a standalone tool, it can only work for filesystems it knows (pretty deep knowledge btw), so it can resize them without destroying them.

I think nowadays, we have better concepts for flexibility with storage than resizing partitions, see for example ZFS datasets and zvols ;)
 
Most file systems do NOT implement shrink functionality. Reason: it's really hard to do correctly; you need to pick up data in the area that is going away, move it, and do that synchronously (while the file system is in use) and atomically (so if a crash happens, you never leave the system in a bad state). If it has to be done offline anyway, then it is just easier to physically move the data (using, for example, dump and restore).

As far as I know, ZFS does not implement shrinking a volume. I think it allows growing a volume (I remember using the "-e" switch on zpool online for that, but please check the documentation).

For UFS, there is a growfs command; read the documentation. I think it does not allow shrinking a volume.
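For the ZFS grow path mentioned above, a minimal sketch (the pool name and device are placeholders; do check zpool(8) first):

Code:
gpart resize -i 3 ada0          # grow the partition first
zpool online -e zroot ada0p3    # then ask ZFS to expand the vdev into the new space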
 
Shrinking a partition in place is just so darn fast and convenient, and requires no new hardware. That's why I loved Partition Magic. Filesystems could support this operation not because it's necessary but because it is useful.

Oh well.
 
Filesystems could support this operation not because it's necessary but because it is useful.
Yet, it is rarely requested by users. And in those cases where it is requested, there is usually an alternative (more tedious: copy data over). In contrast, growing existing file systems is frequently wanted.

There is also the difficulty of it. Growing a file system is usually between easy and somewhat hard to implement; the "somewhat hard" examples involve file systems that have built-in assumptions about internal balance, which require re-balancing after growing, or file systems where metadata structures in a fixed format or place depend on the size of the file system. In contrast, shrinking an existing file system is hard to implement, because existing data needs to be moved, and that needs to be done transparently and safely. For many file systems, there is never a need to move data within the file system, so they don't have an infrastructure for it, and then shrinking the file system has to start by implementing that infrastructure. And as I said above, that shrinkage needs to be safe, even against crashes at the most inopportune times, and while the file system is in operation.

So one simple alternative is to do it the way Partition Magic does: offline; while the shrinking is running you can't access the file system. If you have to do it offline anyway, then the alternative of copying the whole file system is usually acceptable.
 
It only has to be done offline because the OS doesn't support it. There's no reason an OS couldn't lock a file and move it on disk when asked. That's what "defragging" a drive used to do. The "alternative" requires more offline time (the copy) and more resources (a new disk!). Even if it did have to go offline, there's a huge difference between a couple of seconds to change structures and hours to copy data.

I'm just lamenting that nobody thinks this is useful, apparently because they themselves don't/wouldn't use it for reasons that seem to boil down to "I just wouldn't want to do it that way."
 
Even filesystems that try to balance data will leave old data alone, so usually growing a FS simply requires adjusting its idea of the sizes and disks it is using, and maybe moving some metadata to the new end. If you want to rebalance the old data, you pretty much have to use the copy-to-a-separate-place method. Yes, this is slow, but at least as a side effect you now have a backup!

The other thing to note is that on spinning disks data layout matters for performance. Even if a FS allows shrinking by moving data, it will have spread the moved data around wherever it can find free space, so reading such files will involve lots of seeks and performance will suck. Shrinking a FS is usually a bad idea!
 
I'm just lamenting that nobody thinks this is useful, apparently because they themselves don't/wouldn't use it for reasons that seem to boil down to "I just wouldn't want to do it that way."
Yeah! Lamenting is egocentric. Maybe start asking yourself why others prefer not doing it. You got some profound hints already.
 
Interesting conversation. Having read a lot and done this stuff for a bit, the only time I can think of shrinking a filesystem/partition is in conjunction with needing to grow another one.
If you think of it that way, you wind up with:
I need to shrink the end of this one and grow the beginning of the one that follows
or
I need to shrink the beginning of this one and grow the end of the preceding one

Two very different operations. Think about the layout of filesystems: the beginning is typically more important than the end, so things can get very messy trying to grow/shrink.
 
No, there is no FreeBSD equivalent to Linux resize2fs. If you have multiple disks you can do dump | restore. But if you have only one disk, then dumping to a file and restoring after the partition/slice has been made smaller is your only option. Remember you will need to boot from an ISO, USB stick, or a rescue drive.
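A rough sketch of that single-disk dump/restore path, booted from rescue media (device names, the target size, and paths are only examples):

Code:
dump -0af /backup/ada0p2.dump /dev/ada0p2    # dump the UFS filesystem somewhere safe first
gpart resize -i 2 -s 20g ada0                # shrink the partition
newfs -U /dev/ada0p2                         # recreate the filesystem at the new size
mount /dev/ada0p2 /mnt
cd /mnt && restore -rf /backup/ada0p2.dump   # restore the dump into the fresh filesystem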

Personally, I maintain a rescue drive on a USB hard disk (it could be a USB SSD if you prefer). My rescue drive has been used for this purpose and to recover an otherwise unrecoverable UFS boot drive. It contains a bootable FreeBSD UFS slice followed by a ZFS slice holding UFS dumps of each of my machines. A separate USB backup disk contains backups of my ZFS pools (plus a copy of the one NTFS partition on each of my laptops).

I haven't used the rescue USB disk for resizing partitions/slices for quite some time because I typically maintain gmirror mirrors of my boot drives. I boot from the alternate disk without geom_mirror active, using the physical device names. Destroy the mirror. Repartition and newfs the disk, and copy the boot disk partitions using dump | restore. Then create the mirror and reboot using it. I then "loosely" duplicate the process with the old half of the former mirror and finally attach it to the new mirror.

For the laptops, since they have only one disk, the rescue USB disk is the only approach.
 
I'm just lamenting that nobody thinks this is useful, ...
You are seeing it in black and white. I've worked on implementing file systems (as an employee, in big teams). Sure, we thought it was useful. But we had dozens or hundreds of things that we needed to do that were more useful, in some cases absolutely necessary. We'd rather spend time on things that many customers want or need, not on rare operations that can be useful in limited circumstances.

When is shrinking an existing file system in place useful? Mostly if one needs to re-partition a single disk, because one has made a mistake in how the disk was split over multiple file systems. In the real world (of people who do computers for money), systems tend to be bigger and have multiple disks (dozens, hundreds, or millions). Individual computers tend not to partition disks: they may have a boot/root disk, and everything else comes over the network. Or sometimes they have locally attached extra disks, but those also don't get partitioned. Also, the need to re-partition a disk in place indicates that the administrator made a mistake in setting up the disk in the first place. But that also indicates that there is a problem with their demand planning, which probably means that more changes to the overall system (of many computers) are necessary.

So what I'm really saying: while shrinking (and growing) a disk partition in place can be useful, it usually happens in few cases, and those cases are typically amateurs playing with computers, not important production use. So for the people developing software it's just not worth investing much effort into those cases.
 
It only has to be done offline because the OS doesn't support it.
And there are reasons that OSes (actually file systems) don't support it. It's too much work for too little benefit.

There's no reason an OS couldn't lock a file and move it on disk when asked.
Let's think this through a little bit. Sure, I can ask to lock file /home/a. But it is not just that one file that needs to be locked. It is also the metadata on disk that describes where that file is. And somewhere there is an "allocation table", which shows which blocks on disk are free; that also needs to be locked. That allocation table is stored somewhere (memory or disk, perhaps only partially as needed). Lots of parts of the code (and parallel processes) rely on knowing how long that allocation table is; they all need to be "locked" against using that knowledge for a moment, so the length can be changed.

If there is only a single user process, this might be relatively easy. But now do this in a parallel environment (which all of Unix is). Moments later you have deadlocks, priority inversions, stale data being used, and so on. Sure, there are programming techniques to guard against all these cases: it can be done. But it is hard, tedious, and requires lots of work.

That's what "defragging" a drive used to do.
When was the last time you used a file system that required or even supported defragging? Right, about 30 years ago. The world has moved on. Things have gotten better (faster, more parallel, more features), but a certain simplicity like defragging has gone away.

And if you remember defragging on MS-DOS, you will find that it was an offline operation: You couldn't do anything else on your PC while defragging was in operation. And if your computer crashed while defragging (like power outage), more often than not the file system was toast afterwards. We don't want to go back to that mode of operation.

As the ext2/3/4 file systems demonstrate, shrinking file systems can be implemented. It is possible. And knowing those folks (some are friends), I bet they even did it correctly and well. But it only makes economic sense for cases where there is a huge user base (to amortize the development cost over), and a lot of available developer time (to do the hard work), and no higher priority work. This seems to not have been the case for UFS and ZFS.
 
For the laptops, since they have only one disk, ...
For a single-user computer with a single disk (like a laptop), I wouldn't even do multiple data partitions. Small fixed-size partitions for things that really need to be separate, and otherwise a single file system. Good candidates for those small fixed-size things are: (a) a rescue partition that contains an installer; most Mac and Windows machines have one so you can get the OS back if you screw it up. (b) A swap partition. (c) A boot partition, if your OS needs it.

All the rest, pack into one partition with a single file system. Modern file systems (like ZFS) are good enough that you can do management of all data in a single partition or volume.

Servers or clustered computers: Different story.
 
The steady state of disks is full — Ken Thompson
Also true of filesystems. This is why growing a filesystem is much more in demand than shrinking. This is why zfs will automatically add space when you replace all your disks with larger disks but won't allow you to shrink. I did this twice with my zfs pool, originally built from four 160GB disks: first to 330GB and then to 1TB disks. When I moved to 8TB disks, I copied everything using zfs send/recv.
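A rough sketch of that grow-by-replacement path (the pool and device names are placeholders):

Code:
zpool set autoexpand=on tank
zpool replace tank ada1 ada5        # repeat for each member, letting each resilver finish
# once every disk has been replaced with a larger one, the extra capacity shows up by itself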
 