resizing partitions/slices or replace FreeBSD

Hey All,

I'm faced with having to replace FreeBSD on some servers in favor of an OS where I can easily resize partitions/slices. Is there anything new/recent that will resize partitions/slices on FreeBSD?

I'm virtualizing pretty much every server I can, and it looks like VMware will be supporting FreeBSD in the new vSphere 4 release of ESX (which is great!). However, resizing VMs is pretty standard these days, and with it being difficult and dangerous enough on FreeBSD that I don't even want to mess with it, FreeBSD may no longer be an option for me.

Is there anybody out there who knows whether gparted, etc. will support FreeBSD in the near future? Even though FreeBSD will be supported on VMware 4, will other admins realize you can't easily resize the server and dump it in favor of pretty much any other server OS?

I'm probably going to have to replace FreeBSD with something like Ubuntu Server.

Mike
 
Pardon my noseyness, but why do you need to resize anything and what specifically do you need to resize?

There is growfs(8), but as you might gather from its name it only allows one to enlarge filesystems.

Take a look at gvirstor(8) too.

Not sure what "gparted" is in the context you've used it, but FreeBSD has a disk partitioning tool called gpart(8).
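To show how these fit together: after the underlying disk has been enlarged, the usual sequence is to grow the partition and then the filesystem. A hedged sketch -- device name and partition index are examples, and the gpart resize verb only exists on later FreeBSD releases:

```shell
# Sketch: grow a UFS filesystem after the underlying disk (da0) was enlarged.
# Device and partition names are examples -- adjust for your layout.
gpart recover da0       # relocate the GPT backup header to the new disk end
gpart resize -i 2 da0   # expand partition index 2 into the freed space
growfs /dev/da0p2       # enlarge the UFS filesystem to fill the partition
                        # (older releases require the filesystem unmounted)
```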
 
We have a number of FreeBSD servers we use for web servers...so they run Apache, MySQL, etc.

Noseyness is fine by me, if you have better ideas my ears are open...here are some reasons I need to do resizing:

1) We are...not always, but somewhat frequently adding/removing services like Moodle and the amount of required storage space may grow quickly.

2) We are looking to virtualize pretty much all the servers we can and it doesn't make sense to allocate all the space a physical server has allocated when it's only using a fraction of it.

3) For virtual machines, I allocate the system resources they need today, not what they might need some day. Resizing storage has been easy and fast for other OSes so when I need more it's an easy task.

I've used gparted to resize Linux storage and it's worked great: http://gparted.sourceforge.net/. I'm also looking into Parted Magic: http://partedmagic.com/

I've looked into growfs, but it's just too risky for me. I actually tested it out and didn't have a lot of luck (probably user error, though). Having to rebuild a server isn't something I really want to deal with.

I'll look into gvirstor.

Mike
 
Where / how is your storage managed? Are you using Network Attached Storage (NAS) via NFS/SMB/CIFS etc? Are you using a Storage Area Network (SAN) via iSCSI, FibreChannel, etc? Are you using Direct Attached Storage (DAS) where the disks are in the same system as the VMs?

Mike_MT said:
1) We are...not always, but somewhat frequently adding/removing services like Moodle and the amount of required storage space may grow quickly.

growfs can be used here to expand filesystems/partitions to use extra space in slices.

2) We are looking to virtualize pretty much all the servers we can and it doesn't make sense to allocate all the space a physical server has allocated when it's only using a fraction of it.

Thin-provisioning is what you are looking for here. You allocate X GB of storage to a VM. However, you only provision X-Y GB of physical storage. That way, the VM sees a disk that is X GB in size, but the actual physical storage in use is X-Y. When the VM gets close to using all the physical storage, you provision more physical storage to the VM. The VM never sees the changes, since it just sees a disk of X GB.

For example: you create a VM, allocate a 500 GB disk to it, but only provision 100 GB of physical storage. If the VM gets close to using the full 100 GB, you provision more physical storage, say an extra 100 GB. So the VM still sees a 500 GB disk, but you are only using 200 GB of physical storage.
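In ESX terms this is a thin-provisioned virtual disk, which vmkfstools can create directly. A sketch -- the datastore and VM names are invented:

```shell
# Sketch: create a thin-provisioned 500 GB vmdk on an ESX host.
# Datastore path and VM name are hypothetical examples.
vmkfstools -c 500G -d thin /vmfs/volumes/datastore1/webvm/webvm.vmdk
```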

3) For virtual machines, I allocate the system resources they need today, not what they might need some day. Resizing storage has been easy and fast for other OSes so when I need more it's an easy task.

See above about thin-provisioning. Configure things for what they'll need in 3 years, but only provide what they need right now. That way, you don't have to change anything down the road.

I'll look into gvirstor.

That will allow you to do thin-provisioning at the OS level. You want to do it at the VM level.

However, it all depends on how your storage is managed (see question at the very start of post).

(ZFS includes some very nice thin-provisioning features, which is why we're moving all our storage to FreeBSD+ZFS, and using that to build a SAN for our VM servers.)
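For example, ZFS can expose sparse ("thin") volumes that reserve no space up front -- a sketch, with the pool and volume names invented:

```shell
# Sketch: a sparse ("thin") ZFS volume to back a VM disk.
# Pool name "tank" and the 500G size are examples.
zfs create -s -V 500G tank/vm-disk1            # -s: reserve no space up front
zfs get volsize,refreservation tank/vm-disk1   # volsize 500G, no reservation
```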
 
FWIW, I never partition all physical disk space in a system. Nowadays that often means leaving over 100 gig unallocated. By doing this I can always create a new partition and mount it somewhere where more space is needed. This has another side effect - short stroking. And if you think about it, this is a lot like running things in a VM and creating virtual disk image files for the amount of space you want to allocate to the VM. Instead of files you're doing the same with partitions.

If partitions are too much trouble or impossible on existing systems, there's also md(4) that allows you to create and mount file backed filesystems without needing to create or edit any partition information.
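A sketch of the md(4) approach -- the backing file path, unit number, and mount point are examples:

```shell
# Sketch: file-backed filesystem via md(4), no partition editing needed.
truncate -s 10G /storage/extra.img                 # create the backing file
mdconfig -a -t vnode -f /storage/extra.img -u 0    # attach it as /dev/md0
newfs /dev/md0                                     # put a UFS filesystem on it
mount /dev/md0 /mnt/extra                          # mount wherever space is needed
```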
 
There are some options with FreeBSD here, but first, I'd like to understand the gparted thing better:

Do you shut down the virtual machine and then grow an ext[34]fs with gparted running on the host machine?

This seems a bit impractical. You can attach storage to a running virtual machine, no matter what type of storage you use.

The most natural solution would probably be to just attach a new disk, newfs it and mount it into the system. You can use symlinks and configure your services to use multiple filesystems.
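That approach might look like this -- a sketch where the device name, mount point, and application path are all examples:

```shell
# Sketch: put a newly attached virtual disk (da1) to use without resizing.
newfs /dev/da1                       # or slice/label it first if you prefer
mkdir /data2
mount /dev/da1 /data2
# relocate a space-hungry directory and leave a symlink behind,
# so the service needs no reconfiguration:
mv /usr/local/www/big-app /data2/big-app
ln -s /data2/big-app /usr/local/www/big-app
```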

Of course, it's less work to have a growing filesystem.

ZFS is definitely a solution if you don't mind shutting down the VM to grow the virtual disk. No need for tools like gparted: ZFS can immediately use the additional space. Without shutting down, you can only migrate the data to a larger disk (ZFS copies all used data blocks to the new disk), which I wouldn't recommend in a virtual machine, because the I/O bottleneck on a loaded ESX system is likely to be one of your greatest problems anyway.


Other solutions involve choosing a reasonably large virtual size, like 2 TB, without having the physical storage. As described by phoenix, this seems to be possible with ESX already. All the desktop virtual machines can do it. With qemu you can even use a sparse file as a hard disk for the VM.
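A sparse file is the simplest form of thin provisioning: it reports its full logical size while consuming almost no blocks until data is written. A sketch (file name is an example; qemu can use such a file directly as a raw disk image):

```shell
# Sketch: a sparse file as cheap thin-provisioned VM disk backing.
truncate -s 1G sparse.img   # logical size: 1 GiB
ls -l sparse.img            # reports the full 1 GiB
du -k sparse.img            # but almost no blocks are actually allocated
```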


geom_virstor is also such a solution, but it is implemented in the system running inside the virtual machine.


A completely different solution is to use NFS, optionally with netbooting. I don't have much experience with network/NFS performance in an ESX VM, though.
 
Wow! Thanks for all of the ideas and great information! I'm going to look into ZFS and go from there.

As far as using gparted goes, growing the fs is by far the easiest thing for me. I just shut down the VM, extend the disk using vmkfstools, boot the server using gparted and extend it - done.

A lot of the time I don't have anything to do with the applications on the servers, just the OS/system administration, and adding disks can get a little confusing for some others. I have added disks to the VMs before, and it has worked, but moving the apps, creating links and pointers, etc. ends up being more work than we want for just adding storage.

My SAN is already configured and in production. Reconfiguring it isn't a good idea, especially for this project. These servers are important, but are only a fraction of the systems that use SAN storage.

This has been the only real issue with FreeBSD as far as my use goes. And now that VMware will be supporting it, it'll be unfortunate for me if I can't figure out an easy way to resolve this issue.

The other piece to this project is that I do need to keep the solution pretty easy as I will be having others in my department do some of this work and take on responsibility for these systems.

Thanks again for all your input!
Mike
 
Mike_MT said:
As far as using gparted, growing the fs is by far the easiest thing for me. I just shutdown the vm, extend the disk using vmkfstools, boot the server using gparted and extend it - done.

Ok, so you don't really mind shutting down. Well, with ZFS you can get rid of booting gparted in this situation, because ZFS will extend the zpool automatically if the disk was enlarged.

For this to work, you have to put the zpool directly on a disk, meaning you have to boot from somewhere else. Otherwise, you would have to update the partition table.

Ok, to be honest, you cannot really boot from ZFS with the current release (7.2), so booting from a separate disk is probably a good idea anyway.

Basically, the instructions at http://wiki.freebsd.org/ZFSOnRoot would apply to you, with the exception that you would have a small disk as ad0, and would abandon the d partition, because you would attach a second disk and create the zpool directly on it, e.g. zpool create tank ad1

done.

I don't know if all this can be done from an installation CD. I always use an existing installation and attach disks to prepare the new system directly with install.sh.
 
So I've been playing with ZFS and it looks like it should work for me. I understand adding drives to a pool, but is there a way to remove a drive from a raidz pool?

Mike
 
You cannot directly remove a drive from a raidz pool. You can only migrate the pool by replacing single disks. Shrinking is not possible with any kind of zpool at the moment. You would have to move the data to a new (smaller) pool.

If shrinking the filesystems of your VMs with gparted regularly REALLY makes sense to you, I'd suggest "replacing FreeBSD" instead, as there probably won't be a satisfying solution any time soon.

Otherwise...

I don't understand why you would use raidz with a SAN.

I don't understand why you would add drives. If it is for performance (disks on different controllers, or something), I think raidz is still not a good option (use raid0).

I thought you want to enlarge an existing drive.

To clarify, ZFS has no notion of "concatenating" multiple disks. Better said, it will try to stripe your data over all available disks all the time. Of course it can do mirroring as well, but that won't give you more capacity. So adding disks is not really a reasonable way to grow the capacity of a zpool, but it is a way to improve its performance. On the other hand, if the disks are virtual disks (like disk images or partitions on a single physical disk), you would destroy the pool's performance by generating random I/O against a single target.
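A sketch of what that striping looks like in practice -- pool and device names are examples; note that existing data is not rebalanced onto a newly added vdev, only new writes stripe across it:

```shell
# Sketch: zpool striping across top-level vdevs.
zpool create tank da1 da2   # a two-way stripe from the start
zpool add tank da3          # new writes now stripe across three vdevs
zpool status tank           # each disk appears as its own top-level vdev
```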
 
I'm getting a clearer picture now of what the best solution probably is - a single disk zpool.

My 'disk' in the zpool will be just a vmdk file attached to a VMware server, and by default it's already on a SAN, so RAID is taken care of. It also seems like a pretty straightforward task when I need more space (please correct me if I'm wrong): just create a new, larger 'disk' (vmdk file) and replace the existing 'disk' in the zpool.

And if I need to reclaim space, I'll have to create a new pool with a smaller disk, migrate the data, then destroy the first pool.

Thanks for all your help!

-Mike
 
I'm getting a clearer picture now of what the best solution probably is - a single disk zpool.
right.

...also, it seems like a pretty straight forward task when I need more space (please correct me if I'm wrong) - just create a new, larger size 'disk' (vmdk file) and replace the existing 'disk' in the zpool.
right. You can use [font="Courier New"]zpool replace pool device new_device[/font] (no shutdown required, but data will be copied).

or

the solution I tried to explain earlier (shutdown required, but no copying of the data), now step by step:

1. You have two disks in the virtual machine: one to boot FreeBSD (containing only the /boot directory), and one disk with nothing but a zpool on it:
Code:
fdisk -I da0
bsdlabel -wB da0s1
bsdlabel -e da0s1
(... edit the disk label: You need ~512m for the a: partition, and if you need swap, make a b: partition ...)
newfs /dev/da0s1a
zpool create pool da1
zfs set mountpoint=legacy pool
mount -t zfs pool /mnt
mkdir /mnt/.bootfs
mount /dev/da0s1a /mnt/.bootfs
mkdir /mnt/.bootfs/boot
ln -s .bootfs/boot /mnt/boot
(... now install the system to /mnt, setup fstab and loader.conf ...)

2. shutdown the virtual machine
3. enlarge the vmdk:
Code:
vmkfstools -X new_size vmfs_name:disk_name
4. startup the virtual machine, and your zpool will show the new size.

And if I need to reclaim space, I'll have to create a new pool with a smaller disk, migrate the data, then destroy the first pool.
right. You can use zfs send/receive for this task.
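That migration might look roughly like this -- a sketch where the pool, disk, and snapshot names are all examples (for pools with child filesystems, a recursive send would be needed where supported):

```shell
# Sketch: migrate data from pool "tank" to a new, smaller pool "newtank".
zfs snapshot tank@migrate                    # freeze a consistent copy
zpool create newtank da2                     # the new, smaller disk
zfs send tank@migrate | zfs receive -F newtank
# only after verifying the data arrived intact:
zpool destroy tank
```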
 