Expanding partition after increasing KVM container

I have an LVM container for KVM from which I run my VPS.
I have:
vtbd0p2 as /
vtbd0p3 as swap
vtbd0p4 as /usr

I needed to increase space on vtbd0p4, so I created a bigger LVM container and moved FreeBSD there. The problem is that I cannot expand the last partition on the disk.
I used fdisk -u in single user mode according to this article: http://bsdbased.com/2009/11/30/grow-freebsd-ufs-filesystem-on-vmware-hdds.

Unfortunately that advice did not work. Both bsdlabel -e /dev/vtbd0 and disklabel -e /dev/vtbd0 printed this to the terminal: /dev/vtbd0: no valid label found
I tried gpart but wasn't successful, because after booting to multiuser mode df -h did not show the same sizes as gpart, and df -h was right, showing the smaller size.

Thank you
 
mums said:
vtbd0p2 as /
vtbd0p3 as swap
vtbd0p4 as /usr
Those "p"´s should indicate that you´re using a GPT partition scheme. What is the output of the command:
# gpart show

mums said:
...so I created a bigger LVM container and moved there FreeBSD.
I'm having trouble understanding that part. Maybe you could explain that in a little more detail?

/Sebulon
 
Also, keep in mind that expanding a partition alone will not grow the FS. That needs to be done separately, using growfs.
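For reference, the usual order on a GPT disk is: recover the GPT if it is flagged corrupt, resize the partition, then grow the file system. A minimal sketch from single user mode, using the device and partition index from this thread (without -s, gpart resize takes all the free space behind the partition, and growfs without -s grows the file system to fill the partition):

# gpart recover vtbd0
# gpart resize -i 4 vtbd0
# growfs /dev/vtbd0p4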
 
Sebulon said:
Those "p"´s should indicate that you´re using a GPT partition scheme. What is the output of the command:
# gpart show


I'm having trouble understanding that part. Maybe you could explain that in a little more detail?

/Sebulon

I use virt-manager on Debian to manage KVM. There you can create volumes for virtual machines. I had an 11GB volume, so I created a 30GB one and used:
# dd if=/dev/backup/template of=/dev/vps/test bs=512K
22528+0 records in
22528+0 records out
11811160064 bytes (12 GB) copied, 181.426 s, 65.1 MB/s
Where 'template' is 11GB, 'test' is 30GB, and 'vps' and 'backup' are LVM volume groups, each on a different physical disk.
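Just for completeness, the same preparation can be done entirely from the Debian shell instead of virt-manager; a sketch using the volume group and volume names above (the lvcreate call is simply the generic LVM way to create the bigger 30GB volume):

# lvcreate -L 30G -n test vps
# dd if=/dev/backup/template of=/dev/vps/test bs=512K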

Then I booted the 30GB image in single user mode:
# gpart show
=>        34   23068605  vtbd0  GPT  (30G) [CORRUPT]
          34        128      1  freebsd-boot  (64k)
         162    6291456      2  freebsd-ufs  (3.0G)
     6291618    3145728      3  freebsd-swap  (1.5G)
     9437346   13629440      4  freebsd-ufs  (6.5G)
    23066786       1853         - free -  (926k)

df on the system in multiuser mode; in single user mode the last line is missing.
# df -h
Filesystem      Size    Used   Avail  Capacity  Mounted on
/dev/vtbd0p2      3G    484M    2.2G       17%  /
devfs           1.0k    1.0k      0B      100%  /dev
/dev/vtbd0p4    6.4G    1.9G      4G       32%  /usr

# fdisk -s
/dev/vtbd0: 62415 cyl 16 hd 63 sec
Part        Start        Size Type Flags
   1:           1    23068671 0xee 0x80
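The 0xee entry is just the protective MBR, still sized for the old 11GB image. As a sanity check (not a step I actually needed) the media size GEOM sees should already report the full 30GB disk:

# diskinfo -v vtbd0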

Then I ran:
# gpart recover vtbd0
vtbd0 recovered

# gpart show
=>        34   62914493  vtbd0  GPT  (30G)
          34        128      1  freebsd-boot  (64k)
         162    6291456      2  freebsd-ufs  (3.0G)
     6291618    3145728      3  freebsd-swap  (1.5G)
     9437346   13629440      4  freebsd-ufs  (6.5G)
    23066786   39847741         - free -  (19G)

But fdisk -s stayed the same.
After # gpart resize -i 4 -s 19G vtbd0:
# gpart show
=>        34   62914493  vtbd0  GPT  (30G)
          34        128      1  freebsd-boot  (64k)
         162    6291456      2  freebsd-ufs  (3.0G)
     6291618    3145728      3  freebsd-swap  (1.5G)
     9437346   39845888      4  freebsd-ufs  (19G)
    49283234   13631293         - free -  (6.5G)

But everything else is the same, even after running fsck.
 
Crivens said:
Also, keep in mind that expanding a partition alone will not grow the FS. That needs to be done separately, using growfs.

# growfs -s ANYTHING vtbd0p4
We strongly recommend you to make a backup before growing the file system.
Did you backup your data (Yes/No)? yes

Nothing done


Here ANYTHING is a placeholder: current size < ANYTHING < max size.
 
So the problem was between keyboard and chair. growfs is case sensitive in its question about the backup. Everything works as expected now. :)
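For anyone else hitting this, the prompt has to be answered with a capital "Yes" (a sketch; without -s, growfs simply grows the file system to the full partition size):

# growfs vtbd0p4
We strongly recommend you to make a backup before growing the file system.
Did you backup your data (Yes/No)? Yes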
 
Sebulon said:
Yeah, but that's rather silly, isn't it? I mean, most commands accept either.

/Sebulon

Unfortunately this one doesn't behave like most commands do.
 