[Solved] Disk space full on VM

Greetings,

I've run into a space issue I've never experienced before and was wondering if anybody knows what I'm missing here.

Code:
[2005][root@base ~ ] # du -h / | awk -F/ 'NF <= 2'
4.0K    /.snap
3.0K    /dev
21G    /usr
44K    /tmp
4.0K    /mnt
237M    /root
4.0K    /proc
2.8G    /var
4.0K    /media
98M    /boot
4.0K    /net
5.6M    /sbin
8.1M    /rescue
3.2M    /etc
1.1M    /bin
9.0M    /lib
156K    /libexec
24G    /
[2005][root@base ~ ] # df -h && df -i
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     74G     26G     42G    38%    /
devfs           1.0K    1.0K      0B   100%    /dev
Filesystem   1K-blocks     Used    Avail Capacity iused   ifree %iused  Mounted on
/dev/ada0s1a  77175676 27201372 43800252    38%  359215 9672783    4%   /
devfs                1        1        0   100%       0       0  100%   /dev
[2005][root@base ~ ] # fstat -f / | awk '{print $8}' | sort -rn | head
79691776
50331648
50331648
14680064
11534336
11293908
891156
855288
647028
618332
[2005][root@base ~ ] # dd if=/dev/zero of=test bs=1m count=1G

/: write failed, filesystem is full
dd: test: No space left on device
140+0 records in
139+0 records out
145752064 bytes transferred in 1.187241 secs (122765344 bytes/sec)
[2005][root@base ~ ] # stat -f "%z" test
145752064
[2005][root@base ~ ] # df -h && df -i
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     74G     26G     42G    39%    /
devfs           1.0K    1.0K      0B   100%    /dev
Filesystem   1K-blocks     Used    Avail Capacity iused   ifree %iused  Mounted on
/dev/ada0s1a  77175676 27343804 43657820    39%  359216 9672782    4%   /
devfs                1        1        0   100%       0       0  100%   /dev
[2005][root@base ~ ] # quota root
Disk quotas for user root (uid 0): none

It is worth noting that this is a FreeBSD 11.2-RELEASE VM. The host is CentOS 7.7, running Xen hypervisor 4.8.5.86.

Everything looks to be in order on the LVM side. There are four VMs set up with various OSes under the volume group, which is 500G.
Code:
[2101][root@dreams ~] # vgdisplay vg0
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  33
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <465.76 GiB
  PE Size               4.00 MiB
  Total PE              119234
  Alloc PE / Size       43520 / 170.00 GiB
  Free  PE / Size       75714 / <295.76 GiB

[2147][root@dreams ~] # lvdisplay vg0 | grep -E 'Path|Size'
  LV Path                /dev/vg0/domU-test-disk
  LV Size                20.00 GiB
  LV Path                /dev/vg0/xen-win7
  LV Size                50.00 GiB
  LV Path                /dev/vg0/custom-linux
  LV Size                20.00 GiB
  LV Path                /dev/vg0/base
  LV Size                80.00 GiB

Any help would be greatly appreciated.

Warm Regards,
 
I went the route of just creating a new VM with FreeBSD 12.1 and copying everything I needed over to the new installation. Things are working well now.
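
In case it's useful to anyone, the copy itself was nothing fancy; roughly this kind of tar-over-ssh, from memory (the paths are just examples, and the old box is the base machine shown in the prompts above):

Code:
# hypothetical example: pull configs and home dirs off the old VM
ssh root@base "tar -cf - /usr/local/etc /home" | tar -xpf - -C /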

I managed to dig up my notes on how this VM came to be. I remember that it was once a standalone server that I migrated over to a VM. The process of how I did this I did not remember, though, as it had been a few years back.

According to my notes, this was originally a FreeBSD 10.4-RELEASE install, and I was curious to see whether I could move it over to a VM and avoid a fresh reinstall, since it had just been upgraded to 10.4 (which was the latest release at the time) a few months earlier and I was too lazy to rebuild the ports.

After creating the LV with the proper size, I just did a dd to push everything over from the standalone server, and that's where I'm sure I messed up. Even though the LV was the correct size, the copy was block for block, and the way I ran it, zeroed data never made it across. That left only a portion of the LV actually populated, while the partition tables kept reporting the old situation from the standalone server, since they were copied over with dd as well.
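
For the record, the push looked roughly like this; reconstructed from old notes, so treat the exact device names as approximations (the Xen host is dreams and the LV is /dev/vg0/base, as shown above):

Code:
# on the old standalone server: stream the raw disk into the host's LV
dd if=/dev/ada0 bs=1m | ssh root@dreams "dd of=/dev/vg0/base bs=1m"

Inside the guest, a sanity check like this would at least have shown whether the copied metadata matched the disk the VM actually sees:

Code:
diskinfo -v ada0    # media size the virtual disk really has
gpart show ada0     # layout the copied partition table claims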

That also explains why I had a little bit of additional space: I never zeroed out the free space on the server before copying it over. If I had zeroed out the disk before copying, I would have run into the issue much sooner. Looking back, I should have created a UFS filesystem on the LV and then run dump(8)/restore(8).
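
I mean something along these lines; just a sketch, assuming the fresh LV shows up as ada0 in the guest and that it's run from an installer or rescue environment (oldserver is a placeholder):

Code:
# label, boot blocks, and a clean UFS on the new LV-backed disk
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
gpart add -t freebsd-ufs -a 1m ada0
newfs -U /dev/ada0p2
mount /dev/ada0p2 /mnt
# stream a level-0 dump of the old root straight into restore
ssh root@oldserver "dump -0Lauf - /" | (cd /mnt && restore -rf -)

That way the new filesystem's own metadata matches the disk it actually lives on, instead of inheriting whatever the old server's label said.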

Kind regards,
 