My zpool was running out of space (99%, ~60GB left), so I started to delete some files I didn't need anymore. After deleting the files I checked with df to see how much space I had freed. Well, it looks like deleting actually took space: it went from 60GB left to 0. So now my zpool is at 100% capacity and deleting more files won't free any space. Just some background info: before deleting the files I upgraded my system to 10.1 and upgraded my zpool.

Info:
Code:
[root@fileserver /disk2]# df -h
Filesystem          Size  Used  Avail Capacity  Mounted on
zroot/ROOT/default  792G  1.2G   791G     0%    /
devfs               1.0K  1.0K     0B   100%    /dev
fdescfs             1.0K  1.0K     0B   100%    /dev/fd
zroot/classics      818G   27G   791G     3%    /classics
disk1               3.9T  3.8T   171G    96%    /disk1
disk2                12T   12T     0B   100%    /disk2
zroot/music         1.7T  976G   791G    55%    /music
zroot/tmp           791G  160K   791G     0%    /tmp
zroot/usr/home      791G  192K   791G     0%    /usr/home
zroot/usr/ports     792G  994M   791G     0%    /usr/ports
zroot/usr/src       791G  552M   791G     0%    /usr/src
zroot/var           792G  1.5G   791G     0%    /var
zroot/var/crash     791G  148K   791G     0%    /var/crash
zroot/var/log       791G  1.3M   791G     0%    /var/log
zroot/var/mail      791G  288K   791G     0%    /var/mail
zroot/var/tmp       791G  152K   791G     0%    /var/tmp
Code:
[root@fileserver /]# zpool status disk2
pool: disk2
state: ONLINE
scan: scrub repaired 0 in 15h48m with 0 errors on Wed Nov 19 01:42:01 2014
config:
	NAME        STATE     READ WRITE CKSUM
	disk2       ONLINE       0     0     0
	  raidz1-0  ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	    da5     ONLINE       0     0     0
	    da6     ONLINE       0     0     0
	    da7     ONLINE       0     0     0
errors: No known data errors
Code:
[root@fileserver /]# zpool get all disk2
NAME PROPERTY VALUE SOURCE
disk2 size 14.5T -
disk2 capacity 97% -
disk2 altroot - default
disk2 health ONLINE -
disk2 guid 13177175766481485457 default
disk2 version - default
disk2 bootfs - default
disk2 delegation on default
disk2 autoreplace on local
disk2 cachefile - default
disk2 failmode wait default
disk2 listsnapshots off default
disk2 autoexpand off default
disk2 dedupditto 0 default
disk2 dedupratio 1.00x -
disk2 free 442G -
disk2 allocated 14.1T -
disk2 readonly off -
disk2 comment - default
disk2 expandsize 0 -
disk2 freeing 0 default
disk2 fragmentation 4% -
disk2 leaked 0 default
disk2 feature@async_destroy enabled local
disk2 feature@empty_bpobj enabled local
disk2 feature@lz4_compress active local
disk2 feature@multi_vdev_crash_dump enabled local
disk2 feature@spacemap_histogram active local
disk2 feature@enabled_txg active local
disk2 feature@hole_birth active local
disk2 feature@extensible_dataset enabled local
disk2 feature@embedded_data active local
disk2 feature@bookmarks enabled local
disk2 feature@filesystem_limits enabled local
Code:
[root@fileserver /]# zfs list disk2
NAME USED AVAIL REFER MOUNTPOINT
disk2 12.2T 0 12.2T /disk2
Code:
[root@fileserver /]# zdb -h disk2
History:
2010-08-08.19:49:22 zpool create disk2 raidz da0 da1 da2 da3 da4 da5 da6 da7
2010-08-08.19:49:38 [internal pool property set txg:6] autoreplace 1 disk2
2010-08-08.19:49:38 zpool set autoreplace=on disk2
2011-03-06.21:51:37 zpool upgrade disk2
2011-03-06.21:51:52 [internal filesystem version upgrade txg:5413585] oldver=3 newver=4 dataset = 16
2011-07-14.20:55:06 [internal pool scrub txg:5788798] func=1 mintxg=0 maxtxg=5788798
2011-07-14.20:55:09 zpool scrub disk2
2011-07-15.04:33:37 [internal pool scrub done txg:5789714] complete=1
2012-01-16.17:00:28 [internal filesystem version upgrade txg:6326190] oldver=4 newver=5 dataset = 16
2013-10-19.16:51:29 zpool upgrade disk2
2014-04-20.14:22:08 zpool upgrade -a
2014-05-13.19:20:06 zpool import disk2
2014-11-17.19:44:24 zpool upgrade disk2
2014-11-18.09:53:47 zpool scrub disk2
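To put the outputs above side by side: zpool get reports raw space (parity included), while zfs list reports usable space after raidz1 parity, so the two never match directly. Here's a rough sanity check on my numbers (just a sketch, assuming 8 equal-size disks and ignoring metadata overhead and reservations):

```python
# Back-of-envelope check on the numbers reported above.
# Assumption: 8 equal-size disks in a single raidz1 vdev (one disk of parity).
raw_size_tib = 14.5   # `zpool get size` (raw, includes parity)
raw_free_gib = 442    # `zpool get free` (raw, includes parity)
disks, parity = 8, 1

data_fraction = (disks - parity) / disks
usable_tib = raw_size_tib * data_fraction
usable_free_gib = raw_free_gib * data_fraction

print(f"usable capacity ~ {usable_tib:.1f} TiB")   # ~12.7 TiB, vs 12.2T USED in `zfs list`
print(f"usable free ~ {usable_free_gib:.0f} GiB")  # ~387 GiB, yet AVAIL shows 0
```

So by that math the pool itself should still have a few hundred GB of raw headroom, yet the filesystem reports AVAIL 0, which is exactly the discrepancy I can't explain.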
Some things I already tried:
- Deleting more files, doesn't help of course
- Truncating a big file (truncate -s 0 my_large_file)
- Complete scrub, no errors
- Multiple sync commands
- Don't have snapshots, so can't delete those
- umount/mount
- Shutdown/reboot
- zdb -L (cancelled, because it was going to take ~9000 hours...)

Just so you know:
- I know it's best practice to keep pool usage at around 80-85% for performance reasons, but it's just a simple fileserver and I never had any problems before. If I can fix this, though, I'm going to set a quota just to be sure.
- I know I should scrub more often.
- No volumes are created, but hey, it was 4 years ago when I created the zpool; I didn't know any better.
- I'm not in a position right now to back up the complete 12TB of data somewhere else and rebuild the zpool, though I do keep backups of the most important files (~6TB).

But the question is: how can I fix this?