Hi
I have a FreeBSD 10.2-RELEASE-p9 system with a fairly large zpool.
It's a backup server that does a lot of reads and writes; it has been working fine and performance has been OK.
Code:
root@freebsd03:~ # zpool list
NAME     SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
s12d33   54.5T  49.5T  5.02T  -         49%   90%  1.00x  ONLINE  -
root@freebsd03:~ # zpool status
  pool: s12d33
 state: ONLINE
  scan: none requested
config:

	NAME                           STATE     READ WRITE CKSUM
	s12d33                         ONLINE       0     0     0
	  raidz2-0                     ONLINE       0     0     0
	    multipath/J12F12-1EJDBAEJ  ONLINE       0     0     0
	    multipath/J12F13-1EJDGHWJ  ONLINE       0     0     0
	    multipath/J12F14-1EJAWSMJ  ONLINE       0     0     0
	    multipath/J12F15-1EJDGL9J  ONLINE       0     0     0
	    multipath/J12F16-1EJAUE5J  ONLINE       0     0     0
	  raidz2-1                     ONLINE       0     0     0
	    multipath/J12F17-1EJD9K1J  ONLINE       0     0     0
	    multipath/J12F18-1EJAUZ4J  ONLINE       0     0     0
	    multipath/J12F19-1EJ9PP2J  ONLINE       0     0     0
	    multipath/J12F20-1EJ7X50J  ONLINE       0     0     0
	    multipath/J12F21-1EJAUNKJ  ONLINE       0     0     0

errors: No known data errors
The server also runs zrep (a zfs send/recv replication script) and replicates to another server. It syncs every 10 minutes, and normally that window is enough to complete a send/recv.
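For context, my understanding is that the cycle zrep runs each interval looks roughly like the following. The dataset, host, and snapshot names here are invented examples, and the commands are echoed rather than executed so nothing touches a real pool:

```shell
# Rough sketch of one incremental zfs send/recv cycle as run by a tool
# like zrep. SRC, HOST, and the snapshot names are made-up examples.
SRC="s12d33/data"
HOST="backup02"
PREV="zrep_000041"   # last snapshot already present on the destination
NEW="zrep_000042"    # snapshot taken for this cycle

# Echoed for illustration; drop the echo to actually run each step.
echo "zfs snapshot ${SRC}@${NEW}"
echo "zfs send -i ${SRC}@${PREV} ${SRC}@${NEW} | ssh ${HOST} zfs recv ${SRC}"
```

The point being that each 10-minute cycle only sends the delta between the previous snapshot and the new one, so a cycle that overruns its window suggests either a large delta or slow pool I/O.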
Lately the backup server application has been struggling and becomes very sluggish. The zfs send/recv is taking longer and longer, and once it goes down this path the server's I/O seems to get slower and slower, to the point where I have to reboot the server. The console stays responsive, but any zfs command takes a while to return; e.g. "zfs list" might take 30-40 seconds when normally it's instant.
After a reboot it will come up and work fine for 4-5 days and then degrade again, although the pattern does seem to indicate it's more likely to struggle when it's under heavier load.
The server has plenty of memory, and I also limit the ARC to 4 GB via vfs.zfs.arc_max="4G" in /boot/loader.conf.
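For reference, that loader.conf tunable should show up at runtime as the limit in bytes (checked via `sysctl vfs.zfs.arc_max` on FreeBSD); 4 GiB works out to:

```shell
# vfs.zfs.arc_max="4G" in /boot/loader.conf caps the ARC at 4 GiB.
# The sysctl reports the limit in bytes; 4 GiB in bytes is:
echo $((4 * 1024 * 1024 * 1024))   # 4294967296
```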
After some research I believe my issue comes down to using too much space: the zpool is around 90% full, and I now understand performance can start to degrade anywhere past 80%?
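In case it's useful, this is the sort of check I could script to catch the pool crossing 80% before things degrade. It's only a sketch: `check_cap` is a helper I made up, and here I feed it the values from my `zpool list` output above, whereas on the live box they would come from `zpool list -H -o name,capacity`:

```shell
# Sketch: warn when a pool's capacity crosses 80%.
# check_cap is a made-up helper; on a live system, feed it output from:
#   zpool list -H -o name,capacity
check_cap() {
    name=$1
    cap=${2%\%}          # strip the trailing "%" to get an integer
    if [ "$cap" -ge 80 ]; then
        echo "WARNING: $name is at ${cap}% - ZFS performance may degrade"
    fi
}

check_cap s12d33 90%     # values from this pool's zpool list output
```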
I was wondering what my options are. Would either of the following help?
1. Adding another vdev to the pool to increase capacity to bring it back under 80% utilization?
2. Deleting some data?
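For option 1, my understanding is it would be a `zpool add` of a third raidz2 vdev, something like the sketch below. The device names are placeholders, `zpool add -n` only prints the would-be layout without changing the pool, and I've wrapped the command in an echo so it's purely illustrative:

```shell
# Option 1 sketch: grow s12d33 with a third 5-disk raidz2 vdev.
# multipath/DISK1..5 are placeholders, not real devices.
# "-n" makes zpool display the resulting layout without modifying the pool.
echo zpool add -n s12d33 raidz2 \
    multipath/DISK1 multipath/DISK2 multipath/DISK3 \
    multipath/DISK4 multipath/DISK5
```

One caveat I've read about this: ZFS doesn't rebalance existing data onto a new vdev, so the old vdevs stay as full as they are and only new writes spread across the expanded pool.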
From my research it would appear the best course of action is to build another (larger) pool and migrate the data across. With the amount of data I have and how frequently it updates, that's not going to be an easy option, so I was hoping it can be improved without transferring the whole lot?
Any assistance appreciated.
Kind Regards
Paul