ZFS nice for disk?

I am unpacking a large tar file. top -m io shows bsdtar at 100.00% (not sure of what exactly, probably disk I/O). Is it possible to "throttle" the disk usage when starting the tar command, similar to how nice -n 20 "throttles" the amount of CPU the process gets (by altering its scheduling priority)? Or would using nice with tar also lower its disk priority?
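
For reference, this is roughly what I'm running (the archive name and target path are just placeholders):

Code:
# plain extraction, which saturates the disks
bsdtar -xf big-archive.tar -C /tank1/scratch

# nice lowers the CPU scheduling priority; does anything similar exist for disk I/O?
nice -n 20 bsdtar -xf big-archive.tar -C /tank1/scratch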
 
Not directly related to your question, but it may help. My desktop system was sluggish under high disk load. I solved it by installing a second SSD and configuring it as ZIL and L2ARC devices. I also set the atime property to off.
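
Roughly the commands involved (the pool name and the ada6 partition layout here are only placeholders for illustration):

Code:
# dedicated log (ZIL) and cache (L2ARC) devices on the second SSD
zpool add tank log ada6p1
zpool add tank cache ada6p2

# stop updating access times on every read
zfs set atime=off tank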
 
I doubt that a separate ZIL device helped much in your desktop system. There are very few applications that produce large amounts of synchronous writes; NFS is one of them, but not much else. The atime property, however, is a performance killer if you don't turn it off, because everything on a dataset has to have its atime updated on every access. Turn it off everywhere except the datasets holding mailboxes; they are just about the only ones that really need atime to work correctly.
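
Since child datasets inherit the property, one way to do that is to turn atime off at the pool level and re-enable it only for the mail dataset (the names here are just examples):

Code:
zfs set atime=off tank          # inherited by all child datasets
zfs set atime=on tank/var/mail  # mailboxes still need accurate access times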
 
atime is off on all datasets apart from those that hold emails. I don't have any more free SATA/PCI slots available to install new disks. I have one raidz2 pool with 6 spinning disks and one mirror with 2 SSDs.

In this case I was editing a file over NFS in an editor and unpacking this 20GB archive directly on the host. The editor was responsive for 20-30 seconds and then unresponsive for 2-3 minutes, and this repeated for a few cycles. My suspicion is that while ZFS was filling up memory the editor was responsive, and then when ZFS ran out of free memory it tried to flush everything to disk, which made the NFS daemon unresponsive. Since those are spinning disks I doubt adding any caching would help, because the data has to be written to disk eventually. So it would probably be more about asking ZFS to write smaller amounts of data to disk more often, rather than in big spurts? The server has 32GB of memory so it can fit quite a lot in a buffer (however, I am not exactly sure how ZFS handles memory buffers).
 
amiramix, in your case you want a separate ZIL device on a very fast SSD. NFS does a lot of synchronous writes because it wants to guarantee data integrity on every write operation. If a separate ZIL doesn't help, or doesn't help enough, you might need to do some tuning to force ZFS to write in smaller bursts; it's quite common that the defaults are too optimistic and only work on very high-end hardware (a rough example of the relevant knobs follows the links below). As for buffering on ZFS, that's what the ARC cache is for; see the ARC tuning sections in the linked documents.

https://wiki.freebsd.org/ZFSTuningGuide

https://www.freebsd.org/doc/handbook/zfs-advanced.html
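
A minimal sketch of the kind of tuning meant above, assuming a FreeBSD 10.x-era ZFS; the values are only examples and the exact tunable names and defaults vary between releases, so check the guides above before applying anything:

Code:
# /boot/loader.conf
vfs.zfs.arc_max="16G"                # cap the ARC so applications and nfsd keep some RAM
vfs.zfs.txg.timeout="5"              # seconds between transaction group (txg) syncs
vfs.zfs.dirty_data_max="1073741824"  # limit dirty data per txg to roughly 1 GB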
 
The NFS access isn't critical; it's just to store some files away from the desktop because the server has more space, and it's very rare that I need to unpack such big files. The performance of the pool on the server is a much higher priority than performance over NFS.

Having said that, since I now have the new SSD mirror added to the server I would certainly want to investigate the possibility of speeding up the raidz2 pool, which is built only from spinning drives. However, I can't find any information on whether it's possible to add a ZIL on another pool without affecting data that's already stored there. E.g. these are my pools:

Code:
  pool: tank1
 state: ONLINE
  scan: scrub repaired 0 in 8h4m with 0 errors on Sun Jun  5 07:00:49 2016
config:

        NAME        STATE     READ WRITE CKSUM
        tank1       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada2p3  ONLINE       0     0     0
            ada3p3  ONLINE       0     0     0
            ada4p3  ONLINE       0     0     0
            ada5p3  ONLINE       0     0     0

errors: No known data errors

  pool: tank5
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank5       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            nvd0    ONLINE       0     0     0
            nvd1    ONLINE       0     0     0

errors: No known data errors

Can I store tank1's ZIL on tank5 and still keep the mirror and existing data on tank5?
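
For reference, this is the general syntax I've seen for attaching a dedicated log vdev to an existing pool (the device name is just a placeholder); what I can't tell is whether that device could somehow live on tank5 instead of being a raw partition:

Code:
# general form: attach a dedicated log (ZIL) vdev to an existing pool
zpool add tank1 log gpt/slog0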

BTW I just realized that vdev.cache is not set as recommended in one of the linked articles:

Code:
vfs.zfs.vdev.cache.bshift: 16
vfs.zfs.vdev.cache.size: 0
vfs.zfs.vdev.cache.max: 16384

In particular, vfs.zfs.vdev.cache.size is 0 whereas https://wiki.freebsd.org/ZFSTuningGuide suggests 5M. Should this be corrected? I haven't changed that value, so 0 must be a default.
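
If it should be changed, my understanding is that it's a boot-time tunable, so it would go into /boot/loader.conf with the value the wiki suggests (correct me if I'm wrong):

Code:
# /boot/loader.conf
vfs.zfs.vdev.cache.size="5M"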
 
kpa On my desktop system I regularly rsync a bunch of large files or cat smaller files into one. Without the caches on the SSD, ZFS was noticeably slower than UFS.
 
The ZIL is not a cache at all, so it doesn't contribute to normal read/write performance; it matters only for synchronous writes, i.e. when the application asks for O_SYNC write(2)s. I don't doubt that the L2ARC helps on your system, but that's entirely unrelated to the ZIL.
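
If you want to see which of the two is actually doing any work, zpool iostat -v breaks the statistics down per vdev, including the log and cache devices (the pool name is just an example):

Code:
# per-vdev I/O statistics refreshed every second; the log and cache
# lines show whether the ZIL and L2ARC devices see any traffic
zpool iostat -v tank 1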
 
You are right with regard to terminology: the ZIL is not a cache. All I'm saying is that a ZIL and L2ARC on a separate SSD helped me for desktop usage (where I sometimes copy/move/concatenate a lot of files). I did not experiment to see how much each one contributed and in which area.
 