Hello,
I've run some tests and benchmarks on ZFS and found something I wasn't expecting. I've written a longer post with plots here: http://www.patpro.net/blog/index.php/20 ... -metadata/ but the main problem boils down to this:
I create two brand new datasets, both with primarycache=none and compression=lz4, and copy a 4.8 GB file (2.05x compressratio) into each one. Then I set primarycache=all on the first dataset and primarycache=metadata on the second.
I cat the first file to /dev/null with zpool iostat running in another terminal, then cat the second file the same way.
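For reference, here is the test procedure sketched as commands. The pool name "tank", the dataset names, and the file name are placeholders, not my actual setup:

```shell
# Reproduction sketch -- pool/dataset/file names are placeholders.
zfs create -o primarycache=none -o compression=lz4 tank/test1
zfs create -o primarycache=none -o compression=lz4 tank/test2

cp bigfile /tank/test1/      # 4.8 GB file, ~2.05x lz4 compressratio
cp bigfile /tank/test2/

zfs set primarycache=all      tank/test1
zfs set primarycache=metadata tank/test2

# In another terminal: zpool iostat -v tank 1
cat /tank/test1/bigfile > /dev/null
cat /tank/test2/bigfile > /dev/null
```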
For the dataset with primarycache=all, the sum of the read bandwidth column is almost exactly the physical size of the file on disk (per du): 2.44 GB.
For the other dataset, with primarycache=metadata, the sum of the read bandwidth column is ...wait for it... 77.95 GB.
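One thing I noticed while staring at the numbers: 77.95 / 2.44 is almost exactly 32, which happens to be the default 128K recordsize divided by a 4K read chunk. Pure speculation on my part, but if ZFS can't cache data blocks, maybe every small read() re-fetches a whole record from disk. A quick sanity check of that hypothesis (the 4K per-syscall read size is an assumption, not something I measured):

```python
# Back-of-envelope check of a read-amplification hypothesis (speculative):
# with primarycache=metadata, data blocks are never cached, so each small
# application read might force the whole containing record off disk again.

recordsize = 128 * 1024            # default ZFS recordsize
read_chunk = 4 * 1024              # hypothetical per-syscall read size
amplification = recordsize / read_chunk

physical_size_gb = 2.44            # du output for the compressed file
predicted_gb = physical_size_gb * amplification

print(amplification)               # 32.0
print(predicted_gb)                # 78.08 -- close to the 77.95 GB observed
```

If that's what is happening, the observed traffic would be the on-disk size multiplied by the per-record amplification factor, which lines up suspiciously well.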
Any idea/hint about this behavior? I don't understand why reading a 4.8 GB file from a ZFS dataset could result in 77.95 GB of read bandwidth when primarycache is set to "metadata".