Solved: check what compression was used for a particular block

Hello,
I set up a server two years ago with gzip compression on some of its file systems. I copied data over from the old server, then a few weeks later I switched compression to lz4. Is there a way (maybe with zdb?) to know which FS blocks or files use gzip and which use lz4?
I'm still running FreeBSD 12.x, I plan to move to 13.x later this year.
 
Please explain why zfs list -o name,compression,compressratio is not enough, and why you care about "FS blocks or files" rather than ZFS datasets?
 
He changed the compression algorithm. As the change only applies to new writes, he's wondering what's still stored with the previous algorithm.
 
The question is understandable, but I have no idea how you could check it. I assume you can get some of that information from a careful zdb(8) analysis.
 
patpro
and you want to keep the files with the old compression?
Moving them to a new dataset with the compression of your choice wouldn't do it?
 
zdb was my first idea too, but the man page wasn't much help on this point. I found something here that set me on the right track: https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFilePartialAndHoleStorage

Bash:
sudo zdb -vv -bbbb -O zroot/foo/bar path/to/file

It's very verbose, but it reports the compression algorithm used for each block. This is exactly what I was looking for, even though a more statistical approach would have been enough.

Sample output:

Code:
...
Indirect blocks:
               0 L1  DVA[0]=<0:362c6ba000:2000> DVA[1]=<0:310075c000:2000> [L1 ZFS plain file] fletcher4 lz4 unencrypted ...
               0  L0 DVA[0]=<0:38069e4000:14000> [L0 ZFS plain file] fletcher4 lz4 unencrypted ...
          100000  L0 DVA[0]=<0:3806a18000:55000> [L0 ZFS plain file] fletcher4 lz4 unencrypted ...
          ...
 
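For that "more statistical approach", the verbose zdb output can be condensed into a per-algorithm block count with a small awk filter. This is a minimal sketch, assuming the algorithm name always follows the checksum name (fletcher4 in the sample above); count_compression is a hypothetical helper name, and zroot/foo/bar and the file path are placeholders:

```shell
# count_compression: tally L0 data blocks per compression algorithm
# from zdb(8) "Indirect blocks" output read on stdin.
# Assumption: the algorithm name is the field right after the checksum
# name ("fletcher4 lz4 unencrypted"); adjust if your output differs.
count_compression() {
  awk '/ L0 / {
         for (i = 1; i <= NF; i++)
           if ($i == "fletcher4") { count[$(i+1)]++; break }
       }
       END { for (alg in count) print alg, count[alg] }'
}

# Usage:
#   sudo zdb -vv -bbbb -O zroot/foo/bar path/to/file | count_compression
```

Only L0 (data) blocks are counted; indirect L1+ blocks are metadata and are compressed independently of the dataset's compression property.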
patpro
and you want to keep the files with the old compression?
Moving them to a new dataset with the compression of your choice wouldn't do it?
No, I don't need to retain the old compression; it's more about analyzing what's going on in my storage.

Full story: I recently also switched recordsize from 128K to 1M. Wanting to test the impact of this setting, I moved a 73+ GB directory to another dataset and then back to the original one. The final size was 90+ GB. Then I realized this very old directory had been retrieved from the old server, back when my local compression was gzip, so I had in fact replaced gzip-compressed data with lz4-compressed data while intending to test the impact of the recordsize change. That's when I thought it would be nice to be able to tell which blocks are gzip'ed, which are lz4'ed, etc.
 