ZFS v28 compression - simply the best

Code:
ponto:(admin)~>zfs list zroot/usr/home/ftpcrawl
NAME                      USED  AVAIL  REFER  MOUNTPOINT
zroot/usr/home/ftpcrawl  9.04G   178G  9.04G  /usr/home/ftpcrawl
ponto:(admin)~>zfs get compressratio zroot/usr/home/ftpcrawl
NAME                     PROPERTY       VALUE  SOURCE
zroot/usr/home/ftpcrawl  compressratio  10.30x  -

I highly recommend updating to 8-STABLE, and don't forget to grab arc_summary.pl.
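
For anyone wanting to reproduce this, a minimal sketch, assuming gzip-9 (the post doesn't say which algorithm was actually used). Note that compression only applies to blocks written after the property is set, so set it before copying the data in:

Code:
zfs set compression=gzip-9 zroot/usr/home/ftpcrawl
# (re)write the data, then check the result:
zfs get compression,compressratio zroot/usr/home/ftpcrawl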
 
Yeah, as far as I know there is no difference in compression (algorithms etc.) between ZFS versions, so I don't think that's a reason to move to STABLE...
WRT ZFS v28, for those who normally run RELEASE: in a month or two we should have FreeBSD 9, which will be running ZFS v28.

cheers Andy.
 
Which compression algo is that? gzip-9, lzjb, zle?

Compression ratios really depend on the data being compressed. If it's all text files, then it compresses really well. If it's all JPEG files, then it hardly compresses at all. :)

Comparing compression ratios is pretty meaningless without a description of what is being compressed.
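
If you want to see this for yourself, a quick sketch (the test dataset names, source paths, and mountpoints here are made up for the example):

Code:
# same algorithm, very different data
zfs create -o compression=gzip-9 zroot/test-text
zfs create -o compression=gzip-9 zroot/test-jpeg
cp -R /usr/share/doc/* /zroot/test-text/     # mostly plain text
cp /path/to/photos/*.jpg /zroot/test-jpeg/   # already compressed
zfs get compressratio zroot/test-text zroot/test-jpeg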

@graudeejs: the compressratio property is separate from the dedup ratio (one is a filesystem property, the other is a pool property).
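
That is, you query each ratio at a different level (pool/dataset names taken from the posts in this thread):

Code:
zfs get compressratio zroot/usr/home/ftpcrawl  # per-dataset
zpool get dedupratio storage                   # pool-wide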

You need to view the output of # zdb -DD <poolname> to get the full view of the relationship between compression ratio, dedupe ratio, extra copies of files, metadata copies, etc. It gives the "overall disk savings ratio".

Code:
[root@alphadrive ~]# zdb -DD storage
DDT-sha256-zap-duplicate: 22891152 entries, size 1126 on disk, 181 in core
DDT-sha256-zap-unique: 44102144 entries, size 1157 on disk, 187 in core

DDT histogram (aggregated over all DDTs):

bucket              allocated                       referenced          
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    42.1M   3.91T   3.00T   3.15T    42.1M   3.91T   3.00T   3.15T
     2    11.6M   1.18T    972G   1005G    26.0M   2.65T   2.12T   2.20T
     4    6.89M    576G    431G    458G    35.6M   2.86T   2.16T   2.29T
     8    1.53M    129G   88.2G   95.4G    16.4M   1.35T    929G   1008G
    16     881K   81.0G   54.2G   58.2G    18.1M   1.68T   1.12T   1.20T
    32     793K   61.6G   38.8G   42.6G    34.8M   2.57T   1.65T   1.81T
    64     133K   4.29G   2.82G   3.61G    11.5M    374G    243G    313G
   128    50.8K   1.83G   1.19G   1.49G    8.22M    291G    186G    236G
   256    18.4K    740M    525M    631M    6.21M    266G    195G    230G
   512    8.00K    200M    125M    176M    5.75M    131G   79.0G    116G
    1K    1.35K   20.5M   8.79M   18.3M    1.86M   26.7G   11.6G   24.7G
    2K      499   9.33M   4.18M   7.56M    1.32M   27.2G   12.1G   21.2G
    4K      326   4.78M   2.32M   4.52M    1.63M   29.0G   14.4G   25.5G
    8K      248   7.41M   3.57M   5.25M    2.78M   89.9G   43.8G   62.9G
   16K      294   2.97M   2.14M   4.18M    7.26M   57.2G   40.2G   92.6G
   32K       49    696K    463K    791K    1.81M   25.4G   15.9G   28.1G
   64K        9    136K   8.50K   79.9K     723K   9.28G    640M   6.19G
  128K        1     512     512   7.99K     194K   96.9M   96.9M   1.51G
  256K        1     512     512   7.99K     406K    203M    203M   3.17G
 Total    63.9M   5.93T   4.56T   4.78T     223M   16.3T   11.8T   12.8T

dedup = 2.67, compress = 1.38, copies = 1.08, dedup * compress / copies = 3.41

The 256K line is the most impressive: a single block is referenced over 400 thousand times, storing 3.17 GB of data in just 203 MB of actual disk space. :) (This is using lzjb compression with dedup.)
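
As a sanity check, the overall savings ratio on the last line works out directly from the three factors: 2.67 x 1.38 / 1.08 = 3.41, which matches the Total row, i.e. 16.3T of referenced logical data (LSIZE) occupying 4.78T of allocated disk (DSIZE).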
 
ZFS v28 also has pretty impressive throughput:

2 x 7200 RPM SATA2 drives in raidz1

Code:
ponto:(admin)~>zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       73.1G   387G     10     38   978K  1.18M
zroot       73.1G   387G  1.20K      0   152M      0
zroot       73.1G   387G  1.24K      0   158M      0
zroot       73.1G   387G  1.12K    108   143M   193K
zroot       73.1G   387G  1.26K      0   160M      0
zroot       73.1G   387G  1.24K      0   158M      0
zroot       73.1G   387G    984      0   120M      0
zroot       73.1G   387G  1.15K      0   147M      0
zroot       73.1G   387G  1.22K    157   157M   266K
zroot       73.1G   387G  1.23K      0   157M      0
zroot       73.1G   387G  1.41K      0   180M      0
zroot       73.1G   387G  1.30K      0   167M      0
zroot       73.1G   387G  1.38K      0   176M      0
zroot       73.1G   387G  1.42K     69   182M   132K
zroot       73.1G   387G  1.32K     35   168M  71.9K
zroot       73.1G   387G  1.20K      0   154M      0
zroot       73.1G   387G  1.29K      0   165M      0
zroot       73.1G   387G  1.39K      0   178M      0
zroot       73.1G   387G  1.24K      0   159M      0
zroot       73.1G   387G  1.38K      0   177M      0
zroot       73.1G   387G  1.43K      0   183M      0

It is going to be part of a Hadoop cluster for data processing.
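
If you want to generate a comparable sequential read load yourself (the file name here is hypothetical), something like this while zpool iostat 1 runs in another terminal:

Code:
dd if=/usr/home/ftpcrawl/bigfile.tar of=/dev/null bs=1m

Keep in mind that zpool iostat shows bandwidth at the pool (physical) level, so with compression enabled the logical read rate seen by dd can be noticeably higher than the ~150-180M shown above.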
 