ZFS - Filesystem Migration to RAIDZ2 - Is this right?

I'm not a ZFS newbie, but I have never used a RAIDZ2 pool before. I have now migrated two VMs to another host, and the used capacity on the destination pool has grown by a factor of two :eek: I know that a RAIDZ2 setup needs more capacity, but the output of "zfs list" is a little curious. Does "zfs list" report what the dataset uses including the RAID parity overhead, or not?

Code:
Example:
=======
$ zfs list rpool/vmm/ns2/systemLUN
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool/vmm/ns2/systemLUN  22.6G  1.58T  22.6G  -

$ zfs get all rpool/vmm/ns2/systemLUN    # SOURCE: mirrored pool
NAME                     PROPERTY              VALUE                  SOURCE
rpool/vmm/ns2/systemLUN  type                  volume                 -
rpool/vmm/ns2/systemLUN  creation              Mon May 13 14:47 2019  -
rpool/vmm/ns2/systemLUN  used                  22.6G                  -
rpool/vmm/ns2/systemLUN  available             1.58T                  -
rpool/vmm/ns2/systemLUN  referenced            22.6G                  -
rpool/vmm/ns2/systemLUN  compressratio         1.15x                  -
rpool/vmm/ns2/systemLUN  reservation           none                   default
rpool/vmm/ns2/systemLUN  volsize               50G                    local
rpool/vmm/ns2/systemLUN  volblocksize          8K                     default
rpool/vmm/ns2/systemLUN  checksum              on                     default
rpool/vmm/ns2/systemLUN  compression           lz4                    inherited from rpool
rpool/vmm/ns2/systemLUN  readonly              off                    default
rpool/vmm/ns2/systemLUN  createtxg             3399310                -
rpool/vmm/ns2/systemLUN  copies                1                      default
rpool/vmm/ns2/systemLUN  refreservation        none                   received
rpool/vmm/ns2/systemLUN  guid                  11665848094354567547   -
rpool/vmm/ns2/systemLUN  primarycache          all                    default
rpool/vmm/ns2/systemLUN  secondarycache        all                    default
rpool/vmm/ns2/systemLUN  usedbysnapshots       0                      -
rpool/vmm/ns2/systemLUN  usedbydataset         22.6G                  -
rpool/vmm/ns2/systemLUN  usedbychildren        0                      -
rpool/vmm/ns2/systemLUN  usedbyrefreservation  0                      -
rpool/vmm/ns2/systemLUN  logbias               latency                default
rpool/vmm/ns2/systemLUN  dedup                 off                    default
rpool/vmm/ns2/systemLUN  mlslabel                                     -
rpool/vmm/ns2/systemLUN  sync                  standard               default
rpool/vmm/ns2/systemLUN  refcompressratio      1.15x                  -
rpool/vmm/ns2/systemLUN  written               22.6G                  -
rpool/vmm/ns2/systemLUN  logicalused           25.8G                  -
rpool/vmm/ns2/systemLUN  logicalreferenced     25.8G                  -
rpool/vmm/ns2/systemLUN  volmode               dev                    received
rpool/vmm/ns2/systemLUN  snapshot_limit        none                   default
rpool/vmm/ns2/systemLUN  snapshot_count        none                   default
rpool/vmm/ns2/systemLUN  redundant_metadata    all                    default

$ zfs list rpool/vmm/ns2/systemLUN
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool/vmm/ns2/systemLUN  44.8G   330G  44.8G  -

$ zfs get all rpool/vmm/ns2/systemLUN    # TARGET: raidz2 pool
NAME                     PROPERTY              VALUE                  SOURCE
rpool/vmm/ns2/systemLUN  type                  volume                 -
rpool/vmm/ns2/systemLUN  creation              Sun Jun  2 20:42 2019  -
rpool/vmm/ns2/systemLUN  used                  44.8G                  -
rpool/vmm/ns2/systemLUN  available             352G                   -
rpool/vmm/ns2/systemLUN  referenced            44.8G                  -
rpool/vmm/ns2/systemLUN  compressratio         1.15x                  -
rpool/vmm/ns2/systemLUN  reservation           none                   default
rpool/vmm/ns2/systemLUN  volsize               50G                    local
rpool/vmm/ns2/systemLUN  volblocksize          8K                     default
rpool/vmm/ns2/systemLUN  checksum              on                     default
rpool/vmm/ns2/systemLUN  compression           lz4                    inherited from rpool
rpool/vmm/ns2/systemLUN  readonly              off                    default
rpool/vmm/ns2/systemLUN  createtxg             13138                  -
rpool/vmm/ns2/systemLUN  copies                1                      default
rpool/vmm/ns2/systemLUN  refreservation        none                   received
rpool/vmm/ns2/systemLUN  guid                  2920675046356825502    -
rpool/vmm/ns2/systemLUN  primarycache          all                    default
rpool/vmm/ns2/systemLUN  secondarycache        all                    default
rpool/vmm/ns2/systemLUN  usedbysnapshots       0                      -
rpool/vmm/ns2/systemLUN  usedbydataset         44.8G                  -
rpool/vmm/ns2/systemLUN  usedbychildren        0                      -
rpool/vmm/ns2/systemLUN  usedbyrefreservation  0                      -
rpool/vmm/ns2/systemLUN  logbias               latency                default
rpool/vmm/ns2/systemLUN  dedup                 off                    default
rpool/vmm/ns2/systemLUN  mlslabel                                     -
rpool/vmm/ns2/systemLUN  sync                  standard               default
rpool/vmm/ns2/systemLUN  refcompressratio      1.15x                  -
rpool/vmm/ns2/systemLUN  written               44.8G                  -
rpool/vmm/ns2/systemLUN  logicalused           25.8G                  -
rpool/vmm/ns2/systemLUN  logicalreferenced     25.8G                  -
rpool/vmm/ns2/systemLUN  volmode               dev                    received
rpool/vmm/ns2/systemLUN  snapshot_limit        none                   default
rpool/vmm/ns2/systemLUN  snapshot_count        none                   default
rpool/vmm/ns2/systemLUN  redundant_metadata    all                    default
 
On ZoL I get the same behavior for a newly created zvol with an ext3 filesystem on it: after copying a 4.83 GB file into that filesystem, "zfs list" shows 11.1 GB used (file size plus RAIDZ2 overhead, i.e. parity). I think that's an inconsistency in the output, because normal filesystems are reported close to their "logicalused" value, while zvol entries are shown with the full allocated capacity including overhead. I'm confused. Without "zfs get logicalused ..." I can't tell how much data has actually been written into a volume since its creation.
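If I understand the RAIDZ allocation rules correctly, the roughly doubled USED value is what you'd expect for a zvol with the default 8K volblocksize: every block gets its own parity sectors, and each allocation is then padded up to a multiple of (parity + 1) sectors. A rough sketch of that math in shell (the 4K sector size, i.e. ashift=12, is an assumption here; the real ratio also depends on the pool's actual ashift and vdev width):

```shell
# Rough RAIDZ2 allocation model for one zvol block.
# Assumptions: ashift=12 (4K sectors), volblocksize=8K, raidz2 (parity=2).
sector=4096; volblock=8192; parity=2
data=$(( (volblock + sector - 1) / sector ))      # data sectors (ceil division) -> 2
total=$(( data + parity ))                        # plus parity sectors -> 4
mult=$(( parity + 1 ))                            # RAIDZ pads to multiples of parity+1
padded=$(( (total + mult - 1) / mult * mult ))    # round up -> 6
echo "raw sectors per ${volblock}-byte block: ${padded}"
```

On this model an 8K logical block occupies 6 sectors (24K) of raw pool space; "zfs list" then deflates the raw allocation by the pool's nominal full-stripe parity ratio, which could be why the reported figure lands around 2x rather than 3x. A larger volblocksize would amortize the parity and padding much better.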
 
Hi, is your RAIDZ2 array made up of only 3 disks, perhaps? Can you post the output of "zpool status"?

PS: actually you may need more than 3 disks for RAIDZ2; still interested to see the zpool info.
 