GELI + ZFS: how much space overhead is normal?

Hi,

After creating a zpool on a geli-encrypted disk, I just noticed:

Code:
=>        34  7814037100  da4  GPT  (3.6T)
          34           6       - free -  (3.0K)
          40  7813988352    1  freebsd-zfs  (3.6T)
  7813988392       48742       - free -  (24M)

and

Code:
% zfs list -o space mypool
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
mypool                    1.76T   768K        0B     96K             0B       672K

I expected about 10 to 15% overhead, but this is half the size of the disk. 👀

From the logs I see "GEOM_ELI: gpt/xxx.eli: Failed to authenticate 512 bytes of data at offset 2000381017088".
That offset works out to roughly 1.8 TiB by my reckoning.
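A quick conversion of that offset (plain arithmetic, nothing geli-specific):

```python
# Offset from the GEOM_ELI log message, in bytes.
fail_offset = 2000381017088

print(fail_offset / 10**12)  # ~2.0 TB (decimal units)
print(fail_offset / 2**40)   # ~1.82 TiB (binary units, what geom reports as "T")
```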

So could it be that I have to initialise the whole drive with data first, so that geli has valid authentication data for every sector, before I can create the pool?
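If that's the case, I guess the fix would be to zero-fill the attached .eli provider once before zpool create (just my sketch, not something I've verified; the second command is only a harmless demo of the same dd invocation against a scratch file):

```shell
# On the real device (destroys any data on the provider!):
#   dd if=/dev/zero of=/dev/gpt/xxx.eli bs=1m
# Same syntax demonstrated against a scratch file instead:
dd if=/dev/zero of=/tmp/geli_zero_demo.img bs=1M count=4 2>/dev/null
wc -c < /tmp/geli_zero_demo.img
```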

Kind regards,
 
Output of geli list

Code:
Geom name: gpt/xxx.eli
State: ACTIVE
EncryptionAlgorithm: AES-XTS
KeyLength: 128
AuthenticationAlgorithm: HMAC/SHA256
Crypto: software
Version: 7
UsedKey: 0
Flags: AUTH, AUTORESIZE
KeysAllocated: 7452
KeysTotal: 7452
Providers:
1. Name: gpt/xxx.eli
   Mediasize: 2000381017600 (1.8T)
   Sectorsize: 512
   Mode: r1w1e0
Consumers:
1. Name: gpt/xxx
   Mediasize: 4000762036224 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1

Now this is interesting. 👀
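Putting a number on it, with the Mediasize values from the geli list output above:

```python
provider = 2000381017600   # Mediasize of gpt/xxx.eli (what ZFS sees)
consumer = 4000762036224   # Mediasize of gpt/xxx (the raw partition)

print(provider / consumer)  # ~0.5 -- only half the partition is usable
```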
 
Output for an internal drive:

Code:
Geom name: ada1.eli
State: ACTIVE
EncryptionAlgorithm: AES-XTS
KeyLength: 128
AuthenticationAlgorithm: HMAC/SHA256
Crypto: software
Version: 7
UsedKey: 0
Flags: BOOT, AUTH, AUTORESIZE
KeysAllocated: 3727
KeysTotal: 3727
Providers:
1. Name: ada1.eli
   Mediasize: 1778132381696 (1.6T)
   Sectorsize: 4096
   Mode: r1w1e1
Consumers:
1. Name: ada1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1

This looks better: about 11% overhead, which is within the range I expected for the HMAC tax. But why is it so far off for the other drive?
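One thing I tried while staring at this: if geli stores a 32-byte HMAC/SHA256 tag for every logical sector and pads each (data + tag) chunk out to whole 512-byte native sectors — and I'm not at all sure that's exactly geli's on-disk layout — then the numbers for both drives would actually line up:

```python
import math

def usable_fraction(logical_sector, native_sector=512, hmac_len=32):
    """Fraction of raw space left for data if every logical sector is
    stored together with its HMAC tag, padded to whole native sectors."""
    stored = math.ceil((logical_sector + hmac_len) / native_sector) * native_sector
    return logical_sector / stored

print(usable_fraction(512))   # 512-byte sectors (external drive) -> 0.5
print(usable_fraction(4096))  # 4096-byte sectors (internal drive) -> ~0.889

# Measured ratios from the two geli list outputs above:
print(2000381017600 / 4000762036224)   # external: ~0.5
print(1778132381696 / 2000398934016)   # internal: ~0.889
```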
 
I tried it with a second drive (also bought recently), and it shows exactly the same behaviour.
It also seems to be geli-related: if I create the zpool without the geli layer, I get the full disk space:

Code:
NAME        USED  AVAIL  REFER  MOUNTPOINT
mypool      396K  3.51T    96K  none

So far I'm out of ideas. I've created larger geli devices in the past without issue. 🤔
 