I have the following ZFS setup:
pool portal: 6 x 2TB drives in raidz2 + SSD read cache
pool temple: created on top of the portal pool, encrypted.
portal was created using the following commands and now has the following status:
Code:
# zpool create portal raidz2 /dev/da0.nop /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da6 cache /dev/da5
# zpool status portal
  pool: portal
 state: ONLINE
  scan: scrub repaired 0 in 25h6m with 0 errors on Sat Feb  1 01:29:35 2014
config:

        NAME        STATE     READ WRITE CKSUM
        portal      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da7     ONLINE       0     0     0
        cache
          da6       ONLINE       0     0     0

errors: No known data errors
On top of it, as mentioned, I created an encrypted pool using the following commands (the geli commands are omitted here):
Code:
# zfs create -V 4T portal/bolt00
# zpool create temple /dev/zvol/portal/bolt00.eli
# zpool status temple
  pool: temple
 state: ONLINE
  scan: scrub repaired 0 in 14h19m with 0 errors on Mon Feb  3 12:52:22 2014
config:

        NAME                      STATE     READ WRITE CKSUM
        temple                    ONLINE       0     0     0
          zvol/portal/bolt00.eli  ONLINE       0     0     0

errors: No known data errors
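For reference, this is how I've been looking at the zvol's space accounting on portal. These are standard ZFS properties and commands; the comments only describe what each one reports, not actual values from my system:

```shell
# Show the zvol's size, block size, and the space reserved for it
# on the parent pool (a non-sparse zvol gets a refreservation).
zfs get volsize,volblocksize,refreservation portal/bolt00

# Break portal's USED down by component (dataset data, snapshots,
# refreservation, children).
zfs list -o space portal
```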
Now my problem is with the utilization of the portal pool. What I don't understand is why the utilization of portal increases when I write something to the temple pool. Currently I have the following utilization:
Code:
# zfs list portal temple
NAME      USED  AVAIL  REFER  MOUNTPOINT
portal   7.13T  2.91G   288K  none
temple   3.09T   831G   144K  none
#
But I can't write anything big to temple, because portal's free space drops to 0. Why? portal/bolt00 was not created as a sparse volume, so I'd expect the free space reported by temple to be the actual free space.
What is even worse: when I first noticed this issue, I still had around 50 GB of free space on the portal pool. During my tests I created some large files with
dd
on temple filesets (a couple of 10 GB files, to watch the utilization on both pools). After I removed the test files, the utilization of portal did not come back down but stayed full, as shown above. The question is: why?
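For reference, here is the kind of space-accounting arithmetic that might be involved. This is only a sketch under assumptions I have not verified on this pool: ashift=12 (the da0.nop gnop device hints at 4K alignment) and the default 8K volblocksize for the zvol. On raidz, each block additionally carries parity sectors, and the allocation is padded up to a multiple of (parity + 1) sectors:

```shell
#!/bin/sh
# Sketch: raw sectors consumed per zvol block on a 6-disk raidz2,
# ASSUMING ashift=12 (4K sectors) and an 8K volblocksize -- not verified.
sector=4096        # 2^ashift bytes per sector (assumption)
volblock=8192      # zvol volblocksize (assumption: the default)
width=6            # disks in the raidz2 vdev
parity=2           # raidz2

data_sectors=$(( volblock / sector ))                     # 2
data_disks=$(( width - parity ))                          # 4
rows=$(( (data_sectors + data_disks - 1) / data_disks ))  # 1 stripe row
parity_sectors=$(( rows * parity ))                       # 2
total=$(( data_sectors + parity_sectors ))                # 4
# raidz pads each allocation up to a multiple of (parity + 1) sectors
mult=$(( parity + 1 ))
alloc=$(( (total + mult - 1) / mult * mult ))             # 6

echo "raw sectors per 8K block: $alloc"                   # prints 6
```

If those assumptions held, every 8K block written inside temple would occupy 24K of raw space on portal, which would go a long way toward explaining why a 4T volume eats far more than 4T of the pool; but volblocksize and ashift should be checked before trusting this arithmetic.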
I've noticed one more strange behavior. I have the following filesystem:
Code:
# zfs list temple/tbout
NAME           USED  AVAIL  REFER  MOUNTPOINT
temple/tbout   137G   865G   137G  /local/spool/tb/out
# mount |grep out
temple/tbout on /local/spool/tb/out (zfs, local, noatime, nosuid, nfsv4acls)
#
# df -m /local/spool/tb/out
Filesystem     1M-blocks    Used   Avail Capacity  Mounted on
temple/tbout     1026861  140725  886136    14%    /local/spool/tb/out
#
The utilization of portal depends on the files being written here. Again, why? :/