ZFS: Adding a mirror to a nearly-full pool

Hi,

I've just gone through an exercise of replacing my 4x3TB raidz1 array with a mirrored-vdev arrangement.

I have 8 slots in two boxes. I had 4x3TB disks in one box and 4x4TB disks in another.

I set up a new pool containing the 4x4TB disks arranged as 2 two-disk mirrors and migrated the data from the original pool. Since the new pool is a bit smaller than the old raidz1, I had to prune some data. Still, the new pool filled to ~96%.

Now that the raidz1 is empty, I've added two of the 3TB disks as a third mirrored pair to the new pool to give it some headroom.

Now, the original 4 disks in that new pool are essentially full, with all my unallocated space sitting on the two added disks.

Will I see a performance issue here, since ZFS wasn't able to spread the load across the additional disks? Is it worth me copying a TB or so off the pool then back on? Or should I just relax?

[edit]: For clarity, here's the final arrangement. Mirrors 0 and 1 are the 4TB disks that got filled to ~96%; mirror-2 is the pair of added 3TB disks.

Code:
mediabox@trillian:/store/media # zpool status
  pool: store
 state: ONLINE
  scan: scrub canceled on Mon Dec 21 12:01:57 2015
config:

        NAME                 STATE     READ WRITE CKSUM
        store                ONLINE       0     0     0
          mirror-0           ONLINE       0     0     0
            label/box0slot2  ONLINE       0     0     0
            label/box0slot3  ONLINE       0     0     0
          mirror-1           ONLINE       0     0     0
            label/box0slot0  ONLINE       0     0     0
            label/box0slot1  ONLINE       0     0     0
          mirror-2           ONLINE       0     0     0
            label/slot0      ONLINE       0     0     0
            label/slot1      ONLINE       0     0     0

errors: No known data errors


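(In case it's useful: zpool list -v breaks the ALLOC and FREE figures down per vdev, which is where you can see that essentially all of the pool's free space is sitting on mirror-2.)

Code:
# show capacity per vdev -- mirror-0/1 nearly full, mirror-2 mostly free
zpool list -v store
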
Many thanks!

Chris
 
Thanks for confirming that -- it does make sense.

Is there some way to ask ZFS to re-stripe existing data across the newly added vdev? If not, do I just need to put together a script that moves directories out of the pool and back in again so they get striped properly?
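
Something like the sketch below is what I had in mind -- rewrite each file in place (copy to a temporary name, then rename over the original) so the new blocks get allocated with mirror-2 in the mix. The directory path and the .restripe suffix are just placeholders, and this approach breaks hardlinks and leaves the old blocks referenced by any existing snapshots:

Code:
#!/bin/sh
# Rough sketch: rewrite every file under a directory so its blocks are
# re-allocated across all vdevs (ZFS favours the emptier mirror-2).
# DIR is a placeholder -- point it at one media directory at a time.
DIR=/store/media/some-dir
find "$DIR" -type f | while IFS= read -r f; do
        # copy preserves permissions/timestamps; rename replaces the original
        cp -p "$f" "$f.restripe" && mv "$f.restripe" "$f"
done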

Cheers,
Chris
 
AFAIK you have to move the data off the pool and copy it back in order to get it striped over all HDDs.
If the data is changing anyway (snapshots, new copy-on-write blocks, removal of old snapshots), the new data will gradually be striped across all HDDs.
But that could take a really long time.
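
If a whole dataset fits into the free space you have now, you could also re-write it in one go with send/receive inside the same pool and then swap the names. The dataset names here are only an example:

Code:
# assuming a dataset store/media that fits in the pool's free space
zfs snapshot store/media@restripe
zfs send store/media@restripe | zfs receive store/media.new
# after checking the new copy, drop the original and rename
zfs destroy -r store/media
zfs rename store/media.new store/media
zfs destroy store/media@restripe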
 
I don't know exactly how ZFS handles allocation in pools with vdevs filled over 80%. 500 GB out of 8 TB is only 6.25%, so the 4TB mirrors would still be ~90% full after deleting 500 GB. Perhaps most of the data copied back onto the pool will end up only on the nearly empty 3TB mirror. But you can test that and give some feedback here if you want. :)
 