Migrating ZFS to a new set of disks

Hi,

I'm currently having a problem with my zpool. The pool consists of two vdevs, both RAID-Z2, each containing four disks: the first vdev has 4x 1.5TB and the second 4x 3TB. The server has four onboard SATA ports plus a SAS controller with eight ports.

Currently I'm facing three problems. First, the pool is filling up quickly: only 1TB of storage is left, and in a couple of weeks I'll be receiving a large backup of about 700GB, leaving me with around 300GB of spare room. Second, zpool status is complaining that the disks are not using their native block size: "block size: 512B configured, 4096B native".

Lastly, one of the 3TB disks is experiencing intermittent failures. I have to re-add it to the vdev about twice a week and resilver. Obviously the last problem has me worried, but I'm at a loss as to how to go about fixing this.
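
For reference, bringing it back currently looks something like this (the pool name tank and device name da6 are just placeholders for my setup):

Code:
> bring the disk back online and let it resilver
# zpool online tank da6
> watch the resilver progress
# zpool status tank
> clear the logged errors once the resilver completes
# zpool clear tank da6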

I would like to upgrade the 1.5TB disks to 4TB disks. In addition, I'd like to replace the failing 3TB disk with a 4TB one, so that I can upgrade the other three later for a total of 8x 4TB.

In this process I'd like to solve all of the above-mentioned problems, but as far as I can tell there is no way to use different block sizes on different disks in the same vdev, which means that when I replace the failing 3TB disk I'll still be stuck with 512B blocks in that vdev.
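
For what it's worth, I believe the reported block size corresponds to the vdev's ashift, which is fixed when the vdev is created. Something like this should confirm it, and on recent FreeBSD versions (10.1 and later, I believe) the sysctl should force newly created vdevs to use 4K sectors (tank is a placeholder for my pool name):

Code:
> show the ashift of each vdev: 9 means 512B sectors, 12 means 4K
# zdb -C tank | grep ashift
> make any newly created vdevs use at least 4K sectors
# sysctl vfs.zfs.min_auto_ashift=12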

I have the following procedure in mind, but it's cumbersome and feels dangerous:
  • Remove all 4 1.5TB disks, leaving the entire pool in degraded state.
  • Install 4x 4TB disks and create a new zpool using the correct block size.
  • Remove two of the new 4TB disks and reinstall 2x 1.5TB; this should leave both zpools in a degraded but usable state.
  • Copy all data to the new pool (this is a problem, because it's going to be slightly too small, although I can shuffle stuff around.)
  • Destroy the old zpool, reinstall the last two 4TB disks, resilver. This should put the new zpool into online state, with all data and the correct block size.
  • Re-create a vdev with 3x 3TB and 1x 4TB disks, with the correct block size.

As far as I can tell this would leave me with one zpool that is correctly configured and contains all the data, but it feels cumbersome and really dangerous. Is there a better way?
 
Hi @blubber!

Your procedure fails at the first step :) You cannot remove all of the 1.5TB drives from the first vdev; that would fault the pool. The redundancy is per vdev, not pool-wide, so the maximum you can pull while still keeping your head above water is two per vdev.

/Sebulon
 
If you have 4 ports on-board and another 8 on the SAS controller, is it not possible to connect 4 (or at least 2) of the 4TB disks while still having the 8 original disks online?

If you can only have 8 data disks connected at any one time then your plan seems fairly reasonable. The only comments I would have are:

1) Your first step appears to be removing all disks from one vdev, which would make the pool faulted/offline. The alternative would be to remove 2 x 1.5TB and 2 x 3TB, leaving both vdevs in a precarious state, but fully online.

If we assume the new disks are in better condition than the old ones (not always a safe bet, as it's unfortunately common to get brand-new faulty disks), and that it doesn't matter hugely if a new disk has a problem while copying data, since you still have the original pool and can get replacements for the new disks, then I would suggest the following:

* Offline 2 disks in each vdev on the old pool and remove them (one being the 'dodgy' 3TB disk)
* Put the 4 new disks in and create a pool with 4 x 4TB in RAID-Z2.
* Offline 2 of the new disks and put the 1.5TB and better 3TB disk back in (remember to zpool online them and let any changes resilver).
* Copy the data, then export the old pool, pull all the old disks, and finish building the new pool.

With this you only have the old pool at its 'maximum' degraded state for the short time it takes to create the new pool. After that, you are back to having 1 disk redundancy in each vdev. You have no redundancy in the new pool while copying data, but you didn't in the original plan either and I think it makes more sense to keep the source disks redundant. If you lose a new disk while copying, it can be fixed and the data is intact. If the old pool has no redundancy and screws up during copying, you could be in trouble.
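
To make the steps concrete, the whole sequence might look roughly like this (device names such as da0-da5 and labels 4tb1-4tb4 are only examples for your setup, and zfs send/recv is just one way to do the copy):

Code:
> offline two disks per vdev on the old pool, including the dodgy 3TB (da5 here)
# zpool offline oldpool da0
# zpool offline oldpool da1
# zpool offline oldpool da4
# zpool offline oldpool da5
> pull those four disks, connect the four new 4TB drives and build the new pool
# zpool create newpool raidz2 /dev/4tb1 /dev/4tb2 /dev/4tb3 /dev/4tb4
> offline two of the new disks, swap back one 1.5TB and the good 3TB
# zpool offline newpool /dev/4tb3
# zpool offline newpool /dev/4tb4
# zpool online oldpool da0
# zpool online oldpool da4
> wait for the resilver to finish, then copy everything across
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs recv -F newpool
> export the old pool, pull its disks, reinstall the last two 4TB drives
# zpool export oldpool
# zpool online newpool /dev/4tb3
# zpool online newpool /dev/4tb4
> let those two resilver and the new pool is complete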

Edit:
Another alternative I've just thought of, which means less disk juggling, uses 'fake' memory disks. A few people on here have used this trick when copying data without enough ports:

Code:
> offline and remove 2 of the original disks, one from each vdev
> connect 4TB disk 1
> connect 4TB disk 2
# mdconfig -a -t malloc -s 4T
md0
# mdconfig -a -t malloc -s 4T
md1
# zpool create newpool raidz2 /dev/4tb1 /dev/4tb2 /dev/md0 /dev/md1
# zpool offline newpool md0
# zpool offline newpool md1
# mdconfig -d -u 0
# mdconfig -d -u 1

This uses two 4TB memory disks to allow you to create a 4-disk RAID-Z2 vdev while actually only having two of the real disks connected. Obviously you have to make sure you offline the memory disks before putting any data on the pool.
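
Once the data has been copied and the old pool exported, you'd bring the new pool up to full strength by replacing the (now destroyed) memory disks with the remaining real drives, something along these lines (4tb3/4tb4 are example labels):

Code:
> swap the remaining old disks out for the last two 4TB drives, then:
# zpool replace newpool md0 /dev/4tb3
# zpool replace newpool md1 /dev/4tb4
> and wait for the resilver to finish
# zpool status newpool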
 
Thanks for the replies. I think I'll go with the original plan, with the alteration that usdmatt suggested (removing two disks from each vdev). The trick with the memory disks feels a bit dodgy to me, but that's just a gut feeling.
 