Replacing disks in a ZFS pool is stupidly easy when you can add as many disks as you want to your server or virtual environment. My hosting provider doesn't allow me to do that. So here it goes:
Replacing disks and expanding the size of a ZFS mirror - the hard way.
I tried to keep this tutorial as simple as possible. You may want to adapt the steps below for 4k-aligned partitions etc.
I should also mention that I have set
Code:
# zpool set autoreplace=off myPool
# zpool set autoexpand=on myPool
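You can double-check both properties before you start (a quick sanity check, not part of my original run):
Code:
# zpool get autoreplace,autoexpand myPool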
Remember that you can create ZFS pools from files! You can test all the steps below before you apply them to a production system.
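For example, a throwaway mirror built from sparse files lets you rehearse the whole detach/attach procedure safely. The file paths and the pool name testPool are made up here; the commands need root:
Code:
# truncate -s 256m /tmp/zdisk0 /tmp/zdisk1 /tmp/zdisk2
# zpool create testPool mirror /tmp/zdisk0 /tmp/zdisk1
# zpool detach testPool /tmp/zdisk1
# zpool attach testPool /tmp/zdisk0 /tmp/zdisk2
# zpool destroy testPool
# rm /tmp/zdisk0 /tmp/zdisk1 /tmp/zdisk2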
First do a scrub:
Code:
# zpool scrub myPool # get a cup of coffee
$ zpool status
pool: myPool
state: ONLINE
scan: scrub repaired 0 in 0h9m with 0 errors on Thu Aug 29 12:32:03 2013
config:
        NAME           STATE     READ WRITE CKSUM
        myPool         ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
errors: No known data errors
Then remove the second device:
Code:
# zpool detach myPool gpt/disk1
# poweroff
and "physically" replace the disk.

Partition the new disk:
Code:
# gpart create -s gpt ada1
# gpart add -t freebsd-boot -s 128k ada1
# gpart add -t freebsd-zfs -l disk1 ada1
$ gpart show
=>      34  20971453  ada0  GPT  (10G)
        34       256     1  freebsd-boot  (128k)
       290  20971197     2  freebsd-zfs  (10G)

=>      34  33554365  ada1  GPT  (16G)
        34       256     1  freebsd-boot  (128k)
       290  33554109     2  freebsd-zfs  (16G)
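If your disks have 4k sectors, the same partitioning can be done aligned with gpart's -a flag. This variant is not from my original run, just a sketch of what the adapted commands would look like:
Code:
# gpart create -s gpt ada1
# gpart add -t freebsd-boot -a 4k -s 128k ada1
# gpart add -t freebsd-zfs -a 4k -l disk1 ada1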
Attach the new disk to the old one:
Code:
# zpool attach myPool /dev/gpt/disk0 /dev/gpt/disk1 # get another cup of coffee
$ zpool status
pool: myPool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Thu Aug 29 13:36:23 2013
48.3M scanned out of 4.77G at 315K/s, 4h22m to go
48.3M resilvered, 0.99% done
config:
        NAME           STATE     READ WRITE CKSUM
        myPool         ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0  (resilvering)
errors: No known data errors
Don't forget to write the boot code to your new disk:
Code:
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
This was fun, wasn't it? Repeat the steps above for the other disk.
Don't freak out if you can't boot anymore. I had to change the order of my disks to be able to boot (ada1 -> disk0, ada0 -> disk1).
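If you forgot to set autoexpand=on beforehand, the pool will keep its old size even after both resilvers finish. You can trigger the expansion manually with zpool online -e (standard ZFS, though I didn't need it in this run):
Code:
# zpool online -e myPool gpt/disk0
# zpool online -e myPool gpt/disk1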
That's it! You can verify the new pool size with zpool list:
Code:
$ zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
myPool  15.9G  4.77G  11.2G    29%  1.00x  ONLINE  -
$ zpool status
pool: myPool
state: ONLINE
scan: scrub repaired 0 in 0h3m with 0 errors on Thu Aug 29 15:45:16 2013
config:
        NAME           STATE     READ WRITE CKSUM
        myPool         ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
errors: No known data errors