ZFS newbie questions

  • Thread starter: Deleted member 2077 (Guest)
New server, will be FreeBSD 8.1 amd64 with 4 GB of RAM. Root/boot drive will be an SSD.

I have 4x 2 TB disks. One has data; the others are empty.
Can I install three of the disks into a pool, copy over the data, and then add the fourth disk to the pool?

What's the best configuration for this? I care more about redundancy than space. raidz? I just want one big slice.

Anyone have a good setup guide for 8.1? Most of the ones I've looked at are outdated or are for booting from ZFS.
 
This wiki isn't (to my knowledge) out of date and served me well a couple weeks ago: http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot

To answer your question, it depends on what kind of zpool you make. If you're making a raidz then yes, you can add the fourth drive later. However, I don't think I'd recommend doing that if you're worried about performance: I don't think there is any way to get ZFS to redistribute the data from the three original drives across all four without moving it off the zpool and then back onto it.

If you care more about redundancy than space then I think raidz2 would be good for you.
 
No, it isn't possible to add a single extra disk to an existing raidz vdev after creation; in other words, you cannot do this:

Code:
NAME                          NAME
tank                          tank
  raidz1       ->               raidz1
    ad0                           ad0
    ad1                           ad1
    ad2                           ad2
                                  ad3

You would only be able to add another three disks, forming a new raidz1 vdev, and then ZFS would stripe across the two three-disk vdevs, like this:

Code:
NAME
tank
  raidz1
    ad0
    ad1
    ad2
  raidz1
    ad4
    ad5
    ad6

It is possible to add additional single disks to a non-resilient stripe, but you stated that resilience is important to you.
 
If you can get a spare 1T disk, you would have the following choices without compromising redundancy.

First use glabel to label the disks.

# glabel label disk1 /dev/ada0

In the following, I assume you label your 1T disk as 1Tspare.
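If you're labelling all four disks, a small loop saves typing. A sketch, assuming the disks show up as ada0 through ada3 (hypothetical device names; check camcontrol devlist for yours). The echo makes it a dry run so you can review the commands before running them for real:

```shell
# Dry run: print the glabel commands for ada0..ada3.
# Remove the `echo` once the output looks right.
i=1
for dev in ada0 ada1 ada2 ada3; do
    echo glabel label "disk$i" "/dev/$dev"
    i=$((i + 1))
done
```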

Best redundancy:

Code:
# zpool create mypool raidz2 label/disk1 label/disk2 label/disk3 label/1Tspare

Copy the data to mypool.

And finally:

Code:
# zpool replace mypool label/1Tspare label/disk4

Best performance:

Code:
# zpool create mypool mirror label/disk1 label/disk2 mirror label/disk3 label/1Tspare

Copy data to mypool.

Then:

Code:
# zpool attach mypool label/1Tspare label/disk4

After resilvering:

Code:
# zpool detach mypool label/1Tspare
 
This is doable, but requires a bit of "hackery", and will run the pool in a non-redundant fashion for a little while. (The following is completely untested, and based on a method used by OpenSolaris admins.)

Create a sparse file 2 TB in size (something like dd if=/dev/zero of=/path/to/disk.img bs=1M seek=2M count=0; note that count=2M without seek would actually write out 2 TB of zeros instead of creating a sparse file).
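To confirm the file really is sparse before handing it to zpool, compare its apparent size with the blocks actually allocated. A quick check using truncate(1), which a later post in this thread also uses (the /tmp path is just for illustration):

```shell
# Create a 2 TB sparse file; truncate allocates no data blocks.
truncate -s 2T /tmp/disk.img

# Apparent size: 2199023255552 bytes (2 TB)
ls -l /tmp/disk.img
# Actual allocation: close to 0 KB on disk
du -k /tmp/disk.img
```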

Then create the pool with a raidz1 vdev, using 3 physical disks and the sparse file:
# zpool create mypool raidz1 da0 da1 da2 /path/to/disk.img

Then offline the disk.img (referring to it by the same path used at pool creation) and delete it:
# zpool offline mypool /path/to/disk.img
# rm -f /path/to/disk.img

Create your ZFS filesystems and copy all the data from the existing 2 TB drive into the pool.
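For the copy step, a recursive, permission-preserving copy is enough (rsync from ports works too). A minimal sketch using stand-in directories; in practice src would be the old disk's mount point and dst the pool's mount point (both paths here are made up):

```shell
# Stand-in paths for illustration; substitute the real mount points,
# e.g. src=/mnt/olddisk and dst=/mypool.
src=/tmp/olddisk
dst=/tmp/mypool

mkdir -p "$src/photos" "$dst"
echo "sample" > "$src/photos/a.txt"

# -R: recurse into directories, -p: preserve modes and timestamps
cp -Rp "$src/." "$dst/"
```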

Finally, add the 4th 2 TB drive to the raidz1 vdev as a replacement for the disk.img:
# zpool replace mypool /path/to/disk.img da3

Once that finishes resilvering, you'll have a raidz1 vdev comprised of 4x 2 TB drives.
 
I got this to work by creating a sparse file with truncate, i.e.:

Code:
truncate -s 2T sparse.img

There was a kernel panic when offlining the fake device, but after a reboot the sparse file came up as UNAVAIL and voila, new degraded 6-disk raidz2 with only 5 real disks.
 
Actually, I spoke too soon: this method causes unending kernel panics and will not work. Recovery is a real pain too, because you have to wipe the ZFS metadata manually.

I think this method works though.
 