zfs, raidz and total number of vdevs?

Another question just popped up: when dealing with compression, can you switch types? If so, it only affects the new data, right?
 
wonslung said:
Another question just popped up: when dealing with compression, can you switch types? If so, it only affects the new data, right?

Correct. The ZFS compression property only applies to data written after the change; existing data keeps whatever compression it was written with. And you can change it at any time.
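For example, switching is just a property change (a sketch; the pool/filesystem name is made up):

Code:
zfs set compression=gzip tank/media     # new writes use gzip; existing data is untouched
zfs get compression tank/media          # verify the current setting
zfs set compression=gzip-9 tank/media   # heavier compression, more CPU per write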
 
wonslung said:
I'm just curious, but why did you go with UFS instead of ZFS for root?

Booting off ZFS isn't enabled in FreeBSD 7.2, and all the workarounds for getting / onto ZFS were too hackish for use in production. Once ZFS booting is enabled in a -RELEASE, we'll look at migrating to it.

I was thinking how cool it would be to have ZFS snapshots for when you decide to upgrade stuff.

Yes, this is indeed very interesting. There's a project out there for creating Boot Environments, where you can boot into different filesystems, snapshots, and clones. So you can install, create a BE, upgrade, create another BE, and have access to either at boot time.

Solaris is using this now, it's part of their installer/upgrade tools.
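Until that lands, you can approximate it by hand with snapshots and clones, assuming a ZFS root (a rough sketch; the dataset names are hypothetical):

Code:
zfs snapshot tank/root@pre-upgrade                      # preserve the current root before upgrading
zfs clone tank/root@pre-upgrade tank/root-pre-upgrade   # a bootable copy you could point the loader at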

I guess you could maintain a backup on ZFS as well; is that what you do?

Yeah, I added "localhost" as a "remote server" to be backed up via the normal backup process, with an exclude list for everything except / and /usr. :) That way, if things go horribly, horribly wrong, and we can't boot off either part of the mirror, we can boot off a LiveCD, export/import the pool, and restore from the backups.
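The recovery path is basically a forced import from the rescue environment (a sketch; the pool name is hypothetical):

Code:
zpool import          # list the pools visible on the attached disks
zpool import -f tank  # force-import a pool last used by another (now dead) system
# then restore / and /usr onto fresh UFS partitions from the backups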
 
Well, I know the boot loader won't boot ZFS, but what's wrong with using the compact flash for /boot and putting everything else on ZFS? I mean, is there a technical reason not to do this?

I'm asking mainly because that's the way I did it on my last install, and I want to know if there is any major issue with that.

As far as the backup system goes, I'm really going to have to read up on what you have going.

I'm also interested in using NFS and maybe Samba as well.

My main purpose is a media server for TV show and movie backups, streaming to 3 HTPCs, 2 Xbox 360s, 2 Xboxes running Xbox Media Center, and a couple of PS3s (I know, it's a lot, haha). I'm really looking forward to the compression aspect of ZFS. From what I've read on the net, if you have a powerful enough CPU it can actually speed up reads/writes because of less actual disk I/O. I'm hoping compression=gzip or gzip-9 will be OK with media files on a quad-core CPU, but I'll do some testing (I can't really find much information on people using it for media servers yet).

Anyway, most of the computers in the house run FreeBSD or Linux of some kind, with the exception of 2 Windows machines (both laptops), so NFS would be awesome, especially for home directories over the network.
 
First of all, thanks for all the tips you gave out in this thread Phoenix. I appreciate it, as I'm sure many others do.

phoenix said:
No, the vdevs don't have to be symmetrical. You can create a pool with a mirrored vdev, a 5-drive raidz1 vdev, a 6-drive raidz2 vdev, a single-drive vdev, and so on. ZFS will then create, in essence, a RAID0 stripe across all the vdevs.

I had trouble trying to add unsymmetrical vdevs to my pool.
Code:
[root@Touzyoh ~]# zpool add mokona raidz da2 da3 da4 da5
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses 5-way raidz and new vdev uses 4-way raidz

When I ran zpool upgrade -v, I noticed that I was running version 6 of ZFS. :q I assume the error I'm getting is because of my extremely outdated version (I saw you mention version 13). What confuses me is that zpool upgrade -v says my current platform will only support version 6. I just rebuilt world to 7.2-STABLE last month; how can I get version 13 so I can upgrade my pool? I thought rebuilding world would have done the trick.

Code:
[root@Touzyoh ~]# zpool upgrade -v
This system is currently running ZFS version 6.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property 
For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
[root@Touzyoh ~]# uname -a 
FreeBSD Touzyoh.example.me 7.2-STABLE FreeBSD 7.2-STABLE #2: Mon May 11 08:32:11 UTC 2009     root@Touzyoh.example.me:/usr/obj/usr/src/sys/GENERIC  amd64
 
UnixMan said:
This system is currently running ZFS version 6.


Version 6 is what comes with 7.0-7.1 (and originally 7.2).

Version 13 has been MFC'd, and you can update to it using cvsup the normal way.

I'm using 13, and it's great: all kinds of new features like delegated management and refquotas, and a ton of other stuff. Let me find the release notes and edit this when I have them.

http://svn.freebsd.org/viewvc/base?view=revision&revision=192498
http://www.bsdunix.ch/serendipity/i...FS-Version-13-to-FreeBSD-stable-RELENG_7.html
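Note that once the system supports v13, the existing pool and filesystems still have to be upgraded explicitly, and the upgrade is one-way (a sketch, reusing the pool name from earlier in the thread):

Code:
zpool upgrade            # show the version the kernel supports and any out-of-date pools
zpool upgrade mokona     # upgrade the pool's on-disk format (older kernels can't import it afterwards)
zfs upgrade -r mokona    # upgrade the filesystem versions as well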
 
wonslung said:
Well, I know the boot loader won't boot ZFS, but what's wrong with using the compact flash for /boot and putting everything else on ZFS? I mean, is there a technical reason not to do this?

I'm asking mainly because that's the way I did it on my last install, and I want to know if there is any major issue with that.

Yeah, that should work. You'll have to change a few loader.conf settings to tell it to boot from /kernel/kernel instead of /boot/kernel/kernel, but otherwise it's fine.
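The loader side would look roughly like this (a sketch; the pool/dataset name is an assumption, and the kernel path depends on how the CF card is laid out):

Code:
# /boot/loader.conf on the UFS compact flash partition
zfs_load="YES"                        # load the ZFS module from the loader
vfs.root.mountfrom="zfs:tank/root"    # mount this ZFS dataset as /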

As these were work boxes, we went with a very conservative setup, keeping the OS on UFS (/ and /usr) and just putting the data on ZFS. We also started with ZFS back when it first was imported into FreeBSD (7.0 timeframe) and there were some issues. We wanted to make sure we could boot into a full OS to track down/fix any ZFS issues.

Now, ZFS is much more stable and reliable in FreeBSD. So you shouldn't have any issues putting just /boot on UFS, and everything else on ZFS.

Anyway, most of the computers in the house run FreeBSD or Linux of some kind, with the exception of 2 Windows machines (both laptops), so NFS would be awesome, especially for home directories over the network.

NFS support is built right into ZFS. You still set all the same /etc/rc.conf variables (nfs_server, mountd, statd, etc). Then you just set the sharenfs property for the filesystem you want to share, using the same syntax as in the /etc/exports file. And ZFS does the rest, calling nfsd and mountd to export the filesystem. And anytime you edit the sharenfs property, mountd gets refreshed.
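For example (a sketch; the dataset name and network are made up, and the property value is plain exports(5) syntax):

Code:
# the usual bits in /etc/rc.conf: nfs_server_enable="YES", mountd_enable="YES", rpcbind_enable="YES"
zfs set sharenfs="-maproot=root -network 192.168.1.0 -mask 255.255.255.0" tank/home
zfs get sharenfs tank/home       # verify
zfs set sharenfs=off tank/home   # stop exporting it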

CIFS and iSCSI support are not built into ZFS on FreeBSD, so you still need to use the Samba port or an iSCSI target port.
 
UnixMan said:
I had trouble trying to add unsymmetrical vdevs to my pool.
Code:
[root@Touzyoh ~]# zpool add mokona raidz da2 da3 da4 da5
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses 5-way raidz and new vdev uses 4-way raidz

Hrm, good to know. I haven't tried to use any non-symmetrical vdevs, I just assumed you could. The docs I've read don't mention this restriction, and made it sound like you could add any vdevs to the pool. I guess that's a reason to have multiple pools per system (I always wondered why anyone would do that).
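For what it's worth, the error output above says the check can be overridden, so a mismatched vdev can apparently still be forced in (a sketch, at your own risk):

Code:
zpool add -f mokona raidz da2 da3 da4 da5   # override the "mismatched replication level" warning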

You need to update to a newer 7-STABLE in order to get ZFSv13 support. It was only MFC'd in the last couple of weeks.
 
Sorry for reviving a dead thread, but what do you think is best?

4-drive raidz1 vdev
4-drive raidz1 vdev
pooled together (each vdev RAID5-like),
allowing for 2 drive failures as long as they're not both in the same vdev;
less I/O but shorter resilvering?

8-drive raidz2 vdev,
RAID6-like (2 drive failures tolerated, but longer resilvering times)

Normally I put 8 drives together, but my new SAS cards only have 4 ports per channel, so to make things easier I was thinking of just grouping the drives by these smaller connection groups.

The final system will be 3 SAS cards pooled together.

I am just unsure which will be best: an 8-disk raidz2 vdev, or 2x 4-disk raidz1 vdevs.

It's only for multimedia storage, and I need better read speeds than write speeds (which is what seemed to happen when I went to 4 disks; the write speeds seemed better, but I'm also using a different OS version now, so I'm not sure).
 
It depends on the drive size. If you are using drives under 1 TB, then raidz1 would be okay and give better performance. For drives over 1 TB, you should use raidz2, ideally with 6 disks per vdev, although 8 works as well.

The reason? The time it takes to resilver a dead drive. For drives over 1 TB, you're looking at several days to over a week (depending on how full and fragmented the pool is), during which time a raidz1 vdev would have 0 redundancy. If you lose a second drive while the first is resilvering ... you lose the pool!!

Thus, for larger drives that take a long time to resilver, use raidz2 (or even raidz3) to protect the pool during the resilver.
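For reference, the two layouts being compared look like this at creation time (a sketch; pool and device names are made up):

Code:
# two 4-disk raidz1 vdevs striped into one pool
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7
# versus a single 8-disk raidz2 vdev
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7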
 
What does resilvering time depend on, besides pool type? Disk size, other computer specs?
How long would it take to resilver a 2 TB drive in a 6-drive raidz2 vdev?
 
Resilvering touches every block of data on the disk, in the order that it was created (temporal order). If you create and delete a lot of snapshots, and you create and delete a lot of files, then new blocks will be interspersed with old blocks. Thus, resilvering will thrash the drive heads.

If you search through the archives for the zfs-discuss mailing list, you'll find several threads where various formulas are given for determining the worst-case resilvering time of a vdev, based on the number of drives in the vdev, the type of vdev (raidz1, raidz2, etc.), and the size of the drives. There's no "one true number".

Suffice to say, a 2 TB drive in a 50% full pool will take several days to resilver in a raidz2 vdev.

In our oldest ZFS box (we built it within a week of ZFSv6 hitting FreeBSD 7-STABLE), it takes almost 3 weeks to resilver a 500 GB drive in an 8-drive raidz2 vdev. The pool is over 2 years old, with snapshots created daily and deleted after about 6-8 months, and with data changing on a daily basis. The pool has 3x 8-drive raidz2 vdevs, with drives connected to 3Ware PCIe controllers as Single Disk arrays.

In our newest ZFS box, resilvering a 500 GB drive takes about 3 days. This pool is under 50% full, has snapshots created daily and never deleted, has dedupe enabled, and has 4x 6-drive raidz2 vdevs. Drives are connected to SuperMicro AOC-USAS-8Li SATA controllers.
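If you want to see how a resilver is going, zpool status reports its progress and a rough time estimate while it runs; e.g.:

Code:
zpool status -v tank    # shows "resilver in progress", the percentage complete, and an estimated time to go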
 
ZFS really needs block-rewrite functionality. Right now, if you want to defragment, the only solution is to move your data to a new storage pool every year or so.
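Until block rewrite arrives, that migration is usually a recursive send/receive into the new pool (a sketch; the pool names are hypothetical):

Code:
zfs snapshot -r tank@migrate                         # snapshot every dataset in the old pool
zfs send -R tank@migrate | zfs receive -F newtank    # replicate the datasets, properties, and snapshots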
 
If a resilver is abruptly stopped, say during a brief power outage, what happens to the disk and the pool? Does it continue where it stopped?
 