ZFS: Unsure about RAID configuration and possible loss of data


20 December 2015 @ 16:40

Hello, I have built a storage system with ZFS. The performance and stability are OK, but now I'm not sure about my configuration.
After losing one of my hard disks I was able to replace it, but when I look at the status I'm not sure what is really configured, and what exactly would break if a different disk fails.
The idea was to create 3 independent mirrors of disks which would then be mirrored again, like a RAID 10 in a normal array, but I'm no longer sure that this is what I have now.
Can anyone help me get a clear picture?
I also configured two SSD disks, split each of them into two partitions, mirrored them against each other, and use one partition as a ZIL and the second partition as an L2ARC cache for performance. I would also like to discuss whether this is a good idea or not.
Thanks in advance for any answers on that as well.

Here is the output of my pool status:
Code:
root@FWOMV:~# zpool status
pool: DATA
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(5) for details.
scan: scrub repaired 0 in 3h6m with 0 errors on Sun Dec 20 06:06:58 2015
config:

	NAME                                                STATE     READ WRITE CKSUM
	DATA                                                ONLINE       0     0     0
	  mirror-0                                          ONLINE       0     0     0
	    ata-SAMSUNG_HD204UI_S2H7JD1ZA05372              ONLINE       0     0     0
	    ata-SAMSUNG_HD204UI_S2H7J1AZA02091              ONLINE       0     0     0
	  raidz1-1                                          ONLINE       0     0     0
	    ata-SAMSUNG_HD204UI_S2H7J1EZC00506              ONLINE       0     0     0
	    ata-SAMSUNG_HD204UI_S2JGJ1SZA00127              ONLINE       0     0     0
	    ata-SAMSUNG_HD204UI_S2H7J90B633105              ONLINE       0     0     0
	    ata-SAMSUNG_HD204UI_S2H7JD1ZA05369              ONLINE       0     0     0
	logs
	  mirror-2                                          ONLINE       0     0     0
	    ata-OCZ-VERTEX4_OCZ-WCJXK54HXCXB71Z5-part1      ONLINE       0     0     0
	    ata-OCZ-VERTEX4_OCZ-2D0JWO51EN2B8W5K-part1      ONLINE       0     0     0
	cache
	  scsi-SATA_OCZ-VERTEX4_OCZ-WCJXK54HXCXB71Z5-part2  ONLINE       0     0     0
	  scsi-SATA_OCZ-VERTEX4_OCZ-2D0JWO51EN2B8W5K-part2  ONLINE       0     0     0
 
Ignoring the logs/cache, your pool is made up of two vdevs - mirror-0 and raidz1-1.

The mirror is made up of the first two disks, and obviously you can lose either one of those and still recover - but not both.
RAID-Z1 is similar to RAID5. This vdev is made up of 4 disks, so you can lose any one of those 4 disks, but only one of them.

If you lose both disks in the mirror, or more than one of the disks in the raidz1, that vdev will be faulted.

When you have more than one vdev, data is striped across them. If you write a file to your pool, some of it may end up on the mirror, and some of it may end up on the raidz1. As such if one of the vdevs becomes faulted, the entire pool becomes faulted, because there is no redundancy between the vdevs themselves.
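To put that in concrete terms, your current pool is effectively what you would get from commands like these (device names abbreviated; this is only an illustration of the layout, not something to run):
Code:
zpool create DATA mirror disk1 disk2          # the 2-disk mirror-0 vdev
zpool add DATA raidz disk3 disk4 disk5 disk6  # the 4-disk raidz1-1 vdev, striped alongside it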

The idea was to create 3 independent mirrors of disks which would then be mirrored again
If I understand what you're getting at here, this isn't possible. If you have 3 mirrored pairs of disks in ZFS, there is no way to tell ZFS that you want to mirror between the mirrors so to speak. If it has multiple vdevs, it will always stripe them.
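What you can do, and what most people mean by "RAID 10 in ZFS", is a pool made of several mirror vdevs that ZFS stripes across. A sketch with placeholder device names:
Code:
# three mirrored pairs; ZFS automatically stripes data across the three vdevs
zpool create DATA \
    mirror /dev/ada0 /dev/ada1 \
    mirror /dev/ada2 /dev/ada3 \
    mirror /dev/ada4 /dev/ada5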

Generally speaking it's advised to always use the same type of vdev in a pool. I'm not aware of any technical reason for this though. I think it just comes from the original Sun days when they provided enterprise support and didn't want to have to deal with weird performance issues because a customer had created a crazy pool layout.

Depending on your use case and how big your disks are, I would probably have gone for 3 mirrors (allowing you to lose one disk in each mirror before data loss), or one 6-disk RAID-Z2, which allows you to lose any 2 disks.
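For reference, the 6-disk RAID-Z2 alternative would be a single vdev, created along these lines (again with placeholder device names):
Code:
# one 6-disk RAID-Z2 vdev: any two disks can fail without losing the pool
zpool create DATA raidz2 /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4 /dev/ada5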
 
Hello usdmatt.

First, thanks for the detailed reply.

Now that I have an idea of what is configured and what I should perhaps do better, I have the following question: how can I do it?

Is it possible to delete or reconfigure the mirror-0 and/or the raidz1-1 with the existing hard disks without losing all the data? Or do I have to buy new hard disks, send the data pool to a new pool, reconfigure the old disks, and then send it back?

What do you think has better performance: the 3 striped mirror vdevs, or the single 6-disk raidz2 vdev?

Also, when doing a send/receive, will this preserve the access rights and other attributes of the files?

I think the usable space is better with the 6-disk raidz2 variant, because only 2 drives' worth of space is lost to redundancy.

The next open point for me is that after a reboot the data pool is mounted but without any data. Do I then have to do an export, delete the mountpoint directory, and force an import by dev-id as shown in the status output, to get the data back?

FRANK
 
If you need the most disk space possible, use raidz (1, 2, or 3 depending on your needs). Write performance will be similar to a single disk, while read performance will be better than a single disk. If you need (slightly) better performance, add multiple raidz vdevs to the pool.

If you need the most/best disk performance, use mirror vdevs. The more mirror vdevs you add to the pool, the better the performance will be.

You need to determine what's most important (performance, disk space, redundancy/data protection) and build your vdevs/pool around that.
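To put rough numbers on it for the 2 TB HD204UI disks in this thread: three mirrored pairs give about 6 TB of usable space and survive one failure per pair, while a 6-disk raidz2 gives about 8 TB ((6 - 2) × 2 TB) and survives any two failures.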
 
Is it possible to delete or reconfigure the mirror-0 and/or the raidz1-1 with the existing hard disks without losing all the data?
Unfortunately not. You'll need to copy the data off and re-create the pool. If the data will fit on one of your disks, it's possible, although messy, to remove 2 disks from your pool and make them into a stand-alone temporary mirror to hold the data.
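A very rough sketch of that kind of migration, assuming a temporary pool named TEMP with enough space (the pool, dataset and snapshot names are only placeholders, and you'd want a separate backup before trying any of it):
Code:
zfs snapshot -r DATA@migrate                              # recursive snapshot of everything in the pool
zfs send -R DATA@migrate | zfs receive -u -F TEMP/backup  # replicate all datasets; -u avoids mount clashes
zpool destroy DATA                                        # destroy and re-create the pool with the new layout
zpool create DATA mirror disk1 disk2 mirror disk3 disk4 mirror disk5 disk6
zfs send -R TEMP/backup@migrate | zfs receive -F DATA     # copy everything back into the new pool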

Also doing a send/receive, will this also recovery the access rights and also other attributes of the files?
Send/recv should create a completely identical copy of the data. It operates beneath the file system layer: all file system data (permissions, ACLs, etc.) is just stored in ZFS records, and the job of send/recv is to duplicate those records exactly.
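If you want to convince yourself after the copy, a quick spot check could look something like this (the paths and dataset names are just examples):
Code:
diff -r /DATA/some/dir /TEMP/backup/some/dir     # file contents should be identical
ls -l /DATA/some/dir /TEMP/backup/some/dir       # ownership and permission bits should match
zfs get -s local,received all TEMP/backup        # dataset properties carried over by send -R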

The next open point for me is that after a reboot the data pool is mounted but without any data. Do I then have to do an export, delete the mountpoint directory, and force an import by dev-id as shown in the status output, to get the data back?
If the pool is imported (i.e. shows in zpool list), it's imported. It makes absolutely no difference whether you import it using the raw disks, GPT labels, disk ID labels, whatever.

If the pool is imported, but you can't see any data, I would hazard a guess that you are missing zfs_enable="YES" from /etc/rc.conf. Without this setting, no ZFS file systems are mounted on boot, even if the pool is imported successfully.
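On a stock FreeBSD install (sysrc assumes a reasonably recent version), that roughly boils down to:
Code:
sysrc zfs_enable="YES"   # make ZFS file systems mount automatically at boot
service zfs start        # mount them now without rebooting; 'zfs mount -a' does the same thing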
 
If you need the most disk space possible, use raidz (1, 2, or 3 depending on your needs). Write performance will be similar to a single disk, while read performance will be better than a single disk. If you need (slightly) better performance, add multiple raidz vdevs to the pool.

If you need the most/best disk performance, use mirror vdevs. The more mirror vdevs you add to the pool, the better the performance will be.

You need to determine what's most important (performance, disk space, redundancy/data protection) and build your vdevs/pool around that.
Just to clarify. A single RaidZ-N VDEV has the read/write IOPS performance of the slowest disk in the VDEV. Bandwidth, on the other hand, is the collective bandwidth of every data drive in the VDEV. (i.e. number of drives minus N)

Mirrors work a little differently: for writes, a mirror vdev has the IOPS and bandwidth of its slowest drive. For reads, it has the combined IOPS and combined bandwidth of the vdev's members, assuming the reads are either large enough or numerous enough. This assumes the reading logic has zero overhead, which is quite the assumption.
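As a rough worked example, if each disk manages about 100 random IOPS and 150 MB/s of sequential throughput, a 6-disk raidz2 vdev delivers roughly 100 IOPS but up to about 4 × 150 MB/s of streaming bandwidth, while three striped 2-way mirrors deliver roughly 3 × 100 IOPS for writes and up to 6 × 100 IOPS for reads.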
 