gpart create -s GPT aacd0
gpart add -t freebsd-zfs aacd0
zpool create data /dev/aacd0p1
zpool add -f data aacd0p1
zfs set mountpoint=/home/data data
/opt/StorMan/arcconf getconfig 1 LD
Logical device number 0
Logical device name : eph-dat
RAID level : 5
Status of logical device : Logical device Reconfiguring
Size : 13332470 MB
Stripe-unit size : 256 KB
dmesg | grep MB
aacd0: 11427830MB (23404195840 sectors)
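If the aim is to grow the existing pool in place once the controller finishes reconfiguring and the kernel reports the new logical device size, a rough sketch (assuming the layout created above, i.e. pool data on aacd0p1) would be:

gpart recover aacd0
gpart resize -i 1 aacd0
zpool online -e data aacd0p1

gpart recover relocates the backup GPT header to the new end of the device, resize grows partition 1 into the freed space, and zpool online -e tells ZFS to expand the vdev into it; none of this is specific to the Adaptec controller.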
danbi wrote:If you forget about your controller's 'hardware RAID', you can increase capacity in ZFS by replacing your existing drives with larger ones. Here again it is best to use mirror vdevs, as you only need to replace two drives to see the new capacity. If you have a 5-drive raidz (RAID-5), you need to replace all 5 before you see more capacity.
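A minimal sketch of that replace-and-grow cycle on a mirror vdev, assuming a pool named data and hypothetical disk names (da1/da2 being swapped for the larger da5/da6):

zpool set autoexpand=on data
zpool replace data da1 da5
zpool replace data da2 da6

Wait for each resilver to finish (check zpool status data) before pulling the old disk; without autoexpand=on you would also need a zpool online -e data da5 (and da6) to claim the extra space. The vdev only grows once every disk in it has been replaced.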
phoenix wrote:If you are going to rebuild the array anyway, consider not using a RAID5 array. Instead, put the controller into "single disk" mode or JBOD mode. If the controller doesn't support those, then create a bunch of 1-disk RAID0 arrays.
Then create the pool using the individual disks, and let ZFS manage it all.
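For example, with five disks exported individually (hypothetical names da0-da4), the redundancy comes from the pool layout rather than the controller:

[cmd=#]zpool create data raidz da0 da1 da2 da3 da4[/cmd]

That gives single-parity redundancy (comparable to RAID5) while leaving ZFS in charge of checksumming and self-healing.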
tovo wrote:Really? And what about performance? I always thought hardware RAID was better because of performance (read/write speed, data integrity) and CPU usage.
My controller supports JBOD mode, but how would I implement redundancy that way?
phoenix wrote:Depends on the RAID controller. Some high-end controllers from Areca and LSI/3Ware (PCIe 8x+) are very fast and may be faster than software RAID. However, if you have plenty of CPU and RAM, software RAID may be faster. Depends on the workload.
In this day of multi-GHz, multi-core CPUs, you don't need high-end, specialised controllers if you are using software like ZFS, gmirror, or graid3/graid5.
Via ZFS: [cmd=#]zpool create mypool raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11[/cmd]
That creates a ZFS pool named "mypool", made up of two raidz2 (RAID6) vdevs, each with 6 drives. This is essentially a "RAID60" array, as ZFS stripes reads/writes across all the vdevs in the pool (essentially a RAID0 stripe).
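Such a pool can also be grown later by adding another vdev of the same layout (hypothetical device names):

[cmd=#]zpool add mypool raidz2 da12 da13 da14 da15 da16 da17[/cmd]

New writes are then striped across all three raidz2 vdevs.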
phoenix wrote:IOW, a single RAID6 array using 12 drives will be slower than 2 RAID6 arrays of 6 drives each in a RAID0 stripeset. Yes, you lose a bit more raw storage space ... but you gain a lot more redundancy (you can lose 4 drives instead of just 2 without losing data) and a lot more raw throughput.
phoenix wrote:Putting ZFS on top of a single device (hardware RAID array) causes you to miss out on close to half the features of ZFS: it can only detect errors, it cannot fix them.
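Even on a single-device pool the detection side is still worth exercising; a scrub will surface checksum errors the controller silently returned, it just cannot repair them without a redundant vdev:

zpool scrub data
zpool status -v data

As a partial mitigation on a single vdev, zfs set copies=2 data stores two copies of each block, which can heal isolated bad blocks but does nothing against a failed array.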