SATA RAID controller recs?

Tiger Direct has this SATA RAID controller for $50, 1) will FreeBSD support it? 2) better ideas?

===
StarTech PCISATA4R1 4-Port Serial ATA RAID 0,1 PCI Card
...
Whether you need Striping (RAID 0), Mirroring (RAID 1), or both (RAID 0+1), this card has you covered. The PCISATA4R1 also supports RAID 1+S (Mirrored-Sparing), which automatically replaces a failed hard drive and rebuilds the system if a booting hard drive fails.

Specifications
* Bus Type: PCI / PCI-X (5 / 3.3V)
* Internal Ports: 4 - SATA Data 7-pin Female
* Maximum Data Transfer Rate: 150 MBytes/Sec
* FIFO: 256 Bytes per channel
* OS Support: Windows 98/98SE/ME/NT 4.0/2000/XP/Server 2003/Vista
* Chipset: Silicon Image 3114R
 
mmm.... I want RAID *and* SATA... as far as I know, you cannot easily/cheaply get 1.5TB disks with anything other than SATA, and I want to do RAID1 with 1.5TB disks. And the budget is tight. The motherboard I'd probably use is older and doesn't have SATA, so I'm thinking a $50-60 SATA/RAID PCI card, no? Bad idea?

I know I could always do software RAID1, and perhaps I should (advice welcome), but I also would rather not have to think about it too much...

How about the Promise SATA/RAID cards?
 
Try to avoid Silicon Image SATA chipsets. While they have improved over time, they aren't worth the silicon when it comes to RAID. Most of their RAID chipsets are nothing more than plain SATA controllers with software RAID built into the driver.

For RAID0 or RAID1, I'd recommend any old PCI SATA controller (non-RAID) and either gstripe(8) (RAID0), gmirror(8) (RAID1), or even ZFS (especially if you want to expand it).
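For anyone following along, a gmirror(8) RAID1 setup is only a few commands. A minimal sketch, assuming two whole disks ada1 and ada2 (device names and the /data mount point are examples, and labeling destroys whatever is on the disks):

```shell
# Load the GEOM mirror module now, and have it load at boot
kldload geom_mirror
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Label a mirror named "gm0" across the two disks
# (ada1/ada2 are example device names; this wipes their contents)
gmirror label -v -b round-robin gm0 /dev/ada1 /dev/ada2

# The mirror shows up as /dev/mirror/gm0; format and mount it
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /data

# Check sync/rebuild status any time
gmirror status
```

If a disk dies, `gmirror forget gm0` plus `gmirror insert gm0 /dev/adaX` brings a replacement into the mirror and resilvers it in the background.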

Promise makes some good chipsets. Same with HighPoint.

3Ware and Areca make the best hardware RAID controllers, and both have excellent FreeBSD support (drivers made by the vendors).
 
Thanks for the input...

ya know, before the days of 200GB+ disks, what I found to be a good compromise between "backups" and RAID1 was simply doing a dd copy of a disk to its twin, say once or twice a week. It achieved much of the same effect as RAID1 yet also gave you the opportunity to recover accidentally deleted or corrupted files. And of course no special hardware or software was needed, just an entry in crontab...

I don't suppose there is a more modern equivalent of this "lagged" RAID1... maybe something with rsync makes sense... (is this in vogue at all?)
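An rsync-based version of that "lagged RAID1" might look like the sketch below; it copies at the file level, so an interrupted run doesn't leave the twin in a half-written sector-level state the way an interrupted dd can. The paths, the cron schedule, and whether the twin ends up truly bootable (bootloader, fstab) are all assumptions here:

```shell
# Twice-weekly file-level copy of the live disk to its mounted twin.
#  -a  preserve permissions/ownership/times (archive mode)
#  -H  preserve hard links
#  -x  stay on one filesystem (don't descend into other mounts)
#  --delete  mirror removals too, so the twin matches the source
# /mnt/twin is an example mount point for the second disk.
rsync -aHx --delete / /mnt/twin/

# Example crontab entry, e.g. Mondays and Thursdays at 4am:
# 0 4 * * 1,4  rsync -aHx --delete / /mnt/twin/
```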
 
if you take the time to properly set up rsnapshot, test it,
and run it with "gnice" (gnice is from some port), that
could be your backup. But how valuable is the data,
would you have copies offsite, etc.?
..........
Not using RAID here because adequate backups are neither simple nor
fast enough as it is, and I hesitate to make storage any larger. But
I note all the gjournal, zfs, and "hardware raid" posts, thinking that
RAID may make recovery easier in case one drive of a set fails.
 
There is a distinction that often gets lost when talking about a RAID / backup solution. Using RAID is not a solution to your backup needs! RAID is for your working copy of data, so that you do not lose it in case of drive failure; backup is a separate drive (or RAID on another machine).

In my setup I use 750GB drives and a ZFS file system that currently sums to 8 TB. This is used for CFD and DEM simulations, where the amount of data can quickly become large, and it is an active working filesystem. However, I also have two 1 TB drives in an external enclosure, connected via eSATA, that are only for backup. I do not back up the entire ZFS raid, just the important parts, as most of the data can be recreated; it just might take some time.

So if your need is to create a storage pool, then RAID is a good thing. If you need to secure your data, backup to another disk is the way to go. As the goal seems to be creating a backup solution, I would go with separate drives, not RAID. Personally I use Synkron for keeping the backup data fresh, as I run it daily to secure my data before shutting down the machine.
 
gilinko said:
So if your need is to create a storage pool, then raid is a good thing. If you need to secure your data, backup to a another disk is the way to go.
I agree. A mirror will not save you if your users delete files accidentally; a backup will.
 
so to be clear, I believe my goals are:
1) provide some kind of "short-term" backup for some protection against accidentally deleted files, or more commonly, Windows programs that go haywire and corrupt their files, and
2) fast and easy restore of the system in case of a disk failure. I generally don't have time to do reinstalls, reloading from backup tapes, etc. When a disk goes, I need to be back up and running within 15 minutes max, even if it is a system copy where the state is a few days old.

That's why I found the "lagged RAID1" (perhaps a stupid name) to be a reasonable compromise. A dd of the disks to their twin a few times a week gives you the ability to recover files from a day or two ago AND the ability to just boot off the twin disk(s) in case the original(s) die.

I would probably still do this, except that the bigger 200GB-1TB disks are starting to make dd's impractical (they take too long). Hence my comment about rsync. Also, the longer the dd's take, the more one is open to filesystem corruption, even on a relatively quiescent filesystem at 4am. So a "filesystem-level" solution is probably better than a sector-level solution. We don't run any big databases, so we don't have to worry about those kinds of corruption problems.
 
If your data is critical, you *may* want rsnapshot to be
the only thing running (I had an MBR or UFS filesystem or
something disappear while working on a disk in one tty-N that
was being backed up to with rsnapshot). That is why I
suggested gnice, though it may have been a problem with the
USB bus/chip (no problems since switching the rsnapshot target to
a Promise SATA controller, so far).
 
Create a zfs pool with the second disk. Do rsyncs from the one drive to the other. Then create a zfs snapshot (use the date as the name). Continue each night as needed. You'll have an easily-accessible archive of your changes via the snapshots.
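That nightly routine might be sketched roughly like this (the pool/dataset name "backup/data" and the /data source path are example names, not from the thread):

```shell
#!/bin/sh
# Nightly backup: file-level copy to the backup pool, then a dated
# ZFS snapshot so previous nights stay recoverable.
rsync -aH --delete /data/ /backup/data/
zfs snapshot "backup/data@$(date +%Y-%m-%d)"

# Later, recover an old file from the hidden snapshot directory, e.g.:
# cp /backup/data/.zfs/snapshot/2009-01-15/somefile /data/
```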

Even better, create a zpool using multiple vdevs, where each vdev is a mirror or raidz, and you get both data protection (drive redundancy) and data backups (snapshots).
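A sketch of that layout, assuming four disks (device names are examples):

```shell
# Pool "tank" built from two mirror vdevs -- either disk in each
# mirror can fail without data loss, and dated snapshots add the
# point-in-time backup side on top of the redundancy.
zpool create tank mirror ada1 ada2 mirror ada3 ada4
zfs snapshot -r tank@$(date +%Y-%m-%d)
```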
 