Solved: PCI to SATA hot-plugging card and FreeBSD

Hello Guys !

Please, can you tell me about "PCI to SATA" hot-plugging cards that are supported by FreeBSD?

I need to connect SATA hard disks externally, in self-powered enclosures, rather than plugging them into the SATA ports on the motherboard.

Have you already tested models that work without problems under FreeBSD?
Thanks in advance.

Bye !
 
Hot-plugging the card itself might be problematic. But hot-plugging a SATA device shouldn't be a problem if the card supports hot-plugging. There's even a specific standard connector for external SATA: eSATA (https://en.wikipedia.org/wiki/Serial_ATA#eSATA)

(I have several LSI based SAS/SATA cards that support hot-plugging disks)
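For what it's worth, the way I check this on FreeBSD (device names here are only examples) is to watch the kernel messages and the CAM device list while plugging the disk in:

# watch kernel messages while the disk is inserted; a new adaX/daX should appear
tail -f /var/log/messages

# list the devices CAM currently sees
camcontrol devlist

# if the disk doesn't show up on its own, force a bus rescan
camcontrol rescan all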
 
I have several LSI based SAS/SATA cards that support hot-plugging disks

Hello Sir !
That's welcome news!
Please, can you enlighten me about some good models and where I can buy one of them? Amazon, for example?

In which way do you connect the hard disks? One by one, or do you have a multiple-bay housing?

Thanks in advance.
 
I have several LSI cards, one of my clients also has a bunch of FreeBSD servers with LSI cards. Can't really go wrong with any of them. The LSI SAS9207-8i is a nice card; not too expensive. Can probably be picked up fairly cheap second-hand.
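If it helps, my understanding is that the SAS2308-based 9207-8i is handled by the mps(4) driver on FreeBSD; a quick sanity check after installing the card would be something like:

# the HBA should attach via the mps(4) driver
dmesg | grep -i mps

# disks connected to the HBA show up as daX devices
camcontrol devlist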

or do you have a multiple-bay housing
Most of my disks are in hot-swap cases; Chenbro, Chieftec, ICY Box and ICY Dock all have them. They're nice as they allow you to put, for example, 5 x 3.5" disks in 3 x 5.25" bays. There are many variations available at various prices.

Example: http://www.raidsonic.de/products/internal_cases/backplanes/index_en.php?we_objectID=1152
 
If you buy these things for your home servers, buy them all from the same manufacturer. The reason is that some require a tray, and those trays generally fit other models from the same manufacturer but not models from other brands. Another thing to watch for is the fan in the case: make sure it's easily replaced. If your home is anything like mine, the case is going to gather dust and the fan will start getting noisy after some time, so you'll need to be able to replace it at some point.
 
Yes, I agree with you.

I'll take a detailed look at the technical specifications. However, before buying them I'll contact you again for more tips.

All the models you've mentioned seem very interesting. They are exactly what I was looking for, even if they seem intended only for internal rather than external use. So I need a powered enclosure in which to mount and power them.

I'll contact you again. I'd welcome advice on where to buy them.

Thanks very much.
 
Most of my disks are in hot-swap cases; Chenbro, Chieftec, ICY Box and ICY Dock all have them.

Hello Sir,

I've read that these hot-swap cases can be used with the disks in stand-alone mode or in RAID mode.

Please, if I wanted to use them as RAID systems, can you tell me what type of filesystem they support (which filesystem driver)?

Thanks in advance.

Bye !
 
If you use (hardware) RAID, stick to UFS. Only use ZFS if you can access each disk individually (JBOD).
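To make the UFS-on-hardware-RAID case concrete, a rough sketch (device names and mount point are only placeholders): the RAID controller presents the whole array as a single disk, which you just partition and format as usual.

# the controller exposes the array as one device, e.g. /dev/da0
gpart create -s gpt da0
gpart add -t freebsd-ufs da0
newfs -U /dev/da0p1
mount /dev/da0p1 /mnt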
 
Hello Sir !

QUESTION 1:
Only use ZFS if you can access each disk individually (JBOD).
Why: "only use ZFS for access each disk individually"?

I need to use these stations for imaging (dd) FAT/NTFS/HFS/HFS+ hard drives.
If I buy one of these stations, can I use it to duplicate one hard drive to another?

QUESTION 2:
My apologies for the following question; my knowledge of the ZFS filesystem driver is limited.
Using these stations in RAID mode, does the ZFS filesystem driver have a "log system" that details the internal operations it performs on the disks during normal operation?
In particular, if a single drive failure occurs, does ZFS keep a log reporting the duplication of damaged sectors, the rewriting of allocation tables, etc.?

QUESTION 3:
The Chieftec CBP-3141SAS does not include a RAID controller (see its technical data). Do you think the LSI SAS9207-8i is a good card for it? It seems that this card supports RAID features.
Or is the ICY BOX IB-555 better?


Thanks very much.
 
Why: "only use ZFS for access each disk individually"?


ZFS has its own built-in RAID functionality, which works much better than running ZFS on top of another RAID implementation (*). For this reason, one should use individual disks as the storage devices for ZFS and leave the RAID to it.

(* Footnote: This statement is true for the RAID implementations that one would practically find in ZFS deployments. There are high-end RAID implementations that have features that ZFS is lacking, but I don't think people who have those high-end RAID systems would be tempted to use ZFS anyway).
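As a minimal sketch (pool and device names are only examples), letting ZFS do the redundancy itself just means handing it the individual disks:

# four individual disks given straight to ZFS; it provides the redundancy
zpool create tank raidz da1 da2 da3 da4

# or, with only two disks, a simple mirror
zpool create tank mirror da1 da2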

I need to use these stations for imaging (dd) FAT/NTFS/HFS/HFS+ hard drives.
You just want to do bit-wise copies of hard disks. In that case, you don't care about RAID or file systems at all. All you need are block device drivers which attach and detach (create and destroy) the /dev/adaXX and /dev/daXX devices when drives are plugged in and unplugged. To my knowledge that works in FreeBSD at both the SATA and the USB layer; I have never tried the LSI SAS drivers on FreeBSD, but they just have to work that way too (anything else would be insane).
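As a concrete sketch of such a bit-wise copy (the device names and the image path are only examples, and you obviously need to be certain which disk is which before running it):

# raw, filesystem-agnostic copy of one whole disk onto another;
# conv=noerror,sync keeps going across read errors, padding with zeros
dd if=/dev/ada1 of=/dev/ada2 bs=1m conv=noerror,sync

# or dump the disk to an image file instead
dd if=/dev/ada1 of=/images/ada1.img bs=1m

FreeBSD's base system also has recoverdisk(1), which is more forgiving with failing source disks.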

Using these stations in RAID mode, does the ZFS filesystem driver have a "log system" that details the internal operations it performs on the disks during normal operation?
In particular, if a single drive failure occurs, does ZFS keep a log reporting the duplication of damaged sectors, the rewriting of allocation tables, etc.?

I have never seen ZFS do logging (in a log file) of operations at that level of granularity. I don't think you really want that anyway, as the log files would become insanely huge: if a terabyte-size disk fails, rebuilding its data onto a spare disk (to resilver the RAID array) means rebuilding about a billion sectors or allocation blocks, and I don't think you want to read a log file with a billion entries. What ZFS does provide is an easy-to-use and very sensible interface (the zpool status command) for seeing the state and health of each disk.
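To give an idea (the pool name is just an example), the commands you would actually use to see and verify that state are:

# overall pool health, per-disk state, error counters, resilver/scrub progress
zpool status -v tank

# walk all the data in the pool and verify it against its checksums
zpool scrub tank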
 