Supermicro Storage Bridge Bay

Is there a way for FreeBSD to work with the Supermicro Storage Bridge Bay products? It is basically two systems that share a single drive array via dual-port SAS and have a 10 Gbps interconnect between the systems. The purpose is to build an HA storage system without having to have separate arrays of disks.

http://www.supermicro.com/products/nfo/sbb.cfm
 
If it is just about dual-port SAS and 10GbE, then why would it not work? LSI 2008 controllers are supported by the mps(4) driver. I don't see the NIC model specified there, but many of them are supported AFAIK.
 
It may just be my lack of knowledge, but I am not following how both systems can be connected to the same drives without corrupting the data. I'm guessing there would have to be something to keep one node passive unless the other dies.
 
http://www.supermicro.com/products/nfo/sbb.cfm

[...]
Each of the two Serverboard canisters contain support for Dual-Processors (5500/5600 series CPUs), 6 DIMM slots, 3 PCI-E Gen2 slots and 6Gbps SAS (SAS2). With the dual 10GbE connection between the Serverboards via the midplane, if one serverboard fails, the other serverboard is able to take over control and access the HDD's (both controllers can also work as Active-active mode), keeping the system up and running. Storage software is the key to enable this feature, which is available from several Supermicro's partners.
[...]

"Storage software" ... very specific.
Look like its closed source.

http://www.supermicro.com/newsroom/pressreleases/2010/press100412_snw.cfm
 
Yes, if you speak with Supermicro about HA storage, they will tell you they support Nexenta.

@cmbaker82,

You need SAS drives for that. And in order to avoid data corruption you will need to have all pools imported on the active system only. You can use devd(8) events to trigger a failover; a rough sketch follows.
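For example, a devd(8) rule could react to CARP state transitions (CARP comes up again further down in this thread). A minimal sketch, not tested on the SBB hardware; the file name and the pool-failover.sh helper are hypothetical:

# /etc/devd/failover.conf (hypothetical) -- react to CARP state changes
notify 30 {
        match "system"    "CARP";
        match "subsystem" "[0-9]+@[0-9a-z.]+";
        match "type"      "MASTER";
        action "/usr/local/sbin/pool-failover.sh master";
};

notify 30 {
        match "system"    "CARP";
        match "subsystem" "[0-9]+@[0-9a-z.]+";
        match "type"      "BACKUP";
        action "/usr/local/sbin/pool-failover.sh backup";
};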

I am describing a very simplistic approach here, but basically you need to create the software yourself. Have a look at HAST. It is actually the opposite approach (replicated storage instead of shared storage), but it should give you some hints.
 
cmbaker82 said:
It may just be my lack of knowledge, but I am not following how both systems can be connected to the same drives without corrupting the data. I'm guessing there would have to be something to keep one node passive unless the other dies.

You have to either use a cluster file system designed for shared storage and trust that your SAS controllers, SAS expanders, and drives implement the edge cases correctly, or use an active/passive system. The chassis uses SAS expanders with two upstream ports.
I can confirm that the FreeBSD mps(4) driver supports the LSI 2008 controller with IT firmware. The ixgbe(4) driver supports the 10Gbit/s NIC, and the 1Gbit/s NIC is supported by the em(4)/igb(4) driver. All in all it looks like the perfect hardware to build a ZFS pool with a failover head node, or to play with the CAM target layer to build an HA SAN.
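For the CAM target layer idea, a minimal ctl.conf(5) sketch could export a zvol over iSCSI via ctld(8) (which needs ctld_enable="YES" in rc.conf and ships with FreeBSD 10.0 and later). Pool, zvol, and target names are made-up example values:

# /etc/ctl.conf -- serve the zvol tank/iscsi0 as an iSCSI LUN
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
}

target iqn.2012-06.com.example:sbb0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/iscsi0
        }
}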
 
So, if I am understanding this correctly, I could possibly do something like this:

  1. Each unit has FreeBSD installed on a USB disk or something else that is not shared.
  2. One unit acts as primary and has the ZFS pool imported.
  3. The second unit just monitors the first one, and if it fails it imports the ZFS pool and takes over its IP address using something like CARP?

During boot, each node would have to check somehow whether it is supposed to be primary before importing the ZFS pool?
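If I have that right, I imagine the CARP side in rc.conf would look roughly like this (interface, vhid, password, and the 192.0.2.x addresses are made-up placeholders; the backup node would use its own address and a higher advskew so the primary wins the MASTER election):

# /etc/rc.conf fragment on the primary node
ifconfig_em0="inet 192.0.2.2/24"
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass s3cret alias 192.0.2.10/32"

And a guess at the hypothetical pool-failover.sh helper that the devd(8) rule earlier in the thread would call. Keeping the shared pool out of the boot-time zpool.cache (zpool set cachefile=none tank) would answer the boot question, since neither node would then auto-import it at boot:

#!/bin/sh
# /usr/local/sbin/pool-failover.sh (hypothetical) -- $1 is "master" or "backup"
case "$1" in
master)
        # -f is needed because a dead head cannot export cleanly.
        # DANGEROUS if the other node might still be alive.
        zpool list tank >/dev/null 2>&1 || zpool import -f tank
        ;;
backup)
        # export the pool when demoted, if we still hold it
        zpool list tank >/dev/null 2>&1 && zpool export tank
        ;;
esac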
 
I found this link that verifies the config for CARP: http://support.ixsystems.com/index.php? ... de-rev-207

I still need to know what's going on under the hood for the GUI section: System > Failover. I don't think HAST is involved here since there is no need to synchronize the actual block data. I presume it involves two things:
  1. automated synchronization of the FreeNAS system configuration
  2. export/import of zpools
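If (2) is right, it probably boils down to something like this on failover (made-up pool name):

# on the node giving up the pool, if it is still alive:
zpool export tank

# on the node taking over:
zpool import tank        # clean hand-over
zpool import -f tank     # forced, after the old head died uncleanly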

Any thoughts?
 
mrmarcel said:
Any thoughts?

Yes. If you are using a file system (such as ZFS or any other single-node file system) that is not built for SAN access, you have to be 100% sure that at most one server is accessing the disks at any given moment. Absolutely, positively sure. No ifs or buts. Even in all possible corner cases. Even if IP connectivity or the Ethernet fails.

This can be done (all cluster or SAN file systems are capable of it). The biggest single ingredient is a group services package, which makes sure the two servers always know whether the other guy is alive or dead, and whether the other guy knows that I am alive or dead. In some cases this uses hardware assists, like independent management networks (so congestion on the normal IP network doesn't cause spurious failover), memory-to-memory bridges (PCIe is quite suitable for this), and remote power control for STONITH functionality ("Shoot The Other Node In The Head"), which is the best way to make sure only one node is up.
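To illustrate the remote power control part: before the surviving node touches the disks, it would cut the peer's power through its out-of-band BMC, along these lines (a hypothetical fencing step; the BMC address and credentials are made-up values):

# power the peer off via IPMI so it cannot keep writing to the shared
# disks, then proceed to import the pool
ipmitool -I lanplus -H 192.0.2.102 -U admin -P secret chassis power off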

Also, failover means the surviving node mounts a file system that was not cleanly unmounted. Make sure whatever file system you use is really good about fsck, and uses some technology (logs, journals, non-overwrite, transactions ...) that makes data loss on crash/restart impossible.

This is not for the faint of heart. There is a good reason software companies make good money selling such solutions.
 