Looking for HP SmartArray/SmartHBA RAID controller users...

I have created a patch for the 'ciss' device driver (which lets FreeBSD talk to HP SmartArray/SmartHBA RAID/HBA controller cards) that fixes a number of problems we have had with it on our systems (HP servers with HP SmartHBA H241 controllers and multiple HP D6020 external SAS disk cabinets, 70 SAS drives per cabinet). Most of these are problems normal users might not have seen or experienced:

1. More than about 48 physical drives per SAS bus can't be used if the card is put into HBA/JBOD/passthrough mode.
2. If a physical drive has the same target number as the maximum number of logical drives (RAID volumes) supported, it is silently skipped.
3. SES enumeration didn't work ("sesutil map" didn't show anything connected to these controllers).
4. Unplugging a SAS cable and reconnecting it "on the fly" would, every now and then (often, actually), cause the server to panic and reboot.

The patch fixes all of the above and also adds a couple of things:

5. Added sysctl support so "sysctl -a" now lists the kernel tunables available.
6. Added a "hw.ciss.verbose" tunable to be able to get more verbose output (see the quick check sketched below this list).

(also fixes a couple of spelling errors but that's just cosmetic)
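For anyone curious, here's a minimal sketch of how to check the new tunables once you're running a patched kernel. Only "hw.ciss.verbose" is named above; whatever else the grep turns up depends on your setup:

    # List the tunables the patched ciss driver exposes
    sysctl -a | grep '^hw\.ciss\.'

    # Show the current verbosity level added by the patch
    sysctl hw.ciss.verbose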

I've tested it on systems with HP H241 "SmartHBA" cards and also some old systems with HP P400 "SmartArray" cards (but only with a few drives). But it would be great if some more people could test-drive it to make sure it doesn't break stuff for other users...

So I'm looking for people with other types of HP RAID cards (using the "ciss" driver) who are willing to test-drive the patch... You'll need to build your own kernel with the patch applied; after building and installing the custom kernel, set 'hw.ciss.verbose="2"' in /boot/loader.conf and reboot.
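For reference, a rough sketch of the steps (the patch file location and strip level are assumptions; adjust for your own source tree and kernel config):

    # Apply the patch to your FreeBSD source tree
    cd /usr/src
    patch -p0 < /path/to/ciss.patch.txt

    # Rebuild and install the kernel (use your own KERNCONF if you have one)
    make buildkernel KERNCONF=GENERIC
    make installkernel KERNCONF=GENERIC

    # Enable verbose ciss output at boot, then reboot into the new kernel
    echo 'hw.ciss.verbose="2"' >> /boot/loader.conf
    reboot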

Differential:
https://reviews.freebsd.org/D25155

Bug reports:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=246279
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=246280
 

Attachments

  • ciss.patch.txt (11.2 KB)
Thank you for being so thorough, and sharing the results of your labor!
1. More than about 48 physical drives per SAS bus can't be used if the card is put into HBA/JBOD/passthrough mode.
Very few users on the public forum have systems of that scale. I know a few of them exist (Terry Kennedy for example), but I don't know whether any use the HP card.

Use of SES is also pretty rare; few people actually manage the disks in their enclosures at all (most only know "I have a whole bunch of disks, and I put some sort of RAID on it, and I don't care about the details"). I've seen so many cases where people are incapable of identifying a disk drive, which bites really badly once you have to replace one. By the way, this has been studied: the #1 cause of data loss in RAID systems is users pulling the wrong drive when attempting disk replacement after a disk failure.
The forums are mostly populated by desktop users (single laptop) and small, often virtualized, servers. Remember, a system with 48 drives that's used in commercial production has a 5-digit price tag in Euros or US dollars.

4. Unplugging a SAS cable and reconnecting it "on the fly" would, every now and then (often, actually), cause the server to panic and reboot.
I've seen that on every OS I've tried it on. Fixing this is really hard, many months of work. In my experience (which was NOT with FreeBSD, nor with the HP cards), fixing it requires working at all levels of the stack, namely the firmware (drive, enclosure SES = SAS expander, HBA), the OS drivers, and the application. All of those have to avoid getting into latched software states when cables are disconnected, and all have to have appropriate retry loops around failed operations. System integration is hard.
 
Very few users on the public forum have systems of that scale. I know a few of them exist (Terry Kennedy for example), but I don't know whether any use the HP card.

Yep. Even we didn't see this bug until we tried to connect a second fully stocked (70 drives) D6020 box (previously we had connected half of the D6020 to one controller and the other half to another, so each controller only saw 35 drives).

However, I'd like to make sure the patched ciss driver doesn't cause problems for small users, of which I'm sure there are a number (some people get retired HP server hardware from datacenters :). It _shouldn't_ misbehave, but you never know...

Use of SES is also pretty rare; few people actually manage the disks in their enclosures at all (most only know "I have a whole bunch of disks, and I put some sort of RAID on it, and I don't care about the details"). I've seen so many cases where people are incapable of identifying a disk drive, which bites really badly once you have to replace one. By the way, this has been studied: the #1 cause of data loss in RAID systems is users pulling the wrong drive when attempting disk replacement after a disk failure.

Yeah, I didn't really bother with this before (we used the "cciss_vol_status" tool to get info without SES). But it's still nice to have, especially since cciss_vol_status has its own unique set of bugs... For example, the tool stops enumerating disks at the first missing one, so if drive 10 out of 70 is dead it'll only see the first 9 drives. Sigh.
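For what it's worth, with the SES fix in place this is roughly how you can cross-check what cciss_vol_status reports (da10 is just a hypothetical device name; pick one from your own "sesutil map" output):

    # Show enclosures, slots and which disk devices sit in them
    sesutil map

    # Blink the locate LED on a specific drive before pulling it
    sesutil locate da10 on
    sesutil locate da10 off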


The forums are mostly populated by desktop users (single laptop) and small, often virtualized, servers. Remember, a system with 48 drives that's used in commercial production has a 5-digit price tag in Euros or US dollars.

Yeah, it's not really cheap :)


I've seen that on every OS I've tried it on. Fixing this is really hard, many months of work. In my experience (which was NOT with FreeBSD, nor with the HP cards), fixing it requires working at all levels of the stack, namely the firmware (drive, enclosure SES = SAS expander, HBA), the OS drivers, and the application. All of those have to avoid getting into latched software states when cables are disconnected, and all have to have appropriate retry loops around failed operations. System integration is hard.

Normally it can be really difficult. But in this case it wasn't so hard: everything really is set up for hotplug (and unplug). It was just a case of a NULL pointer causing a call to panic(), which was pretty easy to fix.
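If anyone wants to exercise that hotplug path on a test box, here's a rough sketch using only standard tools (nothing patch-specific):

    # Note the devices CAM currently sees
    camcontrol devlist

    # ...physically unplug and reconnect the SAS cable here...

    # Check the kernel messages for errors (and no panic)
    dmesg | tail -n 50

    # Rescan all buses and compare the device list again
    camcontrol rescan all
    camcontrol devlist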
 
All the companies I've worked at in the past 15 years that had FreeBSD (hence my employment) have since switched to Linux. The project is in a deplorable state; there simply aren't any strong selling points. And don't get me started on the "why don't you take things into your own hands" stuff.

That being said, I'm now running the patch on my HP ProLiant DL380 G6 with a P400 controller and six disks in a raidz2 pool.
 
All the companies I've worked at in the past 15 years that had FreeBSD (hence my employment) have since switched to Linux. The project is in a deplorable state; there simply aren't any strong selling points. And don't get me started on the "why don't you take things into your own hands" stuff.

First of all, thanks for testing the patch. (I too am running it on a DL380 G6 with a P400 controller, btw :) )

Second, well, for things like university file servers (serving both NFS and SMB), Linux simply is not a good choice. It might be in the future, but currently it's a long way from the mature stability, usability and performance of FreeBSD, and it also lacks a couple of needed (mature) features, like ZFS (yes, I know it's there on Ubuntu now, but would I trust a petabyte of files on it? No way).

Or real ACLs (not the abomination that is POSIX.1e ACLs). I know there is work on adding some kind of support for it, but it's not usable yet.

We did quite a thorough comparison of FreeBSD vs Linux vs Solaris when we started that project, and they each have their strengths and weaknesses.

FreeBSD:
+++ ZFS
++ NFS
+++ ACLs (NFSv4/ZFS/NFS)
++ SMB (Samba - good support, but fam/inotify support has problems with huge systems)
- No service manager (I do _not_ want SystemD but something more lightweight)
+++ Easy to talk to the OS developers
++ A lot of people using it in the form of FreeNAS.

Linux:
+ ZFS (good features, but not time-tested)
+ NFS (latest release but bugs bugs bugs)
? ACLs (only POSIX on server side)
+++ SMB (Samba - very good support)
+/- SystemD (yuk!)
++ A gazillion distributions, and who knows who to talk to or how to get them to listen. Frustrating...

OpenSolaris (OmniOS):
+++ ZFS
+++ NFS (just v4.0, but super-stable - "gold standard")
++ ACLs (good, but not at FreeBSD level)
+ SMB (kernel locking & some fam/inotify emulation problems)
++ SMF (service manager, restart services in case of problems)
+ Tiny development team, but easy to talk to.

(Oracle Solaris probably should have been in the mix but... Oracle... No thanks)

The number of bugs found in the Linux NFS code (and that we've had to fix for the Linux clients to work) is... staggering, to say the least. And it has always been like that, from the beginning of time (I've been working with it since the early days :). SMB (Samba) support on Linux is probably the best, though. For mature, time-tested ZFS, OpenSolaris wins hands down, with FreeBSD in second place, and since for me the safety of the stored files is priority #1, a mature ZFS is super important... :)
 
Second, well, for things like university file servers (serving both NFS and SMB), Linux simply is not a good choice.
Actually, it is. I ran it while I worked in the VFX industry and we had tons of clients all over Germany connecting to the same server, blades, LDAP + home dirs, etc. No issues whatsoever.
It might be in the future, but currently it's a long way from the mature stability, usability and performance of FreeBSD, and it also lacks a couple of needed (mature) features, like ZFS (yes, I know it's there on Ubuntu now, but would I trust a petabyte of files on it? No way).
One word: Ceph. For really high usability and performance, Ceph.
Yes, the ZFS port on Linux is an abomination, but ZFS 0.7 also underperforms immensely compared to ext3, ext4, btrfs, xfs 🤷

FreeBSD:
+++ ZFS
++ NFS
++ SMB (Samba - good support, but fam/inotify support has problems with huge systems)
- No service manager (I do _not_ want SystemD but something more lightweight)
+++ Easy to talk to the OS developers
++ A lot of people using it in the form of FreeNAS.
Agreed on the systemd thingy; everyone hates it, including Linux guys. But the "easy to talk to the devs" part is useless: look at the pf bullshit that happened some years back.

Linux:
+ ZFS (good features, but not time-tested)
+ NFS (latest release but bugs bugs bugs)
+++ SMB (Samba - very good support)
+/- SystemD (yuk!)
++ A gazillion distributions, and who knows who to talk to or how to get them to listen. Frustrating...
ZFS, yes, an absolute abomination. As for the distros and getting them to listen: well, that's what support is for :) And if you're talking real use cases (read: big companies), FreeBSD is simply not an option. The same goes for smaller companies, since Linux has much wider adoption and it's a hell of a lot easier (and cheaper) to hire talent.


and since for me the safety of the stored files is priority #1, a mature ZFS is super important... :)
Ceph. Welcome to the cloud.

It pains me to see FreeBSD in its current crappy state, but it is what it is. Too much infighting, too many fragile egos; bleah, it just hurts the project.
 
One word: Ceph. For really high usability and performance, Ceph.

Well, not in the Windows client world. Our Windows clients outnumber Linux clients by more than 10 to 1, so Ceph is a fat no-go there. And ACLs are a killer "app" for clients in the "office" world, where you need to share documents and data between users in a secure way and still have local (as in "not in the cloud") storage. Cloud storage is convenient but a security nightmare...
 