DELL H755 RAID support

You are saying that DELL engineers for "consumer" laptops are not good but for servers are very good?
No, enterprise grade server hardware is designed to be serviceable (well, most of them anyway). Consumer laptops are designed to look pretty, not to be tampered with by us mere mortals.
 
Skipping the last three comments about laptops...

As 'richard...' above noted, DELL-branded RAID boards (part # H... here) have used LSI/Broadcom chipsets going back at least as far as the H700 RAID board, maybe older. It seems that Dell failed to identify the chipset up through the H730P boards. So looking for drivers seems to be mainly a matter of looking for a chipset number rather than the DELL PERC H###.

Both LSI and DELL support the H755 with the SAS3916 chipset, with drivers for Windows, VMware, and Linux. I successfully installed Windows 10 and Ubuntu 20 onto RAID1 sets in the PE R750 with the H755 RAID board. But support for FreeBSD has not been directly provided by LSI or DELL as far back as I can trace the LSI connection (except for mrsas a few years back, for the H720 or H730, I forget which). We have been using DELL servers going back to the PE 2900 and 2950 era. FreeBSD RAID drivers were not hard to find then.

I downloaded that 'src - FreeBSD source tree', but I don't know how to interpret the 'PERC H755' section. Does this mean it is 'to be done', 'in progress', or 'available'?
 
Oh, and one other comment to SirDice. DELL servers are designed to be 'easily serviced'. There was one time that I had a DELL tech here to replace the HDD backplane; he took the system almost completely apart without even a screwdriver. Everything was 'clip in' - just a twist or pull on the little blue clips. DELL systems are not cheap, but they are well designed. Just my 2¢ worth. (PS - I don't work for DELL :) )
 
You are saying that DELL engineers for "consumer" laptops are not good but for servers are very good?
Your criteria for "good" are all wrong.

The engineers (and other staff) who designed the Dell laptop you're complaining about had certain goals. In consumer laptops, making parts easily replaceable is typically NOT a goal, because most users do not take their laptops apart. Instead, compactness, weight, battery life, and features are the design goals. You can't fault the engineers for building a laptop that most consumers would like, and which you bought in spite of the fact that it is not designed for your particular use (which is disassembling and reassembling it).
 
So looking for drivers seems to be mainly a matter of looking for a chipset # rather than the DELL PERC H###.
This is not specific to Dell, but is true industry wide. LSI SCSI chips are used by a wide variety of vendors (including LSI themselves, by the way); to use them with Linux or FreeBSD drivers, you have to know what chip model is in which board model number. I know that both Linux and FreeBSD drivers do attempt to maintain lists of which OEM-branded cards are supported (that includes Dell, IBM, Lenovo, Intel, HP, ...), but those lists are often out of date.
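To make that board-number-to-chip indirection concrete, here is a minimal Python sketch of such a lookup. The H755 -> SAS3916 pairing comes from this thread; treat the other entries, and the table structure itself, as illustrative assumptions rather than a verified support matrix.

```python
# Minimal sketch: resolve an OEM board name to its LSI chip, then to the
# FreeBSD driver family for that chip. The H755 -> SAS3916 pairing is
# from this thread; the other rows are illustrative guesses, not a
# verified compatibility table.
BOARD_TO_CHIP = {
    "PERC H755": "SAS3916",   # stated earlier in this thread
    "PERC H730P": "SAS3108",  # mentioned later in this thread
}

CHIP_TO_DRIVER = {
    "SAS3916": "mrsas",  # MegaRAID-family chips -> mrsas(4)
    "SAS3108": "mrsas",
}

def driver_for_board(board: str) -> str:
    """Look up the chip behind an OEM board name, then its driver."""
    chip = BOARD_TO_CHIP.get(board)
    if chip is None:
        raise KeyError(f"unknown board: {board}")
    return CHIP_TO_DRIVER[chip]

print(driver_for_board("PERC H755"))  # -> mrsas
```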

LSI and DELL both are supporting the H755 with SAS3916 chipset with drivers for Windows, VMware, and Linux.
I think most LSI chips can be run with the mrsas driver that is in the released code in FreeBSD; some may use the mfi or mpr drivers. I think those drivers are "provided" by LSI, in the sense that LSI pays someone (might be a consultant to them, might be an employee) to write that code. The same goes for the setup utility (it used to be called megacli, I think it has a new name now). If all this is true, then LSI does "support" FreeBSD.

Usually, LSI also publishes firmware updates (and various versions, like IT versus RAID) on their website. I don't think the firmware files are in general OS dependent, although during beta tests, there may be firmware versions that fix specific OS-dependent bugs. So from a firmware point of view, I think LSI supports FreeBSD as much or as little as any other OS.

I think in earlier days, LSI used to publish updated driver versions for Linux and FreeBSD on their web site; the expectation was that power users would replace the drivers (such as mrsas) that come with the normal source release of things like RedHat or SUSE or FreeBSD with drivers downloaded from them. I don't know whether they still do that, haven't looked in years.

Part of the problem you have here is that you are caught in the chasm between Dell and LSI. That affects both chip numbering and finding help. From this viewpoint, buying Dell gear might be a mistake; you might be better off buying your SAS cards directly from LSI, and then using their technical support.

(When I say "LSI" above, I mean whatever name the corporation has this week, it has recently been Avago and Broadcom.)
 
DELL was a top brand, and I hope it still is. But after an issue with a laptop HDD and another with the fans in certain DELL desktops, I decided to avoid DELL hardware when possible. The problem with the fans was that they were non-standard: when one stopped working, it was hard to find a spare part. And they stopped working frequently, which means a bad decision to use a non-standard fan plus poor quality (faults were not rare) equals bad design somewhere at DELL. The problem in this topic shows that the RAID card is not excellent either.
 
I downloaded that 'src - FreeBSD source tree', but I don't know how to interpret the 'PERC H755' section. Does this mean it is 'to be done', 'in progress', or 'available'?
The bit I was pointing to looked like identification data - just so that FreeBSD can match a hardware ID to a string, e.g. for dmesg.

I don’t know if anyone has done any more than that - I very much hope so but don’t know.
 
The problem in this topic shows that RAID card is not excellent.
Ummm, the main problem is that there doesn't yet seem to be a FreeBSD driver for the new generation RAID card.

My experience of cheap low-end Dells (laptop and desktop) has been the same as my experience with cheap low-end Lenovo/HP/Compaq/etc. machines - compromises are made and you get what you pay for.
 
SOLVED - I downloaded FreeBSD 13.1 PRE RELEASE 20220303. The Trap 12 error on boot has been fixed, and the PERC H755 RAID setup now works! I know this is PRE RELEASE, and NOT FINAL, but it is a BIG step in the right direction.
 
That's awesome news, thank you for checking and posting. :)

I'm still figuring out my next Dell server purchases - meant to update this thread to say my earlier posts about the R6515 appear to be wrong - looks like they have the H730 generation PERC, not this newer one.
 
Richard, the R6515 is an older system with AMD CPUs (nothing wrong there, I just have not used them) using the LSI SAS3108 chipset. You will have no problems with that RAID board from FreeBSD 9.0 onward. (any RAID board in this series --> HW RAID: PERC 9/10 - HBA330, H330, H730P, H740P, H840, 12G SAS HBA)
 
Yes, thank you - I just haven't decided yet about trying the AMD route for a change or going for the Rx50 range and staying with Intel.

That's why I was so interested in how you were getting on with the H755 support. If FreeBSD were not going to work any time soon with the H755, that was an additional push for the R6515, but it's looking promising for the Rx50/H755 range, so I still have options.
 
use the newer LSI SAS3916 Tri-mode (whatever that is?)
Tri-Mode gives you the ability to run SATA/SAS/NVMe drives from the same controller.
SATA/SAS use the same mixed mode firmware.
For NVMe there is a different firmware you need to flash to the controller. So a 16-port card supports 4 NVMe drives (each NVMe drive uses four lanes).
These take very special LSI cables, not ordinary NVMe SFF-8643 ones.
Theirs has an extra signal line included. Cables are $100 for two drives.
That feature may have become part of the U.3 standard form-factor change (U.2 connector specs upgraded for PCIe 4).
So in a nutshell, the SAS3916 card used the U.3 cable spec for NVMe before it was a standard.
That allowed them to charge exorbitant cable prices, as there were no competitors.
It also meant lots of failures when people tried to use standard U.2 NVMe cables.
LSI did not announce that they were U.3 compatible, as they have no reason to.
They have an overpriced part number for you.
I still don't see many U.3 cables on the market yet.

With the speed hit of NVMe over the controller, I really don't see the use case for NVMe on LSI.
Hot swap might be the one case; it is handled differently at the controller level.
 
Thanks for the clarification on Tri-Mode. A few years back, we bought several DELL PE R630 10 x 2.5" bay systems that had 6 SAS/SATA bays and 4 NVMe bays. At that time, I don't think hardware RAID on NVMe was available, so we went with RAIDz1 on the two NVMe drives, which worked well. But there was a special board and cabling to install into the system for the NVMe drives. So the SAS3916 should eliminate that second drive interface?

Moving forward - FreeBSD 13.0-RELEASE did not work at all on the "newest" DELL PE R450 15th Gen system. It crashed with a "Trap 12" error shortly after boot start. I just downloaded FBSD 13.1-PRERELEASE (20220303) -> (YES, I KNOW this is not ready for production - I am just testing the progress of 13.0/13.1). This seems to have fixed the Trap 12 error, and it also allows full hardware RAID capability with FBSD 13 and the DELL H755 PERC 11 board.

Now to the problem... We also needed to put FBSD 13.0-RELEASE onto a NUC11 for another function. 13.0-RELEASE did not have the correct driver for the Intel i225 network chipset: it booted OK, but there was no network access - it did not even see the i225 interface. I found 13.0-STABLE (20220217), which had the fix in the driver, so the network was active and running. But we cannot use the STABLE version for a production system.

So, going back to FBSD 13.1-PRERELEASE, I put that on the NUC11 for testing. 13.1 does 'see' the i225 chipset and tries to activate the network, but it throws an error and does not retrieve a DHCP IP address.

Sorry for the long-winded description, but I just wanted to outline my steps leading to this question: FBSD 13.0-STABLE fixed a network issue, but that fix does not seem to have carried forward to FBSD 13.1-PRERELEASE. As a lowly "user" out here waiting for updates, is there a mechanism (a posting?) to describe errors as we see them during field testing?
 

Attachments

  • NUC11 NIC error FBSD13.1.pdf
Sorry for the long-winded description, but I just wanted to outline my steps to this question(?) FBSD13.0-STABLE fixed a network issue, but that fix does not seem to have carried forward to FBSD13.1-PRERELEASE.
PRERELEASE is a moniker that's used on the -STABLE branch (it gets this just before a release branch is made). I suggest you try the BETA1 of 13.1, it was released a couple of days ago.

 
did not work at all on the "newest" DELL PE R450 15th Gen system.
My guess here is that the PCI IDs for the Dell versions were not yet in the driver.
The controller itself has been supported for a while now.
FreeBSD imports the LSI driver every once in a while.
The Dell and Intel OEM versions (PCI IDs) of the cards are not included with the LSI driver and must be added.
 
Phishfry, where did you find info on DELL H755 RAID controller being supported earlier? I looked for several months for that info. Obviously I was looking in the wrong places...

FBSD 13.1-BETA1 is looking very good on the new DELL 15Gen systems, and also on the latest NUC11 systems. Basic booting and hardware functions seem good on both systems. Lots of testing to go to confirm 'full' hardware and software support, though. The NUC11 network port works fine after I realized that I forgot to set ifconfig_igc0="DHCP" in rc.conf. Isn't learning NEW stuff so much fun??? :)
 
Phishfry, where did you find info on DELL H755 RAID controller being supported earlier?
I am just surmising. I had the same experience with Intel Branded SAS TriMode card.
The LSI card was working but my Intel one was not.
A point revision for FreeBSD came along and suddenly the Intel card worked.
So I am guessing here.
The LSI driver is imported, and someone has to manually add the Intel, Dell, HP, and SuperMicro OEM-version PCI-IDs for the new cards. LSI does not supply those.
 
PCI-ID is the method for recognizing hardware. If the PCI-ID is unknown, the card won't work.
I can imagine why manufacturers like Dell use their own PCI-IDs instead of the LSI PCI-ID.
For example, mezzanine cards - I would imagine there could be differences between Dell's and HP's implementations. Windows detects hardware by PCI-IDs too, so driver detection is based on PCI-IDs.
pci.ids is an index file of devices, much like usbdevs; Vendor ID and Product ID are the fields.
pciconf(8)
PCI vendor and device information is read from /usr/local/share/pciids/pci.ids. If that file is not present, it is read from /usr/share/misc/pci_vendors. This path can be overridden by setting the environment variable PCICONF_VENDOR_DATABASE.
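As a rough illustration of how a tool like pciconf(8) could resolve names from that file, here is a small Python sketch of the pci.ids text format (hex vendor ID at column 0, device lines indented with one tab). The sample entry below is hand-written for the example; only the 0x1000 Broadcom/LSI vendor ID is a real, well-known value.

```python
# Sketch: resolve a (vendor, device) ID pair against text in the
# pci.ids format that pciconf(8) reads. Vendor lines start at column 0;
# device lines are indented with a single tab. The sample is a tiny
# hand-written excerpt, not the real database.
SAMPLE = "\n".join([
    "# comment lines start with '#'",
    "1000  Broadcom / LSI",
    "\t10e2  MegaRAID SAS39xx (sample entry)",
])

def lookup(text, vendor, device):
    """Return the device name for (vendor, device), or None."""
    in_vendor = False
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        if not line.startswith("\t"):                    # vendor line
            vid, _name = line.split("  ", 1)
            in_vendor = (int(vid, 16) == vendor)
        elif in_vendor and not line.startswith("\t\t"):  # device line
            did, name = line.strip().split("  ", 1)
            if int(did, 16) == device:
                return name
    return None

print(lookup(SAMPLE, 0x1000, 0x10e2))  # -> MegaRAID SAS39xx (sample entry)
```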
 
I will technically qualify what I said here: pci.ids is one method for device detection.
But for the LSI cards, the PCI-IDs are in the driver. So just guessing here.
The kernel is compiled with LSI support, which means the PCI-IDs are in the kernel via the driver.
I know the PCI ID is in the driver because I tried and failed to add my Intel SAS9400 card to the FreeBSD driver.

Yes, the FreeBSD tree is a bit different. Avago has out of box drivers that
are maintained, and we try to keep the FreeBSD tree up to date with the
in-box changes. You should be OK to go with the in-box driver.
My experience was fixed by FreeBSD 12.1
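The "PCI-IDs live in the driver" point can be sketched with the usual probe-table pattern: the driver carries its own table of IDs it will claim, a wildcard lets an entry match any subsystem, and a card whose IDs nobody added simply never attaches. All numeric IDs below are placeholders except the well-known Dell (0x1028), HP (0x103C), and LSI (0x1000) vendor IDs; this is not the real mrsas(4) table.

```python
# Sketch of driver probe matching: the driver claims a card only when
# the card's (vendor, device, subvendor, subdevice) tuple matches an
# entry in the driver's own ID table. ANY is a wildcard, as real probe
# tables often allow. All device/subdevice IDs here are placeholders.
ANY = 0xFFFF

DRIVER_ID_TABLE = [
    # (vendor, device, subvendor, subdevice)
    (0x1000, 0x00E2, 0x1000, ANY),  # hypothetical base LSI branding
    (0x1000, 0x00E2, 0x1028, ANY),  # hypothetical Dell OEM entry, added by hand
]

def probe(vendor, device, subvendor, subdevice):
    """Would this driver attach to a card with these IDs?"""
    for v, d, sv, sd in DRIVER_ID_TABLE:
        if (v == vendor and d == device
                and sv in (ANY, subvendor)
                and sd in (ANY, subdevice)):
            return True
    return False

# A Dell rebrand matches only because its entry was added to the table:
print(probe(0x1000, 0x00E2, 0x1028, 0x1AE2))  # -> True
# An HP rebrand of the same chip is not claimed until someone adds it:
print(probe(0x1000, 0x00E2, 0x103C, 0x0001))  # -> False
```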
 
First of all, THANKS everyone for all of the support above. We are working on moving OUR software over to FreeBSD 13.1-BETA2 (realizing, of course, that BETA is not final). The DELL 15Gen 'hardware' portion of the system is booting fine now with the H755 PERC 11 controller. I have been able to load 'smartmontools' and run 'smartctl' and 'smartd' commands to check individual drives in my system. So far, so good.

But way back (about 5 years ago), when LSI introduced the LSI 9300 series of controllers, we lost the ability to run 'mfiutil' to monitor the RAID controller itself. 'mfiutil' is loaded, and if I just type the command, it lists out the command options. But if I try 'mfiutil show drives', it fails with 'mfi_open: Command not found'. I have been going around in circles with man pages and posts for 'mfi', 'mrsas', and 'megaRAID', trying to find some combination that works, with no luck. Any help would be greatly appreciated. 'mfiutil' is great for remotely diagnosing when the RAID controller is causing drive issues.
 
Look at the MegaCli port - it's not quite the same but should get you on the right path.

EDIT - note I'm on the older generation of PERC so not 100% sure how/if it works with the H755 - but hopefully points you in the right direction.
 
Code:
xyz@pqr:~ % pkg info | grep Mega
megacli-8.07.14                SAS MegaRAID FreeBSD MegaCLI
root@pqr:/home/xyz # MegaCli -LDInfo -Lall -aALL
                                     

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :R430RAID1
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 3.637 TB
Sector Size         : 512
Is VD emulated      : No
Mirror Data         : 3.637 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: No


Virtual Drive: 1 (Target Id: 1)
Name                :
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 7.276 TB
Sector Size         : 512
Is VD emulated      : Yes
Mirror Data         : 7.276 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Disk's Default
Encryption Type     : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: No



Exit Code: 0x00
 
But way back (about 5 years ago) when LSI introduced the LSI 9300 series of controllers, we lost the ability to run 'mfiutil' to monitor the RAID controller itself. 'mfiutil' is loaded and if I just type the command, it lists out the command options. But if I try 'mfiutil show drives', it fails with 'mfi_open: Command not found'. I have been going around in circles with man pages and posts for 'mfi', 'mrsas', 'megaRAID', trying to find some combination that works, with no luck. Any help would be greatly appreciated. 'mfiutil' is great for remotely diagnosing when the RAID controller is causing drive issues.
mfiutil(8) only works with cards supported by the mfi(4) driver. mpsutil(8) for mps(4), etc. There's no such specific tool for the mrsas(4) driven cards. sysutils/megacli should work on all of them. It's the same tool LSI recommends.
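Just to tabulate the driver-to-tool pairing described above, here is a quick-reference sketch; it restates this answer and is not an exhaustive list.

```python
# Which management tool pairs with which FreeBSD RAID driver, per the
# answer above: mfiutil(8) for mfi(4), mpsutil(8) for mps(4); mrsas(4)
# has no dedicated tool, so sysutils/megacli is the fallback.
DRIVER_TOOL = {
    "mfi": "mfiutil(8)",
    "mps": "mpsutil(8)",
    "mrsas": None,  # no dedicated tool; fall back to sysutils/megacli
}

def tool_for(driver):
    """Return the matching management tool, falling back to MegaCli."""
    return DRIVER_TOOL.get(driver) or "MegaCli (sysutils/megacli)"

print(tool_for("mfi"))    # -> mfiutil(8)
print(tool_for("mrsas"))  # -> MegaCli (sysutils/megacli)
```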
 
I have tried loading MegaCli but am having problems getting it to run... Your first three lines may be where I am having problems with the loading and setup. I don't think I had the latest version, either. Will try that now. We saw the 'mfiutil' / 'mrsas' conflict back when the LSI 9300-series chipsets were released, but at that time we did not try MegaCli as a replacement. There is now a push to get it working again.
 