Recommend a 16-port HBA (SATA)?

Hi,

Currently for my home setup I have a self-built system based on a Fujitsu D3417-B1 motherboard fitted with an i3-6100 CPU (really happy with that combo: low energy use and quite a lot of processing power), 16GB of ECC memory, and at the moment eight 2TB drives (on a SAS1068E card) in a ZFS raidz1 pool. This setup works very nicely for me: the 'server' has onboard 1Gbps NICs that wire through the house, and it also has a 10Gbps card that connects directly to my workstation (same room) for faster access to large files. This all works perfectly.

I've recently gotten my hands on 16 (used, but still healthy) Samsung 850 EVO 1TB SSDs, and I'm thinking it would be nice to replace the eight 2TB HDDs with sixteen 1TB SSDs, for a) speed (although I don't really need more speed, it's always nice), but mostly b) lower power consumption. I currently spin down the HDDs when they're idle for more than 20 minutes and that works well, but I'm sure it will all work better with SSDs: idle power consumption is so low that I don't need to do anything there, and they're instantly available when needed, whereas spinning up the eight HDDs takes some time, especially when the clients are Windows machines connecting to Samba shares.
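
(For context, the spin-down is nothing fancy; it's essentially just a standby timer on each drive. A minimal sketch of the idea, assuming camcontrol(8) can reach the disks behind the HBA and that the device names match yours:)

Code:
# Set a 20-minute (1200 s) standby timer on each data disk.
# Device names are examples only; adjust to your own system.
for disk in da0 da1 da2 da3 da4 da5 da6 da7; do
    camcontrol standby $disk -t 1200
done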

The only thing I need is a 16-port SATA controller, something affordable (if possible not much more than EUR 200, perhaps something older or second hand?), and of course something that works well on FreeBSD. It doesn't have to be stellar quality or enterprise-grade stuff. A pure SATA controller, no RAID.

Does anybody know something suitable from their own experience?
 
I would go with the LSI (now known as Avago and Broadcom) cards, simply because they're so reliable and fast. But I don't think the 16-port version will be cheap. Also, to wire 16 devices to a single card you will need four octopus cables and a lot of power splitters. Mechanically mounting and then cabling this many disks in a normal chassis will be either a nightmare or a fascinating engineering project. With this many devices it becomes much more sensible to buy an external enclosure (a JBOD), but the ones I know about are optimized for rack mounting in data centers and are (a) big, (b) noisy, and (c) expensive.
 
There's a range of Broadcom JBOD HBAs. They all do SATA and SAS, but the octopus cables are different, and you have to order the correct type for your needs.

The 9305-16 cards look appropriate. They need a PCIe 3.0 x8 slot. PCIe 4.0 versions are available, but I'm not sure whether they are known to work on FreeBSD.

The 9305-16 cards have a maximum of 6.5 GB/s of sequential read throughput. That's an OK match for 16 Samsung 850 EVOs at 540 MB/s each, but two 8-port controllers might be cheaper and faster (if your motherboard has the PCIe lanes to spare).

I often use sticky velcro patches to mount my SSDs. But getting 16 in a case would be a challenge!

As mentioned above, you are going to need to buy and manage a lot of octopus and power cables. You would also need to work out the power supply budget.
 
Your requirements of 16 ports and sub-200 EUR are going to conflict with each other, in my opinion. Just search for "SAS 16-port internal HBA", ignoring FreeBSD suitability for the moment, and see whether you can find an "affordable" one. LSI/Broadcom and Areca cards are out of your budget.
 
16 x 500 MB/s = 8000 MB/s.
PCIe 2.1 x8 = half the speed you need.
Now if you are using ZFS you will be fine, as it is dog slow.
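
Roughly, counting ~500 MB/s of usable bandwidth per PCIe 2.0/2.1 lane after 8b/10b encoding, the back-of-the-envelope numbers look like this:

Code:
# Approximate aggregate figures
echo "$(( 16 * 500 )) MB/s wanted by 16 SATA SSDs"        # 8000 MB/s
echo "$(( 8 * 500 )) MB/s offered by a PCIe 2.x x8 slot"  # ~4000 MB/s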
 
Depending on how you set up your array, cutting the per-port bandwidth down to 250 MB/s, using ZFS, and spreading it out over sixteen drives may be indistinguishable from two 8TB hard drives.

Also, you've increased the number of failure points. Make sure the array is backed up.

Four SFF-8087 fanout cables should run about 100 USD.
Also, when buying from China, expect counterfeits.
 
For the power cables, I like these for a tidy setup:

I have used these for disk mounting:
 
That's the same technology as my LSI SAS 9211-8i. I expect it will function quite well, even if constrained by the bus. Hope you got the octopus cables, as they can be quite expensive.

I have several of them, so that's covered.
 

Not sure yet how I will configure ZFS, but it's going to be at least raidz2, and I have 17 SSDs, so 3 spares to begin with. And yes, the zpool is backed up locally daily (with snapshots, of course), and roughly weekly to my own server in a datacenter in another city.
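
(The backup side is conceptually just snapshot-and-send; a minimal sketch of the idea, with made-up pool, dataset and host names:)

Code:
# Daily: recursive snapshot of the pool (names are examples only)
zfs snapshot -r tank@daily-2020-04-20

# Weekly: incremental replication to the remote machine over SSH
zfs send -R -i tank@weekly-prev tank@weekly-2020-04-19 | \
    ssh backup.example.org zfs receive -du backuppool/nas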

I have several (Supermicro, so fairly good ones too) SFF-8087 fanout cables already.

I've read about the counterfeits of this controller, but only after I ordered it, so this is indeed a concern.
 
If you can't figure out how to mount them ... get a plastic crate (in the US they're called milk crates). You will find that cardboard hanging file folders fit perfectly into them. Put each SSD loosely into a file folder and dangle the cables over the top. Seriously, I'm not joking; a former colleague (long retired, perhaps not even alive any more) built one of the first RAID system prototypes of the 1990s this way (using disk drives), and it worked well for a research lab.
 
Given that the SATA cable length will be a limiting factor, physical accommodation will be an issue.

I was wondering if it might be possible to stack the drives by using a small velcro patch at each of the 4 corners of the drive. This would permit drive removal, and give a few millimeters clearance between each drive.

Might this be sufficient clearance for any heat dissipation needs?
 
A good indicator on LSI products is the stickers on the back.
Genuine cards have the model number sticker plus several others.
Look at a genuine card like this and compare the stickers.
These are not as good as YottaMark stickers, which can be tracked,
but I have found that the fakes are often comparable chip-wise while skimping on the stickers.
 
Is this the seller for your card?
Notice how even the silkscreening is different.
This text is missing below the heatsink:
PCI
Express
PCIe2 x8
Maybe there were different revisions of the circuit board and this is fine. Who knows.

The absence of a firmware version sticker on the ROM is odd (lower left on the back).
The inclusion of a generic driver disc does not help the case either.
 
I recently bought this card from China and I am pretty sure it is a fake.
I already owned an Intel version of the 9400-8i, and that one came in a retail box, so I have something genuine to compare against.
Notice it has none of the right stickers. I was worried it was an OEM Lenovo 530-8i, which does not offer NVMe/TriMode.
That said, it works fine and I was able to flash the newest firmware onto it.
I have two NVMe drives running on it.
The Intel card was slightly different in that it shipped with the MegaRAID firmware, which I have not updated yet.
Both take NVMe with the special LSI cable.
 
Yes, this could end up being a disaster. It hasn't even left China yet and I've already had to pay an additional EUR 44 in taxes... yay.
 
Well, the card came in today, MUCH sooner than expected (the estimated delivery date was April 28th, and here it is already).

I fitted the card into a Supermicro chassis to test it, and at first it wasn't even detected (not in the BIOS, not in any OS I tried), so that started off really well. I then tried another slot and things started working. I could boot from it; the first thing I tried was Ubuntu 18.04, and that all worked fine. Performance seemed pretty OK as well, on par with what I was hoping for. Next up was FreeBSD, of course. I created a USB stick, fitted two 512GB Samsung 840 PROs, and installed 12.1-RELEASE onto those (ZFS raid0/stripe config). It booted fine, no problems.

First, let's see how FreeBSD sees everything:

Code:
# pciconf -lvc
mps0@pci0:4:0:0:        class=0x010700 card=0x30c01000 chip=0x00641000 rev=0x02 hdr=0x00
    vendor     = 'LSI Logic / Symbios Logic'
    device     = 'SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor]'
    class      = mass storage
    subclass   = SAS
    cap 01[50] = powerspec 3  supports D0 D1 D2 D3  current D0
    cap 10[68] = PCI-Express 2 endpoint max data 256(4096) FLR NS
                 link x8(x8) speed 5.0(5.0) ASPM disabled(L0s)
    cap 03[d0] = VPD
    cap 05[a8] = MSI supports 1 message, 64 bit
    cap 11[c0] = MSI-X supports 15 messages, enabled
                 Table in map 0x14[0x2000], PBA in map 0x14[0x3800]
    ecap 0001[100] = AER 1 0 fatal 0 non-fatal 0 corrected
    ecap 0004[138] = Power Budgeting 1
    ecap 0010[150] = SR-IOV 1 IOV disabled, Memory Space disabled, ARI disabled
                     0 VFs configured out of 7 supported
                     First VF RID Offset 0x0001, VF RID Stride 0x0001
                     VF Device ID 0x0064
                     Page Sizes: 4096 (enabled), 8192, 65536, 262144, 1048576, 4194304
    ecap 000e[190] = ARI 1

/var/run/dmesg.boot entry:

Code:
mps0: <Avago Technologies (LSI) SAS2116> port 0xe000-0xe0ff mem 0xfbb9c000-0xfbb9ffff,0xfbb40000-0xfbb7ffff irq 40 at device 0.0 on pci4
mps0: Firmware: 19.00.00.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>

So we can use FreeBSD's mpsutil..

Code:
# mpsutil show adapter
mps0 Adapter:
       Board Name: SAS9201-16i
   Board Assembly:
        Chip Name: LSISAS2116
    Chip Revision: ALL
    BIOS Revision: 7.37.00.00
Firmware Revision: 19.00.00.00
  Integrated RAID: no

PhyNum  CtlrHandle  DevHandle  Disabled  Speed   Min    Max    Device
0                              N                 1.5    6.0    SAS Initiator
1                              N                 1.5    6.0    SAS Initiator
2       0001        0011       N         6.0     1.5    6.0    SAS Initiator
3       0002        0012       N         6.0     1.5    6.0    SAS Initiator
4                              N                 1.5    6.0    SAS Initiator
5                              N                 1.5    6.0    SAS Initiator
6                              N                 1.5    6.0    SAS Initiator
7                              N                 1.5    6.0    SAS Initiator
8                              N                 1.5    6.0    SAS Initiator
9                              N                 1.5    6.0    SAS Initiator
10                             N                 1.5    6.0    SAS Initiator
11                             N                 1.5    6.0    SAS Initiator
12                             N                 1.5    6.0    SAS Initiator
13                             N                 1.5    6.0    SAS Initiator
14                             N                 1.5    6.0    SAS Initiator
15                             N                 1.5    6.0    SAS Initiator

So far so good!

Code:
# mpsutil show iocfacts
          MsgVersion: 02.00
           MsgLength: 16
            Function: 0x3
       HeaderVersion: 34,00
           IOCNumber: 0
            MsgFlags: 0x0
               VP_ID: 0
               VF_ID: 0
       IOCExceptions: 0
           IOCStatus: 0
          IOCLogInfo: 0x0
       MaxChainDepth: 128
             WhoInit: 0x4
       NumberOfPorts: 1
      MaxMSIxVectors: 0
       RequestCredit: 7632
           ProductID: 0x2213
     IOCCapabilities: 0x1285c <ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>
           FWVersion: 0x13000000
 IOCRequestFrameSize: 32
       MaxInitiators: 30
          MaxTargets: 756
     MaxSasExpanders: 224
       MaxEnclosures: 224
       ProtocolFlags: 0x3 <ScsiTarget,ScsiInitiator>
  HighPriorityCredit: 120
MaxRepDescPostQDepth: 65504
      ReplyFrameSize: 32
          MaxVolumes: 0
        MaxDevHandle: 1026
MaxPersistentEntries: 128
        MinDevHandle: 17

Code:
# mpsutil show devices
B____T    SAS Address      Handle  Parent    Device        Speed Enc  Slot  Wdt
00   32   4433221102000000 0011    0001      SATA Target   6.0   0001 01    1
00   33   4433221103000000 0012    0002      SATA Target   6.0   0001 00    1

Next up, some performance tests. I usually do this with OpenSSL to create fast pseudo-random data, like so:

Code:
openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero > randomfile.bin
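
(Side note: if you'd rather cap how much gets written and have the writer report its own throughput, the same stream can be pushed through dd; a sketch, with an arbitrary ~64GB limit:)

Code:
# Same pseudo-random stream, limited to roughly 64GB; dd prints bytes/sec at the end.
# (Reads from a pipe may be short, so treat the total size as approximate.)
openssl enc -aes-256-ctr \
    -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" \
    -nosalt < /dev/zero | dd of=randomfile.bin bs=1m count=65536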

While the openssl command is writing, I monitor the zpool:

Code:
# zpool iostat 1                        
               capacity     operations    bandwidth  
pool        alloc   free   read  write   read  write 
----------  -----  -----  -----  -----  -----  ----- 
zroot       33.1G   911G      2    676  55.5K  82.9M 
zroot       35.1G   909G      0  7.34K      0   919M 
zroot       35.1G   909G      0  7.66K      0   980M 
zroot       37.0G   907G      0  7.51K      0   923M 
zroot       37.0G   907G      0  7.27K      0   930M 
zroot       38.9G   905G      0  7.48K      0   935M 
zroot       38.9G   905G      0  7.48K      0   957M 
zroot       38.9G   905G      0  7.81K      0   983M 
zroot       40.9G   903G      0  7.23K      0   903M 
zroot       40.9G   903G      0  7.67K      0   982M 
zroot       42.8G   901G      0  7.24K      0   903M 
zroot       42.8G   901G      0  7.67K      0   981M 
zroot       44.7G   899G      0  7.52K      0   921M 
zroot       44.7G   899G      0  7.67K      0   982M 
zroot       46.7G   897G      0  7.32K      0   911M 
zroot       46.7G   897G      0  7.67K      0   982M 
zroot       48.6G   895G      0  7.35K      0   899M 
zroot       48.6G   895G      0  7.68K      0   983M 
zroot       50.5G   893G      0  7.21K      0   901M 
zroot       50.5G   893G      0  7.67K      0   981M 
zroot       52.5G   892G      0  7.32K      0   897M 
zroot       52.5G   892G      0  7.67K      0   982M 
zroot       54.4G   890G      0  7.28K      0   910M 
zroot       54.4G   890G      0  7.67K      0   982M 
zroot       56.3G   888G      0  7.31K      0   913M 
zroot       56.3G   888G      0  7.48K      0   957M 
zroot       58.3G   886G     25  7.34K   432K   917M

Note that this system has 32GB of RAM, so at almost 60GB written I'm well past anything that could be cached, and the speeds don't really drop, so these numbers look pretty real to me.

Now, my CPU isn't that fast (an E5-2609 v2) and OpenSSL is close to 100% utilization (yes, AES-NI is enabled in the BIOS; see the quick check after the gstat capture below), but so are the disks (gstat capture):

Code:
dT: 1.001s  w: 1.000s                                                       
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name         
   10   3803      0      0    0.0   3801 462921    2.4   98.8| da0          
   10   3816      0      0    0.0   3814 464135    2.4   99.7| da1          
    0      0      0      0    0.0      0      0    0.0    0.0| da0p1        
    0      0      0      0    0.0      0      0    0.0    0.0| da0p2        
   10   3803      0      0    0.0   3801 462921    2.4   98.8| da0p3        
    0      0      0      0    0.0      0      0    0.0    0.0| gpt/gptboot0 
    0      0      0      0    0.0      0      0    0.0    0.0| da1p1        
    0      0      0      0    0.0      0      0    0.0    0.0| da1p2        
   10   3816      0      0    0.0   3814 464135    2.4   99.7| da1p3        
    0      0      0      0    0.0      0      0    0.0    0.0| gpt/gptboot1
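
About the AES-NI remark above: a quick way to verify that the instructions are present and that OpenSSL actually benefits from them (the -evp run is the accelerated code path):

Code:
# The CPU feature flags in dmesg should include AESNI
grep -o 'Features2=.*' /var/run/dmesg.boot | head -1

# Compare the plain AES benchmark with the EVP (AES-NI) code path
openssl speed aes-256-cbc
openssl speed -evp aes-256-cbc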

Oh, some details about the drives I was using (SMART):

Code:
=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung SSD 840 PRO Series
Serial Number:    Yep
LU WWN Device Id: 5 002538 5a00e5e85
Firmware Version: DXM05B0Q
User Capacity:    512,110,190,592 bytes [512 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Apr 20 17:50:13 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

So, all in all, I'm not disappointed! I'm ordering a bunch of SATA power cables now and hope to test with all 16 SSDs connected soon. Will update here!
 
Got my cables and connectors yesterday and did a quick test today. Same server/mobo/CPU as in the previous tests, now with all 16 Samsung 850 EVOs connected. For a first pass I simply put all 16 drives in a single raidz2 pool, and I think I'm just going to leave it at that. Performance is good enough (remember, it's a home NAS, and the network through the house will be 1Gbit).
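
(For completeness, the pool layout is nothing exotic; roughly something like this, with the device names being whatever CAM assigned on my system:)

Code:
# One raidz2 vdev across all sixteen SSDs (device names are examples)
zpool create zraid6 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    da8 da9 da10 da11 da12 da13 da14 da15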

Using the same openssl method of creating random data (see my previous post in this thread), writing to a file on the raidz2 pool:

Code:
root@mhgjh:/zraid6 # zpool iostat zraid6 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zraid6      19.1G  14.5T      0    219    522  26.2M
zraid6      19.8G  14.5T      0  5.35K      0   655M
zraid6      20.4G  14.5T      0  5.39K      0   656M
zraid6      21.0G  14.5T      0  5.43K      0   659M
zraid6      21.7G  14.5T      0  7.49K      0   924M
zraid6      22.3G  14.5T      0  8.14K      0  1004M
zraid6      23.6G  14.5T      0  5.55K      0   662M
zraid6      24.2G  14.5T      0  5.38K      0   656M
zraid6      24.9G  14.5T      0  5.40K      0   657M
zraid6      25.5G  14.5T      0  5.39K      0   656M
zraid6      26.1G  14.5T      0  5.60K      0   683M
zraid6      26.8G  14.5T      0  8.27K      0  1024M
zraid6      28.1G  14.5T      0  7.34K      0   871M
zraid6      28.7G  14.5T      0  5.40K      0   657M
zraid6      29.3G  14.5T      0  5.25K      0   646M
zraid6      30.0G  14.5T      0  5.46K      0   663M
zraid6      30.6G  14.5T      0  9.16K      0  1.12G
zraid6      31.9G  14.5T      0  6.79K      0   799M
zraid6      32.5G  14.5T      0  5.42K      0   657M
zraid6      33.2G  14.5T      0  5.50K      0   659M
zraid6      33.8G  14.5T      0  8.47K      0  1.04G
zraid6      35.1G  14.5T      0  7.46K      0   901M
zraid6      35.7G  14.5T      0  5.33K      0   647M
zraid6      36.3G  14.5T      0  4.99K      0   609M
zraid6      37.0G  14.5T      0  10.7K      0  1.28G
zraid6      38.2G  14.5T      0  5.34K      0   642M
zraid6      38.9G  14.5T      0  5.37K      0   656M
zraid6      39.5G  14.5T      0  5.28K      0   655M
zraid6      40.2G  14.5T      0  10.4K      0  1.27G
zraid6      41.4G  14.5T      0  5.31K      0   629M
zraid6      42.1G  14.5T      0  5.13K      0   623M
zraid6      42.7G  14.5T      0  8.89K      0  1.08G
zraid6      44.0G  14.5T      0  7.00K      0   828M
zraid6      44.6G  14.5T      0  5.13K      0   627M
zraid6      45.3G  14.5T      0  5.59K      0   671M
zraid6      45.9G  14.5T      0  9.86K      0  1.20G
zraid6      47.2G  14.5T      0  5.96K      0   688M
zraid6      47.8G  14.5T      0  5.27K      0   638M
zraid6      48.4G  14.5T      0  5.65K      0   661M
zraid6      49.1G  14.5T      0  9.70K      0  1.15G
zraid6      50.4G  14.5T      0  7.19K      0   807M
zraid6      51.0G  14.5T      0  5.33K      0   638M
zraid6      51.6G  14.4T      0  5.71K      0   666M
zraid6      52.3G  14.4T      0  9.33K      0  1.08G
zraid6      53.6G  14.4T      0  7.78K      0   876M
zraid6      54.2G  14.4T      0  5.69K      0   659M
zraid6      54.8G  14.4T      0  5.60K      0   662M
zraid6      55.5G  14.4T      0  8.29K      0   992M
zraid6      56.7G  14.4T      0  8.96K      0   969M
zraid6      57.4G  14.4T      0  5.71K      0   638M
zraid6      58.0G  14.4T      0  5.90K      0   667M
zraid6      58.7G  14.4T      0  9.50K      0  1.10G
zraid6      59.9G  14.4T      0  8.16K      0   875M
zraid6      60.6G  14.4T      0  5.73K      0   654M
zraid6      61.2G  14.4T      0  5.77K      0   653M
zraid6      61.8G  14.4T      0  8.52K      0  1005M
zraid6      63.1G  14.4T      0  9.07K      0   959M
^C

While this is running, gstat shows all drives at around ~27% utilisation, so I'm hitting a bottleneck somewhere, but that's fine by me; it's more than fast enough. The main reason for switching to the SSDs is lower power usage and no longer needing to suspend/spin down the disks when idle (so, ease of use).
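
(A handy way to watch just the sixteen physical disks in gstat, without the partition and label clutter, is to filter the output; for example:)

Code:
# Physical providers only, 1-second refresh, limited to the da(4) disks
gstat -p -I 1s -f '^da[0-9]+$'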
 