2 of 3 M1015 controllers detect HDDs

Hi All,

Stats:

[Mobo] Supermicro X9SCM-F, BIOS 2.0a - 2 x PCIe x8 slots + 2 x PCIe x4 slots (x8 mechanical)
[SAS cards] 3 x IBM M1015 SAS cards, all flashed to IT firmware v14; LSI SAS2008 chipset
[OS] FreeBSD 9.0-RELEASE
[other] Xeon E3-1220 V2, 16 GB ECC, SSD for the OS, 10 x HDD in a Norco RPC-4224

Problem:

- 1 of the 3 verified-working IBM M1015 SAS cards does not detect the drives behind it.
- This particular card works and detects drives when I swap slots or backplanes, and when booted off a Linux LiveCD.
- The problem stays with whichever of these PCIe x8 cards sits in the PCIe x4 [electrical; x8 mechanical] slot.
- pciconf detects all 3 M1015 cards.

pciconf http://pastebin.ca/2249885

sysctl http://pastebin.ca/2249886

I am using camcontrol devlist to find drives, and no matter what I do I can't get the last card (the one in the PCIe x4 slot) to detect HDDs.
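
In case it helps, this is roughly what I have been poking at so far - nothing exotic, just the usual checks:
Code:
camcontrol devlist -v      # list attached devices per controller/bus
camcontrol rescan all      # force a rescan of all buses, just in case
dmesg | grep -i mps        # check whether all three mps(4) instances attach
pciconf -lv                # all three SAS2008 controllers do show up here (see pastebin)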

I don't know where to go from here - any pointers please?

TIA.
 
Do you have any benchmarks for the kind of read and write transfer rates you can achieve on such a system?

It looks a lot like the build I have selected and am about to buy; I would be grateful if you could share some performance results.
 
@boris_net

Not as of yet. I am mainly using Green drives, so I'm not expecting blazing performance; however, since this is basically going to be an overkill NAS, 120 MB/s is all I need.

I'll post numbers later once I get everything done - the SSD does boot damn quick, though. Currently I'm trying to figure out why my OpenSSH keys from Linux-land fail with their passphrase.
 
@boris_net

What would you recommend for benchmarking?

I'm mainly interested in sustained filesystem reads and writes for use as a NAS. bonnie++ and iozone seem a little overkill, since I'm not going to be creating and deleting hundreds of files on a regular basis.

I may default to my dd-created random file and transfer it back and forth, along the lines of the sketch below...
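
Something like this, with placeholder paths - a random file bigger than the 16 GB of RAM so caching doesn't skew the numbers, pushed onto the pool and pulled back off:
Code:
# create a ~24 GB random source file (larger than RAM to defeat caching)
dd if=/dev/urandom of=/path/on/ssd/rnd24GB.dd bs=1M count=24000
# sustained write: copy it onto the pool
dd if=/path/on/ssd/rnd24GB.dd of=/tank0/rnd24GB.dd bs=1M
# sustained read: pull it back and discard it
dd if=/tank0/rnd24GB.dd of=/dev/null bs=1M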
 
Summary:
-- I am using a 10x 2TB raidz2 pool of cheap ‘green’ drives.
-- This box will be a glorified NAS; it won't do much else. The hardware may be overkill, but I am building this to last.
-- I am currently getting 85+ MB/s read/write over NFSv4, so I am happy with that... it could be better, I know, but it works for me. On to the next project once this has all stabilized and been put into service.
-- I am also aware that dd is not a rigorous benchmark. I may try bonnie++ later if time permits - probably something like the run sketched just below.
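
If I do get around to bonnie++, a hypothetical run would look something like this (directory and user are placeholders; the size is twice RAM, and -n 0 skips the small-file tests I don't care about):
Code:
bonnie++ -d /tank0/bench -s 32768 -n 0 -u user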


Code:
[cpu]		Intel Xeon E3-1220-V2
[mobo]		Supermicro X9SCM-F (bios 2.0a)
[ram]		(4x) Crucial CT51272BA1339 [4GB DDR3 Unbuffered ECC]
[ssd]		Crucial M4 64GB (fw 000F)
[sas card]	(3x) IBM M1015 (IT mode v14)
[case]		Norco RPC-4224
[psu]		Antec Truepower New 750W 
[pwr cable]	Antec 77CM Molex Connector With Cable for NeoPower Series
[sas cable]	LSI Multi-Lane Internal SFF-8087 to SFF-8087 SAS Cable 0.6M	
[hdd]		(4x) 2TB Seagate ST2000DL003
		(3x) 2TB WD WD20EARS
		(1x) 2TB WD WD20EARX
		(1x) 2TB Hitachi HDS5C3020ALA632 
                (1x) 2TB Samsung/Seagate ST2000DL004
[os]		FreeBSD 9.0-RELEASE amd64
[NFS]		v4
[ZFS]		v28, dedupe, compression OFF
[firewall]	ipfw

NOTES:
** did NOT do any ZFS tuning - there is no /boot/loader.conf file.
** idle power usage for FULL system is approx 120 Watts.
** I also kept the stock 5 x 80mm fan wall in the Norco - yes, it is noisy, but it keeps everything cool and the unit sits in a different room from the home theater.

================================================================
**I created a 24GB random file for benchmarking, in the hope that a file larger than the 16GB of RAM keeps caching from skewing the results**

/dev/urandom → SSD write: 86 MB/s
-- not really an accurate measure of SSD write speed, but I needed to create the file somehow
Code:
dd bs=1M count=24000 if=/dev/urandom of=/home/user/testdir/rnd24GB.dd
...
25165824000 bytes transferred in 291.965923 secs (86194388 bytes/sec)

SSD read: 537 MB/s
Code:
dd if=/home/user/testdir/rnd24GB.dd of=/dev/null
...
25165824000 bytes transferred in 46.863236 secs (537005680 bytes/sec)

HDD raw write: all 120+ MB/s
-- wrote the rnd24GB.dd file directly to the raw devices
Code:
dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da0
...
14997782528 bytes transferred in 122.560944 secs (122369998 bytes/sec)

dd bs=1M count=24000 if=/home/user/testdir/rnd24GB.dd of=/dev/da1
...
25165824000 bytes transferred in 189.845046 secs (132559814 bytes/sec)

SSD → ZFS write: 78 MB/s
-- looks like a SATA bottleneck, since /dev/urandom and NFS both give better results; see below
Code:
dd if=/home/user/testdir/rnd24GB.dd of=/home/user/ztank0/file1.dd
...
25165824000 bytes transferred in 322.990155 secs (77915143 bytes/sec)

ZFS read: 196 MB/s
Code:
dd if=/home/user/ztank0/file1.dd of=/dev/null
...
25165824000 bytes transferred in 128.271675 secs (196191591 bytes/sec)

/dev/urandom → ZFS write: 85 MB/s
Code:
dd bs=1M count=24000 if=/dev/urandom of=/home/user/ztank0/file24GBrand.dd
...
25165824000 bytes transferred in 294.469267 secs (85461632 bytes/sec)
dd bs=1M count=24000 if=/dev/urandom of=/home/user/ztank0/file24GBrand2.dd
...
25165824000 bytes transferred in 293.075589 secs (85868032 bytes/sec)

NFSv4 ZFS → Linux client disk write: 76 MB/s
Code:
dd if=/mnt/zfs/file24GBrand.dd of=/home/user/local/tt_temp/24GBxfer1.dd
...
25165824000 bytes (25 GB) copied, 329.331 s, 76.4 MB/s

NFSv4 ZFS → Linux client read (stream): 88 MB/s
Code:
dd if=/mnt/zfs/file24GBrand.dd of=/dev/null
...
25165824000 bytes (25 GB) copied, 285.951 s, 88.0 MB/s

NFSv4 Linux client write → ZFS: 86 MB/s
Code:
dd if=/home/user/local/tt_temp/24GBxfer1.dd of=/mnt/zfs/xferbak24GB.dd
...
25165824000 bytes (25 GB) copied, 291.752 s, 86.3 MB/s
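
For anyone curious, the NFSv4 setup itself is nothing fancy - roughly the following, with the subnet, hostname, and client paths made up for the example:
Code:
# /etc/rc.conf on the FreeBSD box
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
mountd_enable="YES"
rpcbind_enable="YES"

# /etc/exports (example subnet)
V4: /tank0 -sec=sys -network 192.168.1.0 -mask 255.255.255.0
/tank0 -network 192.168.1.0 -mask 255.255.255.0

# on the Linux client (hostname is a placeholder)
mount -t nfs4 nas:/ /mnt/zfs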

================================================================
ZFS info
-- I used a GPT partition per disk plus a GNOP provider for each drive, as described here: http://forums.freebsd.org/showpost.php?p=175779&postcount=6 (a rough sketch of the commands is just below).
-- Scrubbing runs at 700+ MB/s too, albeit on a very small test dataset.
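
A rough, from-memory sketch of that per-disk setup (the partition offset is illustrative, not my exact command history):
Code:
# aligned GPT partition on each disk, plus a fake 4K-sector gnop provider
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
    gpart create -s gpt $d
    gpart add -t freebsd-zfs -b 2048 $d      # start at 1 MiB for 4K alignment
    gnop create -S 4096 ${d}p1               # advertise 4K sectors so ZFS picks ashift=12
done
# build the pool on the .nop providers, then drop them and re-import
zpool create tank0 raidz2 da0p1.nop da1p1.nop da2p1.nop da3p1.nop da4p1.nop \
    da5p1.nop da6p1.nop da7p1.nop da8p1.nop da9p1.nop
zpool export tank0
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do gnop destroy ${d}p1.nop; done
zpool import tank0                           # comes back on the plain daXp1 partitions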

Code:
# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
tank0  1.02M  13.6T   329K  /tank0
# zpool status tank0
 pool: tank0
state: ONLINE
scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank0       ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    da4p1   ONLINE       0     0     0
	    da6p1   ONLINE       0     0     0
	    da7p1   ONLINE       0     0     0
	    da8p1   ONLINE       0     0     0
	    da9p1   ONLINE       0     0     0
	    da5p1   ONLINE       0     0     0
	    da3p1   ONLINE       0     0     0
	    da2p1   ONLINE       0     0     0
	    da1p1   ONLINE       0     0     0
	    da0p1   ONLINE       0     0     0

errors: No known data errors

# zpool status -v tank0
  pool: tank0
 state: ONLINE
 scan: scrub in progress since Tue Dec  4 21:01:36 2012
    65.0G scanned out of 123G at 723M/s, 0h1m to go
    0 repaired, 52.69% done
config:

	NAME        STATE     READ WRITE CKSUM
	tank0       ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    da4p1   ONLINE       0     0     0
	    da6p1   ONLINE       0     0     0
	    da7p1   ONLINE       0     0     0
	    da8p1   ONLINE       0     0     0
	    da9p1   ONLINE       0     0     0
	    da5p1   ONLINE       0     0     0
	    da3p1   ONLINE       0     0     0
	    da2p1   ONLINE       0     0     0
	    da1p1   ONLINE       0     0     0
	    da0p1   ONLINE       0     0     0

errors: No known data errors
 