S-ATA 1 write speed comparison

Hello fellas,

I was wondering what speeds you get on an S-ATA1 controller.

I have S-ATA2 disks (Seagate Pipeline 500 GB) connected to the motherboard controller and to another PCI controller (both S-ATA1), with transfer speeds up to 35-38 MB/s on FreeBSD 8.0-RELEASE.

I get an average of 30 MB/s when copying from one disk to the other and ~35 MB/s when copying from the network (from my laptop, a Lenovo T400).

dd shows ~65 MB/s write on all the disks if I create a 5 GB file from /dev/zero.
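
For reference, a test like the one described would look roughly like this (the output path and block size are illustrative, not the exact invocation used):
Code:
# write ~5 GB of zeroes to the disk under test and let dd report the throughput
dd if=/dev/zero of=/mnt/test/5GB bs=1M count=5000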

I know the write speed is directly proportional to the read speed of the other disk, but what do you make of the values above? Good? Bad?

Personally I consider them bad.
 
Yeah, I kind of figured that out, but I put it in just for the heck of it.

Anyway, as a comparison, CentOS 5 and RHEL 5 could not hold a stable 35 MB/s transfer rate from my laptop over FTP. The speed would eventually drop to a "normalized" 28 MB/s (sometimes even lower).

The "nice" thing is that dd was showing a mind blowing 100MB when using dd (I was like "wtf? 100MB ? and FTP only 28?")
 
Sorry, this is not specific to the particular controller, but I thought I would post it here as I spend a fair bit of time looking at disk speeds and I use the diskinfo command.

I hope to see in excess of 100 MB/s (100,000 kbytes/sec) for outside, middle and inside; otherwise I suspect a nasty RAID config.

What do you think of using this as a disk speed benchmark? Does it report the same as your dd method?

diskinfo -t /dev/da0

or whatever your disk is. One easy way to find out is to just type mount.
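
If you're not sure which device node to benchmark, the device appears in the first column of mount's output; something like the following lists them (device names here are just an illustration):
Code:
# list mounted filesystems; the first column (/dev/ad8, /dev/mirror/gm0, ...) is what diskinfo wants
mount
# or, in fstab format, stripped down to just the device names
mount -p | awk '{print $1}'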
 
Code:
[root@xxxxx ~]# diskinfo -vct /dev/mirror/gm0
/dev/mirror/gm0
        512             # sectorsize
        160040803328    # mediasize in bytes (149G)
        312579694       # mediasize in sectors

I/O command overhead:
        time to read 10MB block      0.418774 sec       =    0.020 msec/sector
        time to read 20480 sectors   2.872771 sec       =    0.140 msec/sector
        calculated command overhead                     =    0.120 msec/sector

Seek times:
        Full stroke:      250 iter in   1.603919 sec =    6.416 msec
        Half stroke:      250 iter in   1.550966 sec =    6.204 msec
        Quarter stroke:   500 iter in   3.213315 sec =    6.427 msec
        Short forward:    400 iter in   2.560215 sec =    6.401 msec
        Short backward:   400 iter in   2.561784 sec =    6.404 msec
        Seq outer:       2048 iter in   0.331308 sec =    0.162 msec
        Seq inner:       2048 iter in   0.396737 sec =    0.194 msec
Transfer rates:
        outside:       102400 kbytes in   3.776254 sec =    27117 kbytes/sec
        middle:        102400 kbytes in   3.924349 sec =    26093 kbytes/sec
        inside:        102400 kbytes in   5.154929 sec =    19864 kbytes/sec

[root@xxxxx ~]# gmirror status
      Name    Status  Components
mirror/gm1  COMPLETE  ad8
                      ad10
mirror/gm0  COMPLETE  ad14
                      ad16

[root@xxxxx ~]# diskinfo -vct /dev/ad8
/dev/ad8
        512             # sectorsize
        500107862016    # mediasize in bytes (466G)
        976773168       # mediasize in sectors
        969021          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        5VV17X7Q        # Disk ident.

I/O command overhead:
        time to read 10MB block      0.183263 sec       =    0.009 msec/sector
        time to read 20480 sectors   1.934157 sec       =    0.094 msec/sector
        calculated command overhead                     =    0.085 msec/sector

Seek times:
        Full stroke:      250 iter in   6.642226 sec =   26.569 msec
        Half stroke:      250 iter in   4.618055 sec =   18.472 msec
        Quarter stroke:   500 iter in   7.345918 sec =   14.692 msec
        Short forward:    400 iter in   3.569374 sec =    8.923 msec
        Short backward:   400 iter in   3.899193 sec =    9.748 msec
        Seq outer:       2048 iter in   0.202528 sec =    0.099 msec
        Seq inner:       2048 iter in   0.284220 sec =    0.139 msec
Transfer rates:
        outside:       102400 kbytes in   1.585748 sec =    64575 kbytes/sec
        middle:        102400 kbytes in   1.486146 sec =    68903 kbytes/sec
        inside:        102400 kbytes in   1.678586 sec =    61004 kbytes/sec

[root@xxxxx /]# dd if=/dev/zero of=5GB bs=5M count=1000
1000+0 records in
1000+0 records out
5242880000 bytes transferred in 96.725527 secs (54203685 bytes/sec)

As you can see, both commands gave me quite a nice write speed, but since I cannot trust dd I cannot consider its result viable. I see, however, more or less the same result from diskinfo, and when I consider the fact that I could not reach speeds higher than 38 MB/s (OK, I admit the "read source" matters A LOT), I consider both wrong.
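
One small thing to keep in mind when comparing the two sets of numbers is the units: dd reports bytes per second, so its 54203685 bytes/sec works out to about 52 MiB/s, e.g.:
Code:
# convert dd's bytes/sec figure to MiB/s
echo "scale=2; 54203685 / 1024 / 1024" | bc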

There is a catch, though. At the moment I do not have any equipment there that I can say for sure has a higher read speed than 38 MB/s. This is because, first, I do not have physical access there and, second, the site is ~800 km from my present location. Personally, I own quite a nice piece of hardware on which I was able to read/write at ~100 MB/s. Of course, if I were able to go to that site with my computer, I believe I could achieve speeds greater than 38 MB/s.
 
To whom it may concern:

I upgraded from 8.0 to 8.1 (i386) with the idea of giving ZFS a try. I patched the source with "zfs_metaslab.patch" and "releng-8.1-zfsv15.patch" (pointed out by mm in http://forums.freebsd.org/showthread.php?t=8270), rebuilt the world plus a custom kernel (GENERIC + pf/altq), created one ZFS disk and, to my surprise, couldn't reach speeds greater than 20 MB/s.
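
For anyone wanting to reproduce this, the patch-and-rebuild sequence would have been roughly the following (paths, patch strip level and the kernel config name are my assumptions, not taken from the post):
Code:
# apply the ZFS v15 patches to the 8.1 source tree and rebuild world + kernel
cd /usr/src
patch < /path/to/zfs_metaslab.patch
patch < /path/to/releng-8.1-zfsv15.patch
make buildworld
make buildkernel KERNCONF=CUSTOM     # GENERIC plus pf/altq options
make installkernel KERNCONF=CUSTOM
# reboot into the new kernel, then:
make installworld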

loader.conf:
Code:
vm.kmem_size_max="512M"
vm.kmem_size="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
zfs_load="YES"
opensolaris_load="YES"

rc.conf
Code:
zfs_enable="YES"

I created the ZFS pool with:
[CMD=""]zpool create <name> <disk>[/CMD]

I tried setting gzip compression but got a max of 8 MB/s write speed.
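
For completeness, the single-disk pool and the compression experiment would look roughly like this (pool/dataset names and the device are placeholders):
Code:
# create a single-disk pool and a dataset on it
zpool create tank ad8
zfs create tank/data
# the compression experiment mentioned above
zfs set compression=gzip tank/data
# back to the default
zfs set compression=off tank/data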


Now I removed the ZFS config from that drive, created a normal UFS (soft updates) filesystem, and got write speeds of 41 MB/s. So basically ... wtf?
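
Going back to UFS on such a drive would be something along these lines (device and partitioning are placeholders; a sketch, not the exact commands used):
Code:
# drop the pool, relabel the disk and build UFS with soft updates (-U)
zpool destroy tank
gpart create -s gpt ad8
gpart add -t freebsd-ufs ad8
newfs -U /dev/ad8p1
mount /dev/ad8p1 /mnt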

The other disks jumped from an average of 30-32 MB/s to an average of 35 MB/s. Most intriguing, I would say (8.1 serves UFS quite well, although I cannot say the same for ZFS). Either that, or I'm doing something wrong ...

BTW: the system has 3 GB of RAM and my memory looks like this:
Code:
Mem: 173M Active, 522M Inact, 56M Wired, 40M Cache, 112M Buf, 2201M Free

also, I was monitoring the kernel memory and it's quite stable at:
Code:
TEXT=22294972, 21.2621 MB
DATA=21168128, 20.1875 MB
TOTAL=43463100, 41.4496 MB
I was using the script from http://wiki.freebsd.org/ZFSTuningGuide
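
If you don't want to run the whole wiki script, a couple of sysctls give a quick sanity check that the ARC is staying inside the limits set in loader.conf (sysctl names as on 8.x):
Code:
# current ARC size versus the configured ceiling
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
# configured kmem limits
sysctl vm.kmem_size vm.kmem_size_max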

Like I said ... to whom it may concern.
 