Slow disk throughput

I seem to be having some issues with one of my servers. There are two SATA drives in the machine: one is SATA II, the other is an HP drive that came with the server, which I believe is a SATA I drive.

During any kind of moderate to heavy disk activity, the system slows down considerably: extracting tar/gzip archives, copying from a partition on one drive to the other, et cetera. The same machine used to run CentOS 5, and I didn't have any problems like this.

Is there any way to tune hard drives on FreeBSD, like hdparm on Linux?

I'm currently running bonnie++ to get some benchmarks, but it is taking forever to complete.

The machine is running FreeBSD 7.1 with a custom kernel (pf/altq compiled in and a lot of unneeded drivers removed) on dual quad-core Xeon X5355s (2.66 GHz), so I'm 100% sure it isn't any kind of CPU bottleneck. It happens with the GENERIC kernel as well, so I'm sure it isn't a kernel configuration error either.

Thanks in advance.
 
I have a similar issue when copying files from one disk to another.

Code:
ad0: 152627MB <SAMSUNG SP1604N TM100-30> at ata0-master UDMA33
ad4: 238475MB <WDC WD2500KS-00MJB0 02.01C03> at ata2-master SATA150
 
If it helps any...

Code:
[root@genesis ~]# dmesg | grep da
mpt0: <LSILogic SAS/SATA Adapter> port 0x2000-0x20ff mem 0xdfa10000-0xdfa13fff,0xdfa00000-0xdfa0ffff irq 16 at device 0.0 on pci3
da0 at mpt0 bus 0 target 1 lun 0
da0: <ATA WDC WD5000KS-00M 2E07> Fixed Direct Access SCSI-5 device
da0: 300.000MB/s transfers
da0: Command Queueing Enabled
da0: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
da1 at mpt0 bus 0 target 2 lun 0
da1: <ATA GB0500C8046 HPG1> Fixed Direct Access SCSI-5 device
da1: 300.000MB/s transfers
da1: Command Queueing Enabled
da1: 476940MB (976773168 512 byte sectors: 255H 63S/T 60801C)
Trying to mount root from ufs:/dev/da0s1a
 
killasmurf86 said:
I have a similar issue when copying files from one disk to another.

Code:
ad0: 152627MB <SAMSUNG SP1604N TM100-30> at ata0-master UDMA33
ad4: 238475MB <WDC WD2500KS-00MJB0 02.01C03> at ata2-master SATA150

Your ad0 drive shows up on a UDMA33 bus, which has a maximum transfer rate of 33 MB/s, so that could be a bottleneck right there.
 
Voltar said:
Your ad0 drive shows up on a UDMA33 bus, which has a maximum transfer rate of 33 MB/s, so that could be a bottleneck right there.
I think it's something else... I don't think it should make the system less responsive, because the DMA controller should be handling the data transfer, but that's just my guess.
 
Voltar: The closest we have to hdparm is atacontrol.
Try "atacontrol mode device" to see what mode a drive is in, e.g. "atacontrol mode ad0" - both should be SATA150 or SATA300.

Just for the sake of covering a few basics, what sort of transfer rates do you get with a straight dd if=/dev/(the disk) of=/dev/null bs=1M count=1k for each drive?
How does gstat look when you're copying from one drive to another? What percentage busy, and what transfer rates?
If you look at top, does it spend a lot of time in System?
Oh, and what SATA controller are you using?
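For reference, a quick way to watch all of this while a copy is running; the da0/da1 names are just taken from the dmesg above, and the -f regex filter assumes a gstat that supports it:

Code:
# Refresh every second (interval in microseconds), showing only the two disks
gstat -I 1000000 -f '^da[01]$'
# In another terminal: the header line shows time spent in system/interrupt
top -S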
 
Djn said:
Voltar: The closest we have to hdparm is atacontrol.
Try "atacontrol mode device" to see what mode a drive is in, e.g. "atacontrol mode ad0" - both should be SATA150 or SATA300.

Code:
[root@genesis ~]# atacontrol mode da0
atacontrol: Invalid device da0
[root@genesis ~]# atacontrol list
ATA channel 0:
    Master:      no device present
    Slave:       no device present
ATA channel 1:
    Master:      no device present
    Slave:       no device present
[root@genesis ~]#

Maybe it doesn't work for drives attached to RAID cards?


Just for the sake of covering a few basics, what sort of transfer rates do you get with a straight dd if=/dev/(the disk) of=/dev/null bs=1M count=1k for each drive?

First drive...
Code:
[root@genesis ~]# dd if=/dev/da0 of=/dev/null bs=1M count=1k
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 14.917110 secs (71980553 bytes/sec)
[root@genesis ~]# dd if=/dev/da0 of=/dev/null bs=1M count=1k
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 15.270414 secs (70315174 bytes/sec)
[root@genesis ~]#
68.6 MB/s & 67.05 MB/s, not too shabby.

Second drive...
Code:
[root@genesis ~]# dd if=/dev/da1 of=/dev/null bs=1M count=1k
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 9.584325 secs (112031031 bytes/sec)
[root@genesis ~]# dd if=/dev/da1 of=/dev/null bs=1M count=1k
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 9.600573 secs (111841430 bytes/sec)
[root@genesis ~]#
106.8 MB/s & 106.66 MB/s, very good.

How does gstat look when you're copying from one drive to another? What percentage busy, and what transfer rates?
Activity on a single disk runs about 95-99% busy at 3000-5000 kBps. Disk-to-disk is 100% busy on the writing drive at ~500 kBps, and maybe ~10-15% busy on the reading drive at ~3000 kBps.


If you look at top, does it spend a lot of time in System?
~2% - 4% max


Oh, and what SATA controller are you using?

An LSI Logic SAS 3000, or something close to that, according to phpSysInfo.
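For a more definite answer than phpSysInfo, the controller can be read straight off the PCI bus with stock tools (the grep patterns here are just guesses at the relevant strings):

Code:
# List PCI devices with vendor/device names resolved
pciconf -lv | grep -B 3 -i lsi
# The mpt(4) attach lines in dmesg name the exact chip
dmesg | grep -i mpt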


And bonnie++ benchmarks...

Code:
[root@genesis /usr/home]# bonnie++ -d /usr/home/tmp/ -u username
Using uid:1001, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.93d       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
genesis.volt-ho 16G   563  98  6664   1  5654   1   965  98 74466  13 154.0   7
Latency             16240us     952ms    7211ms   31179us    1089ms    1306ms
Version 1.93d       ------Sequential Create------ --------Random Create--------
genesis.volt-hostin -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1225   5 +++++ +++ 21436  26  1529   3 +++++ +++ 31361  40
Latency              2181ms     303us     314us    2782ms     318us     634us
1.93c,1.93d,genesis.volt-hosting.net,1,1230977158,16G,,563,98,6664,1,5654,1,965,98,74466,13,154.0,7,16,,,,,1225,5,+++++,+++,21436,26,1529,3,+++++,+++,31361,40,16240us,952ms,7211ms,31179us,1089ms,1306ms,2181ms,303us,314us,2782ms,318us,634us
[root@genesis /usr/home]#
That took about two hours to complete.
 
Ah, yes - atacontrol is just for ATA and SATA drives; the comparable tool for the SCSI subsystem (which handles things like RAID controllers as well) is camcontrol. I haven't ever needed to use it, so I can't offhand say anything useful about it (except "see if there's anything interesting in the manpage", obviously).
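For anyone in the same spot, a few camcontrol subcommands from the base system that are safe to run read-only (the da0 name is taken from the dmesg above):

Code:
# Enumerate the devices the CAM layer sees
camcontrol devlist
# Standard SCSI INQUIRY data for the first disk
camcontrol inquiry da0
# Show the tagged command queueing settings
camcontrol tags da0 -v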

The raw numbers look decent and it's indeed not CPU limited - I do wonder what it's spending its time on ...
 
Maybe my eyes are failing, but I've upgraded to 7.1-RELEASE and started using gpt on my ATA drive (this might just be a coincidence), and it can now write faster (according to gstat).

It's now up to 27 MB/s; on 7.0-pX it was 14-16 MB/s max.


In a few hours I will upgrade my SATA drive to use gpt as well.


EDIT: Updated; I just saw my ATA HDD run at 27 MB/s.
 
I don't think it's you. I upgraded from the RC version to the release, and my speeds seem better after a 7.1-RELEASE kernel recompile. It's only been ~22 hours since I installed the new kernel, but so far I've seen better performance while installing game servers.
 
BTW, with gpt I was in full control of the FS layout [I like it]. It was just a matter of basic math to put the filesystems that need the most speed on the outer tracks of the HDD.

My FS layout is now (starting from the inner tracks and moving outward); a rough gpt sketch follows the list:
Code:
BOOT - not /boot, but loader
root
/usr/ports
/usr/src
/var
/usr
/home
/home/Files/archive
/home/Files/archive/music
/tmp
swap
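For anyone wanting to reproduce this, a sketch using the FreeBSD 7.x gpt(8) tool; the ad0 device and the sizes are placeholders, boot setup is omitted, and note that -s counts 512-byte sectors, so the "basic math" really is done by hand:

Code:
# Write a fresh GPT to the disk (this destroys the existing partitioning!)
gpt create ad0
# Partitions are laid out in the order they are added;
# -s is a sector count: 2097152 sectors x 512 B = 1 GB
gpt add -s 2097152 -t ufs ad0     # root, 1 GB
gpt add -s 4194304 -t ufs ad0     # /usr, 2 GB
gpt add -s 2097152 -t swap ad0    # swap, 1 GB
# Review the resulting layout
gpt show ad0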
 
I might try that on my desktop that's running FreeBSD, but I don't think I'll repartition my servers for it; the downtime would be unacceptable right now. Looks interesting, though.
 