Horrible iSCSI (istgt) performance

Hey all! Is there any possibility you could post some of your istgt.conf files as examples? I have a similar problem with bad performance - not a very powerful PC, but I expect better performance than what I'm getting...


iostat reports about 8MB/sec stable throughput....
 
tinusb said:
Hey all! Is there any possibility you could post some of your istgt.conf files as examples? I have a similar problem with bad performance - not a very powerful PC, but I expect better performance than what I'm getting...

iostat reports about 8MB/sec stable throughput....

When you say "not a very powerful PC" - what are we talking about here?

Do you get different throughput using scp?

Is your CPU spiking when you do your tests? If so, you may want to look for a better network card - one that doesn't hammer the CPU with interrupts for every packet.

Do your drives support the speed you are expecting? What about the bus that the drives are connected to?
 
dave said:
When you say "not a very powerful PC" - what are we talking about here?

Do you get different throughput using scp?

Is your CPU spiking when you do your tests? If so, you may want to look for a better network card - one that doesn't hammer the CPU with interrupts for every packet.

Do your drives support the speed you are expecting? What about the bus that the drives are connected to?

Hi,

Hadn't checked using scp until now...

SCP copy speed is:

2900KB/sec

No spiking of the CPU. The drives are SATA2, but they're connected via a SATA1 bay, so they can only run at SATA1 speeds - on the normal onboard controller.

Specs are:
CPU: AMD A8-3850 (2900MHz)
RAM: 8GB Mushkin DDR3-1333
MOBO: Gigabyte A75-D3H
HDD: WD10EARX + 2x WD10EZRX (thus 3x 1TB WD Green)
NIC: Realtek 8168/8111 Gigabit; and Intel Pro/1000 (we have the Intel cards in our own machines, and do manage to get 85MB/sec with those, though)

When I take a look at the performance graphs in ESXi, I see a maximum throughput at times of about 300Mbps - so only about 30% utilization of the Gbit connection? I do understand there are overheads, etc...

Regards
Tinus
 
olav said:
Actually, when it comes to iSCSI there is a HUGE difference between a good and a BAD NIC. I tested first with a Realtek NIC and then tried an Intel. I went from 5MB/s to 125MB/s.

Is it worth trying to switch the NIC to Intel?
 
Just to repeat the same thing again: you DID update QueueDepth to 64, right? Because out of everything I tested, these settings are all that matter.

Here are some example snippets:

Code:
[Global]
...
# QueueDepth is limited by this number, and I don't know if it is per connection or across all connections (there is basically zero documentation), so I raised it a bunch.
MaxR2T 256
...

[LogicalUnit1]
  ...
  # 64 was perfectly sufficient for VirtualBox clients, but something else I tried (maybe Proxmox) had errors until I doubled it again.
  QueueDepth 128
  ...
 
I've gotta chime in on this...

So there are some 'alternate' ways to do iSCSI.

First of all, make sure you've got your NICs set up with optimal MTUs (for FreeBSD it's best to set them to 8244).
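
Something along these lines on the FreeBSD side, for example (just a sketch - em0 and the address are placeholders for whatever your Intel NIC and storage network actually are, and the switch plus the ESXi vSwitch/VMkernel port have to allow jumbo frames as well):

Code:
# try it live first (example interface name)
ifconfig em0 mtu 8244

# then make it persistent in /etc/rc.conf
ifconfig_em0="inet 10.0.0.10 netmask 255.255.255.0 mtu 8244"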

Second, make sure you create your block devices with 4k or 8k alignment.
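
If your extents are ZFS zvols, for example, the block size has to be picked when the zvol is created - something like this (pool and dataset names are just placeholders):

Code:
# create the zvol backing the iSCSI extent with an 8k block size
zfs create -V 500G -o volblocksize=8k tank/iscsi/vol1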

Now for the good stuff...

Run multiple istgt instances on your target. :)

The istgt target daemon is essentially bound to a single thread.

To get around this, launch multiple istgt instances, each one serving a single block device.

Once you see the target devices on the initiator side, STRIPE ON THE INITIATOR. :)
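
Roughly what that looks like, as a sketch (paths, addresses and device names are placeholders, and this assumes a stock istgt that takes its config file with -c):

Code:
# two istgt instances, each with its own config, its own portal port,
# and each exporting a single block device
istgt -c /usr/local/etc/istgt/istgt-a.conf   # [PortalGroup1] Portal DA1 10.0.0.10:3260
istgt -c /usr/local/etc/istgt/istgt-b.conf   # [PortalGroup1] Portal DA1 10.0.0.10:3261

# on a FreeBSD initiator, once the two targets show up as e.g. da1 and da2,
# stripe them together and put a filesystem on the stripe
gstripe label -v st0 da1 da2
newfs /dev/stripe/st0

On a different initiator the same idea applies; just use whatever striping that platform gives you (mdadm on Linux, for instance).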

Try it.

Larry
 