Hi there! I've been playing with this setup for 4-6 weeks.
The system is:
ESXi 5.5 U2 + 4 NICs, round robin
FreeBSD 10.1
iSCSI
ZFS
I've tried lots of things and read tons of web pages, but the sequential read speed is only 50-60 MB/s from 6 x 7200 rpm HDDs (a stripe of 3 mirrors).
zvol volblocksize = 4k (creation sketched below)
NTFS allocation unit = 64k
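For reference, this is roughly how the zvol is created on my side (the pool name tank, the zvol name tank/vm0 and the size are just placeholders, and the iSCSI target config is omitted):

  # minimal sketch, assuming a pool "tank" and a zvol tank/vm0 (placeholder names)
  zfs create -V 500G -o volblocksize=4k tank/vm0
  zfs get volblocksize tank/vm0   # confirm which block size is actually in use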
I see exactly the same numbers in esxtop and in Iometer on Windows. In Iometer I use a pattern like this: block size = 512k, outstanding I/Os = 1, 2, 3, 4, 5, 6, 7, 8. I can get pretty good numbers for write speed (random or sequential) and for random read speed. However, sequential read speed is awful.
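In fio terms, the read pattern I'm testing looks roughly like this when run against the zvol directly on the FreeBSD box (the device path is a placeholder and this is only a sketch, not exactly what Iometer does):

  # sequential 512k reads at queue depth 8, similar to the Iometer pattern above
  fio --name=seqread --filename=/dev/zvol/tank/vm0 \
      --rw=read --bs=512k --iodepth=8 --ioengine=posixaio \
      --runtime=60 --time_based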
primarycache = metadata
secondarycache = metadata
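(Those two are set like this, with tank/vm0 again being a placeholder name:)

  # keep only metadata in ARC / L2ARC for this zvol
  zfs set primarycache=metadata tank/vm0
  zfs set secondarycache=metadata tank/vm0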
I don't use an SSD L2ARC yet because I want to get good read speed from the pool alone first, and only after that am I going to attach an SSD L2ARC. Well, I would like to know: is there anyone who can get good sequential read numbers? Honestly, I've read and spent tons of nights and days trying to figure this out, and failed. I tried FreeBSD, FreeNAS and OpenIndiana; the result is the same.
When I use volblocksize = 64k or 128k the speed is better, but I do not want to use those block sizes. There is a fundamental reason for that, as you may (or may not) know: with a big volblocksize (pretty much anything bigger than 4k), write performance becomes horrible. When you start writing to a zvol (8, 16, 32, 64 or 128k) with blocks smaller than the volblocksize, a huge amount of reads ("parasite" reads) hits the pool. Can you believe it? You just want to write, and instead you trigger massive reading, and as a result write speed drops horribly. It happens because the written block is smaller than the volblocksize, so to commit one I/O from the application ZFS has to read the whole block, change it and write it out again. Maybe my understanding is not entirely correct, but the fact is that writes start putting massive read pressure on the pool, and that is horrible. The only thing that helps is volblocksize = 4k; that reduces the effect significantly.
Of course, if ESXi supported 4k-sector storage disks we would all feel much better. Unfortunately, right now ESXi 5.5-6.0 does not support 4k drives, so we have to expose the zvol as if it were a 512b-sector drive. That means Windows thinks a 512b drive is there, and that definitely causes side effects.
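If anyone wants to see this effect, here is a rough way to reproduce it as I understand it (tank/test64k is a made-up name for a throwaway test volume):

  # throwaway zvol with a large volblocksize (placeholder name)
  zfs create -V 10G -o volblocksize=64k tank/test64k

  # fill it once so the 64k blocks actually exist on disk
  # (writes into never-written space would not trigger any reads)
  dd if=/dev/random of=/dev/zvol/tank/test64k bs=1m

  # now hit it with 4k random writes ...
  fio --name=rmw-demo --filename=/dev/zvol/tank/test64k \
      --rw=randwrite --bs=4k --iodepth=8 --ioengine=posixaio \
      --runtime=60 --time_based

  # ... while watching the pool from another terminal: the read column climbs
  # even though the workload is pure writes, because each 4k write makes ZFS
  # read the surrounding 64k block, modify it and write it back
  zpool iostat -v tank 1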
-----
To make a long story short: I did a lot of searching, and my feeling is that nobody can get good sequential READ performance from ZFS via iSCSI.
Am I right? And what do you think about that? Right now, with great sorrow, I am going to switch to Linux + ext3 + RAID + SCSI + bcache, but that's another story. I will share lots of useful information that I've gathered over these 6 weeks if you wish. It's just a lot.
Sincerely yours, Alex K.