always async on istgt + zvol with sync=standard

Hi,

After seeing strange NFS read speeds through some of my 1000Base-T NICs, I decided to try iSCSI on my box. However, with sync=standard on a zvol exported through istgt, data is not flushed safely to the zvol: unlike NFS, it never uses the ZIL and performs exactly the same as sync=disabled.

The write cache was disabled when the service started, but that doesn't seem to make a difference.
Code:
Starting istgt.
istgt version 0.4 (20111008)
normal mode
LU1 HDD UNIT
LU1: LUN0 file=/dev/zvol/vol/iscsi, size=536870912000
LU1: LUN0 1048576000 blocks, 512 bytes/block
LU1: LUN0 500.0GB storage for iqn.2011-03.example.org.istgt2:iscsi0
LU1: LUN0 serial storage22lun0
LU1: LUN0 read cache enabled, write cache disabled
LU1: LUN0 command queuing enabled, depth 32
LU2 HDD UNIT
LU2: LUN0 file=/dev/zvol/vol/storage1_iscsi, size=536870912000
LU2: LUN0 1048576000 blocks, 512 bytes/block
LU2: LUN0 500.0GB storage for iqn.2011-03.example.org.istgt2:iscsi1
LU2: LUN0 serial storage22lu2lun0
LU2: LUN0 read cache enabled, write cache disabled
LU2: LUN0 command queuing enabled, depth 32
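For what it's worth, you can also cross-check from the initiator side whether the LUN really reports its write cache as disabled. A rough sketch, assuming a FreeBSD initiator and that the iSCSI disk attached as da0 (the device name is an assumption):

```shell
# Dump the caching mode page (page 8) of the attached iSCSI disk.
# WCE: 0 in the output would mean the target advertises its
# write cache as disabled, matching the istgt log above.
camcontrol modepage da0 -m 8
```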

Below is my istgt.conf
Code:
[Global]
  Comment "Global section"
  NodeBase "iqn.2011-03.example.org.istgt2"
  PidFile /var/run/istgt.pid
  AuthFile /usr/local/etc/istgt/auth.conf
  MediaDirectory /var/istgt
  LogFacility "local7"
  Timeout 30
  NopInInterval 20
  DiscoveryAuthMethod Auto
  MaxSessions 16
  MaxConnections 4
  MaxR2T 32
  MaxOutstandingR2T 16
  DefaultTime2Wait 2
  DefaultTime2Retain 60
  FirstBurstLength 262144
  MaxBurstLength 1048576
  MaxRecvDataSegmentLength 262144

  # NOTE: not supported
  InitialR2T Yes
  ImmediateData Yes
  DataPDUInOrder Yes
  DataSequenceInOrder Yes
  ErrorRecoveryLevel 0


[UnitControl]

# PortalGroup section
[PortalGroup1]
  Portal DA1 0.0.0.0:3260

# InitiatorGroup section
[InitiatorGroup1]
  InitiatorName "ALL"
  Netmask ALL

# LogicalUnit section
[LogicalUnit1]
  TargetName "iscsi0"
  Mapping PortalGroup1 InitiatorGroup1
  AuthMethod Auto
  UseDigest Auto
  ReadOnly No
  UnitType Disk
  UnitOnline yes
  BlockLength 512
  LUN0 Storage /dev/zvol/vol/iscsi auto
  LUN0 Option WriteCache Disable
  LUN0 Option Serial "storage22lun0"

# LogicalUnit section
[LogicalUnit2]
  TargetName "iscsi1"
  Mapping PortalGroup1 InitiatorGroup1
  AuthMethod Auto
  UseDigest Auto
  ReadOnly No
  UnitType Disk
  UnitOnline yes
  BlockLength 512
  LUN0 Storage /dev/zvol/vol/storage1_iscsi auto
  LUN0 Option WriteCache Disable
  LUN0 Option Serial "storage22lu2lun0"

Changing to sync=always gave me consistent writes with no data loss, but performance was much slower (approximately 5 MB/s).

Is this a zvol issue or an istgt issue? Should sync=standard on a zvol use the ZIL?
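One way to check what's happening is to look at the effective sync property and then watch pool activity while writing from the initiator. A sketch, assuming the pool is named vol (as the zvol paths above suggest):

```shell
# Confirm the effective sync setting on the zvol backing the LUN:
zfs get sync vol/iscsi

# While running a write test from the initiator, watch per-vdev
# activity once per second; if sync=standard were being honored,
# you would expect to see writes land on a log device (if present)
# rather than only batched writes to the data vdevs:
zpool iostat -v vol 1
```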
 
@belon_cfy

Don't know about istgt in general. But if it doesn't quite do what it's supposed to, as you know, you can force synced writes with sync=always, and then add a powerful SLOG to bring performance back to an acceptable level.
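In command form, that suggestion might look like the following; a sketch only, assuming the pool is named vol and the SSD shows up as ada2 (both names are assumptions):

```shell
# Force synchronous semantics for every write to the zvol,
# regardless of what the iSCSI target requests:
zfs set sync=always vol/iscsi

# Dedicate a fast SSD as a separate log (SLOG) device so the
# forced sync writes don't crater throughput:
zpool add vol log ada2
```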

/Sebulon
 
Hi,

Yes, setting sync=always and adding an SSD as a SLOG did improve write performance. But I am still wondering why sync=standard doesn't work over iSCSI; could it be a zvol issue?

On NFS, sync=standard gave me synchronous writes with no data loss.
 
Hi @rkagerer!

I thought your forum thread was a very interesting read, especially the part about the program that showed exactly how many writes disappeared. Imagine someone running a big-ass database and suffering a power loss, then getting calls from upset customers because ~7000 commits never got flushed :S

Although I have the sense that running NFS with sync=standard is effectively the same as iSCSI with sync=always, since VMware forcibly mounts an NFS share with -o sync, causing ZFS to honor that request and send everything through the ZIL anyway.

/Sebulon
 