I have a FreeBSD 11.1 system in the public cloud using block storage. The system and storage work without any issues except one: my ZFS pool does not automatically mount like it should. After each reboot I need to manually mount the storage with
[FONT=courier new]zfs mount -a[/FONT]. I'm not sure how to interpret the following iSCSI errors in [FONT=courier new]dmesg[/FONT]; I'm guessing it's some kind of timeout issue, because after the system finishes booting and I've logged in a few minutes later, I can manually mount and use the cloud block storage without any problem.
[FONT=courier new]da0 at iscsi1 bus 0 scbus3 target 0 lun 1
da0: <ORACLE BlockVolume 1.0> Fixed Direct Access SPC-4 SCSI device
da0: 150.000MB/s transfers
da0: Command Queueing enabled
da0: 262144MB (536870912 512 byte sectors)
(da0:iscsi1:0:0:1): READ(6)/WRITE(6) not supported, increasing minimum_cmd_size to 10.
(da0:iscsi1:0:0:1): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
(da0:iscsi1:0:0:1): CAM status: SCSI Status Error
(da0:iscsi1:0:0:1): SCSI status: Check Condition
(da0:iscsi1:0:0:1): SCSI sense: ILLEGAL REQUEST asc:20,0 (Invalid command operation code)
(da0:iscsi1:0:0:1): Error 22, Unretryable error[/FONT]
I know that ZFS pools are supposed to mount automatically at boot, but mine doesn't, and I'm thinking it's because the storage hasn't finished attaching when ZFS starts up, which would explain why mounting it manually a few minutes later works. My /etc/rc.conf looks like this:
[FONT=courier new]#enable iSCSI below
ctld_enable="yes"
iscsid_enable="YES"
iscsictl_enable="YES"
iscsictl_flags="-Aa"
#enable ZFS
zfs_enable="YES"
#start the jails on ZFS
ezjail_enable="YES"[/FONT]
Here is what my /etc/iscsi.conf configuration looks like:
[FONT=courier new]b0 {
TargetAddress = 169.254.1.2:3260
TargetName = iqn.1234567890greatbignumber
AuthMethod = CHAP
chapIName = ocid1.volume.morestuffhere
chapSecret = supersecret
}[/FONT]
After encountering this problem and doing some investigation, I created a [FONT=courier new]/etc/ctl.conf[/FONT] file, thinking the iSCSI virtual device wasn't attaching correctly because maybe the iSCSI target config didn't exist, but creating this config didn't seem to make any difference. My ctld(8) config, ctl.conf, looks exactly like the example in section "28.12.1. Configuring an iSCSI Target" of the Handbook, but with my own particular IQN, storage username and CHAP password. To be honest, I don't understand what [FONT=courier new]ctl.conf[/FONT] does compared to [FONT=courier new]iscsi.conf[/FONT] when they both seem to want mostly the same information.
I'm just a hobbyist and haven't attached any iSCSI devices to FreeBSD previously, so I may have just missed something.
Since it does mount manually and runs fine from then on, my workaround without a proper fix could be to simply put a delay into the boot process, mount the storage from a little script in /etc/rc.local, verify it's running, and then start my jails on the mounted ZFS pool. I would bet that little hack would work, but I'd much rather fix the root of the problem. If it helps, I don't believe the instance is using paravirtualized drivers for storage, since I'm running an emulated VMDK image on KVM. When I copy files to/from the cloud storage there haven't been any issues and it's very fast.
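For what it's worth, the hack I have in mind would look roughly like this; the pool name "tank", the timings, and the mount check are all placeholders, not something I've actually tested:

[FONT=courier new]#!/bin/sh
# /etc/rc.local sketch (untested): wait for the iSCSI disk, mount the
# pool, then start the jails. "tank" and the delay are placeholders.

sleep 60                     # give the iSCSI session time to attach

zfs mount -a                 # mount datasets of any imported pools

# only start the jails if the pool actually mounted
if [ "$(zfs get -H -o value mounted tank)" = "yes" ]; then
    service ezjail start
else
    echo "ZFS pool tank did not mount" | logger -t rc.local
fi[/FONT]

That would at least get the jails up after every reboot, but it just papers over whatever ordering problem is really going on.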