Solved: CTL Fibre Channel LUNs not seen by QLogic BIOS

Hello,

I configured CTL to provide Fibre Channel LUNs to other computers.
Everything works perfectly: the LUNs are accessible to the initiators, and they can be partitioned, formatted, etc.
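For context, a CTL FC export like this can be set up roughly as follows with ctladm(8) (a sketch only; the zvol path below is a placeholder, not my exact configuration):
Code:
# create a block-backed LUN (placeholder zvol path)
ctladm create -b block -o file=/dev/zvol/tank/fc-lun0
# bring the FC frontend port online (port 3 is the camtgt/isp0 port in the
# "ctladm port -l" output further down)
ctladm port -o on -p 3
# check the result
ctladm devlist
ctladm port -l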

But there is one exception: in the QLogic BIOS of the initiators, hitting Enter on "Scan Fibre Devices" hangs at "Checking Loop ID 0" for 30 seconds, then shows "No device present".

At the same time, the /var/log/messages file of the target reads this:

Code:
Jun 12 15:33:46 tserver kernel: isp0: Chan 0 PLOGX Logout PortID 0x500500 nphdl 0x1
Jun 12 15:33:46 tserver kernel: isp0: CTIO returned by f/w- Port Logout
Jun 12 15:33:46 tserver kernel: isp0: isp_handle_platform_ctio: CTIO7[119434] seq 0 nc 1 sts 0x29 flg 0x8002 sns 0 resid 0 FIN
And 30 seconds later:
Code:
Jun 12 15:34:57 tserver kernel: isp0: Chan 0 WWPN 0x2100001b32883921 PortID 0x500500 handle 0x1 cannot be found to be deleted

Here is some information that might be relevant:
  • The tested FreeBSD versions were 11.0, 11.0-p10, and 11.1-BETA1.
  • I tried FreeNAS 11, and it worked sometimes.
  • I tried targetcli-fb on Gentoo and it worked fine.
Sysctl information:
Code:
# sysctl dev.isp.0
dev.isp.0.topo: 3
dev.isp.0.loopstate: 10
dev.isp.0.fwstate: 3
dev.isp.0.linkstate: 1
dev.isp.0.speed: 4
dev.isp.0.role: 1
dev.isp.0.gone_device_time: 30
dev.isp.0.loop_down_limit: 60
dev.isp.0.wwpn: 2377900720055839877
dev.isp.0.wwnn: 2377900720055839877
dev.isp.0.%parent: pci2
dev.isp.0.%pnpinfo: vendor=0x1077 device=0x2432 subvendor=0x1077 subdevice=0x0138 class=0x0c0400
dev.isp.0.%location: slot=0 function=0 dbsf=pci0:2:0:0
dev.isp.0.%driver: isp
dev.isp.0.%desc: Qlogic ISP 2432 PCI FC-AL Adapter

ctladm ports:
Code:
# ctladm port -l
Port Online Frontend Name     pp vp
0    NO     camsim   camsim   0  0  naa.5000000a3f888301
1    YES    ioctl    ioctl    0  0  
2    YES    tpc      tpc      0  0  
3    YES    camtgt   isp0     0  0  naa.2100001b3212ec85

Has anyone already faced this problem, or does anyone have any idea?
Thank you
 
Oh good heavens, this has been plaguing me for years now. All my machines with QLogic HBAs worked OK (sometimes) when connected directly to my FreeNAS host, but barfed with timeouts whenever scanning through my switch (a Brocade 5000). I finally buckled down today to trace the FC transactions and figure out why, which is when I found this message. It looks like they've backported the fix to the 11.1 kernel in FreeNAS 11.1, at least if their Git repo is accurate, so I finally have a real reason to upgrade from 9.10.x. Here's hoping I can finally run my Alpha and SPARC machines from SAN, and boot my ESXi machine through the switch! I will report back when the upgrade completes and I've had a chance to check it out.
 
Alas, Solaris is still a little borked; it enumerates the devices fine, but when it comes to actual port logins, it times out. I can produce some details later when I've got some time, but for now I'm super happy it's working for VMware.
 
Hello,

Unfortunately this won't be of any help, but I have the same problem with Solaris and OpenIndiana x86_64.
I wanted to boot them from SAN, but it never worked.

Let's hope someone comes up with a solution.
 
OK, so: Solaris can enumerate all my LUNs (and so can the FCode boot code on the machine), but it seems to have some trouble actually connecting.

Example session:
Code:
localadm@isengard:~$ sudo fcinfo remote-port -s -p 2101001b32a1a639 2101001b323788c7
Remote Port WWN: 2101001b323788c7
        Active FC4 Types: SCSI
        SCSI Target: yes
        Port ID: 11700
        Port Symbolic Name: mirkwood.local:isp1
        Node WWN: 2000001b323788c7
        LUN: 0
          Vendor: FreeBSD
          Product: iSCSI Disk    
          OS Device Name: Unknown
        LUN: 2
          Vendor: FreeBSD
          Product: iSCSI Disk    
          OS Device Name: Unknown
        LUN: 7
          Vendor: FreeNAS
          Product: iSCSI Disk    
          OS Device Name: Unknown
        LUN: 8
          Vendor: FreeNAS
          Product: iSCSI Disk    
          OS Device Name: Unknown
        LUN: 9
          Vendor: FreeNAS
          Product: iSCSI Disk    
          OS Device Name: Unknown

localadm@isengard:~$ sudo cfgadm -al -o show_SCSI_LUN
Ap_Id                          Type         Receptacle   Occupant     Condition
c4                             fc           connected    unconfigured unknown
c5                             fc           connected    unconfigured unknown
c6                             fc-fabric    connected    unconfigured unknown
c6::2101001b323788c7,0         disk         connected    unconfigured unknown
c6::2101001b323788c7,2         disk         connected    unconfigured unknown
c6::2101001b323788c7,7         disk         connected    unconfigured unknown
c6::2101001b323788c7,8         disk         connected    unconfigured unknown
c6::2101001b323788c7,9         disk         connected    unconfigured unknown
c6::2101001b32a1a639           unknown      connected    unconfigured unknown
c7                             fc-fabric    connected    unconfigured unknown
c7::2100001b3281a639           unknown      connected    unconfigured unknown
c7::2101001b323788c7,0         disk         connected    unconfigured unknown
c7::2101001b323788c7,2         disk         connected    unconfigured unknown
c7::2101001b323788c7,7         disk         connected    unconfigured unknown
c7::2101001b323788c7,8         disk         connected    unconfigured unknown
c7::2101001b323788c7,9         disk         connected    unconfigured unknown

localadm@isengard:~$ sudo cfgadm -c configure c6::2101001b323788c7
 (several seconds go by)
cfgadm: Library error: failed to create device node: 2101001b323788c7: I/O error

localadm@isengard:~$ sudo dmesg
(snip)
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0 (fcp5):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x0 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0 (fcp5):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x2 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0 (fcp5):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x7 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0 (fcp5):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x8 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0 (fcp5):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x9 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0,1/fp@0,0 (fcp4):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x0 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0,1/fp@0,0 (fcp4):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x7 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0,1/fp@0,0 (fcp4):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x9 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0,1/fp@0,0 (fcp4):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x2 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:54:22 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0,1/fp@0,0 (fcp4):
Mar 15 08:54:22 isengard        INQUIRY to D_ID=0x11700 lun=0x8 failed: State:Timeout, Reason:Hardware Error. Giving up
Mar 15 08:55:30 isengard scsi: [ID 243001 kern.warning] WARNING: /pci@500/pci@0/pci@d/SUNW,qlc@0/fp@0,0 (fcp5):
Mar 15 08:55:30 isengard        Failed to create nodes for pwwn=2101001b323788c7; error=5

I can do an FC traffic dump from my switch a little later; I assume that will be more useful for seeing which packets are being issued and which ones are failing to return.
 
Interestingly, when the timeouts start hitting on the INQUIRY packets, I get a lot of this in my dmesg:

Code:
isp1: isp_handle_platform_ctio: CTIO7[12049c] seq 0 nc 0 sts 0x2 flg 0xa02 sns 0 resid 0 MID
isp1: isp_handle_platform_ctio: CTIO7[1205d0] seq 0 nc 0 sts 0x2 flg 0xa02 sns 0 resid 0 MID
isp1: isp_handle_platform_ctio: CTIO7[1205fc] seq 0 nc 0 sts 0x2 flg 0xa02 sns 0 resid 0 MID
isp1: isp_handle_platform_ctio: CTIO7[120628] seq 0 nc 0 sts 0x2 flg 0xa02 sns 0 resid 0 MID
isp1: isp_handle_platform_ctio: CTIO7[120654] seq 0 nc 0 sts 0x2 flg 0xa02 sns 0 resid 0 MID
isp1: isp_handle_platform_ctio: CTIO7[120680] seq 0 nc 0 sts 0x2 flg 0xa02 sns 0 resid 0 MID
 
I don't want to hijack your thread here, but I have a question that I just can't seem to find an answer to.

How do you perform LUN masking or limit access to certain LUNs from certain client systems? I know how to do this in Solaris, but I can't seem to find an answer for FreeBSD.
 
Hello,
That is not possible on FreeBSD, because it has not been implemented.
The only way I am aware of is to create vports, using hint.isp.X.vports (replace X with your port number(s)) in /boot/loader.conf, together with a fabric.
More information is available in the isp(4) man page.

Vports create additional (virtual) ports that you can then zone in your fabric.
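For example, something like this in /boot/loader.conf (a sketch only; the unit number 0 and the vport count 2 are placeholders to adjust for your hardware):
Code:
# /boot/loader.conf -- ask the isp(4) driver to create 2 NPIV virtual ports
# on isp0 at boot (both numbers are examples)
hint.isp.0.vports="2"
After a reboot, each virtual port has its own WWPN, so you can zone it separately in your fabric.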

I hope this is clear, as my English is not that good.
 
I appreciate your reply. I was using FreeNAS as a Fibre Channel storage server, but I wanted to get LUN masking set up somehow and thought maybe FreeBSD would be capable. I guess so much for that idea. I am trying to supply FC/FCoE storage to two ESXi clusters and figured LUN masking would avoid having to run two FC storage servers to divide storage between the two clusters.

I have played with Solaris 11.4, but I would prefer FreeBSD or Linux, and I am not entirely sure you can do it with Linux either. In my testing with FreeNAS and FreeBSD the performance was excellent. The only issue I ran into was high latency when FCoE was configured, but I later found out that was an issue on the ESXi side of things, and there is a quick edit to fix it.

I wish the FreeNAS community edition wasn't so hobbled. TrueNAS offers FC LUN masking, but I haven't been able to hack it enough to get it working.

Again, thank you for your reply.
 
Unfortunately I don't have any experience with targetcli. Most of my Linux experience is with SLES or Debian, but it is pretty limited at best.
If I had a good tutorial that I could go over and learn from, I imagine I would be OK.
I prefer to use ZFS, but have no experience with ZFS on Linux at all.
 