Can I make a FC SAN with FreeBSD?

If I can make a RAID with FreeBSD, can I install a Fibre Channel card and have it serve up LUNs to clients? Just curious. I don't want iSCSI for this purpose, even though that seems to be the more accepted and easier route.

Thanks for your thoughts.
 
At least one of the Fibre Channel drivers supports target mode. The man page for isp(4), used for QLogic cards, suggests that it supports target mode (with a kernel rebuild).
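If I'm reading the man page right, the kernel rebuild would be along these lines (untested on my side, so treat it as a sketch):

Code:
device isp              # QLogic Fibre Channel HBAs
device ispfw            # firmware module for the isp(4) driver
options ISP_TARGET_MODE # compile in target mode support

There also seems to be a loader hint to flip the port role over to target (hint.isp.0.role or similar), but check isp(4) for the details.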

It does say target mode isn't reliable, though (although that may only apply to the SCSI cards), and I can't find any information on how to make it work, so you may be better off with another OS for this. Fibre Channel support doesn't seem to be that good. We spent days trying to get a QLogic card to talk to an FC disk array (FreeBSD as the initiator) and gave up in the end, installing Linux, which connected as soon as the installer started booting.
 
dR3b said:
You can use Linux with LIO "http://linux-iscsi.org/wiki/Fibre_Channel"

If you're going to do this with Linux there are a couple of ways to do it. SCST is a good alternative to LIO.

COMSTAR on Solaris and its derivatives is also stable, performs well, and is easy to use. It may be worth considering that instead of having to use Linux. I have quite a few FC targets running on Solaris that are working very well.
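As a rough idea of how simple it is, exposing a ZFS zvol over FC with COMSTAR goes something like this (from memory, so double-check the man pages; the pool/zvol names are just examples, and the HBA has to be switched from the qlc initiator driver to the qlt target driver first):

Code:
svcadm enable stmf                          # turn on the SCSI target framework
zfs create -V 100G tank/fc-lu0              # backing store for the logical unit
sbdadm create-lu /dev/zvol/rdsk/tank/fc-lu0
stmfadm add-view <GUID-printed-by-sbdadm>   # make the LU visible to initiators
stmfadm list-lu -v                          # verify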
 
Very interesting. The whole point of this is to get cheap storage to an old (Solaris 9) box. We tried a D240 media server, but the RAID 5 it created is slow as dirt. Can't seem to get iSCSI to work on it, but I know FC will work.
 
I would say the following is what you are looking for: CAM Target Layer

This is the CAM Target Layer addition to FreeBSD, which enables you to serve up FC as a target from a FreeBSD server. It works with the isp(4) driver; I have played around with it for some time and it does what is required.

I have tried exporting ZFS ZVOLs as FC targets and that works; I have not yet experimented with raw disks. The README is pretty thorough, though, and should help you set up the same.
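Roughly what I did for the ZVOL case was along these lines (the pool and zvol names are just examples; see ctladm(8) for the full option list):

Code:
zfs create -V 200G tank/fc-lun0                        # zvol as backing store
ctladm create -b block -o file=/dev/zvol/tank/fc-lun0  # register it as a block-backed LUN
ctladm devlist                                         # the new LUN should show up here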

The changes are present in 9-STABLE and HEAD; I have not checked 9.1 to be sure that it is part of that release as well.
 
Thank you, I have plenty of things to try this weekend. I do have a QLogic QLE2462 HBA coming in Friday, so this should work out nicely. Hmmm... I have never done this without a switch; I wonder if you can connect one HBA directly to another?
 
meleehunt said:
Thank you, I have plenty of things to try this weekend. I do have a QLogic QLE2462 HBA coming in Friday, so this should work out nicely. Hmmm... I have never done this without a switch; I wonder if you can connect one HBA directly to another?


Look at FreeBSD 9.1-RELEASE Hardware Notes

My 2432 is detected. I don't know about your 2462.

Code:
isp0: <Qlogic ISP 2432 PCI FC-AL Adapter> port 0x5000-0x50ff mem 0xfdff0000-0xfdff3fff irq 19 at device 0.0 on pci19
isp1: <Qlogic ISP 2432 PCI FC-AL Adapter> port 0x5400-0x54ff mem 0xfdfe0000-0xfdfe3fff irq 16 at device 0.1 on pci19
 
Somari said:
The changes are present in 9-STABLE and HEAD; I have not checked 9.1 to be sure that it is part of that release as well.

It is included in 9.1-RELEASE. I wondered what it was when upgrading a 9.0 to 9.1 and looking through GENERIC to see what was new, and couldn't really work out what this new ctl was about. Now I know :) Cool stuff!
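For anyone else digging through GENERIC, the line that confused me is, if I remember it right, just:

Code:
device          ctl             # CAM Target Layer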

/Sebulon
 
Followup: The 2462 card seems to be seen fine; the hard part is getting Solaris 9 SPARC to play nice with any card I offer it. It appears you MUST use Sun-branded PCI-X cards (this is on a Blade 1000 and a V480).

I have started playing with camcontrol and ctladm; this is where things get kind of sticky.

I get the idea but if anyone has working syntax for a direct connection it would really help.

I'm thinking I create a LUN of x size on x target, then go to the other side and see if it sees it.
It's the presenting of the LUN that is not going so well.
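What I have pieced together so far from ctladm(8) looks roughly like the below, but I haven't verified it end to end yet, so the port options in particular are a guess:

Code:
ctladm port -l                  # list the CTL frontend ports
ctladm port -o on -t fc         # enable the Fibre Channel frontend
ctladm devlist                  # confirm the LUN is registered
# then rescan on the Solaris 9 side: cfgadm -al, devfsadm -C, format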

After that, it will be to see if I can find a fibre switch around here and enough cable to make an actual MPxIO or multipathed situation. But first things first: get comfortable with the LUNs. The RAID 10 is made from ZFS and has created 1.8 TB from 4 x 1 TB drives, which seems about right. They are el cheapo WD Greens but they work; not going to stay with them, but for testing it's great. I heard the Reds were much better and faster. After all, RAID ~ reliability, right?
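For reference, the pool is just two mirrored pairs, created more or less like this (the ada device names will differ on your box):

Code:
zpool create tank mirror ada0 ada1 mirror ada2 ada3
zpool list tank                 # shows roughly 1.8T usable for 4 x 1 TB drives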
 
meleehunt said:
Very interesting. The whole point of this is to get cheap storage to an old (Solaris 9) box. We tried a D240 media server, but the RAID 5 it created is slow as dirt. Can't seem to get iSCSI to work on it, but I know FC will work.

If so, I would go for OpenIndiana here. It will also be a benefit to have it in a Solaris environment. And you can still choose which way you want to send SCSI commands - over IP or via FCP.
 