Promise FastTrak 150SX4 under 9.1-RELEASE

I have an old Promise FastTrak 150SX4 in an IBM xSeries 235 (dual 2.4 GHz Xeon, 32-bit only).

It currently supports three SATA disks (250 GB, 1 TB, and 2 TB). The system boots from a 100 GB ATA disk on a separate (internal) controller.

I've just upgraded the box from 8.3-RELEASE to 9.1-RELEASE, and the switch to the new graid and CAM subsystems is causing me a few issues.

Firstly, I found after much trial and error under 8.3 that I needed to switch the disks from UDMA6 to UDMA5 for best performance. The system initially negotiates UDMA6 for all three disks, but at that mode I only get 10-15 MB/s transfer rates; under UDMA5 I've seen rates as high as 35 MB/s. I used to do the switch at boot with a small rc.d script that called atacontrol. Now that atacontrol is gone and camcontrol has taken its place, I haven't been able to work out the magic sequence of commands to make the switch. Can anyone help me?
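The old script was essentially just a loop over the disks calling atacontrol mode. Roughly this shape (a minimal sketch; the udma5fix name and the ad4/ad6/ad8 device names are placeholders, substitute your own):
Code:
#!/bin/sh
#
# Sketch of an rc.d script to force UDMA5 at boot under 8.x.
# ad4/ad6/ad8 are example device names.

# PROVIDE: udma5fix
# REQUIRE: FILESYSTEMS

. /etc/rc.subr

name="udma5fix"
rcvar="udma5fix_enable"
start_cmd="udma5fix_start"
stop_cmd=":"

udma5fix_start()
{
	for d in ad4 ad6 ad8; do
		# downshift each disk from the negotiated UDMA6 to UDMA5
		atacontrol mode ${d} UDMA5
	done
}

load_rc_config $name
: ${udma5fix_enable:="YES"}
run_rc_command "$1"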

Secondly, when I set up the Promise under 8.(something), I created two arrays, each a single disk configured as JBOD. These appeared as ar0 and ar1, and it was all nice and stable. I later added a third disk and migrated the system over to ZFS. I quickly discovered that on a 32-bit system this was a *really* bad idea, so I migrated it back to UFS.
I then upgraded to 9.1-RELEASE via freebsd-update. When the box came back up after the initial reboot, it only created /dev entries for the slices on one of the disks. For the other two it bizarrely insisted that they were RAID0 volumes and created r1 and r2 in /dev/raid. This seemed to be propagated back to the Promise controller as well, as it now shows the same configuration.

Interestingly, if I disable the Promise RAID support altogether via a tunable in loader.conf, the /dev slice entries for the two "RAID0" disks are created, but not for the first disk.
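For reference, the line I put in /boot/loader.conf is graid(8)'s per-metadata-format enable knob (as I understand it, setting kern.geom.raid.enable="0" instead would switch off graid entirely):
Code:
kern.geom.raid.promise.enable="0"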

I suspect there is something weird left on the disks, possibly from the time they spent under ZFS, that the graid driver is picking up, but I don't know what it is, where it lives, or how to fix it. Anyone have any ideas?
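For what it's worth, this is how I've been poking at it, and what I'm tempted to try next (a sketch only; ada1 is a placeholder, and the dd lines are destructive, so only after a good backup). graid keeps Promise metadata near the end of the disk and ZFS keeps labels at both ends, so zeroing a chunk at each end should clear both:
Code:
graid list                  # show what metadata graid has detected
diskinfo -v /dev/ada1       # note the size in sectors
# zero 2048 sectors at each end of the disk -- DESTRUCTIVE
dd if=/dev/zero of=/dev/ada1 bs=512 count=2048
dd if=/dev/zero of=/dev/ada1 bs=512 count=2048 \
    oseek=$(( $(diskinfo /dev/ada1 | awk '{print $4}') - 2048 ))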

My current solution is to back up the disks elsewhere and rebuild the three disks natively, without creating arrays for them on the Promise controller, but thanks to the first problem, the UDMA one, it's currently taking over 18 hours to copy 912 GB. <sigh>

Any help appreciated.
 
Found out how to set UDMA mode.

vk1kcm said:
Firstly, I found after much trial and error under 8.3 that I needed to switch the disks from UDMA6 to UDMA5 for best performance.

Found an answer for this one. Put the following into /boot/loader.conf, replacing the "2" with the id of the ata channel each disk hangs off.
Code:
hint.ata.2.mode="UDMA5"
I found the ids using:
Code:
sysctl -a | grep dev.ata

In my case I only had the one multi-channel device, so it was easy to find the ids, but with a more complex setup it would be much harder.
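One last note: after rebooting, camcontrol identify reports the negotiated transfer mode in its header lines, which makes it easy to check that the hint took effect (ada0 here is just an example):
Code:
camcontrol identify ada0 | head -n 2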
 