Solved: Moving ZFS disks to another PC

Hello Everyone,
The scenario described below is on my playground PCs. I do all this to try things and gain experience. There is no valuable data involved.

I was recently given a 2-port 3Ware PCI SATA RAID controller. Amongst many other things I tried, I used two identical make/model disks and configured them as JBOD, then sliced them up and created zmirrors from the slice pairs as follows: twe0s1+twe1s1 = zmirror1, twe0s2+twe1s2 = zmirror2, twe0s3+twe1s3 = zmirror3. I installed and booted FreeBSD 10.3R/amd64 onto zmirror1 (I was unable to boot another FreeBSD 10.3R system from zmirror2 or zmirror3 due to the limitations of zfsboot). Most of the things I wanted to play with worked out as expected, then I ran into this problem:
  • Assume a computer failure: the 3ware controller and the motherboard housing it are damaged.
  • Task: recover data from zmirror1.
  • Attempted solution: I removed the two disk drives from PC1 and connected one of them to PC2 (there is no 3ware adapter in PC2, so the disk is connected directly to the motherboard). Booted FreeBSD 10.3R/amd64 from the system disk that was originally in PC2.
    Code:
    gpart show ada2
    lists the 3 slices I had created for zmirror1, zmirror2 and zmirror3 on the two disks when they were in PC1. Then I was hoping to mount the zmirror1 volume (well, the one member of it) as read only, but was unable to do so.
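For reference, in case the exact layout matters: the slices and pools were created roughly along these lines while the disks were still behind the 3ware controller. Treat this as a sketch from memory; the slice sizes are placeholders, the same gpart steps were repeated on twe1, and zmirror1 itself was set up by the installer.
Code:
gpart create -s MBR twe0
gpart add -t freebsd -s 20G twe0
gpart add -t freebsd -s 20G twe0
gpart add -t freebsd -s 20G twe0
zpool create zmirror2 mirror twe0s2 twe1s2
zpool create zmirror3 mirror twe0s3 twe1s3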
So the task is to read-only mount the zfs partition that was once a member of zmirror1, to access files on it. On PC2, /etc/rc.conf contains
Code:
zfs_enable="YES"
yet neither zpool list nor zpool status shows any pools (the booted system itself is on UFS with a BSD partition scheme). Also, zfs list says there is no ZFS filesystem available.
Attempting to access the zmirror1 data via a conventional mount -t zfs -r /dev/ada2s1 /mydisk fails with "no such device". Meanwhile, ls -l /dev/ada2* does show ada2s1 and all the other slices, and gpart show lists them too.
When the disks are reconnected to the 3ware controller in PC1, everything works fine. But I am unable to access the zmirror1 data (or even just the empty volumes of zmirror2 or zmirror3) in PC2.
  • What am I missing?
  • Why doesn't zpool list pick up the existing pool (or zfs list its filesystems) from ada2?
  • Is this because the vdevs were called twe??? when the pool was created in PC1, and that same vdev in PC2 is now named ada2?
Any explanation of the above, or a suggestion on how to read-only mount the ZFS volume on ada2s1, is most welcome.

Regards,
Keve
 
My guess: any storage device with a non-native name (like 'twe*') is a controller-proprietary thing and will only work on a controller of the same type (and sometimes the same model as well); it will not work when connected to a regular SATA controller. In this case, "work" refers to the on-disk storage format (the bits stored on the device), not the hardware.
This is speculation from my side, it has not been verified. YMMV.
 
Some RAID cards do store metadata on the drive, either at the beginning or the end. Either placement can complicate drive portability, because the RAID card hides that metadata from the host, and the storage it presents does not include it. Move the drive to a different controller that does not hide the metadata, and the partition tables end up in the wrong place.
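One way to check whether that is what happened here is to look for the ZFS vdev labels directly on the slice after the move; this is just a diagnostic sketch, using the device name from the first post:
Code:
zdb -l /dev/ada2s1
ZFS keeps four copies of the label, two near the start and two at the end of the device. If zdb can print them, the on-disk layout survived the move and zpool import should find the pool; if it prints nothing, the data is probably shifted by hidden controller metadata.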
 
The correct and only way to scan for pools on any connected disks is:

# zpool import

The other commands you tried (zpool list, zfs list, etc.) only work on pools that have already been imported ("mounted") into the system.
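
If the pool shows up in that listing, a read-only import should give you access to the files without risking any writes. Untested on your exact setup, but something like this should do (zmirror1 is the pool name from your first post; -f is only needed because the pool was never exported from the dead machine, so it will look "in use"):

# zpool import -f -o readonly=on zmirror1
# zfs list -r zmirror1

You can also add -R /mnt to the import if you want the datasets mounted under /mnt instead of their original mountpoints.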
 
3ware uses a proprietary disk format, so these disks will only work on another 3ware controller. I had the exact same problem when my old 3ware controller died: none of the 4 disks were readable on other controllers or normal HBAs.
If # zpool import doesn't show the pool(s), you need another 3ware controller (possibly even from a similar series). Best practice: use normal HBAs and avoid hardware RAID controllers like the plague.
 
It seems I failed to realize the importance of zpool import. Once I tried it (and thoroughly re-read the corresponding sections of the manual page), I managed to import my old pool on the new computer and mount it.
Also, it seems I was lucky with my particular kind of 3ware card: I needed no extra jiggery-pokery to get the pool from behind the failed card working on a motherboard-only system. Mine was a cheap kind of 3ware card; maybe that is why.
 
3ware uses a proprietary disk format, so these disks will only work on another 3ware controller. I had the exact same problem when my old 3ware controller died: none of the 4 disks were readable on other controllers or normal HBAs.

Partially true.

If you configure a hardware RAID array, then yes, those disks (and the associated array) will only be usable on other 3Ware RAID controllers. I'm not sure whether an array can be migrated between 3Ware and Avago/LSI/MegaRAID controllers; they're all owned by Broadcom now, but I don't know whether the formats have been made compatible yet.

If you configure the disks as "single" drive arrays, or you configure the controller for JBOD support, then those disks can be used with any other controller. I know this because we have migrated systems from 3Ware-based hardware RAID5 and RAID10 arrays to software RAID10. Originally, we configured the disks as "single" drive arrays to get all the extra features of the controller (caching and whatnot), and then migrated those to the SATA controller on the motherboard when the 3Ware card died. No issues. We were actually very surprised to discover this worked, as we were under the impression that a "single" disk array would still use the 3Ware metadata.
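If anyone wants to double-check this on their own hardware before trusting it, a quick sanity check after moving a drive to the onboard SATA ports is to compare the capacity the OS reports with what the 3Ware presented, and to make sure the partition table is still found (ada2 is just the device name from the original post):
Code:
diskinfo -v ada2 | grep mediasize
gpart show ada2
If the controller had been reserving space for its own metadata, the size would come up different and gpart would typically report the partition table as corrupt or missing.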

Later, we started configuring the 3Ware controllers in JBOD mode to simplify things, with a final migration to the SATA controller on the motherboard (usually after a motherboard upgrade to SuperMicro, as the onboard controllers on the Tyan boards we used back then were crap).

(We've since moved to plain HBAs for the bigger servers and the onboard SATA ports for school servers.)
 
If you configure the disks as "single" drive arrays, or you configure the controller for JBOD support, then those disks can be used with any other controller.

Exactly this was the problem: there was no JBOD support available for that chipset. Single disks had to be configured as single-drive RAID0 stripes, resulting in a proprietary on-disk format. I tried using a newer controller without luck and gave up relatively quickly (the latest backup was only a few hours old anyway...)
 
Exactly this was the problem: there was no JBOD support available for that chipset. Single disks had to be configured as single-drive RAID0 stripes, resulting in a proprietary on-disk format. I tried using a newer controller without luck and gave up relatively quickly (the latest backup was only a few hours old anyway...)

Ah, that would have been a very old controller then. The ones we've used (9500, 9550, 9650) all supported a RAID level called "single" (not RAID0). Using that allowed the full "RAID" functionality of the controller without locking the drives into the controller.
 