zpool cache on CF fails

I have a bit of a mystery.

I have a CF card stuck in a CF-SATA adapter, connected to an SATA port on my ICH8 motherboard.

I have partitioned the CF card with GPT, and if I create a UFS partition on it, newfs and mount it, it works perfectly.
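
For reference, the setup was roughly this (flags from memory, and assuming the card shows up as ada2):

Code:
gpart create -s gpt ada2          # GPT partition table on the CF card
gpart add -t freebsd-ufs ada2     # creates ada2p1
newfs /dev/ada2p1
mount /dev/ada2p1 /mnt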

If I create a (test) zpool on the CF card, then create a filesystem and run a bunch of I/O against it, it works perfectly.

If I take that same partition and add it as a cache device to an existing mirrored pool, it fails, usually with timeouts almost immediately while mounting, ultimately ending in a kernel panic.

To recap:

Code:
zpool create temp ada2p1
zfs create temp/test
dd if=/dev/random of=/temp/test/foo
works.

Code:
zpool add tank cache ada2p1

fails.

WTF?
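
At least backing the change out is easy: cache vdevs, unlike data vdevs, are removable, so this should undo it once the pool imports:

Code:
zpool remove tank ada2p1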
 
To throw more fuel on the fire: a USB-to-CF adapter appears to work properly. It's just this particular SATA-to-CF adapter, and only when it's used as a ZFS cache disk.

Very weird.
 
aragon said:
What if you specify your cache device when you create your pool?

I didn't think of it at the time (I'm sort of new to ZFS), and now it's too late. :(
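
For the archives, I believe it would have looked something like this (the mirror member names are made-up placeholders; only tank and the cache partition are from my actual setup):

Code:
# cache vdev specified at pool creation instead of added later
zpool create tank mirror ada0p3 ada1p3 cache ada2p1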

Meanwhile, I returned the SATA to CF adapter and am using the USB one without any trouble.
 
shitson said:
Code:
zpool add tank cache ada2p1

Did you try:

Code:
zpool add tank cache ada2

No, I fibbed a little. I was actually using gptid labels instead of literally ada2p1. I never tried using the whole device instead of a partition.

Moving the device from the CF-SATA controller to the USB-CF controller and using the exact same gptid label worked. This is, in fact, what is attractive to me about gptid labels - disks can move around and it won't matter. The disks that actually have the two halves of the mirror both have gptzfsboot partitions on them, so even if one disk dies utterly, the system should still boot.
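
Concretely, the add looked more like this (the UUID below is a made-up placeholder, not my real gptid):

Code:
# glabel status lists the gptid label for each partition
glabel status | grep gptid
zpool add tank cache gptid/0a1b2c3d-0000-0000-0000-000000000000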

The only explanation I can come up with is some bizarre hardware incompatibility triggered by the particular I/O workload the ZFS cache layer (L2ARC) imposes, but I'm not sure I believe that.
 