ZFS: Availability of /dev/diskid

Hello,

For the last 5 years I have been able to answer my questions by searching, and there was no need for an account. That time seems to have passed.

I know there is nothing like /dev/disk/by-id on FreeBSD the way there is on Linux when playing around with OpenZFS, BUT the FreeBSD 11.2 live system gives me /dev/diskid, which seems to serve the same purpose: not relying on ZFS's own logic to "figure out" where and what my disks are and to import the pool correctly. This seems to be a problem mainly when the pool wasn't exported on a different system and I try to re-import it using the zpool import -d command. There is always some disk or disks that are unavailable, and the pool ends up degraded or faulted as a result.
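
For reference, the kind of command I mean is roughly this ("mypool" is just a placeholder for the real pool name):

# zpool import -d /dev                 (scan only the given directory and list importable pools)
# zpool import -d /dev mypool          (attempt the actual import using the devices found there)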

/dev/diskid/ solved that on the FreeBSD live system and I was able to access the pool.

Does anybody know what manages that directory on a regular FreeBSD install? /dev/diskid doesn't seem to be there by default, and I would like to use it to import my pool on my 'real' system now. (Although, through some magic, all the disks in all the pools are currently discovered via their /dev/ada* and /dev/da* paths ... still, I would like the extra layer of abstraction so I won't run into this problem again at some later point.)

Any hints are greatly appreciated.
 
There is a set of switches under kern.geom.label (try sysctl -d kern.geom.label and sysctl kern.geom.label) that controls which names each device is made available under. And remember that once you start using a device/partition via one of its names, the others will be hidden until the device/partition is no longer in use.
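
A minimal sketch of what I mean, assuming kern.geom.label.disk_ident.enable is the knob that populates /dev/diskid (check the sysctl -d descriptions on your system to be sure):

# sysctl kern.geom.label                          (show the current values of all the label knobs)
# sysctl kern.geom.label.disk_ident.enable=1      (enable the serial-number based diskid names)

To have it set from boot onwards, the equivalent line in /boot/loader.conf would be kern.geom.label.disk_ident.enable="1".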
 
For the last 5 years I have been able to answer my questions by searching, and there was no need for an account. That time seems to have passed.
Welcome to the forums! There are more reasons to create an account than just asking questions, though; sometimes it's also about sharing experiences and such :)

BUT the FreeBSD 11.2 live system gives me /dev/diskid, which seems to serve the same purpose: not relying on ZFS's own logic to "figure out" where and what my disks are and to import the pool correctly.
Actually ZFS still does that on FreeBSD as well. If you need direct access to a specific disk or partition you can always use the device nodes themselves, things like /dev/ada0 or /dev/ada0p1 for example.

This seems to be a problem mainly when the pool wasn't exported on a different system and I try to re-import it using the zpool import -d command.
Why use that anyway? # zpool import -fR /mnt <pool> should do the trick just fine: force the import and you should be all good. Specifying a device directory really doesn't change this behavior all that much. The warning you get is merely there as a failsafe to ensure that you don't accidentally change important settings (such as the mountpoint) on a permanent basis.
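
Spelled out a little more, I'd try something along these lines (the pool name is just an example):

# zpool import                         (list the pools ZFS can see, without importing anything)
# zpool import -f -R /mnt mypool       (force the import under an alternate root, so nothing is mounted over the running system)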

There is always some disk or disks that are unavailable, and the pool ends up degraded or faulted as a result.
Then I cannot help but wonder whether you actually set up the pool correctly, and I'd question whatever installation procedure you followed. Worst-case scenario is that you're suffering from actual hardware problems and are now "blaming" ZFS for that. That's the way it sounds to me at least.
 
Hi, thanks for your reply!

I know I can import an unexported pool with -f and specify a mountpoint, but something is just off with that pool. On one system it is detected (and imports), on the next it just isn't, and it also varies from boot to boot. For example, Fedora sometimes worked and then didn't; same thing with Debian. Vanishing disks as far as ZFS is concerned, while the disks themselves are fine and available (according to SMART).

The disks were once a root pool created by the FreeBSD installer (raidz1 with 5 disks). I don't think there is anything fishy about the way it is set up.

I also know about regular access through the device nodes, but beyond specifying a directory with -d and the disks still not being detected (-d /dev/ for example), I am shit out of luck if ZFS doesn't recognize the disks.

I am happy to debug this with interested people, but for now I will use the regular paths and try to switch over using the tips I got from bobi b.
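
In case it helps someone later, I assume the switch-over would look roughly like this (again, "mypool" stands in for the real pool name):

# zpool export mypool                  (release the pool so its device paths can change)
# zpool import -d /dev/diskid mypool   (re-import it using only the diskid device nodes)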
 
On one system it is detected (and imports), on the next it just isn't, and it also varies from boot to boot.
Interesting. I don't have a theory for the problems happening between boots (other than what I mentioned earlier), but I'm not too surprised to see different behavior across operating systems. Even though ZFS is generally referred to simply as 'ZFS', it actually comes in different pool versions, and older implementations usually cannot import pools created with newer versions.

So it's also perfectly possible for different operating systems to support different ZFS versions. For example, the ZFS pool I'm using right now on FreeBSD 11.2 wouldn't be usable on a system running, say, FreeBSD 10.2.
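
If you want to compare what the pool is at versus what each system supports, something along these lines should do (pool name is just an example):

# zpool get version mypool             (show the on-disk pool version; a '-' usually means feature flags are in use)
# zpool upgrade -v                     (list the versions and features this ZFS implementation knows about)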

Please note that I don't keep up with developments around Linux & ZFS, but I could well imagine that this also applies to that platform.

I also know about regular access through the device nodes, but beyond specifying a directory with -d and the disks still not being detected (-d /dev/ for example), I am shit out of luck if ZFS doesn't recognize the disks.
Ayups. Which is what I always refer to as the downside of ZFS.

It is a robust & reliable filesystem, but it also has several weaknesses. For example: trash the boot sector of a ZFS partition (or disk) and you may end up destroying the entire pool (and thus all filesystems on it), depending of course on the way it was set up. That risk is far smaller with UFS (which, of course, has its own set of issues).

This is one of the reasons why ZFS isn't always the best option.
 