UFS: Accidentally mounted zroot on UFS system

I wanted to take a look at an old drive with a ZFS install, so I ran zpool import instead of zpool import -R. It appears the old zroot pool has now mounted itself over my UFS system. How might I go about safely detaching it?
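For reference, what I should have run is something along these lines (just a sketch; /mnt is only an example altroot):

zpool import -R /mnt zroot    # altroot: every dataset mountpoint gets /mnt prepended
zpool import zroot            # what I actually ran: mountpoints apply as-is, on top of the live system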
 
Ouch. These are just my opinions; please wait for others to chime in before doing anything I suggest.

I think it basically means your old directories have been mounted over by the ZFS bits, so UFS /bin is overlaid with ZFS /bin. I don't think it would be a "union" mount, but a complete overlay.
Given that, I think a simple zpool export should work and get you back to your UFS /bin (and the others).

Worst case: shut down, physically disconnect the ZFS device, and reboot.
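If it were me, I'd try something along these lines first (untested sketch; "zroot" taken from your post):

zfs list -o name,mounted,mountpoint -r zroot   # see which of the pool's datasets are actually mounted
zpool export zroot                             # unmounts all of the pool's datasets and exports the pool
mount                                          # check that the UFS mounts are visible again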
 
I think it basically means your old directories have been mounted over by the ZFS bits, so UFS /bin is overlaid with ZFS /bin. I don't think it would be a "union" mount, but a complete overlay.
And that's certainly correct.

I second the opinion(!): try zpool export first. If you're unlucky and it doesn't work (because something has already locked files on the ZFS pool, whatever), I guess my next action would be a forced poweroff, IOW, pulling the plug. ZFS shouldn't have issues with that, and neither should UFS, at least if SU+J is active. But it's still a bit of a gamble.
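If you want to check whether SU+J is actually active before gambling, something like this should tell you (a sketch):

mount | grep ufs   # look for "journaled soft-updates" in the mount options
tunefs -p /        # prints the UFS flags for the root filesystem, including soft update journaling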
 
And that's certainly correct.
Thanks for confirming. I'm long past my experimenting days where I would actually try something like this. I just want my systems to keep on working with minimum downtime (I try to keep it to a reboot after security updates), so I don't have extra systems around anymore to try "weird stuff" on.

But this is an example of why I like ZFS and boot environments. "bectl mount" lets one temporarily mount a BE to muck around in. I "grew up" on FreeBSD with UFS and love the longevity of it (the handling of the move to 64-bit inodes was very impressive), but I'm very much a ZFS convert.
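The BE dance is roughly this (a sketch; the BE name and mountpoint are made up):

bectl list                      # show the available boot environments
mkdir -p /tmp/be
bectl mount 13.2-p4 /tmp/be     # temporarily mount that BE at /tmp/be to poke around in
bectl umount 13.2-p4            # detach it again when done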
 
I ended up shutting down and pulling the drive. The system recognized this and corrected itself on the next boot.

I'll test out zpool export in a more controlled environment so I can have a method for dealing with this without a shutdown.

But this is an example of why I like ZFS and boot environments. "bectl mount" lets one temporarily mount a BE to muck around in. I "grew up" on FreeBSD with UFS and love the longevity of it (the handling of the move to 64-bit inodes was very impressive), but I'm very much a ZFS convert.
I'm also quite a ZFS fan. Not having to worry about switching around SATA ports and having the boot come to a halt is a plus.
 
I'll test out zpool export in a more controlled environment so I can have a method for dealing with this without a shutdown.
I would strongly recommend not to do any further testing here. Mounting some different system over your running live system is just something you should never do. Maybe with less fatal consequences than e.g. overwriting your currently used system disk with dd (and you certainly don't want to test that?), but still on a similar level – just avoid it.
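If you ever do need to poke at a foreign pool from a running system, the usual way to keep it away from the live filesystem is something like this (a sketch; the dataset name is a guess):

zpool import -o readonly=on -R /mnt -N zroot   # read-only, altroot /mnt, don't mount anything yet
zfs mount zroot/ROOT/default                   # mount just the dataset you want to look at (it lands under /mnt)
zpool export zroot                             # detach it again when done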
 
I would strongly recommend not to do any further testing here. Mounting some different system over your running live system is just something you should never do. Maybe with less fatal consequences than e.g. overwriting your currently used system disk with dd (and you certainly don't want to test that?), but still on a similar level – just avoid it.
I'm referring to testing it in something like a virtual machine.

Say this happened on another, perhaps more critical, system. Should I still pull the plug there, or is there a soft recovery?

Why did the system allow me to do this anyway? I can understand FreeBSD can be a bit unforgiving in what you ask it to do, but it seems strange that it can just mount over a live system like that.
 
Why did the system allow me to do this anyway? I can understand FreeBSD can be a bit unforgiving in what you ask it to do, but it seems strange that it can just mount over a live system like that.
Were you logged in as root (or using sudo/su) when you did this? Well, that is your answer.
You can do the same thing even with UFS filesystems, so it's not a ZFS vs UFS thing. It's just a case of "you are root, you can do whatever you want, so be careful with your powers".
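A harmless way to see the same thing happen with UFS, using a throwaway memory disk instead of a real drive (a sketch; the paths and md unit are made up):

mkdir -p /tmp/demo && touch /tmp/demo/hello   # a directory with some content
mdconfig -a -t swap -s 32m                    # creates and prints a memory disk, e.g. md0
newfs /dev/md0                                # put an empty UFS filesystem on it
mount /dev/md0 /tmp/demo                      # the empty filesystem now hides /tmp/demo/hello
ls /tmp/demo                                  # the original file is invisible while mounted over
umount /tmp/demo && ls /tmp/demo              # hello is back
mdconfig -d -u 0                              # destroy the memory disk again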
 
Were you logged in as root (or using sudo/su) when you did this? Well, that is your answer.
You can do the same thing even with UFS filesystems, so it's not a ZFS vs UFS thing. It's just a case of "you are root, you can do whatever you want, so be careful with your powers".
Sure. What I mean is: when I ask the system for a file, who is it asking to provide it? Presumably a filesystem driver. Wouldn't the system notice that you have two mount points at the same location and throw a permission denied or something?

I'm assuming the filesystem structure is monolithic, and in this case you can just overlay it with another one, and whichever is the most recent "overlay" will take priority.
 
I'm assuming the filesystem structure is monolithic, and in this case you can just overlay it with another one, and whichever is the most recent "overlay" will take priority.
Yep, unless you explicitly "union mount" the second one. Then you get the union of both; by default the newly attached layer sits on top and takes precedence (mount_unionfs has a "below" option to flip that). As in: both filesystems have "/bin/blah", and ls /bin/blah gets you /bin/blah from the upper layer.
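For completeness, a union mount looks roughly like this (a sketch from memory of mount_unionfs(8); the paths are made up):

mount -t unionfs /tmp/upper /tmp/lower            # /tmp/upper is layered on top of /tmp/lower
mount -t unionfs -o below /tmp/extra /tmp/lower   # or attach the new layer underneath instead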


Wouldn't the system notice that you have two mount points at the same location and throw a permission denied or something?
Good question, but I think that may be a different discussion. I've always known/understood that as root I can do "rm -rf /", so maybe my tolerance for "what should/should not happen" is different. I'm not saying you are wrong, more "it's a different discussion, worth having, but the expectations clearly need to be defined".
 