ZFS: Hopelessly lost with ZFS

I have now disconnected the USB drive.

Code:
    =>        63  468862065  ada0  MBR  (224G)
              63       1985        - free -  (993K)
            2048    1124352     1  ntfs  (549M)
         1126400  261750496     2  ntfs  (125G)
       262876896        288        - free -  (144K)
       262877184    1179648     3  !39  (576M)
       264056832       2048        - free -  (1.0M)
       264058880  204803248     4  freebsd  [active]  (98G)

    =>          0  204803248  ada0s4  BSD  (98G)
                0  197132288       1  freebsd-ufs  (94G)
        197132288    7670960       2  freebsd-swap  (3.7G)

Code:
    root@X1:~ $ zpool status
      pool: tank
     state: SUSPENDED
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
    config:

            NAME     STATE    READ WRITE CKSUM
            tank     UNAVAIL     0     0     0  insufficient replicas
            da0p1    REMOVED     0     0     0

    errors: 4 data errors, use '-v' for a list
    root@X1:~ $

I noticed I have zfs_enable in my rc.conf. I don't know why; it's probably been like that for ages.

Maybe I will try attaching the USB drive to a different laptop and see what happens.
 
I noticed I have zfs_enable in my rc.conf. I don't know why; it's probably been like that for ages.
zfs_enable tells the system to load the ZFS kernel module at boot time. I think this has something to do with licensing: this is actually OpenZFS on FreeBSD. The other way is to compile ZFS into a custom kernel (which is what I usually do). Also, when you have a loadable ZFS kernel module, it can be replaced with OpenZFS from ports...
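For anyone checking their own system, the knob in /etc/rc.conf is just:
Code:
    # /etc/rc.conf
    zfs_enable="YES"    # load zfs.ko and mount ZFS datasets at boot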
 
The best explanation of ZFS I heard was by Bryan Cantrill when he presented Solaris 10 at a dog & pony show at one of the hotels here about 20 years ago. ZFS is a volume manager and a filesystem wrapped up in one. Think of it as Linux LVM on steroids and a filesystem on steroids all in the same package.

People who say ZFS is limited on single-disk systems don't see the whole picture. Data integrity is ZFS' strength, so yes, mirrors and RAID are better than not. But when it comes to managing volumes, as Linux sysadmins (like me at $JOB) do with LVM, LVM + EXT4/XFS is a PITA compared with ZFS. Want to create a filesystem? It's a single zfs create versus lvcreate followed by mkfs, as sketched below.
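A rough side-by-side, with made-up names (vg0, tank/data):
Code:
    # LVM: create the volume, put a filesystem on it, mount it yourself
    lvcreate -L 10G -n data vg0
    mkfs.ext4 /dev/vg0/data
    mkdir -p /data && mount /dev/vg0/data /data

    # ZFS: one command; the dataset is created and mounted in one step
    zfs create tank/data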

Do you want to resize a logical volume on LVM? lvresize, then resize2fs. With ZFS it's dynamic: datasets draw from the shared pool, so there is usually nothing to resize at all.
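Sketched with the same example names:
Code:
    # LVM: grow the logical volume, then grow the filesystem inside it
    lvresize -L +5G /dev/vg0/data
    resize2fs /dev/vg0/data

    # ZFS: datasets share the pool's space; cap one with a quota if you want
    zfs set quota=20G tank/data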

If you go to https://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/, the last two paragraphs talk about simplified administration. That's ZFS' everyday strength. It's complex under the covers so you and I can manage storage more simply.
I guess it probably has a place given the huge size of modern storage. I used to use LVM on a single-disk AIX desktop box to manage volumes. It all depends on what you're using the system for.
 
It looks like I am not going to figure out how to mount my ZFS disk, so I'll probably boot from it and mount the partition from my other disk.

But at the moment ZFS has the pool mounted and I don't know how to unmount it.

What happens if I simply disconnect the USB enclosure?
 
It looks like I am not going to figure out how to mount my ZFS disk, so I'll probably boot from it and mount the partition from my other disk.

But at the moment ZFS has the pool mounted and I don't know how to unmount it.

What happens if I simply disconnect the USB enclosure?
Did you try zpool import?
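Run with no pool name it only scans and lists what is importable, so it's safe to poke at:
Code:
    zpool import         # scan attached devices, list importable pools
    zpool import tank    # then import the pool by name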
 
Maybe this is hardware related. If zpool-import(8) reports "devices are faulted in response to IO" (input/output) from a USB-attached external device, perhaps the USB port speed is to blame. Try USB 3.0, if you have a machine providing such a port.

But at the moment ZFS has the pool mounted and I don't know how to unmount it.

What happens if I simply disconnect the USB enclosure?
You want to zpool export tank, then unplug.
 
Maybe this is hardware related. If zpool-import(8) reports "devices are faulted in response to IO" (input/output) from a USB-attached external device, perhaps the USB port speed is to blame. Try USB 3.0, if you have a machine providing such a port.


You want to zpool export tank, then unplug.
I can't, because I get pool or dataset is busy.

Earlier I ran zfs mount -a to see what happened, but it looks like there isn't a zfs umount -a.
 
I can't, because I get pool or dataset is busy.
Just power down the machine. All file systems are unmounted, ZFS pools exported gracefully.

I can't, because I get pool or dataset is busy.

Earlier I ran zfs mount -a to see what happened, but it looks like there isn't a zfs umount -a.
Sure there is:
zfs-mount(8)
Code:
     zfs unmount [-fu] -a|filesystem|mountpoint

       -a  Unmount all available ZFS file systems.  Invoked automatically as
           part of the shutdown process.
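So the clean sequence before unplugging would be roughly:
Code:
    zfs unmount -a       # unmount all ZFS file systems
    zpool export tank    # then export the pool and unplug the enclosure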
 
Isn't there an option on zpool import to "reroot" the datasets?
Basically, you have a zpool that has BEs. If you import it straight, you overlay your existing BEs. But an import with a reroot would mean "what used to be /etc is now at /mnt/etc".

I'm going by memory so I don't recall the specific option, but it should be useful.
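If memory serves, it would be the altroot option, -R; something like:
Code:
    # import with every mountpoint re-rooted under /mnt
    zpool import -R /mnt tank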
 
Isn't there an option on zpool import to "reroot" the datasets?
Basically, you have a zpool that has BEs. If you import it straight, you overlay your existing BEs. But an import with a reroot would mean "what used to be /etc is now at /mnt/etc".

I'm going by memory so I don't recall the specific option, but it should be useful.
I've abandoned the idea of mounting the ZFS disk on my existing system and have now reversed the disks: booting from the ZFS disk, mounting what was the existing disk, and trying to recreate what was there while filtering out the junk that accumulated over the years.
 
I've abandoned the idea of mounting the ZFS disk on my existing system and have now reversed the disks: booting from the ZFS disk, mounting what was the existing disk, and trying to recreate what was there while filtering out the junk that accumulated over the years.
That's fine. It's not just a ZFS issue; any time you take a disk from one system and want to temporarily mount it on another, you need to be careful how you look at things and how you mount.
 
To be clear: we also don't agree with it! But we can see others deciding to stick with UFS if they want, and we don't really see the need to argue other people into agreeing with us when we don't even agree with us.
My reason for using ZFS on everything is Boot Environments: system upgrades made simple. One just needs to do "bectl list" every now and again and then "bectl destroy -o" to clean up.
But having a BE to roll back to on a failed upgrade is "priceless", as the commercials say.
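The workflow, roughly (the BE name is just an example):
Code:
    bectl create pre-upgrade       # snapshot the running system first
    # ... upgrade; if it goes wrong ...
    bectl activate pre-upgrade     # boot back into the known-good BE on reboot
    bectl list                     # see what has piled up over time
    bectl destroy -o pre-upgrade   # clean up, including the origin snapshot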
 
Useful to know what it's useful for.
Simply the idea of not caring about partitions is great. How often did you have to move directories around and add symlinks because /var, /usr, or some such ran out of disk space? With ZFS that is not a problem.

Also, I have caught drives going bad via "zpool scrub", which told me the data had gone bad before I depended on it. And that was before the drive itself noticed; SMART was still saying all was fine. With copies=2 (for important stuff) you even have a good chance of correcting the data with only one drive. Usually I use that for my home directory, not the /dvd dump or areas that can be fixed by a reinstall. This was the biggest selling point for me.
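Those two knobs, assuming a pool named tank:
Code:
    zpool scrub tank              # read and verify every block against its checksum
    zpool status -v tank          # check scrub progress and any errors it found
    zfs set copies=2 tank/home    # keep two copies of each block for important data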
 