Hopelessly lost with ZFS

I have now disconnected the USB drive.

Code:
     =>        63  468862065  ada0  MBR  (224G)
               63       1985        - free -  (993K)
             2048    1124352     1  ntfs  (549M)
          1126400  261750496     2  ntfs  (125G)
        262876896        288        - free -  (144K)
        262877184    1179648     3  !39  (576M)
        264056832       2048        - free -  (1.0M)
        264058880  204803248     4  freebsd  [active]  (98G)

     =>         0  204803248  ada0s4  BSD  (98G)
                0  197132288       1  freebsd-ufs  (94G)
        197132288    7670960       2  freebsd-swap  (3.7G)

Code:
     root@X1:~ $ zpool status
       pool: tank
      state: SUSPENDED
     status: One or more devices are faulted in response to IO failures.
     action: Make sure the affected devices are connected, then run 'zpool clear'.
        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
     config:

             NAME    STATE    READ WRITE CKSUM
             tank    UNAVAIL     0     0     0  insufficient replicas
             da0p1   REMOVED     0     0     0

     errors: 4 data errors, use '-v' for a list
     root@X1:~ $

I noticed I have zfs_enable in my rc.conf. I don't know why. It's probably been like that for ages.

Maybe I will try attaching the USB drive to a different laptop and see what happens.
 
I noticed I have zfs_enable in my rc.conf. I don't know why. It's probably been like that for ages.
zfs_enable tells the system to load the ZFS kernel module at boot time. I think this has something to do with licensing; this is actually OpenZFS on FreeBSD. The other way is to compile ZFS into a custom kernel (which is what I usually do). Also, when you have a loadable ZFS kernel module, it can be replaced with OpenZFS from ports...
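
For what it's worth, the knob is just one line in /etc/rc.conf, which can be set by hand or with sysrc:

Code:
     # load the ZFS kernel module and import/mount pools at boot
     sysrc zfs_enable="YES"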
 
The best explanation of ZFS I heard was by Bryan Cantrill when he presented Solaris 10 at a dog & pony show at one of the hotels here about 20 years ago. ZFS is a volume manager and a filesystem wrapped up in one. Think of it as a Linux LVM on steroids and a filesystem on steroids all in the same package.

People who say ZFS is limited on single-disk systems don't see the whole picture. Data integrity is ZFS' strength, so yes, a mirror or RAID is better than nothing. But when it comes to managing volumes the way Linux sysadmins (like me at $JOB) do with LVM, LVM + EXT4/XFS is a PITA compared with ZFS. Want to create a filesystem? It's zfs create versus lvcreate followed by mkfs.

Do you want to resize a logical volume on LVM? That's lvresize, then resize2fs. With ZFS it's dynamic.
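
A rough sketch of the difference, with made-up volume group, pool, and dataset names:

Code:
     # LVM: pick a size up front, create the LV, then make a filesystem
     lvcreate -L 20G -n data vg0
     mkfs.ext4 /dev/vg0/data

     # growing it later is another two steps
     lvresize -L +10G /dev/vg0/data
     resize2fs /dev/vg0/data

     # ZFS: one step, no fixed size; datasets share the pool's free space
     zfs create tank/data

     # an optional cap, adjustable at any time
     zfs set quota=30G tank/data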

If you go to https://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/, the last two paragraphs talk about simplified administration. That's ZFS' everyday strength. It's complex under the covers so you and I can manage storage more simply.
I guess it probably has a place given the huge size of modern storage. I used to use LVM on a single-disk AIX desktop box to manage volumes. It all depends on what you're using the system for.
 
It looks like I am not going to figure out how to mount my ZFS disk, so I'll probably boot from it and mount the partition from my other disk.

But at the moment I have the ZFS pool mounted and don't know how to unmount it.

What happens if I simply disconnect the USB enclosure?
 
It looks like I am not going to figure out how to mount my ZFS disk, so I'll probably boot from it and mount the partition from my other disk.

But at the moment I have the ZFS pool mounted and don't know how to unmount it.

What happens if I simply disconnect the USB enclosure?
Did you try zpool import?
 
Maybe this is hardware related. If zpool-import(8) reports "devices are faulted in response to IO" (input/output) from a USB-attached external device, perhaps the USB port speed is to blame. Try USB 3.0, if you have a machine providing such a port.

But at the moment I have the ZFS pool mounted and don't know how to unmount it.

What happens if I simply disconnect the USB enclosure?
You want to zpool export tank, then unplug.
 
Maybe this is hardware related. If zpool-import(8) reports "devices are faulted in response to IO" (input/output) from a USB-attached external device, perhaps the USB port speed is to blame. Try USB 3.0, if you have a machine providing such a port.


You want to zpool export tank, then unplug.
I can't, because I get pool or dataset is busy.

Earlier I ran zfs mount -a to see what happened, but it looks like there isn't a zfs umount -a.
 
I can't, because I get pool or dataset is busy.
Just power down the machine. All file systems are unmounted and ZFS pools exported gracefully.

I can't, because I get pool or dataset is busy.

Earlier I ran zfs mount -a to see what happened, but it looks like there isn't a zfs umount -a.
Sure there is:
zfs-mount(8)
Code:
     zfs unmount [-fu] -a|filesystem|mountpoint

       -a  Unmount all available ZFS file systems.  Invoked automatically as
           part of the shutdown process.
 
Just power down the machine. All file systems are unmounted and ZFS pools exported gracefully.


Sure there is:
zfs-mount(8)
Code:
     zfs unmount [-fu] -a|filesystem|mountpoint

       -a  Unmount all available ZFS file systems.  Invoked automatically as
           part of the shutdown process.
It was the -fu parameter I was missing.
 
Isn't there an option on zpool import to "reroot" the datasets?
Basically, you have a zpool that has BEs. If you import it straight, you overlay your existing BEs. But importing with a reroot would mean "what used to be /etc is now at /mnt/etc".

I'm going by memory so don't recall the specific option, but it should be useful.
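
For the record, the option being half-remembered here is most likely -R (altroot), possibly combined with -N to skip mounting entirely. A sketch, assuming the pool is named tank:

Code:
     # import under an alternate root: every dataset mounts beneath
     # /mnt instead of overlaying the running system
     zpool import -R /mnt tank

     # or import without mounting anything, then mount selectively
     zpool import -N tank
     zfs mount tank/usr/home    # hypothetical dataset name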
 
Isn't there an option on zpool import to "reroot" the datasets?
Basically, you have a zpool that has BEs. If you import it straight, you overlay your existing BEs. But importing with a reroot would mean "what used to be /etc is now at /mnt/etc".

I'm going by memory so don't recall the specific option, but it should be useful.
I've abandoned the idea of mounting the ZFS disk on my existing system and have now reversed the disks: booting from the ZFS disk, mounting what was the existing disk, and trying to recreate what was there while filtering out the junk that accumulated over the years.
 
I've abandoned the idea of mounting the ZFS disk on my existing system and have now reversed the disks: booting from the ZFS disk, mounting what was the existing disk, and trying to recreate what was there while filtering out the junk that accumulated over the years.
That's fine. It's not just a ZFS issue: whenever you take a disk from one system and want to temporarily mount it on another, you need to be careful about how you look at things and how you mount.
 
I do not agree with that. IMHO,
to be clear: we also don't agree with it! but we can see others deciding to stick with UFS if they want, and don't really see the need to argue other people into agreeing with us when we don't even agree with us
 
to be clear: we also don't agree with it! but we can see others deciding to stick with UFS if they want, and don't really see the need to argue other people into agreeing with us when we don't even agree with us
My reason for using ZFS on everything is Boot Environments. System upgrades made simple. One just needs to do "bectl list" every now and again and then "bectl destroy -o" to clean up.
But having a BE to roll back to on a failed upgrade is "priceless", as the commercials say.
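
A minimal sketch of that workflow (the BE name is made up):

Code:
     # snapshot the running system into a new BE before upgrading
     bectl create pre-upgrade

     # ...run the upgrade, reboot, test...

     # list environments and their disk usage
     bectl list

     # if the upgrade went badly, boot back into the old environment
     bectl activate pre-upgrade

     # once happy, destroy the unused BE and its origin snapshot
     bectl destroy -o pre-upgrade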
 
Useful to know what it's useful for.
Simply the idea of not caring about partitions is great. How often did you have to move directories around and add links because /var, /usr, or some such ran out of disk space? With ZFS that is not a problem.

Also, I have caught drives going bad via zpool scrub, telling me the data went bad before I depended on it. And that was before the drive picked it up itself; SMART was still saying that all was fine. With copies=2 (for important stuff) you even have a good chance of correcting the data with only one drive. Usually that is for my home directory, not the /dvd dump or areas which can be fixed by a reinstall. This was the biggest selling point for me.
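
Both of those are a command or two; a sketch, assuming a pool named tank with a home dataset:

Code:
     # keep two copies of every block in this dataset, so corruption
     # can often be self-healed even on a single drive (new writes only)
     zfs set copies=2 tank/home

     # walk every block in the pool and verify checksums; run periodically
     zpool scrub tank

     # see progress and any errors it turned up
     zpool status -v tank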
 
I need to get round to giving zfs a try, next time I install a system. Maybe when I give 15.0 a spin!
I know, I keep saying I'm going to get round to trying zfs, and never seem to find the time ... 😂
 
On a recent BSD Now podcast, one of the panel members called out the advantage of creating a zpool on a partition instead of the whole disk. The gpart label lets you name the partition as a clue to your future self as to what you're looking at.

I've taken an SD card or USB memstick, run gpart to create a partition with a recognizable name and then created a zpool.
Run zpool export, walk over to another machine, run zpool import.
I can play with zfs attributes like encryption and not worry about trashing the main drive.
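
Something like the following, with a made-up device, label, and pool name:

Code:
     # GPT scheme on the memstick, one partition with a readable label
     gpart create -s gpt da1
     gpart add -t freebsd-zfs -l zfs-scratch da1

     # build the pool on the label, not the raw device, so a later
     # 'gpart show -l' reminds you what the stick is for
     zpool create scratch /dev/gpt/zfs-scratch

     # hand it to another machine
     zpool export scratch
     # ...plug it in elsewhere...
     zpool import scratch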
 
I need to get round to giving zfs a try, next time I install a system. Maybe when I give 15.0 a spin!
I know, I keep saying I'm going to get round to trying zfs, and never seem to find the time ... 😂
I never really understood ZFS, and still don't, but I'm learning.

I recently bought a new SSD disk and decided to install FreeBSD 15.0-RELEASE on it using ZFS, so I am starting to learn something about it slowly....

All my life I've been used to directories for holding files so it is quite a culture shock to use datasets instead.

Anyway, I'm glad I took the plunge. One immediate advantage is that I can use iocage for managing jails, and it is so much easier than it was previously.

I'm still at an early stage and just finding my feet, but my advice to you is to 'find the time'.

The community here is very helpful and supportive.
 
All my life I've been used to directories for holding files so it is quite a culture shock to use datasets instead.

I believe this is an error (which I made). It's not so much that datasets replace directories; rather, they are an additional abstraction that previous-gen filesystems don't have.

When I started using zfs I started making datasets everywhere for everything. It can become quite the inconvenience when something needs altering.

A dataset is something between the directory and the hardware. Maybe because a zpool, which is the most concrete thing the filesystem actually knows about (it doesn't know about bare metal), is a little too unwieldy for many tasks, and a directory too flimsy. A directory is just a label, a dataset is a drawer, and a zpool is a cabinet. Kind of thing.

The zpool for mapping to the hardware. The dataset for designing the type of storage and encompassing it. The directory to populate it.

Frankly, I believe you have a better handle on the thing than I do, but I thought I'd lay this out as the product of a few months of use.

For example, you don't mount a drive, and you don't mount a partition, and you don't mount a zpool. You import a zpool, and from it you mount a dataset. Inside that dataset are the directories. So I guess the best equivalent for a dataset is a drive. But zfs doesn't tie the "drive" to a physical device. In the divorce from the attachment to a physical device, a couple of extra abstraction layers are needed.
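
A sketch of that flow, with hypothetical names:

Code:
     # the pool comes in as a unit, without mounting anything yet
     zpool import -N tank

     # individual datasets are then mounted at their mountpoint property
     zfs mount tank/home
     ls /home    # the directories live inside the dataset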
 
My understanding/opinions; may not be 100% accurate, but I think it should be close enough for understanding.

"Old school" we have a physical device of size X. We partition that device so we have 4 partitions of various sizes that add up to X. We create filesystems on each partition, on for /, one for /var, one for /usr, one for /home. A user winds up downloading lots of data and /home goes to 0% available. The sysadmin goes "now what"? Typically add new devices start making symlinks.

ZFS: typically a zpool covers the entire device, so "size X". A dataset is kind of like a partition, but it is not a fixed size. With a zpool on a physical device of size X, create a dataset that mounts at /, another that mounts at /var, another at /usr, another at /home. Each dataset has a max size of X (the size of the physical device), so one doesn't "need" to worry about blowing out a partition.
Directories are simply things under a mount point. With ZFS, if a dataset is mounted at /usr and you do mkdir /usr/local, everything you put in /usr/local is in the dataset that mounts at /usr.

So at first glance it may seem confusing, but take a step back and look at everything.
A vdev is roughly a "hardware abstraction layer", meaning "this vdev maps to one or more hardware devices". Think "a mirror vdev is two or more hardware devices".
A zpool is built on top of vdevs (if the vdev is a mirror, the zpool is mirrored).
ZFS datasets are built on zpools; they are semantically closer to partitions than anything else.
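
Put together, that looks something like this (hypothetical pool name, single disk):

Code:
     # one pool over the whole device: a single-disk vdev named tank
     zpool create tank da0

     # datasets instead of fixed-size partitions; each can grow into
     # whatever free space the pool still has
     zfs create -o mountpoint=/var  tank/var
     zfs create -o mountpoint=/usr  tank/usr
     zfs create -o mountpoint=/home tank/home

     # optional: cap one dataset so it can't eat the whole pool
     zfs set quota=50G tank/home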
 
Datasets seem analogous to volumes in other logical volume systems. The difference with ZFS is that it integrates the logical volume manager with the filesystem more directly.
 