UFS vs ZFS

I now really despise ZFS. I won't be using it anymore. I have a project I'm developing on a ZFS machine that decided to no longer boot. I can't mount ZFS: I can see it and read what's on the drive, but I am unable to access the data. What a joke. Now I have to rebuild my project files, pulling bits off a server by the commit dates. Infuriating.

EDIT: I should restate that part about seeing what's on the drive: gpart confirms I have a ZFS partition, but ZFS refuses to mount zroot so I can access it from a live disk. It's the worst thing I've ever seen and I hate it forever... basically. With that said, I'm sure that if I had any idea about ZFS, I could have set it up properly in the first place and avoided this catastrophe.
 
zfs is a fine filesystem [If you don't enable features not supported by the bootloader]
what's the output of:
Code:
zpool import
zpool status -x
zpool list -v
In loader.conf you can specify which root zfs partition & kernel to load.
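For example, something along these lines in /boot/loader.conf (the pool and dataset names here are just the usual defaults, adjust them to yours):
Code:
# illustrative values only -- adjust to your own pool and dataset
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot/ROOT/default"
kernel="kernel.old"    # boot an alternate kernel if the default one is broken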
 
zfs is a fine filesystem [If you don't enable features not supported by the bootloader]
what's the output of:
Code:
zpool import
zpool status -x
zpool list -v
In loader.conf you can specify which root zfs partition & kernel to load.
I'll check on those when I'm at my machine. But I'm not able to boot from the zroot, so I am unable to access loader.conf. I am a bit worked up about it at the moment, to the point where I'm considering giving up computers and taking up woodworking, or moving to an abacus.
 

Attachments

  • shopping.jpeg
what's the output of:
Code:
zpool import
zpool status -x
zpool list -v
Can you boot from a USB stick with a FreeBSD image on it?
Yes, I can. And I can zpool import, but I can't mount the stupid thing; it says it's missing some dataset. And the backup in option 8 of the boot loader does nothing. It won't boot either. I'll have to boot from the live USB and run those commands when I'm home, and I'll post the output... well... I will type it out in a reply.

EDIT: Thank you for offering help. I guess I could have just asked rather than ranting about my frustration in this post. In fact, I will just post a thread asking for help and stop posting in this one. I will start it with the commands you offered. Thanks and sorry.
 
 
Try this, it always saves me:

Code:
# list importable pools to find the pool name
zpool import

# import the pool under an alternate root instead of /
mkdir /tmp/zroot
zpool import -f -R /tmp/zroot zroot

# the root dataset is usually canmount=noauto, so mount it by hand
mkdir /tmp/root
mount -t zfs zroot/ROOT/default /tmp/root

assuming that the name of the pool is zroot.
 
I now really despise ZFS. I won't be using it anymore. I have a project I'm developing on a ZFS machine that decided to no longer boot. I can't mount ZFS: I can see it and read what's on the drive, but I am unable to access the data. What a joke. Now I have to rebuild my project files, pulling bits off a server by the commit dates. Infuriating.

EDIT: I should restate that part about seeing what's on the drive: gpart confirms I have a ZFS partition, but ZFS refuses to mount zroot so I can access it from a live disk. It's the worst thing I've ever seen and I hate it forever... basically. With that said, I'm sure that if I had any idea about ZFS, I could have set it up properly in the first place and avoided this catastrophe.
ZFS doesn't play well with 'partitions'. It needs the whole disk, wiped clean. Just tell the installer you want to do a ZFS install, then go with the defaults that get suggested along the way. This thread alone has posts that explain why the defaults work, and point out what changing those defaults will do to the system.

The defaults suggested by the installer (for the ZFS installation) are pretty well thought out, you can do lots of interesting stuff with that setup later on. If you mess with the installer's defaults, the interesting stuff just won't work as expected.

Having said that, one reason I like ZFS is that it allows me to make adjustments and set limits AFTER the installation - that's something that UFS just can't do with its partitions. That was a major planning headache that I left behind when I switched from UFS to ZFS in 2017...
 
 
ZFS doesn't play well with 'partitions'. It needs the whole disk, wiped clean.
My opinion only, I completely disagree with this statement. ZFS plays perfectly fine with partitions created by gpart. I've been doing it that way since at least FreeBSD 9.x.

As for the FreeBSD installer: by default, even when choosing ZFS, you get partitions; at least that is what my hands-on experience shows. Choosing all the defaults, I get a freebsd-boot, a freebsd-swap and a freebsd-zfs partition.

Now, if you are talking about a "partition on a device being set up for multiboot", that is a different situation and you may be correct, since it's been a really long time since I've done that and multiboot depends on a lot of other things.
 
ZFS plays perfectly fine with partitions created by gpart.
ZFS most likely won't complain if you set it up like that, but I think that kind of defeats the point of having ZFS at all... esp. if you end up running out of room on /usr/ports thanks to the distfiles, or /usr/home thanks to the downloads :p Softlinks made with ln -s don't get you around hard gpart-created partition boundaries; you are still stuck with the fixed partition sizes. ZFS has no such limitation if you use datasets instead of partitions.
 
ZFS most likely won't complain if you set it up like that, but I think that kind of defeats the point of having ZFS at all... esp. if you end up running out of room on /usr/ports thanks to the distfiles, or /usr/home thanks to the downloads :p Softlinks made with ln -s don't get you around hard gpart-created partition boundaries; you are still stuck with the fixed partition sizes. ZFS has no such limitation if you use datasets instead of partitions.
I still disagree with this and am honestly not sure what you are arguing.

The installer, on a single-disk system, creates freebsd-boot and freebsd-swap (both fixed size) and then a freebsd-zfs that is "the rest of the device". The zpool uses all of the freebsd-zfs partition and has multiple datasets; by default /usr/ports and /usr/home are datasets under that zpool. All datasets under a zpool share the full space, unless you go and set limits (quotas) on a dataset.

If you run out of space on a /usr/ports or /usr/home dataset, you have filled up the entire zpool. Same thing as filling up a complete UFS partition.

If you create partitions for UFS, then sure, you can create symlinks across partitions, but you still wind up running out of space on the partition you symlink across to. You are also limited to the size of the partition set at creation time.

With ZFS, zpools, and datasets, it's relatively easy to add another device, create a zpool on it (partitioned with gpart or not), create ZFS datasets on that new zpool, move data, and reset mountpoints. In my experience, that's a heck of a lot easier than on other filesystems.
Heck, you can even expand ZFS filesystems by adding devices and creating the correct redundancy, or simply striping to get more space (dangerous).
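A rough sketch of that workflow (the device name da1 and the tank/home names are just examples):
Code:
# new disk, one labelled partition, new pool, new dataset
gpart create -s gpt da1
gpart add -t freebsd-zfs -a 1m -l data1 da1
zpool create tank /dev/gpt/data1
zfs create tank/home
# copy the data over, then point the mountpoint at the old location
zfs set mountpoint=/usr/home tank/home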

It always comes down to sitting down and thinking about everything first.
The machine is going to have a bunch of users on it? Separate device and zpool for the OS and applications, separate device and zpool for /usr/home.
Building ports? Same thing: separate devices for the OS pool and the ports pool.
I've been doing that for years; it makes it easy to actually do a full install without losing user data.

I'm not saying ZFS is "perfect for everything all the time", but in my experience, for my use cases, I'll take it every time over other filesystems.
 
If you run out of space on a /usr/ports or /usr/home dataset, you have filled up the entire zpool. Same thing as filling up a complete UFS partition.
I'm afraid you're missing my point...

When I was using UFS, I usually had a separate partition for /usr/ports and /usr/home. In UFS, you can't increase partition space by a symlink to a different partition. You can mount some sub-folders remotely using NFS, but that has pitfalls that I'd rather not get into.

With ZFS, I left those headaches behind. I can make /usr/ports (or any other dataset) bigger or smaller, set size limits, whatever - after installation.

If I use UFS, I have to remember to allocate 20 GB at installation for /usr/ports, and be stuck with it. And no, I cannot run ln -s /usr/ports/distfiles /usr/home/downloads to get more space for distfiles.

If I use ZFS, it offers a separate dataset for /usr/ports, and at any time after installation, I can easily set an upper limit of 20 GB, or a lower limit of 10 GB, symlink to /usr/home, or whatever. I can take steps to prevent filling up the entire zpool. UFS just doesn't offer a feature THAT convenient.
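Something like this, assuming the installer's default zroot/usr/ports dataset name:
Code:
# cap the ports tree at 20 GB and guarantee it at least 10 GB of pool space
zfs set quota=20G zroot/usr/ports
zfs set reservation=10G zroot/usr/ports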

It always comes down to sitting down and thinking about everything first.
ZFS saved me that headache, I can do it later. UFS just doesn't offer that "later" option.
 
I'm afraid you're missing my point...

When I was using UFS, I usually had a separate partition for /usr/ports and /usr/home. In UFS, you can't increase partition space by a symlink to a different partition. You can mount some sub-folders remotely using NFS, but that has pitfalls that I'd rather not get into.

With ZFS, I left those headaches behind. I can make /usr/ports (or any other dataset) bigger or smaller, set size limits, whatever - after installation.

If I use UFS, I have to remember to allocate 20 GB at installation for /usr/ports, and be stuck with it. And no, I cannot run ln -s /usr/ports/distfiles /usr/home/downloads to get more space for distfiles.

If I use ZFS, it offers a separate dataset for /usr/ports, and at any time after installation, I can easily set an upper limit of 20 GB, or a lower limit of 10 GB, symlink to /usr/home, or whatever. I can take steps to prevent filling up the entire zpool. UFS just doesn't offer a feature THAT convenient.


ZFS saved me that headache, I can do it later. UFS just doesn't offer that "later" option.
It sounds like we are both saying/agreeing on the same thing: ZFS is more flexible and allows you to easily fix things "after the fact".
 
ZFS doesn't play well with 'partitions'. It needs the whole disk, wiped clean.
I also disagree. If you want to give a whole disk to ZFS, do the following: Create a partition table with gpart, and create only one single very large partition. Give that partition to ZFS.

Why? Isn't that wasteful, extra work, and a loss of a few dozen sectors? All that is true, but it has one big advantage: you can give the partition a clear and human-readable label. That way, if you ever get into a situation where you have to identify a disk, you can just use "gpart list" on the disk, and quickly and without guesswork find out what the content of the disk is.
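A minimal sketch of that approach (the disk name da2 and the label are made up):
Code:
gpart create -s gpt da2
gpart add -t freebsd-zfs -a 1m -l tank-wd4tb-bay3 da2
zpool create tank /dev/gpt/tank-wd4tb-bay3
# later, identify the disk and its contents by the label:
gpart list da2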
 
I also disagree. If you want to give a whole disk to ZFS, do the following: Create a partition table with gpart, and create only one single very large partition. Give that partition to ZFS.

Why? Isn't that wasteful, extra work, and a loss of a few dozen sectors? All that is true, but it has one big advantage: you can give the partition a clear and human-readable label. That way, if you ever get into a situation where you have to identify a disk, you can just use "gpart list" on the disk, and quickly and without guesswork find out what the content of the disk is.
My take is that if you stuff ZFS into a gpart-created partition, that defeats the very point and design of ZFS.

"Creating one single very large partition" is a practice that is usually advised against, for many reasons. Just looking on Google for several variations on what is good partitioning practices - that kind of drove home the point that having just one partition is not a great idea. ZFS on whole disk (no gpart) relieved me of the need to have that headache, and I'm forever grateful.

Btrfs/ext4/JFS also offer options to limit directory sizes, but none of them enforce usable boundaries the way ZFS does. Also, increasing disk space is much easier in ZFS.

And is something wrong with zfs list? You get a list of datasets, and it's human-readable as well.
 
My take is that if you stuff ZFS into a gpart-created partition, that defeats the very point and design of ZFS.
No, it doesn't.

If you start partitioning for individual datasets, then your statement would be true. The whole point of ZFS is to manage "volumes" (called datasets in ZFS) itself in a much more flexible and efficient way.

But not everything that could be on the disk is a dataset. Think about boot partitions (efi, freebsd-boot), think about swap... it often makes sense to have them on the same disks.
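For illustration, roughly the kind of single-disk layout that results (sizes and labels are only examples):
Code:
gpart create -s gpt ada0
gpart add -t efi -s 260m -l efiboot0 ada0
gpart add -t freebsd-boot -s 512k -l gptboot0 ada0
gpart add -t freebsd-swap -s 2g -a 1m -l swap0 ada0
gpart add -t freebsd-zfs -a 1m -l zfs0 ada0
zpool create zroot /dev/gpt/zfs0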
 
I ran across this a very long time ago shortly before doing some ZFS stuff. The arguments in it made sense to me.


"Creating one single very large partition" is a practice that is usually advised against, for many reasons.
And if you give ZFS "your whole disk", this is exactly what you are doing; you are just not doing it in a "gpart partition". ZFS works very nicely adding vdevs when things are partitioned. Don't discount the usefulness of being able to use labels to create the zpool; it's a huge benefit. Use gpart to create and label partitions, use the /dev/gpt/label to create the zpool, and you are good.
 
I second Zirias. All my zpools are on freebsd-zfs partitions, and the partitions are given a GPT label, so BIOS disk order does not matter.
 
Manual partitioning versus whole disk mattered on Solaris, where the kernel has different behavior depending on which you use. That difference never existed in FreeBSD. You'll get full performance with manual partitioning, as long as your partitions are aligned with your disk sectors (and you really have to go out of your way to do that wrong...).
 
I'm sorry, but I still disagree. On one of my systems with a single disk, I simply took the defaults for a ZFS install, and it created a freebsd-boot, a freebsd-swap and a freebsd-zfs partition. The first two were created "big enough", and the freebsd-zfs took the rest of the device, minus some for alignment.
That thread is talking about installing in a VM, where the "VM manager" has a lot to do with how things boot.

Solaris, as pointed out earlier, had performance issues if you used partitions vs. the whole device, but FreeBSD has never had that limitation (as far as I can recall). There have always been issues if you create partitions that don't align with physical sectors, but if you simply add -a 1m on gpart add (align on a 1M boundary, which is compatible with any sector size that is a power of 2 below 1M, like 512 or 4k), it works fine. That pretty much also tracks the Solaris stuff.
 
Linux sometimes does not work well with ZFS labels, but FreeBSD does not have that problem.
I have 4 freebsd-zfs GPT partitions: one for the zpool, one for the special device, one for the log device and one for the cache device.
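Roughly like this (the labels are made up for the example):
Code:
# one pool spread over four labelled partitions, one per vdev class
zpool create tank /dev/gpt/data0 \
    special /dev/gpt/special0 \
    log /dev/gpt/slog0 \
    cache /dev/gpt/l2arc0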
 