UFS: Is ZFS the "official" Logical Volume Manager for FreeBSD?

ANOKNUSA

Aspiring Daemon

Thanks: 372
Messages: 675

#1
I've been playing with several small disks lately (~120GB-250GB each) for various purposes on different systems. With disk space at a premium in these cases, I'd like to have control over partition sizes without having to move lots of data around any time a resize might be necessary. I see that gvinum(8) is still in base, and an official tutorial still exists in the FreeBSD documentation set, but since I started using FreeBSD I've seen mention that it's no longer really developed or maintained, and may disappear at some point.

Is this in fact the case? If so, how might logical volume management with UFS be handled? Or is ZFS now the preferred--and eventually, the only supported--way of getting this functionality? Thanks in advance.
 

kpa

Beastie's Twin

Thanks: 1,791
Messages: 6,303

#2
There is gvinum(8), but I never got around to figuring out how to use it the same way I would use the ZFS volume manager.
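For comparison, the way I'd carve out resizable "partitions" in ZFS is just datasets with quotas and reservations; a rough sketch (the pool name, dataset names, and sizes here are all made up):

```shell
# Create a pool on a single disk (example device ada0).
zpool create tank ada0

# Datasets act like resizable partitions: a quota caps growth,
# a reservation guarantees minimum space.
zfs create -o quota=20G tank/www
zfs create -o reservation=5G tank/db

# "Resizing" is a one-line property change, done on a live system.
zfs set quota=40G tank/www
```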
 

Beastie7

Well-Known Member

Thanks: 135
Messages: 360

#3
Hmm I wonder what Netflix does because they apparently use UFS too for their OpenConnect appliances. Is UFS still actively being developed?
 

TheDreamer

Member

Thanks: 5
Messages: 68

#4
I suspect that the filesystem layout on the appliances remains constant over their lifetime. There are plenty of systems that can happily run for years without needing to change their layout or storage.

Just as there are systems with volume management that have run for years with constant change -- a co-worker has been fighting the confines of the 20GB volume his application resides on for years, and seems to spend his time calling people to ask if it would be okay to delete their archives -- IIRC I saw ~40GB of unconfigured space available... He still makes those calls, but less frequently, and now they're about considering something less than infinite retention with infinite immediate retrieval... That's what I want with one of my archives, too: even though I'm only second-longest-serving in the department, the archive often has the "why" of something from before my time. Which reminds me, I should set up a successor in case anything happens to me... like retirement :D Not so sure I want to still be working when time ends.

And I would put out there that one of the reasons UFS is still available is that it is still being actively developed and maintained. I have sometimes been annoyed by how quickly things disappear from FreeBSD once they stop being actively maintained. Another OS we use at work has been guilty of the same thing, which recently led to a complication with sticking to an older release... the systems couldn't cope with the leap second :oops:

I think we're about 3 releases behind the leap-second patch, which is a release or two behind their current release...

The Dreamer
 

SirDice

Administrator
Staff member
Administrator
Moderator

Thanks: 6,599
Messages: 28,129

#5
UFS is going to be around for a while. ZFS is great but doesn't work on ARM for example. There are also other situations that make UFS a better choice instead of ZFS (think embedded systems).
 

drhowarddrfine

Son of Beastie

Thanks: 973
Messages: 2,862

#6
I'm confused (lazy). I thought ZFS required 4GB of RAM and, preferably, multiple disks, and that it was slower than UFS. I have one disk (an SSD) and that's all I need for my workstation, so I never bothered to look at ZFS.
 
OP

ANOKNUSA
Aspiring Daemon

Thanks: 372
Messages: 675

#7
Well, I've been running a root-on-ZFS system on an SSD in one of my laptops for months. I've got 8 gigabytes of RAM, but the system rarely passes 2 gigabytes consumed. It's also my understanding that ZFS is slower than UFS, but on an SSD that doesn't mean much.
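Incidentally, if memory use were a concern, my understanding is the ARC can be capped with a loader.conf(5) tunable; something like this (the value here is just illustrative):

```shell
# /boot/loader.conf -- cap the ZFS ARC at 1 GB (example value)
vfs.zfs.arc_max="1G"
```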

I've read up a bit on both gvinum(8) and gstripe(8) (the most similar maintained utility) in the last few days, and I get the impression that they both actually only work with whole disks and ideally require multiple disks, and that resizing logical volumes in gvinum(8) isn't very straightforward. This would be in contrast with Linux LVM, which works perfectly fine on a single disk and uses logical volumes that can be resized on a live system. Now I'm wondering if what I wanted was ever possible at all...
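For what it's worth, the closest thing I've found on a single disk is growing (never shrinking) a UFS partition in place, assuming free space sits right after it; roughly this (the device name and partition index are examples):

```shell
# Grow the last GPT partition into adjacent free space
# (example: partition index 3 on ada0).
gpart resize -i 3 ada0

# Then grow the UFS filesystem to fill the enlarged partition;
# on modern FreeBSD this works even on a mounted filesystem.
growfs /dev/ada0p3
```

Shrinking isn't supported, which is exactly where Linux LVM still has the edge.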
 

alphaniner

Member

Thanks: 5
Messages: 38

#8
I originally started looking into gvinum as part of a FreeBSD replacement for my LMV2-on-mdadm NAS/SAN. But I was informed by an admin/mod/dev that gvinum is deprecated:
gvinum is deprecated, please don't use it.
Doesn't get much clearer than that. It was a bit disappointing, but not exactly shocking in light of the dearth of recent information about it. I just wish the manpage reflected it.

In the meantime I had cause to use ZFS for some 'serious business' and don't think I'd want to go back to block-level logical volume management even if I could.
 
OP

ANOKNUSA

Aspiring Daemon

Thanks: 372
Messages: 675

#9
alphaniner, your exchange was actually the first clue I saw that it might be deprecated. But as you said, the man page doesn't really indicate as much. I've seen neither an official announcement nor more than one developer say so, and it still exists in the 10.2 source tree. (I haven't checked -CURRENT to see if it's been removed there.)

In the meantime I had cause to use ZFS for some 'serious business' and don't think I'd want to go back to block-level logical volume management even if I could.
Yup. It's nuts how something so effective and useful can be so easy to use compared to other, similar tools. But I'm working with two 32-bit machines, one of which only has 256MB of RAM, and these are basically throw-away hobbyist projects, so using ZFS is both infeasible and pointless.
 

alphaniner

Member

Thanks: 5
Messages: 38

#10
I had forgotten about this, but even before that exchange I was also "wondering if what I wanted was ever possible at all...". AFAICT, in LVM2 terms your only option with gvinum is to create one LV spanning the entire VG. I only have personal experience with one other LVM suite (IBM AIX's), but my understanding is that most logical volume management uses a PV->VG->LV scheme. In that regard, calling gvinum a logical volume manager is a bit misleading, and if it hadn't been described as such I doubt I would ever have considered it...

don't think I'd want to go back to block-level logical volume management even if I could.
Well, at least until I'm reminded of the whole "don't use more than 70% of the pool" rule of thumb :(
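A quick way to keep an eye on that rule of thumb (70% is just the folklore threshold, adjust to taste):

```shell
# Warn about any pool past 70% capacity (example threshold).
# `zpool list -H` prints tab-separated values; capacity looks like "45%".
zpool list -H -o name,capacity | awk -F'\t' \
    '{ gsub(/%/, "", $2); if ($2 + 0 > 70) print $1 " is at " $2 "%" }'
```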
 

TheDreamer

Member

Thanks: 5
Messages: 68

#11
I have two SFF servers using Atom D2700s, so the maximum memory I can have is 4GB. Each only has a 120GB SSD, but I'm doing ZFS and they've been working great. They mainly run tedious apps like ntp, dns/bind99, net/freeradius2 (for auth onto my home wifi network), net/isc-dhcp42-server, and forward (www/squid) and reverse proxy (www/nginx and net/haproxy) services... plus some extras: one server has net-mgmt/nagios and the other has net-mgmt/cacti.

One is also where I have irc/irssi running all the time in sysutils/screen... though there were some contortions, which I don't want to talk about, in getting that to work without a terminal to lay out in, except perhaps to say it involved x11-servers/xorg-vfbserver... I use net/mosh to remote into the server and attach to the screen session, and occasionally additional locations using plain ssh.

I also have CARP and HAST working between these two servers (and for this I opted to go with UFS w/SU+J), since the devices for HAST are zvols. I didn't want to recreate the problems of two zpools on the same disk, and I suspect one inside the other would lead to even weirder problems. ZFS was designed to work best when it has the whole disk, though an SSD might not suffer as badly when two separate zpools each think they have the whole of the same disk. Though some of it was my mistake of applying tweaks meant for one machine to other systems...
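For anyone curious, handing a zvol to HAST looks roughly like this (the pool, zvol name, and size are examples):

```shell
# Create a 20 GB zvol to use as a HAST backing device
# (example name and size).
zfs create -V 20G tank/hast0
# hast.conf(5) then points the resource's local provider at
# /dev/zvol/tank/hast0, and the UFS w/SU+J filesystem is created
# on the /dev/hast/ device once the resource is up.
```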

Though 9.3 has changed things, and I haven't been able to figure out if it's even possible to get the performance back. I also made the mistake of getting WD Purples for my root pool (on paper they sounded better than Reds and were cheaper). But evidently when they say write-optimized, it means "don't read"? It took 5 days (actually, 7185.6 minutes) to back up 245GB (which was everything except my home directory, done separately, and the BackupPC pool, as I now have CrashPlan for the computers it was backing up).

I was kind of surprised my computer stayed up that long... it did panic later. One of these days I'll try to get into figuring those out.

The Dreamer.
 

giorgiob

Member

Thanks: 1
Messages: 38

#12
And I would put out there that one of the reasons UFS is still available is that it is still being actively developed and maintained. I have sometimes been annoyed by how quickly things disappear from FreeBSD once they stop being actively maintained.
Is there any information or concrete plan regarding the removal of UFS from FreeBSD? Even if ZFS became the standard filesystem, I do not see why UFS should be removed completely: as far as I know, ZFS has many more features, but UFS is more lightweight, and I would like to continue using UFS on some of my machines and disks.
And even if I decided to use only ZFS, I'd still want to be able to read disks formatted with UFS if I need to.
 

olli@

Member
Developer

Thanks: 46
Messages: 58

#13
Is there any information or concrete plan regarding the removal of UFS from FreeBSD? Even if ZFS became the standard filesystem, I do not see why UFS should be removed completely: as far as I know, ZFS has many more features, but UFS is more lightweight, and I would like to continue using UFS on some of my machines and disks.
And even if I decided to use only ZFS, I'd still want to be able to read disks formatted with UFS if I need to.
There are no plans to remove UFS. There are quite a few situations where UFS is better suited than ZFS. It should also be noted that UFS is still actively maintained, and even new features are developed. Just recently, mckusick@ started adding inode check-hashes to UFS.
 

ShelLuser

Son of Beastie

Thanks: 1,569
Messages: 3,411

#14
I'd also like to add that I don't necessarily agree with the comment that things quickly disappear from FreeBSD. For example: it's been quite a few years since Clang replaced GCC as the default compiler, yet many years later we still have /usr/src/contrib/gcc as well as source tree settings such as WITHOUT_CLANG_IS_CC or WITH_GCC.

Now, this may not be the best example (GCC is used on other platforms, after all), but the same thing applied when we moved from the old pkg_* package management tools (pkg_add) to pkgng. The previous tools remained available for several versions before they were finally removed.

And that's not even mentioning the amount of unmaintained ports.

I'm not saying it's perfect, but it's also not as if something goes "poof" just like that either.
 

Phishfry

Son of Beastie

Thanks: 1,049
Messages: 3,092

#15
I am very happy with UFS and geom based RAID arrays. I am using graid3 as it works fine as does gmirror.
If I needed the features of ZFS I would use it. I just think geom raid is so simple to use and that is a main selling point for me.
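To illustrate how simple: a two-disk gmirror really is only a few commands (the disk names and mount point here are examples):

```shell
# Load the mirror class and label a two-disk mirror (example disks).
gmirror load
gmirror label -v gm0 ada1 ada2

# Put a soft-updates UFS filesystem on the mirror and mount it.
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /mnt
```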

There does seem to be a certain fanaticism around ZFS. I try to ignore it.
I wish people would recognize that having more than one file-system is a good thing.

When I look at the FreeBSD 12.0 Release Notes, I can't help but feel sad for the deprecated drivers.
We are axing some 10/100 network adapters which, while quite old, are still usable. SCSI controllers too.
Not that it was a sudden move; we have a full release cycle to migrate.
It is the feeling that somebody worked hard, and for free, to get us these drivers, and now we are ripping them out.
Why? Because the ever-changing ABI breaks the drivers. There have been no security flaws in these drivers.
It is just a push to remove old devices. Why? If the ABI were not changed, they would need no maintenance.
A chicken-and-egg problem.
I am afraid some developers are too quick to deprecate devices simply because they are old.
My thought is that we see people on this forum with really old hardware. Maybe in their country that is all they can get.
FreeBSD should not be about device age but functionality. Does the device still work?
10Base-2 over BNC is obviously obsolete, but anything with RJ45 is not.

I also worry because we have corporations sponsoring developers. Even Kirk's commits are now "Sponsored by Netflix".
Hopefully these corporate interests don't take over FreeBSD development and decisions.
A Microsoft'er on the Foundation board and Hyper-V compiled in by default. Yikes.
(This post sponsored by Butterball Turkey and Oyster Stuffing)
 

yuripv

Active Member

Thanks: 60
Messages: 142

#16
When I look at the FreeBSD 12.0 Release Notes, I can't help but feel sad for the deprecated drivers.
We are axing some 10/100 network adapters which, while quite old, are still usable. SCSI controllers too.
Not that it was a sudden move; we have a full release cycle to migrate.
It is the feeling that somebody worked hard, and for free, to get us these drivers, and now we are ripping them out.
Why? Because the ever-changing ABI breaks the drivers. There have been no security flaws in these drivers.
It is just a push to remove old devices. Why? If the ABI were not changed, they would need no maintenance.
A chicken-and-egg problem.
I am afraid some developers are too quick to deprecate devices simply because they are old.
My thought is that we see people on this forum with really old hardware. Maybe in their country that is all they can get.
FreeBSD should not be about device age but functionality. Does the device still work?
10Base-2 over BNC is obviously obsolete, but anything with RJ45 is not.
This was discussed: https://lists.freebsd.org/pipermail/freebsd-arch/2018-October/019167.html (yes, read through the entire long thread) and there *IS* a cost: https://lists.freebsd.org/pipermail/freebsd-arch/2018-October/019202.html.
 

ShelLuser

Son of Beastie

Thanks: 1,569
Messages: 3,411

#18
Yes, and this is the attitude I worry about:

Trying to insult people because they can't afford hardware.
+5 (but I can only do one thanks ;)).

But it goes much deeper than that: often it's not merely about affording but availability as well.

"Back in the day" (uh oh: grandpa Shell talking :p), before the adoption of the Internet as we know it today, I was an avid and passionate FidoNet user (and of more switch-based networks). Heck, I eventually grew to become the host of my own Zone 2 based network. Within FidoNet context that's pretty big.

That period was totally awesome. I could send a private message from the Netherlands to another FidoNet user in the US and they would get it within a few days. How cool was that?! All over phone lines, mind you!

This is around 1990 we're talking about and I still have backups of the BBS ("Concord") and 'front end' ("Portal of Power") software I used back then. And don't forget about my favorite tosser FMail. Heck, I also still have timEd, which was the message editor I used back then.

Freely available and free to use for everyone, but if you registered (which I did) you gained the right to edit your signature. Being the fanboy I was, I kept using mine to refer back to my favorite software.

Sorry for a little rant but this post honestly triggers memories. Some very good ones :)

But... back on topic: around 2010 I learned that parts of southern Russia, northern Asia, and even Africa still relied on these concepts. Back in 2010, when "everyone" had a dual core and a smartphone, right?

This rabbit hole is much deeper than some are willing to accept.

Seriously... there's nothing wrong with looking at the world from your own perspective, but could we at least TRY to look beyond our own comfort zone when venting opinions that would affect many more people than those we think we know?

My servers don't have a floppy disk. My deprecated one down in my storage compartment has both a 5.25" and a 3.5" disk drive, and I even have my Commodore 64 and my (rather massive) 5.25" disk collection as well (notches for the win: using both sides of one disk!).

Yet despite that, I am still in full support of FreeBSD keeping floppy support in the system, even though it's one of the first things I disable in src.conf. Sure, I can well imagine that this will eventually be removed, maybe in 2020? But the fact is: even though most (modern) PCs don't have this anymore (anyone remember ZIP drives? 100MB storage, whoo!), it is still supported today.

Just because WE don't use floppies anymore doesn't mean that no one does.
 

Phishfry

Son of Beastie

Thanks: 1,049
Messages: 3,092

#20
No, that was Part 2 of the insult.
You can buy used Core i5 laptops for around $20 if you shop around.
What's 20 bucks to a rich guy?
Now think about users in Africa who only have power for 4 hours a day. $20 is what they make in a month.
Please don't assume every user can buy newer hardware.
 

yuripv

Active Member

Thanks: 60
Messages: 142

#21
You are overreacting, really, and taking Mark's words out of context. Saying that you could be running i386-class hardware in 6 years when you can have amd64-class hardware *now* for just $20 has nothing to do with discriminating against anyone; it was meant only as disagreement with the previous poster.
 

Phishfry

Son of Beastie

Thanks: 1,049
Messages: 3,092

#22
Yes, I don't want to insult anyone. I just think the whole post sounded elitist, and that is what I worry about.
 