ZFS snapshots and beadm are hard to grasp. New to Unix.

Hello! New to Unix and learning. I'm here because ZFS is confusing and being mean to a newbie.

I have tried to figure out and read up on making a snapshot of the whole system, like a recovery point, or what's generally called a "backup".
Boot Environments seemed like exactly what I was after, but in the end recursive snapshots were just not enough for my use case. They did not do the job.

I have messed around with snapshot -r (recursive) and beadm, but even with guides and forum threads I cannot figure out how to actually succeed at anything. I'm just very confused, cannot get it to do what I want, and am not knowledgeable enough.

From reading about ZFS and watching YouTube videos, I imagined BEs being a bit like VMs. Thinking with portals? To me it seems more like time travel that can wreck reality.

I'm not dumb enough to think these are virtual machines, and I understand that the nice thing about them is that they keep only the diff instead of making copies gigabytes in size.

More like having an easy way to keep your FreeBSD 11, 12, and 12.1 installs on a single drive. Or on a RAID. Anything. That was what made me think it was a user-friendly way to manage multiple installs and to snapshot/create a system to experiment on without affecting anything else. Multiple zpools to boot into, simply said.

Like keeping a nice, well-organized FreeBSD you use only for writing, nothing else on it, and then another system packed full of stuff that distracts from writing, each maintained separately.

But for my use case I would like it to be strictly compartmented, like making jails, keeping things separate. Is my goal clear? I cannot find any info on what I would need to do for something like that.

And I'm not sure that is even something beadm or snapshots can do on their own. I just read about them assuming they would do exactly that, not something partial and riddled with dependency confusion.

How do I go about using ZFS in a manner like dual booting Linux and FreeBSD on the same drive, but without GRUB and with only FreeBSD installs? Even if I need to learn how to mount and manage things in single-user mode. I have the drive space but not the knowledge. Multiple independent installs of FreeBSD without partitioning up the drive; ZFS must be better than that.

Here is an example of something that confused me and that is holding me off from using FreeBSD. Even now I somewhat understand why it happens, but I have no idea what to do about it.

I installed and updated FreeBSD.
Then I created a Boot Environment and activated it (rebooting into it). While in that boot environment I created a user.
I then activated the default Boot Environment again (booted into it) and destroyed the BE in which I had created the user.

But /home and the like were still showing up when running zfs list -t all. It says it is empty, but there is something fundamental about ZFS I just cannot grasp. I also found that files made in one BE sometimes showed up across every BE. I get that it is not meant to be used exactly the way I am trying to use it, but then what terminal wizardry do I need to learn?

Something to do with creating a system from a recursive snapshot? Creating a system on a different pool and then managing them by "mounting" or something? Where in this process does Windows ask for an activation code? Just kidding. But there is no clear way to understand how to do something like this outside of jails, and even to run jails you need to be able to manage the jail host. Going that route feels like giving up and moving to virtualization for the wrong reason.

I simply do not want to buy more drives. Most of all I would like to learn ZFS, since it sounds like a great tool to learn.
But I do not even understand how to create a new pool. zfs create? The instructions are just too cryptic for a newbie.

250 GB should fit at least a few full instances of FreeBSD. I should not need to hook up laptop hard drives from 2004 just to set up two FreeBSDs. And that way I still would have learned nothing about backing up properly, just "partitioned" by dividing up hardware. XD

I guess the real problem is not knowing how to make and manage pools. ;/
But Boot Environments seem like the right answer at the same time.
Sorry for the ramble, but when properly learning a Unix system ZFS can really help: full restore points if you mess something up, unless you mess up to the point of having to recover from USB. Done that already XD, and I have reinstalled quite a few times too, due to mistakes and wanting a clean slate.

Thanks for any feedback or guidance on what I might need to read up on. It just feels like a minefield messing with anything. This PC has a slow connection, so a reinstall takes time and is a waste for the servers (pkgs).
 
If you want to learn and get some practice with ZFS you could use a USB stick to hold a pool. Then there is no need to reinstall the main system. The boot environments are for the complete system. I suggest taking small steps. If you experiment with the boot environments or ZFS datasets of your main system, things get time-consuming and frustrating in case of mistakes, which is where you are now. With a USB stick you can start with the examples in the Handbook and then test your own ideas. I am quite confident that this will enable you to learn about ZFS.
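For example, something along these lines lets you practice without touching zroot at all (the device name is only a guess, check which one your stick really is, and note that creating the pool erases it):
Code:
# a throwaway pool on the USB stick
zpool create testpool /dev/da0
# or, even more harmless, a pool backed by a plain file
truncate -s 2g /var/tmp/zfsplay.img
zpool create playpool /var/tmp/zfsplay.img
# practice with datasets and snapshots
zfs create playpool/data
zfs snapshot playpool/data@before
zfs rollback playpool/data@before
# throw everything away when done
zpool destroy playpool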
 
I hate not following my own post closely. Life is complex. Sorry nonetheless.

Thank you for the goodwill comment chrbr. I really do.
But a USB stick? What good will it do? I mean, it is just another drive (that gets hot and slow). I don't follow the logic there. :/

And how "The Boot Environments are for the complete system" as you said in your post to be taken at face value. Then it would have no lasting impact on the other BE on the system. But they do.
And I know from passing mentions of it on forum post that multiple pools can exist on the same drives. It is just very hard to read up on since the target audience is squarely at working admins. But will do some more attempts at reading the handbook and such.

Better advice would be to spin up a FreeBSD VM on some other platform I can manage instead. I was hoping I was just being stupid and had misread the instructions in the Handbook or something (not a native English speaker). A VM as a temporary safety net to tinker with ZFS and figure it out might be my real solution; I'm just not a fan of solving the problem that way. I have gone from FreeBSD, to learning ZFS, to spinning up a VM. It is just sad that there is no material about one of FreeBSD's biggest pros: something fundamental to the whole system (if you want it), and only accessible to people who already have enough knowledge before getting into FreeBSD.

Better turn back to UFS and partitioning at some point.

Running Windows on my gaming laptop with a VM to tinker with ZFS, then tackling a properly installed FreeBSD with my newfound ZFS knowledge? That's sad :c
I would really love to see a good general-use open source OS stick around, and this one has the right ingredients. Just not the users.

Anyways stay safe.
 
But a USB stick? What good will it do? I mean, it is just another drive (that gets hot and slow). I don't follow the logic there. :/
You can use it to practice on without breaking the installation that's on your hard disk. Set up a simple FreeBSD install on the USB stick and boot from it. Then play around with beadm(1) or bectl(8) to get a better understanding of how they work and how to use them. If you remove the stick you can simply boot your 'normal' system from the HD again. There's less risk of breaking your currently working system that way.

And I know from passing mentions in forum posts that multiple pools can exist on the same drives.
That's not common though. Multiple pools, yes; I have a system with 3 pools on it, for example, but each pool has its own set of drives. You will rarely find people with multiple pools on the same disks.

Better advice would be to spin up a FreeBSD VM on some other platform I can manage instead.
If you have that ability, then yes. By all means set up a VM, attach a couple of drives and play around with it. Extremely useful for experimentation. Nothing's lost if you break it.

That's in essence what @chrbr was trying to tell you. Find a way to experiment without potentially breaking your current install. Booting from a USB stick is just one way of doing that.

It is just sad that there is no material about one of FreeBSD's biggest pros.
There's tons of information. But that's also the problem, there's tons of information. Some good, some not so good. And reading about something is one thing, actually doing it and dealing with any problems that might happen is another. Theory will only get you to understand it, now you need to find a way to use that knowledge in a practical way. Time to get some real-world experience, and the only way to do that is to sit down, muck around with the commands and break everything :D
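To give you an idea of what "mucking around" looks like, the basic bectl cycle on a throwaway install is roughly this (the BE name is just an example):
Code:
bectl list                     # show the existing boot environments
bectl create myexperiment      # clone the currently running BE
bectl activate myexperiment    # make it the default for the next boot
shutdown -r now
# ...break things, then boot back into the old BE from the loader menu
bectl activate default
shutdown -r now
bectl destroy -o myexperiment  # throw the broken BE away, origin snapshot included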
 
beadm(8) still has some nasty bugs; bectl(8) is much safer to use instead, and it's in base. Boot environments are all datasets under pool/ROOT by convention. The recursiveness of snapshots taken by bectl(8) applies from that path downwards, i.e. it does not recurse into datasets outside pool/ROOT. It uses the local timezone to create the timestamp part of the snapshot's name, but omits the timezone name or offset, which is plain wrong BTW. To snapshot the whole pool, you can use zfs snapshot -r pool@`date -u +%F-%T-%Z`. Beware that using the timezone offset (lowercase %z) does not work because it contains a + plus sign, which is not allowed.
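To make that concrete, on a pool named zroot (the installer default) it would look roughly like this:
Code:
# recursive snapshot of every dataset in the pool, UTC timestamp in the name
zfs snapshot -r zroot@`date -u +%F-%T-%Z`
# check what was created
zfs list -t snapshot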
 
You can use it to practice on without breaking the installation that's on your hard disk. Set up a simple FreeBSD install on the USB stick and boot from it.
Well, of course! But I can trash my install to learn ZFS, since I got the drive for the purpose of learning. Only when I feel I can manage the system will I want to keep data worth saving on it.

And that is where I'm stuck. The disconnect I find with ZFS and the community around it is that you're expected to use it for servers and to be well into that field already.
Not pointing at chrbr here, but the storage community in general. The problem was never that I lack a drive to play around with, but that I lack the slightest idea of what to play around with, or read up on, to learn. Boot Environments? zpool? zfs? Snapshots?

It is hard enough to teach my mom how to send a PDF by e-mail. She would love trying to understand the instructions for backing up a system / creating an instance of a system before messing with it :O
Not.

Example 1 of ZFS being hard:
beadm(8) still has some nasty bugs; bectl(8) is much safer to use instead, and it's in base.
Going with base then. I don't think I have run into any bugs with beadm yet? But safer is better. And you have to love FreeBSD's way of being a whole OS from install, so I would rather learn the hard way and stick with what the OS includes.

Example 2: man is lovely, but hard. It expects too much underlying knowledge for most things. I'm finding it to be more of a cheat sheet for when you're already in the console.

Example 3.
applies from that path downwards, i.e. it does not recurse into datasets outside pool/ROOT. It uses the local timezone to create the timestamp part of the snapshot's name, but omits the timezone name or offset, which is plain wrong BTW. To snapshot the whole pool, you can use zfs snapshot -r pool@`date -u +%F-%T-%Z`. Beware that using the timezone offset (lowercase %z) does not work because it contains a + plus sign, which is not allowed.
Wha? Well, for taking scheduled snapshots, having the timestamps is great, and stuff like this would be great info for someone doing that. But imagine telling a user coming from the Mac or Windows side about the proper use of the + plus sign; you would scare them away. Unless I can't just name it plain "snappyname"? You're still not as scary as the terminal to me.

But example 4 here: trying to use a command can be really, really quirky.

No, but really, I have a question, since I cannot fully understand a core concept here.
Boot environments are all datasets under pool/ROOT by convention. The recursiveness of snapshots taken by bectl(8) applies from that path downwards, i.e. it does not recurse into datasets outside pool/ROOT.

That is what makes it only partially an environment and trips me up. This ROOT business and mount points, to be frank, need a newbie 101 paper or guide.
The mount point of ROOT = /
And "/home" is something like /home.
It makes me think home is sitting under ROOT. Am I crazy or something? /home. ROOT/home.

So when running a recursive snapshot I expect /home to be part of ROOT. ROOT is not trying to hide its purpose now, is it?
Maybe my admin knowledge just fails me, but that is what I think I learned in school.

If it were called a ROOT Environment, then yes, it would make sense that ROOT was being swapped, leaving everything under ROOT intact between changes.

Only after starting to read about mount points did I realize that this might be what messes me and many others up. There is another post here on the forum trying to call attention to how zroot works, where someone else had problems learning ZFS. But there are clear gaps in learning ZFS, zpool, etc.


 
Going to the FreeBSD wiki and checking Boot Environments, it says clear as day that these are excluded from boot environments:
/tmp
/usr/home
/usr/ports
/usr/src
/var/audit
/var/crash
/var/log
/var/mail
/var/tmp

And the minimum needed is:
zroot
zroot/ROOT
zroot/ROOT/default

But what controls this? Nothing more is said about how to change or configure it, and so a newbie is utterly confused. Me included.
So clear it up for me before I go messing around like a caveman. That would be admitting defeat and irresponsible use of a terminal.

From playing around with ZFS and from the limited admin and IT work I have done, my rule is: when in doubt, RTFM, or ask someone to do it for you. But that has not given me a clear idea of WTF is going on. And since I cannot have someone set it up and explain it to me: well, confusion.

It seems the intended purpose of BEs is to be used for the ROOT environment. I... this is the point where I start making up babble words, since I'm too confused to come up with something to say that is even an acceptable English sentence.

But I might get it now after some time working on it. Just not sure if I'm right.

Boot Environments are set up to only affect ROOT because that is the part that changes when upgrading the system; to check whether the current system works after the upgrade, it still needs the user data and so on to verify that everything worked out. That is the conclusion I have come to.

And since Unix is not going to mess with user space on its own, a BE does not truly give you a safe environment to mess about in by default, since the OS is not going to poke around in it; the admin has to do something to change that data.

But why not include it in the BE anyway? I cannot come up with a reason why you would NOT want to play around in an environment that can go off the rails and ruin data; if it works out fine, you can keep using that environment and discard the old one, and ZFS's clever data pointers mean it costs nothing in space. It really is a baffling choice for Boot Environments to be set up like this by default.

Is it so you can try out an update and still run a server for clients to use? And if you need to go back to the old Boot Environment, the user or rather client data does not roll back with it? Is that the whole reason for this default setup? To be deployed after a jailed trial first?

The ADMIN is responsible for not messing up (or for running it in a jail), since the underlying system is not meant to mess stuff up. Windo.

But again, this messed me up hard. As it would anyone not in the admin bubble, I would think.

So the "standard" way a FreeBSD system is set up comply to server the way Boot Environments are used for system updates on a live server?
And to use it in my wanted way of a Boot Environment to work. I need to change the core FreeBSD install more or less. But no where is this the explanation on wiki's or the forum. At least from my research.
Your expected to be able to figure it out? :)


Just talking about plain snapshots for a moment: I have played around with recursive snapshots of the whole zroot and used them to try doing rollbacks etc.
It ended with a lockup of the system one of the times. Kind of understandable, since it is doing a rollback on a running system.

But sometimes it accepts the command but does not roll back? Huh? It seems that to get a recursive snapshot to actually affect the whole system you need to roll back every single dataset individually; it is not enough to do zfs rollback -r zroot@snap1, or to start from zroot/ROOT/default@snap1. In general I find snapshots alone to be only good as a backup tool. A recovery tool? Anyway, I guess that is also the point of them.
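If I understand it right, it would take something like this untested sketch to roll every dataset back individually (and rolling back the dataset mounted at / on a running system still seems like asking for a lockup):
Code:
# roll every dataset in the pool back to @snap1,
# assuming every one of them actually has that snapshot
for ds in $(zfs list -H -o name -r zroot); do
    zfs rollback -r "${ds}@snap1"
done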

So maybe just saying "use -r" is not enough of a guide for a new user, yeah? That is the realization I seem to have stumbled upon after more reading and testing. Then you try beadm and find it is not really set up to be what is normally used on machines as "rollback points" or "recovery", and that it is possible to mess up user data and not have it roll back to the state you wanted. Table flipping and crying ensue. So Boot Environments are the way to go? But you would basically need to set up the ZFS system at install time to use it the way I would like? Being more like partitioning of OSes on the same pool, or whatever the right ZFS term is?

Again, ZFS and the tools it brings are really great! Any redundancy requires at least 2 disks, but it is still so powerful!
I hope more people will use it, if only it were not so admin-focused, or whatever my or other people's problem with learning it is. It seems more like an underlying Unix knowledge problem at play, with some quirkiness from ZFS that you simply have to learn (somehow?).


Since I began this post a few weeks ago I have started to get a better grasp of
zroot
zroot/ROOT
zroot/ROOT/default

But something confusing about it is that zroot part. zroot = pool, right? That is the name of the pool. The origin.
We leave zroot/ROOT empty and create the boot environments etc. under it.
The usage being to mount "default" at /.

OK, but then /usr/home is still part of zroot. Are those called virtual directories in ZFS, mounted on home etc. to make up the rest of the mount points? Or were they called virtual datasets? Again, I'm plainly and simply not reliable on this, since I'm still learning; I might be talking complete rubbish or using terms wrong, pulling information and knowledge from so many places.

If I ditched the standard layout that a normal guided ZFS install gives me, could I place everything under default? Then changing BEs really would make every single environment I make separate and "complete", acting like multiple instances of the OS on the same drive. Redundancy or corruption be damned. Am I wrong here?
Unix = hard. But workable, not hard and impossible to fix. Been there, done that. Windu.

I could mess around as much as I like without worrying about the other BEs. Well, unless a network cord is connected and I manage to screw up the internet. Ha ha.
I have not done what I'm talking about yet, but I think I get it. Just ready to be told off before doing it.

A one-liner in the BE wiki would make one think straight. Is that not my problem here, that BEs did not explain themselves more clearly, and what governs them?
Then I have to really learn how to mount stuff, since of course I won't be getting help from the FreeBSD installer.

This is the basic knowledge that made me trip up in the first place, and it is useful for Linux too. But is that really all I have missed about BEs? Boooo!

Scary, nasty things going on in the terminal. It is just very hard to learn on your own, since you're looked down on for not knowing the ins and outs. My schooling did not teach me about mounting properly, even though it was IT/server admin school and all. It made me use terminals, but not to this extent. Not at all.

But the payoff might be finally setting up a decent backup box on the side running ZFS, keeping images from cameras and such.
And then being able to tinker with more FreeBSD on the desktop side, without needing to get a rack of drives and send snapshots around.

Am I still completely off course? I think I have done my homework now; all that is left is setting up ZFS to my liking at install?
I'm not at all perfect and don't know my stuff here, so thanks for correcting me. <3
I don't like bashing a wiki without knowing I'm right first, but there really is so much confusion about ZFS in general among newbies.

Not very hard to see why being less than Unix-ready really means ZFS is out of the question to use. Yeah?
 
beadm(8) still has some nasty bugs; bectl(8) is much safer to use instead, and it's in base.
Not so long ago, I experienced just the opposite. I use bectl in scripts because it's in base, but when the time comes to update/upgrade my systems, I only use beadm.

Maybe bectl has become more robust since then; however, I've never seen any severe bug in beadm.
 
Sorry, these long posts might be a big ask to read through.

I'm trying to figure out Boot Environments. I naively believed they were the second coming of sliced bread, but they seem to be meant for updates, and not for what I thought was so great about them in my case.
Code:
# zfs list
NAME                 MOUNTPOINT
zroot                none
zroot/ROOT           none
zroot/ROOT/default   /
zroot/tmp            /tmp
zroot/usr            /usr
zroot/usr/home       /usr/home
zroot/usr/ports      /usr/ports
zroot/usr/src        /usr/src
zroot/var            /var
zroot/var/crash      /var/crash
zroot/var/log        /var/log
zroot/var/mail       /var/mail
zroot/var/tmp        /var/tmp
This is what a standard FreeBSD ZFS install looks like.

Naively going on my Windows and limited Linux knowledge, I'll explain/show what I would like to set up, but I have not found a way of doing it.
Code:
# zfs list
NAME                           MOUNTPOINT
zroot                          none
zroot/ROOT                     none
zroot/ROOT/mynewBE             none
zroot/ROOT/default             /
zroot/ROOT/default/tmp         /tmp
zroot/ROOT/default/usr         /usr
zroot/ROOT/default/usr/home    /usr/home
zroot/ROOT/default/usr/ports   /usr/ports
zroot/ROOT/default/usr/src     /usr/src
zroot/ROOT/default/var         /var
zroot/ROOT/default/var/crash   /var/crash
zroot/ROOT/default/var/log     /var/log
zroot/ROOT/default/var/mail    /var/mail
zroot/ROOT/default/var/tmp     /var/tmp
Then if I activate mynewBE, its own underlying /usr and /var are used. Does it make more sense?
It would probably look more like this in zfs list:

Code:
# zfs list
NAME                  MOUNTPOINT
zroot                 none
zroot/ROOT            none
zroot/ROOT/internet   none
zroot/ROOT/offline    none
zroot/ROOT/writing    none
zroot/ROOT/default    /
Yea?

Is this use case not feasible, or just uncommon? Why would it not be awesome to have multiple independent installs of FreeBSD on the same drive? Or on 100 drives? :c

I'm a great noob about this whole thing, but that is why I'm here, trying to figure out what I need to read up on, or whether I have it all wrong.

A BE acts like virtualization, but only at the storage level, while using the real CPU and GPU hardware directly (the computer, in essence), so it is literally the file storage system of the future: not simply data blocks, but smarter. So I love the idea of taking advantage of it and never fearing losing an OS, having multiple working systems on hand, even if all of them are on one drive.

The more common Linux distros today slam everything into one directory, as does Windows more or less, so I lack any knowledge here about file systems. And while learning about them I imagined ZFS was a great tool for this.

Sharing the storage of a backup disk and running multiple FreeBSDs on the main drive would be sweet; instead of sharing the computer via multiple users, give each person a boot environment instead. Or have multiple for myself.

I'm interested in running graphics, so jails and VMs are not perfect for that, GPU passthrough that is. I'm not going to try to run AAA games, just have access to acceleration for my own coding and hobbies.

:c
 
Code:
zroot/usr                 /usr
is not mounted. It's only there to make creating datasets for /usr/ports and /usr/src easy (and have them automatically mount in the right place). So the actual contents of the /usr directory are part of the zroot/ROOT/default dataset.

Code:
root@molly:~ # zfs get canmount zroot/usr
NAME       PROPERTY  VALUE     SOURCE
zroot/usr  canmount  off       local
root@molly:~ # zfs get mountpoint zroot/usr
NAME       PROPERTY    VALUE       SOURCE
zroot/usr  mountpoint  /usr        local

Code:
# zfs list
NAME                           MOUNTPOINT
zroot                          none
zroot/ROOT                     none
zroot/ROOT/mynewBE             none
zroot/ROOT/default             /
zroot/ROOT/default/tmp         /tmp
zroot/ROOT/default/usr         /usr
zroot/ROOT/default/usr/home    /usr/home
zroot/ROOT/default/usr/ports   /usr/ports
zroot/ROOT/default/usr/src     /usr/src
zroot/ROOT/default/var         /var
zroot/ROOT/default/var/crash   /var/crash
zroot/ROOT/default/var/log     /var/log
zroot/ROOT/default/var/mail    /var/mail
zroot/ROOT/default/var/tmp     /var/tmp

You don't want /usr/home, /usr/ports, /usr/src and everything under /var to be part of the boot environment. You want these to be shared between the different boot environments. With a boot environment you're only switching out the OS, either for a backup before patching, or to boot different versions of FreeBSD.

Why would it not be awesome to have multiple independent installs of FreeBSD on the same drive?
That's exactly what bectl(8) and beadm(1) allow you to do, with the default installation. There's no need to shuffle everything around to accomplish this.
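A rough sketch of what that could look like with the default layout (the BE names are just examples):
Code:
bectl create writing         # a clone of the running system, kept clean for writing
bectl create tinkering       # another clone, to be filled with distracting stuff
bectl activate tinkering     # make it the default for the next boots
bectl activate -t writing    # or activate one for a single boot only
bectl list                   # the Active column shows now (N) and on reboot (R)
# note: /usr/home, /usr/ports and everything under /var stay shared between them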
 
It really is just that simple? That golden goose was probably delicious. Haaay! Throw me a new SSD drive, brother! No, I will not get my own!

Going to solve it that way then: using snapshots as if recovering or setting up a new system, having multiple drives, and leveraging ZFS in other ways. Because why not tinker with FreeBSD.

It just feels silly having to GRUB or UEFI my way into OSes. I really wanted ZFS to deal with it for me (on the FreeBSD side of things). It would be nice on one-drive laptops and such. And drive bays are quite limited for me, and drives for that matter.

I do not mind creating a replica of the full environment before testing an upgrade on it. If the update starts trashing the user data, nothing is lost; just delete the whole environment, since it was a clean-room experiment that was allowed to go horribly wrong. It seems odd not to have ZFS used in such a way, I mean in the manner of jails / virtualization. Plenty of times you cannot truly test something without real hardware and user data that can be ruined "safely".

But if that's not the use case for BEs, then OK.

I really was not able to find any info about what I was wondering. I'm asking the question to get an answer to my tinkering and the reading I have done on ZFS and FreeBSD.

Not to be mean or trying to be difficult to deal with. Just a beginner :) who found a tool that looked shiny and golden.
 
If I'm understanding what you want (which I may not be, in which case, apologies), you want to be able to tinker and try stuff, then, if it doesn't work, go back to what you had before. It seems to me (again, I may be unclear on your use case) that a USB drive might be good for that.
As for ZFS, snapshots might be more useful to you than beadm.
A good, fairly short snapshot article:
https://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/ch06.html
(I don't know if it was written about FreeBSD, but the commands should work.)
 
beadm(8) still has some nasty bugs; bectl(8) is much safer to use instead, and it's in base. Boot environments are all datasets under pool/ROOT by convention. The recursiveness of snapshots taken by bectl(8) applies from that path downwards, i.e. it does not recurse into datasets outside pool/ROOT. It uses the local timezone to create the timestamp part of the snapshot's name, but omits the timezone name or offset, which is plain wrong BTW. To snapshot the whole pool, you can use zfs snapshot -r pool@`date -u +%F-%T-%Z`. Beware that using the timezone offset (lowercase %z) does not work because it contains a + plus sign, which is not allowed.
May I ask for your latest view of bectl v beadm? I've been using beadm for some time while upgrading with no issues. Is it worth changing to bectl at this stage? I'm just about to start implementing zfs snapshots - so I'd like to avoid any issues if possible. Thanks.
 
May I ask for your latest view of bectl v beadm?
In my opinion, they are interchangeable. The command-line options are the same; the only behavioral difference I've noticed is on the "destroy" command: beadm by default asks if you want to destroy the origin snapshot, whereas with bectl you need to add the "-o" option. This may have changed in the version of bectl in -CURRENT.
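For example (the BE name is made up):
Code:
beadm destroy oldbe       # prompts whether to also destroy the origin snapshot
bectl destroy oldbe       # leaves the origin snapshot behind
bectl destroy -o oldbe    # destroys the origin snapshot as well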

I'm just about to start implementing zfs snapshots - so I'd like to avoid any issues if possible.
Snapshots of the root dataset or "snapshots in general"?
Again my opinion, you can do snapshots of your root dataset, but I think it better to use beadm/bectl to create/control boot environments.
 
In my opinion, they are interchangeable. The command-line options are the same; the only behavioral difference I've noticed is on the "destroy" command: beadm by default asks if you want to destroy the origin snapshot, whereas with bectl you need to add the "-o" option. This may have changed in the version of bectl in -CURRENT.


Snapshots of the root dataset or "snapshots in general"?
Again my opinion, you can do snapshots of your root dataset, but I think it better to use beadm/bectl to create/control boot environments.
Many thanks - understood. I'm experimenting (for reasons I won't go into here) with snapshots stored on a USB drive - so I intend to use the standard zfs commands for snapshots, and run incrementally (say) weekly. But thanks again - it's good to get reassurance on hard-to-fix-if-they-go-wrong processes.
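Roughly what I have in mind, in case it helps anyone (pool and dataset names are made up, and I haven't settled on the exact flags yet):
Code:
# one-time full copy to a pool on the USB drive
zfs snapshot -r zroot@week-01
zfs send -R zroot@week-01 | zfs receive -u backup/zroot
# the following weeks, only send what changed since the previous snapshot
zfs snapshot -r zroot@week-02
zfs send -R -i zroot@week-01 zroot@week-02 | zfs receive -u backup/zroot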
 
Let me throw in my two cents: I recently found that bectl is now part of the OS, whereas I have used beadm in the past. In the past I never needed to go back to a former BE, but I like having the easy option to do so, just in case a serious error occurs. I have now switched from beadm to bectl for two reasons:
  1. I want to have the minimum number of pkgs installed, so I prefer parts of the OS
  2. it's very nice to configure freebsd-update to make an automatic snapshot of the BE before patching. This is a great add-on, as you will never forget to make a snapshot first ...
Where they are both not helpful is if you switch devices and want to rescue your former configuration and increase the patch level (at least to my limited knowledge). You could mount your old zroot with a zpool import ... -t old-zroot, but then you can't use be* to easily copy your configuration over. But that is also not the purpose of a BE ... my use case is more of a backup & restore use case (where you might want to use etcupdate).
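For what it's worth, importing the old pool under a different name and an alternate root usually looks something like this (names are examples):
Code:
# import the pool named "zroot" on the old disk as "oldzroot", mounted
# under /mnt/old so it cannot clash with the running system; if both
# pools are called zroot, use the numeric id shown by a plain "zpool import"
zpool import -R /mnt/old zroot oldzroot
# copy whatever configuration is needed, then release it again
zpool export oldzroot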
 
Peacekeeper2000 one thing I've found extremely useful is bectl or beadm mount. Basically you do a temporary mount of a boot environment (similar to your zpool import at an alternate mount point). If I've mucked up a config that causes issues, I reboot into the previous working BE, mount the broken one, fix rc.conf or whatever else needs fixing, unmount, reboot, and boot into the one I just fixed.
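Something like this, with a made-up BE name:
Code:
bectl list                      # find the broken BE
bectl mount broken-be /mnt      # temporarily mount it
vi /mnt/etc/rc.conf             # fix whatever caused the problem
bectl umount broken-be
bectl activate broken-be        # boot back into it once it's fixed
shutdown -r now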
 
Peacekeeper2000 one thing I've found extremely useful is bectl or beadm mount. Basically you do a temporary mount of a boot environment (similar to your zpool import at an alternate mount point). If I've mucked up a config that causes issues, I reboot into the previous working BE, mount the broken one, fix rc.conf or whatever else needs fixing, unmount, reboot, and boot into the one I just fixed.
In my case that would not have worked, as the BE is on a second device that first needs to be imported at a different mount point (otherwise zroot would conflict with zroot, old vs. new). But I agree that to fix a single config, a temporary mount of a snapshot is an easy way to get a working copy of a configuration back.
 