FreeBSD 13 annoyances?

Indeed, you don't have EFI booting.

Concerning the home backup server, it's simple:
# gpart bootcode -b /boot/pmbr -p /boot/gptboot -i1 ada0
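If the layout differs, gpart show will tell you which index holds the freebsd-boot partition before you write the boot code (ada0 assumed here):
# gpart show ada0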

The live server in question is based on an MBR scheme, and I'm not accustomed to that scheme on FreeBSD (not to mention the UFS mirror). I can't answer for now.
 
Thank you.

As for the live server, I found my notes on how I created the gmirror. Does this help?

Code:
# create the mirror from the two disks, then put an MBR on the mirror device
gmirror label -v gm0 /dev/ada0 /dev/ada1
gpart create -s MBR mirror/gm0
gpart add -t freebsd -a 4k mirror/gm0

# BSD label inside the slice: swap at index 2, root UFS at index 1
gpart create -s BSD mirror/gm0s1
gpart add -t freebsd-swap -a 4k -s 4g -i 2 mirror/gm0s1
gpart add -t freebsd-ufs -a 4k -i 1 mirror/gm0s1

# MBR boot code, mark the slice active, then the slice boot code
gpart bootcode -b /boot/mbr mirror/gm0
gpart set -a active -i 1 mirror/gm0
gpart bootcode -b /boot/boot mirror/gm0s1

# filesystem with TRIM and soft updates, mounted for the install
newfs -t -U /dev/mirror/gm0s1a
mount /dev/mirror/gm0s1a /mnt
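# (not part of the original notes: once the system is installed onto /mnt, the
#  mirror module typically also needs loading at boot, and fstab needs the
#  mirror device names; sketch only, assuming the layout above)
echo 'geom_mirror_load="YES"' >> /mnt/boot/loader.conf
# /mnt/etc/fstab would then look something like:
# /dev/mirror/gm0s1a   /      ufs    rw   1  1
# /dev/mirror/gm0s1b   none   swap   sw   0  0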
 
Having read the man pages and the handbook, and verified this on a 13.0-RELEASE installed with an MBR scheme, I just have to point you to these commands, which should give you what you need:

# gpart bootcode -b /boot/mbr mirror/gm0
# gpart bootcode -b /boot/boot mirror/gm0s1


I have to say that MBR is obsolete, and I'd advise you to build a new server with a GPT scheme. And if you want RAID1, you can do that with the guided installation if you choose ZFS as the root file system.
 
Thank you. The reason I used MBR instead of GPT is this:

gmirror(8) stores one block of metadata at the end of the disk. As GPT partition schemes also store metadata at the end of the disk, mirroring entire GPT disks with gmirror(8) is not recommended. MBR partitioning is used here because it only stores a partition table at the start of the disk and does not conflict with the mirror metadata.

I've seen lots of people have issues with zfs, so that's why I stayed away from it.
 
So I upgraded one of my systems from 12.2-RELEASE to 13.0-RELEASE doing the standard procedure like always. After doing the first 'freebsd-update install' command to install the kernel, and rebooting after that, the system comes up with the new kernel, 13.0-RELEASE. When running freebsd-update install again to upgrade the rest of the system I get:

Code:
root@server:~ # /usr/sbin/freebsd-update install
src component not installed, skipped
Cannot identify running kernel
root@server:~ #


So obviously, it's having a problem determining the currently running kernel. Running sysctl -n kern.bootfile returns /boot/kernel/kernel.

All of that seems normal. I go look in the root file system and find... no boot directory. What the heck?
The system was installed with 12.x via the auto-ZFS option in the installer, so it created the ZFS structure for the system.
The system shows the usual bootfs, as I've always seen it for ZFS root systems:
Code:
root@server:~ # zpool get bootfs zroot
NAME   PROPERTY  VALUE               SOURCE
zroot  bootfs    zroot/ROOT/default  local
root@server:~ #

It can't find the kernel, yet it loaded it AND it loaded modules (including zfs.ko); even so, the system can't locate them anywhere. So now I'm stuck with a halfway-upgraded system where the kernel is 13, the userland is 12.2, and I can't get freebsd-update to upgrade the userland to 13 because it won't see it. I tried running with the --currently-running 13.0-RELEASE option and it still won't run.

So far, for me, 13.0-RELEASE isn't starting off well. Does anyone have any ideas on how to fix this messed up scenario? I'm afraid of doing a rollback because I don't know what it will (or won't) do.
 
There's the fact that now xorg gives me a black screen whenever I startx, and all I did was update to 13.0. Shouldn't have to find some fancy new config to keep X from breaking after a system update, but I do. Haven't found a solution yet.
 
I've seen lots of people have issues with zfs, so that's why I stayed away from it.
As much as I find "well it works for me!" type comments quite annoying... er, it works for me. :D I've used it for years now with no incidents (other than the early days when the ARC(?) config could be a bit picky, and RAIDZ2 being really slow for general use: don't do that unless you have lots of annoying little HDDs and you're really short of space), and it's come to my rescue on quite a few occasions. To be fair, UFS could also have come to my rescue if it had likewise been configured with snapshots etc., so YMMV, and one might argue it's personal preference.

The one thing I'm wary of is that it seems it's not well integrated with the paging system so putting swap volumes on ZFS seems to not be recommended; so I still have separate partitions for swapping (which are part of gmirror sets, which may or may not be sensible...)
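For reference, those swap partitions are just ordinary fstab entries pointing at the mirror devices; a minimal sketch, with an assumed mirror name:

Code:
# /etc/fstab: swap on a gmirror device (label "swap0" is just an example)
/dev/mirror/swap0  none  swap  sw  0  0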

Oh, and the other thing is that caution about upgrading pools. I'm always a bit nervous about the risk of forgetting to also update the loader in my EFI partitions, something which would be quite bad; as such I've written a wrapper for zpool to remind me, though it's anyone's guess whether or not it can save me from my own randomness. Yeah I know I could have a separate UFS /boot partition, which I used to do, but I found "but that's clumsy and ugly" took priority over pragmatism.
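Roughly along these lines, as a simplified sketch of the idea (the actual wrapper isn't shown here, and the device path in the reminder is only an example):

Code:
#!/bin/sh
# zpool wrapper: nag before "zpool upgrade" so the EFI loader doesn't get forgotten
if [ "$1" = "upgrade" ]; then
    echo "Reminder: after upgrading the pool, refresh the loader on the ESP, e.g.:"
    echo "  mount -t msdosfs /dev/ada0p1 /mnt && cp /boot/loader.efi /mnt/efi/boot/bootx64.efi"
    printf "Continue? [y/N] "
    read answer
    [ "$answer" = "y" ] || exit 1
fi
exec /sbin/zpool "$@"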
 
The one thing I'm wary of is that it seems it's not well integrated with the paging system so putting swap volumes on ZFS seems to not be recommended; so I still have separate partitions for swapping (which are part of gmirror sets, which may or may not be sensible...)
This should be resolved sooner rather than later. See this bug.
In reality, swap on ZFS is no more problematic than mounting anything else. It's worked forever on Solaris.

Let's not forget the system administrator's adage: Don't expand swap, just buy more RAM. ;)
 
Let's not forget the system administrator's adage: Don't expand swap, just buy more RAM. ;)
Memories of trying to cram 30-50 users into 8-16MB. It's hard to shake off that feeling of "I must have swap!" even if it does coexist with "argh, 4KB of my 64GB of swap is being used, what has gone so terribly wrong?!! I am teh worst sysadmin evar!!1"
 
All of that seems normal. I go look in the root file system and find... no boot directory. What the heck?
The system was installed with 12.x via the auto-ZFS option in the installer, so it created the ZFS structure for the system.
The system shows the usual bootfs, as I've always seen it for ZFS root systems:
Did you do an encrypted install? In that case your /boot probably lives on a separate boot pool. Make sure that's mounted, there seems to have been a period where this didn't happen automatically.
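Something like this should show whether that separate pool exists and bring it back (pool name assumed to be the installer's default, bootpool):

Code:
zpool import              # lists pools that are visible but not yet imported
zpool import bootpool     # import it; add -f if it complains about last use
mount | grep bootpool     # /boot should then be reachable again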
 
There's the fact that now xorg gives me a black screen whenever I startx, and all I did was update to 13.0. Shouldn't have to find some fancy new config to keep X from breaking after a system update, but I do. Haven't found a solution yet.
Looks very much like you did not update graphics/drm-kmod. Personally, I have 2 desktops and 1 laptop, all running 13.0 with Xorg and MATE. No problems, but after an upgrade, the kernel-bound modules should be rebuilt.
 
Just use the packages. There's no reason to build from ports for this. But you do need to check if it's been properly upgraded from graphics/drm-fbsd12.0-kmod to graphics/drm-fbsd13-kmod (which should automatically happen with a pkg upgrade). Especially if you perhaps used pkg-lock(8) to lock certain packages.
Yes, there seem to be two schools: one that builds everything from source and another that uses pre-built binary packages. But the essence here is the same: the 12.0 kmod should be replaced by the 13.0 kmod, one way or another.
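A quick way to check, assuming the package names mentioned above:

Code:
pkg info -x drm     # which drm kmod flavour is installed right now
pkg lock -l         # any locked packages that could block the switch
pkg upgrade         # should pull in graphics/drm-fbsd13-kmod on 13.0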
 
Building devel/doxygen and multimedia/gstreamer1 from ports failed on FreeBSD 13 for some obscure reason somehow related to the use of devel/bison. Building succeeds however when you redirect output to a file. Very strange.
I'm building as much as I can from ports, compiling with as many options enabled as practical. I had trouble with building devel/doxygen, too. Turns out I ran into a weird circular dependency: graphics/graphviz->lang/ruby27->devel/doxygen->graphics/graphviz.
After an hour of googling, I found an obscure Nabble page that suggested the following:
  1. Build lang/ruby27 (and all subsequent dependent ports) without the options that depend on devel/doxygen or graphics/graphviz.
  2. Next, along the same lines, build devel/doxygen without the options that depend on graphics/graphviz. At this point, depending on lang/ruby27 is OK, because it was built.
  3. And then finally, come back and build graphics/graphviz (roughly sketched in commands below).
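A rough sketch of what those steps boil down to in commands (the exact option names live in each port's config menu, so treat this as illustrative only):

Code:
# 1. ruby27 with the options that pull in doxygen/graphviz switched off
cd /usr/ports/lang/ruby27 && make config && make install clean
# 2. doxygen with its graphviz option switched off (ruby27 is now available)
cd /usr/ports/devel/doxygen && make config && make install clean
# 3. and finally, graphviz itself
cd /usr/ports/graphics/graphviz && make install clean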
My annoyance is Wayland. I compiled all the necessary packages, it launches from Konsole (under Xorg) into its own window. That window runs KDE on Wayland just beautifully, with all settings just fine. But running the exact same command from SDDM - just crashes. I have to ssh in from another computer, and restart SDDM. I did see on Gentoo forums that it may be helpful to run Wayland on another TTY/PTY/VTY... Figuring out how to do that on FreeBSD took some time, but it looks like vidcontrol() is an option to play with.
 
I'm building as much as I can from ports, compiling with as many options enabled as practical. I had trouble with building devel/doxygen, too. Turns out I ran into a weird circular dependency: graphics/graphviz->lang/ruby27->devel/doxygen->graphics/graphviz.
After an hour of googling, I found an obscure Nabble page that suggested the following:
  1. Build lang/ruby27 (and all subsequent dependent ports) without the options that depend on devel/doxygen or graphics/graphviz.
  2. Next, along the same lines, build devel/doxygen without the options that depend on graphics/graphviz. At this point, depending on lang/ruby27 is OK, because it was built.
  3. And then finally, come back and build graphics/graphviz.
My annoyance is Wayland. I compiled all the necessary packages, it launches from Konsole (under Xorg) into its own window. That window runs KDE on Wayland just beautifully, with all settings just fine. But running the exact same command from SDDM - just crashes. I have to ssh in from another computer, and restart SDDM. I did see on Gentoo forums that it may be helpful to run Wayland on another TTY/PTY/VTY... Figuring out how to do that on FreeBSD took some time, but it looks like vidcontrol() is an option to play with.
According to https://euroquis.nl//kde/2021/04/30/wayland.html, Plasma will soon be fully functional with Wayland.
Anyway, SDDM must be disabled.
 
In spite of random comments earlier I've been happy with FBSD 13 so far. The only "problem" was the old classic of a filesystem running out of space after I'd upgraded, which took me an indecently long time to realise (though I'm not sure why it didn't actually say so!) but overall it feels more solid and noticeably faster than FBSD 12.

Bit of a strange one last night though which is that it decided to reboot itself for no apparent reason. Not a crash, it was a clean reboot as I can see from the state of my zpools (one lives on a manually activated geli partition so it would've been unhappy if it'd had the rug pulled from under it) and I can see just far enough back through the recovered dmesg buffer that it was a clean reboot. I just have no idea why. Nothing in the logs, no emails, nothing interesting in process accounting nor any other clues.

It's possible it was something to do with that zpool. I've just acquired a new backup drive, a weatherproof USB thing, and I've created two pools on it, a small recovery pool with root/boot filesystem and a big backup pool which as I mentioned lives on a manually-started geli partition. I've been using the same scheme for years without any problem so I suspect it's unlikely and I'm only mentioning it in case any of it sounds at all familiar to anyone; I don't expect anyone to offer a thoughtful critique of my backup strategy, at least not here! The only thing that seemed out of the ordinary is a (very) small number of "CCB request completed with error" messages regarding my new drive which at a glance don't seem congruent with what smartctl has to say. I was in the process of doing a full refresh when the reboot occurred, i.e. four zpools active, two on the server itself (main pool, four-HDD RAID10 plus spare, online backup pool on a single drive), the other two on the removable USB drive as mentioned, geli, zfs send & recv active at the time. Didn't seem to be doing anything especially interesting; it'd finished its daily periodic stuff 10 minutes before and had just started the weekly, apparently being in the middle of rebuilding the locate database.

I'm not even sure offhand what there is that can do a controlled reboot. The UPS monitor can trigger a halt if it's almost out of power but it wasn't that (it wasn't a halt, there was no power cut and the UPS seems happy enough) so I'm a bit mystified. I haven't enabled crash dumps but as it wasn't a crash that wouldn't help anyway.

Weird.
 
Memories of trying to cram 30-50 users into 8-16MB. It's hard to shake off that feeling of "I must have swap!" even if it does coexist with "argh, 4KB of my 64GB of swap is being used, what has gone so terribly wrong?!! I am teh worst sysadmin evar!!1"
Many years ago I was a user of an IBM 3084 mainframe. One half of it was used for user workload (logged-in users using TSO, editing and compiling, and minor batch jobs). Because we were running some old software that could not be ported for XA (31-bit addressing), we had to run the machine with 24-bit 370 addressing, meaning we had only 16 MiB of physical memory. We used the second half of the machine to replace two 370/168s, one of which was exclusively for batch workloads, the second one for industrial control (and it was the highly specialized industrial control software that forced us to stay in 24-bit mode). At peak times, the machine had ~500 logged-in users, and that overloaded it so badly that response times were considered unacceptable: compiles of 1000-line programs took several minutes. At peak times, the machine was heavily swapping. And since supplies of fast 3380 disks were still low, it was swapping to 3350 disks. That was probably a good thing, since we had a handful of 3350s that had additional fixed heads installed (so a small number of disk tracks are always accessible, without arm movement), and those cylinders were used for swapping. Alas, each of those cylinders only gave a few MB of fast swap.

Now, you have to imagine this: A 16 MiB machine with 500 users (32 KiB per user!) was still capable of functioning, and compiling! Alas, it got slowed down by having to swap, even though it probably had dozens of MB of swap that didn't require disk seeks.

The solution was to throw money at it. Not just a little bit of money, but a ton of money. We bought a RAM disk from a Japanese manufacturer, which had the unimaginably large capacity of 144 MB, and was used for swap only. That capacity may seem laughable, but in those days the largest disk you could buy had a capacity of 2.5 GiB, and cost nearly $100K. With that additional swap disk, performance became bearable.

So swapping is not necessarily bad, if you are swapping to a sufficiently fast device. A million $ fast device. For comparison: The list price (in 1982 dollars) of the CPU was $8.7M.
 
Many years ago I was a user of an IBM 3084 mainframe. [...]
That makes interesting reading; thanks! I had some experience of a similar system, a 3090 of some description ("of some description" as I'm not 100% sure beyond that: it was over 30 years ago. I stood next to it once and was transfixed by the typically IBM big clunky power switch) which I think had three CPUs and "some" memory: 64MB? May have been 32+32MB (main+expanded) and a herd of 3380s and 3390s, but I was just Jane Random User who was trusted with TSO access and promptly ran up the department's bill. A lot. It supposedly had up to 2,000 users although "interactive" may be the wrong word to use; I think the vast majority were doing transactional stuff on CICS, TOPICS or the home-grown VISTA email & forums. It's hard to let that latter one pass without comment as I still haven't seen an email system that comes close to it in terms of functionality. They migrated to PROFS for some reason which was perplexing as it seemed a step backwards, though VM (on another 3090; MVS had a machine to itself) was a much nicer system to use than MVS which made me lose the will to live.

Anyway, it was remarkably quick, all things considered. Not so much at the wrong end of a piece of wet string sharing 9.6 Kbits with another couple of dozen people but that was hardly the mainframe's fault. Having said that, remote users preferred to dial into those Unix systems and through that overly-contended SDLC rather than the mainframe's own modem rack as the response times were better: I guess curses works better over very low-speed links (our modems had a max speed of 2,400 baud IIRC; the mainframe's may have been even less) than 3270 does.

Probably the slowest system I recall using was a Vax 11/785 at college. It wasn't a bad machine in itself, though not sure how much memory it had, but up to 100 students all trying to do compilations at the same time was a bit much for the poor thing. That said, it coped much better than the Suns we had in one of the labs that would grind to a complete standstill with as few as half a dozen users, though as they weren't all that different to the minicomputers I mentioned I have to wonder if they had some configuration "issues".

Also system names: a bit of a lack of imagination going on all round. Those minicomputers were Philips P9070s, and the IT group was called ISA. So our minicomputer was called P9070ISA. D: The college's Vax ran VMS and was called VMS1, later joined by... you guessed it, VMS2 and VMS3. The Vax with Unix was called unix1. Lower-case, of course. I thought they'd actually used some imagination by calling their PDP10s BLUE and ORANGE until I realised it's because they were coloured blue and orange. In fact they were mostly beige, it was just that uncommonly interesting-looking strip across the top that had anything worth being called a colour (though BLUE being a KL10 had flashing lights and was therefore A Proper Computer™ but I've only seen it from the back). The most "interesting" name belonged to the MVS box (well actually series of boxes: I recall it was quite long with a knobbly bit for the 3rd CPU) with the unmemorable name GBMTCP19, so unmemorable that I can somehow remember it 30 years later.
 
So I upgraded one of my systems from 12.2-RELEASE to 13.0-RELEASE doing the standard procedure like always. After doing the first 'freebsd-update install' command to install the kernel, and rebooting after that, the system comes up with the new kernel, 13.0-RELEASE. When running freebsd-update install again to upgrade the rest of the system I get:

Code:
root@server:~ # /usr/sbin/freebsd-update install
src component not installed, skipped
Cannot identify running kernel
root@server:~ #


So obviously, it's having a problem determining the currently running kernel. Running sysctl -n kern.bootfile returns /boot/kernel/kernel.

All of that seems normal. I go look in the root file system and find... no boot directory. What the heck?
The system was installed with 12.x via the auto-ZFS option in the installer, so it created the ZFS structure for the system.
The system shows the usual bootfs, as I've always seen it for ZFS root systems:
Code:
root@server:~ # zpool get bootfs zroot
NAME   PROPERTY  VALUE               SOURCE
zroot  bootfs    zroot/ROOT/default  local
root@server:~ #

It can't find the kernel, yet it loaded it AND it loaded modules (including zfs.ko); even so, the system can't locate them anywhere. So now I'm stuck with a halfway-upgraded system where the kernel is 13, the userland is 12.2, and I can't get freebsd-update to upgrade the userland to 13 because it won't see it. I tried running with the --currently-running 13.0-RELEASE option and it still won't run.

So far, for me, 13.0-RELEASE isn't starting off well. Does anyone have any ideas on how to fix this messed up scenario? I'm afraid of doing a rollback because I don't know what it will (or won't) do.

I have had this bug on every major upgrade for years and could never figure out why it happens. I always get scared when I see the

Code:
root@server:~ # /usr/sbin/freebsd-update install
src component not installed, skipped
Cannot identify running kernel
root@server:~ #

But the workaround is:

Code:
zpool import -f bootpool
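# /boot should then be visible again, and the remaining step can be rerun:
ls /boot/kernel/kernel
freebsd-update install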
 
I have had this bug on every major upgrade for years and could never figure out why it happens. I always get scared when I see the

Code:
root@server:~ # /usr/sbin/freebsd-update install
src component not installed, skipped
Cannot identify running kernel
root@server:~ #

But the workaround is:

Code:
zpool import -f bootpool

Thanks for your help, Sebastian. I don't know why I didn't think about that, but it worked. I'm gonna have to commit this one to memory. What a scary situation. Glad it wasn't for a server that was important!
 
According to https://euroquis.nl//kde/2021/04/30/wayland.html, Plasma will soon be fully functional with Wayland.
Anyway, SDDM must be disabled.
I do follow that blog, the guy is a FreeBSD committer. What I found out after playing with the software is that Plasma is plenty functional with Wayland. The issue is really launching it. Somebody did figure out how to launch xorg on FreeBSD: what needs to be run first, second, and so on. The first command needs to get a whole constellation of details right - the command itself, the args passed to it, where it's launched from (which TTY, or something different and FreeBSD-specific), what hardware drivers and devices to look for, what permissions are applicable, at what point in the boot process it should get launched - and all those ducks need to be lined up. And that's frankly my frustration: it looks great and it's plenty functional, but startup is awkward at best.
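For what it's worth, the kind of manual launch from a plain console that gets discussed looks roughly like this; it's an assumption-laden sketch (ConsoleKit2's ck-launch-session, dbus-run-session and a hand-made XDG_RUNTIME_DIR are the parts most likely to differ on your setup):

Code:
# from a text console (vty), logged in as a regular user
export XDG_RUNTIME_DIR=/var/run/user/$(id -u)
mkdir -p "$XDG_RUNTIME_DIR" && chmod 700 "$XDG_RUNTIME_DIR"
ck-launch-session dbus-run-session startplasma-wayland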
 
X.Org

… to launch xorg on FreeBSD, what needs to be run first, second, and so on. The first command needs to get a whole constellation of details right …

That's extraordinary.
It might have been truer years ago, but nowadays: things should largely look after themselves.
 
X.Org

… to launch xorg on FreeBSD, what needs to be run first, second, and so on. The first command needs to get a whole constellation of details right …

That's extraordinary.
It might have been truer years ago, but nowadays: things should largely look after themselves.
Yeah, and it looks like in the case of Wayland that got taken a bit too far - everybody expects someone else to take care of the problem, and the end result is that nobody takes care of it. Somebody needs to get those FreeBSD-specific Wayland launch ducks lined up, and put some effort into maintaining that code. Code won't maintain itself, y'know.
 