Is there a process which decides what software goes in "base/world" ?

I was told it is dangerous to change the root shell.
I use zsh as the root shell, and every script works fine, even /etc/rc.subr.
But yes, the number of shells in base should be limited.
Because otherwise you end up in a shell-scripting-flavor-hell.
 
There's only one shell scripting standard, and that's Bourne (POSIX). So IMHO, any different shell (C shells!) doesn't need to be in base.

I was told it is dangerous to change the root shell.
This is "dangerous" because your shell might be damaged by port upgrades (or might be unavailable if /usr/local resides on a different device, although that's unusual nowadays). Without a "base system" (looking at Linux), you wouldn't even have a chance to avoid that danger. You can easily avoid it by giving at least one of root or toor a shell that's in base.
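For instance, a quick way to see which shells root and toor currently use (the output below is from a hypothetical stock install and will vary between releases):
Code:
# grep -E '^(root|toor):' /etc/passwd
root:*:0:0:Charlie &:/root:/bin/csh
toor:*:0:0:Bourne-again Superuser:/root:
An empty shell field means /bin/sh, so here both accounts already use a shell from base.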
 
Now you're mixing two different concepts. Perhaps three.

First question: FreeBSD has to ship with at least one shell in base. One of the reasons is that users have to be able to start a login session and have at least a minimally functioning shell. It is also needed for those parts of the base system that are implemented as shell scripts, for example the rc system. The current implementation needs a Korn-style or POSIX-style shell. What exact shell implementation is used here is somewhat secondary, but there really is no benefit, and grave danger, in changing it.

That basic scripting shell, which is absolutely required, does not have to be any user's login shell, the one they are faced with when doing interactive CLI use. For historic reasons, BSD has always shipped a csh variant for that, and this is the default login shell of the root user. But note that the login shell of any user (including root) does not have to be the same as the scripting shell that's used for mandatory parts of the system.

I was told it is dangerous to change the root shell.
Yes, and no. Yes: If a clueless system administrator changes the root shell to something that breaks, then they can't easily dig themselves out of the grave they dug. That particularly includes the historically common case of a shell that is in /usr/local (not mounted if disks fail or file systems break or we are in single-user mode), or depends on shared libraries that can easily be broken. If this happens, they either need to reinstall the system, or get a functioning OS in another way (boot from USB stick or CD) and repair what they did, and that is a big hassle (and in the old days, it wasn't even possible).

No: A knowledgeable admin can change the login shell of the root user, and if they're careful, it will work fine. The classic example is to statically link that new shell, and store it in the root file system (or at least in the same file system that contains /bin). With today's systems typically not having multiple physical disks, and modern file systems being very reliable, it is even possible to use a dynamically linked shell that's in /usr/local. For example, on my machine, root uses bash. If you look at /etc/rc, it clearly uses /bin/sh.
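A quick way to check whether a candidate binary fits that bill is file(1); the phrase to look for is "statically linked". For example, the crunched rescue shell that base already provides (output abridged):
Code:
# file /rescue/sh
/rescue/sh: ELF 64-bit LSB executable, x86-64, ... statically linked, stripped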

I use zsh as the root shell, and every script works fine, even /etc/rc.subr.
Did you install the zsh package, and then change the login shell for root to be zsh? In that case, scripts such as /etc/rc* do not use zsh. The way they run is: whatever executes them looks at their first line (the #!/bin/sh shebang) and runs that interpreter.
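You can see this directly; the interpreter is named in the script itself, not taken from anyone's login shell:
Code:
# head -1 /etc/rc
#!/bin/sh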

On the other hand, if you actually overwrote /bin/sh with a copy of zsh (perhaps a link to it), then you are indeed using it. I would be mildly surprised if the system still worked, but pleasantly surprised: It would mean that the folks who write and maintain the /etc/rc system have done an excellent job of being independent of features of specific shells, and use only the common subset.

For this reason, one of the good practices of people writing shell scripts is to test their scripts with a minimal shell. The best one for this purpose is actually the ancient V7 Bourne shell. Note: I mean the implementation written by Steve Bourne at Bell Labs, not what we today call bash. I think pdksh can be put into a V7-compatible mode (perhaps it's a build option) that is good for developing scripts.
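A trivial illustration of why this kind of testing matters: brace expansion is a bash extension (not POSIX), so a script that relies on it behaves differently under a plain POSIX sh:
Code:
# bash -c 'echo {1..3}'
1 2 3
# sh -c 'echo {1..3}'
{1..3}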

But yes, the number of shells in base should be limited.
Because otherwise you end up in a shell-scripting-flavor-hell.
As a matter of fact, the correct number of shells in base should be 1, that being minimal. And since the traditional /bin/sh needs to exist, that should be the only one. In practice, this is not a good idea, since many users have come to rely on tcsh being present, and many existing installations have scripts (including login files and aliases) that rely on that.

I don't see any need to change anything here. If someone wants another shell, they're super easy to install, but they should not be in base. Personally, I always configure all my accounts to use bash, for consistency: It is a reasonably good shell (with a few annoying GNU-isms, but I can grit my teeth), and it is easily available on all machines I use.
 
In fact I copied "oksh" into /bin for eventual "recovery".
oksh uses libraries from the base system (namely /lib/libncursesw.so.9 and /lib/libc.so.7); so, for example, after an upgrade of FreeBSD's base system your recovery shell may require a library that isn't available anymore. No shell from the ports tree is a good idea for root's tasks.
 
So if I'm correct, I must look for statically linked shells without any dynamic dependency as a recovery shell?
But,
ldd /bin/sh
Code:
/bin/sh:
    libedit.so.8 => /lib/libedit.so.8 (0x1090d1ee000)
    libc.so.7 => /lib/libc.so.7 (0x1090eb8d000)
    libncursesw.so.9 => /lib/libncursesw.so.9 (0x1090dde4000)
 
Code:
/usr/ports/shells/bash-static # make patch
===> bash-static-5.2_3 is marked as broken: ld: error: undefined symbol:
rl_trim_arg_from_keyseq.
*** Error code 1
 
Dynamic linking with libraries is not a problem as long as they don't get broken by upgrades.
Who checks that an upgrade doesn't break dynamic linking? Rhetorical question.
 
If I recall, running ldd on it reveals that it isn't statically linked either.
Nope, it is a fancy blob of static ELF. It's not broken, works just fine. I use it on all my FreeBSD servers, even as root. And yes, I walked the walk of shame before when I had to boot the rescue CD to fix the login after an unsuccessful upgrade.

While on HP-UX I got very much used to ksh on a daily basis. But I must admit I like bash as an interactive shell (/me ducks down). And yeah, it's a big blob, but I don't care about that for an interactive shell. For scripts I stick to the classics: sh.

Dynamic linking with libraries is not a problem as long as they don't get broken by upgrades.
The problem is not that it depends on a library, but rather that it depends on a library from ports. And those break more easily; it's assumed that your base doesn't get screwed up that easily. You can always create an emergency rescue user whose shell is /rescue/sh, as sketched below. But if you can't use csh from base because of a dynamic library, you usually have bigger problems.
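A minimal sketch of such a rescue account (the name, home directory and comment are made up; a second UID-0 account needs pw's -o flag, just like the stock toor):
Code:
# pw useradd rescue -u 0 -o -g wheel -d /root -s /rescue/sh -c "emergency login"
# passwd rescue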
 
You walked the walk of shame, we all did.
Once I wanted to clean my system, so I did a rm -vfR /usr/local.
But my root shell zsh had its dynamic libraries there.

Now comes the interesting part,
Code:
# ldd /usr/local/bin/oksh
/usr/local/bin/oksh:
    libncursesw.so.9 => /lib/libncursesw.so.9 (0x822073000)
    libc.so.7 => /lib/libc.so.7 (0x822702000)
You see nothing in /usr/local ... except oksh itself, which I had put in /bin.
oksh only has dependencies in "base/world", which tend to be "more stable".
 
oksh only has dependencies in "base/world", which tend to be "more stable".
Yes, but: those dependencies may differ from what your system actually provides. An actual example can be found in a German BSD forum: someone set bash as the default shell. After upgrading from FreeBSD 12.x to FreeBSD 13.1 and rebooting (without having run the package upgrade yet), even a login required libncursesw.so.8, but that wasn't available anymore. So there was much more trouble upgrading the packages after upgrading world.
OK, I'm sure you would be able to solve this by yourself, but: a shell that is not compiled against the whole environment it is running on is not a good thing to rely on.
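A cheap pre-flight check before a major base upgrade is to compare what the ports shell links against with what the new base still ships. Something like this (output is illustrative, from a hypothetical 13.x box):
Code:
# ldd /usr/local/bin/bash | grep -v /usr/local
    libncursesw.so.9 => /lib/libncursesw.so.9 (0x26a1e021000)
    libc.so.7 => /lib/libc.so.7 (0x26a1e821000)
# ls /lib/libncursesw.so*
/lib/libncursesw.so.9
If a listed library version won't exist on the release you are upgrading to, fix the shell (or the login entry) before the reboot.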
 
As long as the system can boot and come up to the multiuser level (logins are possible), a lot of these issues can be solved by having a second root account (typically called toor), which uses a boring standard shell as its login shell. Traditionally, it uses (t)csh.
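A sketch of that, using a shell from base for toor (toor normally ships without a password, so it needs one to be usable as a fallback login):
Code:
# pw usermod toor -s /bin/sh
# passwd toor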
 
Nope, it is a fancy blob of static ELF. It's not broken, works just fine. I use it on all my FreeBSD servers, even as root.
Interesting. I use bash as my interactive shell on most of my machines. Mainly because I am quite used to it, but also so I can sidestep *bashisms* in my colleagues' scripts ;)

It is just strange that the bash-static package drags in *any* additional dependencies. Perhaps for installation I will just extract it manually from the package file.
 
So if I'm correct, I must look for statically linked shells without any dynamic dependency as a recovery shell?
But,
ldd /bin/sh
/bin/sh is part of the base OS and is updated at the same time as /lib/libc.so, so there's never a risk of having the shell linked to a non-existing version of libc (they are both built and installed in unison). A shell that's installed from a port/package would still be linked to the 'old' libc version from the previous major version.
 
It is just strange that the bash-static package drags in *any* additional dependencies.
Yeah, that's true. Even pkg says so. While readelf doesn't show any dependencies, I can't do this:
Code:
# cp /usr/local/bin/bash /a
# chroot /a /bash
Segmentation fault (core dumped)
#
So something's up with it. Not in the mood to debug why right now though. Always handy to have that rescue/2nd root account with a shell from base, that's for sure.

Just for demo if one needs a rescue user this does work:
Code:
# cp /rescue/sh /a/
# chroot /a /sh
Cannot read termcap database;
using dumb terminal settings.
#
But as I've mentioned, if one has problems with libc, login in general is the least of the problems. :)
 
Yeah, that's true. Even pkg says so. While readelf doesn't show any dependencies, I can't do this:
Code:
# cp /usr/local/bin/bash /a
# chroot /a /bash
Segmentation fault (core dumped)
#
So something's up with it. Not in the mood to debug why right now though. Always handy to have that rescue/2nd root account with a shell from base, that's for sure.

Just for demo if one needs a rescue user this does work:
Code:
# cp /rescue/sh /a/
# chroot /a /sh
Cannot read termcap database;
using dumb terminal settings.
#
But as I've mentioned, if one has problems with libc, login in general is the least of the problems. :)
chroot / /a works fine
 
chroot / /a works fine
Not sure what you're trying to achieve; I wanted to chroot to /a and let the /bash within the chroot be the shell being executed. A static binary without further dependencies (or configuration requirements) would survive that.

You're missing a couple of important /dev/ entries in your chroot(8).
It should not crash because of that though. Curiosity did get the best of me, so I checked it. As I wanted to have debug symbols, I compiled bash from ports with the static flag. This actually works:
Code:
# mkdir /a && cp /usr/ports/shells/bash/work/bash-5.2/bash /a
# chroot /a /bash
# echo /*
/bash
#
So bash-static from the binary package is built differently. It's a pity FreeBSD doesn't ship debug symbols to go along with the binary packages. It crashed because it killed itself; I'd assume it has some lib dependency that it fails on. The stack trace was a bit weird though.
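As an aside on the /dev remark above: if a chroot test like this ever does need device nodes, mounting a devfs instance inside the chroot is the usual fix (assuming the chroot lives in /a, as in the examples here); umount it again when done:
Code:
# mkdir -p /a/dev
# mount -t devfs devfs /a/dev
# chroot /a /bash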
 
It's a pity FreeBSD doesn't ship debug symbols to go along with the binary packages.
You need to build the port with DEBUG set. It's one of those 'hidden' options that's nearly always available but doesn't need to be in OPTIONS in the port's Makefile.
 
You need to build the port with DEBUG set. It's one of those 'hidden' options
I was talking about the binary packages, not ports. FreeBSD doesn't provide these; one has to build the port with an appropriate make.conf in place or toggle OPTIONS.
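For reference, the make.conf route is a one-liner; WITH_DEBUG builds the port with debug symbols and keeps the binaries unstripped (a sketch, to be placed in /etc/make.conf before building):
Code:
WITH_DEBUG=yes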
 
FreeBSD doesn't provide these; one has to build the port with an appropriate make.conf in place or toggle OPTIONS.
Doesn't need to be in OPTIONS or make.conf. You can just add it: make -DWITH_DEBUG install in the port's directory.
 
We don't understand each other. I'm talking about debug symbols for the binary packages, the packages that are installed by pkg. If I do pkg install bash-static I'll get the binary package without debug symbols. It would be nice if debug symbols were available for that binary package, something like a bash-static-dbgsym. This is not available, so whenever one wants debug symbols for a given package it has to be built from ports (the issue here is not how to build a port with debug symbols).

But since I did this bash test now for the sake of chroot: it seems bash-static is built differently when it's shipped as a binary package, as my chroot test shows. Which is interesting.
 
it seems bash-static is built differently when it's shipped as a binary package, as my chroot test shows. Which is interesting.
Indeed. There is something not quite right with it. I assumed that the port was rotting a little because the static flavour probably isn't used by many users so is lacking some testing.
 
the static flavour probably isn't used by many users so is lacking some testing.
Yeah, could be. All my physical boxes use ports so I never had any issue with it.
I might poke around that crash dump a bit, as it did catch my attention and spur some curiosity. :)
 
I poked around in gdb and figured out that it's crashing on an internal strcmp (arg0 being NULL). I don't want to go into too much detail; I found a way to make it work:
Code:
# mkdir /a
# cp -p /usr/local/bin/bash /a
# chroot /a /bash
Segmentation fault (core dumped)
#
# LC_ALL=C chroot /a /bash
# echo /*
/bash /bash.core
#
Interestingly enough bash 5.1 doesn't have this problem. Not that this helps anybody. :)
 