Here is an idea. I don't know whether it is a good idea. In a nutshell, it amounts to turning FreeBSD into a research operating system and trying out whether some of the design principles of Unix can be completely overthrown, but doing so in a compatible fashion.
One example: Since the first day of Unix, the way command line parsing is handled has been super simple. The shell knows how to glob (expand * and ?), and that's all it knows. Otherwise, the shell has no knowledge whatsoever of what the string arguments a program consumes mean. This is one of the reasons that autocomplete works so badly and requires so many setup files to get it to work half-decently. For example, "ls foo<tab>" tries to autocomplete all possible file names that start with foo, which happens to be the correct functionality. On the other hand, "ping foo<tab>" does the same thing, but the shell doesn't know that ping's first argument is not a local file but a node name. And when you get into options, parsing and autocomplete go completely haywire: "ls -<tab>" will not try to autocomplete all possible options, much less give help on what options might be available. That is, unless there are files in the current directory whose names start with "-", in which case the result will be very much not what the user intended. The extreme example is this: If you create a file named "-Rf" in your current directory, then "rm *" becomes insanely dangerous. That's because the rm program does not have any clue that the first argument "-Rf" was actually a file name that resulted from a glob, not an option. Finally, there is no help available at the command line without actually running the executable. It is impossible to implement a special character (such as Control-?) that would ask a program from the command line: "what input would be valid here" or "what does this argument or option mean".
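To make that information loss concrete, here is a minimal C program (just a stand-in for rm or any other utility) that prints what it actually receives. By the time main() runs, a "-Rf" produced by glob expansion is byte-for-byte identical to a "-Rf" typed as an option; the distinction is simply gone.

    /*
     * echoargs.c -- a stand-in for rm or any other utility.
     * The program sees only an array of strings; it cannot tell whether
     * "-Rf" was typed as an option or produced by the shell expanding "*"
     * against a file that happens to be named "-Rf".
     */
    #include <stdio.h>

    int
    main(int argc, char *argv[])
    {
        for (int i = 0; i < argc; i++)
            printf("argv[%d] = \"%s\"\n", i, argv[i]);
        return (0);
    }

Running "touch -- -Rf" and then "./echoargs *" in a scratch directory shows "-Rf" arriving exactly as if the user had typed it as an option.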
So here is my rough proposal: Change all CLI programs that ship with the FreeBSD base to use a standardized command line parser. One that understands the difference between options and arguments, knows the domain of arguments for autocomplete, help and globbing (so "ping *" pings all hosts on the local network, instead of trying to ping all file names), and has a built-in standardized help facility with good i18n support. One way of implementing this is to extend the act of "executing" a program so that the shell can peek into the command line parser of each program's executable file, just like a shared library can be read on the fly. It could also be implemented by running the program's main() in a coroutine fashion. Clearly, the current ABI of "main(int argc, char* argv[])" would have to change, while the existing ABI would need to remain supported for existing code.
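As a thought experiment, here is one hedged sketch of what such a machine-readable command description might look like. All type names and the section name are invented; nothing like this exists in FreeBSD today. The idea is a constant table compiled into its own ELF section, so the shell can map the executable and read the description without running any of the program's code, much as the runtime linker reads a shared library's dynamic section.

    /*
     * Hypothetical declarative command description.  All names here are
     * made up for illustration; this is not an existing FreeBSD interface.
     */
    #include <stddef.h>

    enum arg_domain { ARG_NONE, ARG_FILE, ARG_HOSTNAME, ARG_NUMBER, ARG_STRING };

    struct cli_option {
        const char      *name;        /* e.g. "-c" */
        enum arg_domain  domain;      /* domain of the option's value, if any */
        const char      *help_msgid;  /* key into an i18n message catalog */
    };

    struct cli_description {
        const char              *progname;
        const struct cli_option *options;
        enum arg_domain          positional;  /* domain of plain arguments */
        const char              *help_msgid;
    };

    /* For ping, the shell would learn that plain arguments are host names. */
    static const struct cli_option ping_options[] = {
        { "-c", ARG_NUMBER, "ping.opt.count" },
        { "-i", ARG_NUMBER, "ping.opt.interval" },
        { NULL, ARG_NONE,   NULL }
    };

    __attribute__((section(".cli_description"), used))
    static const struct cli_description ping_cli = {
        .progname   = "ping",
        .options    = ping_options,
        .positional = ARG_HOSTNAME,
        .help_msgid = "ping.help"
    };

With something like this in place, "ping foo<tab>" could complete host names, "ping -<tab>" could offer the known options with their catalog help strings, and "ping *" could glob over hosts instead of files.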
There is precedent for this: VMS had a standardized options/argument parser. That's why all VMS programs are so consistent in their use of options and arguments, whether supplied by Digital or by the open source community: once it becomes easier to just use the OS's default parser, everyone does it, and programs automatically behave consistently. And because complexity (such as options that only apply to a subset of arguments) could be implemented just once (in the standardized parser) and then amortized across all programs, it was possible to improve the syntax and semantics of command line parsing in new versions without having to rewrite lots of code.
Great, once we have that done, do the next step: Decide on which shell is the best one to use, and make that shell callable as a library. In a nutshell (pun!), the shell becomes a data-driven library with a standardized API, and the thing we normally use at the command line is just one instance of it, where the "data" it is driven from is the set of all programs (executables) on the path. Now programs that use a command line internally (such as debuggers) don't have to write their own command parsing and decoding any longer. The GNU readline library is a small step in this direction, but what it is lacking is shell functionality such as variables, quoting, loops, if statements, and so on. I think we would start seeing a whole new set of CLI-driven utilities for things like network management, log analysis, root cause analysis, and so on.
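Purely to make the idea tangible, here is a hedged sketch of what such a library's C interface might look like; every name below is invented, and an embedded debugger or a network-management tool would be a typical consumer.

    /*
     * Hypothetical "shell as a library" interface.  None of these types or
     * functions exist today; they only illustrate the shape of the idea.
     */
    #include <stdbool.h>
    #include <stddef.h>

    struct shell;                          /* opaque interpreter state */

    typedef int (*shell_cmd_fn)(struct shell *, int argc, char *argv[]);

    struct shell_cmd {
        const char   *name;                /* e.g. "break", "step", "print" */
        shell_cmd_fn  handler;             /* callback into the host program */
        const char   *help;
    };

    /* Create an interpreter driven by the host program's command table. */
    struct shell *shell_new(const struct shell_cmd *cmds, size_t ncmds);

    /* Evaluate one line: variables, quoting, loops and ifs come for free. */
    int           shell_eval(struct shell *sh, const char *line);

    /* Run a read-eval loop with completion and help wired in. */
    int           shell_run_interactive(struct shell *sh, const char *prompt);

    bool          shell_get_var(struct shell *sh, const char *name, char **value);
    void          shell_free(struct shell *sh);

The interactive shell we type at every day would then just be the instance whose command table is synthesized from the executables on $PATH and their embedded command descriptions.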
After a few years, this could be transmuted into new ways of writing RPC mechanisms and protocols, by making the data that describes a program's option parsing, its interaction with the shell, and its internal mechanisms available over the network. Think of it as CORBA and EJB (Enterprise JavaBeans), but on steroids and tied into basic OS operation.
There are many other possible examples. One is to get rid of today's (POSIX) file system interface completely. Move to immutable objects without names (internally using OIDs), and a catalog system based on a KV store, which among many other things allows finding objects and their OIDs. Find ways to do aggregation of objects to efficiently handle tasks such as "many related objects" (today we have directories for that), "an object that exists in multiple contexts" (to replace hard and soft links), and "an object that can get appended to" (for today's logs). This immediately leads to the concept of named files having versions, which many OSes had, but Unix forgot to implement. This idea is very powerful if one combines it with transactional updates that create a new object from modifications of an old one ... if done right, today's notion of "backup" becomes a very different and much more efficient animal. Then combine it with an idea that at first may seem orthogonal: the only thing that can get stored on disk is an object (here meant in the programming language sense of the word). That notion takes over what used to be called "file format" or "record management" in OSes like MVS and VMS. Usually, the "object in memory" is a container class (such as a list or vector or database table) of smaller objects. As an example, what today is a file full of C source code (stored as ASCII characters, with a newline to indicate where a new source line starts) might get stored as a list of C statements, each of them stored as a graph of that statement's parse tree. When combined with the earlier idea that every version of a source file is an immutable file system object, the whole notion of "editor" or "compiler" becomes much more powerful. Today's IDEs do a lot of this manually, but maybe it would be good if this were the universal format for all files. Obviously, existing flat files (arrays of bytes) would have to be supported for backwards compatibility.
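To give the first half of this a concrete shape, here is a speculative sketch of an object-store interface built around immutable objects, OIDs, a KV catalog, and transactional versioning. Every type and function is invented for illustration; it is the shape of the idea, not a design.

    /*
     * Hypothetical object-store interface: objects are immutable and named
     * by OIDs; a KV catalog maps names and other attributes to OIDs;
     * "modifying" an object means creating a new version and swapping the
     * catalog entry transactionally.  Nothing here exists today.
     */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct { uint8_t bytes[16]; } oid_t;   /* opaque object identifier */

    /* Store an immutable object; its OID is assigned (or derived) at creation. */
    int  obj_create(const void *data, size_t len, oid_t *oid_out);
    int  obj_read(oid_t oid, void *buf, size_t buflen, size_t *len_out);

    /* Catalog: a KV store mapping names (and other attributes) to OIDs. */
    int  catalog_put(const char *key, oid_t oid);
    int  catalog_get(const char *key, oid_t *oid_out);

    /*
     * Transactional update: create a new version derived from an old one,
     * then swap the catalog entry only if it still points at the expected
     * old version.  Old versions remain reachable.
     */
    int  obj_new_version(oid_t old, const void *delta, size_t len, oid_t *new_out);
    int  catalog_replace(const char *key, oid_t expected_old, oid_t new_oid);

In such a scheme, a "backup" is little more than remembering which catalog snapshot you care about, since object versions themselves never change once written.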
These are the kind of things that would take a smart group of CS researchers (a big prof, a few assistant profs, a few postdocs, a dozen grad students) a few years to plan, bounce off people, prototype, and then get to production. It's the kind of thing that could be done by a university research group and get a few PhDs and MSs done. Except today it couldn't: academic systems research has fragmented way too much and has moved too far into specialized subfields (such as storage, networking, and distributed systems), and there is no funding mechanism for "make general-use OSes more useful and extensible".
The BSDs, being well engineered, having a coherent core group (Hi Kirk!), and traditionally having strong links to the research community, would be the perfect platform for this. Doing this in Linux would be impossible, because Linus would have one of his famous hissy fits. And the FAANGs don't care: their scale is such that an individual OS node is no longer important enough to worry about; they instead build much larger systems (for examples, look at gRPC, MapReduce and Borg, which inspired the likes of Hadoop, Docker and Kubernetes).