Actually, fully distributed source control means that people can make progress exactly WITHOUT talking to each other. And that's because in some open source development models, there is no central authority, no administration. For example, say there is a piece of code that knows how to count walking elephants. One programmer might enhance it for their own purposes, to also count flying elephants. Another programmer might start from the original base, and also count walking rhinos. Both can publish their changes. A third programmer might then pull both changes in, and create code that can count all pachyderms, walking or flying, without having to coordinate with any of them. None of them need to cooperate. It does create a lot of freedom, without any paranoia.
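For illustration, a rough sketch of that workflow with made-up repository URLs and branch names (assuming all three simply publish ordinary git repositories):

    # hypothetical repositories; none of the three developers coordinate
    git clone https://example.org/elephants.git pachyderms
    cd pachyderms
    git remote add alice https://example.org/alice/flying-elephants.git
    git remote add bob   https://example.org/bob/walking-rhinos.git
    git fetch --multiple alice bob
    # pull both published changes into one tree (an octopus merge)
    git merge alice/main bob/main

If the flying-elephant and walking-rhino changes touch different code, the merge goes through without anyone ever having talked to anyone.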
Yes, that's a nice idea about freedom. But there is a misconception: writing an OS is not an end in itself, done out of pure ambition for self-fulfillment no matter the outcome; it is a means to an end: to create something that actually works.
Now I perfectly understand that our ivory-tower league, namely the developers, is mainly interested in self-fulfillment - and that's perfectly alright. But then, issues of freedom and paranoia have no place in engineering, and should rather be discussed with a therapist.
Anyway, we already had exactly that, with Linux, in 1995 (and from what I have learned, it has not changed in the meantime):
[the following is all practical, real and authentic experience of my own - it is in no way made up]
Act 1.
It begins with the code not doing the expected thing. You read the source and you figure it should do the expected thing! Finally you figure out: the source is not what the object was built from! Somewhere there was a change - nobody knows where, nobody knows what, nobody knows why - and the version numbers are a chaotic heap, so you never know what you are actually running.
Act 2.
Then, if you finally go and find some source to compile yourself, to at least get an object that matches your source - there is no way to figure out whether this source is the appropriate one, matching the rest of the system. Because it is a bazaar: there are lots of sources you can choose from, and lots of versions of each. And, specifically, there is no monotonic numbering, so you cannot just read the commit log in sequence to understand what was developed and how we got here.
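To illustrate with made-up history: an old CVS/RCS-style log at least numbers revisions monotonically per file, whereas a git log only gives you opaque hashes whose order you can reconstruct solely by following parent links:

    # CVS/RCS style - you can read it in sequence
    revision 1.3
    revision 1.2
    revision 1.1
    # git style (hypothetical hashes) - no inherent numbering
    9f3c2ab count flying elephants too
    41d07be pull in walking-rhino support
    c07aa19 initial elephant counter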
Act 3.
During that process of looking into the source, you practically always find a bunch of bugs, mistakes and coding errors along the way. Some are obvious mistakes and could just be corrected. But most are related to and interdependent with other functionality - so to solve the matter, one would first need to talk to the author, to find out what they actually had in mind when writing it that way.
But, as You explained, we do not do this. We don't talk to each other anymore.
Act 4.
Finally, you may find a commit log that actually identifies the author of something. But that doesn't help you in any way, because all you get is an alias under which the author writes. The actual contact data is protected, and is only uncloaked to customers of the GitHub corporation.
But even then, if you manage to find some customer of GitHub, and manage to have a message dispatched to the author, you will most likely not get a reaction.
Because, as You explained, we do not do this. We don't talk to each other anymore.
Exactly this is the reason why I dumped Linux. And when I was pointed to FreeBSD, it became immediately obvious that here things were done the right way. There was a consistent codebase. The code in /usr/src would always exactly represent the running object, because it was built from there: straight down the line.
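For reference, this is roughly the cycle (as documented in the Handbook; details like single-user mode and configuration merging omitted) that keeps /usr/src and the running system in lockstep:

    cd /usr/src
    make buildworld
    make buildkernel KERNCONF=GENERIC
    make installkernel
    shutdown -r now
    # after the reboot:
    cd /usr/src
    make installworld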
And, most important: there were people! People who knew what they were doing - people with a skill level such that I still think I should rather call them demi-gods.[1] And these people were fully open! They were visible in public discussions, and they had signatures like in old-school Usenet, like scientists have: sometimes even with full address and phone number!
Now, today, all this has decayed. Gradually and slowly, but nevertheless. In the old days, if you sent a bug report, it got processed. Sometimes quickly, sometimes one or two years later. But it got processed.
Now we have tools to store away the bug reports, so that nobody needs to bother reading them.
And obviously, as You have explained, everybody is just throwing in their beloved features, without ever caring for anything else, and none of them, nor anybody else, feels responsible for the outcome. So who should be concerned about bugs at all? Obviously nobody.
Then, as Zirias described in his paper (paragraph 7), FreeBSD once had a culture of fixing and improving things over time. But no longer.
Nowadays, things are just thrown over the fence, I mean, into the codebase - and then the author disappears again. Take, for instance, the ULE scheduler. Since the beginning, people have been complaining that it does not work well under all conditions. And consequently there was great engagement in bullying those complainers and telling them they should just revert to the old scheduler and shut up. (There was no engagement in looking into the code and figuring out what actually goes wrong there.)
Then I was hit by the malfunction myself. So I grabbed dtrace and figured out what was going wrong (plus at least one additional bug found along the way). Obviously, nobody cares. I have patches for these - I do not know if they make the behaviour better overall; they just fix the malfunction I was running into. Obviously, nobody cares.
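(These are not my actual probes or patches - just a hedged sketch of the kind of dtrace poking involved, assuming the stock sched provider and a hypothetical workload name:)

    # how long do threads of the workload spend off-CPU between runs?
    dtrace -n '
        sched:::off-cpu /execname == "myworkload"/ { self->ts = timestamp; }
        sched:::on-cpu  /self->ts/ {
            @["off-CPU time (ns)"] = quantize(timestamp - self->ts);
            self->ts = 0;
        }'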
I finally managed to figure out the e-mail address of the original author, and he actually responded! But then, he seems to adhere to the Google code of conduct (bottom line: "we just want to be happy developers, and we do not want to talk about nasty and unpleasant things, specifically not about such abominable things as bugs and malfunctions"). So, as soon as the topic turned to bugs, communication stalled.
[1] Strange story as an aside: when I later got a job and started to do consulting, i.e. building Unix client/server infrastructures and Internet functionality for major European banks and insurance companies, I was considered a "guru" by my fellow consultants - because I was almost the only one there who would ever have looked into the source, who might even dare to write a kernel driver if the need arose. The others were mostly focused on reading release notes and doing installations and configurations by the book.
OTOH there were these demi-gods of the Berkeley OS: people like HPSelasky or Matt Dillon - there were a couple of dozen of those, and their skill was so many orders of magnitude above what I could imagine that I never even dared to talk to them.
Why does anyone have to be in charge? And from a software quality point of view: I would expect that anyone who writes any change follows good software engineering practices, writes a clear requirements document, reviews all artifacts, and tests their code after it is done.
Yes, I was already waiting for that test crap to come up. This does indeed seem to be the new mindset: let's write any crap we want, because we have tests in place, and the tests will tell us whether the crap works or not, and let's finally do away with any attempt at logical verification (commonly termed: "thinking").
Proper engineering means to logically think through the stuff, to understand what it does, in relation to the other components and the system as a whole. And this certainly cannot be done when you don't even know the other components.
I know that this is the point where it hurts - because people get very violent when you try to make them think - thinking is painful to them, and they want to avoid it.
The fact that you can mix and match changes from many sources doesn't have to reduce quality, if the developers have a mindset of creating high-quality software.
It is all about the mindset - and the mindset is that we have tests in place as a rope to catch us when we fall. And you behave entirely differently when you know that you are protected: you no longer strive to work error-free; you create more crap, because you think nothing too bad can come of it. High quality is already abandoned at that point.
But also, the problem is: tests don't catch you when you fall! Tests can only protect against re-introducing problems that are already known (and fixed). Because a test needs to be written first, and it can only be written if somebody has already thought about the possible malfunction that should be tested against.
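A contrived sketch of what I mean (hypothetical code, not taken from any real test suite): the regression test guards exactly the one case somebody already reported, and the unanticipated case sails straight through a green test run.

    # divide() was once reported broken for negative dividends,
    # so a regression test exists for that case - and only for that case.
    divide() { echo $(( $1 / $2 )); }

    test_divide_negative() {
        [ "$(divide -6 3)" = "-2" ] && echo PASS || echo FAIL
    }

    test_divide_negative    # PASS - the suite is green
    # divide 1 0            # nobody thought of this one; it still blows up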
I know the agile culture came up with a dogma that there should be at least as many LOC of tests as of active code, and so they started to generate their test code automatically. Just great - so there is zero skill going into the tests, and you could just as well leave them out and get the same result.
But, overall, I think we must understand that the ambitions of the users and those of the developers point in exactly opposite directions.
The users choose FreeBSD for reasons like those I mentioned above, or those Zirias mentions in his paper; many of them choose FreeBSD deliberately while running away from Linux (for reasons like those mentioned above), and therefore have no interest whatsoever in getting the Linux workflow back. The main interest of the developers, to the contrary, is to put FreeBSD where it belongs, as just another Linux distribution - for very obvious reasons: the more FreeBSD becomes Linux, the easier porting becomes.