(Talking about the Linux kernel)
Somewhere there was a change - nobody knows where, nobody knows what, nobody knows why - and version numbers are a chaotic heap, so you never know what you're actually running.
At least up to and including Linux kernel 2.4, version numbers were exceedingly clear and numerically increasing, and could be found in the names of the tar files at kernel.org. That goes up to about 2005 or so. After that, I stopped compiling my own Linux kernels.
(about distributed version control enabling distributed development)
But, as you explained, we do not do this. We don't talk to each other anymore.
What I said is: We can now do development and re-merge development without central coordination. That doesn't mean that we have to stop using central coordination; git also works perfectly well in a centralized model (with a single master repository). Nor does it mean that one developer can't talk to another developer. It only means that they don't have to do these things.
Finally, you may find a commit log that actually identifies the author of something. But that doesn't help you in any way, because all you get is a cipher under which the author writes. The actual contact data is protected, and is only uncloaked to customers of the github corporation.
But even then, if you manage to find some customer of github, and manage to have some message dispatched to the author, you will most likely not get a reaction.
Do not confuse github (a commercial corporation, now owned by Microsoft) with git (a free/open piece of software). I have used git quite a bit (professionally for about 10 years, personally for about 5), and never created a github account, and never worked on any source that came from github. Every piece of code I've used in git has the clear-text e-mail address of the committer for each commit. To my knowledge, the FreeBSD source will not be controlled by the github copy (which is a secondary copy, not the master), so this issue doesn't arise.
Now the question whether developers respond to e-mails: That's up to them. In volunteer-run free/open software development, there is no way to force them. With commercial software, a customer has means (contract law) to force software vendors to fix bugs, and vendors have means to force developers to do so (as developers are employees, the paycheck is a pretty good carrot and stick). But the question of what version control system is being used doesn't change the relationship between user, bug and developer.
(About software quality)
It is all about the mindset - and the mindset is that we have tests in place as a rope to catch us when we fall. And you behave entirely differently when you know that you are protected: you no longer strive to behave error-free; you create more crap, because you think nothing too bad can happen from it. High quality is already abandoned at that point.
Please read what I said. The thing that determines code quality is the mindset of the developer. That encompasses many things. For example, having clear requirements (knowing what the code is supposed to accomplish). For example, creating well-crafted, easily understandable, and maintainable artifacts. And running tests to validate that these goals are being met. Tests are part of the whole picture. You can't "test quality into software", but without any tests, you don't even know whether you have met your quality goals.
And to be clear: When I say tests, I don't only mean automated tests (which are part of the source code). Just as important, I think, is a dedicated team of testers, in a corporate setting disconnected from engineering (they report to a different VP, so there is no temptation to fake test results), good test plans, and room in the schedule and budget for testing. You talk about counting the LOC of tests; I don't like that metric at all. In my mind the correct metric is: for every software engineer on the payroll, you should have about 2 testers on the payroll.
But also, the problem is: tests don't catch you when you fall! Tests can only protect against re-introducing problems that are already known (and fixed). Because a test needs to be written first, and it can only be written if somebody has already thought of the possible malfunction that should be tested against.
Nonsense. If you write tests this way, your software process is broken. Tests are there to validate that requirements are being met. Example requirement: "This piece of software shall count the number of elephants in the zoo, and print a non-negative integer when run from the command line as ./count_elephant. If the computer is not installed in a zoo, it shall crash with a clear English error message. If the number of elephants is below 10, the count shall match exactly. If it is between 10 and 99, it shall match within 10%, and at 100 and above within 5%." How do you test this? Your test team sets up a fake zoo, catches a few elephants, and tries various scenarios (like 0, 9, 11 and 42 elephants), runs the program 100 times each, and performs statistics on the results. They could run regressions, making sure the accuracy doesn't get worse with new versions, and check that the running time of the program is within reason. They could run the program at an aquarium, and check the spelling of the error message. This is testing. It has to be driven by requirements, not by last week's bug.
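To make the point concrete, here is a minimal sketch of what a requirements-driven accuracy check for that example could look like. The tolerance rules come straight from the requirement above; `count_elephants` is a hypothetical stand-in for the real program (a test team would run the actual binary against a staged zoo instead).

```python
def within_tolerance(actual: int, reported: int) -> bool:
    """Check the reported count against the stated accuracy requirement:
    exact below 10 elephants, within 10% for 10-99, within 5% at 100+."""
    if actual < 10:
        return reported == actual
    if actual < 100:
        return abs(reported - actual) <= 0.10 * actual
    return abs(reported - actual) <= 0.05 * actual

def count_elephants(actual: int) -> int:
    # Hypothetical stand-in for the program under test; in reality the
    # test team would invoke ./count_elephant and parse its output.
    return actual

# Run each scenario repeatedly, as the test plan above describes,
# and verify every result against the requirement.
for scenario in (0, 9, 11, 42, 500):
    for _ in range(100):
        assert within_tolerance(scenario, count_elephants(scenario))
print("all scenarios within required accuracy")
```

The key design point is that `within_tolerance` encodes the requirement itself, not any particular past bug - which is exactly the difference between requirements-driven testing and regression-driven testing.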
Old joke: A software company builds a bar. On the day before beta release, the tester walks into the bar, orders one beer, gets it and drinks it, all good. He orders zero beers, he orders 5 beers for his colleagues, he orders sqrt(-1) beers, he orders qwertyuiop beers, he orders 6.02e23 beers, and in all cases he gets the expected results. He signs off on the public release. The next day, the first real customer comes in, asks what time it is, and the bar explodes, killing everyone in sight. Oops.
But: None of this discussion has anything to do with SVN versus git.