Maybe a web of trust of all committers might help.
(Phil Zimmermann, cited on Wikipedia: Web of trust)
The cryptography web of trust describes a mechanism for implementing how trust is transferred between people. That's a typical computer problem, solvable by algorithms. It does not explain how trust is created in the first place; that is a sociological and psychological problem, for which solutions exist, but they don't use computers.
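The transfer part really is algorithmic. As a minimal sketch (the names, the data shape, and the depth limit here are my own illustration, not GnuPG's actual trust model, which additionally distinguishes marginal from full trust), a key can be considered trusted if a chain of signatures of bounded length connects it back to my own key:

```python
from collections import deque

def trusted_keys(my_key, signatures, max_depth=3):
    """Return the set of keys reachable from my_key via signature chains.

    signatures maps a signer's key to the set of keys that signer has signed.
    A key is trusted if some chain of at most max_depth signatures links it
    to my own key (a simplified stand-in for a real web-of-trust policy).
    """
    trusted = {my_key}
    frontier = deque([(my_key, 0)])
    while frontier:
        key, depth = frontier.popleft()
        if depth == max_depth:
            continue  # chain too long; stop extending trust from here
        for signed in signatures.get(key, ()):
            if signed not in trusted:
                trusted.add(signed)
                frontier.append((signed, depth + 1))
    return trusted

# Hypothetical keyring: I signed Alice's key, Alice signed Bob's,
# and Bob signed a key belonging to someone I have never met.
sigs = {
    "ralph@example.com": {"alice@example.com"},
    "alice@example.com": {"bob@example.com"},
    "bob@example.com": {"stranger@example.net"},
}
print(trusted_keys("ralph@example.com", sigs, max_depth=2))
```

With a depth limit of 2, Alice and Bob come out trusted but the stranger does not: the algorithm happily computes who trust *flows* to, yet every edge in that graph had to be created by a human signing a key, which is exactly the part the algorithm cannot do.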
Let me explain by giving two scenarios. Say I work in a big company, where my email is
ralph@example.com. Our department just hired two new engineers,
alice@example.com and
bob@example.com. I meet Alice and Bob; we talk about their skills, their preferences, what part of the project they are most interested in. We discuss what training is required, how they can learn the existing code base, the coding conventions, the mechanical process (build machine, source code control). In that interaction they act and react: I can see that they have prior experience (and sometimes that they are lacking experience, which is what training is for). At lunchtime, we discuss life: kids, broken cars, hobbies, family, all that stuff. I can see that they are real people with real concerns, strengths and weaknesses, predicaments and struggles. After a week or two of working together, I understand what makes Alice and Bob tick, why they work here, how they make decisions. Also, I know that Human Resources ran a background check on them, so I can be sure that they have not been in legal trouble for computer crimes and so on (*). I meet their families at group events, we go to a Christmas dinner together, I know them as people, not just as programmers. Eventually, this is where trust comes from, and our group lets Alice and Bob work on our project and contribute (check in code). And while we carefully check each other's work product (whether that be code reviews or design discussions), I fundamentally trust them to do the right thing: because I know that if they did the wrong thing, their dreams and aspirations (whatever those might be) would be shattered.
Underneath that is a basic social contract: Alice and Bob are working for Example.com, probably because they want that salary. Perhaps Alice wants to buy a faster BMW, and Bob needs money for his kid's ballet classes. They both understand that if their performance is bad or actively harmful (like putting a nasty hack into Example's source code), they will not only lose that salary, but they will go to jail. And having talked to them (and their significant others, and perhaps their kids), and having authenticated them (I use that word very deliberately) by checking their background, I think I know what drives their life choices. As an example, maybe Alice went to SUNY Stony Brook for her CS degree, and at a conference I talked to her advisor, and he mentioned that she is probably happier in California, because the rainy and cold weather in upstate New York didn't allow her to take her car to racetracks often enough, whereas Buttonwillow and Sonoma nearly always have sunshine. Maybe one day Alice comes to work all bandaged and beat up, and admits that on Saturday she rolled the car at the track, and her boyfriend had to take her to the emergency room and drive her home afterwards, so maybe she'll marry him after all. And she's really hoping to get that pay raise next month, because the new Porsche is really expensive. Or maybe Bob complains at work that our office schedule is not compatible with his kid's new clarinet classes, and asks whether we could move the weekly code review meeting from Tuesday to Wednesday; in exchange he'll do extra on-call bug catching shifts Thursday afternoon, when his wife takes the kid to soccer; and the fact that his kid is playing soccer is my fault, because I introduced Bob and his wife to my kid's former soccer club coach. You see the web of trust here.
Are Alice and Bob likely to be script kiddies, or full-time employees of a foreign intelligence agency? No. Theoretically possible, but very unlikely. I think I understand their motivation and goals in life; after working together (and discussing cars and kids), it's very difficult (and uncommon) to live a double life. If our work is of particular importance (such as a national security / military project, or important infrastructure), then I can even be sure that trained professionals have checked Alice's and Bob's background super carefully.
Now contrast this with the remote and anonymous style used in open source development: I'm working on a project, and I see that
alice@gmail.com and
bob@outlook.com send pull requests. I review their first commit, and it looks good, so we accept the change set. Over the next few months, they keep coding, and their work looks good. I know nothing about Alice and Bob ... I don't even know whether those are their real names, or just e-mail addresses. They might have kids, they might be saving up money to buy a new BMW, their aging dad might be in the hospital; I don't know any of these things. I might be able to infer what time zone they are in (from when they send code and when they react to e-mails), but even that is uncertain: they might be night owls, or doing their open source development at odd hours (maybe they have a day job). I know absolutely nothing about these people. It is perfectly possible that they are sociopaths or professional black hat hackers. It is also quite possible that they are really smart CS students, working on real-world projects to pad their resumes, so in a few years they can get those comfortable and well-paid jobs, like the Alice and Bob from my first scenario have today.
But the scary thing is: I have no reason to trust those two anonymous people. All I have to go on is their code submissions, and perhaps a small number of e-mails or discussion posts about purely technical issues. If one of them takes a 4-week break from coding, I don't know whether that's because they had to travel to stay with their uncle who is in the hospital after a stroke, or because they were off attending another course at the Elbonian Spy Agency's hacking school. There is no web of trust here; all I have is an e-mail address and a public key. Cryptography doesn't create trust, but it is a convenient way to take trust that was created offline and implement it.
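To make that last sentence concrete, here is a deliberately simple sketch (the keyring structure and fingerprints are my own illustration, not any real PGP or Git format): the actual trust decision happened offline, when I met a person and recorded their key's fingerprint; the software afterwards only enforces that record.

```python
# Trust is established out-of-band: after meeting Alice in person and
# verifying her key, I record her fingerprint. The code below does not
# create any trust; it merely checks new commits against that record.
# (Fingerprints here are hypothetical placeholders.)
KEYRING = {
    "alice@example.com": "FPR-AAAA-1111",
    "bob@example.com": "FPR-BBBB-2222",
}

def is_authentic(author_email, signing_key_fingerprint):
    """True only if the commit's signing key matches the fingerprint
    we recorded offline for that person; unknown people always fail."""
    return KEYRING.get(author_email) == signing_key_fingerprint

print(is_authentic("alice@example.com", "FPR-AAAA-1111"))  # recorded key
print(is_authentic("alice@example.com", "FPR-ZZZZ-9999"))  # wrong key
print(is_authentic("mallory@example.net", "FPR-AAAA-1111"))  # never met
```

Everything interesting happened before the first line of code ran: how "FPR-AAAA-1111" came to be bound to Alice is the offline, human part that no algorithm supplies.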
The solution to writing good (and trustworthy!) software doesn't lie in technology. It lies in interpersonal relationships. There is a reason the two best books on Software Engineering don't concern themselves much with things like programming languages or how many spaces to indent, but with how to work together. The two books are "The Mythical Man-Month" and "Peopleware: Productive Projects and Teams". You can read K&R, Stroustrup and Steele until you have completely absorbed them; if you don't understand how to build social groups that work together, it will amount to nothing.
(*) Footnote: Yes, I've seen failures in this process too. One time I interviewed a new sys admin for our group, and we hired them. About a week or two after they started the job, HR and corporate security came to us, and the new employee was fired on the spot and walked out of the building. It turns out they had lied in their application about a previous job, at which they had been caught hacking, earning a felony computer crimes conviction and a few years in jail. Oops. The second failure was a newly hired employee who worked in an office geographically far away from us, so there was relatively little contact with them. They ended up stealing all our source code and attempting to sell it to what they believed was an underground hacking group associated with Chinese spy agencies. In reality, the "underground hacking group" were FBI agents who had seen the employee advertising the source code for sale on the dark web, and had set up a sting. I think they got 10 years in federal prison.