Open Source Review Security

So, we've all accepted the "many eyes" theory of open source, and we assume that those eyes find many defects and fix them, hence increasing security. But inside many very important security sectors (especially in the U.S.), that line of reasoning is said not to hold. To paraphrase a few words from the linked article below:

"Allowing people to review source code for even a minute is dangerous"

https://www.reuters.com/article/us-...e-widely-used-by-u-s-government-idUSKBN1FE1DT

I don't think the U.S. military has ever released source code just to increase its security. When is open source more secure, and under what conditions?

What does the FreeBSD forum community think about this? I know this has been hashed and rehashed many times here on the forum, but have we ever constructed a simple, one- or two-sentence statement that could be the definitive answer to this question? Is it that the source (the blueprint) allows an attacker to target specific areas and find a single defect, making it inappropriate for zero-tolerance situations, while that same blueprint increases the overall number of defects found and fixed, making reviewable source the better option in all situations except zero-tolerance ones? Perhaps I've answered my own question, but maybe some other forum members could offer their own all-in-one answers.

If my explanation suits you, then you'll agree that any software source used inside secure applications (especially in certain government quarters) should NEVER be given to (potential) adversaries. That seems to eliminate Unix and Linux for use in those quarters, and, I would think, antivirus software with exposed source.
 
So, we've all accepted the "many eyes" theory of open source, and we assume that those eyes find many defects and fix them, hence increasing security.
We have all? Hardly. I, for example, do not accept that theory. My theory would be: a thorough review by qualified people will increase security. The term "thorough" implies that the review and the reviewers are well managed, meaning trained, supervised, scheduled, their efficacy measured, and so on. Note that I'm not implying that the reviewers are paid employees, nor that the managers are their supervisors in the employment-law sense of the word. OpenBSD demonstrates that thorough review is possible without much money changing hands.

Hundreds of drunk graduate students with a temper but without a clue looking at the code do not increase security. It might increase the feeling of security in spectators.

Underlying all this is a deep question of the philosophy of software development. Security is one aspect of the quality of a software product (there are others, like correctness, performance, easy user interfaces, good support, ...). It is hard for quality to be reviewed into software after the fact. It needs to be designed in: All developers need to know what the goals for the project are (are we building a tank, or a minivan, or a race car, ...), and if security is one of the goals, they need to make their design and coding decisions accordingly. A review after the fact can only determine whether the quality goals were met or not. What do you do if they aren't? Redesign and rewrite.

Note: I'm not saying that reviews (in particular design and code reviews) are useless. But I am saying that they need to be part of the development process, not added after the fact; and the goal of the review has to align with the goal of the development process.

"Allowing people to review source code for even a minute is dangerous"
Taken to that extreme, the statement is also completely wrong.

The statement that is correct is the following: Allowing adversaries to look at your code is dangerous, because they can find security openings and exploit them. Obviously, there are tradeoffs here. If you don't have the money for a high-quality development process, you can use open-source software, or put your code out for public viewing, and hope that 99.9% of the reviewers are good guys (who look for holes and, when they find them, fix them or tell you about them), and only 0.1% of the reviewers are hackers intent on attacking you once they find a hole. In this scenario, you still come out ahead on average.
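
To make that tradeoff concrete, here is a toy back-of-the-envelope model in C. Every number in it (reviewer count, hostile fraction, odds of spotting a given hole) is made up purely for illustration; the only point is that when hostile reviewers are rare, expected friendly discoveries dwarf hostile ones.

    #include <stdio.h>

    int main(void) {
        /* Made-up illustrative numbers, not measurements. */
        double reviewers     = 10000.0;  /* people who actually read the code  */
        double hostile_share = 0.001;    /* 0.1% of them are attackers         */
        double p_find        = 0.02;     /* chance one reviewer spots one hole */

        double friendly = reviewers * (1.0 - hostile_share) * p_find;
        double hostile  = reviewers * hostile_share * p_find;

        printf("Expected friendly discoveries per hole: %.1f\n", friendly);
        printf("Expected hostile discoveries per hole:  %.1f\n", hostile);
        printf("Friendly-to-hostile ratio:              %.0f : 1\n",
               friendly / hostile);
        return 0;
    }

Note that the ratio works out to 999:1 no matter what value you pick for p_find. What the model ignores is the asymmetry that matters most in the zero-tolerance quarters mentioned above: an attacker needs only one unfixed hole, while the defenders have to fix them all.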

If my explanation suits you, then you'll agree that any software source used inside secure applications (especially in certain government quarters) should NEVER be given to (potential) adversaries. That seems to eliminate Unix and Linux for use in those quarters, and, I would think, antivirus software with exposed source.
Not all Unixes: Many are closed source. Or have you ever seen the source code to Solaris, AIX or HP-UX (unless you happen to work for these companies, or have NDAs with them)?

Yet the US military, other competently run militaries, and even more secretish entities (like the National Walrus Orefice and No Such Agency) make extensive use of Linux. But they also take other security precautions. For example, I once worked with people at an installation where every sys admin had an assault rifle on their back or leaning against their desk, and where vendor representatives (like field service technicians) were run through a metal detector and patted down to make sure they didn't bring guns in.

Security is not a simple black and white game.
 
Oh, just remember the early days of open source, with Richard M. Stallman and his penchant for refusing to use passwords. Yet he worked on DARPA-funded projects, with all of their strict protocols (which he could not stand).

The paradigm of the Cathedral vs. the Bazaar captures the two design philosophies and the benefits of each.

The freedom to brainstorm and think of truly inspired, creative solutions rarely comes out of adversarial, harshly critical environments.
 
Security is not a simple black and white game.

I'll grant you that!

Not all Unixes: Many are closed source. Or have you ever seen the source code to Solaris, AIX ... ?

I've never seen the AIX code. I've probably thought too much about this problem (how to position oneself in the software realm so as to maximize security and minimize costs), and I've never convinced myself there's a very good answer. You mentioned closed-source Unix, and I could throw Microsoft and Apple products into that cart as well, but then I have to wonder about the motives of each of those proprietary companies. Is it all good? It seems that I'd have to write my own operating system and then run it on my own brand-new silicon, which I of course have not revealed to anyone. Oops - being a fabless design shop of one, I've outsourced my hardware to Intel. Not a feasible option anyway, of course.

So, there are going to be many people/companies/entities involved as a matter of practical fact, but unfortunately we must square that with all the recent disclosures, which prompt us to trust nobody.

If we had omnipotent surveillance, I wonder if we'd find that in the most extreme corners of "those other quarters" I mentioned, they were using a completely different known-to-nobody OS running on known-to-nobody hardware, and operated by - you know - nobody.
 
So, we've all accepted the "many eyes" theory of open source, and we assume that those eyes find many defects and fix them, hence increasing security.
Actually that theory has been proven to be bollocks many times already. It's simply not true. Take for example the Debian OpenSSL disaster: a package maintainer (of all people) altered the random-number generation at the very heart of the crypto, yet it took some two years (!) before the flaw was finally discovered. Many eyes indeed.
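
For anyone who never looked at the details: the change was to OpenSSL's random-pool code (CVE-2008-0166). The sketch below is my simplified illustration of that kind of edit, not the actual Debian patch; the real one commented out MD_Update() calls in md_rand.c to silence a Valgrind warning about uninitialized memory, leaving little more than the process ID as seed material.

    #include <unistd.h>   /* getpid() */

    static unsigned long pool = 0;   /* stand-in for OpenSSL's entropy pool */

    /* Stand-in for MD_Update(): fold caller-supplied bytes into the pool. */
    static void pool_mix(const unsigned char *buf, int num)
    {
        for (int i = 0; i < num; i++)
            pool = pool * 31 + buf[i];
    }

    /* Callers feed entropy here (compare OpenSSL's ssleay_rand_add()). */
    void rand_add(const unsigned char *buf, int num)
    {
        /* The fatal edit: this one call was commented out. */
        /* pool_mix(buf, num); */
        (void)buf; (void)num;
    }

    /* After the edit, the only seed material left was the process ID,
       so at most ~32768 distinct key streams per architecture remained. */
    void rand_seed(void)
    {
        pid_t pid = getpid();
        pool_mix((const unsigned char *)&pid, (int)sizeof pid);
    }

Anyone who knew the OpenSSL version and could guess the PID could regenerate the "random" keys, which is why so many keys and certificates had to be reissued.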

And I mean that somewhat sarcastically, but it's true. If you look at the backlash this caused, and how many ISPs started revoking tons of their certificates, then yeah, this was huge. Yet millions of eyes failed to spot something that was basically right in front of their noses.

See, just because you have access to the source code doesn't mean you're also going to study it when using said software. Some might, but the majority doesn't. In fact, I think we can even argue that "open source" basically means "free software" to the majority of users. Free as in beer.

Many eyes can still easily overlook the obvious; quantity doesn't make quality. Many hands, on the other hand, do manage to get some solid work done. That, to me, is the true power of open source projects and open source software in general.

"Allowing people to review source code for even a minute is dangerous"
(I put up a double code block to avoid creating confusion)

For me this goes from one dumb absolute (the "many eyes" doctrine) right into another ;) There's a bit of truth in there, of course: if you have full access to the way something works, then you could theoretically use that knowledge to abuse said environment. In my opinion most modern politicians would prefer to fight the symptom by revoking access to the source code while the real culprit (the actual problems in the software) is left untouched.

So obviously it could be dangerous to give people access to the source code. But it's not the source code which makes it dangerous but rather the poor state of the software.

There really isn't a "right" or "wrong" here. Open source software is most definitely not safer by definition just because anyone could review it; that's bollocks. Openness can give an advantage, but that only goes as far as people actually using it. And most don't.
 
...
There really isn't a "right" or "wrong" here. Open source software is most definitely not safer by definition just because anyone could review it; that's bollocks. Openness can give an advantage, but that only goes as far as people actually using it. And most don't. ...

Fully agree. The OpenSSL thing was certainly a fiasco.

I have this semi-fallacious belief that if I use the most obscure/mysterious OS I can find, and the most unsung/obscure hardware it can run on, where all the people who designed and/or used either thing are dead or in an infirmary, I'll be all snug and safe in my cocoon. Then I wake up.

That can't be why I use FreeBSD, can it?
 
I've done a little coding in my time, purely at a hobbyist level. I've never reviewed code for security in any open source software I've used, and I doubt there are many who do. I can't imagine a volunteer developer having nothing better to do than review somebody else's code for security flaws. The many-eyes idea sounds plausible, but in reality I don't think it's a huge advantage; the OpenSSL fiasco is a case in point. It's nice that the code is out there for anyone to review, and I think a lot of good comes of it, but it does provide an attack vector. Even so, I'd rather the source be open than closed.
 