So, if someone sneaks a backdoor into security/snort, you just uninstall it and look for something else if you're just a user. But as a dev, you'd be responsible for your own project, and failure to handle a security breach properly will drag your project back into mud and obscurity.

> While I do need to trust and possibly run all code in the kernel, I do not need to trust any of the ports to be clean and worthy of my trust.
There are bazaars and then there are FOSS bazaars, full of unnecessary, intermingled dependencies pulled in for random trivial crap (mostly PIP/NPM/crates.io/CPAN stuff for the corresponding dependency-aggregator "languages").

> And then, in ports, the "cathedral" ends where third-party upstreams come into play. There's just no other way to offer a really large open-source "software collection" than as a "bazaar".
This is the kind of expectation of responsibility that is normally codified in a given project's Code of Conduct.
A Code of Conduct does create a motivation to lie in order to avoid certain consequences, true... and I'm gonna counter with Hanlon's Razor: do not attribute to malice what can be explained by simple stupidity.

> It is not. A code of conduct is a way to enforce the obligation to lie, i.e. politically correct newspeak.
For a long time it was quite normal that somebody writing software would include their own backdoor, and nobody would take offense at that. Because computer people were a community: they were all in the same mailboxes and on the same side.
And basically there are only two sides: those rich enough to buy islands, and those who cannot afford enough to eat.
And the whole purpose of a code of conduct is to lie away this distinction and replace it with empty newspeak blabla.
Not so soft for those sitting in some Boeing 737 MAX.

> Staying informed gives you a softer landing spot.
Yeah, and I fly Airbus... because I'm informed.

> Not so soft for those sitting in some Boeing 737 MAX.
This is also part of our software quality culture and of our adherence to codes of conduct that are unwelcoming to critical viewpoints.
It will take a while, maybe a long while, but one thing we can do is to switch to object-capability (ocap) hardware, maybe something like CHERI. This is essentially permission given on a need-to-know basis. Every third-party software package is put in its own protection domain and gets access only to what it needs and no more. No more need for an all-powerful (but easily fooled) "superuser". No more "confused deputy" problem. So, for example, a compression library such as xz can't access anything but the input and output streams. In a sense this model is already used for distributed systems (you need the right key to commit, etc.), but not locally, and many of those systems use a role-based access model.

> On the other hand: Linux and its underlying open-source layers today run 99% of all computing in the world, and there is no economic way to replace it with a trusted system, except for very small high-value corners (aerospace, military and national security still use some custom software written exclusively by paid professionals with security clearances). This is sort of a man-made disaster that is happening slowly, somewhat similar to global warming. I have no idea how to fix it as a society; for my servers at home I can be a little more careful.
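To make the "xz can only touch its input and output streams" idea concrete, here is a minimal sketch in Python of what capability-style discipline looks like at the API level. The function compress_stream, the way the two handles are passed, and the file names are my own illustration, not any real CHERI or liblzma interface; in Python this is merely a convention, whereas ocap hardware would actually enforce it.

import lzma


# Hypothetical sketch: the compressor is handed only the two stream
# capabilities it needs; it never opens files, sockets or environment
# variables on its own.
def compress_stream(src_read, dst_write, chunk_size=1 << 16):
    """src_read(n) yields up to n bytes; dst_write(b) consumes bytes."""
    comp = lzma.LZMACompressor()
    while True:
        chunk = src_read(chunk_size)
        if not chunk:
            break
        dst_write(comp.compress(chunk))
    dst_write(comp.flush())


# The caller decides which resources ever become reachable
# ("input.bin" / "output.xz" are placeholder file names):
with open("input.bin", "rb") as src, open("output.xz", "wb") as dst:
    compress_stream(src.read, dst.write)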
And here we go...

> [...] as I saw a recent White House report that is urging the industry to use memory-safe languages as well as secure hardware building blocks such as CHERI or ARM's Memory Tagging Extension.

A bold step onward toward having only government-approved operating systems allowed to access the network, for your protection.
Well, you can try to write a simulation of that security model. Take an existing facility that is equipped with key-card readers, grab logs for, say, a week, and feed those logs to the simulation. You'll quickly discover that even with computer-aided analysis the model is quite impractical, and will result in wasted time just managing the system and making manual adjustments.

> The difficulty with any fine-grained permissions scheme such as ocap is in human factors. It is like every room in your factory (or house) having its own lock: you need the right key to get in and use a subset of tools in a given room, and no two people or rooms use the same key. On top of that there would be a need to revoke keys, provide one-time access keys, etc. How do you manage this mess of keys without getting all tangled up? How do you make systems secure without making them much harder to use?
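A toy version of the replay suggested above might look like the following sketch. The badge log and the per-room access table are invented purely for illustration; the point is only to show how quickly legitimate entries pile up that a fine-grained policy would have denied.

# Toy replay of badge-reader logs against a fine-grained access policy.
# Both the policy and the log below are made up for illustration.
access = {                      # room -> set of badge IDs allowed in
    "server-room": {"alice"},
    "lab-2":       {"alice", "bob"},
}

log = [                         # (badge, room) events captured over a week
    ("alice", "server-room"),
    ("bob",   "server-room"),   # bob covering for alice while she is out
    ("carol", "lab-2"),         # new hire, not yet added to the policy
]

denied = [(who, room) for who, room in log if who not in access.get(room, set())]

print(f"{len(denied)} of {len(log)} entries would have needed a manual override:")
for who, room in denied:
    print(f"  {who} -> {room}")

With a real week of logs, the interesting output is not the denials themselves but how much administrator time each one would cost to resolve.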
No. This is a sociological or management problem: We need to change the way we allow people to work on mission-critical software, to make sure they are trustworthy. That means authenticating them (are you really the person you claim to be? please show us your ID card or driver's license), it means understanding their background (why do you chat online with a North Korean embassy employee who is known to be under cover?), it means monitoring them (why are you taking a vacation in Moscow during the month with the worst weather?), and it means looking into their bank account (where did those 2 million yuan come from?). In many software development environments (military and intelligence) that is actually done. While I've never been security cleared, several times I've spent an hour with a person from the government who works for a security agency, because they wanted to be sure (well, they also wanted to make sure I fill out my weekly time card absolutely correctly). In a civilian setting, this is done by knowing your colleagues as people, not as identity-less e-mail aliases. If the group of people who pulled off this hack had to come to work in an office with authenticated people every day (for example in the large Linux development groups that exist inside Intel, IBM or Google), their cover would have been blown within 5 minutes: there are 3-4 people, with thick Russian accents, and they all pretend to be "Jia Tan"?

> It will take a while, maybe a long while, but one thing we can do is to switch to ...
I'm not saying that this is a bad idea. As a matter of fact, it is probably a really good idea. I'm actually old enough to have done some coding (in class) on the old Burroughs 5000-series machines, where each memory word "knew" whether it was an instruction or data, so it was nearly impossible to mistakenly execute integers or floating-point numbers (known as a tagged architecture). I remember learning that there is also some hardware where low-level memory "knows" whether it is an integer or a string, which really helps find lots of bugs. I was there when the big micro-kernel discussions happened (and had the bad luck of having to use some micro-kernel based computers, which were barely functional). These are just a few examples of lessons from history that have been forgotten, and perhaps new designs (such as capability-based machines) will bring them back.

> Every third-party software package is put in its own protection domain and gets access only to what it needs and no more.
All these things are a starting point. Where we can, let's use all the nice computer-architecture techniques you mentioned. But we need to remember that, ultimately, things like padlocks, keys, doors and safes are just there to slow thieves down; the ultimate way to prevent crime is to have a jail where burglars (once caught) get punished. At its core, this is a social problem, not a technological one.

> Maybe there is some hope, as I saw a recent White House report that is urging the industry to use memory-safe languages as well as secure hardware building blocks such as CHERI or ARM's Memory Tagging Extension.
Linux is free, unless you're using the FSF definition, in which case it's considered open source (as stated by the Linux Foundation), since the kernel ships with "binary blobs". Not the most helpful observation, probably, but I find the ideologies and licensing definitions somewhat interesting myself.

> I didn't discuss governments getting involved. Whether regulatory oversight is a good idea or a bad idea in the software industry is a fascinating question, and this case will influence that debate.
But if someone wants reliable software (and reliability includes it not having dangerous hacks and backdoors in it), that will cost someone somewhere money. Your economic argument is not wrong, but it is inapplicable: The FOSS movement is not a normal economy. People do get things for free, and normal economic cost/benefit analysis does not apply. As an example: Linux, OpenBSD and seL4 are all free operating system kernels (*). Yet, we would all agree that they differ massively in their "security" (I listed them in order). You want a secure machine? Download seL4, and you get much more than you paid for, because seL4 costs nothing.
(*) The word OpenBSD usually refers to a complete OS distribution; above I'm talking only about the kernel; while the word "Linux" is usually used to describe distributions such as RedHat or Ubuntu, I mean it literally here.
This will never be 100% successful. This is why we need "defense in depth". It's why we live in houses with locks (even though you can, or at least used to be able to, get away without locking them in small towns; some friends in Palo Alto used to never lock their doors, but that has probably changed).

> This is a sociological or management problem: We need to change the way we allow people to work on mission-critical software, to make sure they are trustworthy.
"Assembly" will get more complicated for sure. But this is already the case with shared libs. What changes is you have to pass the required ocaps as arguments and the shared lib can access only memory or resources protected by these ocaps and in the way you intended. But if you link something in your address space it has access to everything so you'd better make sure you can trust it. Currently you also have to trust shared libraries whether they are trustworthy or not. Still, you have no choice: the amount of 3rd party software we already depend on is so much that we are never going to find enough "trustworthy" and competent people to audit all that or rewrite all that in Rust (but even that only protests you from dumb errors not deliberately bad code).But we need to understand that this will RADICALLY change the way we code. Today, we build large pieces of software by simply assembling small pieces. As an example, the last few evenings I've been fixing my home-built backup software, and I'm currently using Python, C, parts of the Gnu C library (for calculating hashes fast), SQLite, GRPC, and probably a few other things I've forgotten. In an architecture with high walls between components, this will all change. The cost of transitioning to such a model is gargantuan.
I thought a big part of the draw of commercially-backed Linux distros was that the respective companies thoroughly audited the code changes coming in.
Agree. And how thorough you are at each layer, and how deep you go, depends on what you're protecting. It's a cost-benefit tradeoff. In some environments, the outermost layer is so good, no depth is needed. For example, there are data centers that have no network cables going in and out, armed soldiers on the outside, and every sysadmin inside has an assault rifle on their back. On the other hand, some of the most secure facilities in the world (they do nuclear weapons secrets) still encrypt all their disks, and get very upset if a vendor can't use T10-DIF self-encrypting drives (been there, done that). Security is a complex topic, and the economics of it doubly so.

> This will never be 100% successful. This is why we need "defense in depth".
We leave the keys to our cars in the cars, usually in the middle console between the seats. My wallet is nearly always in my car, except when some family member borrows the car. But that's because you can't get to our house without either going through a locked gate and being challenged by neighbors, or taking an hour-long hike. In most towns, this would NOT work.

> It's why we live in houses with locks (even though you can, or at least used to be able to, get away without locking them in small towns; some friends in Palo Alto used to never lock their doors, but that has probably changed).
Absolutely. Someone in our society will be spending more money on computer security, and it's a good thing.

> Basically all our options at present have significant costs (including the option of not doing anything - we have lost billions to ransomware already) but they are all worth exploring further.
I have no idea how much auditing, checking and vetting the big distributors (RedHat, SUSE, ...) do. Clearly, the distributions built by volunteers do very little; there just isn't enough time. I know that the big companies that redistribute or use open-source software (IBM, Oracle, Cisco, Amazon, Google) put considerable effort into reading source code, including lots of automated tools, and into funding outside places (like security research groups at universities). There is a reason many security-relevant bugs are found by companies.
You did:

> I didn't discuss governments getting involved.

And you expanded on what that means... who else but the government would be this paranoid?

> In many software development environments (military and intelligence) that is actually done. While I've never been security cleared, several times I've spent an hour with a person from the government who works for a security agency, because they wanted to be sure
Revert "MFV: xz 5.6.0"
This commit reverts 8db56defa766eacdbaf89a37f25b11a57fd9787a,
rolling back the vendor import of xz 5.6.0 and restoring the
package to version 5.4.5.
The revert was not directly due to the attack (CVE-2024-3094):
our import process has removed the test cases and build scripts
that would have enabled the attack. However, reverting would
help to reduce potential confusion and false positives from
security scanners that assess risk based solely on version
numbers.
Another commit will follow to restore binary compatibility with
the liblzma 5.6.0 library by making the previously private
symbol (lzma_mt_block_size) public.