Backdoor in upstream xz/liblzma leading to SSH server compromise

While I do need to trust and possibly run all code in the kernel, I do not need to trust any of the ports to be clean and worthy of my trust.
 
Yeah, and even then, "Responsibility" becomes a difficult thing to define... How much is one willing to gamble on their own definition of that word? The entire project? Probably not. But then you gotta define what it means to 'pull your own weight'.

ralphbsz once wrote a pretty lengthy post explaining the joke about a guy being asked to implement an Accounts Receivable database for a client, and then disappearing for two years because he has to invent a new kind of database first, and then massage it into something the client wants. That's a pretty extreme case, I'd say, but an unfortunate demonstration of a lack of responsibility in some areas of software development. Can you trust someone like that, especially if you paid good money for the dev to actually deliver the end product?
 
While I do need to trust and possibly run all code in the kernel, I do not need to trust any of the ports to be clean and worthy of my trust.
So, if someone sneaks a backdoor into security/snort, you just uninstall it and look for something else if you're just a user. But as a dev, you'd be responsible for your own project, and failure to handle a security breach properly will drag your project back into the mud and obscurity.

This is the kind of expectation of responsibility that is normally codified in a given project's Code of Conduct. Just one idea that the bazaar does seem to need to borrow from the Cathedral - in addition to avoiding cyber-bullying. Cathedrals do have better resources to address that - and, conversely, to hide it just as well.

Adults are expected to know what they're getting into, and to handle expectations imposed by society in a way that does not cause problems for either side.

Sometimes, I think it may be better to just develop a thick skin so that you can empathize when necessary, but also be able to not care either way.
 
And then, in ports, the "cathedral" ends where third-party upstreams come into play. There's just no other way to offer a really large open-source "software collection" than a "bazaar". :rolleyes:
There are bazaars and then there are FOSS bazaars, full of unnecessary intermingling dependencies pulled in for random trivial crap (mostly PIP/NPM/crates.io/CPAN stuff for the corresponding dependency aggregator "languages").

Reducing dependencies will help reduce supply-chain attacks, but everyone is so afraid of crafting suitable wheels for their software.
 
This is the kind of expectation of responsibility that is normally codified in a given project's Code of Conduct.

It is not. A code of conduct is a way to enforce the obligation to lie, i.e. politically correct newspeak.

For a long time it was quite normal that somebody writing software would have their backdoor included, and nobody would take offense at that. Because computer people were a community, they were all in the same mailboxes and on the same side.

And basically there are only two sides: there are those people rich enough to buy islands, and those people who cannot afford enough to eat.
And the whole purpose of a code of conduct is to lie away this distinction and replace it with empty newspeak blabla.
 
It is not. A code of conduct is a way to enforce the obligation to lie, i.e. politically correct newspeak.

For a long time it was quite normal that somebody writing software would have their backdoor included, and nobody would take offense at that. Because computer people were a community, they were all in the same mailboxes and on the same side.

And basically there are only two sides: there are those people rich enough to buy islands, and those people who cannot afford enough to eat.
And the whole purpose of a code of conduct is to lie away this distinction and replace it with empty newspeak blabla.
A Code of Conduct does create a motivation to lie in order to avoid certain consequences, true... and I'm gonna counter with Hanlon's Razor: do not attribute to malice what can be adequately explained by simple stupidity.

It may be too simple to just say "people are just uninformed and stupid if, without realizing it, they get involved in something that demands a rather minimal burden of responsibility". "You didn't know you were expected to pull your own weight on this project? Come on, if you have even the minimal skill, you gotta know what the expectations are around here." Sounds harsh, but that's reality. Staying informed gives you a softer landing spot.
 
Staying informed gives you a softer landing spot.
Not so soft for those sitting in some Boeing 737 MAX.
This is also part of our software quality culture and of adherence to procedures of conduct that are unwelcoming to critical viewpoints.
 
Not so soft for those sitting in some Boeing 737 MAX.
This is also part of our software quality culture and of adherence to procedures of conduct that are unwelcoming to critical viewpoints.
Yeah, and I fly Airbus... :p - because I'm informed.
 
On the other hand: Linux and its underlying open-source layers today run 99% of all computing in the world, and there is no economic way to replace it with a trusted system, except for very small high-value corners (aerospace, military and national security still use some custom software written exclusively by paid professionals with security clearances). This is sort of a man-made disaster that is happening slowly, somewhat similar to global warming. I have no idea how to fix it as a society; for my servers at home I can be a little more careful.
It will take a while, maybe a long while, but one thing we can do is to switch to object capability (ocap) hardware, perhaps something like CHERI. This is essentially permission given on a need-to-know basis. Every third-party software package is put in its own protection domain and gets access to only what it needs and no more. No more need for an all-powerful (but easily fooled) "superuser". No more "confused deputy" problem. So, for example, a compression library such as xz can't access anything but the input and output streams. In a sense this model is already used for distributed systems (you need the right key to commit, etc.), but not locally, and many of these systems use a role-based access model.
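CHERI hardware isn't something most of us can buy yet, but FreeBSD's Capsicum gives a feel for the same idea in software today. The sketch below is only illustrative (the file arguments and the plain copy loop stand in for a real (de)compression routine): the two descriptors are rights-limited with cap_rights_limit(), and the process then enters capability mode with cap_enter(), after which it cannot open anything else by path.

/*
 * capcopy.c -- illustrative only, not from the thread.  Open the input
 * and output first, drop each descriptor to the minimum rights, then
 * enter capability mode before any data is touched.
 */
#include <sys/capsicum.h>
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
    cap_rights_t rights;
    char buf[8192];
    ssize_t n;
    int in, out;

    if (argc != 3)
        errx(1, "usage: capcopy infile outfile");
    if ((in = open(argv[1], O_RDONLY)) < 0 ||
        (out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644)) < 0)
        err(1, "open");

    /* The "library" code below only ever needs to read one stream
     * and write the other. */
    if (cap_rights_limit(in, cap_rights_init(&rights, CAP_READ)) < 0 ||
        cap_rights_limit(out, cap_rights_init(&rights, CAP_WRITE)) < 0)
        err(1, "cap_rights_limit");

    /* From here on, opening files by path (and similar
     * global-namespace calls) fails with ECAPMODE. */
    if (cap_enter() < 0)
        err(1, "cap_enter");

    /* Stand-in for the real (de)compression loop: plain copy. */
    while ((n = read(in, buf, sizeof(buf))) > 0)
        if (write(out, buf, (size_t)n) != n)
            err(1, "write");

    return (0);
}

Build it with something like cc -o capcopy capcopy.c. The point is that even a backdoored routine linked into this process could not open ~/.ssh or any other file on its own once cap_enter() has succeeded.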

The difficulty with any fine-grained permissions scheme such as ocap is in human factors. It is as if every room in your factory (or house) has its own lock and you need the right key to get in and use a subset of tools in a given room, and no two people or rooms use the same key. On top of that there would be a need to revoke keys or provide one-time access keys, etc. How do you manage this mess of keys without getting all tangled up? How do you make systems secure without making them much harder to use?

Also, this is a necessary but not sufficient condition (IMHO). Someone can still fool you into giving up your keys, but the "radius of damage" from such exploits would be much smaller.

Maybe there is some hope, as I saw a recent White House report urging the industry to use memory-safe languages as well as secure hardware building blocks such as CHERI or ARM's Memory Tagging Extension.
 
The difficulty with any fine-grained permissions scheme such as ocap is in human factors. It is as if every room in your factory (or house) has its own lock and you need the right key to get in and use a subset of tools in a given room, and no two people or rooms use the same key. On top of that there would be a need to revoke keys or provide one-time access keys, etc. How do you manage this mess of keys without getting all tangled up? How do you make systems secure without making them much harder to use?
Well, you can try to write a simulation of that security model. Take an existing facility that is equipped with key-card readers, grab logs for, say, a week, and feed those logs to the simulation. You'll quickly discover that even with computer-aided analysis, the model is quite impractical, and will result in wasted time just managing the system and making manual adjustments.
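If anyone wants to play with that, a toy version of such a replay is only a few dozen lines. Everything below is made up for illustration: the one-pair-per-line log format, the names, and the tiny provisioning table; real badge logs would need proper parsing, and the table would be enormous, which is exactly where the management overhead shows up.

/*
 * keysim.c -- toy sketch of replaying badge-reader logs against a
 * fine-grained (one key per person per room) provisioning table.
 */
#include <stdio.h>
#include <string.h>

struct grant { const char *person, *room; };

/* Hypothetical provisioning table. */
static const struct grant grants[] = {
    { "alice", "serverroom" },
    { "bob",   "lab" },
};

static int
allowed(const char *person, const char *room)
{
    for (size_t i = 0; i < sizeof(grants) / sizeof(grants[0]); i++)
        if (strcmp(grants[i].person, person) == 0 &&
            strcmp(grants[i].room, room) == 0)
            return (1);
    return (0);
}

int
main(void)
{
    char person[64], room[64];
    unsigned long denied = 0, total = 0;

    /* Replay a week of "person room" events from stdin. */
    while (scanf("%63s %63s", person, room) == 2) {
        total++;
        if (!allowed(person, room))
            denied++;   /* would need a manual key adjustment */
    }
    printf("%lu of %lu accesses would have needed intervention\n",
        denied, total);
    return (0);
}

Feeding it a week of events (./keysim < badge.log) just counts how many accesses the fine-grained table would have rejected, i.e. roughly how many manual key adjustments an administrator would have had to make.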
 
It will take a while, maybe a long while, but one thing we can do is to switch to ...
No. This is a sociological or management problem: We need to change the way we allow people to work on mission-critical software, to make sure they are trustworthy. That means authenticating them (are you really the person you claim to be, please show us your ID card or driver's license), it means understanding their background (why do you chat online with a North Korean embassy employee who is known to be under cover), it means monitoring them (why are you taking a vacation in Moscow during the month with the worst weather), it means looking into their bank account (where did those 2 million yuan come from). In many software development environments (military and intelligence) that is actually done. While I've never been security cleared, I've spent an hour with a person from the government who works for a security agency several times, because they wanted to be sure (well, they also wanted to make sure I fill out my weekly time card absolutely correctly). In a civilian setting, this is done by knowing your colleagues as people, not as identity-less e-mail aliases. If the group of people who pulled off this hack had to come to work in an office with authenticated people every day (for example in the large Linux development groups that exist inside Intel, IBM or Google), their cover would have been blown within 5 minutes: There are 3-4 people, with thick Russian accents, and they all pretend to be "Jia Tan"?

Every third-party software package is put in its own protection domain and gets access to only what it needs and no more.
I'm not saying that this is a bad idea. Matter-of-fact, it is probably a really good idea. I'm actually old enough to have done some coding (in class) on the old Burroughs 5000 series machines, where each memory word "knew" whether it was an instruction or data, so it was nearly impossible to mistakenly execute integers or floating point numbers (known as a tagged architecture). I remember learning that there is also some hardware where low-level memory "knows" whether it is an integer or a string, which really helps find lots of bugs. I was there when the big micro-kernel discussions happened (and had the bad luck of having to use some micro-kernel based computers, which were barely functional). These are just a few examples of lessons from history that have been forgotten, and perhaps new designs (such as capability-based machines) will bring them back.

But we need to understand that this will RADICALLY change the way we code. Today, we build large pieces of software by simply assembling small pieces. As an example, the last few evenings I've been fixing my home-built backup software, and I'm currently using Python, C, parts of the GNU C library (for calculating hashes fast), SQLite, gRPC, and probably a few other things I've forgotten. In an architecture with high walls between components, this will all change. The cost of transitioning to such a model is gargantuan.

And it doesn't solve the real problem: We need to trust people who are doing jobs where failure affects lots of others. There is a reason airline pilots and police officers are subjected to lots of scrutiny and background checks: it's because planes and guns are objectively dangerous. If an amateur wants to fly a small plane, or get a gun and do target shooting, they need to undergo extensive checks, and there is no room for privacy there. We need to abandon the computer culture's love for anonymity, and its pretense of a meritocracy (you are a good and ethical person because you wrote lots of correct code quickly).

Maybe there is some hope, as I saw a recent White House report urging the industry to use memory-safe languages as well as secure hardware building blocks such as CHERI or ARM's Memory Tagging Extension.
All these things are a starting point. Where we can, let's use all the nice computer architecture techniques you mentioned. But we need to remember that ultimately, things like padlocks, keys, doors and safes are just there to slow thieves down; the ultimate way to prevent crime is to have a jail where burglars (once caught) get punished. At its core, this is a social problem, not a technological one.
 
No. This is a sociological or management problem: We need to change the way we allow people to work on mission-critical software, to make sure they are trustworthy. That means authenticating them (are you really the person you claim to be, please show us your ID card or driver's license), it means understanding their background (why do you chat online with a North Korean embassy employee who is known to be under cover), it means monitoring them (why are you taking a vacation in Moscow during the month with the worst weather), it means looking into their bank account (where did those 2 million yuan come from).

So we're back at McCarthyism now?

If people want "critical infrastructure", they should first pay for their damn stuff, instead of taking it for granted and then bringing along SS-Blockwarts.
The dialogue on xz clearly stated that this is a private/leisure project. Now find the bug. (Hint: you get what you pay for.)
 
I didn't discuss governments getting involved. Whether regulatory oversight is a good idea or a bad idea in the software industry is a fascinating question, and this case will influence that debate.

But if someone wants reliable software (and reliability includes it not having dangerous hacks and backdoors in it), that will cost someone somewhere money. Your economic argument is not wrong, but it is inapplicable: The FOSS movement is not a normal economy. People do get things for free, and normal economic cost/benefit analysis does not apply. As an example: Linux, OpenBSD and seL4 are all free operating system kernels (*). Yet, we would all agree that they differ massively in their "security" (I listed them in order). You want a secure machine? Download seL4, and you get much more than you paid for, because seL4 costs nothing.

(*) The word OpenBSD usually refers to a complete OS distribution; above I'm talking only about the kernel; while the word "Linux" is usually used to describe distributions such as RedHat or Ubuntu, I mean it literally here.
 
I didn't discuss governments getting involved. Whether regulatory oversight is a good idea or a bad idea in the software industry is a fascinating question, and this case will influence that debate.

But if someone wants reliable software (and reliability includes it not having dangerous hacks and backdoors in it), that will cost someone somewhere money. Your economic argument is not wrong, but it is inapplicable: The FOSS movement is not a normal economy. People do get things for free, and normal economic cost/benefit analysis does not apply. As an example: Linux, OpenBSD and seL4 are all free operating system kernels (*). Yet, we would all agree that they differ massively in their "security" (I listed them in order). You want a secure machine? Download seL4, and you get much more than you paid for, because seL4 costs nothing.

(*) The word OpenBSD usually refers to a complete OS distribution; above I'm talking only about the kernel; while the word "Linux" is usually used to describe distributions such as RedHat or Ubuntu, I mean it literally here.
Linux is free unless you're using the FSF definition, in which case it's considered open source (as stated by the Linux Foundation), since the kernel ships with "binary blobs". Not the most helpful observation, probably, but I find the ideologies and licensing definitions to be somewhat interesting myself.
 
This is a sociological or management problem: We need to change the way we allow people to work on mission-critical software, to make sure they are trustworthy.
This will never be 100% successful. This is why we need "defense in depth". It's why we live in houses with locks (even though you can, or used to be able to, get away without locking them in small towns; some friends in Palo Alto used to never lock their doors, but that has probably changed).
But we need to understand that this will RADICALLY change the way we code. Today, we build large pieces of software by simply assembling small pieces. As an example, the last few evenings I've been fixing my home-built backup software, and I'm currently using Python, C, parts of the GNU C library (for calculating hashes fast), SQLite, gRPC, and probably a few other things I've forgotten. In an architecture with high walls between components, this will all change. The cost of transitioning to such a model is gargantuan.
"Assembly" will get more complicated for sure. But this is already the case with shared libs. What changes is you have to pass the required ocaps as arguments and the shared lib can access only memory or resources protected by these ocaps and in the way you intended. But if you link something in your address space it has access to everything so you'd better make sure you can trust it. Currently you also have to trust shared libraries whether they are trustworthy or not. Still, you have no choice: the amount of 3rd party software we already depend on is so much that we are never going to find enough "trustworthy" and competent people to audit all that or rewrite all that in Rust (but even that only protests you from dumb errors not deliberately bad code).

Basically all our options at present have significant costs (including the option of not doing anything - we have lost billions to ransomware already) but they are all worth exploring further.
 
One thing I find interesting about all of this is that apparently RHEL and Ubuntu were going to be released with this compromised version of xz. I thought a big part of the draw of commercially-backed Linux distros was that the respective companies thoroughly audited the code changes coming in.
 
This will never be 100% successful. This is why we need "defense in depth".
Agree. And how thorough you are at each layer, and how deep you go, depends on what you're protecting. It's a cost-benefit tradeoff. In some environments, the outermost layer is so good, no depth is needed. For example, there are data centers that have no network cables going in and out, armed soldiers on the outside, and every sys admin inside has an assault rifle on their back. On the other hand, some of the most secure facilities in the world (they do nuclear weapons secrets) still encrypt all their disks, and get very upset if a vendor can't use T10-DIF self-encrypting drives (been there, done that). Security is a complex topic, and the economics of it doubly so.

BUT: For most amateur users, the first layer needs to be "good enough": Install a reasonably secure OS, don't share your passwords with others, don't run applications from untrusted sources. This has to work, and the attack we're discussing here put that in doubt.

It's why we live in houses with locks (even though you can, or used to be able to, get away without locking them in small towns; some friends in Palo Alto used to never lock their doors, but that has probably changed).
We leave the keys to our cars in the cars, usually in the middle console between the seats. My wallet is nearly always in my car, except if some family member borrows the car. But that's because you can't get to our house without either going through a locked gate and being challenged by neighbors, or taking an hour-long hike. In most towns, this would NOT work.

Basically all our options at present have significant costs (including the option of not doing anything - we have lost billions to ransomware already) but they are all worth exploring further.
Absolutely. Someone in our society will be spending more money on computer security, and it's a good thing.

I have no idea how much auditing, checking and vetting the big distributors (RedHat, SUSE, ...) do. Clearly, the distributions built by volunteers do very little; there just isn't enough time there. I know that the big companies that redistribute or use open-source software (IBM, Oracle, Cisco, Amazon, Google) put considerable effort into reading source code, including lots of automated tools, and funding outside places (like security research groups at universities). There is a reason many security-relevant bugs are found by companies.

The big companies also do very targeted security checking. For example, our group was once contacted by corporate computer security, because servers in our test lab were opening IP connections to addresses known to be associated with the spy services of a certain foreign country. One where lots of motherboards are manufactured, including the ones sold by SuperMicro. Even though SuperMicro doesn't have any software running on those servers that should be talking on the network. Oops.
 
I didn't discuss governments getting involved.
You did:

In many software development environments (military and intelligence) that is actually done. While I've never been security cleared, I've spent an hour with a person from the government who works for a security agency several times, because they wanted to be sure
And you expanded on what that means... who else but the government would be this paranoid?

This seems pretty appropriate to post here:
 
Oops, I'm sorry. What I really meant when I said "government getting involved" was a response to PMc's McCarthy reference: I didn't discuss government mandating better software development practices (including security and background checks) for general purpose software.

The reason some very nice folks from government agencies wanted to talk to me was that I was building systems for their use, under contract, with their money. These were not open source systems, nor even systems that are widely available on the open market. That's very different from legislating "any open source contributor must post a picture of their ID card".

Misunderstanding, my fault.
 
I think that our perceptions of 'what is trust exactly' do play a role here...

For example, there's 'Agile' methodology, which involves lots of frequent customer follow-up and feedback. Was that methodology imposed on the project due to lack of trust in other methodologies? Or did project participants adopt the 'Agile' methodology because they actually like getting reasonably frequent customer feedback?

Some people prefer to take an assignment, go away for a few months, and work undisturbed until a deliverable is completed. They would look at frequent customer feedback as a lack of trust. A waterfall model of development suits such people best.

One argument in favor of 'Agile' methodology is that there's increased likelihood that a problem like a bad actor will not snowball out of control, and will be spotted early, and fixed easily. That argument pushes the idea that an ounce of prevention is worth a pound of cure.

A counter-argument would be that this gets in the way of work, and output will be way too simple to do any good.

People generally want to know and confirm that they get what they pay for... That's why Bugzilla can have tickets that are open for a year or more, and the entire Internet can see them, for the most part. Open Source users do have pretty blind trust that the ticket will get taken care of.
 
FreeBSD is reverting xz back to 5.4.5:

Revert "MFV: xz 5.6.0"
This commit reverts 8db56defa766eacdbaf89a37f25b11a57fd9787a,
rolling back the vendor import of xz 5.6.0 and restoring the
package to version 5.4.5.

The revert was not directly due to the attack (CVE-2024-3094):
our import process had removed the test cases and build scripts
that would have enabled the attack. However, reverting would
help to reduce potential confusion and false positives from
security scanners that assess risk based solely on version
numbers.

Another commit will follow to restore binary compatibility with
the liblzma 5.6.0 library by making the previously private
symbol (lzma_mt_block_size) public.
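Not part of the commit above, but if you want to confirm which liblzma your own binaries end up using after the revert (relevant given the scanners that flag on version numbers alone), the library reports its version both at compile time and at run time. A minimal check could look like this, built with something like cc -o lzmaver lzmaver.c -llzma:

/*
 * lzmaver.c -- print the liblzma version seen at build time and the
 * one actually loaded at run time.
 */
#include <stdio.h>
#include <lzma.h>

int
main(void)
{
    /* Version of the shared library loaded at run time, e.g. "5.4.5". */
    printf("liblzma run-time version: %s\n", lzma_version_string());
    /* Version of the lzma.h headers this program was built against. */
    printf("lzma.h compile-time version: %s\n", LZMA_VERSION_STRING);
    return (0);
}

If the two strings disagree, you built against one set of headers but are running a different shared library.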
 