Backdoor in upstream xz/liblzma leading to SSH server compromise

I'm not sure that it depends on glibc. I think that people are confusing the ifunc mechanism (used by glibc) with glibc itself. ifunc is not specific to glibc - it's supported by GCC and LLVM toolchains.

From what I've seen, there are:
  • configure-time hacks to disable sandboxing - Linux only, doesn't try to disable Capsicum
  • code modification done at the end of the build via a test binary - I don't think that we use their autotools configure script in core
  • the initial vector is the crc64 function, which gets hijacked when the xz shared library gets loaded (a minimal ifunc sketch follows below)
  • a hook added to the link loader ld-linux.so - that's definitely Linux only
  • the above hook then intercepts sshd functions - that requires that sshd links with liblzma.so.5
I make that only one out of five that could apply to FreeBSD (two for people building xz via the configure script).
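To make the crc64/ifunc point concrete, here is a minimal, hypothetical sketch of the ifunc mechanism (all names invented; xz's real resolver chooses between generic and CLMUL-accelerated CRC implementations). The property the attacker abused is that the resolver runs as soon as the dynamic linker loads the shared object:

```c
/* ifunc_sketch.c - minimal sketch of GCC/LLVM "ifunc" resolution.
 * Names are hypothetical; build with a toolchain and runtime linker
 * that support ifunc (glibc, or FreeBSD's rtld on supported archs):
 *   cc ifunc_sketch.c -o ifunc_sketch */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder "implementation" - not a real CRC. */
static uint64_t crc64_generic(const uint8_t *buf, size_t len)
{
    uint64_t h = 0;
    for (size_t i = 0; i < len; i++)
        h = h * 31 + buf[i];
    return h;
}

/* The resolver runs when the dynamic linker processes the object's
 * relocations, i.e. merely loading the library executes this code.
 * A legitimate resolver probes CPU features and returns the fastest
 * implementation; the backdoor used this early hook to run its own
 * setup code instead. */
static uint64_t (*resolve_crc64(void))(const uint8_t *, size_t)
{
    return crc64_generic;
}

uint64_t crc64(const uint8_t *buf, size_t len)
    __attribute__((ifunc("resolve_crc64")));

int main(void)
{
    printf("%llu\n", (unsigned long long)crc64((const uint8_t *)"xz", 2));
    return 0;
}
```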
 
The attack ultimately targets SSH. OpenSSH doesn't even depend on liblzma, but when the systemd integration is patched in, it does. As for glibc, I assume it's needed for the interception features used (which work differently on different platforms).


No, they're not affected, just "used".


That's impossible. Not in general of course, but impossible for this specific backdoor.
If glibc and systemd are used as part of the backdoor, then yeah, they are affected in that sense... whether patching those components (as opposed to something else) has merit is probably something that does need to be figured out.

The last line doesn't really answer the question... I asked for a description of the potential fallout. Or are you saying that it's impossible to demonstrate that this specific backdoor will work without the presence of glibc and systemd? Issuing a proper CVE frankly depends on knowing whether that's true or not.
 
The potential fallout is that this is now a proof of concept for every script kiddie wannabe hax0r, and for TLAs who have deeper pockets and more skill. So we WILL see more of this. Or rather, there will be more of this, and hopefully we will see it and fix it.
 
If glibc and systemd are used as part of the backdoor, then yeah, they are affected in that sense...
No. They are merely "used", and even that is an exaggeration.

Regarding glibc, see Paul Floyd's comment; it isn't entirely clear whether glibc is required at all for the backdoor to work. It seems to use "ifunc", a mechanism provided by toolchains to allow resolving functions at runtime. Such things are commonplace, and there are lots of perfectly valid use cases for something like this ("intercepting" function calls). Sure, you could want some protection against abuse of such features, but that would render the whole concept of dynamic linking, as it is currently used, invalid.
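To illustrate how commonplace (and legitimate) such interception is, here is a minimal, hypothetical sketch using plain LD_PRELOAD interposition (a related, but different, mechanism from the ifunc and audit-hook machinery the backdoor used):

```c
/* preload_sketch.c - intercepting a libc call via LD_PRELOAD.
 * Entirely illustrative; build as a shared object:
 *   cc -shared -fPIC preload_sketch.c -o preload.so
 * then run any dynamically linked program with:
 *   LD_PRELOAD=./preload.so ./some_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

/* Our puts() shadows libc's; the dynamic linker resolves calls to us
 * first, and we forward to the real implementation via RTLD_NEXT. */
int puts(const char *s)
{
    static int (*real_puts)(const char *);
    if (!real_puts)
        real_puts = (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
    fprintf(stderr, "[intercepted] puts(\"%s\")\n", s);
    return real_puts(s);
}
```

Debuggers, profilers, and compatibility shims rely on exactly this kind of interposition every day, which is why "ban interception" is not a realistic countermeasure.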

Regarding systemd, it's not even involved here (as much as I dislike this specific software, you have to stick to the facts). It's "required" only because patching OpenSSH to "integrate" with systemd is what introduces liblzma into OpenSSH's virtual address space. Vanilla OpenSSH doesn't link liblzma.

Or are you saying that it's impossible to demonstrate that this specific backdoor will work without the presence of glibc and systemd?
No, I'm saying this backdoor cannot work without systemd, but not because of systemd itself. Rather, it targets ssh, and without systemd the typical OpenSSH daemon won't even link liblzma, so it won't have the backdoor anywhere near its virtual address space.
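As an aside, it's easy to check whether a given library is mapped into a process at all; a minimal sketch, assuming only the library's soname (liblzma.so.5 for xz 5.6):

```c
/* check_liblzma.c - ask the dynamic linker whether liblzma.so.5 is
 * already mapped into this process.  RTLD_NOLOAD never loads it anew;
 * it only reports whether it is present.
 *   cc check_liblzma.c -o check   (add -ldl on older glibc) */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("liblzma.so.5", RTLD_NOW | RTLD_NOLOAD);
    if (h != NULL) {
        printf("liblzma.so.5 is in this address space\n");
        dlclose(h);
    } else {
        printf("liblzma.so.5 is not loaded here\n");
    }
    return 0;
}
```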
 
It's important to get one thing straight: There's no "security vulnerability" involved. It's a fully intentional backdoor.

There's no "technical" issue, unless you want to argue that shared libraries need some boundaries, as mentioned in my response to astyle ... I don't think that makes much sense. The boundary is the process, or the address space: if you link a library, this means full trust. If full trust isn't desired, you'll need some form of "sandboxing", like creating a new process whose only job is to interface with that library, and wrapping access in some IPC mechanism ...
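A minimal sketch of that idea, assuming POSIX pipes and an invented untrusted_transform() standing in for a call into the untrusted library:

```c
/* sandbox_sketch.c - wrap an untrusted library call in its own process
 * and talk to it over a pipe.  untrusted_transform() is an invented
 * stand-in for the library; real code would add error handling. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static size_t untrusted_transform(const char *in, char *out, size_t outlen)
{
    return (size_t)snprintf(out, outlen, "transformed:%s", in);
}

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) != 0 || pipe(from_child) != 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                    /* child: the sandboxed side */
        /* On FreeBSD, this is where cap_enter() (Capsicum) would go. */
        char in[256], out[256];
        ssize_t n = read(to_child[0], in, sizeof in - 1);
        in[n > 0 ? n : 0] = '\0';
        size_t m = untrusted_transform(in, out, sizeof out);
        write(from_child[1], out, m);
        _exit(0);
    }

    /* parent: the full-trust side only ever exchanges plain bytes */
    write(to_child[1], "hello", 5);
    char buf[256];
    ssize_t n = read(from_child[0], buf, sizeof buf - 1);
    buf[n > 0 ? n : 0] = '\0';
    printf("child returned: %s\n", buf);
    waitpid(pid, NULL, 0);
    return 0;
}
```

The cost is obvious: every call becomes serialization plus a context switch, which is why nobody does this for a compression library on the hot path.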

Ok, self-derailed, again: There's no "technical" issue. There's a social and/or organizational issue. And there's no simple countermeasure.
 
… we WILL see more of this. Or rather, there will be more of this, and hopefully we will see it and fix it.

👍 and let's not be complacent about FreeBSD and the ports collection.

An upstream vulnerability to which I'll not draw attention; <https://forums.freebsd.org/profile-posts/5345/> (probably worse, please don't quote that here); a numbered CVE for which the fix was not yet released, when I last checked (a few days ago); and so on. None of those three is comparable to CVE-2024-3094, but you get the idea, hopefully.

oss-security - backdoor in upstream xz/liblzma leading to ssh server compromise

From the author:

<https://mastodon.bsd.cafe/@AndresFreundTec@mastodon.social/112180425641705348>

❝… Unfortunately I suspect we'll see a lot more such attacks going forward, in all likelihood with more success in some cases.❞

<https://mastodon.bsd.cafe/@AndresFreundTec@mastodon.social/112191135703673167>

❝… we got unreasonably lucky here, and that we can't just bank on that going forward.❞




Is Linux secure?

Let me rephrase, is a huge pile of C code, running in privileged mode in a shared address space, highly concurrent, using its own homegrown memory model based on volatile instead of the one the language spec defines and the compilers implement, dealing with untrusted data, implementing many complex protocols, data formats, & functionality, managing a bunch of "objects" with complex ownership and lifetime semantics, embedding its own JIT — secure?
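On the "homegrown memory model" point: the difference between a volatile flag and a real atomic is observable. A minimal sketch with C11 atomics (an illustration only, not kernel code; the kernel uses its own primitives such as READ_ONCE/WRITE_ONCE and explicit barriers):

```c
/* atomic_flag_sketch.c - why "volatile" is not a synchronization
 * primitive: a C11 release/acquire pair orders the payload write with
 * the flag; a plain volatile flag gives no such guarantee on weakly
 * ordered CPUs.  cc -pthread atomic_flag_sketch.c -o demo */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool ready;
static int payload;

static void *producer(void *arg)
{
    (void)arg;
    payload = 42;
    /* release store: everything above is visible before the flag */
    atomic_store_explicit(&ready, true, memory_order_release);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    /* acquire load: seeing ready == true implies seeing payload == 42 */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;
    printf("payload = %d\n", payload);
    pthread_join(t, NULL);
    return 0;
}
```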
 
Looks like it is much simpler than expected...
Whoot! Let's declare ChatGPT the biggest cybersecurity threat that humanity has ever faced!

When people mess up at work, we wonder where the hell they went to school.

Now we have to question where the hell the chatbot went to school.
 
The potential fallout is that this is now a proof of concept for every script kiddie wannabe hax0r, and for TLAs who have deeper pockets and more skill. So we WILL see more of this. Or rather, there will be more of this, and hopefully we will see it and fix it.
This particular case was probably a nation-state agency, and there is a limited number of suspects. Good thing they were caught. How many more such things are in daily production? Hundreds or thousands?

I see this actually as good news. This demonstrates to everyone (both in the software development community and in politics) that allowing people to write mission-critical software without good management is dangerous. We already learned the same thing when the Univ. of Minnesota students demonstrated how you can gain the "trust" of the Linux kernel community, and then use that trust to sneak hacks into Linux. On the hardware side, a similar thing happened about 10 years ago, when buried espionage chips were reportedly found in Supermicro motherboards assembled in China (whether Supermicro was an innocent victim or a willing conspirator has not been discussed in public, to my knowledge).

Today's open-source contributor can be a random person whose only identity is an e-mail address on their GitHub account, and whose only basis for trust is that they have done a few dozen useful commits. In many cases, they are actually employees of large corporations (Intel, IBM, Google, ... all contribute lots of code to open source), but quite a few are still de facto anonymous. And that anonymous person can hide things in their code check-ins.

On the other hand: Linux and its underlying open-source layers today run 99% of all computing in the world, and there is no economic way to replace them with a trusted system, except for very small high-value corners (aerospace, the military, and national security still use some custom software written exclusively by paid professionals with security clearances). This is sort of a man-made disaster that is happening slowly, somewhat similar to global warming. I have no idea how to fix it as a society; for my servers at home, I can be a little more careful.
 
This demonstrates to everyone (both in the software development community and in politics) that allowing people to write mission-critical software without good management is dangerous.
Uh, you do know what the idea of "good management" is in politics? What I have seen in the software industry is also a pretty mixed bag...
 
How many more such things are in daily production? Hundreds or thousands?
Finally, the good question.

On the other hand: Linux and its underlying open-source layers today run 99% of all computing in the world, and there is no economic way to replace them with a trusted system
An incredibly vast and complex stack of abstract software relations, and an ever-increasing dependency on it. And nobody in charge, and everybody wanting to be anonymous.
Each of these by itself could already be considered a problem, but taken together they are poisonous. And nobody wants to see it; we just want to make money.

I have no idea how to fix it as a society
The other way round: this will do away with society. An anonymous mesh of mostly NPCs can no longer be termed a society, anyway.
 
and there is no economic way to replace them with a trusted system

Maybe a web of trust of all committers might help.

As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.

(Phil Zimmermann cited on wikipedia: Web of trust)
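For what it's worth, the mechanical half of a web of trust really is just an algorithm. A toy sketch (keys, signatures, and the threshold are all invented) of the PGP-style "marginal trust" fixed point:

```c
/* wot_sketch.c - toy model of PGP-style trust propagation: a key is
 * accepted if I certified it directly, or if enough already-trusted
 * introducers signed it.  Data and threshold are invented. */
#include <stdbool.h>
#include <stdio.h>

#define NKEYS 4
enum { ME, ALICE, BOB, CAROL };
static const char *name[NKEYS] = { "me", "alice", "bob", "carol" };

/* signed_by[i][k]: key k carries a signature made by key i */
static bool signed_by[NKEYS][NKEYS] = {
    [ME][ALICE]    = true,   /* I certified Alice and Bob directly */
    [ME][BOB]      = true,
    [ALICE][CAROL] = true,   /* Alice and Bob both vouch for Carol */
    [BOB][CAROL]   = true,
};

int main(void)
{
    bool trusted[NKEYS] = { [ME] = true };
    const int need = 2;      /* "marginal" trust: two introducers */

    for (bool changed = true; changed; ) {   /* iterate to fixed point */
        changed = false;
        for (int k = 0; k < NKEYS; k++) {
            if (trusted[k])
                continue;
            int votes = 0;
            for (int i = 0; i < NKEYS; i++)
                if (trusted[i] && signed_by[i][k])
                    votes++;
            if (signed_by[ME][k] || votes >= need)
                trusted[k] = changed = true;
        }
    }
    for (int k = 0; k < NKEYS; k++)
        printf("%-6s %s\n", name[k], trusted[k] ? "trusted" : "unknown");
    return 0;
}
```

The algorithm is the easy part; as the responses below argue, the hard part is where those initial certifications, and the trust behind them, come from in the first place.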
 
Debian and other distros are considering downgrading xz even further to get rid of all commits from the bad actor:


FreeBSD has a very recent xz in contrib and the bad actor is even named:


xz (XZ Utils) 5.6.0
liblzma 5.6.0

I opened https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=278127 for this.
 
Such things should be discussed in public.
The only reason this PR isn't public is the automation for any PR targeting the "Security" component, which makes a lot of sense in general, as such PRs often contain detailed vulnerability/exploit information.

But then, there's not really anything to see there anyway, certainly no discussion. If you want to discuss something, the public mailing lists would be a good place.

I personally think we should avoid blind activism. FreeBSD is indeed not affected by the backdoor (which is, of course, just luck). Previous commits from the "bad actor" are very likely perfectly sane; that was part of their strategy to gain trust.
 
Maybe a web of trust of all committers might help.



(Phil Zimmermann cited on wikipedia: Web of trust)
Trouble with the very idea of trust is that it's the other side of the coin called discrimination. For example, the statement "I don't trust the guy, even if he is a skilled dev whose skills I can definitely use" would completely derail the idea of Open Source...

Red Hat went closed source because they stopped trusting outsiders with their sources. After all, too many cooks spoil the broth. Can you trust someone not to overdo the pepper, let alone not to slip in something that a customer may be allergic to? That's what leads to a small group of elite cooks who run a hella popular shop, but don't trust the public at large very much.
 
Red Hat went closed source because they stopped trusting outsiders with their sources. After all, too many cooks spoil the broth.
Red Hat “closed” (for non-customers) the downstream path. They don’t want others cooking their recipe and giving it away next door. They are absolutely still bringing in from upstream, and would’ve been almost certainly hit by this (if it remained undetected) in a future release.

So the broth would still be spoiled, but it would be harder for others to notice/track down within the RH ecosystem. (Since downstream distribution of the sources and packaging/build process is restricted now.)
 
Red Hat “closed” (for non-customers) the downstream path. They don’t want others cooking their recipe and giving it away next door. They are absolutely still bringing in from upstream, and would’ve been almost certainly hit by this (if it remained undetected) in a future release.

So the broth would still be spoiled, but it would be harder for others to notice/track down within the RH ecosystem. (Since downstream distribution of the sources and packaging/build process is restricted now.)
It's nice that we can use metaphors in this conversation, Eric A. Borisch!

Well, trust is a double-edged sword. The Open Source movement came about as a reaction to a lack of trust. Now we're going from "Come on, trust people to take a look at your code, maybe someone will have a bright idea!" to "Way too many eyeballs, the chances of releasing crappy software are way too high, bright ideas get stolen, I need filtering, and frankly, less trust lets me actually get more done..." 😩

Yeah, people get upset when they're not trusted, but y'know, trust is something you earn.
 
Regarding glibc, see Paul Floyd's comment; it isn't entirely clear whether glibc is required at all for the backdoor to work. It seems to use "ifunc", a mechanism provided by toolchains to allow resolving functions at runtime. Such things are commonplace, and there are lots of perfectly valid use cases for something like this ("intercepting" function calls). Sure, you could want some protection against abuse of such features, but that would render the whole concept of dynamic linking, as it is currently used, invalid.
FreeBSD uses ifunc in various scenarios. The easiest example that comes to mind is the implementation of 64-bit inodes: older binaries still worked with 32-bit inodes, while newly built binaries used the 64-bit version.
 
Maybe a web of trust of all committers might help.

That's what we had in earlier times. People would meet. The whole environment of internet providers, server operators, and developers was somehow interrelated; people would know who is who, and you could always find somebody who would know somebody who would know...

But that was back in times when people still were social and loved to meet each other.
 
Maybe a web of trust of all committers might help.

(Phil Zimmermann cited on wikipedia: Web of trust)

The cryptography web of trust describes a mechanism for implementing how trust is transferred between people. That's a typical computer problem, solvable by algorithms. It does not explain how trust is created in the first place; that is a sociological and psychological problem, for which solutions exist, but they don't use computers.

Let me explain by giving two scenarios. Say I work in a big company, where my email is ralph@example.com. Our department just hired two new engineers, alice@example.com and bob@example.com. I meet Alice and Bob; we talk about their skills, their preferences, what part of the project they are most interested in. We discuss what training is required, how they can learn the existing code base, the coding conventions, the mechanical process (build machine, source code control). In that interaction they act and react: I can see that they have prior experience (and sometimes that they are lacking experience, which is what training is for). At lunchtime, we discuss life: kids, broken cars, hobbies, family, all that stuff. I can see that they are real people with real concerns, strengths and weaknesses, predicaments and struggles. After a week or two of working together, I understand what makes Alice and Bob tick, why they work here, how they make decisions. Also, I know that Human Resources ran a background check on them, so I can be sure that they have not been in legal trouble for computer crimes and so on (*). I meet their family at group events, we go to a Christmas dinner together, I know them as people, not just as programmers. Eventually, this is where trust comes from, and our group lets Alice and Bob work on our project and contribute (check in code). And while we carefully check each other's work product (whether that be code reviews or design discussions), I fundamentally trust them to do the right thing: because I know that if they did the wrong thing, their dreams and aspirations (whatever those might be) would be shattered.

Underneath that is a basic social contract: Alice and Bob are working for Example.com, probably because they want that salary. Perhaps Alice wants to buy a faster BMW, and Bob needs money for his kid's ballet classes. They both understand that if their performance is bad or highly negative (like if they put a nasty hack into Example's source code), they will not only lose that salary, but they will go to jail. And having talked to them (and their significant other, and perhaps their kids), and having authenticated them (I use that word very deliberately) by checking their background, I think I know what drives their life choices. As an example, maybe Alice went to SUNY Stony Brook for her CS degree, and at a conference I talked to her advisor, and he mentioned that she is probably happier in California, because the rainy and cold weather in upstate New York didn't allow her to take her car to racetracks often enough, whereas Buttonwillow and Sonoma nearly always have sunshine. Maybe one day Alice comes to work all bandaged and beat up, and admits that on Saturday she rolled the car at the track, and her boyfriend had to take her to the emergency room and drive her home afterwards, so maybe she'll marry him after all. And she's really hoping to get that pay raise next month, because the new Porsche is really expensive. Or maybe Bob complains at work that our office schedule is not compatible with his kid's new clarinet classes, and whether we could move the weekly code review meeting from Tuesday to Wednesday; in exchange he'll do extra on-call bug catching shifts Thursday afternoon, when his wife takes the kid to soccer; and the fact that his kid is playing soccer is my fault, because I introduced Bob and his wife to my kid's former soccer club coach. You see the web of trust here.

Are Alice and Bob likely to be script kiddies, or full-time employees of a foreign intelligence agency? No. Theoretically possible, but very unlikely. I think I understand their motivation and goals in life; after working together (and discussing cars and kids), it's very difficult (and uncommon) to live a double life. If our work is of particular importance (such as a national security / military project, or important infrastructure), then I can even be sure that trained professionals have checked Alice's and Bob's background super carefully.

Now contrast this with the remote and anonymous style used in open source development: I'm working on a project, and I see that alice@gmail.com and bob@outlook.com send pull requests. I review their first commit, and it looks good, so we accept the change set. Over the next few months, they keep coding, and their work looks good. I know nothing about Alice and Bob ... I don't even know whether that's their real name, or just an e-mail address. They might have kids, they might be saving up money to buy a new BMW, their aging dad might be in the hospital, I don't know any of these things. I might be able to infer what time zone they are in (from when they send code and when they react to e-mails), but even that is uncertain, they might be night owls, or doing their open source development at odd hours (maybe they have a day job). I know absolutely nothing about these people. It is perfectly possible that they are sociopaths, professional black hat hackers. It is also quite possible that they are really smart CS students, working on real-world projects to pad their resume, so in a few years they can get those comfortable and well-paid jobs, like Alice and Bob have today.

But the scary thing is: I have no reason to trust those two anonymous people. All I have to go on is their code submissions, and perhaps a small number of e-mails or discussion posts when talking about purely technical issues. If one of them takes a 4-week break from coding, I don't know whether that's because they had to travel to stay with their uncle who is in the hospital after a stroke, or whether they had to go to another training class offered by the Elbonian Spy Agency's hacking class. There is no web of trust here; all I have is an e-mail address and a public key. Cryptography doesn't create trust, but it is a convenient way to take trust that was created offline and implement it.

The solution to writing good (and trustworthy!) software doesn't lie in technology. It lies in interpersonal relationships. There is a reason the two best books on Software Engineering don't concern themselves much with things like programming languages or how many spaces to indent, but with how to work together. The two books are "The Mythical Man-Month" and "Peopleware: Productive Projects and Teams". You can read K&R, Stroustrup and Steele until you have completely absorbed them; if you don't understand how to build social groups to work together, it will amount to nothing.

(*) Footnote: Yes, I've seen failures in this process too. One time I interviewed a new sys admin for our group, and we hired them. About a week or two after they started the job, HR and corporate security came to us, and the new employee was fired on the spot and walked out of the building. It turns out they had lied in their application about a previous job, on which they were caught hacking, and got a felony computer crimes conviction and spent a few years in jail. Oops. The second failure was a newly hired employee, who worked in an office geographically far away from us, and because of that there was relatively little contact with them. They ended up stealing all our source code, and attempting to sell it to an underground hacking group pretending to be associated with the Chinese spy agencies. In reality, the "underground hacking group" were some FBI agents who had seen the employee advertising the source code for sale on the dark web, and set up a sting. I think they got 10 years in federal prison.
 
ralphbsz is being too polite here. "Web of trust" is an entirely meaningless buzzword. There is no number that would tell how much confidence I have in any particular person doing (or not doing) anything. No viable algorithm. Nothing.
 