How dangerous is releasing CVEs before developing fixes?

Well, what do you do if you run software as a service and the queries from customers are XML?
You use an XML library that does claim to be safe against untrusted data.

And if you can't do that, you simplify and use a more minimal library that you stand a chance of auditing.

There is no other choice, really. The alternative of simply using an unsafe one because you can't use a safe one is not appropriate in this day and age of the internet.
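And whichever library you pick, you still want to keep its dangerous features switched off. To make that concrete, a minimal sketch of what "treat the input as untrusted" can look like with libxml2 itself - the flags and functions are real libxml2 API, but the scenario around them is only an illustrative assumption, not a vetted recipe:

Code:
/* Minimal sketch: parse untrusted XML with libxml2, keeping the
 * dangerous features off. Compile with:
 *   cc x.c $(xml2-config --cflags --libs) */
#include <libxml/parser.h>
#include <stdio.h>

static xmlDocPtr parse_untrusted(const char *buf, int len)
{
    /* XML_PARSE_NONET forbids fetching external resources over the
     * network. Deliberately NOT passing XML_PARSE_NOENT or
     * XML_PARSE_DTDLOAD leaves entity substitution and DTD loading
     * in their safer default (off) state, which blunts XXE and
     * "billion laughs"-style entity-expansion tricks. */
    return xmlReadMemory(buf, len, "query.xml", NULL, XML_PARSE_NONET);
}

int main(void)
{
    const char xml[] = "<query><item id=\"1\"/></query>";
    xmlDocPtr doc = parse_untrusted(xml, (int)sizeof(xml) - 1);
    if (doc == NULL) {
        fprintf(stderr, "rejected malformed input\n");
        return 1;
    }
    /* ... walk the tree here ... */
    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}

The point being that the parser should reject or defang hostile input before your own code ever sees the document.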
 
He should have contacted Google's zero-day team. They would have helped him.
Knowing a little bit about how Google operates (see footnote), I'm sure that there are somewhere between a dozen and a hundred people inside Google already at work fixing it. Given that this has been known for over 12 hours, most likely the Google-internal version of libxml2 is already patched and safe, or the problem has been worked around in some other way, and rollout to all the millions of Google hosts is in progress. I have no idea how the problem is getting fixed, nor whether or when such a fix will become public, nor whether Google's internal fix would even be useful for the world at large.

If this CVE is really important or relevant (which I don't know and can't evaluate), the open source community should fix it quickly. It is technically capable of doing that. If it isn't organizationally capable of it, that tells you something really important about the state of the world.

Footnote: I was a Google employee for quite a few years (insiders will know what the following means: I was "root in cloud"), but I am not now, and I have no internal news from them at all.
 
Knowing a little bit about how Google operates (see footnote), I'm sure that there are somewhere between a dozen and a hundred people inside Google already at work fixing it. Given that this has been known for over 12 hours, most likely the Google-internal version of libxml2 is already patched and safe, or the problem has been worked around in some other way, and rollout to all the millions of Google hosts is in progress. I have no idea how the problem is getting fixed, nor whether or when such a fix will become public, nor whether Google's internal fix would even be useful for the world at large.

If this CVE is really important or relevant (which I don't know and can't evaluate), the open source community should fix it quickly. It is technically capable of doing that. If it isn't organizationally capable of it, that tells you something really important about the state of the world.

The hyperscalers can afford it. The open-source community is another thing. Fixes must be tested before they're released.
 
With libxml2, what needs consideration is the use case.
If it is used to handle external data coming in over the network, the vulnerability matters.

But if it is used just for storing the configuration of a specific app that can only be reconfigured via its own menus, it is unlikely to matter, as the data SHALL be validated by the app BEFORE it is stored and read back by the app itself. There SHALL NOT be any unvalidated data (if there is, that is a fatal bug in the app).

So simply deleting libxml2 from ports would be stupid.
We should have a mechanism on LIB_DEPENDS and RUN_DEPENDS to specify whether the dependency is used for generic input (the default) or only for data internal to the port.
Something like LIB_DEPENDS= libxml2>0:textproc/libxml2:internal, plus some mechanism to check whether :internal is specified and to forcibly set IGNORE or BROKEN if it is NOT. Not sure whether that is possible.
 
Okay, I hit that piece of bullshit bingo for the second time now. Software cannot have a "bill of materials" simply because it is not material - so this one is bullshit in itself!
It has very little mass.
 
I'm seeing terms like "Process un-trusted data" thrown around in this thread... And... exactly what does that even mean?

I suspect that means "data from un-trustworthy sources". It may be good, well-formed data that just happens to come from a "bad" actor. What if the data is well-formed, but irrelevant? Just how far back upstream does textproc/libxml2 need to check? What would be a red flag? To what extent is "un-trusted data" something that can actually be blamed on libxml2? I can ask libxml2 to process somebody's private info that was found using illicit means. I can ask libxml2 to process bogus data that I generated for a legitimate debugging project. Where do we draw the line?

As another example, we can use netcat to generate malicious packets. Netcat is in the FreeBSD base system and in most Linux distros; do we start screaming about the national security implications of the very existence of netcat?

Besides, the official process for publishing CVEs says nothing about developing a fix... OP should stop spreading FUD without having a handle on how things work. There are plenty of CVEs that were published a while ago and still don't have a fix in place. This page on GitHub explains what a CVE is and what a security advisory is. Basically, a security advisory is released for a project once the fix for the CVE is developed and integrated into the next version.

That being said, FreeBSD's standard response to CVEs affecting a given port is to just not compile it, spit out info that there are CVEs for this version of this port, and tell the user that this can be overridden with a Makefile flag, or that they can wait for the next version of the port. And hopefully the vulnerability report about the current version becomes a security advisory that says "Older versions of this software are vulnerable. The vulnerability is resolved in specific newer versions.".

Point of my post being, you gotta know the process before you start pointing fingers and screaming about the implications of a given vulnerability, especially with off-the-cuff reactions. Gotta think before we post, then there'll be a little less bullshit flying around on the Internet...

 
The upstream maintainer of the port we know as textproc/libxml2 has carried out his threat to release vulnerability information into the wild
Disinformation. Upstream followed the guidelines for disclosing a vulnerability that cannot be fixed in time.
Thus he has created zero-day opportunities
Wrong. Before the vulnerability was made public, it might have been a zero-day. A zero-day vulnerability is by definition not known to the public. Now it is known.
Given the number of applications depending on his library, has he just endangered the entire Internet infrastructure?
Exaggeration on purpose.
If his action will enable malicious foreign governments and terrorists to attack states or other targets, how many countries' security laws is he likely to have violated?
None.
 
As for the timing of the disclosure:

It isn't timed around the availability of fixes; it is timed around when adversaries might come to know about the problem.
 
But if it is used just for storing the configuration of a specific app that can only be reconfigured via its own menus, it is unlikely to matter, as the data SHALL be validated by the app BEFORE it is stored and read back by the app itself. There SHALL NOT be any unvalidated data (if there is, that is a fatal bug in the app).

So simply deleting libxml2 from ports would be stupid.
We should have a mechanism on LIB_DEPENDS and RUN_DEPENDS to specify whether the dependency is used for generic input (the default) or only for data internal to the port.
Something like LIB_DEPENDS= libxml2>0:textproc/libxml2:internal, plus some mechanism to check whether :internal is specified and to forcibly set IGNORE or BROKEN if it is NOT. Not sure whether that is possible.
No, that's the wrong place. The dependencies are complex enough already. Also, you probably don't have a precise definition of what exactly would be "internal" - there are almost always corner cases.

If you want to pimp up the dependency tree with use-case information like that, then make it a separate mechanism.
 
Well, well, a real "pile-on" on the FreeBSD forum. I thought we were better and more thoughtful than that.

First, several people have asked me for a link, so here it is: https://vuxml.freebsd.org/freebsd/index.html. I presumed we'd all have read it anyway, having received the security e-mail from our systems and needing to assess the threat. It's linked from a chain in every forum page header.

Second, if you actually read my posts, I'm not complaining about the fact the software is provided "as is" with no guarantees etc. We all know that. We use Free Software and we take that into consideration. The maintainer is under no obligation to write it in the first place, or to undertake a QC check, or to continue maintaining it forever, or at all. We all know that. It's what Free Software is all about. The licence makes it very clear he has no responsibility for any of that.

What I was writing about is the disclosure of information which could be useful to a terrorist, enemy state, or criminal gang, by informing them of an attack surface against services hosted on machines using the software, which seems to be a dependency of mail and HTTP servers of many types (e.g. Apache), and is apparently even used by some professional programs, according to the maintainer. Indeed, part of his complaint is that people are making money out of his work but expecting him to maintain it without any help from them, and he's understandably fed up about that.

The problem is not that he hasn't fixed it, but that, by disclosing it so openly, he has alerted threat actors to an opportunity they might otherwise have taken longer to discover, and therefore he could be deemed by a law-enforcement service to have acted to assist criminals, terrorists, or enemy powers in attacking a country's Internet-carried infrastructure, including law enforcement, defence, power grids etc. It doesn't even matter how quickly such people could have found out by other means - the mere fact that he has made it easier for them would probably be enough to get him arrested. Nor does it matter that governments are foolish to use the public Internet as a vehicle to carry vital services in the first place rather than investing in their own separate systems. The fact is they do.

We all know there is blame to be apportioned in many other places, but governments find it easier to outlaw anyone who takes advantage of or exposes the weaknesses than to fix them.

Can people really not see the difference between saying "I don't want to fix this anymore. If you want it fixed do it yourself" and telling the world "This is how you could use it to break into computers"?

For me, it's like the difference between standing on a street corner and protesting against government policy, and breaking into a government facility and damaging the equipment. The first registers a complaint and the second gets you proscribed as a terrorist, and once that happens, even complaining about the proscription becomes a crime.
 
The difference is intent, if you wanna make a law-adjacent metaphor. They didn't write exploit code.

The maintainer expressed themselves. The major parties did not respond appropriately. The maintainer changed their process.

You cannot blame the maintainer for threat actors' actions. If the maintainer's process changes put major parties at risk, maybe they should have responded appropriately. If major parties don't want to be at risk because of the process, they can choose not to use the software.
 
What I was writing about is the disclosure of information which could be useful to a terrorist, enemy state, or criminal gang ... the mere fact that he has made it easier for them would probably be enough to get him arrested.
Y'know, you seriously gotta consider the audience of the post. And that means, consider how you phrase things.

I'd like to remind you, this is a technical forum. Not a place for sensational posts that look like half-baked, poorly thought-through alarms. Learning how CVE's and security advisories work is one legitimate aim that this conversation could have. Learning what the appropriate reaction to those is another legitimate aim. Spreading FUD and getting all excited about things that are frankly out of your hands - just not the best way to frame things, I'd think. Gotta be level-headed about that stuff.

You can scream about the national security implications of everything under the sun - a spam server, a scanner, a node to orchestrate DDoS attacks, a sovereign chat platform to trade classified documents, illicit software and other vices - and there's a LOT more problematic and criminal behavior that has been made possible by tech, with no end in sight.

Just what are you specifically gonna do about those CVEs, anyway? What will happen to you specifically if you don't?

There are people who are legitimately concerned about what will happen to them if they circumvent the Great Firewall of China with a VPN. Nothing's gonna happen to your machine if you override that CVE warning at compile time. As for the FreeBSD project? They have developed a very adequate response process for CVEs, and it keeps them out of legal trouble - and it's the kind of trouble that is actually pretty irrelevant to OP personally. If OP wants to get political about it - not my problem.

Can people really not see the difference between saying "I don't want to fix this anymore. If you want it fixed do it yourself" and telling the world "This is how you could use it to break into computers"?
Try telling a random Joe on the street, "This is how to break into computers". What do you think the reaction of that random Joe is?

That random Joe probably has no clue how to gain access to a computer via nonstandard means, and standard means are probably out of reach anyway. And if, on the off chance, that random Joe is someone who can actually understand WTF you just told him, what are the chances he'll go off and do exactly that without considering the consequences? That random Joe is far likelier to jump off a building in response to a stupid TikTok challenge than to go around breaking into computers and stealing data.
 
This is the side of the base system I should be learning about - hardening the base system 👀 I would like FreeBSD to be here for another half century.
 
I'm seeing terms like "Process un-trusted data" thrown around in this thread... And... exactly what does that even mean?

You can run a checksum hash on data files, similar to software packages. This is still new to me, and as far as I know it's where ML/DL in the AI space is headed. We are moving in that direction, as far as I know.
 
You can run a checksum hash on data files, similar to software packages. This is still new to me, and as far as I know it's where ML/DL in the AI space is headed. We are moving in that direction, as far as I know.
More likely it means data obtained from external sources isn't validated against even the most basic constraints, such as attributes or text nodes containing XML escape sequences that could allow an attacker to inject markup which could, e.g., cause a web browser to download attacker-provided JavaScript and execute it.
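A toy example of what escaping that kind of thing looks like - hand-rolled purely for illustration; real code should use a vetted routine (libxml2 ships xmlEncodeEntitiesReentrant for exactly this):

Code:
/* Toy sketch: escape untrusted text before embedding it in XML/HTML,
 * so markup in the data stays data instead of becoming structure. */
#include <stdio.h>

static void xml_escape(FILE *out, const char *s)
{
    for (; *s != '\0'; s++) {
        switch (*s) {
        case '<':  fputs("&lt;", out);   break;
        case '>':  fputs("&gt;", out);   break;
        case '&':  fputs("&amp;", out);  break;
        case '"':  fputs("&quot;", out); break;
        default:   fputc(*s, out);       break;
        }
    }
}

int main(void)
{
    /* An attacker-supplied "text node": unescaped, this would inject
     * a live <script> element into the generated page. */
    xml_escape(stdout, "<script>fetch('http://evil.example/')</script>");
    putchar('\n');
    return 0;
}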

Checksums do not validate input. They validate whether it was transmitted correctly, but that won't stop a malicious content provider from providing purposefully crafted code in order to capture client resources.
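A quick sketch of why, assuming OpenSSL's SHA256() just for brevity: a digest over a hostile payload verifies exactly as happily as one over a benign payload.

Code:
/* Sketch: a checksum proves the bytes arrived intact, nothing more.
 * Uses OpenSSL's SHA256(); link with -lcrypto. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char payload[] = "<a href=\"javascript:alert(1)\">click me</a>";
    unsigned char sent[SHA256_DIGEST_LENGTH], rcvd[SHA256_DIGEST_LENGTH];

    SHA256((const unsigned char *)payload, strlen(payload), sent);
    /* ... payload travels over the network unmodified ... */
    SHA256((const unsigned char *)payload, strlen(payload), rcvd);

    if (memcmp(sent, rcvd, sizeof(sent)) == 0)
        puts("checksum OK - and the payload is still malicious");
    return 0;
}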
 
Checksums do not validate input. They validate whether it was transmitted correctly, but that won't stop a malicious content provider from providing purposefully crafted code in order to capture client resources.
Yep. And even then, validation of input data is highly dependent on the context. textproc/libxml2 can be considered a validator of XML data, but there's no such thing as a perfect validator (or a perfectly secure validator).

One does need to have specific standards/limits against which the data can be validated. Even then, it's impossible/impractical to think of friggin' EVERYTHING, y'know...
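For XML, those "specific standards/limits" usually mean something like a schema. A rough sketch with libxml2's XML Schema API - the functions are real libxml2 API, but the schema and document are made up, and error checking is omitted for brevity:

Code:
/* Sketch: validate a document against declared limits (an XML Schema). */
#include <libxml/parser.h>
#include <libxml/xmlschemas.h>
#include <stdio.h>

int main(void)
{
    const char xsd[] =
        "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">"
        "<xs:element name=\"age\" type=\"xs:nonNegativeInteger\"/>"
        "</xs:schema>";
    const char xml[] = "<age>42</age>";

    xmlDocPtr doc = xmlReadMemory(xml, (int)sizeof(xml) - 1, "in.xml",
                                  NULL, XML_PARSE_NONET);
    xmlSchemaParserCtxtPtr pc =
        xmlSchemaNewMemParserCtxt(xsd, (int)sizeof(xsd) - 1);
    xmlSchemaPtr schema = xmlSchemaParse(pc);
    xmlSchemaValidCtxtPtr vc = xmlSchemaNewValidCtxt(schema);

    /* xmlSchemaValidateDoc() returns 0 when the document satisfies
     * every constraint the schema declares. */
    printf("valid: %s\n", xmlSchemaValidateDoc(vc, doc) == 0 ? "yes" : "no");

    xmlSchemaFreeValidCtxt(vc);
    xmlSchemaFree(schema);
    xmlSchemaFreeParserCtxt(pc);
    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}

And even a schema only catches what you thought to declare - which is exactly the "impossible to think of EVERYTHING" problem.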
 
oooo so this is an XML parser we are talking about here 👀 code and data go hand in hand. No data, code has no use.

Circling back to the original question, this has many implications. Parsing metadata - financial data, banking data, login information. Whoa. I should join the base system cybersecurity core dev team mailing list. A couple of my buddies work in the cybersecurity sector in this space; I wanted to unpack the subject of this thread more because I am also interested in robust cybersecurity tooling and frameworks.

Did a brief search for docs. This looks important - Base system docs - Security
 
I'm seeing terms like "Process un-trusted data" thrown around in this thread... And... exactly what does that even mean?
When you access a page from the internet in a web browser, you are effectively loading lots of untrusted data (especially if someone doesn't have an ad-blocker!). For example:
  • Images: PNG, JPEG, etc.
  • XML/HTML, etc.
  • Javascript
  • GLSL shaders
This means that if any of your parsers, or the stack above them, has a flaw, a dodgy person could potentially craft one of these types of data to exploit it. For a very dumb example with BMP: a parser could read the size from the header and then fread the data into an allocation of that size. If that header contained an incorrect size, you could get the naive parser to read/write outside of that allocation.
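Roughly like this contrived C sketch (not real BMP code - the header fields and limits are invented for illustration):

Code:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct img_header {
    uint32_t data_size;      /* claimed size of the pixel data */
    uint32_t width, height;  /* claimed dimensions */
};

uint8_t *parse_naive(const struct img_header *h, FILE *f)
{
    uint8_t *pixels = malloc(h->data_size);  /* sized from one field... */
    if (pixels == NULL)
        return NULL;
    /* BUG: ...but filled according to two different fields. A header
     * with data_size < width * height writes past the allocation. */
    fread(pixels, 1, (size_t)h->width * h->height, f);
    return pixels;
}

uint8_t *parse_checked(const struct img_header *h, FILE *f)
{
    /* Cross-check the header's claims before trusting any of them. */
    if (h->width == 0 || h->height == 0 ||
        h->width > 32768 || h->height > 32768 ||
        (uint64_t)h->width * h->height != h->data_size)
        return NULL;                         /* reject lying headers */
    uint8_t *pixels = malloc(h->data_size);
    if (pixels != NULL && fread(pixels, 1, h->data_size, f) != h->data_size) {
        free(pixels);
        return NULL;
    }
    return pixels;
}

int main(void)
{
    /* A lying header: claims 16 bytes of data for a 16x16 image. */
    struct img_header evil = { 16, 16, 16 };
    FILE *f = fopen("/dev/zero", "rb");
    if (f == NULL)
        return 1;
    free(parse_checked(&evil, f));  /* returns NULL: header rejected */
    fclose(f);
    return 0;
}

The fix isn't anything clever - just refusing to trust the header until its fields agree with each other and with sane limits.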

Writing a parser for some data is hard enough without people using it to tap at the door. It's not all about memory either, so any stack can be vulnerable unless something is put into place (usually restrictive subsets of the format). Quake III had an entire virtual machine developed for it to handle untrusted models, textures and mods that the remote servers would distribute to the clients.
 
oooo so this is an XML parser we are talking about here
Sigh... not exactly. The parser is just being used as an example of how one really needs to understand how the program even works, what the limitations are, and how issues are to be addressed - what's the appropriate thing to do.

Sometimes, it's a simple matter of changing a few lines of code to make sure the data can be processed properly. And sometimes (as kpedersen pointed out), you need a whole infrastructure to be set up, complete with extra authentication checks in place. A bit like having sendmail be limited to the internal network only, so that if the command originates from outside, sendmail just doesn't work. Point being, it has to be difficult for an outsider to do damage; you need to gain trust and be able to work within internal policies.

And yeah, it's pretty important to be able to define terms exactly. It might seem like splitting hairs to some people, but it is an important aspect of cyber security and designing the correct response to a threat/attack.

Cyber security has been around for a long enough time that it's got standard procedures in place to give people an idea of what the appropriate reaction to a threat/attack is.

And this whole conversation is more about whether OP's understanding of those standard procedures, and of their impact, is correct. This is why it's important to read the whole thread, and not just the page with the latest comment. I mean, OP was screaming about half the Internet being brought down, and whole countries disappearing off the map, because somebody did not follow standard cybersecurity analysis procedures correctly!
 
I nominate astyle as the next executive director of OWASP.

This is a fairly long, week-old thread. I'll read it when I can make time. I agree with OP. No piling on. We look for solutions.
 