BlackEnergy cyberespionage group targets Linux systems and Cisco routers

That depends on how things are exploited. Payloads specifically designed for Linux usually do not work on FreeBSD. Although there are a lot of similarities, the way the ABIs are invoked from assembler differs. Some scripted payloads may work, but most of the time they won't either.

That's not to say the exploits can't be made to work on FreeBSD. Applications are mostly the same; if it's exploitable on Linux, it's likely exploitable on FreeBSD too. So it's possible a future update may contain modules specific to FreeBSD.
 
In fact, all those outlaws breaching remote machines have been exploiting overflow issues in code. And there is no street magic to prevent the very mechanism of those attacks. :rolleyes:
 

Unless you're on VMS, which apparently had/has the ability to deal with most buffer overflow attacks. :)
 
The best way to get rid of buffer overflows is to align the buffers with the end of a page and leave the next virtual page unmapped. That way, the first access that runs off the end of the buffer will kindly ask the kernel to stop the runaway code.
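A minimal sketch of that guard-page idea, assuming POSIX mmap()/mprotect() as available on FreeBSD (the helper name guarded_alloc() is made up for illustration, not taken from any real allocator):

Code:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void *guarded_alloc(size_t size)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t span = (size + page - 1) / page * page;   /* buffer size rounded up to whole pages */

    /* map the data pages plus one extra page that will stay inaccessible */
    char *base = mmap(NULL, span + page, PROT_READ | PROT_WRITE,
                      MAP_ANON | MAP_PRIVATE, -1, 0);
    if (base == MAP_FAILED)
        return NULL;

    /* the guard page: any access to it faults */
    if (mprotect(base + span, page, PROT_NONE) != 0) {
        munmap(base, span + page);
        return NULL;
    }

    /* hand out a pointer so that buffer[size - 1] is the last byte before the guard */
    return base + span - size;
}

int main(void)
{
    char *buf = guarded_alloc(100);

    if (buf == NULL)
        return 1;
    memset(buf, 'A', 100);   /* stays inside the buffer, fine */
    buf[100] = 'A';          /* first byte past the end lands on the guard page -> SIGSEGV */
    printf("never reached\n");
    return 0;
}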
 
FreeBSD showing even the slightest readiness to become more Linux-like and to even take over some weird Linux crap won't make things better. Quite the contrary.

Two candidates that sometimes creep up: systemd-related (D-Bus) and X/Wayland-related.

The problem is complex but can in some way be summarized as: incompetent 3l1t3 hackzorz playing with C without much of an idea, hacking up crap and usually generously linking in crap from other losers.

Well noted, this is a rant, but not only a rant, as FreeBSD has - by concept - done the best it could to escape the crap-ism that makes up major parts of Linux. Taking in concepts from Linux will prove similar to having a gang of crack users throw a party in one's house.

For fairness' sake it should be noted that it's ultimately us, the users, driving FreeBSD into that by expecting that FreeBSD must keep up with whatever Linux declares en vogue. One case is Wayland. Of course we could create our own Wayland server; that's not an insurmountable miracle, others have done it (like the Enlightenment people). Unfortunately though, any Wayland discussion usually arrives quickly at kernel drivers for graphics cards (which Linux has), as if Wayland couldn't be done without them. Now, of course, kernel drivers are faster than userland drivers, but is FreeBSD about being no slower in graphics than Linux? I don't think so, but it seems most do.

To make things worse, the real lack of fairness in those games is hardly ever seen. It basically comes down to this: Linux, with gazillions of people and money, somehow hacks something up, while FreeBSD, with far fewer resources, has to not only keep up but additionally - and hardly noticed or appreciated - do it in an engineering-wise sane and solid way.

That's, BTW, the really major reason to be grateful for the recent large million-dollar donation: it provides an at least remotely reasonable chance for FreeBSD to do properly what it does.
 
The best way to get rid of buffer overflows is to align the buffers with the end of a page and leave the next virtual page unmapped. That way, the first access that runs off the end of the buffer will kindly ask the kernel to stop the runaway code.
Realigning buffers is a waste of time: it hurts performance, plus if the buffer is placed within a class/struct it has zero effect. There has to be a simple scheme capable of killing the mechanism of overflows at the very root. It would be fast, reliable and rather simple as well.
 

Some machine architectures did away with the possibility of buffer overflow exploits already in the early 1990s, using a special arrangement of the user process stack and some other tricks that I can't remember now. I think it was DEC's Alpha CPUs that implemented something like that. Unfortunately those platforms were not very successful because they weren't x86 compatible, and we are now stuck with the brain-dead x86 machine architecture, which cannot be changed to implement proper countermeasures against buffer overflows without breaking compatibility with old software.

Luckily the situation is changing and the requirement of x86 compatibility is no longer that important, because with cloud services and networked applications it doesn't matter what type of hardware implements the service.
 
kpa,

Look at my code: there is nothing extraordinary about implementing that protection for the x86 architecture. x86 is only bad for HPC (high performance computing), because it has too few registers.
 

kpa, I think it was also HP that changed the direction of the stack. That meant you could overflow a buffer in the stack space, but only into free memory. The return address and other data were safe because they lay in the other direction. I currently have no idea if the layout on the AXP was also that way. But old software is always the millstone around the neck - the only way to get around the problem of buffer overruns is to separate the memory handling from the language. Have the allocation and placement be part of the kernel, where it belongs.

As for the demise of x86: I will be happy to share a good scotch with you on that day :D

Look at my code: there is nothing extraordinary about implementing that protection for the x86 architecture. x86 is only bad for HPC (high performance computing), because it has too few registers.
What you want to do is basically an MMU on software level. While on the MIPS3K and PPC603 there was the argument that a software tablewalk was faster than hardware, this has changed since those days. Today the hardware MMUs are much more elaborate, so I would go for the hardware version. And even better would be to switch from C to some other language which does not allow the developer to be so careless. Having range checks on arrays would be a start, obligatory NULL checks on pointers the next. Yes, it sounds like Pascal, but that is a language where you can rarely run out of a data structure by accident.
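To illustrate what "range checks on arrays" plus "obligatory NULL checks" could look like when bolted onto plain C by hand (the struct and function names below are my own invention, not anything FreeBSD ships):

Code:
#include <stdio.h>
#include <stdlib.h>

/* a "fat pointer": the buffer carries its own length around */
struct checked_buf {
    unsigned char *data;
    size_t len;
};

/* every access goes through here: NULL check plus range check */
static unsigned char *checked_at(struct checked_buf *b, size_t i)
{
    if (b->data == NULL || i >= b->len) {
        fprintf(stderr, "bad access: index %zu, length %zu\n", i, b->len);
        abort();            /* fail loudly instead of silently corrupting memory */
    }
    return &b->data[i];
}

int main(void)
{
    struct checked_buf b = { malloc(16), 16 };

    *checked_at(&b, 15) = 0x41;   /* last valid byte, fine */
    *checked_at(&b, 16) = 0x41;   /* one past the end: aborts instead of overflowing */

    free(b.data);
    return 0;
}

That is roughly what Pascal-style languages give you for free on every single access.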
 
What you want to do is basically an MMU on software level.
What exact scheme do you mean by the term "MMU"? The NX bit, for instance, isn't efficient in many cases, and for some reasons it's necessary to have an executable stack and/or even an executable heap (data). Another point: yes, it's no secret that hardware-implemented algorithms are way faster than even the best software counterparts, but hardware-based algorithms cannot be updated on the fly and they're trickier to develop and debug, to put it gently.
 
What exact scheme do you mean by the term "MMU"?
The aspect of memory protection which a Memory Management Unit can provide.
The NX bit, for instance, isn't efficient in many cases, and for some reasons it's necessary to have an executable stack and/or even an executable heap (data).
I very much doubt that; this is not needed. Are there any references for this claim?
Another point: yes, it's no secret that hardware-implemented algorithms are way faster than even the best software counterparts, but hardware-based algorithms cannot be updated on the fly and they're trickier to develop and debug, to put it gently.
Hardware-based logic can be changed on the fly; it can even change itself while being active. Debugging is somewhat different, sure, but not complicated. And developing this? Not really complicated either, but it requires you to think in parallel structures, or else you will have complicated code and one hell of a debugging job on your hands.
 
Well, then would you like to explain why hardware-based algorithms have not become the lion's share of those in use?
I very much doubt that; this is not needed. Are there any references for this claim?
It's a matter of speed: you may use absolute addresses to jump to, or relative ones.
 
For accuracy's sake: the worst slowdown for a jmp/call is page swapping, since the targeted function may sit in a page that is not loaded. An even more disastrous scenario is when a function is split across neighboring pages; such splitting is possible even for very small functions. So relative jumps (thanks to their shorter opcodes) leave more room to place a function within a single page. And yes, the shorter opcodes are good for caching too. :)
 
Well, then would you like to explain why hardware-based algorithms have not become the lion's share of those in use?
Sure. An algorithm can be implemented in a lot of ways, choosing a hardware description language and going for silicon is only one way to do it. As I was pointing out before, you need to get your mind around the parallel nature of hardware in order to create efficient and maintainable implementations. This kind of thinking is not widely seen, and it is not taught at universities to the masses. So there are not that many people who can do this. That is point number one. Point number two is that reconfigurable hardware is not really cheap.

FPGAs are more expensive per unit when compared with mass-produced ASICs. Well, they are ASICs too, but you need more die area for the same amount of logic due to the reconfigurability and flexible routing resources, so they are simply more expensive per unit. So you, as a customer, would pay several times the money you would pay for, say, a Core i3, for a chip that comes close in logical capacity (number of gates/transistors/...). And there is no software available off the shelf for it. But it can run circles around its counterpart for workloads which profit from a high level of parallel execution.

The sweet spot, economically, is the CPU as we know it. The hardware is not changeable while running (okay, certain microcode patches would make it seem that way); the re-configuration is done by the software, which tells the hardware how to switch states, one operation after the other. Not as fast as a custom-tailored ASIC, not as flexible as an FPGA, but a lot less drain on the wallet.

It's a matter of speed: you may use absolute addresses to jump to, or relative ones.
Since I was asking for reasons to have the stack segment or the heap be executable, and this does not provide a reason why it is needed, I'll ask again: why would I need the stack segment to be executable, or the heap space?
 
Since I was asking for reasons to have the stack segment or the heap be executable, and this does not provide a reason why it is needed, I'll ask again: why would I need the stack segment to be executable, or the heap space?
Post #17 explains it in more detail.
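For what it's worth, here is a tiny sketch of the classic legitimate case for an executable data page, JIT-style code generation, assuming x86-64 and POSIX mmap()/mprotect(); the emitted bytes are simply "mov eax, 7; ret" and nothing here comes from anyone's actual project:

Code:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* machine code for: mov eax, 7; ret  (x86-64) */
    const unsigned char code[] = { 0xB8, 0x07, 0x00, 0x00, 0x00, 0xC3 };
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* write the code into an anonymous, writable page ... */
    unsigned char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                            MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    memcpy(p, code, sizeof(code));

    /* ... then flip it to read+execute before calling it */
    if (mprotect(p, page, PROT_READ | PROT_EXEC) != 0)
        return 1;

    int (*fn)(void) = (int (*)(void))p;
    printf("%d\n", fn());   /* prints 7 */

    munmap(p, page);
    return 0;
}

Note that the page is never writable and executable at the same time, which is exactly the policy the NX bit and W^X are meant to enforce.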
Your words don't contradict my explanation; they're just more detailed :) Yes, computing can be realized in a hellish number of ways. The human brain, for instance, is a general-purpose analog computer. And reconfigurability is the very thing that hurts performance. Actually, all computing is always about flexibility versus performance versus stability versus power consumption.
 
Just a remark: the concept of anonymous functions, for example, may need an executable stack/heap. I'm just humbly staying silent about self-writing code :)
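One concrete (if niche) case of that is GCC's nested-function extension: when the address of a nested function that uses variables of the enclosing one is taken, GCC builds a small trampoline on the stack, and the call through the pointer only works if that stack page is executable. A sketch, assuming GCC (clang does not support the extension):

Code:
#include <stdio.h>

static void apply(int (*fn)(int), int x)
{
    printf("%d\n", fn(x));
}

int main(void)
{
    int offset = 42;

    /* nested function (GCC extension) capturing 'offset' from main() */
    int add_offset(int v) { return v + offset; }

    /* taking its address forces GCC to emit a trampoline on the stack */
    apply(add_offset, 8);   /* prints 50 */
    return 0;
}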
 