Website skimming attacks

I am referencing this article.

Question: how could such an attack be executed if the application code were non-writeable?

For instance, when deploying to a different, secure location and mounting the code into the website read-only via nullfs?

I was trying to do that, but web framework developers take measures to make it impossible: they put code into the framework which, when run from a read-only filesystem, will deliberately crash the application.
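What I had in mind was roughly this (paths are just examples):

Code:
# mount the deployed code read-only into the webserver's tree
mount -t nullfs -o ro /usr/local/app-releases/current /usr/local/www/myapp

# or the equivalent /etc/fstab entry:
/usr/local/app-releases/current  /usr/local/www/myapp  nullfs  ro  0  0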
 
It could have been due to stored cross-site scripting, or possibly an ad hosted by the victim web site. Even though a web site is non-writeable, there are many ways to get code onto the site: mainly through social engineering and using a legitimate user, or, as I mentioned, maybe an ad that is hosted by the same site. The article didn't go into what the attack vector was, so I'm not sure how the attack was perpetrated.
 
Yeah, but this is actually what I would like to know a lot better.
The article recommends CSP and SRI as countermeasures, and I am wondering if that kind of administrative overhead is indeed the way to go, or if it wouldn't be more appropriate to prevent the malicious code from being inserted in the first place.
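(For reference, my rough understanding of what the two mechanisms look like - host name and hash are made up:)

Code:
# CSP is a response header the server sends, e.g.:
Content-Security-Policy: default-src 'self'; script-src 'self'

# SRI is an attribute on the script tag; the browser refuses to run
# the file if its hash does not match:
<script src="https://cdn.example.com/framework.js"
    integrity="sha384-<base64 digest of the file>"
    crossorigin="anonymous"></script>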
 
That part always seems to be missing from those analyses. It's all about what happens next, when an unsuspecting client visits the infected site. It's never about how they got that malware crap on there in the first place.

As we say in Dutch: "Voorkomen is beter dan genezen", which roughly translates to "prevention is better than a cure".
 
I don't think CSP requires any admin overhead but I have not implemented it; I am on the other side of the fence as a (legal) attacker. I do know CSP is pretty good if implemented. Normally, JavaScript attacks come from several places, as I mentioned - I don't know anything about this site, but if ads are hosted there, an attacker could run an ad that hosts malicious JavaScript. The site could also have the "PUT" verb enabled and the attacker wrote code to the server, or they are doing a remote file include somehow.
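A quick way to see what a server allows is something like this (example.com stands in for the target; only probe servers you are authorized to test):

Code:
# ask the server which methods it advertises
curl -i -X OPTIONS https://example.com/

# if the Allow: header lists PUT, an attacker would try to drop a file:
curl -i -X PUT --data-binary @skimmer.js https://example.com/skimmer.js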

I wish they would post the attack vectors but unfortunately that gives attackers ideas, if they didn't already have one.
 
In addition to things already mentioned in this thread:
Most (? or just many?) websites today not only use frameworks, but they also load those frameworks and other things (fonts come to mind) from other websites. Knowing that, how can anyone think it is hard to get third-party (possibly malicious) code onto a website?

The hard part is trying to set up a website to be as secure as possible, and monitor it closely enough to detect any attempts to break it.
 
That part always seems to be missing from those analyses. It's all about what happens next, when an unsuspecting client visits the infected site. It's never about how they got that malware crap on there in the first place.

Yessir - there is money involved in the web business, and that is when things get strange. Same as with banks - they don't tell you how their security works; and they might pay a ransom rather than tell anybody they got hacked.

On the web we are supposed to be consumers, and to believe that everything is taken care of. Or, if we have a webshop of our own, we are supposed to be consumers of some web-software product, and to believe that everything is taken care of (as long as we are obedient and follow orders). No need to know anything specific - that might just help the criminals. :(

But - as I told here recently - I for my part once found myself without a database GUI, and decided to [ab]use a web framework as one. And from there I have a very valid need to know, as I code a lot on my own (although there is not much at stake - probably somebody checking out my porn movies, at worst).

As we say in Dutch: "Voorkomen is beter dan genezen", which roughly translates to "prevention is better than a cure".

Yeah, same here (DE) - but that implies that one takes an active part in the design - which a consumer is not supposed to do.
 
I don't think CSP requires any admin overhead but I have not implemented it;

I had a look at it, and it seems to be something that has to be configured (and adapted whenever the software changes). So that might be something you would want to automate in the continuous-delivery toolchain - which brings new risks of its own...
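On Apache httpd it seems to boil down to something like this (needs mod_headers; the policy string is the part that has to track the application):

Code:
# in /usr/local/etc/apache24/httpd.conf or a vhost:
Header always set Content-Security-Policy "default-src 'self'"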

I am on the other side of the fence as a (legal) attacker. I do know CSP is pretty good if implemented.

That may be - but all these things that are put on top to enhance security have an implicit weakness (and we saw that last week when Firefox ceased to work properly): if somebody forgets to maintain them, then nothing works anymore.

Normally, JavaScript attacks come from several places, as I mentioned - I don't know anything about this site, but if ads are hosted there, an attacker could run an ad that hosts malicious JavaScript. The site could also have the "PUT" verb enabled and the attacker wrote code to the server, or they are doing a remote file include somehow.

Hmm, I see...
 
In addition to things already mentioned in this thread:
Most (? or just many?) websites today not only use frameworks, but they also load those frameworks and other things (fonts come to mind) from other websites.

Google explicitly recommends loading the fonts directly from their site. Reading that, I didn't like the idea - because I wouldn't like to have things outside of my revision control that can mysteriously change at any time and break things. I didn't even think as far as the possibility that it might also mysteriously change into some malicious JavaScript...
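Self-hosting instead does not look like much work - roughly this, if I understand it correctly (font name and path are just an example):

Code:
/* serve the font from our own tree instead of Google's servers */
@font-face {
    font-family: "Roboto";
    src: url("/fonts/roboto-regular.woff2") format("woff2");
    font-weight: normal;
    font-style: normal;
}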

Addendum: I just read this one here about SRI, which explains a bit about how things work. So these folks do actually hotlink JavaScripts!?!?! Well then... But then, if you do that SRI gimmick, every time upstream brings a new version your site ceases to function (and we're back at Firefox-last-week). So you automate that in the continuous-delivery toolchain - and you're back at the start (maybe including a layover in jail).
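For the record, generating the integrity value is the step such a toolchain would have to re-run on every upgrade - per the usual SRI docs it is just:

Code:
# hash the pinned copy of the script; the output goes into the
# integrity="sha384-..." attribute of the script tag
openssl dgst -sha384 -binary framework.js | openssl base64 -A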

Knowing that, how can anyone think it is hard to get third-party (possibly malicious) code onto a website?

Maybe because it is not our aim to do that. Maybe because we also respect the neighbor's closed door without checking if it is actually locked...

And, speaking for myself, I would just like to know as well as possible what to consider before opening a webserver to the general public. And as an old-school Unix admin I start with the basic things (like access rights and separation of duties). If the webapp userid is not allowed to read certain data back from the database, injected JavaScript will not be able to read it either.
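On the database side that would look roughly like this (PostgreSQL syntax; table and role names are made up):

Code:
-- the webapp role may store card data, but can never read it back -
-- so neither can any JavaScript injected into the webapp
REVOKE ALL ON payment_cards FROM webapp;
GRANT INSERT ON payment_cards TO webapp;
GRANT SELECT, INSERT ON orders TO webapp;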

The hard part is trying to set up a website to be as secure as possible, and monitor it closely enough to detect any attempts to break it.

Aye.
 
.htaccess (httpd) can prevent many website injections. I make everything GET or HEAD only, except for submission folders that allow POST through their own .htaccess files. Then, depending on what scripting languages there are on your website, there are .htaccess directives to block those injections. I also worry about phishing attacks.
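Roughly like this (Apache 2.4 syntax; assumes AllowOverride lets .htaccess set these directives):

Code:
# top-level .htaccess: read-only verbs only
<LimitExcept GET HEAD>
    Require all denied
</LimitExcept>

# .htaccess in a submission folder: additionally allow POST
<LimitExcept GET HEAD POST>
    Require all denied
</LimitExcept>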
As we say in Dutch: "Voorkomen is beter dan genezen", which roughly translates to "prevention is better than a cure".
The saying I've heard is, an ounce of prevention is worth a ton of cure.
 
Can I find any application in FreeBSD for desktop users that detects this kind of risk and helps them to solve the problem?
 
There are a few vulnerability scanners in the ports tree, if that's what you are looking for? But most of them require at least some level of expertise to make sense of the results. And I'm not sure if any one of them would be able to find the hole that got abused in this case.
 
Can I find any application in FreeBSD for desktop users that detects this kind of risk and helps them to solve the problem?

The original topic concerned a web application, so FreeBSD desktop applications don't apply in this case. All applications have vulnerabilities, obviously, but desktop applications are subject to different attack vectors because they are not exposed to the Internet in the same way a web application is, and they are designed very differently from web applications.
 
As Sevendogsbsd already said, this issue concerns (the design of) web applications served by a webserver.
There is software around that analyzes web applications for such risks, but this depends on the type of application (i.e. which programming language or which framework is being used).
 