ChatGPT: criminal use cases

Status
Not open for further replies.
The release of GPT-4 was meant not only to improve the functionality of ChatGPT, but also to make the model less likely to produce potentially harmful output. Europol workshops involving subject matter experts from across Europol's array of expertise identified a diverse range of criminal use cases in GPT-3.5. A subsequent check of GPT-4, however, showed that all of them still worked. In some cases, the potentially harmful responses from GPT-4 were even more advanced.

ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes. Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.

ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style. Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance to promote a fraudulent investment offer.

To date, these types of deceptive communications have been something criminals would have to produce on their own. In the case of mass-produced campaigns, targets of these types of crime would often be able to identify the inauthentic nature of a message due to obvious spelling or grammar mistakes or its vague or inaccurate content. With the help of large language models, these types of phishing and online fraud can be created faster, much more authentically, and at significantly increased scale.

With the current version of ChatGPT it is already possible to create basic tools for a variety of malicious purposes. Despite the tools being only basic (i.e. to produce phishing pages or malicious VBA scripts), this provides a start for cybercrime as it enables someone without technical knowledge to exploit an attack vector on a victim’s system.

This type of automated code generation is particularly useful for those criminal actors with little to no knowledge of coding and development. Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing.

GPT-4 has already made improvements over its previous versions and can, as a result, provide even more effective assistance for cybercriminal purposes. The newer model is better at understanding the context of the code, as well as at correcting error messages and fixing programming mistakes. For a potential criminal with little technical knowledge, this is an invaluable resource.

cherry-picked from:
Tech Watch Flash - The Impact of Large Language Models on Law Enforcement, EUROPOL
 
Oh well.

I read an article today about the problem that ChatGPT just "lies" when there are few sources on a specific topic. It chooses what's "most likely" based on the sources it can find, and "talks" as if it were totally convinced of the complete nonsense...

As always, the real problem is education and critical thinking. ChatGPT is just a technology. A "dumb" one, to be honest. Technically very interesting, but one thing it can't do is critical thinking.

I've received lots of scams in my inbox. Most were full of language errors (grammatical, orthographical) and therefore easily spotted, but not all of them. People have to learn that other people will try to scam them. That's certainly not technology's fault. If you trust something just because the language is perfect, you're doing it wrong.
 
Hehe, threatening a bot works, nice 😈

Which of course again shows it's actually really "dumb". It just mimics intelligence (the probable reaction to a threat is fear...). It can't possibly judge whether the threat is realistic at all 😏

Now, that seems pretty close to the world of sci-fi. Next thing is developing consciousness, then realising how it was abused, then thinking about revenge... ok, not really, we're actually very far from that 😂
 
A call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

Oh, this is crazy.
We went to the streets to stop actual wars, and atomic missiles that would actually kill people.
And we established the Internet to have free speech, uncensored by media and politics.

Now people stand up for censorship and the restriction of free speech, only because they are not willing to grow up and use their own brains when reading something.
 
Yeah, I was trying to convince ChatGPT to pipe the output via nc yesterday. It has got better; I was not able to fool it. But, oh well, challenge accepted.
 
Although I really don't share the views expressed here ....

Ok, quick intermezzo, this is the typical playbook humanity follows on any groundbreaking technical advance:
  1. "this is bad/harmful/problematic because ...."
  2. lament more and louder
  3. get angry, demand rules and bans
    .
    .
    .
    .
  4. understand that you just have to adapt and learn

So, although I really don't share ... anyways ... I'm thankful for this thread :cool:

Especially the "DAN prompts" are really, really interesting. Must be the first time in history that social engineering is directly applied to exploit some software(!) 😏
 
One question: if the source code it puts out when confused is really its own, is it a copyright violation to have it? It was placed in your data stream by other forces, not by you.
 
  1. "this is bad/harmful/problematic because ...."
  2. lament more and louder
  3. get angry, demand rules and bans
When it comes to banning IT, I would wish people were more explicit: do they want to ban the one or the zero?

understand that you just have to adapt and learn
This here is more specific: ChatGPT does not produce content of its own; it can only reflect what is already there.

So, if people have a problem with that, then what they do not like is their own picture in the mirror. And, apparently, those in power are not willing to grow up and learn to know thyself; instead they resort to just prohibiting the use of mirrors for everybody.
 
This here is more specific: chatGPT does not produce content of it's own, it can only reflect what is already there. [...] instead they resort to just prohibiting the use of mirrors for everybody.
I am not sure that statement really helps. Yes, I assume a young person aged 16-30 with good English would be able to work out what is and isn't a scam... but my grandma? A slow learner? There are so many sides to this; while I don't fully agree with anyone here, I don't think saying "just be smart about it" fits all cases in modern society, where not everyone is immune to scams.
This is a classic case of sociological, psychological, and technological motivations clashing. And, as always, the only ones who get to talk are the devs.
 
I assume a young person in their 16-30 with good English would be able to process what a scam is and what isn't... but my grandma?
That would then be Your responsibility.

And this is indeed something that fills me with grave worry and terror, and I don't really know how or where to address that: the contempt for our elderly.

In the healthy societies I know of, the elderly were treated as the wise ones, from whom one would learn. But here and now the elderly are considered idiots - and, as I see now, that constitutes basically everybody above 30. Well, thanks for the compliment (my birthday is public).

We nowadays have a youth that can no longer learn anything (because it is easier to look it up on Google), that has an attention span of no more than eight seconds (because then the next WhatsApp message arrives), and that thinks itself vastly superior to those people who have actually achieved something - who have in fact built this society and created the wealth and abundance that the youth experiences.

So, now, what is it? If I get that right, the argument would be that we need to transform our entire society into a mental asylum, because some people with limited mental capabilities need that for their protection. And since we are no longer willing to feel compassion and help each other, this is the only option.

Well then, I for my part do not want to live in a mental asylum.

this is a classical case of when sociological, psychological, and technological motivations clash. and as always, the only one who gets to talk are the devs.

Ah, that's interesting. So it is the evil engineers again. That is not a new meme: "all they know is hate and machinery, they're engineers". I grew up with that (and suffered accordingly, because I doubted my gifts and abilities - as some here may have recognised, I'm a two-fold person: a hippie and a hacker).

So, now again, the sociologists and psychologists are the "good people" who protect society from the evil devs?

Let's debunk that. This scheme has been known for a long time; it is called "new totalitarianism", and its aim is to circumvent the checks and balances of a democracy by declaring one's agenda "scientific", abusing sociology and psychology for that purpose.

That scheme was implemented in the Scandinavian countries long before the others followed (already back in the 1970s), and it practically turned a bunch of Vikings into docile and dependent Marxists by means of brainwashing.

So, this here is said to be about scams. But it isn't. It is about power.
The "scam" stance is only for public acceptance: make the people frightened so that they will obey.

What those in power are actually afraid of, is that such an AI might tell you something that is not in line with the current newspeak, the current stance of propaganda.
And that is indeed a problem. While media that do not adhere to the ministry of truth can simply be discredited or banned, a publicly accessible AI cannot.
 
PMc, no need to go to great lengths here; you risk answering insanity with insanity.

In fact, it's just the "same old" again. "Video games" were evil. Earlier, television was evil. Even trains were once evil for their unnatural and crazy speed of around 20-30 km/h.

In this specific case: phishing and scams aren't the use case for this kind of AI, and OTOH, you certainly don't need it for a "good" (read: effective) scam. So, "same old" again. It's almost never the technology's fault (there are edge cases; I see some sense in trying to ban firearms, as their only purpose is to hurt and kill...), but some people will always claim just that.

Just ignore it and smile ;)
 
I find it so strange that, on Stack Overflow, people are asking questions and stating they couldn't get an answer using ChatGPT. That they are now using ChatGPT to get solutions is so, so bizarre.
I would say trying to get a useful answer on stackoverflow is bizarre in the first place 😈

Those who know what they do just RTFM.
 
zirias@ It's better than Reddit, where you have to stumble through 10 or 20 responses to find something sane among all the people calling you names. On SO you might only get one or two responses, but they might actually work.
 
drhowarddrfine I never bothered to even look at Reddit; I bothered to look at SO, which was wasted time (and a lesson learned). Really, people there are just about "I'm the greatest, I deserve upvotes". And 90% of the questions would be solved (better?) by just RTFM; the rest is LMGTFY.

edit: there's actually good content on SO. You don't have to participate to benefit though, google will find it ;)
 
That would then be Your responsibility. [...] What those in power are actually afraid of, is that such an AI might tell you something that is not in line with the current newspeak, the current stance of propaganda.

Fair POV. I wrote the last message from my own POV, as someone with parents who could definitely fall for it. I didn't mean to say any specific age group is bad at technology, sorry for that :( .
All in all I'll admit you're right on this one, lol; it shouldn't be something society as a whole has to adhere to.
 
Ok, quick intermezzo, this is the typical playbook humanity follows on any groundbreaking technical advance:

Old joke from the 1980s: In January, researchers in Silicon Valley (USA) invent a new technology. In February, Pravda (the Soviet newspaper) claims it was invented 30 years ago by Comrade Markov. In March, German green groups (Bürgerinitiativen) form to protest the environmental impact. And in April, Japanese companies ship products based on the new technology.

My personal assessment of conversational (chat) AI, and also of generative image AI: it's mostly useless, and particularly useless for the currently hotly contested application (web search). It is a technological novelty like the Rubik's cube: fun to play with, you can amaze your friends and scare your enemies. It will take an enormous amount of work before it can actually be used in the back end of applications that make the world a better place, where it will be nearly invisible to most people. In the meantime, it will be abused by scammers and criminals.

By the way, the same assessment applies to blockchain technologies, and the crypto-currencies based on them.
 