The release of GPT-4 was meant not only to improve the functionality of ChatGPT, but also to make the model less likely to produce potentially harmful output. Europol workshops involving subject matter experts from across Europol's array of expertise identified a diverse range of criminal use cases in GPT-3.5. A subsequent check of GPT-4, however, showed that all of them still worked. In some cases, the potentially harmful responses from GPT-4 were even more advanced.
ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes. Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.
ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style. Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance to promote a fraudulent investment offer.
To date, these types of deceptive communications have been something criminals would have to produce on their own. In the case of mass-produced campaigns, targets of these types of crime would often be able to identify the inauthentic nature of a message due to obvious spelling or grammar mistakes or its vague or inaccurate content. With the help of large language models, these types of phishing and online fraud can be created faster, much more authentically, and at significantly increased scale.
With the current version of ChatGPT it is already possible to create basic tools for a variety of malicious purposes. Despite the tools being only basic (e.g. producing phishing pages or malicious VBA scripts), this provides a start for cybercrime, as it enables someone without technical knowledge to exploit an attack vector on a victim's system.
This type of automated code generation is particularly useful for those criminal actors with little to no knowledge of coding and development. Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing.
GPT-4 has already made improvements over its previous versions and can, as a result, provide even more effective assistance for cybercriminal purposes. The newer model is better at understanding the context of the code, as well as at correcting error messages and fixing programming mistakes. For a potential criminal with little technical knowledge, this is an invaluable resource.
cherrypicked from:
Tech Watch Flash - The Impact of Large Language Models on Law Enforcement, EUROPOL