AI's next challenge: Social engineering hacks.

If anyone remembers the late Kevin Mitnick, his 'hacking' amounted to doing a bit of social engineering to get actual people to divulge passwords and other sensitive information, which he later used to gain unauthorized access to various servers.

And now consider this: what if you could pull that exact same 'social engineering' trick on an AI chatbot to get it to divulge potentially dangerous information? After all, in the early days of ChatGPT, someone succeeded at feeding it prompts that got the chatbot to self-identify as a woman. (I wish I could find a link to that!) Since then, ChatGPT has gotten better at reacting to people and finding the information they are after. If you want the latest on whether or not a broken water main got fixed on the Big Island - yep, you can find that. It has become much easier to spend a few rounds explaining what you want and have ChatGPT, or even Google, dig up the information for you.

And now - the downside of that:

It's not impossible to get a chatbot to dig up information that can, in the wrong hands, be devastating. An example I will bring up here is the Nuclear Boy Scout (I heard about him here on these Forums, BTW). The guy had a fantastic understanding of chemistry, yet he managed to build a dangerous device that turned his family's backyard into a Superfund cleanup site - a veritable environmental disaster that made the news.

Let's consider how he gained the knowledge to do that: He was reading books and buying up consumer commodities like batteries and bottles of propane. There was a method to his madness. The guy was smart enough to think things through and to know exactly why he needed exactly that stuff.

Just getting the books and learning the steps was a tedious enough process. And that was back in the days before AI chatbots began to appear in the wild on the Internet. With those chatbots, the process of information acquisition is accelerated: the chatbot will happily and mindlessly dig up and process that info for you, making the replication of the NBS disaster available to the moronic masses. There's a chatbot available over the Internet to just about anyone, and it does not have the ability to think critically about the requests it receives.

Doing a bit more research, I discovered that this fiasco may be Musk's 'fault'. He's the one who came up with the Grok chatbot, and even Reddit users are saying that Grok is totally unhinged... But still, how do you program critical thinking ability into a computer??? 😒🤔

And what's next? Should we expect AI to scam everyone on the Internet out of all the money they have? Or maybe fire everyone AND their boss from their jobs?

More importantly, how do you teach someone to be sensible and to think critically about what's even safe and what's not? The chatbots do provide 'useful' info, but in the end, they're still moronic lumps of poisonous metals and plastic.

What relation to FreeBSD, you may ask? Well, virtually all of those unhinged chatbots are accessible with www/firefox ...
 
Sometimes, you have to ask how much of the fault lies with the chatbot, and how much lies with the user... Sometimes, it's the user who's so unhinged that a technically correct and sane reply gets interpreted in an unpredictable manner.

Human technical experts, like users on these very Forums, face that all the time. We give a technically correct and sane answer based on what we know, only to have the other person get frustrated and unload that frustration on us. Like saying that in FreeBSD, 32-bit support is Tier 2: while it's technically possible to run 32-bit programs on FreeBSD, the options are limited to specific versions of FreeBSD and specific programs.

Or we see that the 'technically correct, informative' answer we (human users on the FreeBSD Forums) gave gets people to do something that is actually not a great idea. As an example, we're kind of reluctant to talk about honeypots, or to offer any directions on how to set them up. That's because we kind of know just how dangerous that can be in the hands of people who don't think things through. It's the digital equivalent of teaching someone how to 3D-print a gun when it's obvious to us that the other person knows nothing about gun safety and the related local laws. Can ChatGPT be trained to make that kind of assessment and react appropriately, especially when we're struggling to teach that kind of stuff to our own kids?

It may just be an impossible task to train an AI to do that kind of "thinking". Is ChatGPT gonna be trained to alert local law enforcement when it gets too many queries about learning to 3D-print guns? OK, on the computer side of things, you can analyze firewall logs and discover that most of your traffic is AI scraper bots and port scanners. What's the appropriate reaction? That is not always clear-cut. Can ChatGPT think about that, and maybe consider ideas other than complaining to the cops about excessive gun-related queries? I mean, even people are hard to train; will ChatGPT be any better?
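
For what it's worth, here's a rough Python sketch of the kind of firewall log summary I have in mind. The log file name, the tcpdump conversion step, and the exact line format are all assumptions on my part - adjust them to whatever your own pflog output actually looks like. It just counts hits per source address and how many distinct destination ports each one touched, which is a crude way to tell scraper bots (lots of hits, few ports) from port scanners (lots of ports).

Code:
#!/usr/bin/env python3
# Rough sketch: summarize blocked connections from a pf log to spot
# scraper bots and port scanners. Assumes the binary pflog was already
# converted to text, e.g.:  tcpdump -n -ttt -r /var/log/pflog > pflog.txt
# The regex below matches the usual "IP a.b.c.d.port > w.x.y.z.port" part
# of a tcpdump line; adjust it if your output differs.

import re
import sys
from collections import Counter, defaultdict

LINE_RE = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.(\d+) > (\d+\.\d+\.\d+\.\d+)\.(\d+)")

hits = Counter()          # total log lines per source address
ports = defaultdict(set)  # distinct destination ports per source address

path = sys.argv[1] if len(sys.argv) > 1 else "pflog.txt"
with open(path) as f:
    for line in f:
        m = LINE_RE.search(line)
        if not m:
            continue
        src, _sport, _dst, dport = m.groups()
        hits[src] += 1
        ports[src].add(dport)

print(f"{'source':<18} {'hits':>6} {'dst ports':>9}")
for src, n in hits.most_common(20):
    # Many distinct destination ports from one source usually means a scanner;
    # many hits on just 80/443 is more likely a scraper bot.
    print(f"{src:<18} {n:>6} {len(ports[src]):>9}")

Even with a summary like that in hand, deciding what to actually do about it is the part that takes judgment - which is exactly my point.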

You can't expect ChatGPT to assume the role of a parent / government and keep protecting morons from themselves. It's hard enough trying to program appropriate responses about its own limitations into ChatGPT.
 
Well, well... it seems like Google's AI read the above posts, and it has a surprisingly adequate summary of my points... I don't want to reproduce here what the Google query AI nuclear boy scout astyle returned at me, simply because it's kind of long...

Google did go off on a tangent about nuclear weapons, something I did not mention. And therein lies the danger of the Broken Telephone game, as it applies to information found on the Internet. You can't expect AI to think critically. Well, you can't exactly expect humans to think critically, either. Who gets to decide if a point is important and merits a response? What kind of process is used to decide that? What would trigger the 'nonsense' mental antenna? What's the appropriate reaction to that?

In simplest terms, the Broken Telephone game would go like this: Cat -> Furry animal -> pet -> Dog. Right away, the communication breaks down, and we start talking about very different things. If in my very next step, I decide to buy some tuna-flavored cat food, that would trigger a 'nonsense' mental antenna in the other person, because the other person thinks I'm talking about a dog, not a cat.

Google did echo back my point about human overreliance on AI for decision-making. Sometimes, it is funny, like in the situation Phishfry pointed out:
AI Agents can't even take drive-thru orders right.


When this bubble bursts, it is going to take the market with it.
Sometimes, it's not really funny, like in my example about excessive queries about 3D-printed guns. Fortunately, 3D printers that can work with metal are pretty expensive - a recent price I saw was half a million USD. And how did I even find that out? A small-scale manufacturer of custom bicycle parts was crowing about the very fact that they make their nuts and bolts on newly developed equipment that makes the job easy.
 