If everyone remembers the late Kevin Mitnick, his 'hacking' largely amounted to social engineering: getting actual people to divulge passwords and other sensitive information, which he then used to gain unauthorized access to various servers.
And now consider this: what if you could pull that exact same 'social engineering' on an AI chatbot to trick it into divulging potentially dangerous information? After all, in the early days of ChatGPT, someone did succeed at prompting the chatbot into self-identifying as a woman (I wish I could find a link to that!). Since then, ChatGPT has gotten better at responding to people and finding the information they're after. If you want the latest on whether or not a broken water main on the Big Island got fixed - yep, you can find that. It has become much easier to spend a few rounds explaining what you want and have ChatGPT, or even Google, dig up the information for you.
And now - the downside of that:
It's not impossible to get a chatbot to dig up information that can, in the wrong hands, be devastating. The example I'll bring up here is the Nuclear Boy Scout (heard about him on these very Forums, BTW). The guy had a fantastic understanding of chemistry, yet managed to build a dangerous device that turned his family's backyard into a Superfund cleanup site - a veritable environmental disaster that made the news.
Let's consider how he gained the knowledge to do that: he read books and bought up consumer commodities like batteries and bottles of propane. There was a method to his madness - the guy was smart enough to think things through and to know exactly why he needed exactly that stuff.
Just getting the books and learning the steps was a tedious enough process. And that was back in the days before AI chatbots began to appear in the wild on the Internet. With those chatbots, information acquisition is accelerated: the chatbot will happily and mindlessly dig up and process that info for you, putting a replication of the NBS disaster within reach of the moronic masses. After all, such a chatbot is available over the Internet to just about anyone, and it has no ability to think critically about the requests it receives.
Doing a bit more research, I discovered that this fiasco may be Musk's 'fault': he's the one who came up with the Grok chatbot. Even Reddit users are saying that Grok is totally unhinged... But still... how do you program critical thinking ability into a computer???

And what's next? Should we expect AI to scam everyone on the Internet out of all the money they have? Or maybe fire everyone AND their boss from their jobs?
More importantly, how do you teach someone to be sensible and to think critically about what's safe and what's not? The chatbots do provide 'useful' info, but in the end, they're still moronic lumps of poisonous metals and plastic.
What's the relation to FreeBSD, you may ask? Well, virtually all of those unhinged chatbots are accessible with www/firefox ...