Finding ways to crack AI-generated code?
Nah, wasted time. AI can do this, too. And it has already been done.
We also already have the situation that AI produces new kinds of bugs and vulnerabilities.
As I said in another post:
AI increases production by orders of magnitude, but quality does not increase with it.
For an economy and society that doesn't care about product quality, that finds it a neat idea to put all people out of work while increasing production even more at the same time, no matter that we already drown in too much useless crap, and that never asks who is supposed to buy all that crap when nobody has a job anymore, AI is perfect.
Because we also don't need to think for ourselves anymore; AI does that for us, too. Look at today's schools and universities! The teacher asks for a summary of a text. That was never because the stupid teacher needed something explained to him by the clever students; rather, the students are supposed to read a book, learn from it, and then write a text of their own about it, to think and to show whether they grasped it.

Today, schools and universities are no longer about learning. You get a list of things to check off in order to receive a diploma. So the students ask AI to produce the text for them instead, don't even read it, and hand it to the teacher. For the teacher, it's wasted time to even read this worthless junk; handing it back corrected and commented is equally pointless, because the students don't read that anymore either.
A few weeks ago, this actually happened at my wife's school:
Student: "...this book by Manfred Mann..."
Teacher: "Thomas Mann."
Student: "No. Manfred! ChatGPT said so."
Teacher: "I have the very book right here in my hands. Look!"
Student: "Can't be, because ChatGPT..."
True story. Actually happened.
People learned that whatever they see on a monitor must be true. When TV was introduced, they were told pictures cannot lie. Then later, when computers were introduced to the masses (well before the WWW, when computers were used as nothing more than complex calculators, not yet for consuming media), people were told that computers don't make mistakes, that they are always correct.
Now people massively lacking in media, computer, and math skills are confronted with touchscreen computers built on a system that works with probabilities, and is therefore not reliably correct.
Yeah, today's people have those tools freeing them from the minor, lower-level tasks, so they can focus on the higher-level tasks.
I wonder how that is supposed to work.
