To me, AI needs to be seen in two different lights:
The scientific approach: an additional, different way to program machines.
Scientists ask:
"How does it work? Why is it so? Can it be done?"
Being scientific means seeing the pros
and cons, trying to get a complete picture,
being reasonable,
distinguishing what can be delivered from what cannot,
searching out the downsides (every real thing has some),
and respecting them.
An engineer also asks:
"Does it make sense? Is it useful?
Are there other, better, easier, quicker ways to achieve the same result at lower cost?"
A salesman only asks one question, and one question only:
"Can it be sold?"
Selling means pros only,
and no cons at all:
"It will conveniently solve all our problems without any effort.
It costs nothing, has absolutely no downside at all,
will make us all stinking rich,
and let us live a better life in a better world!
So there is absolutely no reason not to do it massively, right now.
Anybody who mentions even the slightest objection is just an old fart
who wants us all to stay in the stone age for all eternity!"
If you're not an idiot, and older than 12 years,
alarm bells should ring every time someone talks only about pros and never mentions any downsides.
Because then something is being sold: someone wants (your) money.
Sticking point:
Overlooked or ignored downsides are the cause of future problems.
("I earn the money now. Future generations may clean up the mess." [Technikfolgenbewertung, German for "technology impact assessment"])
We have already had this experience several times:
fossil fuels, artificial fertilizer, asbestos, cigarettes, pesticides, nuclear power, genetic engineering, the internet, electric cars, ...
(The last two may not yet be at a stage where their downsides are allowed to be fully seen.)
Why don't we learn?
Because we are (too) lazy ("comfortable" aka decadent),
naive,
and greedy.
What sells best, where does the most profit lie?
- things people are addicted to,
- the feeling of getting more comfort ("celebrating laziness"),
- and new magic.
AI is all of that.
Our society does not merely depend on computers and the internet; it is addicted to them.
So the other side of science is the media hype that always happens when new magic appears.
Or, to be more precise:
when something from the world of science reaches the ears of salesmen, when they scent a chance for a deal,
science becomes magic.
Magic means not (fully) understood,
neither capable nor willing to see the whole picture,
but promising to solve (all) problems effortlessly, without any downside.
New magic sells best.
It's even better if the thing is complex and impenetrable.
When something is magic, the lack of understanding becomes proof in itself.
And you may also give it a touch of religion:
anybody who objects is a heretic, who shall not be heard, or must be busted.
AI is magic.
Anything can be done with it.
Effortless.
Too lazy to learn programming? AI will do it for you.
Complex to design something? AI will do it for you.
Too lazy to read a book? AI can do it for you.
Too lazy to write something? AI does it for you.
All other drivers suck but you? Let the cars be driven by AI!
...
Too strenuous to learn FreeBSD? AI.
Have you ever considered what will become of you
when everything is done by AI?
What use will you be then?
Of course, then you will finally have time to aim for a higher target.
Which one?
Reading books about philosophy?
What for, then? And why not now?
What will be your purpose in life then?
Or will that question also be answered by AI one day?
AI promises comfort.
Anybody may (fully) use and program computers without needing specialists anymore.
We need neither specialists nor programmers anymore.
(Instead we exploit legions of trained but unskilled, and therefore underpaid, people from so-called third-world countries to train the AIs. (see ChatGPT))
What do we do with all the fired programmers, then?
As always?
Blame them, tell them it's their own fault for not learning something useful?
But that's exactly what they did:
they learned something they were told would be the future.
Force them into underqualified jobs, e.g. taxi driver?
When all cars are driven by AI, too, we will run out of arguments.
Maybe ask AI to solve that problem for us, too?
That will be comfortable:
no more thinking, and no more work.
(By the way, psychologists have known for decades that humans who do nothing useful become sick and depressed.)
Very comfy!
Thanks to my knowledge as an engineer with a bit of scientific thinking,
and some life experience,
I don't believe this will come, because it will not fully work as is currently promised.
But for a capitalist economy based on stock corporations, that's even better.
Because things that do not really work can be sold several times to a society that naively believes in magic ("modern technology").
"The next version will get closer to the target."
The point is, you need a new target before the majority finds out they have been mocked, because the target was unreachable from the start.
Downside:
an enormous waste of resources of every kind (ore, energy, people, skills, ...).
But as long as there are no downsides, there is no problem.
Only solutions.
So a lot of damage will be done again before this is understood.
And that frightens me.
For example, the additional effort it takes to distinguish AI crap from genuine work.
And facing the fact
that a majority may already be so stupid
that they simply don't care anymore.
Now I'm off to my kitchen to prepare lunch.
Anybody who tries to steal this creative, productive, and (to me) satisfying process from me,
e.g. by putting any kind of robot into my kitchen
to prepare food instead of me, or by trying to feed me "convenience" food and telling me it's better than anything I can cook
(which may be true for many self-declared home cooks, but not for me),
based on some ragtag rubbish picked up from the internet,
will learn
that not only programmers are capable of hacking,
and that hacking does not necessarily mean manipulating source code only.