I am not an AI hater; I don't look at it from just a single point of view. AI haters are AI haters. It is an almost irrational stance, and I know I will not convince them.
Today's AIs/LLMs can be seen as very large collections of expertise.
If you look at it only this way, there is nothing wrong with it; it even seems like a good thing to have expertise at hand so that anybody can produce anything quickly with it.
But that's not the whole picture.
You can look at it from the point of view of a programming expert being helped by a tool to do less of the tedious, boring stuff and to focus more on the higher, more abstract tasks: experts use such tools with expertise.
But that's not how our world works.
Since those tools can also be used by non-experts to produce stuff that merely looks like it was made by experts, that is what happens. So production increases, massively: by the experts, but additionally by the non-experts, while only experts can judge the results. The non-experts do not see this; they can neither judge the results the tool provides nor judge whether those results need judging at all. It all looks so good, if not perfect. To non-experts. Experts look at the result with their expertise, and see the flaws.
A non-artist is overwhelmed by the pictures it produces, a non-programmer is overwhelmed by the code it produces, a non-computer-expert is overwhelmed by the config files it produces, while the experts quickly see the garbage.
So it still needs experts to check the results, or garbage is released into the wild.
The latter happens without AI too, of course, because experts also make mistakes. But as I just said, with AI not only does the amount of stuff being produced increase massively; since non-experts now also produce stuff unchecked, the amount of garbage increases massively as well.
While at the same time the capacity to check it with expertise does not. On the contrary: it is even shrinking.
Almost no pupil memorizes the multiplication table up to 9*9 anymore, because there are pocket calculators. As a result, the majority lacks the most fundamental arithmetic, incapable even of judging a calculator's result. Calculators are not for doing 3 times 7, but for doing 345 times 789. Yet when you cannot do 3 times 7 in your head (which is far faster than using a calculator anyway), you cannot judge whether the calculator's result is correct.
Of course you can trust today's calculators to calculate correctly (enough), but you may have made a typo while using one and not recognized it, because you cannot judge. So the tool originally meant to help people on long walks became a crutch that makes people stop walking at all.
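To make that concrete, here is a minimal sketch (in Python; the function names and the factor-of-two tolerance are my own illustrative choices, not anything built into a real calculator) of the sanity check that mental arithmetic performs: round each factor to one significant digit, multiply in your head, and distrust any result that is wildly off.

```python
# Minimal sketch of the mental plausibility check described above.
# All names and the tolerance are illustrative assumptions.

def rough_estimate(a: int, b: int) -> int:
    """The '3 times 7' skill applied to 345 times 789:
    round each factor to one significant digit, then multiply."""
    def round_to_one_digit(n: int) -> int:
        magnitude = len(str(abs(n))) - 1
        return round(n, -magnitude)
    return round_to_one_digit(a) * round_to_one_digit(b)

def plausible(a: int, b: int, claimed: int) -> bool:
    """Accept the claimed product only if it lies within a factor
    of two of the rough estimate; a typo usually lands far outside."""
    estimate = rough_estimate(a, b)
    return estimate / 2 <= claimed <= estimate * 2

print(rough_estimate(345, 789))     # 300 * 800 = 240000
print(plausible(345, 789, 272205))  # True: the correct product
print(plausible(345, 789, 26910))   # False: 345 * 78, a dropped digit
```

Whether or not you would ever phrase it as code, that rough check is exactly what disappears when nobody can do 3 times 7 in their head anymore.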
That is what you also need to see: look not only at the tool and at how it is intended to be used, but at how it is actually used, and at the results that follow.
You, and many, many others, learned programming long before today's LLMs. You learned most of it not by just taking a 20-lesson class on some language's syntax, but by the experience of actually writing real programs running in the real world. Now tell somebody to start learning programming today to gain the same expertise you did. How? And above all, why? What for? The machine already knows it all.
So expertise is going to be lost.
Which wouldn't be a real problem, as long as the expertise stayed preserved in the machines.
But that's not the case with today's AI.
Today's AIs cannot actually think or understand by themselves. They cannot judge, cannot know by themselves what is right. To them, everything they gather has the same value. If the machine collects something somebody wrote on the internet saying the sky was green, the probability of delivering green as the sky's color increases, unless somebody tells the machine that this is wrong. It doesn't matter whether it was meant as a joke or was part of some insane, weird conspiracy theory: the machine cannot judge by itself. It's a primitive example, one everybody recognizes immediately, even though many blindly trust a computer's output no matter how obviously wrong it is. But think of all the mistakes that are never corrected: either for lack of expertise, or for lack of capacity to correct all the errors.
When another AI gathers that uncorrected wrong info, then, as in real life, both see that they have the same info, which seems to prove it must be correct: an even higher probability of wrong outcomes.
You can see how AIs become dumber over time if errors are not corrected. And it doesn't even take wrong information; a lack of information also produces errors. Nobody and nothing can know everything. And as every engineer knows: errors produce follow-on errors. Unless they are corrected, for which they first need to be found.
That's inherent to the system.
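To make the feedback loop concrete, here is a toy model (in Python; the error and review rates are invented for illustration, not measurements of any real system) of what happens when each generation of models learns from the previous generation's partly wrong output and experts can only correct a fraction of the errors in circulation.

```python
# Toy model of the error feedback loop described above.
# All rates are invented assumptions, not measured values.

def next_wrong_share(wrong: float, new_error_rate: float, review_rate: float) -> float:
    """One model 'generation': experts correct a fraction of the
    circulating errors, then the model adds fresh ones on top."""
    wrong *= 1 - review_rate               # expert correction
    wrong += (1 - wrong) * new_error_rate  # newly introduced errors
    return wrong

for review_rate in (0.5, 0.0):             # ample vs. absent expertise
    wrong = 0.02                           # start with 2% garbage
    for generation in range(20):
        wrong = next_wrong_share(wrong, new_error_rate=0.01, review_rate=review_rate)
    print(f"review rate {review_rate:.0%}: {wrong:.1%} wrong after 20 generations")
```

With expert review, the wrong share settles at a low equilibrium; with no review, it only ever grows, creeping toward 100%. That is the dilemma in two lines of arithmetic.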
Now compute 1 + 1:
Expertise is lost because people stop learning things, since the machines provide that knowledge. And over time the machines lose the expertise stored in them, when there is no human expertise left to maintain it.
Do you see the dilemma?
I am not saying: "Stop and kill all AI!" That's the same stupid BS as saying: "We need AI everywhere!"
All I am saying is: we need to preserve some AI-free spaces.