I've been wondering all day: is optically scanning bottles really Machine Learning, or just Machine Comparison?
There is no learning involved.
One may call the training of an AI 'learning', but you're right insofar as there is no thinking involved.
If you break it down to machine level, after all, it's just a giant heap of binary 'if...else' decisions.
Just because a mechanism becomes extremely large and complex doesn't change its core nature.
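To make that point concrete with a toy sketch (hypothetical thresholds and object classes, nothing to do with any real scanner): once the 'learning' phase is over, classifying an object can boil down to nothing but fixed comparisons against stored numbers. No learning happens at runtime, only branching:

```python
# Toy sketch with made-up thresholds: after "training", classification
# is just fixed comparisons - pure if/else, no thinking involved.
def classify(height_cm: float, width_cm: float) -> str:
    # These thresholds would have been fitted from examples beforehand;
    # at runtime the machine only compares and branches.
    if height_cm > 15.0:
        if width_cm < 10.0:
            return "bottle"
        else:
            return "carton"
    else:
        return "can"

print(classify(25.0, 7.0))  # tall, slim object -> "bottle"
```

Real neural networks add arithmetic (multiply and add) before the comparisons, but the point stands: the runtime mechanism is deterministic evaluation, not learning.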
No need to tell you ML goes back almost to the beginning of computing, to the 1950s.
Already in the 1980s, computer-game opponents were based on some kind of AI.
What's really revolutionary is the size and power of the machines now available for it.
As I see it: the large things, like ChatGPT, were started in the first place out of pure curiosity. 'Just see what we get when we build a really big machine, with lots of power.' Not much thinking ahead. Of course not. It was purely scientific. But to get money for even bigger machines, you need to take business people on board. Plus, amazing results always attract investors anyway. But business people don't just spend money, they must produce revenue. So it must be sold, the sooner and the larger the better. And if something new is to be sold, there are only ever advantages, no second side of the medal, no flaws, no downsides, no disadvantages at all.
As a result, several kinds and levels of technology, all simply summarized as 'AI', were placed on the market too soon. There's lots of confusion, because things are wildly mixed up, especially excessive promises, dreams and hopes. (It wasn't the shipyard that called the Titanic 'unsinkable', it was the newspapers that exaggerated the new system of separate compartments.) Large parts of society cannot deal with it. Its results aren't reliable or trustworthy enough for it to be more than a toy for most people (which, for many, is fully sufficient).
Because - and that's the part that did not change - as with every other computer language - and it doesn't matter if we talk assembler, Lisp, C, C++, Java, Rust, ... or LLM prompts - the core problem always stays the same:
Until someone, or something, can read minds - and I hope that never happens - one has to learn how to talk to the computer properly so it produces the results one actually wants.
Which always requires that one is crystal clear about what the machine shall do in the first place. Every programmer knows that. No need to tell you. But every non-programmer dreams of using machines without thinking. That's where the money is. The results we see every day, thousands of times, everywhere around us.
As some have already pointed out here:
If something is either started on the wrong foot, or misused for something it wasn't originally meant for, you cannot correct that with regulations, standards, patches, updates, extensions, features... - that only bloats and complicates it, making it even less useful.
But the good side is that this will follow nature's course of evolution. Sooner or later it dies out - like the dinosaurs, which grew even bigger than 'too big to fail' - and makes way for new things.
