AI lowers productivity

Interesting that after 110 views, not one person has come back and said that using an AI tool has dramatically improved their workflow. Still, I guess it's early days; they're working on it...
 
The reality is that these AI tools are based on previous work by others. Many are no more than a search tool. Take that for what it's worth, but I find that most results from online AI tools are wrong or obvious, and have been of no value to me.

Seconded. I always say there is no "AI" at all; it's just a database (a more complex one, but basically a database).
 
And there are people who think "the future is here" and put these AI "database" tools to work on complex tasks.
As long as they don't put them in charge of military weapons decisions, we're OK.
I'm a big fan of the Terminator movies (up to the fourth one, of course), and I think that if humanity comes to an end,
it will be because of human stupidity and not because of the "machines".
 
"there is no AI at all..", is only a database
Pattern search over databases, to be precise, and reconstructing new patterns from the patterns it finds.

AI tools are based on previous work by others.

If AI is not constantly fed new and corrected information, which again has to be produced by others, it becomes dumb.
One does not have to be an AI expert to see this principle at work.

The crucial point is:
It's meant to be an assistant. It assists you by removing the tedious, boring parts of your work, so you can concentrate on the creative brain work.
For that you still need to know how the work has to be done, and supervise your assistant.

But people aren't like that.

The best example is pocket calculators.
They are of no real use if you can't at least roughly estimate whether the result could be remotely correct.
That's why math teachers strongly opposed their introduction in middle-school classes in the 1980s:
they anticipated what would happen.
Society would unlearn math.
And that's what happened.
Nowadays most people aren't even capable of doing the simplest calculations without a calculator anymore.
(Which, by the way, is no longer a primary app on smartphones; in the default settings you have to dig deep to find it. Guess why.)

I once tutored a high-school student.
I asked her what three times four was.
She automatically grabbed for her calculator.
I said: "No. In your head."
No chance. Then she used her calculator and wanted to write down: '3.141592....'
I said: "That's wrong."
She: "No. It can't be. The calculator shows it, see?"
I said: "No. You pushed the wrong button. And I even know which one."
She: "So what?!" and wrote it down.
She simply didn't grasp the concept, and worse, she didn't care, yet wondered why she still failed math.

Another example is auto-correction and auto-completing text input.
Do people make more or fewer errors since we've had those?
In my youth it would have made the evening news if one of our national newspapers had a typo.
Today you may be lucky to find one article without any.
Is that despite or because of auto-spell?
I tend to say because.
I catch myself daily typing carelessly into those entry fields.
The computer will suggest some words anyway.

When I use my wife's iPhone to write a WhatsApp message, I get pissed off in no time, because this damned f#4in machine constantly butts into my texts. Instead of simply letting me type what I know I want to write, I spend 80% of my time undoing and correcting what this crap automatically throws in. It believes it knows better than I do what I'm thinking, and makes me forget what I wanted to write after the first five words.
This is crap!
Useless crap.
And the fact that the majority uses it is no proof that it's not crap;
it's just proof that the majority is stupid and doesn't care.

People are used to that concept from McDonald's.
You simply cannot order just a cheeseburger.
You also have to say "no, thank you" twenty times to all the stuff you don't want.
People think it's a kind of polite service.
To me it's an annoying waste of my time, in the hope that I'll be careless for a moment and buy more stuff.
Or become indifferent and say "yes" to everything.

Point is:
Our natural languages are the programming languages of our brains.
Fewer language skills, less intelligence.

With AI we now have the same thing again, even more far-reaching than we may yet understand.
As people get dumber, AIs are trained worse.
As AIs get dumber, and people rely on them, the people get dumber still.
A vicious circle.

Research on AI goes back to the 1950s: neural "networks" consisting of a single node, built from tubes and relays.
Now we have affordable computing power to do something with it, so even Johnny Everyman can get something out of it.
That's when the salesmen capture the ship.
They bring lots of money, playing the benefactors who altruistically sponsor science.
But at a certain point there has to be revenue.
The shit must be sold.
And it has to make more money than was spent.
It doesn't matter whether the technology or society is ready for it.
Sell it!
We'll see about fixing things later.
Game over.
 
I think it's okay as an alternate way to search if I don't know exactly what I'm looking for, and it can generate a first pass of human-language text to get a blank page started. I'm not sure it has been helpful overall for technical things, because it's usually confidently wrong about the topics I'm searching for (FreeBSD), since the internet is full of similar stuff I'm not searching for (Linux), and I end up wasting time on bad information. I also find it irritating to use, because the results are presented in English and it makes me feel like I'm talking to a compulsive liar.

Here's what it thinks about this thread. Blah.
 
about this thread. Blah.
AI produces mostly blabla.

If you ask: "Do you have to stop your car at a red traffic light?"
A human answers: "Yes."
AI answers: "A traffic light is... blablabla..., a car... blablabla..., traffic regulations blablabla... so I suggest it is recommended to slow your car by pressing the brake pedal until the car comes to a complete stop, until the traffic light turns green again..."

Presumably because AI learned "talking" by reading online newspapers?
 
When we're talking about LLMs, it's baffling how many (even intelligent) people "hold it wrong"...

It's just a language machine, literally - not a knowledge machine, not a reasoning machine, not an understanding machine. It wasn't trained to do any of these things except language processing. Whatever it produces that resembles knowledge, reasoning or understanding, is merely a collateral of language structure.
And yet look at how people use it and how it's marketed...

That said, there are still valid use cases. It helps people lacking language skills to formulate text, translate, or write simple programs. And I would love to have it as a search-engine result filter. Like searching for chocolate cake, and it says: "In the top 50 search results there were 24 recipes for chocolate cake, 17 shops with baking utensils, 13 diet blogs, and 5 websites of local bakeries. Which category are you interested in?" Somehow they always try to turn the LLM into the primary user interface for the search, and that's cumbersome, at least for me.

Personally I use Copilot for a few select use cases: config files, scripts, boilerplate code for simple APIs, documentation. For real programming I turn it off; it's just distracting.
 
Since AI (LLMs, really) learns from the rest of the internet and contributes to the internet, it stands to reason that it will increasingly learn from itself. Since it has no ideas of its own, it has to deteriorate and implode sooner or later. That it can't help where creativity is needed is the first step of the proof by induction. Anyway, more research is needed.
 
Writing papers in schools and universities, which are meant to reflect the student's own thoughts on a topic, has become pointless.

Students simply ask ChatGPT: "Write me a summary of..." or "Write my thesis about..."
They don't even read what is produced. They simply print it out and lay it on the tutor's, teacher's, or professor's desk.
Before ChatGPT, a teacher could see within seconds if it was just copy-pasted from Wikipedia: red pen, long line, try again.
Now they have to thumb through pages of shit some machine barfed out.

It's simply impossible to prevent students from doing it.
Many schools and universities (I read about Prague half a year ago) have simply dropped all written papers from education, because it makes absolutely no sense anymore whatsoever.
Papers have become completely pointless.
It's just a waste of the teaching staff's time to read what a machine has put together about their own field of expertise, sometimes from their own writings, while their students haven't taken even the slightest peek at it.

Of course Swiss schools are facing this problem too, thinking about what to do instead.

Last but not least, we need people capable of doing jobs, and for that they need to be educated.
While at the same time industry invents and sells more and more gadgets meant to avoid thinking and avoid learning.

A couple of months ago I read on the BBC that AI will cost half a billion(!) jobs in Europe and North America alone. (I think the NY Times and the Washington Post came up with similar numbers.)
That's almost our entire workforce.

But we are far away from that science-fiction fantasy becoming reality,
where everything is done by machines, and everybody can engage only in what interests them: hobbies.

To put it in a nutshell,
we have not yet answered the question:
"Who will pay for the cheeseburger and the beer at Star Trek's replicator,
if nobody has a job to earn any money anymore?"
But of course the company that paid for the replicator, its development, and its maintenance insists on revenue for it.

So not only can our education system not deal with it,
our society isn't mature enough for it either,
and especially not our economy.
This is a serious problem!
How can this work?

We have absolutely no idea whatsoever.
Only some rough ideas from what that white-bearded idealist wrote over a hundred and fifty years ago, which has been consistently and strictly demonized ever since, and several times proved doomed to fail.

Any idea what all these people shall do,
in a society built on the premise that people have to work?
No need to ask someone from Switzerland, the you-must-work country, even if you're already a millionaire.
Otherwise there is no real life, no really good medical care, and especially no pension at all.
10% paying social security for 90%?
Good idea?
When there is already constant grumbling because 95% have to pay for 5%?
You have to ask a Swiss guy?!
Hm?
Ideas?
Communism?
Of course not.
That we definitely ruled out.
So - what?!
Shall they all become taxi drivers?
Half a billion new taxi drivers, just when autonomous driving may arrive?
Also no...
"They all have to be more flexible and find a new job."
Yes, of course. But what?
Simply blaming the lazy bums for not trying hard enough to get back into the labor force worked in the '80s and '90s, when CNC automation and outsourcing to Asia cost approximately 20% of our jobs.
Now we're about to lose approximately 90% of our workforce.
Blaming will just not be enough anymore.
And you'd better have a reasonable idea before this becomes reality.

So don't tell me how great AI is.
We don't need praise for the benefits of new technologies.
What we need are ideas to solve the problems it causes!
Quickly!

One solution is to hope that ChatGPT (etc.) will die soon.
Or at least hope it is what the Washington Post wrote yesterday:
just a big bubble that is going to burst soon.
Then all this AI goes back to where it currently belongs:
behind the doors of computer scientists' laboratories.
But above all, out of our society.
At least until our society and our economy are capable of handling it.
At the moment they are not.

At the moment that's the only reasonable solution I can see.
Tell me I'm wrong, if I am. Please!
But don't praise only the benefits,
how great the world will be,
when nobody ever has to do any real work,
when everybody has lost their job,
until we have established communism, or something better.

Face the problems it causes and come with solutions, not praise!

Until then I stay with the topic's headline:
"AI lowers productivity"
because that's exactly what it does.
 
I am sorry.
I lost my temper yesterday evening.
I also attributed to noise things he or she didn't say.
Sorry.

If anyone praises only the pros and benefits, something is being sold.
Nothing brings only benefits.
But if you look at it only that way, you will run into the problems it also causes naive and unprepared.
noise did not do that.

But I gather he or she sees it from an engineer's point of view, sees only the assisting parts of the technology, and underestimates how people are going to misuse it and the consequences that will have.

A software engineer sees a tool assisting him with writing code (if at all; not only noise has pointed out that the produced code is mostly crap, many here are aware of that).
But a salesman (and those are in charge, making the decisions in companies) simply understands:
"Cool, a machine that produces code. So I need fewer programmers."
The engineer objects: "Careful! The code is bad. And you need someone..." Too late, he's already fired.

Which company cares about quality when costs can be reduced dramatically?
Especially as long as there are no regulations.

We already have it in many kinds of jobs:
graphic designers, web designers, support, ...
I've read several articles saying that middle and higher management in particular will suffer.
Summarized: every job relying on computers is endangered.

In particular, there is an additional vector of social impact.
We all learned that to get a good job and some job security, you need a good education.
Those are exactly the jobs it will hit hardest.

Then the whining about the lack of experts starts again.
But by then it's too late. The damage is done.

There are two ways to look at it:
It will work the way it's intended to.
Then we need to be really concerned, and we need solutions in society and the economy quickly.

Or it's more of a hyped bubble that will disappoint most expectations.
I'd like to believe that.
I believe, and hope, that AI is not quite working the way it's being sold at the moment.
Then the investors, the salesmen, and the AI promoters have to deal with each other, and not with society.

I also admit
I'm partially infected by the fear that comes up in people with any new technology whose implications cannot really be estimated. (I'm not 20 anymore, fascinated by anything new and frantically believing all new things bring only benefits for everybody. Too much life experience for that by now.)
But I also know humankind has not yet developed a way to make changes reasonably;
we still try to hold on to old things as long as possible, damming up time,
until revolution rushes over society like a broken dam, bringing lots of damage,
instead of patching, updating, upgrading, and adapting things reasonably and continuously.
 
Which company cares about quality, when costs can be reduced dramatically?
Especially as long as there are no regulations.
Even in the face of regulations...
Just a few examples: VW, Boeing, GM, the US Navy...
I read several articles about especially middle and higher management will suffer.
That is IMHO the lever to use. Only total idiots would drive forward things that will kill their own jobs. Or idealists.
 
Here we go again... ?
I'm starting to believe this typo is a joke I've been missing all along.

Yes. I do know that.
If we made an off-topic thread to collect all the counterproductive nonsense and wrong decisions made in industry out of idiotic greed, we would be on page ten within two days... :cool:

I wanted to put my last post into perspective, to put something hopeful, something positive, something constructive back into the discussion.
Otherwise it becomes harder every day to believe in the future.
Especially since you ought to keep some optimism, despite all the massive crap that happens daily all around.
You have children.
I could lean back and say: "After me, the flood."
 
It's just a language machine, literally - not a knowledge machine, not a reasoning machine, not an understanding machine. It wasn't trained to do any of these things except language processing. Whatever it produces that resembles knowledge, reasoning or understanding, is merely a collateral of language structure.
Agree. LLMs, as language machines, function excellently as tools for refining the phrasing of given content. Of course, they do not create new content or use reasoning.

Each task requires the appropriate tool, and LLMs are a specific type of tool designed for certain functions. They excel at language-processing tasks but are not intended to perform all types of work. Just as a circular saw is not suitable for driving nails into a wall, relying on an LLM for tasks beyond its capabilities is impractical.
 
I call BS! It's obvious you people are not good at pattern recognition, so let me enlighten you.

Per 1: "I used AI to produce this code to do XYZ, but I get this weird 'seg fault' response instead."
Per 2: "No! You used `strcpy`... you idiot!"
Per 3: "What about pointers?!"
Per 4: "*sigh* I've been doing this for 57 years and you do this... (three genius lines of code)..."
Per 1: "Thank you!"

Meanwhile, 'person 1' has done the dishes, made a fresh pot of coffee, and pre-written an email to the boss saying they're done. Seems like 'person 1' was very productive to me!
 
The reality is that these AI tools are based on previous work by others. Many are no more than a search tool. Take that for what it's worth but I find most of the results from online AI stuff returns wrong or obvious results which have been of no value to me.
Garbage in, garbage out :)
 
I call BS!
Recently I played with Claude AI: I gave it a simple C program (a real one that I wrote for a particular purpose), asking it to find issues and improve it. It made some reasonable additions for extra error checking, but changed all three distinct exit codes in different parts of the program to "1"! Why??
 
You know those super-genius programmer types (this guy was programming in Pencil). Well, I watched him post a reply link, to one of those "Here is some code from ChatGTP" questions, pointing to his own ChatGTP question where he demonstrated how to "ask the question properly". The code he got back was absolutely perfect (exactly like the code he had been publishing for years). So it sort of depends on how you think about the problem(s) and what questions you're asking.
 
Every time I visit a BSD forum, it’s always the same:

- systemd is bad
- btrfs is bad
- launchd is bad
- d-bus is bad
- Docker is bad
- Rust is bad
- glibc is bad
- immutable distros are bad
- JS is bad
- GNOME is bad
- GTK is bad
- Wayland is bad
- Qt is bad
- bash is bad
- Python is bad
- Windows is bad
- Linux is bad
- AI is bad

I mean, come on guys. I get it, nothing's perfect, some things are even terrible, but this level of critique is beyond comedy. At this rate, oxygen will be next on the list.

Here we go
 