AI lowers productivity

Well I have to face AI now at work.

Our customers send out an RFQ (Request for Quote), and it is a job package.
Specific big package items get their own paragraph with many sub-paragraphs (pull propeller, pull shaft, rebuild engine, etc.).
Specs and drawings included. Certifying agency, general scope, and how they want it done. Who pays for what, when, and how.

So at shop level we are the last to get the job package. By that point we have already won the bid on the package/contract.

We attend a pre-arrival meeting to make sure the scheduling looks realistic and to give input. (Are you crazy?) "Oh, growth work will cover it."

Anyway, we had a change of management, so we are getting customers who are not regulars. Stragglers nobody wants.

We get this job package and it reads like some foreigner wrote it, but worse.
It looks real, but there are words in it that are not the nautical terminology we use. So weird that I asked others. Same response.
Weird. What does "Have ABS verify Hardening of Propeller Nut" mean? ABS does check for slugging the nut and the checking strap.
Hardening a nut means something totally different. Weird terminology.

TL;DR: The ship company had a new port engineer who wrote the package, and he used ChatGPT. Fresh out of some school.
No idea what he is doing. Couldn't be bothered to look at old packages/contracts.
BLAH. Getting close to retirement.

You just can't fake shit to people who have been doing it for 40 years.
 
I find ChatGPT very useful. I pay for the Plus version. Not cheap.

Like with anything, you have to learn how to use it, but once you do, it saves you a bunch of time, again and again.

Also, it's the best English grammar corrector I've come across (using the right prompt, which cost me quite some time to develop). Without the right prompt, it is as annoying as any human (it is mirroring us, after all), and it insists on opining about your style of writing just because it has an opinion (like any human, really... annoying). But when you get it to focus only on grammar, it's an incredibly useful tool for detecting your errors and learning from them.
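
For the curious, here is a minimal sketch of that kind of grammar-only setup, done through the OpenAI Python API rather than the ChatGPT web UI. The model name and the prompt wording are illustrative assumptions, not the prompt described above.

```python
# Minimal sketch of a "grammar only" corrector via the OpenAI Python SDK.
# Assumptions: OPENAI_API_KEY is set in the environment, and the system
# prompt below is purely illustrative (not the poster's actual prompt).
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a grammar checker. Correct only grammar, spelling, and "
    "punctuation. Do not comment on style, tone, or word choice. "
    "Return the corrected text, then a short list of the errors you fixed."
)

def check_grammar(text: str) -> str:
    """Send the text with the grammar-only instructions and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",          # any current chat model would do here
        temperature=0,           # keep the corrections as consistent as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(check_grammar("Me and him goes to the shipyard yesterday."))
```

In the ChatGPT web UI, the same idea amounts to pasting those instructions as a custom instruction or at the top of the chat.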
 
Like any tool, it helps with some things but not with others. Ever since the release of GPT-4 and Claude 3.5, I find that it saves me countless hours with coding. When using it to help with *nix command params or firewall rules, I find that it would have been easier to read the man page or google it instead, due to incorrect information. So with some things it works really well (like small batches of code), and with others it sets you back. All in all, AI for me at this point is irreplaceable!
 
I find ChatGPT very useful.
For the use described above, I dig it.

What I don't dig: my mother (aged 78), who writes résumés for C-level executives, now sends her work through some AI thing.

I asked my dad, doesn't she care about the customers' privacy? He said she strips out the name and address, so it's OK.
Man, I hate to disparage my family, but don't you think ChatGPT could figure out who is COO of Amex and COO at Coke?
Stripping the names doesn't do anything. You just fed the beast and paid to do it. Somebody's confidential information, at that.

Man the implications are staggering. Giving it away.
 
confidential information

Real privacy in today's world is hard to get. I'd like to use a phone with an alternative OS, for instance, but I cannot afford it at the moment, so Android siphons off from me whatever it siphons off.

It is what it is. We adapt the best we can and try to do our thing despite the evil elites (no joke there).
 
Weird. What does "Have ABS verify Hardening of Propeller Nut" mean? ABS does check for slugging the nut and the checking strap.
Hardening a nut means something totally different. Weird terminology.
Well then, harden the propeller nut. *veg*
 
Every time I visit a BSD forum, it’s always the same:

- systemd is bad
- btrfs is bad
- launchd is bad
- d-bus is bad
- Docker is bad
- Rust is bad
- glibc is bad
- immutable distros are bad
- JS is bad
- GNOME is bad
- GTK is bad
- Wayland is bad
- Qt is bad
- bash is bad
- Python is bad
- Windows is bad
- Linux is bad
- AI is bad

I mean, c'mon, guys. I get it, nothing's perfect, sometimes even terrible, but this level of critique is beyond comedy. At this rate, oxygen will be next on the list.

Here we go
For any thing X, you can find people who say "X is bad". In many cases, those people are fruitcakes. Or comedians. As an example, look at dihydrogen monoxide (two different links, one more serious, one more funny). In addition to that, a significant fraction of FreeBSD "true believers" will look down on anything associated with other OSes. You find that in any community, for example Mercedes vs. BMW, or my favorite, Stihl vs. Husqvarna. Simply ignore those fanatics.

As an example, I have both a Stihl and a Husqvarna chainsaw, in addition to an Echo, Dolmar, Makita, and Greenlee. They all have their pluses and minuses, and for each job, I try to use the appropriate one.

I also have opinions on many of the things you listed above. For example: systemd is a good idea, and many of its aspects are excellent, but the implementation is very messy, reflecting its chief architect being both a sociopath and a bad software architect. BtrFS was originally a good idea (Ohad's modifiable B-tree), but it is so badly implemented that it is a danger to one's data (and I have the opposite opinion of ext4, which is an excellent file system, although today a file system has to be integrated with the volume manager and the RAID layer to really shine). JS is not inherently bad, and when programmed with good style and strict coding rules, one can build large JS programs that are functional, efficient, and maintainable; but it also allows horrendous coding style and fragile web pages. Bash is the shell I actually use all the time, although I'm toying with the idea of switching to zsh. I am learning Rust now, and so far I like it (although I see the huge difficulty in transforming a large existing C/C++ code base to Rust). I run both Windows and Linux, where appropriate, and I also use MacOS heavily. I enjoy some more than others (and honestly, I really don't enjoy Linux system administration on a server), but they are not inherently bad. And so on and so on.

If you ask reasonable people in this forum, they will tell you the pros and cons of any of these technologies; if you ask religious fanatics, you'll get word salad and bile. Complaining about the fact that the religious fanatics enjoy freedom of speech here is a bit off color.
 
Every time I visit a BSD forum, it’s always the same:

- systemd is bad
- btrfs is bad
- launchd is bad
- d-bus is bad
- Docker is bad
- Rust is bad
- glibc is bad
- immutable distros are bad
- JS is bad
- GNOME is bad
- GTK is bad
- Wayland is bad
- Qt is bad
- bash is bad
- Python is bad
- Windows is bad
- Linux is bad
- AI is bad

I mean, c'mon, guys. I get it, nothing's perfect, sometimes even terrible, but this level of critique is beyond comedy.

Hm. Software. Software quality.

I recognize two levels of software quality: interplanetary and interstellar.
The first means you can trust it with your life. The second, you can trust it with your grandchildren's lives.
Simple, isn't it? With innovation cycles of 2-3 years, we seem to have forgotten that the natural innovation cycle is counted in generations, and that is what allowed our species to survive.

You might try looking at the things that are commonly not considered bad here, and at what they have in common. You may then recognize a few differences, for example in terms of scalability, flexibility, granularity, or longevity.

Most of the current IT industry is driven by the premise: make money quick. Do you seriously think much of these achievements will survive for a timespan comparable to that of Unix?
 
As I see it:
One needs to distinguish the environment, above all the kind of data fed to the AI.

If you stay within a restricted domain and feed it only data proven to be right, curated and controlled by experts, you may get outcomes that are really helpful - but then you have to answer the question of whether it's worth all the effort instead of just letting the experts do their job in the first place (which is a question of the amount of data and of what you want to get out of it).

But if you unleash the AI and let it freely and without limit feed itself on all the junk of the web, then all bets are off.
One does not need to be an AI expert to understand that.
One only has to understand: computers cannot think.
And they never will. Because thinking takes (a lot) more than just comparing things, no matter how huge the database and how fast the computing may be.
There is no problem with that, as long as it is understood, seen, and handled as what it is: some kind of toy, an experiment, an assistant at most - maybe a new subsidiary alternative, but not a replacement.

But that's already where the problems start.
Not only do most people tend to sacrifice anything for even the slightest bit of extra comfort, most also cannot handle assisting systems.
They're not willing to understand: a pocket calculator is of little use if you don't know math.
They just see: 'Cool, this thing does all the math for me, so I don't need to learn it anymore.' - Wrong!

Even those who call themselves modern, educated people want to believe in magic, want to believe that someday we're all going to live on Star Trek's Enterprise, where, when you want booze, you simply go to the machine and say 'Evil Schnaps!' - without comprehending how this fits into the 'system' of how our society works.

People are amazed: 'Hey look, the computer is writing/painting/composing all by itself.'
Not realizing:
a) It's no art at all; it's just cobbled together from things that already exist. (I always have to laugh out loud when someone sues ChatGPT over copyright. Of course. They sell things they've stolen.)
b) It's dull.
c) We don't need it. There already are, produced by real humans, far more and far better books, music, and art than one could enjoy in a lifetime.
People don't read good books anymore. But if a machine craps out some bad text, they are on it like hungry vultures.

People are amazed by the magic of electricity as they were amazed by the steam engine: 'Hey, it's moving all by itself, magically!'
All you need to dazzle humans is something that does something by itself, seemingly magically - meaning they don't understand how it works - and don't let them look behind the stage (see the Mechanical Turk); it's even better when it's lit up colorfully and flashing - that's kind of hypnotic, too.

People look at their new car's dashboard: 'Hey, look at all those fancy colors and pictures.'
They are dazzled by the nice looks, not realizing there is not a single additional piece of useful information there, only distracting crap.

That's the way you sell useless junk people neither need nor want.
And that's where the problems start: the sell-out.
Everywhere you look, AI is the solution to all of our problems.
Not enough growth on the stock market? AI will deliver infinite wealth.
Climate change? AI will find the solution.
AI will design better cars. I'm happy if I can find a good used one, since the new ones contain so many sloppily designed features that they spend more time in the workshop than on the road...
AI now must be used everywhere, for everything, without asking what it's suitable for, or when it's better not to use it.
That's the problem.

All you get in the media is where AI delivers good results.
You have to go searching for all the times AI delivered obscure (sometimes funny, but unusable) shit.
That's the usual situation when something is being sold:
benefits and advantages only, and that is never the whole truth.
But people like to believe it every time, especially when nobody's objection is heard.
Because people also believe that new is better, the critics (which does not mean mere contradiction) are easily made to look like implausible greybeards, simply by calling them 'incurable old farts stuck in the past, unwilling to change, trying to prevent progress/evolution/the future - yeah, yeah, everything was better in the old days.'
That's the bait, and most of the people who swallow it believe exactly that themselves.
While all the 'greybeards' are trying to do is to bring some reasonable perspective into an otherwise blind hype.
There is a word for that: sophisticated - and that's not a bad thing.

Given what Phishfry said above, one has to reckon with more plane crashes, or at least groundings.
Because it's not about getting better products, it's about selling AI.

The point is (again) not the technology,
but how it's handled, for what, and by whom.
As I elaborated in my other posts above:
in my eyes, AI is neither a technology mature enough yet to be used by non-experts, nor is society ready for it.
Every time a new technology gets into the hands of greedy salesmen too soon, humankind suffers.

That's the problem with AI as I see it,
not AI itself.

Bottom line:
So IMO it's a good thing if some people - especially experts - dare to object when something is hyped, and say:
'Hold your horses! Slow down! Don't overdo it! Be reasonable.'
 
Hm. Software. Software quality.

I recognize two levels of software quality: interplanetary and interstellar.
The first means you can trust it with your life. The second, you can trust it with your grandchildren's lives.
Simple, isn't it? With innovation cycles of 2-3 years, we seem to have forgotten that the natural innovation cycle is counted in generations, and that is what allowed our species to survive.

You might try looking at the things that are commonly not considered bad here, and at what they have in common. You may then recognize a few differences, for example in terms of scalability, flexibility, granularity, or longevity.

Most of the current IT industry is driven by the premise: make money quick. Do you seriously think much of these achievements will survive for a timespan comparable to that of Unix?
The way I see it, the Linux distributions, especially the bleeding-edge ones, have this absurd mentality of "if it's bad, throw it out and rewrite a replacement." On other operating systems, Unix or otherwise, there's this concept called compatibility. When something works, it's maintained and the codebase is improved, not ripped apart and replaced like a bad experiment. Generally in OSS, developers treat everything like a disposable toy, and Linux distributions feel like a glorified beta test for such developers.

That being said, Alexander Larsson is the best person I've ever met in open source projects. Such a cool guy to work with.
 
A very simple example. A very narrow specialization. I often draw fakes for psychological relief and entertainment. I have reviewed tons of fakes created by AI. The first thing I can say is: yes, AI does some things glamorously, smoothly. But there is no "soul" in such works. Second: if you need to do something more complex, then AI draws very poorly. Worse than me. Lots of mistakes, bloopers. You can see it by eye. But at the moment, the WORST thing AI does is process requests for "subtle events" (EXPERIENCES!) in the field of drawing romantic scenes. For example, erotic scenes. There are no EMOTIONS on the face. There is only a "glassy" look assembled from patterns, scripts, and data sets (this is VERY good for police departments searching for especially dangerous criminals). But not for living, analog people! This look, or rather these facial expressions, is algorithmic, cold, routine, "by protocol". It's like an American smile at work - only for tolerance and political correctness. And behind the smile - who knows what. So it's too early to write off the artists when it comes to fakes. :)
 
Most of the current IT-industry is driven by the premise: make money quick. Do You seriously think much of these achievements will survive for a timespan comparable to that of unix?
(via fefe)

Who hands over money to these clowns to make more "AI"?

Other than that, I agree with what PMc said. Some technologies shift faster than others. Things like cars and air travel are among the "grandchildren" tech, at least they were for our grandparents. We may need to change that because it took a long time to find something better, and if we don't have it by now, we must make haste to find it. Other things can be seen to be a bad idea much sooner and be replaced much more easily (like leaded gasoline). I'd wager AI is among the faster things, faster to crash and faster to burn. Crypto is still to be decided; the benefits will not outweigh the problems in the long run.
 
I'm currently writing this on a portable thing that fits into a shirt pocket and puts supercomputers from the time I was born to shame. The speed over 4G is faster than my first hard disk. Anyone telling you what will be going on in 20, 30, or 50 years should be taken with caution, or laughed at.
 
I think that pupil-sceptical wrote many correct things. mer wrote the universal formula for the AI of our time (garbage in, garbage out). Maybe the situation will improve, but I have no desire to work with AI today. Here is an example of graphics generation.
Literally everything is wrong. And what if the AI draws a map of the movement of a train that transports spent nuclear fuel?
[Attached images: 1.jpg, 2.jpg, 3.jpg, 4.jpg]
 
Damn. :mad:
If the first thing you realize is that there is a car in the living room, and you keep on searching to find the issue, and only then finally realize you didn't notice she has three feet because you looked all the time only at her face but not her bottom, then you realize:
'Dude, you're getting old.'
 
because you looked all the time only at her face but not her bottom
You are either an honest old dude or a bare-faced liar, but no one cares.

I was tripped up by the toes on her left foot being toes from a right foot. And three feet, and a leg where it should not be, and, well - almost everything. This is pathetic.
 
Hands are another thing AI typically gets wrong. Proportions, fingers pointing the wrong way, etc.
 
Nice. Of course you could just cut off a finger from 5 different people and leave their fingerprints...
 