A Developer's Guide to Generative AI in FreeBSD

AI haters are AI haters; theirs is an almost irrational stance, and I know I will not convince them.
I am not an AI hater. But I also don't look at AI from a single point of view.

Today's AI/LLMs can be seen as very large collections of expertise.
If you look at them only that way, there is nothing wrong with them; having expertise at hand that lets anybody produce anything quickly even sounds like a good thing.
But that's not the whole picture.
Look at it from the point of view of a programming expert being helped by a tool to do less of the tedious, boring stuff and to focus more on the higher, more abstract tasks: experts use such tools with expertise.
But that's not how our world works.
Since those tools can also be used by non-experts to produce things that merely look like expert work, that is what happens. So production increases massively: by experts, but additionally by non-experts, while only experts can judge the results. The non-experts do not see this; they can neither judge the results the tool provides nor judge whether the results even need to be judged. It all looks so good, if not perfect, to non-experts. Experts look at the same results with their expertise and see the flaws.
A non-artist is overwhelmed by the pictures it produces, a non-programmer by the code it produces, a non-computer-expert by the config files it produces, while the experts quickly see the garbage.
So it still needs experts to check the results, or garbage is released to the wild.
The latter is also the case without AI, of course, because experts make mistakes too. But as I just said, with AI not only does the amount of material being produced increase massively; since non-experts now also produce material unchecked, the amount of garbage increases massively as well.
While at the same time the capacity to check it all with expertise does not grow. On the contrary: it even shrinks.
Almost no pupil memorizes the multiplication table up to 9 times 9 anymore, because there are pocket calculators. As a result, the majority lacks the most fundamental arithmetic and is incapable even of judging a calculator's result. Calculators are not for doing 3 times 7, but for doing 345 times 789. But when you cannot do 3 times 7 in your head (which is faster than using a calculator anyway), you cannot judge whether the calculator's result is correct.
Of course you can trust today's calculators to calculate correctly (enough), but you may have made a typo while using one and not recognized it, because you cannot judge the result. So a tool originally meant to help people on long walks became a crutch that makes people stop walking at all.
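The calculator point can be made concrete: a rough mental estimate catches a slipped digit even when you fully trust the machine's arithmetic. A minimal sketch (my own illustration; the factors 345 and 789 are the ones from the text, the 15% tolerance is an arbitrary assumption):

```python
# Sanity-checking a calculator result against a mental estimate.
exact = 345 * 789       # what the calculator should display
estimate = 350 * 800    # mental arithmetic: round both factors
typo = 345 * 7890       # an extra digit typed without noticing

def plausible(result, estimate, tolerance=0.15):
    """Accept the result only if it is within ~15% of the rough estimate."""
    return abs(result - estimate) / estimate <= tolerance

print(exact, plausible(exact, estimate))   # the real product passes
print(typo, plausible(typo, estimate))     # the typo is an order of magnitude off
```

The point is exactly the one made above: the check requires that you can still do 350 times 800 in your head; without that residual expertise, the wrong result looks just as authoritative as the right one.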
That's what you also need to see: don't look only at the tool and at how it is intended to be used, but at how it is actually used and at the results that follow.
You, and many, many others, learned programming long before today's LLMs. You learned most of it not by taking a twenty-lesson class on some language's syntax, but through the experience of actually writing real programs running in the real world. Now tell somebody to start learning programming today to gain the same expertise you have. How? And above all, why? What for? The machine already knows it all.
So expertise is going to be lost.
Which wouldn't be a real problem, as long as the expertise stayed preserved in the machines.
But that's not the case with today's AI.
Today's AIs cannot actually think or understand by themselves. They cannot judge; they cannot know, by themselves, what is right. To them, everything they gather has the same value. If the machine collects something somebody wrote on the internet saying the sky is green, the probability of it delivering green as the sky's color increases, unless somebody tells the machine that this is wrong. It doesn't matter whether it was meant as a joke or was part of some insane, weird conspiracy theory: the machine cannot judge by itself. It's a primitive example, one everybody recognizes immediately, even though many blindly trust a computer's output no matter how obviously wrong it is. But think of all the mistakes that are never corrected: either for lack of expertise, or for lack of capacity to correct all the errors.
When another AI gathers that uncorrected wrong information, then, as in real life, both see they have the same information, which "proves" it must be correct: an even higher probability of wrong output.
You see how AIs become dumber over time if errors are not corrected. And you don't even need wrong information; missing information also produces errors. Nobody and nothing can know everything. And as every engineer knows: errors produce follow-on errors unless they are corrected, and to be corrected they first have to be found.
That's inherent to the system.
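The feedback loop described above can be sketched as a toy model (my own illustration, not from the thread; the amplification and correction rates are arbitrary assumptions): track the fraction of wrong statements in a corpus when models retrain on their own output, with and without experts correcting errors.

```python
def simulate(p0, amplification, correction, steps):
    """p0: initial fraction of wrong content in the corpus.
    amplification: how strongly uncorrected wrong output, once re-ingested,
    reinforces itself ("both see they have the same info, so it must be correct").
    correction: fraction of errors found and fixed by experts each generation."""
    p = p0
    history = [p]
    for _ in range(steps):
        p = p + amplification * p * (1 - p) - correction * p
        p = min(max(p, 0.0), 1.0)  # a fraction stays between 0 and 1
        history.append(p)
    return history

# Same small initial error rate (1%), thirty generations:
no_experts = simulate(0.01, amplification=0.2, correction=0.0, steps=30)
with_experts = simulate(0.01, amplification=0.2, correction=0.3, steps=30)
```

Under these assumptions the uncorrected corpus drifts toward mostly-wrong, while a correction rate higher than the amplification rate drives errors back toward zero. The model is deliberately crude; the point is only the direction of the dynamic, not its speed.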

Now compute 1 + 1:
Expertise is lost because people stop learning things, because the machines provide that knowledge. And over time the machines lose the expertise stored in them, when there is no human expertise left to maintain it.
Do you see the dilemma?

I am not saying: "Stop and kill all AI!" That's the same stupid BS as saying: "We need AI everywhere!"
All I am saying is: we need to preserve some AI-free spaces.
 
AI is a tool like any other that humanity has created and invented.
The difference with other tools is obvious: it's not an inert thing; it consists of algorithmic models that can evolve, in part through learning.
If we can get a tool to perform an action, why not use it?
This tool requires several levels of verification on the content it produces.
Software development cannot ignore an extraordinary tool that it has itself created.
These are reflections I am having with myself, and they require the consideration of other points of view. I have no intention of presenting them as a "finished" concept.
 
LLMs aside, is this proposal interesting? It seems a little generic and pointless.

As an example, where is the proposal for "A Developer's Guide to Putting Their Shoes On in the Morning"?
 
I played a lot with AI; some AIs are good only for Python, others for other stuff. The main rule I learned: they really need a stable API, otherwise crap comes out and they start to hallucinate.


PS: Monthly Subscription Plans

Tool             | Monthly Price (Individual) | Best For
Claude Pro       | $20/month                  | Writing and high-level reasoning.
ChatGPT Plus     | $20/month                  | All-around tasks and custom GPTs.
Gemini Advanced  | $19.99/month               | Google ecosystem users (includes 2TB storage).
GitHub Copilot   | $10/month                  | Dedicated coding inside your editor.
All THEY need are several million retail suckers at $20 per month, on top of the more expensive commercial AI accounts, to get trillion-dollar rich quick. THEY know there are millions of willing suckers who will pay to be GUIDED by their AI honey pots.
 
personally we just find it noxious that these people are trying to sell what they believe is an omni-intelligent sentient being, because what kind of person does that? i'm not sure there are polite words to describe the kind of person who does that.
 
A very special kind of marketing salesperson, maybe.
 
A long time ago, McDonald's made billions by selling burgers at 99 cents each to billions of customers, and now there are new suckers willing to pay even more for their burger.
 
I'm pretty sure that all those $20/month plans don't make any money, at least not for power users. Maybe they have a lot of people who subscribe and then don't use it.
 
All ventures throughout history were made in the attempt to make money. That a person or company can make lots of money is a result of the value people place on that product. To scorn a person for making a lot of money ignores the fact that they invested their own time, money, and life into that product, with the possibility of total loss and failure.

I will never forget the times I lay on the floor in the back room of my business with pains in my stomach. The time I had to call my wife from work to drive me home because I was throwing up and couldn't drive myself. The time I was sued over something I wasn't even present at. (I lost a few thousand dollars for my victory.)
 