Will FreeBSD adopt a No-AI policy or something along those lines?

I am sure the thought is comforting, even though it really just pushes the ball to "the next generation's lifetime." Not to mention the problem with it being comforting in the first place: does it mean that if these things were going to exist in your lifetime, then you would fret?
Exactly. It is so far down the line that I don't want to waste my time worrying about something that isn't going to affect me.
I don't worry about the Sun expanding and engulfing the Earth in millions of years either.

I'm at least old enough to know that history will do what it does. The best one can do is try to understand it as it happens.
History will do what it does. And that is repeat itself. I have lived through enough gimmicks now to know not to worry.

We are lifetimes away from AGI. What we have now is chat bots. We used to have a member on these forums who wrote chat bots. It's not new to us.
 
History will do what it does. And that is repeat itself. I have lived through enough gimmicks now to know not to worry.

Clearly, you have lived a blessed life.

Exactly. It is so far down the line that I don't want to waste my time worrying about something that isn't going to affect me.

Make up your mind. Is it (or I guess anything) not worth worrying about, or is it worth worrying about but only if it happens in your lifetime?
 
We are lifetimes away from AGI.

In my opinion, an eternity, because artificially generated intelligence is a misnomer (meant to awe people I suppose, like certain religious concepts of the past).

Try to not anthropomorphize computers, it may help clear your thinking.

The question is: how close are we to autogenerated software that can outperform systems programmers? Has it already happened? If yes, what would be some additional things to consider before blindly adopting it?
 
Clearly, you have lived a blessed life.
Why? You let yourself be affected by gimmicks in the past? Perhaps you want to take the opportunity to reflect on that.

Make up your mind. Is it (or I guess anything) not worth worrying about, or is it worth worrying about but only if it happens in your lifetime?
As mentioned, in this case, they are both the same. This "AI" risk you are alluding to will not happen in our lifetime. So don't worry about it.

In my opinion, an eternity, because artificially generated intelligence is a misnomer (meant to awe people I suppose, like certain religious concepts of the past).
AGI is "Artificial General Intelligence". Nothing to do with anthropomorphism. If the "AI" needs to be tailored to a specific problem domain, then it is basically a glorified algorithm and humans are the sole driving factor. In hundreds (if not thousands) of years, once AI can adapt without needing spoonfeeding by humans to achieve a single prescribed function, then things may get more interesting.
 
Why? You let yourself be affected by gimmicks in the past? Perhaps you want to take the opportunity to reflect on that.

It is simply that I, like most of humanity, have experienced the horrors of history repeating itself. The experience has taught me the opposite of "not to worry." Not that... I mean, may God continue to bless your life and your family's. I am just saying.

As mentioned, they are both the same.

Well, they are not. There is a difference between saying "autogenerated software is not worth worrying about" and saying "autogenerated software is only worth worrying about beyond X amount of computing capacity."

We would like clarification, if possible.

AGI is "Artificial General Intelligence".

Ok, you got me. I don't really keep up with the marketing.
Nothing to do with anthropomorphism.

Well, intelligence is a human trait.
 
It is simply that I, like most of humanity, have experienced the horrors of history repeating itself. The experience has taught me the opposite of "not to worry."
Being in tech for a while, you get used to the rollercoaster of vaporware :)

Well, they are not. There is a difference between saying "autogenerated software is not worth worrying about" and saying "autogenerated software is only worth worrying about beyond X amount of computing capacity."
We would like clarification, if possible.
Those are both different questions from your previous post, which is the one I answered.

As for this question: the "AI" algorithms of today are not a risk, regardless of how much computing capacity you throw at them. It's like saying a quicksort algorithm will become sentient or take over the world if you run it on a really fast computer. That is not the case.
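
To make the analogy concrete, here is a toy quicksort in C (a minimal sketch, written for this post and nothing more): run it on a 486 or on a supercomputer, and the only thing that changes is how quickly it produces the same sorted output. The behaviour is fixed by the code, not by the hardware it runs on.

Code:
/* A toy quicksort -- purely deterministic: faster hardware shortens
 * the wall-clock time, it never changes what the code does. */
#include <stdio.h>

static void
swap(int *a, int *b)
{
    int t = *a;

    *a = *b;
    *b = t;
}

static void
quicksort(int *v, int lo, int hi)
{
    if (lo >= hi)
        return;

    int pivot = v[hi], i = lo;      /* Lomuto partition scheme */

    for (int j = lo; j < hi; j++)
        if (v[j] < pivot)
            swap(&v[i++], &v[j]);
    swap(&v[i], &v[hi]);

    quicksort(v, lo, i - 1);
    quicksort(v, i + 1, hi);
}

int
main(void)
{
    int v[] = { 5, 1, 4, 2, 3 };

    quicksort(v, 0, 4);
    for (int i = 0; i < 5; i++)
        printf("%d ", v[i]);        /* always prints: 1 2 3 4 5 */
    printf("\n");
    return (0);
}

No amount of clock speed turns that into something with goals.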

Ok, you got me. I don't really keep up with the marketing.
The term AGI came about in 1997. We still aren't much closer to it now either. This recent LLM stuff is a fun monetizable distraction but might even slow down progress towards AGI.

Well, intelligence is a human trait.
Disagree. A rabbit has more intelligence than what we are calling "AI" today.
 
Disagree. A rabbit has more intelligence than what we are calling "AI" today.
Come on, this is a silly parsing of words. If you said "my coffee machine's warm embrace," that would be an anthropomorphization, even though a bear could embrace you also.

As for this question: the "AI" algorithms of today are not a risk, regardless of how much computing capacity you throw at them. It's like saying a quicksort algorithm will become sentient or take over the world if you run it on a really fast computer. That is not the case.
This is another silly parsing. Fine, not "computing" capacity, call it "generative" capacity.

The fact remains that you are placing the threshold for caring not at autogenerated software itself, but at it being anything more than vaporware. I am beginning to suspect that, if it were not vaporware, your advice would be to worry.
 
Employers probably benefit from employees using AI so they can replace human workers, and employees want to be competitive :p

I wonder if it looks good on a resume to not have ever used AI?
 
What do mass lay-offs of software engineers, and of management in general, tell you? That in many cases, they have dispensed with the human operator altogether, and just turn the machine on in the mornings.

In others, maybe an Asian pseudo-slave with a three month crash course and a prompt can do the work of 10 engineers with master's degrees and 10 years experience each.

These debates are all academic. The real world is having its say. It's hilarious that people think that the "Artie your AI friend" chatbot is where the revolution is happening. That's just the PR effort (it has other uses too).

The questions outstanding, especially regarding FreeBSD's adoption (or not) of autogenerated software for code generation: are there consequences here beyond functionality?

I've brought up a couple. I think the most pressing would be that it takes the direction out of the FreeBSD team's hands. Eventually, FreeBSD would be something generated by some guy who wakes up in the morning and switches the machine on. The guy is metaphorical, of course; somebody could just build the machine and leave it on. You could probably have a separate machine automate the process of paying the electricity bills and such.
 
What do mass lay-offs of software engineers, and of management in general, tell you? That in many cases, they have dispensed with the human operator altogether, and just turn the machine on in the mornings.
It means huge corporations with thousands of engineers and managers find they don't need them anymore at the moment due to the economics of their business and the world. It does not mean AI has replaced them all. In fact, I'm more sure of that than your next statement:
maybe an Asian pseudo-slave with a three month crash course and a prompt can do the work of 10 engineers with master's degrees and 10 years experience each.
 
Companies don't replace software engineers with LLMs.

They just keep at a much lower headcount and expect each existing engineer to do multiple times as much with the help of machine learning.

That open headcount goes right over to hiring machine learning specialists and data scientists. So if the software engineering departments do discover that they are suffering - too bad, the headcount is gone.
 
The major thing that AI does not have -- because it is not designed to -- is "good judgement".

FreeBSD is not perfect at using good judgement as to what code makes it into the tree, but at least there is a preference for it.

As to what quality of code AI will write ... just go to any modern web page (even this one), click "View Page Source", and tell me how you would debug a problem in it. (The Google News page is particularly charming.)
 
It means huge corporations with thousands of engineers and managers find they don't need them anymore at the moment due to the economics of their business and the world. It does not mean AI has replaced them all.

I will answer this with my answer to the following:

Companies don't replace software engineers with LLMs.

They just keep at a much lower headcount and expect each existing engineer to do multiple times as much with the help of machine learning.

That open headcount goes right over to hiring machine learning specialists and data scientists. So if the software engineering departments do discover that they are suffering - too bad, the headcount is gone.

1. The ratio between programmers and autogenerated-software engineers is not 1:1. Maybe in dollar terms it is, i.e. one of the second costs as much as three of the first, or some such. But this will change. Otherwise, why even do this? Obviously, someone somewhere is seeing improved performance.

2. A direct replacement of one department by another for a given function is only one way that transference takes place. You could decide that it's cheaper to outsource some of your work to another company. Guess what that company does? You might find that, since you automated some executive tasks, you no longer really need certain departments. Guess how all that got automated. You may find that much of your business is simply disappearing. If you are a software producer of some kind, guess who is likeliest to be taking that business. Maybe you are a consultancy firm for a certain class of systems, traditionally written in C with certain structures. Maybe none of that work is done anymore, by autogenerated software or anybody else, because the need was eliminated somewhere else along the industrial network. Autogenerated software could well be responsible for some of those optimizations, redundancies, etc.

In fact, I'm more sure of that than your next statement:
In others, maybe an Asian pseudo-slave with a three month crash course and a prompt can do the work of 10 engineers with master's degrees and 10 years experience each.
Ah, sorry, only menial labour and customer support can be outsourced to places where a more frank appraisal of the world and different legislative frameworks permit certain cost cuts.

?


Completely misread and misunderstood what you had written there.
 
A fact of note, which few people seem to realize, especially given the bubble of it all, is that autogenerated software is a child of a bear market. That is, its real power is in cost savings. At its core, what is going on is a revolution of automation: tasks are getting automated, and optimizations found along the way, that couldn't be automated previously. It took the global lockdown and the following global economic crash to really present the autogenerated-software industry with the opportunity it needed. That is because, indeed, autogenerated software cannot make stuff up. Humans can do that. In a growth economy, money is looking for people to make stuff up. In a bear economy, money is looking to make things cheaper. Autogenerated software may not be able to make stuff up, but it can do the stuff that is already there 1000x faster than humans.

These are simply facts.
 