Will FreeBSD adopt a No-AI policy or such?

I am sure the thought is comforting, even though it really just kicks the ball into "the next generation's lifetime." Not to mention the problem with finding it comforting: does it mean that if it were going to happen in your lifetime, then you would fret?
Exactly. It is so far down the line that I don't want to waste my time worrying about something that isn't going to affect me.
I don't worry about the Sun expanding and engulfing the Earth in millions of years either.

I'm at least old enough to know that history will do what it does. Best one can do is try to understand it as it happens.
History will do what it does. And that is repeat itself. I have lived through enough gimmicks now to know not to worry.

We are lifetimes away from AGI. What we have now is chat bots. We used to have a member on these forums who wrote chat bots. It's not new to us.
 
History will do what it does. And that is repeat itself. I have lived through enough gimmicks now to know not to worry.

Clearly, you have lived a blessed life.

Exactly. It is so far down the line that I don't want to waste my time worrying about something that isn't going to affect me.

Make up your mind. Is it (or I guess anything) not worth worrying about, or is it worth worrying about but only if it happens in your lifetime?
 
We are lifetimes away from AGI.

In my opinion, an eternity, because artificially generated intelligence is a misnomer (meant to awe people I suppose, like certain religious concepts of the past).

Try to not anthropomorphize computers, it may help clear your thinking.

The question is: how close are we to autogenerated software that can outperform systems programmers? Did it already happen? If yes, what would be some additional things to consider before blindly adopting it?
 
Clearly, you have lived a blessed life.
Why? You let yourself be affected by gimmicks in the past? Perhaps you want to take the opportunity to reflect on that.

Make up your mind. Is it (or I guess anything) not worth worrying about, or is it worth worrying about but only if it happens in your lifetime?
As mentioned, in this case, they are both the same. This "AI" risk you are alluding to will not happen in our lifetime. So don't worry about it.

In my opinion, an eternity, because artificially generated intelligence is a misnomer (meant to awe people I suppose, like certain religious concepts of the past).
AGI is "Artificial General Intelligence". Nothing to do with anthropomorphism. If the "AI" needs to be tailored to a specific problem domain, then it is basically a glorified algorithm and humans are the sole driving factor. In hundreds (if not thousands) of years, once AI can adapt without needing spoonfeeding by humans to achieve a single prescribed function, then things may get more interesting.
 
Why? You let yourself be affected by gimmicks in the past? Perhaps you want to take the opportunity to reflect on that.

It is simply that I, like most of humanity, have experienced the horrors of history repeating itself. The experience has taught me the opposite of "not to worry." Not that... I mean, may God continue to bless your life and your family's. I am just saying.

As mentioned, they are both the same.

Well, they are not. It's different to say "autogenerated software is not worth worrying about" and "autogenerated software is only worth worrying about beyond X amount of computing capacity."

We would like clarification, if possible.

AGI is "Artificial General Intelligence".

Ok, you got me. I don't really keep up with the marketing.
Nothing to do with anthropomorphism.

Well, intelligence is a human trait.
 
It is simply that I, like most of humanity, have experienced the horrors of history repeating itself. The experience has taught me the opposite of not to worry.
Being in tech for a while, you get used to the rollercoaster of vaporware :)

Well, they are not. It's different to say "autogenerated software is not worth worrying about" and "autogenerated software is only worth worrying about beyond X amount of computing capacity."
We would like clarification, if possible.
Those are both different questions to your previous post. And the one I answered.

For this question: the "AI" algorithms today are not a risk, regardless of the computing capacity you throw at them. It's like saying a quicksort algorithm will become sentient or take over the world if you run it on a really fast computer. This is not the case.

Ok, you got me. I don't really keep up with the marketing.
The term AGI came about in 1997. We still aren't much closer to it now either. This recent LLM stuff is a fun monetizable distraction but might even slow down progress towards AGI.

Well, intelligence is a human trait.
Disagree. A rabbit has more intelligence than what we are calling "AI" today.
 
Disagree. A rabbit has more intelligence than what we are calling "AI" today.
Come on, this is a silly parsing of words. If you said "my coffee machine's warm embrace," that would be an anthropomorphization, even though a bear could embrace you also.

For this question: the "AI" algorithms today are not a risk, regardless of the computing capacity you throw at them. It's like saying a quicksort algorithm will become sentient or take over the world if you run it on a really fast computer. This is not the case.
This is another silly parsing. Fine, not "computing" capacity, call it "generative" capacity.

The fact remains that you are putting the limit of caring, not at autogenerated software itself, but at it being anything more than vaporware. I am beginning to suspect that the answer is that, if it were not vaporware, your advice would be to worry.
 
Employers probably benefit from employees using AI so they can replace human workers, and employees want to be competitive :p

I wonder if it looks good on a resume to not have ever used AI?
 
What do mass lay-offs of software engineers and of general management tell you? That in many cases, they have dispensed with the human operator altogether, and just turn the machine on in the mornings.

In others, maybe an Asian pseudo-slave with a three month crash course and a prompt can do the work of 10 engineers with master's degrees and 10 years experience each.

These debates are all academic. The real world is having its say. It's hilarious that people think that the "Artie your AI friend" chatbot is where the revolution is happening. That's just the PR effort (it has other uses too).

The questions outstanding, especially regarding FreeBSD's adoption or not of autogenerated software for code generation: are there consequences here beyond functionality?

I've brought up a couple. I think the most pressing would be that it takes the direction out of the FreeBSD team's hands. Eventually, FreeBSD would be something generated by some guy who wakes up in the morning and switches the machine on. The guy is metaphorical, of course; somebody could just build the machine and leave it on. You could probably have a separate machine automate the process of paying electricity bills and such.
 
What do mass lay-offs of software engineers and of general management tell you? That in many cases, they have dispensed with the human operator altogether, and just turn the machine on in the mornings.
It means huge corporations with thousands of engineers and managers find they don't need them anymore at the moment due to the economics of their business and the world. It does not mean AI has replaced them all. In fact, I'm more sure of that than your next statement:
maybe an Asian pseudo-slave with a three month crash course and a prompt can do the work of 10 engineers with master's degrees and 10 years experience each.
 
Companies don't replace software engineers with LLMs.

They just keep at a much lower headcount and expect each existing engineer to do multiple times as much with the help of machine learning.

That open headcount goes right over to hiring machine learning specialists and data scientists. So if the software engineering departments do discover that they are suffering - too bad, the headcount is gone.
 
The major thing that AI does not have -- because it is not designed to -- is "good judgement".

FreeBSD is not perfect as using good judgement as to what code makes it into the tree, but at least there is a preference for it.

As to what quality of code AI will write ... just go to any modern web page (even this one) and click "View Page Source" and tell me how you would debug a problem in it. (The Google News page is particularly charming.)
 
It means huge corporations with thousands of engineers and managers find they don't need them anymore at the moment due to the economics of their business and the world. It does not mean AI has replaced them all.

I will answer this with my answer to the following:

Companies don't replace software engineers with LLMs.

They just keep at a much lower headcount and expect each existing engineer to do multiple times as much with the help of machine learning.

That open headcount goes right over to hiring machine learning specialists and data scientists. So if the software engineering departments do discover that they are suffering - too bad, the headcount is gone.

1. The ratio between programmers and autogenerated software engineers is not 1:1. Maybe in dollar terms it is, i.e. one of the second costs as much as three of the first, or some such. But this will change. Otherwise, why even do this? Obviously, someone somewhere is seeing improved performance.

2. A direct replacement of one department for another for a given function is only one way that transference takes place. You could decide that it's cheaper to outsource some of your work to another company. Guess what that company does? You might find that since you automated some executive tasks, you no longer really need certain departments. Guess how all that got automated. You may find that much of your business is simply disappearing. If you are a software producer of some kind, guess who is likeliest to be taking that business? Maybe you are a consultancy firm for a certain class of systems traditionally written in C with certain structures. Maybe all of that isn't even done by autogenerated software anymore or anybody else, because the need was negated some different place along the industrial net. Autogenerated software could well be responsible for some of those optimizations, redundancies, etc.

In fact, I'm more sure of that than your next statement:
In others, maybe an Asian pseudo-slave with a three month crash course and a prompt can do the work of 10 engineers with master's degrees and 10 years experience each.
Ah, sorry, only menial labour and customer support can be outsourced to places where a more frank appraisal of the world and different legislative frameworks permit certain cost cuts.

?


Completely misread and misunderstood what you had written there.
 
A fact of note, that few people seem to realize, especially given the bubble of it all, is that autogenerated software is a child of a bear market. I.e., its real power is in cost savings. That is because, at its core, what is going on is a revolution of automation. Tasks are getting automated, and optimizations found along the way, that couldn't be previously. It took the global lockdown and the following global economic crash to really present the autogenerated software industry with the opportunity it needed. That is because, indeed, autogenerated software cannot make stuff up. Humans can do that. In a growth economy, money is looking for people to make stuff up. In a bear economy, money is looking to make things cheaper. Autogenerated software may not be able to make stuff up, but it can do the stuff that is already there 1000x faster than humans.

These are simply facts.
 
That open headcount goes right over to hiring machine learning specialists and data scientists.
Who, per capita, cost about 1.5x as much as software engineers (data scientists), and 3x (ML experts). Source: been there, done that, got the T-shirt.

The big companies that are laying off 1000 software engineers, only to hire 1000 AI experts plus 500 data scientists, are not doing themselves a favor.
 
Well, I am downloading Google Antigravity right now. An entire IDE dedicated to vibe coding. Cover me, I'm going in.
Please post an experience report! If the tunnel you have crawled into is nice and well outfitted, I might go in there too.

For some reason, this makes me think of Monty Python's "Spanish Inquisition": the soft pillows, the comfy chair ... oh, not the comfy chair. I want an IDE with a comfy chair.
 
Well, as long as I'm annoying the s out of people, I might as well go a few posts further and crystallize some thoughts. Eventually I'll get modded or bored or somebody will "set me straight." Please bear with me, and I apologize in advance for the inconvenience.

In a scenario where you are big money wanting to introduce autogenerated software into the world economy in a big way, what might be some of your motivations? The savings are there and are real, but big money would benefit from speculative growth as much as smaller money; the comparative advantage is not likely to be there. On the other hand, the difference between what small money and big money can do with autogenerated software, given the gap in infrastructure, in both quality and quantity of hardware, and in consolidated teams of highly trained and experienced autogenerated software architects, is tremendous. If the rule is autogenerated software, market share shifts dramatically in favour of big money. Want to set up a company? The days of hiring a capable graduate or professional, or a team of them, to profitably fulfill roles are over. You have to source, from big money, automated systems that can compete in the market for the different elements of your business. This with important exceptions and caveats, but as a general rule.

For example, what benefit could there be, beyond the mentioned cost savings, in introducing autogenerated software into open source code generation?

Well, if you wanted to take control of those projects, without overtly doing it, then this might be an avenue. After all, maybe every single developer in that project is a hard core individualist with no ties or links at all with anything they don't have complete control over. But if they start using any kind of autogenerated software as part of their work flows, they can't really (usefully) source that autogenerated software from some basement operation. They will be using software designed by big money. Therefore, big money has extended its market cap without even having to do anything. Actually, developers would be paying them subscription fees for the privilege.

Once a project is fully under your control, of course, you dispense with the idiotic "Artie your AI friend" chatbots and bring in the "Alpha Zero for $BUSINESS_SECTOR" guns.

Make sense?

Even without this, just the ability to introduce the "judgement" directions referred to above, by baking a given preference into the autogenerated software, helps you bring under your control projects that nominally have nothing to do with you.
 
What crystallizes the idea is putting it in words for other people.

I highly recommend it :p

Do I prefer "social media is the future",

This one must make you feel silly, given that Facebook is one of the top 5 companies in the world by market capitalization. Wait, you wrote that after all that happened?!

"bitcoin is the future"

Blockchain was actually a fairly revolutionary idea that is widely deployed now. What I was saying to people who got very excited after big money started moving in was that they were dupes for thinking tulips would hold their value forever. What big money was investing in was the blockchain. Had a hilarious conversation about it with my dentist when he was telling me that he lost everything in the crash. People...


"llms are the future"

The very fact that your mind prints "llm" when that marketing term was invented (or popularized, who knows) a good decade after autogenerated software was in full swing as an industry should tell you everything about which side of the marketing veil you are on.

Anyway, the things I write are more for people interested in grappling with the specifics that come up when one considers these things seriously. Whether they exist or are hypothetical. People that go "tldr I remember bitcoin" will probably not profit.

I think that's an essential part of the goal for those marketing campaigns: for you to think of it purely in the terms they decide, whether stereotypically for or stereotypically against.

Myself, I find the whole phenomenon fascinating. So get ready your safe space pacifiers, I'll probably have more to say.
 
For example, this:

Somebody above was writing about "judgement," as if autogenerated software wrote some kind of pure, unbiased analysis of any given thing (setting aside the issue of quality). That it wasn't "designed" to judge.

But that is a misunderstanding of how the technology works. You can design one of these systems so that every output it gives has a bias, or "judgement" built in.

I won't explain why; I will leave you to the boundless open documentation on the subject. It is time you stop expecting everything you think about to be spelled out for you. RTFM.

By the way, another thing people don't consider: it's interesting that the bitcoin craze produced such a frenetic build-up in raw CPU production and CPU design, which is where the bottleneck for autogenerated software is.
 
Well, I am downloading Google Antigravity right now. An entire IDE dedicated to vibe coding. Cover me, I'm going in.

Well...

"Error
Encountered retryable error from model provider.
You have reached the quota limit for this model."

That was on my second(!) prompt.

%%

Asking it to convert cstream from C to C++, it just made one main class and stuffed a bunch of functions into it as member functions. It changed some char * to std::string and used a vector of char for the main file I/O buffer. It silently deleted the functionality to report throughput every N seconds, leaving an unused signal handler. It freely mixed stdio and iostream for message output.
The good: it compiles with one warning
The bad: functionality is 100% broken with no error message

%%

Asking it to change seekbench, which is already C++ but uses the POSIX API, to use modern C++ APIs instead, starting with std::chrono.
  • First it tried to convert cstream again, this IDE is not very context sensitive
  • It produced a 1000 line diff, way too many changes unrelated to the requested changes
  • Result compiles with no warnings and basic functionality that I tested works
  • It figured out a good combination of commandline flags to test it (it doesn't work with no arguments). This is mildly impressive
  • Asking it to use std::thread rather than directly managing posix threads: this works as expected. This is a simple change, though
  • Use std::filesystem. Works as expected, but this is only one code block

So, overall, mixed results. Obviously it works much better with small changes. I could commit none of the changes, even the ones that worked, because of the diff bloat; I would wreck my `git blame` history.
 