Will FreeBSD adopt a No-AI policy or such?

Well it's one of those ironies, isn't it?

There are all sorts of interesting subtleties in the possible deployments of autogenerated software, but all this thread can arrive at is THERE IS NOTHING TO BE SCARED OF, which possibly indicates some fear somewhere about it.

Which, I always felt, is a key component of autogenerated software marketing. If it's just a new technology for generating software, then diving into the details is the first thing you would do, and arriving at a sane policy for it would be a very simple and technical discussion.

But if it's an Intelligence that gets Trained, a Large! Language Model with a network of Neurons, maybe there is some kind of superior being being summoned here and that's scary.

At least, in my experience, I have seen people be unable to discuss it unemotionally.

Maybe it's for the best, maybe the whole phenomenon is having a filtering function.
 
The reason why I'm not afraid of LLM generated code is that it cannot enter my own source trees without me actually doing (committing) it.

Which of course includes review and instant rejections of unreasonably large diffs for the desired change at hand.
 
So AI generated code replaces the junior developers and we need senior developers to curate the output.
Which implies "how do junior devs become senior to vet the AI code"
:)
"Hello chicken, meet egg"
 
but all this thread can arrive at is THERE IS NOTHING TO BE SCARED OF, which possibly indicates some fear somewhere about it.
Not really, it sounds like we are all more lethargic about this whole hype than anything XD.

When a wide range of participants with adjacent knowledge in the area arrive at the same conclusion, you might want to consider that their views could be fairly on point. Especially if you are new to the field.

But if it's an Intelligence that gets Trained, a Large! Language Model with a network of Neurons, maybe there is some kind of superior being being summoned here and that's scary.
Translation: If it's an algorithm that gets optimised with enough information to be able to profit from and manipulate society, maybe the company behind it is anti-social.

Yep, I said it, your concept of AI is basically just a supermarket clubcard.

At least, in my experience, I have seen people be unable to discuss it unemotionally.
I suspect it's not the AI that people get frustrated at.

So AI generated code replaces the junior developers and we need senior developers to curate the output.
Which implies "how do junior devs become senior to vet the AI code"
I don't fully believe this but I will say it anyway: "The juniors are too busy hanging around internet forums trying to popularise Rust these days to be hassled by things like 'employment'" ;)

(But you are right, the hiring process is compromised. Outsourcing, AI marketing and volume are exacerbating the problem for juniors particularly. In ~40 years there is going to be a lack of seniors to take over)
 
cracauer@ kpedersen spoken like someone who has not "been a junior developer for a long time".

It's one of the interesting aspects of being in this industry for a long time (I'm at over 40 years). The old engineer is seen as an old dog who can't learn new tricks, but all the new tricks are what we saw in the past, just with different names. Oh, and "hey, can you mentor this new kid? He's got lots of good new ideas".
 
To be fair, I don't disagree. Many of them are really talented. It's just often difficult to convince the company to hire them when mid-level isn't that much more expensive in the UK.

And worse, as a consultancy, we find it a hard sell to the client that a junior consultant is working on their project.
 
And worse, as a consultancy, we find it a hard sell to the client that a junior consultant is working on their project.
I have worked with "juniors" that provided more value than some "seniors". It made me really understand "listen to everyone".

So maybe just listen to everyone? Make the old dog validate his position beyond "it's the way it's always been done", and make the pups draw pictures on the whiteboard (because that's the only way old dogs understand).
 
The reason why I'm not afraid of LLM generated code is that it cannot enter my own source trees without me actually doing (committing) it.

Which of course includes review and instant rejections of unreasonably large diffs for the desired change at hand.

Sounds like a rehash of kpedersen's argument: "I don't care because (implying 'as long as') it doesn't affect me." I hope I don't have to point out that this sidesteps forming any opinion on autogenerated software at all. Which you might not care about, but this thread is about what policy FreeBSD might have regarding it. If you design that policy on the basis of "I have no opinion," you by definition can only get a brainless policy.

Not really, it sounds like we are all quite lethargic about this whole hype than anything XD.

Lethargic describes the group of people who are reading this, or not even that, without participating. You fellows have reacted, at times with decidedly hurt feelings. Lethargy it isn't.

When a wide range of participants with adjacent knowledge in the area arrive at the same conclusion, you might want to consider that their views could be fairly on point.

I must have mistaken who I was talking to. How many of yous guys are autogenerated software engineers?

I will take the opportunity to say something alternative: "if they sound like the opinion of every dope on the street who doesn't know what 'OS' means, that's a bad sign."

It's not a bad sign because you don't know computers. It's a bad sign because you do.

Translation: If its an algorithm that gets optimised with enough information to be able to profit and manipulate society, maybe the company behind it are anti-social.

I don't see where one follows from the other. Just because a large company acquires technology that can profit from and manipulate society doesn't make it antisocial. In fact, that describes everything big companies have been doing since the dawn of man. It is the entire point of them. It's not bad or good; it is just inevitable.

The problem comes when marketing arises that is blatantly idiotic, and even educated men with enough independent thinking to use a system like FreeBSD parrot it unthinkingly, short-circuiting any reasoning. Then you start thinking, "maybe some people involved are less than spectacular."

I suspect its not the AI that people get frustrated at.
No, nobody ever gets annoyed at a thing they are scared of. They only get annoyed when the thing gets brought up... at whoever brings it up.

My experience, maybe yours differs.

(But you [@cracauer@] are right, the hiring process is compromised. Outsourcing, AI marketing and volume are exacerbating the problem for juniors particularly. In ~40 years there is going to be a lack of seniors to take over)

I will turn something you said around on you. If all the biggest companies are restructuring themselves to the core, involving tens of trillions of dollars and many decades of planning... do you think maybe it might be time to reconsider your position that autogenerated software is a "gimmick"?

Shoot, maybe they're just making it up as they go along (can I say "shoot"?).
 
I must have mistaken who I was talking to. How many of yous guys are autogenerated software engineers?
Because you are using the term "autogenerated" incorrectly, I imagine many of us here have autogenerated some kind of software code.

For example, myself, during my PhD, I developed a program to automate the generation of code against the OpenGL spec to serialize it across a socket.
I also implemented the initial proof of concept inline asm functionality for the Emscripten C++ transpiler whilst working on some of the LEGO titles.

Whereas I know relatively little about LLMs other than that they tend to be based on llama.cpp these days, and that the only people knowledgeable about them tend to be those who started their research decades ago with barely any funding. The other 99.99% are literal hype-merchants who were discussing Bitcoin the week before.
 
I'm imagining, in the future, more core OS quality assurance being left to AI automation decisions (less human checking of QA before something is pushed as an update), and I've already seen hints of that not working ideally with mainstream Linux. I believe that works for enterprise OSs, but not for FOSS OSs, and it introduces a disconnect of priorities (the OS being up-to-date and rolling vs. making sure stuff works for free users; something like Debian Sid vs. stable, with OSs finding Sid's approach easier while masquerading a stable presentation to users).

At the root of that is efficiency, which feels like decisions made against the end-user experience, possibly for reasons of wanting more users/marketability?

I'm not sure about other businesses' AI usage, but operating systems or programs using it eventually affect me when they shouldn't, and I'm not for encouraging more of that :p
 
I'm sure that can be done by, you know, AI? -- some CEO, somewhere.
Some investor asks Sam Altman (of OpenAI) how they are planning to turn AI profitable. Altman's answer: First they will turn AI into AGI (general AI, capable of answering any question), then they'll ask the AGI how to turn AI profitable. Thereby implicitly admitting that he doesn't know.
 
It's like when all the Linux distros adopted systemd; did the users have a voice in that? No...
I trust the gurus of FreeBSD on this; AI is the end of things (personal opinion).
If the worst happens, I'll switch to Windows 95 or FreeBSD 10.
 
Because you are using the term "autogenerated" incorrectly, I imagine many of us here have autogenerated some kind of software code.

For example, myself, during my PhD, I developed a program to automate the generation of code against the OpenGL spec to serialize it across a socket.
I also implemented the initial proof of concept inline asm functionality for the Emscripten C++ transpiler whilst working on some of the LEGO titles.

Whereas I know relatively little about LLMs other than that they tend to be based on llama.cpp these days, and that the only people knowledgeable about them tend to be those who started their research decades ago with barely any funding. The other 99.99% are literal hype-merchants who were discussing Bitcoin the week before.

I am not talking about autogeneration of software code. I am talking about autogeneration of software. Binaries. Software that isn't designed, but autogenerated. Nobody knows, nor probably can know, how any given autogenerated program works. There is no design, no blueprint. You choose an algorithm, or a suite of them, you initialize a bunch of bits randomly, and then you feed data through the algorithm, which shifts the bits incrementally in response. You don't know the logical chain between input and output. There is none. It is pure statistics. In this sense, the software is purely and genuinely autogenerated.

What you are talking about is automated code creation. What I am talking about is autogenerated software automating code creation. There is a difference. It's not the same if you, kpedersen, design code autogeneration as when an already autogenerated system does it. What you generate can be reverse engineered. What autogenerated software, truly autogenerated, generates cannot. It is like a computer program that is grown instead of written. I wonder if the difference is obvious, or if I am repeating myself and sounding dumb. With new technology, it often pays to repeat yourself.
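To make the "grown, not written" point concrete, here is a toy sketch of my own (a single perceptron, purely illustrative, nothing to do with how any particular LLM is actually built): the rule the program ends up implementing is never written down anywhere; weights start as random numbers and get nudged in response to data, exactly as described.

```python
# Toy illustration: "grow" a function instead of writing it.
# We never state the rule (logical AND); we only shift randomly
# initialized weights incrementally in response to data.
import random

random.seed(1)
# Step 1: initialize the "bits" (weights) randomly.
w = [random.uniform(-1, 1) for _ in range(3)]  # two inputs + bias

# Training data: inputs and desired outputs. The AND rule lives only here.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def out(x):
    """The grown program: a weighted sum and a threshold. No logic, no design."""
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1 if s > 0 else 0

# Step 2: feed data through; every error shifts the weights a little.
for _ in range(50):
    for x, target in data:
        err = target - out(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        w[2] += 0.1 * err

print([out(x) for x, _ in data])  # behaves exactly like AND
print(w)  # ...but the "design" is just three opaque numbers
```

Nobody wrote an AND gate here; the final behaviour can only be observed, and the "source" is three numbers with no logical chain back to the specification. Scale that from 3 weights to hundreds of billions and you have what I mean by genuinely autogenerated software.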

who started their research decades ago with barely any funding.

Those are the pioneers. There is a second batch, at work from the mid-2000s to the mid-2010s, who really made it what it is today. Those people really understand it. In a way, they were already their own deluded hype merchants, because they watched too much Terminator and fell in love with the idea of summoning the Matrix Gods. It's silly, a child's anthropomorphization. Later on, the real marketing masters stepped in, the brainwashers par excellence. But those guys, the ones who were already a little self-deluded, were some really f******g smart dudes developing some serious technology that they do understand. It is worth reading what they had to say.
 
I'm imagining, in the future, more core OS quality assurance being left to AI automation decisions (less human checking of QA before something is pushed as an update), and I've already seen hints of that not working ideally with mainstream Linux. I believe that works for enterprise OSs, but not for FOSS OSs, and it introduces a disconnect of priorities (the OS being up-to-date and rolling vs. making sure stuff works for free users; something like Debian Sid vs. stable, with OSs finding Sid's approach easier while masquerading a stable presentation to users).

At the root of that is efficiency, which feels like decisions made against the end-user experience, possibly for reasons of wanting more users/marketability?

I'm not sure about other businesses' AI usage, but operating systems or programs using it eventually affect me when they shouldn't, and I'm not for encouraging more of that :p

Whatever the case, I don't see why a niche, high-end project like FreeBSD shouldn't take it upon itself to try it the pure human way. If nothing else, because it has a chance to make something valuable that way, unlike others, and almost everybody else is going to do it the machine way. We don't know what that will look like. But we know it will all probably look the same.

The truth is that you don't need to go to college to learn C, or systems programming, or FreeBSD architecture. The problem that Linux, with its gigantic and schizophrenic scope, has is not one that FreeBSD has. FreeBSD is a culture that can survive the developercalypse. Because, frankly, you can learn it at home. And some of us will never get bored with it.

Once you let the machines in, though, it's bye bye. It will simply become another molecule in the great machine generated mass, indistinguishable from anything else. A Linux copy, after all these years.
 
Well, here is one thing to consider, maybe: autogenerated work takes design decisions out of the hands of the project leads and puts them into the hands of whoever produces the autogenerated software that is generating the work (translations, code, review, etc.).

As has been seen in many cases of autogenerated content of all sorts, bias (direction, design decisions) can most definitely be cooked in.

This is the second thread I've seen where you are using weasel words to try to redefine work performed by LLMs and the issues around them. I don't think you're arguing in good faith, and I won't respond further. It sounds like every other form of apologetics, where you can excuse every wrong by reframing the question.

I'll simply say that when you try to preach to experienced programmers and engineers how your new toy does the job better, and you refuse to take no for an answer, you're in the wrong.

When the folks pushing "AI" (LLMs) are the same people that pushed crypto, NFTs, and other scams, I refuse to use their product and their outputs.
 