> Ideally the fear of having AI-generated bugs makes us check the source code, which leads to more eyes on the code and proper bug reports.

I'm sure that can be done by, you know, AI? -- some CEO, somewhere.
> Ideally the fear of having AI-generated bugs makes us check the source code, which leads to more eyes on the code and proper bug reports.

That's what we traditionally have junior developers for: to make the bugs and keep everyone on our toes.
> I'm sure that can be done by, you know, AI? -- some CEO, somewhere.

This.
> ...but all this thread can arrive at is THERE IS NOTHING TO BE SCARED AT, which possibly indicates some fear somewhere about it.

Not really; it sounds like we are all more lethargic about this whole hype than anything XD.
> But if it's an Intelligence that gets Trained, a Large! Language Model with a network of Neurons, maybe there is some kind of superior being being summoned here, and that's scary.

Translation: if it's an algorithm that gets optimised with enough information to be able to profit from and manipulate society, maybe the company behind it is anti-social.
> At least, in my experience, I have seen people be unable to discuss it unemotionally.

I suspect it's not the AI that people get frustrated at.
> So AI-generated code replaces the junior developers and we need senior developers to curate the output.
> Which implies: "how do junior devs become senior enough to vet the AI code?"

I don't fully believe this, but I will say it anyway: "The juniors are too busy hanging around internet forums trying to popularise Rust these days to be hassled by things like 'employment'."
So AI-generated code replaces the junior developers, and we need senior developers to curate the output.
Which implies: "how do junior devs become senior enough to vet the AI code?"
"Hello chicken, meet egg."
> And worse, as a consultancy, we find it a hard sell to the client that a junior consultant is working on their project.

I have worked with "juniors" that provided more value than some "seniors". It made me really understand "listen to everyone".
The reason I'm not afraid of LLM-generated code is that it cannot enter my own source trees without me actually committing it myself.
Which of course includes review, and instant rejection of diffs that are unreasonably large for the change at hand.
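That "reject unreasonably large diffs" gate can be sketched as a pre-commit check. Everything concrete here is an assumption for illustration, not anyone's actual policy: the 200-line threshold is arbitrary, and the input format is what `git diff --cached --numstat` prints (one `<added>\t<deleted>\t<path>` line per file).

```python
"""Illustrative sketch of a "reject unreasonably large diffs" gate.

Assumptions (not from the thread): a 200-line threshold, and input in
the format produced by `git diff --cached --numstat`.
"""

MAX_CHANGED_LINES = 200  # arbitrary threshold for this sketch


def diff_too_large(numstat_output: str, limit: int = MAX_CHANGED_LINES) -> bool:
    """Return True if the staged diff changes more lines than `limit`."""
    total = 0
    for line in numstat_output.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added == "-" or deleted == "-":
            continue  # binary file: numstat prints "-" for its line counts
        total += int(added) + int(deleted)
    return total > limit
```

Wired into `.git/hooks/pre-commit`, the hook would run `git diff --cached --numstat`, feed the output to `diff_too_large()`, and exit non-zero to block the commit so the change has to be split up and reviewed.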
When a wide range of participants with adjacent knowledge in the area arrive at the same conclusion, you might want to consider that their views could be fairly on point.
> I suspect it's not the AI that people get frustrated at.

No, nobody ever gets annoyed at a thing they are scared of. They only get annoyed when the thing gets brought up... at whoever brings it up.
(But you [@cracauer@] are right, the hiring process is compromised. Outsourcing, AI marketing and volume are exacerbating the problem for juniors particularly. In ~40 years there is going to be a lack of seniors to take over)
> I must have mistaken who I was talking to. How many of yous guys are autogenerated software engineers?

Because you are using the term "autogenerated" incorrectly: I imagine many of us here have autogenerated some kind of software code.
> I'm sure that can be done by, you know, AI? -- some CEO, somewhere.

Some investor asked Sam Altman (of OpenAI) how they plan to make AI profitable. Altman's answer: first they will turn AI into AGI (general AI, capable of answering any question), then they'll ask the AGI how to make AI profitable. Thereby implicitly admitting that he doesn't know.
For example, during my PhD I developed a program that automated generating code against the OpenGL spec to serialize calls across a socket.
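Spec-driven generation of that sort can be sketched in miniature. The two-entry "spec", the wire format, and the shape of the generated stubs below are all invented for illustration and are not the actual PhD tooling; a real generator would parse the OpenGL XML registry and emit C.

```python
"""Toy sketch of spec-driven code generation: turn an API "spec" into
stubs that serialize each call into bytes for a socket. The spec format
and wire format are invented for this example."""

import struct

# Tiny stand-in for a parsed API spec: function name -> argument types.
# Both signatures match real OpenGL calls, but the dict format is made up.
SPEC = {
    "glClearColor": ["float", "float", "float", "float"],
    "glViewport": ["int", "int", "int", "int"],
}

PACK = {"float": "f", "int": "i"}  # struct format char per spec type


def generate_stub(name, arg_types):
    """Emit Python source for a stub that packs one call into bytes."""
    args = ", ".join(f"a{i}" for i in range(len(arg_types)))
    fmt = "".join(PACK[t] for t in arg_types)
    return (
        f"def {name}({args}):\n"
        f"    # wire format: function name, NUL, then little-endian args\n"
        f"    return {name!r}.encode() + b'\\0' + struct.pack('<{fmt}', {args})\n"
    )


# Generate the stubs and load them, as a codegen pipeline would.
stubs = {"struct": struct}
for fname, types in SPEC.items():
    exec(generate_stub(fname, types), stubs)
```

The receiving end of the socket would unpack with the same generated format string and invoke the real GL call, which is the point of generating both sides from one spec.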
I also implemented the initial proof-of-concept inline asm functionality for the Emscripten C++ transpiler whilst working on some of the LEGO titles.
Whereas I know relatively little about LLMs, other than that they tend to be run with llama.cpp these days, and that the only people knowledgeable about them tend to be those who started their research decades ago with barely any funding. The other 99.99% are literal hype merchants who were discussing Bitcoin the week before.
I'm imagining more core-OS quality assurance being left to AI automation in the future (less human checking of QA before changes are pushed as updates), and we have already seen hints of that not working ideally with mainstream Linux. I believe that works for enterprise OSs but not for FOSS OSs, and it introduces a disconnect of priorities: keeping the OS up-to-date and rolling versus making sure stuff works for free users. Something like Debian Sid vs stable, with OSs finding Sid's approach easier while masquerading behind a stable presentation to users.
At the root of that is efficiency, which feels like decisions made against the end-user experience, possibly to chase more users/marketability?
I'm not sure about other businesses' AI usage, but operating systems or programs using it eventually affect me when they shouldn't, and I'm not for encouraging more of that!
Well, one thing to consider: autogenerated work takes design decisions out of the hands of the project leads and puts them into the hands of whoever produces the software that generates the work (translations, code, review, etc.).
As has been seen in many cases of autogenerated content of all sorts, bias (direction, design decisions) can most definitely be cooked in.
After three years of immersion in AI, I have come to a relatively simple conclusion: it’s a useful technology that is very likely overhyped to the point of catastrophe.
> using AI to get the job done is often a more time-consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes

This doesn't seem to be the case for those wanting to use it for open-source contributions. They very much find that it improves their workflow.
> Any news on FreeBSD resisting this scary route?

If you are used to existing static-analysis tools, it's not particularly scary. I think what we are seeing in the world of Linux is more their haphazard approach to development than LLMs themselves.
> It's like when all the Linux distros adopted systemd: did the users have a voice in that? No...
> I trust the gurus of FreeBSD on this; AI is the end of things (personal opinion).
> If the worst happens, I'll switch to Windows 95 or FreeBSD 10.

Windows 98 SE.
> This doesn't seem to be the case for those wanting to use it for open-source contributions.

Actually, the data seems to support the opposite thesis: https://futurism.com/ai-coding-programmers-reality
> the programmers actually spent 19 percent more time when using AI than when forgoing it. [...] they still thought those tools sped them up by 20 percent.