Can we please have a forum for LLMs/AI topics?

"Current discussion seems to fit in existing topics"
Agreed. I would indeed love to see a bit more experience sharing, configuration hints, common problems, howtos and such (and for starters I would highly appreciate somebody answering a few of my questions in the other thread), but aye, that still fits well here.
 
Why create a mechanism for the scrapers to ignore/filter? I.e., the forum is already being scraped and digested, so a dedicated field is just something the scrapers can ignore in preference for the untainted content (we have to muck about in this slop; they/it can too).
 
This nonsense wouldn't exist if p2p filesharing networks hadn't been killed over the media giants' loss of profit. I believe the OSS world has to seize control again, like in the Limewire/DC++ days. If companies are legally permitted to download, use and exchange all public data in whatever form, private computer users have that right too, without any commercial parties involved. Operating systems may need a secure infrastructure that shields communicating users from unwanted watchers ... This could also be the base of an (optional) OS-native learning system or AI.
 
p2p works well, but it can be a little isolating on a larger scale. Also, the last firm I worked at was headless (no server); our files were all hosted on "Dropbox" (so to speak), which worked awesomely.
 
I plan to post, right now, how to run GPU-accelerated local LLMs on FreeBSD, which models to use for coding, and how different hardware trades off; also how Hugging Face works and how llama.cpp moved the caches around.
This sounds like a candidate for "Howtos and FAQs".
 
This sounds like a candidate for "Howtos and FAQs".

Not sure what people will reply to that and my followups.

In any case, use of an advanced local LLM is so slick that it is a 2-liner (assuming you start from working GPU drivers). Not much for the initial HOWTO post.
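For illustration, a sketch of what such a 2-liner could look like with llama.cpp (the model URL and filename below are placeholders, not specific recommendations, and the flags assume a reasonably recent llama.cpp build):

```shell
# fetch a quantized open-weight model (URL and filename are illustrative placeholders)
fetch -o model.gguf "https://huggingface.co/<org>/<repo>/resolve/main/<model>.gguf"
# run it interactively, offloading all layers to the GPU
llama-cli -m model.gguf -ngl 99 -p "Write hello world in C"
```

`-ngl 99` asks llama.cpp to offload as many layers as possible to the GPU, which is where the working drivers come in.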

I kindly ask to give such a subforum a try.
 
Reminds me of this joke: A town council decided not to build a new station because, as the mayor put it: "It doesn't make sense. Every time a train passes through here, I look out the window, and there’s absolutely no one waiting to get on!"
 
This is darker than it may seem at first glance.

As cracauer@ already mentioned, the brain adapts to training (or the lack thereof) much like a muscle. But such degeneration is mostly reversible.

The emotional intelligence aspect is more grim: many people are already afraid to open up to another person, because there is a risk involved and one might get hurt. But opening up is the only way to tear down the walls that enclose us in isolation.
Relating to an AI now offers a way to satisfy that desire totally risk-free, but also without much chance to learn and develop, and without the possibility of experiencing synergy ("the whole is more than the sum of its parts"), replacing the option of not being alone with a mere illusion of not being alone.

The source of our human values has little to do with intellectual thinking, and much more with our ability to feel, to perceive beauty, to engage in desires - all abilities that a machine most likely cannot have.

With a society that focuses mostly on being respected and influential (and not much else), things are already in a bad state, and more so with our Marxist-influenced leftists teaching that sex is just a need like the need for eating and sleeping, and love only a fake invention of the bourgeoisie.

AI may now open ways to drive this even further, to abandon actual social competences and replace them with mere rulesets. And in its wake our human values will also be lost, and instead of the AI supporting us, we humans will be shaped in the image of the AI.
 
This is darker than it may seem at first glance. [...] instead of the AI supporting us, we humans will be shaped in the image of the AI.

I remember how up in arms the social LLM users were over one ChatGPT upgrade, where the new model (just a point update from the desired one) was perceived as utterly cold and unsocial. OpenAI actually re-enabled the old one as a choice.

I haven't seen updates on this - whether the newest ChatGPT is more social again, or whether people went to a different vendor.

But either way, once you run an open-weight model locally with one of the llamas it cannot be taken away from you.
 
"The whole is more than the sum of its parts" by PMs. This is from Logic; and it has no relevance to computers. Some Greek guy wrote this in the B.C. era.
Well, he wrote "the whole is bigger than the part".
 
But either way, once you run an open-weight model locally with one of the llamas it cannot be taken away from you.
Of course it needs to be distinguished what exactly is meant when we talk about "AI".

Installing something like ollama on your local machine to experiment, or even using it to help you with some of your personal tasks based on training from your own personal experience, crawling some large heap of data to find patterns in addition to conventional analyses, or creating NPCs for a computer game is not the same as ChatGPT, Copilot etc.
Looking only at the technical principle of how they work, they don't differ much. But here comes what I also often repeat: size also makes a difference.
An "AI" of some GBs running on your GPU, which you train personally with your own experiences for your own purposes, and computers the size of warehouses containing hundreds of thousands of GPUs, trained by many thousands of people to crawl the internet, are completely different beasts.
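As a rough sketch of that local end of the spectrum (assuming the ollama package is installed and its service can be started via an rc.d script; the model tag is just an example of a small open-weight model):

```shell
# start the service, pull a small model, and talk to it - all on local hardware
service ollama onestart
ollama pull llama3.2
ollama run llama3.2 "Summarize what a FreeBSD jail is"
```

The whole thing fits on a single consumer GPU, and nothing leaves your machine.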
I don't want to go into technical details, because that would miss my point.
Knowing how it works, knowing how to evaluate it, how to discriminate, when to use it for what and when conventional methods are better (and knowing those), how to rate its outcome... that is already the first step towards expertise.
And that's my point: Not only size but also expertise makes the difference.
The masses use it without any. They don't distinguish between AI, LLM, ML, et al. They don't even see "AI", they see "I": science fiction - HAL, WOPR, C-3PO, Commander Data, Marvin... - become reality: computers can talk (and think) like humans, while knowing everything.
This, in combination with the religious belief in technical progress and the trust in the infallibility of machines - and thus the naive, unsophisticated, immature, even ignorant use of something very powerful and very easy to use, while not understanding it and not using it correctly - is dangerous. Especially when it's hyped to be sold quickly and without consideration, because the most money can always be made by selling something addictive to the masses.
An example, as you all know:
You and a lot of others here are sophisticated, experienced programmers.
Your knowledge and experience are based on lots of learning and programming the conventional way. You know how to deal properly with any piece of code produced by any kind of LLM or AI. For you this can be a useful tool, because you know if, when and how to fit it into your toolchain.
Now look at somebody just starting to learn computers.
Humans are animals. Animals are by nature lazy, or more correctly, they try to be maximally efficient: always looking to reach the target with the least energy spent. If they don't really have to do something, they save their precious energy. (That most animals are so very busy most of the time is because they must spend most of it finding food, or they starve to death.) While humans are so very proud of being outstandingly smart, in fact they are particularly lazy about thinking. Our relatively large brains are the most precious "muscle" we possess. The brain needs special training, which does not primarily help attract sex partners, and it needs a very special diet of nutrients that come only in combination with even larger amounts of other nutrients useful only to other organs. That's why people who focus on training their brains and neglect the rest too much tend to get fat (myself included).
As cracauer@ said, we are talking training.
So, back to our common AI-using computer newb: it doesn't matter whether it's a piece of code or some config to set up FreeBSD; you simply ask ChatGPT to produce some, within seconds a piece of "cryptic text" is given, the machine even tells you how and where to copy it, and voilà, the computer does what you want. If not, you try again and again until the shit works. If still not, you ask in the forums. At least one expert will deliver an answer, filling the gap for you.
So why should anybody take the long, strenuous, tedious, laborious and boring effort to learn programming or configuring FreeBSD at all, if it is all provided effortlessly, turn-key, on a silver platter, by a machine, within seconds? (Don't forget: we are looking at the naive, ignorant user - the total noob, not the expert, who can make a difference.) Some may even miss the point that all this asking the AI, trial and error, and reading all the AI's and forum's answers is, in sum, more work - while not really learning much - than doing it the conventional way, because they only look at the one step they are currently on, not at the whole picture.
Now think ahead.
This way expertise will die out with the last expert deceased ("grey beards from the stone age"), because nobody gains expertise anymore. For what? The machine knows it all, the machine does it all. There are two scenarios:
One day there is nobody left in the forums to correct the mistakes AI produces, while AI still depends on being corrected and trained. This way AI will suffer "dementia", and its stored expertise will be lost.
Or, the other way: AI will one day be improved to the point where it doesn't make mistakes anymore, but delivers reliably. Then no experts are needed anymore. But then neither is anybody else.
You see, you can turn it like you want:
There is a dilemma.
Or, to put it into a more radical picture:
With AI we outsource our brains.
What is a human without a brain good for? There is only one job for president of the ...*cough* What to do with the rest?

But there is hope:
I recently read that Sweden banned computers from schools and returned to pen and paper. 👍
I recommend reading
Clifford Stoll, High-Tech Heretic: Reflections of a Computer Contrarian. Knopf Doubleday Publishing, 2000
And just yesterday I read that at a US university a teacher went back to making the students write their texts on (mechanical) typewriters, so they cannot simply copy-paste, but must read what they are going to deliver at least once. (I wonder if there are still enough machines and ink ribbons left.)

But anyway, this still will not solve the old core problem we have:
When schooling is finished, most humans flee any kind of learning ("Yeah, yeah, I will set up the hands-free car kit. But not now. I need time for that."), because what schools mostly teach is that learning is shit, instead of encouraging people to keep learning by themselves.
So, "back" to AI the very moment you no longer have to avoid it, and power the brain down to the lowest operating mode, just enough for what's currently needed.
 
I'm not going to create an empty forum and wait for it to fill. Period.

When I added "Emulation and Virtualization" it was because "General - base" was swamped with threads about jails, VirtualBox and bhyve. I created it because those threads dominated the forum and drew attention away from the other questions and issues posted there; those got completely drowned out. I had 3 pages' worth of threads to move to the new forum, which cleaned up "General - base" quite nicely. I'm not seeing that amount of posts about configuring, running and maintaining AI tools or services.

I would prefer to have at least a full page's worth of threads to fill a new forum. I propose a deal: give me at least half a page (that's ~15 threads) of content and I'll create the forum. And to reiterate, it'll be a forum about configuring, running and/or problems relating to the existing AI ports/packages. Generic AI discussions (directly or indirectly involving FreeBSD) can stay in "Offtopic".
 
One day there is nobody left in the forums to correct the mistakes AI produces, while AI still depends on being corrected and trained. This way AI will suffer "dementia", and its stored expertise will be lost.
Or, the other way: AI will one day be improved to the point where it doesn't make mistakes anymore, but delivers reliably. Then no experts are needed anymore. But then neither is anybody else.
You see, you can turn it like you want:
There is a dilemma.
Everything is possible. Maybe an interaction between humans and an official forum bot that learns from its mistakes. Documentation could become more interactive, using AI chats.

AI is like the Internet: there was a before and an after the Internet, and there will be a before and an after AI. AI, like the Internet, is too pervasive, and it will change a lot of things.
 
I'm not going to create an empty forum and wait for it to fill. Period.

LOL -- Well... (ALL) forums start out empty and get filled over time? :cool:

I am sure the USENET community endlessly argued over what newsgroups they were and (were not) going to add to USENET today/tomorrow/next year/etc. Are we going to add rec.arts.gardening today? Or comp.games.pc.lord.of.the.rings? Actually, I remember (A LOT) of daily arguing back in the day about which USENET groups were permissible and which were not, or even USENET groups that became completely banned. This forum thread reflects some of that same discourse from the 1990s.

But Maturin and cracauer@ both describe experimenting with AI as software that can be run on FreeBSD - something you can learn, program and experiment with.

Installing something like ollama on your local machine to experiment, or even using it to help you with some of your personal tasks based on training from your own personal experience, crawling some large heap of data to find patterns in addition to conventional analyses, or creating NPCs for a computer game is not the same as ChatGPT, Copilot etc.
Looking only at the technical principle of how they work, they don't differ much. But here comes what I also often repeat: size also makes a difference.
An "AI" of some GBs running on your GPU, which you train personally with your own experiences for your own purposes, and computers the size of warehouses containing hundreds of thousands of GPUs, trained by many thousands of people to crawl the internet, are completely different beasts.

So here -- someone is learning, programming and coming up with "practical ways" to use FreeBSD and AI together. Do I think FreeBSD would be a great AI platform? I honestly don't know. The general idea of FreeBSD is ... try something and see if it works. If it works -- great! If it doesn't, try something else. FreeBSD and Unix have always been about research and development, and ... (none of us) knows in April/2026 what will happen in the future.

Where Maturin is COMPLETELY RIGHT is that we don't want people's brains to become "mush", sending them off to blindly and stupidly listen to/read Copilot/ChatGPT/Grok/etc. and get wrong answers. What IS HAPPENING is that those stupid AI wrong answers end up "all over these forum threads", and yes, the grey beards all end up having to "talk people back down to Earth again" in order to correct whatever issue they are having. Do I wish people would really LEARN things? Sure! But everyone is being told (everywhere) that learning is a waste of time, since the "gadget" can just do all of this "hard and boring" learning for you.

A better way is to let people install AI software (or whatever) on their FreeBSDs, play with it, learn to program it, feed the AI, learn what it can and can't do, (maybe) come up with something new, maybe get completely bored and go play racquetball or something. But at least their brains are moving, instead of being fed a bunch of "crap" from some corporate-sponsored/created AI enigma.

At the moment I see "AI software" as equivalent to writing/playing/learning how to design, write, run, etc. "computer games". Hey! This is kind of cool, I would like to learn more about this! - kind of thing. This is not wrong.
 
 
it'll be a forum about configuring, running and/or problems relating to the existing AI ports/packages.
As far as I am concerned, that can stay in the ports/packages forum. AI ports/package management is not going to generate loads of articles.

Generic AI discussions (directly or indirectly involving FreeBSD) can stay in "Offtopic".
AI use cuts across everything and is already being discussed in multiple places. *This* is what I was hoping would get a forum.
 
Are you really going to count stuff like this? ☝️ This is not a good plan.

Why not? Apparently the bosses don't like having empty (or almost empty) boards here, so at creation time there should already be a good couple of threads that can be immediately moved to the new board.
(I might have a different viewpoint, but that doesn't matter.)

So then, let's just create those threads (and have some fun): for instance, let's ask the AI for a proper upgrade path from IPv6 to IPv8.
 
I'd rather it be off-topic if AI has to bully its way into a dedicated forum spot.

AFAIK Bun support was holding up Claude, and I had requested Bun support plenty of times before Claude; I'd rather AI intermingle before isolating itself (AI's "tech" isn't AI-exclusive).
 