We had some interesting random threads in the past, including screenshots or playing games. Make a "random AI thread" and talk about it all day long there if you wish.
For me everything is ok "as it is". Do not overcomplicate things.
> We had some interesting random threads in the past. Including screenshots, or playing games.

Too random makes the forums messy.
> Those can stay in "offtopic".

True, but then discussions about LLM tools can stay in "Porting new software" without the need for a specific section under "Server and Networking".
> but then discussions about LLM tools can stay in "Porting new software"

Only if they are, in fact, about creating a new port for said tool. Installation/upgrade issues (with an existing port or package) can go in "Installation and updating of ports and packages"; that split is already there.
> Any suggestions for the forum's title?

Artificial (Un)Intelligence and other Woes
> What might crypto coin mining go under?

Pollution
> What I never read anywhere is what a single computer running a local "AI" service can achieve with some open source/public domain LLM blob that sets the initial "intelligence". Does it take complicated English questions?

One thought: "AI" has existed as a programming-language feature for many decades (i.e., "an AI" is part of computer games in general). There are also many books on writing AI for use in games, etc. I actually own, and have read, many of these very books.
The generic term "AI" sort of got hijacked more recently to mean, instead, "a C-3PO-like intelligence that surpasses the human mind".
I would (maybe) call this something more like:
- "Using a generalized AI to get answers"
- "AI and Large Language Models"
- "The generalized AI didn't give me answers I like, so now I am going to ask you instead"
> I'd rather see gaming get its own forum first, not just lumped into multimedia.

It should be lumped there because it's about how to technically run games on FreeBSD. If you have a separate games sub-forum, it will be all about playing games, with little to do with FreeBSD, which is what this forum is about.
What I never read anywhere is what a single computer running a local "AI" service can achieve with some open source/public domain LLM blob that sets the initial "intelligence". Does it take complicated English questions?
> In my case AI advised,

It takes a while to find a local LLM that "fits" you. I use a dense variant of Qwen.
It competes well with online LLMs. Of course it can't do live web search, and it has a hard cutoff date for the data inside.
And you need some serious computing horsepower to run it.
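As a rough illustration of the "horsepower" point: a common rule of thumb (my assumption, not something stated in this thread) is that a local model's weights alone take roughly parameter count × bytes per weight of RAM/VRAM, plus some runtime overhead for the KV cache and buffers. A minimal sketch:

```python
def estimate_memory_gb(n_params_billion: float, bits_per_weight: int,
                       overhead: float = 1.2) -> float:
    """Very rough RAM/VRAM estimate (GB) for loading a local LLM.

    n_params_billion: parameter count in billions (e.g. 14 for a 14B model)
    bits_per_weight:  precision/quantization (16 = fp16, 4 = 4-bit quantized)
    overhead:         guessed multiplier for KV cache and runtime buffers
    """
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 14B model: roughly 34 GB at fp16, but under 10 GB with 4-bit quantization,
# which is why quantized builds are what most people run locally.
print(f"14B fp16 : ~{estimate_memory_gb(14, 16):.0f} GB")
print(f"14B 4-bit: ~{estimate_memory_gb(14, 4):.0f} GB")
```

The exact numbers vary by runtime and quantization format; the point is only that memory scales with parameter count times precision, which is what puts dense mid-size models out of reach of modest hardware at full precision.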
Any suggestions for the forum's title? Will probably add this in the "Server and Network" category, but it needs a short and clear title; it's quite a broad topic.
And yes, this won't be a section for "AI told me this, is that correct?" types of questions; it'll be about configuring and setting up various AI-related ports/packages. Generic AI topics/rants/whatever can stay in "offtopic".
I never did anything with it. Sounds like you need at least 30 TB of storage or so to start with. And does it grow with usage? Some distributed system must be possible: periodically sync the nodes with all newly collected LLM data, shared p2p-style.

In my case AI advised:
- Qwen2.5-Coder:14b: This is generally the best all-rounder for your specific mix. It is trained on over 5.5 trillion tokens and excels at maintaining general reasoning alongside code generation.
- DeepSeek-Coder-V2:16b-lite-instruct: A highly specialized Mixture-of-Experts (MoE) model that achieves performance comparable to GPT-4 Turbo in coding tasks.
> Any suggestions for the forum's title? Will probably add this in the "Server and Network" category but need a short and clear title, it's quite a broad topic.

High-performance computing/HPC? It could cover CUDA/OpenCL on GPUs (Folding@home, GPU-accelerated AI, etc.)
> High-performance computing/HPC? It could cover CUDA/OpenCL on GPUs (Folding@home, GPU-accelerated AI, etc.)

Please don't contaminate my safe CUDA space with AI perversion of GPU programming.
AI posts not being confined to some sort of space on the forums is starting to detract from the forums in general. I almost feel it would serve the forums well if there were an entire "AI" category with its own Off-topic, Development, Ports, Security...