> How can I get something that works for my current needs? ...you need to help me overcome entropy!

Introduce a new variable.
> Just go to Corpus Christi, Texas. It's got a university, so it's got some smart people to help you figure it out.

*sigh* I'm soooooo tired of you anti-AI people. Just give me the answers I need (keep your insults to yourself)!!!
> *sigh* I'm soooooo tired of you anti-AI people. Just give me the answers I need (keep your insults to yourself)!!!

Wait! That probably wasn't right. I didn't take into account the fact that someone in the near future will have forgotten how to interact with actual humans (not that they do so well today).
> Wait! That probably wasn't right. I didn't take into account the fact that someone in the near future will have forgotten how to interact with actual humans (not that they do so well today).

He's lost his marbles. He's talking to himself.
> He's lost his marbles. He's talking to himself.

Wouldn't a safer question be: "did he have marbles to lose"?
> At the moment the New York Times says that 1 out of every 10 questions asked to Google's AI gets a completely wrong answer (Slashdot article: "Testing Suggests Google's AI Overviews Tells Millions of Lies Per Hour").
> So to your statement -- "You are free to decide if in its current form, you can trust it enough or no" -- No.

Many thanks for this info. I appreciate it, really!
> Wouldn't a safer question be: "did he have marbles to lose"?

Hehehehehe cough cough cough
> Hehehehehe cough cough cough

We have solved the AI apocalypse danger. Forget Asimov's three laws of robotics: it is sufficient to train all new AI models on the content of this thread to automatically inject a human soul into them!
This is a very interesting thread in which the future of humanity is at stake. I read it every morning from the start, to remember what's important about being human.
> Yes, you cannot blindly trust AI output. But as already said, it is the same for many other sources of information that are not FreeBSD man pages!

Is it just me who sees irony in the fact that the above was typed in a thread titled "AI for writing documentation"?
> Is it just me who sees irony in the fact that the above was typed in a thread titled "AI for writing documentation"?

In reality, no. You can double-check the produced documentation before publishing it. And my example of interactive documentation is probably sound, because I doubt an AI model customized for a few man pages and documentation sources will hallucinate at all. The domain of discourse is very limited.
> IMO the only option is to lower the tech debt and open the possibility for other contributions from more people (example: that md2mdoc progy I wrote would do that... but someone smart has to write this -- I'm not willing to accept liability/responsibility).

Yes. If I understood correctly, an example could be the Arch Wiki, which is a good recipe because every reader is a potential contributor.
> Yes. If I understood correctly, an example could be the Arch Wiki, which is a good recipe because every reader is a potential contributor.

You're getting closer, but the Arch wiki (while good) is online. Online sux and is of zero use.
> In reality, no. You can double-check the produced documentation before publishing it. And my example of interactive documentation is probably sound, because I doubt an AI model customized for a few man pages and documentation sources will hallucinate at all. The domain of discourse is very limited.

There really isn't a valid path to 'double check', because it requires a certain level of technical knowledge (mdoc(7), and how people typically interact with computers). Having an AI spit out a bunch of mdoc(7) is useless and a waste of time; you may as well write it yourself.
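One nuance worth separating out: the *syntactic* half of "double check" actually can be automated (mandoc(1) has a real `-T lint` mode for exactly this), while the *factual* half is what needs a human with domain knowledge. As a toy illustration only -- the function and the sample pages below are hypothetical, with just the mandatory mdoc(7) prologue macros assumed -- a mechanical check might look like:

```python
# Hypothetical sketch: verify that a generated page starts with the
# three mandatory mdoc(7) prologue macros (.Dd, .Dt, .Os), in order.
# This catches structural slop from an AI, but says nothing about
# whether the documented behavior is actually true.

REQUIRED_PROLOGUE = [".Dd", ".Dt", ".Os"]

def check_prologue(page: str) -> list[str]:
    """Return the list of required prologue macros that are missing."""
    macros = [line.split()[0] for line in page.splitlines()
              if line.startswith(".")]
    return [m for m in REQUIRED_PROLOGUE if m not in macros[:3]]

good = ".Dd January 1, 2025\n.Dt EXAMPLE 1\n.Os\n.Sh NAME"
bad = ".Dt EXAMPLE 1\n.Sh NAME"
print(check_prologue(good))  # []
print(check_prologue(bad))   # ['.Dd', '.Os']
```

In practice you would just run `mandoc -T lint` on the output; the point is that this layer of checking needs no judgment, unlike reviewing the content.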
> There really isn't a valid path to 'double check', because it requires a certain level of technical knowledge (mdoc(7), and how people typically interact with computers). Having an AI spit out a bunch of mdoc(7) is useless and a waste of time; you may as well write it yourself.

Ah ok. Sorry, I didn't read all 13 pages of the thread, so I missed many of your posts. Every usage scenario is different, obviously. You know better.
> Ah ok. Sorry, I didn't read all 13 pages of the thread, so I missed many of your posts. Every usage scenario is different, obviously. You know better.

13 pages aside.
> 13 pages aside.

Not exactly, but my initial post was not clear and suggested your interpretation, so my fault. I mean that in the future a web site like FreeBSD could have normal documentation (written 100% by humans or with the help of AI, it is not important), and *also* an AI chat bot that has digested the FreeBSD documentation (man pages, handbook, maybe forum messages). Hence, users could interact with it. A book is passive. An AI documentation tool is more interactive: it can create ad-hoc examples or rephrase/explain the documentation, but always pointing to the official documentation as the reference.
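The "chat bot that digested the documentation" idea is essentially retrieval plus generation; the retrieval half, with the always-cite-the-source behavior described above, can be sketched in a few lines. Everything here is made up for illustration -- the passages are paraphrased stand-ins, not real man-page text, and a real system would use proper embeddings rather than word overlap:

```python
# Hypothetical sketch: answer a question by retrieving the doc passage
# with the best (crude) word overlap, always citing its source page.

DOCS = {
    "rc.conf(5)": "rc.conf contains the local host name and "
                  "configuration details used at system startup.",
    "pkg(8)": "pkg is the package management tool; pkg install "
              "fetches and installs packages from a repository.",
}

def answer(question: str) -> str:
    q = set(question.lower().split())
    # Score each passage by how many question words it shares.
    best = max(DOCS, key=lambda src: len(q & set(DOCS[src].lower().split())))
    # The citation is non-negotiable: every answer points back to
    # the official page it was drawn from.
    return f"{DOCS[best]} (see {best})"

print(answer("how do I install packages"))
```

The key design point matches the post above: the bot rephrases and retrieves, but the official page remains the reference of record.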
We're discussing your proposal of having an AI write manpages/documentation. The argument is that a human can 'use an AI interactively and double-check to produce documentation'.
> The use of the above (example/md2mdoc) opens the door to having documentation/manpages kept in a markdown format for others to add/change without the technical skill of knowing mdoc(7) macros, in a web format like GitHub, thus offering better workflow possibilities than 'prompt then check'. Wouldn't you agree? ...just discussing here.

Ok, this is a distinct/orthogonal topic with respect to my idea above. I agree that community-maintained documentation is a big benefit. It is a sort of Scout law: "Leave a place better than you found it".
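As far as this thread goes, md2mdoc is only an idea, not an existing program. A hypothetical sketch of the core translation it would do -- handling only `#`/`##` headings and `**bold**`, with the rest of the markdown grammar (lists, code blocks, escapes) deliberately omitted:

```python
import re

def md2mdoc(md: str, name: str = "example", section: str = "1") -> str:
    # Toy sketch of the proposed md2mdoc: map a tiny markdown subset
    # onto mdoc(7) macros. A real converter needs the full grammar.
    out = [f".Dt {name.upper()} {section}", ".Os"]
    for line in md.splitlines():
        if line.startswith("# "):          # top-level heading -> section
            out.append(".Sh " + line[2:].upper())
        elif line.startswith("## "):       # second-level -> subsection
            out.append(".Ss " + line[3:])
        elif line.strip():
            # **text** becomes an .Sy (bold) macro on its own line.
            line = re.sub(r"\*\*(.+?)\*\*", r"\n.Sy \1\n", line)
            out.append(line.strip())
    return "\n".join(out)

print(md2mdoc("# NAME\nexample - demo\n# DESCRIPTION\nRun **example** to test."))
```

A contributor would then edit only the markdown, and the generated mdoc(7) could be validated with `mandoc -T lint` before committing, which is the workflow advantage being argued for above.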
> I mean that in the future a web site like FreeBSD could have normal documentation (written 100% by humans or with the help of AI, it is not important), and *also* an AI chat bot that has digested the FreeBSD documentation (man pages, handbook, maybe forum messages).
Why doesn't someone just use the AI directly? The interaction and tailored questions are more useful than some set-in-stone (and hence potentially obsolete) output committed to markdown/html/latex etc.
> I brought this up with my 'entropy post'. Eventually the human interaction will diminish and there will not be enough (new ideas, innovation, etc.) for the AI to consume, and it will thus plateau.

Yeah, I do see that. It's a problem with the algorithm and the approaches to training.
> Yeah, I do see that. It's a problem with the algorithm and the approaches to training.

Interesting! I guess I should give myself more credit. ...Scary, though!
That said, you can see from low-quality communities like Reddit that even without AI/LLMs, much of the useful information is being displaced in the community by YouTubers and other popular personalities with questionable levels of knowledge. ...