AI for writing documentation

no, the use case you're describing is for the slop companies to de-skill you from being able to do those tasks. end of story. the "just a tool" narrative doesn't hold up once you start asking whose tool it is.
 
to further that point, we quote https://toot.cafe/@baldur/116351731802876844
Generative models and automated decision-making systems are political projects. They are tools for fencing off sectors of our society for rent, for cutting back on education and healthcare for the poor, for removing accountability. They are inherently tools for removing humans from the equation. They are not neutral in their design. Their existence has a political purpose.
 
AI systems are tools. They can be used that way. Some AI programs have already been created and used for genocide.

We can also use AI, as a collective, against AI used for bad purposes. Some AI tools are already accessible to us; there are some in ports. I could use AI, but for most purposes setting it up would be more effort than doing the individual tasks myself. From my understanding, where AI can be useful to me is in repetitive tasks. There are also tasks better suited to AI, like sorting debris and impurities out of food ingredients, or sorting recyclable materials, like aluminum, out of waste processing.

No matter what, some such technology was going to be created. Andrew Yang, repeating others, used to say that in the near future technology would take the place of jobs, years before that X clown said it.
 
So is it ironic that this is being discussed in a thread where Ai was employed for a task as mundane as writing down what someone's program does? There is validity in what atax1a is saying; if you've lost the ability and/or desire to even write down a few sentences or paragraphs describing what you did, then you've sort of already lost (i.e., what's next?).

I like Theo's analogy of Ai being akin to indent(1) (although he gives it a capital 'I').
 
fundamentally speaking, they aren't at all like indent. indent is built on a deterministic parser; the LLM is fundamentally a probabilistic algorithm. slop-purveyors try to handwave around this by claiming that being able to set a seed/temperature value is somehow deterministic, but, again, that claim doesn't hold up against reality, because deterministic parsers do not have seeds and temperatures.
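to make that concrete, here's a rough sketch of what a seed/temperature actually buys you (pure illustration, no particular inference library assumed): the draw is still a draw from a probability distribution, just a repeatable one. an indent-style formatter has no distribution anywhere.

    import math, random

    def sample_next_token(logits, temperature, seed):
        # temperature reshapes the distribution; the seed makes the draw
        # repeatable. it is still a draw from a distribution.
        rng = random.Random(seed)
        scaled = [x / max(temperature, 1e-6) for x in logits]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        return rng.choices(range(len(logits)), weights=weights)[0]

    def indent_line(line, depth):
        # a deterministic formatter: same input, same output, by
        # construction. no seed, no temperature, no distribution.
        return "    " * depth + line.strip()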
 
*pop* whoa, hey, please try and remember, I'm the one wearing this pointy hat thing.

Excuse me while I search a few terms (I like information like that!). However, I think I get the gist (and I think I get your point); this is to "frog in the pot" us.
 
AI seems usable as a tool for programmers ...
Probably in the future it will be normal to have AI chatbots as interactive documentation.

Obviously, for complex subjects a book written by a human for humans is still useful/necessary, because presenting hard concepts is "an art". But AI documentation can be interactive, and that is a plus.
 
mm yes i sure love asking my document questions that it will gladly provide wrong answers to, that i then have to go and verify.

are you people fucking serious
 
mm yes i sure love asking my document questions that it will gladly provide wrong answers to, that i then have to go and verify.

are you people fucking serious
Documentation can be wrong too. People can be wrong or lie intentionally. Source code can contain errors. There have been scientific papers that were wrong, even in mathematics. Every source of information can be wrong. AI is one of them. So there is nothing special about AI.

Probably the only trustworthy sources of information are formally verified proofs and source code, assuming that the requirements are well stated.
 
Documentation can be wrong too. People can be wrong or lie intentionally. Source code can contain errors. There have been scientific papers that were wrong, even in mathematics. Every source of information can be wrong. AI is one of them. So there is nothing special about AI.

Probably the only trustworthy sources of information are formally verified proofs and source code, assuming that the requirements are well stated.
"people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know, and the difference with an LLM is that an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.
 
"people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know, and the difference with an LLM is that an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.
LLMs of the future can become more reliable, because they can use double-checking internal agents/experts, apply logical rules, or use other techniques. AI is a tool, and tools usually improve. A "structural" limitation can be only a temporary limitation.

In any case, my point is that every source of information has a trustworthiness score, and you often have to double-check. Can you trust a Wikipedia article in the middle of an editor fight? Today AI is useful, but not always trustworthy, because there can be unexpected hallucinations. If you know the limits of the tool, maybe it can be used in a productive way while waiting for it to improve.
 
LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.
This is not completely correct, because an LLM is trained to answer questions correctly. So it "cares" about truth: every wrong answer is negative feedback. It is a big compressor (i.e., predictor) of knowledge. It is a complex structure of statistical predictors. It is not a flat statistical function.

A certain form of intelligence emerges from these structures, because without a certain form of comprehension it could not compress, predict, and answer so correctly. Intelligence can emerge from collaborating statistical algorithms, like life can emerge from chemistry. Calling an LLM a statistical predictor is like calling life a sequence of chemical reactions: yes, it is true, but you are ignoring the emergent properties.
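To be concrete, here is the kind of feedback I mean, as a toy sketch (illustrative only, not a real training loop): the cross-entropy loss punishes the model for putting low probability on the token that actually appears in the training text.

    import math

    def cross_entropy(predicted_probs, true_token_id):
        # high probability on the training token -> loss near 0
        # low probability on the training token -> large loss
        return -math.log(predicted_probs[true_token_id])

    print(cross_entropy([0.1, 0.8, 0.1], 1))  # right and confident: ~0.22
    print(cross_entropy([0.8, 0.1, 0.1], 1))  # wrong and confident: ~2.30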
 
This is not completely correct, because an LLM is trained to answer questions correctly.
Eh ... no.
It gives you the "most likely" answer (based on the model made from the data it is trained on). It has no clue if that answer is correct or not, which is evident when you see it answer totally wrong with the same confidence as a "correct" answer.
Intelligence? You shouldn't insult yourself and other humans by stating that the current crop of AIs have any intelligence. Not in the same way humans do.
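To put that in concrete terms, a minimal sketch (illustrative only): the "confidence" is just the softmax probability of the chosen token, and nothing in that computation consults whether the answer is true.

    import math

    def pick_most_likely(logits):
        # softmax, then argmax: the exact same computation runs whether
        # the top answer is right or wrong. truth never enters the formula.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        best = probs.index(max(probs))
        return best, probs[best]

    print(pick_most_likely([4.0, 1.0, 0.5]))  # (0, ~0.93), right or not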
 
Eh ... no.
It gives you the "most likely" answer (based on the model made from the data it is trained on). It has no clue if that answer is correct or not, which is evident when you see it answer totally wrong with the same confidence as a "correct" answer.
Intelligence? You shouldn't insult yourself and other humans by stating that the current crop of AIs have any intelligence. Not in the same way humans do.
Parroting you: you are giving me the "most likely" answer (based on the model made from the information you studied). You have no real clue if that answer is correct or not, unless you work in the LLM field. One of us, or both of us, is hallucinating, which is evident because we answered with the same confidence despite our answers being opposite.

From a certain point of view, I cannot trust your answer more than the answer of an AI, because maybe you are hallucinating too. Maybe you believe you are correct, but you are not.

Do LLMs hallucinate? Yes. Do LLMs predict a lot of things? Yes. Do humans hallucinate? Surely not in the same way as LLMs, but yes: superstitions, religious wars, bad habits, etc. Do humans predict a lot of things? Yes.

You can list many ways in which humans are different from, and superior to, an LLM. But in doing so you cannot prove that LLMs are a simple statistical function, because there are emergent properties in them.
 
So it "cares" about truth: every wrong answer is a negative feedback.

How does an AI know it gave a wrong answer?

AI doesn't even give the (same answer) to the exact same (question).

You have no real clue if that answer is correct or not, unless you work in the LLM field.

Got it -- you are personally financially tied to the outcome of AI being successful. Good for you!

The rest of us (are not) and really do not like being told by AI the answers to questions we did not ask an AI for. Every time I do a web search now I get "AI volunteered responses" that I couldn't care less about -- and I ignore them. I don't have time to figure out if an AI is just making stuff up or not.

You are trying to rationalize "AI hallucinating" as something we can all live with. Your argument that "AI lies" therefore (ALL OF THE REST OF US) are "also liars" doesn't hold any water. YOU can live with your hallucinating AI, the rest of us are going to ignore "the crazy idiot in the room" and continue on our way.

AI should always be something that can be (TURNED OFF).
 
You are trying to rationalize "AI hallucinating" as something we can all live with. Your argument that "AI lies" therefore (ALL OF THE REST OF US) are "also liars" doesn't hold any water. YOU can live with your hallucinating AI, the rest of us are going to ignore "the crazy idiot in the room" and continue on our way.
I never said that. I said that every source of information has some level of trustworthiness. AI is a tool. You are free to decide whether, in its current form, you can trust it enough or not. For sure it will improve.
 
How does an AI know it gave a wrong answer?
The original quote, which I was answering, was:

LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.

I agree with the first part, i.e., LLMs usually lack logical reasoning. Hence, they usually don't know whether they gave the correct answer, because many of them do not reflect on the answer. But I disagree with the second part: LLMs that don't answer correctly according to the training data are "killed", so the surviving ones are the ones that "cared" to answer correctly, at least according to the training data.

The inner part of an LLM is surely building some model of the world, creating various abstractions. So it is a form of intelligence, and it is not only "simple statistics". But, to date, it lacks other ways of reasoning that the human mind has, so it is far from perfect.
 
Do LLMs hallucinate? Yes. Do LLMs predict a lot of things? Yes. Do humans hallucinate? Surely not in the same way as LLMs, but yes: superstitions, religious wars, bad habits, etc. Do humans predict a lot of things? Yes.
Humans can hallucinate in a very similar sense to LLMs. The Mandela Effect is a good example of this.

How does an AI know it gave a wrong answer?
To be fair, humans can't know they are correct either. They can be confidently incorrect, but they can also be told / given feedback that corrects them quickly. LLMs, by contrast, need to go through an entire training cycle (or be monkey-patched via some kind of system prompt, so they are explicitly told each time they process a request).

AI should always be something that can be (TURNED OFF).
Strong agree. Same with auto-format, intellisense, syntax highlighting and other visual noise and clutter. It becomes unbearable when I just want to get stuff done.
At the moment they are trying to shoehorn (essentially chatbots) into everything we do, which is obviously daft.
 