AI for writing documentation

no, the use case you're describing is for the slop companies to de-skill you from being able to do those tasks. end of story. the "just a tool" narrative doesn't hold up once you start asking whose tool it is.
 
to further that point, we quote https://toot.cafe/@baldur/116351731802876844
Generative models and automated decision-making systems are political projects. They are tools for fencing off sectors of our society for rent, for cutting back on education and healthcare for the poor, for removing accountability. They are inherently tools for removing humans from the equation. They are not neutral in their design. Their existence has a political purpose.
 
AI systems are tools, and they can be used that way. Some AI programs have already been created and used for genocide.

We as a collective can also use AI against AI used for bad purposes. Some AI tools are already accessible to us; there are some in ports. I could use AI, but for most purposes setting up AI would be more effort than doing the individual tasks myself. From my understanding, where AI can be useful to me is for repetitive tasks. There are also tasks more suitable for AI, like sorting debris/impurities out of food ingredients, or sorting recyclable materials, like aluminum, out of waste processing.

No matter what, some such technology was going to be created. Andrew Yang, repeating others, used to say that in the near future technology would take the place of jobs, years before that X clown said it.
 
So is it ironic that this discussion is happening in a thread where AI was being employed for a task as mundane as writing down what someone's program does? There is validity in what atax1a is saying; if you've lost the ability and/or desire to even write down a few sentences or paragraphs describing what you did, then you've sort of already lost (i.e., what's next?).

I like Theo's analogy of AI being akin to indent(1) (although he gives it a capital 'I').
 
fundamentally speaking they aren't at all like indent. indent works on a deterministic parser. the LLM is fundamentally a probabilistic algorithm. slop-purveyors try to handwave around this by claiming that being able to set a seed/temperature value is somehow deterministic, but, again, that claim doesn't hold up against reality because deterministic parsers do not have seeds and temperatures.
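To make the distinction concrete, here is a minimal sketch of temperature sampling (toy logits, not any real model's API): fixing the seed makes a run reproducible, but the output is still a draw from a probability distribution, whereas a parser like indent maps the same input to the same output by construction.

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by temperature, then normalize to probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature, rng):
    # Draw one token index from the temperature-scaled distribution.
    probs = softmax(logits, temperature)
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]  # hypothetical scores for a 3-token vocabulary

# Fixing the seed makes the *sampling* reproducible...
rng = random.Random(42)
run1 = [sample_token(logits, 0.8, rng) for _ in range(5)]
rng = random.Random(42)
run2 = [sample_token(logits, 0.8, rng) for _ in range(5)]
assert run1 == run2  # same seed, same sequence

# ...but a different seed can yield a different sequence for identical
# input, which is exactly what a deterministic parser never does.
```

The seed pins down which draws the RNG makes; it does not change the fact that the algorithm's output is defined as a sample from a distribution rather than as a function of the input alone.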
 
*pop* whoa, hey, please try and remember, I'm the one wearing this pointy hat thing.

Excuse me while I search a few terms (I like information like that!). However, I think I get the gist (and I think I get your point); this is to "frog in the pot" us.
 
AI seems usable as a tool for programmers ...
Probably in the future it will be normal to have AI chatbots as interactive documentation.

Obviously, for complex subjects a book written by a human for humans is still useful/necessary, because presenting hard concepts is "an art". But AI documentation can be interactive, and this is a plus.
 
mm yes i sure love asking my document questions that it will gladly provide wrong answers to, that i then have to go and verify.

are you people fucking serious
 
mm yes i sure love asking my document questions that it will gladly provide wrong answers to, that i then have to go and verify.

are you people fucking serious
Documentation can be wrong too. People can be wrong or lie intentionally. Source code can contain errors. There have been scientific papers that were wrong, even in mathematics. Every source of information can be wrong. AI is one of them. So there is nothing special about AI.

Probably the only trustworthy sources of information are formally verified proofs and source code, assuming that the requirements are well stated.
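As an aside, "formally verified" here means a proof a machine can check end to end. A minimal illustration in Lean 4 (the theorem name is mine; the proof reuses the standard library's `Nat.add_comm`):

```lean
-- A machine-checked fact: addition on natural numbers is commutative.
-- The kernel verifies this proof term; there is no "probably correct".
theorem my_add_comm (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```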
 
Documentation can be wrong too. People can be wrong or lie intentionally. Source code can contain errors. There have been scientific papers that were wrong, even in mathematics. Every source of information can be wrong. AI is one of them. So there is nothing special about AI.

Probably the only trustworthy sources of information are formally verified proofs and source code, assuming that the requirements are well stated.
"people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know, and the difference with an LLM is that an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.
 
"people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know, and the difference with an LLM is that an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.
LLMs of the future can become more reliable, because they can use internal double-checking agents/expertise, also applying logical rules or other techniques. AI is a tool, and tools usually improve. A "structural" limitation can be only a temporary limitation.

In any case, my point is that every source of information has a trustworthiness score, and often you have to double-check. Can you trust a Wikipedia article in the middle of an editor fight? Today AI is useful, but not always trustworthy, because there can be unexpected hallucinations. If you know the limits of the tool, maybe it can be used in a productive way while waiting for it to improve.
 
LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.
This is not completely correct, because an LLM is trained to answer questions correctly. So it "cares" about truth: every wrong answer is a negative feedback. It is a big compressor (i.e. predictor) of knowledge. It is a complex structure of statistical predictors, not a flat statistical function.

A certain form of intelligence emerges from these structures, because without a certain form of comprehension, they could not compress, predict, and answer so many questions correctly. Intelligence can emerge from collaborating statistical algorithms, like life can emerge from chemistry. Calling an LLM a statistical predictor is like calling life a sequence of chemical reactions. Yes, it is true, but you are ignoring the emergent properties.
 