Generative models and automated decision-making systems are political projects. They are tools for fencing off sectors of our society for rent, for cutting back on education and healthcare for the poor, for removing accountability. They are inherently tools for removing humans from the equation. They are not neutral in their design. Their existence has a political purpose.
> i think perhaps it is you that is the tool

No. You are.
did you just unironically post "No. You are."
no u

> AI seems usable as a tool for programmers ...

Probably in the future it will be normal to have AI chatbots for interactive documentation.
> mm yes i sure love asking my document questions that it will gladly provide wrong answers to, that i then have to go and verify.

Documentation can be wrong too. People can be wrong or lie intentionally. Source code can contain errors. There have been scientific papers that were wrong, even in mathematics. Every source of information can be wrong. AI is one of them, so there is nothing special about AI.
are you people fucking serious
"people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know, and the difference with an LLM is that an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.Documentation can be wrong too. People can be wrong or lie intentionally. Source code can contain errors. There were scientific papers that were wrong, also in mathematics. Every source of information can be wrong. AI is one of them. So there is nothing special in AI.
Probably the only trustworthy sources of information are formally proved theorems and formally verified source code, assuming that the requirements are well stated.
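For what it's worth, here is a minimal sketch of what "formally proved" means in practice: a trivial statement in Lean 4, which the proof checker refuses to accept unless the proof actually goes through.

```lean
-- A machine-checked proof: Lean will not compile this file
-- unless the proof term really establishes the statement.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b  -- reuses the library lemma n + m = m + n
```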
> "people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know, and the difference with an LLM is that an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.

LLMs of the future can become more reliable, because they can use double-checking internal agents/experts, also applying logical rules or other techniques. AI is a tool, and tools usually improve. A "structural" limitation can be only a temporary limitation.
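Concretely, the "double-checking internal agents" idea could look something like this purely hypothetical draft/critique/revise loop; `generate` is a stand-in for any LLM call, not a real API:

```python
def generate(prompt: str) -> str:
    """Placeholder for an LLM call; hypothetical, not a real library."""
    raise NotImplementedError


def answer_with_double_check(question: str, max_rounds: int = 3) -> str:
    """Draft an answer, have a second pass critique it, revise until it passes."""
    draft = generate(f"Answer the question: {question}")
    for _ in range(max_rounds):
        critique = generate(
            f"Check this answer for factual or logical errors.\n"
            f"Question: {question}\nAnswer: {draft}\n"
            f"Reply OK if correct, otherwise list the errors."
        )
        if critique.strip().upper().startswith("OK"):
            return draft  # the checking pass found no problems
        # Otherwise revise the draft using the critique and try again.
        draft = generate(
            f"Question: {question}\nPrevious answer: {draft}\n"
            f"Errors found: {critique}\nWrite a corrected answer."
        )
    return draft  # best effort after max_rounds
```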
> an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.

This is not completely correct, because an LLM is trained to answer questions correctly. So it "cares" about truth: every wrong answer is negative feedback during training. It is a big compressor (i.e. predictor) of knowledge. It is a complex structure of statistical predictors, not a flat statistical function.
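As a toy illustration (with made-up numbers) of "every wrong answer is negative feedback": the per-step training loss is just the negative log of the probability the model assigned to the token that actually came next, so probability mass placed on wrong continuations is penalized.

```python
import math

def next_token_loss(predicted: dict[str, float], actual_next: str) -> float:
    """Cross-entropy for one step: -log(prob assigned to the true next token)."""
    return -math.log(predicted.get(actual_next, 1e-12))

# A made-up distribution a model might predict after "the sky is":
prediction = {"blue": 0.7, "grey": 0.2, "falling": 0.1}

print(next_token_loss(prediction, "blue"))     # ~0.36: small loss, small correction
print(next_token_loss(prediction, "falling"))  # ~2.30: large loss, large correction
```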