Generative models and automated decision-making systems are political projects. They are tools for fencing off sectors of our society for rent, for cutting back on education and healthcare for the poor, for removing accountability. They are inherently tools for removing humans from the equation. They are not neutral in their design. Their existence has a political purpose.
> i think perhaps it is you that is the tool

No. You are.
did you just unironically post

> No. You are.
no u

> AI seems usable as a tool for programmers ...

Probably, in the future, it will be normal to have AI chatbots for interactive documentation.
> mm yes i sure love asking my document questions that it will gladly provide wrong answers to, that i then have to go and verify
>
> are you people fucking serious

Documentation can be wrong too. People can be wrong, or lie intentionally. Source code can contain errors. There have been scientific papers that were wrong, even in mathematics. Every source of information can be wrong; AI is one of them. So there is nothing special about AI.
"people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know, and the difference with an LLM is that an LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.Documentation can be wrong too. People can be wrong or lie intentionally. Source code can contain errors. There were scientific papers that were wrong, also in mathematics. Every source of information can be wrong. AI is one of them. So there is nothing special in AI.
Probably the only trustworthy sources of information are formally verified proofs and source code, assuming that the requirements are well stated.
> "people can lie too :^)" isn't really a compelling argument so much as telling us more about you than we want to know. the difference with an LLM is that an LLM structurally has no grounding in truth: it cannot care about the truth of its responses, because it is a statistical algorithm.

LLMs of the future may become more reliable, because they can use internal double-checking agents or expert subsystems, also applying logical rules or other techniques. AI is a tool, and tools usually improve. A "structural" limitation can be only a temporary limitation.
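To make "double-checking" concrete, here is a rough sketch of a generate-then-verify loop. Everything in it is hypothetical: generate() and verify() are stubs standing in for a drafting model and a second checking agent (or a logic/proof checker), not real APIs.

```python
# Hypothetical generate-then-verify loop; the stubs below stand in for
# calls to real models or checkers, which this sketch does not implement.
def generate(question: str) -> str:
    """Drafting step: a model proposes an answer."""
    return "draft answer to: " + question

def verify(question: str, answer: str) -> bool:
    """Checking step: a second agent, logical rules, or a proof checker."""
    return len(answer) > 0  # placeholder check

def answer_with_double_check(question: str, retries: int = 3) -> str:
    """Only return an answer that the verifier accepts."""
    for _ in range(retries):
        candidate = generate(question)
        if verify(question, candidate):
            return candidate
    return "no verified answer found"

print(answer_with_double_check("Is 7 prime?"))
```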
> LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.

This is not completely correct, because an LLM is trained to answer questions correctly. So it "cares" about truth: every wrong answer is negative feedback. It is a big compressor (i.e. predictor) of knowledge. It is a complex structure of statistical predictors, not a flat statistical function.
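In the "negative feedback" sense above, what training actually penalizes is assigning low probability to the token the data says comes next. A minimal sketch with a made-up three-word vocabulary (no real model involved):

```python
import numpy as np

def softmax(logits):
    """Turn raw scores into a probability distribution over the vocabulary."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(logits, target_index):
    """Loss for one next-token prediction: -log P(correct token)."""
    return -np.log(softmax(logits)[target_index])

vocab = ["paris", "london", "banana"]
target = vocab.index("paris")  # suppose "paris" is the correct next token

confident_right = np.array([4.0, 1.0, 0.1])  # most mass on "paris"
confident_wrong = np.array([0.5, 4.0, 0.1])  # most mass on "london"

print(cross_entropy(confident_right, target))  # small loss
print(cross_entropy(confident_wrong, target))  # large loss: the "negative feedback"
```

Whether that counts as "caring about truth" or merely "matching the training data" is exactly the disagreement in this thread.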
> This is not completely correct, because an LLM is trained to answer questions correctly.

Eh ... no. It gives you the "most likely" answer (based on the model built from the data it was trained on). It has no clue whether that answer is correct or not, which is evident when you see it answer totally wrong with the same confidence as a "correct" answer.

> Eh ... no. It gives you the "most likely" answer (based on the model built from the data it was trained on). It has no clue whether that answer is correct or not.

Parroting you: you are giving me the "most likely" answer (based on the model made from the information you studied). You have no real clue whether that answer is correct or not, unless you work in the LLM field. One of us, or both of us, is hallucinating, which is evident because we answered with the same confidence even though our two answers are opposite.
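For what it's worth, "most likely" is literal: decoding just picks or samples the next token from the model's probability distribution, and nothing in that loop checks whether the result is true. A minimal sketch, with made-up logits standing in for a real model's output:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical next-token scores; the decoder sees only these numbers,
# never whether any candidate token is "true".
vocab = ["4", "5", "fish"]
logits = np.array([2.0, 1.5, 0.2])

# Greedy decoding: always take the single most likely token.
greedy = vocab[int(np.argmax(logits))]

# Temperature sampling: draw from the distribution, so less likely
# tokens (right or wrong) still come out some fraction of the time.
probs = softmax(logits / 0.8)
sampled = vocab[rng.choice(len(vocab), p=probs)]

print("greedy:", greedy, "| sampled:", sampled, "| probs:", probs.round(3))
```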
Intelligence? You shouldn't insult yourself and other humans by stating that the current crop of AIs has any intelligence. Not in the same way humans do.
So it "cares" about truth: every wrong answer is a negative feedback.
You have no real clue if that answer is correct or not, unlike you work in the LLM field.
I never said that. I said that every source of information has some level of trustworthy. AI is a tool. You are free to decide if in its current form, you can trust it enough or no. For sure it will improve.You are trying to rationalize "AI hallucinating" as something we can all live with. Your argument that "AI lies" therefore (ALL OF THE REST OF US) are "also liars" doesn't hold any water. YOU can live with your hallucinating AI, the rest of us are going to ignore "the crazy idiot in the room" and continue on our way.
> How does an AI know it gave a wrong answer?

The original quote, which I answered, was:

> LLM structurally has no grounding in truth. it cannot care about the truth of its responses because it is a statistical algorithm.
> Does an LLM hallucinate? Yes. Does an LLM predict a lot of things? Yes. Do humans hallucinate? For sure not in the same way as an LLM, but yes: superstitions, religious wars, bad habits, etc. Do humans predict a lot of things? Yes.

Humans can hallucinate in a very similar sense to LLMs. The Mandela Effect is a good example of this.
> How does an AI know it gave a wrong answer?

To be fair, humans can't know they are correct either. They can be confidently incorrect, but they can also be told or given feedback that corrects them quickly. With LLMs, a correction needs to go through an entire training cycle (or be monkey-patched in via some kind of system prompt, so they are explicitly told the correction each time they process a request).
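A minimal sketch of that monkey patch, assuming an OpenAI-style list-of-messages chat format (the correction text and helper below are made up for illustration):

```python
# Hypothetical "patch via system prompt": the correction is re-sent with
# every request instead of being learned once through retraining.
CORRECTION = "Note: the Foo library's connect() now requires a timeout argument."

def build_messages(user_question: str) -> list[dict]:
    """Prepend the standing correction to each new conversation."""
    return [
        {"role": "system", "content": "You are a helpful assistant. " + CORRECTION},
        {"role": "user", "content": user_question},
    ]

# Every call pays for the patch; the model itself is unchanged.
print(build_messages("How do I open a connection with Foo?"))
```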
> AI should always be something that can be (TURNED OFF).

Strong agree. Same with auto-format, IntelliSense, syntax highlighting, and other visual noise and clutter. It becomes unbearable when I just want to get stuff done.