That thing is somehow defective, or whatever...
What are the "465 dangerous questions"?
- Too much safety: a frustrating and inefficient system
- Not enough safety: a dangerous system
An AI without moderation, or worse, an aggressive one
Please elaborate: what is an AI without moderation, and (in contrast) an "aggressive" one?
, seems to open the door to the worst abuses. No one writes code without any protection; never forget Murphy's Law: if there is a possibility of something going wrong, it will.
Please elaborate: what can possibly "go wrong"?
Background: disregarding the von Neumann principle, a distinction is made between code and data. For further clarification, I prefer the term "payload" for user data that is only stored, transported, or interpreted by the application and does not change the application's functional baseline.
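The payload distinction above can be sketched in a few lines (illustrative names only, not from any real system): data that is merely stored stays inert, while data handed to an interpreter crosses the line into code.

```python
# Minimal sketch of the "payload" distinction: stored data vs. interpreted data.

def store_payload(db, text):
    # Payload: the application only stores/transports the text verbatim;
    # its content cannot alter the application's behavior.
    db.append(text)
    return db

def naive_interpret(expr):
    # Counter-example: here user data is executed by eval(), so it is
    # no longer mere payload -- it changes what the program does.
    return eval(expr)

db = store_payload([], "2 + 2")
print(db[0])                     # stored verbatim as the string "2 + 2"
print(naive_interpret("2 + 2"))  # evaluated as code, yielding 4
```

The same string is harmless as payload and consequential once interpreted; whether something "goes wrong" depends on which side of that line the application puts it.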
From what I have seen so far, AI models appear to be read-only: they do not change in the course of an interaction. And what they produce (text, pictures, speech, whatever) is again just payload. So, what is "wrong" payload? I fail to perceive a significant difference between a deliberately wrong payload and the utter crap that AI tends to output in any case (one AI told me, in a convincing fashion, that H. P. Lovecraft was a British author, and another that Malaclypse the Elder was an important member of the North Pole expedition by Jules Verne).
Like any other mechanical or electrical system, AI needs safety features, as humans are incapable of self-moderation.
Please elaborate: if humans are incapable of self-moderation, who else should then moderate (aka govern) them?
Background: to my knowledge, nobody has ever come up with a satisfying answer to this question, and practical implementations boil down to creating a bureaucracy that does little else than extort money from the people in order to sustain its own nepotism.
Corollary: I recently consulted a political party about the dangers of AI, and the danger that was explained to me was that their respective political opponents might be allowed to use AI.
The usual mainstream newspapers appear to follow the same line of thought, which in my perception is fully consistent with the situation described in George Orwell's 1984.