How can ChatGPT help with the automation of FreeBSD? Can it be trained to help system admins administer this OS?
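One way it could help, sketched very loosely: let the model draft a command, but never run its suggestion unvetted. Everything below is a hypothetical illustration - the allowlist, the function name, and the choice of "safe" binaries are all assumptions, not an existing tool:

```python
# Hypothetical sketch: gate LLM-suggested FreeBSD admin commands behind an
# allowlist before a human (or a script) acts on them.
import shlex

# Assumption: only read-mostly diagnostic/management binaries are proposed
# automatically; anything else goes to human review.
SAFE_COMMANDS = {"freebsd-version", "sysctl", "service", "pkg", "zfs"}

def vet_suggestion(suggestion: str) -> bool:
    """Return True only if the suggested command starts with an allowlisted binary."""
    try:
        tokens = shlex.split(suggestion)
    except ValueError:
        return False  # unbalanced quotes or similar garbage from the model
    return bool(tokens) and tokens[0] in SAFE_COMMANDS

print(vet_suggestion("sysctl hw.model"))  # prints True
print(vet_suggestion("rm -rf /"))         # prints False
```

The point of the sketch is the shape of the workflow, not the specific list: whatever the model proposes stays a *suggestion* until something deterministic has checked it.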

Now imagine facilities where people cannot resist the treatment, e.g. facilities to care for our elderly. Imagine how it will come when the workforce there gets replaced by machines - not because the machines are already perfect at the job (they never will be), but simply because it does not matter.
Yeah, just like in the movie Ghost In The Shell: Solid State Society... The movie was made in 2006, and its setting is 2034. Something that was seen as a wild fantasy back in 2006 is eerily close to becoming reality as 2034 gets closer... The movie actually tried to be a warning about the downsides humanity will face if we automate our asses off. But not everyone in the audience at the time got the memo.

Now, I'm going to make some assumptions using the Pareto Principle:
  • Let's assume that 80% of the audience did not get the memo about the downsides (meaning only 20% of the audience actually got it). Yeah, there will be some quibbling about that, and yeah, there are valid arguments for assuming it was the 20% who did not get the memo. For the sake of making my point, I'll start with the assumption that 80% of the audience did not get the memo.
  • Now, further analysis: There's several ways to 'Not get the memo', as in:
    • Completely ignore even the very existence of negative consequences: 'No such thing'
    • Yeah, there are a few corner cases, but they don't make a difference, so it's easy to ignore them
    • 'Relentless Pursuit of Perfection', as in, force conformance and get to 100% no matter what, even if it means putting a square peg into a round hole.
    • Mis-diagnosing the downsides and doing completely the wrong things about them.
    • Let's not forget the bandwagon effect: "More than half the audience thinks that, so the idea can't be wrong".
  • Each of the 'Further analysis' points I made can itself be analyzed using the Pareto Principle. That method of analysis can be useful, but it can also be taken to the point of disagreeable absurdity.
Unfortunately, I can't put it past ChatGPT to take things too far with just the analysis methods... 😩
 
ChatGPT can only learn from the internet. Interestingly enough, we have rules and laws stopping kids from doing exactly that. Who would take advice from a 5-year-old that was taught by 4chan? And still people want to have said 5-year-old admin a server, drive a car, or make political decisions. Where did natural intelligence go?
ChatGPT is already capable of passing a Harvard-level business school test with a B-... and there's stuff like Udemy.com and Coursera.org for online classes... If a 5-year-old did better in a Udemy or Coursera class than a 25-year-old taking the exact same class, what does that say about the intelligence of said 25-year-old?
 
It is worse. You need to project it a bit into the future.
If you have a problem with your telco, your insurance, or your supplier and you call them, you already do not get a human to talk to and solve the problem; you get a robot to talk to.
Now imagine how that becomes when it gets clear that 99.x% of customer requests can be fulfilled by a machine (and we can easily relinquish the remaining 0.x% of customers).

We already have ISO 9000, which effectively replaces quality with administrative overhead.
Now imagine facilities where people cannot resist the treatment, e.g. facilities to care for our elderly. Imagine how it will come when the workforce there gets replaced by machines - not because the machines are already perfect at the job (they never will be), but simply because it does not matter.
Agreed, it's happening today. With my cable operator, the entire support channel is via WhatsApp. At least with a human you can talk and explain yourself, but with a menu you can't hack it... it's very close.
The fever for AI is high right now, but everyone is in a hurry to give it uses for which the development is still lacking.
It just lacks time....
 
what does that say about the intelligence of said 25-year-old?
Nothing, I fear. We are talking business school.
Real intelligence will find a solution when faced with completely new problems, AI still fails at that AFAIK.
 
The very fact that a human has to train this is exactly the problem.
All human biases of the trainers will be transferred.
So this ain't AI, in my opinion.
It's a talking bird.
 
Nothing, I fear. We are talking business school.
Real intelligence will find a solution when faced with completely new problems, AI still fails at that AFAIK.
Real intelligence failed to solve even old problems like stupidity of the general population.
 
The very fact that a human has to train this is exactly the problem.
All human biases of the trainers will be transferred.
So this ain't AI, in my opinion.
It's a talking bird.
Yup. I studied AI in college and it's nowhere near AI. It fails at reasoning, fails at solving complex problems, and fails to present accurate facts. You can ask how many genders humans have and it'll be vague. Ask how many genders dogs have and it'll say two. It's more of an Eliza Computer Therapist like in the old days, except much more advanced. It'll be great as a next-generation search engine or for writing term papers based on current data. It's not self-aware, nor does it have the ability to learn by itself.

I think in maybe 10 years it will be able to think like humans, but again, if it becomes self-aware then it's a threat to humanity. That's something Elon Musk is against.
 
You can ask how many genders humans have and it'll be vague.
Well, there have been court cases on gender determination for humans, and it's still a debate - even the very definition of 'gender' gets people bristling. Like, 'Men cooking while women hammer a chair together?' What about self-identification? Bodily functions? Emotional reactions? If an AI can consider THAT while giving a vague answer, it means this particular implementation had a bigger and richer dataset to train on.
 
Real intelligence failed to solve even old problems like stupidity of the general population.
Maybe we should stop looking for a solution to that problem and try to prove it (un)solvable. In the meantime, go for "as good as possible, accepting to be imperfect".
 
You can ask how many genders humans have and it'll be vague. Ask how many genders dogs have and it'll say two.
But that's exactly the use case!
As of now, AI cannot come up with genuinely new creative solutions. It cannot pinpoint the precise place in a complex and interdependent environment where a malfunction would be correctly fixed.
But it can perfectly well do the job of Winston Smith.
 
The very fact that a human has to train this is exactly the problem.
All human biases of the trainers will be transferred.
So this ain't AI, in my opinion.
It's a talking bird.

Not if the humans recognize their biases. If biases can be recognized, they can be cleared; better for the AI than for humans.
 
Yup. I studied AI in college and it's nowhere near AI. It fails at reasoning, fails at solving complex problems, and fails to present accurate facts. You can ask how many genders humans have and it'll be vague. Ask how many genders dogs have and it'll say two. It's more of an Eliza Computer Therapist like in the old days, except much more advanced. It'll be great as a next-generation search engine or for writing term papers based on current data. It's not self-aware, nor does it have the ability to learn by itself.

I think in maybe 10 years it will be able to think like humans, but again, if it becomes self-aware then it's a threat to humanity. That's something Elon Musk is against.

I don't think facts exist as solid truths. I have always believed that reality does not exist as such; reality is nothing more than the resultant of all the opinions that people have on a subject, so there may well be multiple realities. And by 'resultant' I mean that an opinion contains a piece of every other opinion and is at the same time different from each of them.

I also believe that facts do not exist. I make the concept of a fact coincide with the concept of behavior. But the concept of behavior is indivisible from the concept of opinion, which I prefer to call interpretation. Interpretations motivate and move behaviors. Behaviors, taken net of the motivations, opinions, and interpretations behind or in front of them, have no value or utility; to tell the truth, it doesn't even make sense to talk about them, because their meaning is given precisely by what you think, and what you think is changeable and immaterial.

Trivial example: if you see a dog walking in the middle of the road and you think that this is a fact, you are wrong, because the concepts and meanings attributed to what you are seeing (a dog walking in the middle of the road) are human "inventions", which are valid only as long as people, or some of them, want to make them count (and, among other things, there are also differences between what are believed to be equal thoughts). We must not be fooled by those who believe that since something is believed by many people it is therefore a fact. If that happens, it just means it is a /relatively/ shared opinion, not a fact. It would also be useful to understand up to "where" it is actually shared.
 
That dog is NOT walking in the middle of the road! Why? There's nothing even remotely resembling a road in the area!

But when it comes to math, it's still got facts. I don't think anyone can argue against the basic fact of "1+1=2". It stops being math if you try to argue against that.
 
For math it's the same thing. Math is a language. If we want 1+1=3, we can do it. Just as we made those conventions, we can change them.
 
If you want 1+1 to be 3, that's chemistry, not math any more.
If you want 1+1 to be 1 or 11, that's logic gates and circuitry design, it's not math any more.
If you want 1+1 to be 1.83, wtf are we even adding together? Are we even talking about the same thing?
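For what it's worth, the three readings of "1+1" above can be put side by side; a toy Python sketch, just to make the "it depends on which system you're in" point concrete:

```python
# The "same" 1+1 under three different conventions.
print(1 + 1)      # integer arithmetic: prints 2
print(1 | 1)      # bitwise/boolean OR, as in an OR gate: prints 1
print("1" + "1")  # concatenation, two digits side by side: prints 11
```

None of these contradicts the others; each is correct within its own set of rules, which is the whole point.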
 
You are right :p

 