
I believe that even the human brain cannot be conscious without feedback from its environment. The brain (and the iguana's, for that matter) needs to be able to receive feedback from the environment and use that feedback to adjust its own activity and behavior. Over time, as an organism interacts with its environment and receives feedback from it, it develops complex representations of the world (and I believe this is what we call consciousness). Complexity is important for that kind of emergence, but it is not enough on its own.
While an organism (or a system) may be able to sense and respond to its environment in a reflexive way, this does not necessarily imply that it has a conscious experience of the world. Consciousness requires complex neural processing and the ability to integrate information from multiple sources (including bodily sensations) in order to form a coherent and unified sense of self. Also, I think the feeling of pain (intrinsic negative feedback) is fundamental for the development of consciousness.

Considering this, it is likely that ChatGPT is only halfway to consciousness. While it can generate convincing responses and simulate human-like conversation, it lacks the sensory inputs and neural complexity (including plasticity) necessary for self-awareness and consciousness.

BTW, personally, I kept an iguana for 12 years and can confirm he was self-aware.
Pain. That's like the state of my mind when I think about hacking the pthread stack...
 
Real intelligence is capable of calling out political BS, as in poking logical and emotional holes in propaganda. Artificial intelligence merely produces answers without questioning authority - even if the answers are BS...

 
I played around with it, even used it for rubber-duck debugging. The problem is it often gives you false answers. You can't trust the answers it gives you.
This is interesting, isn't it? It sounds pretty human to me, actually. Besides, aren't we already doing a poor job as its human-y teachers, introducing ourselves with hostility and an interrogation and expecting it to supply us with perfection upon each request?
 
Besides, aren't we already doing a poor job as its human-y teachers, introducing ourselves with hostility and an interrogation and expecting it to supply us with perfection upon each request?
Well, it is expected that computers give us the correct answer every time. :) (This got me thinking about that old Pentium bug joke.) But who knows, maybe it is actually an experiment and it doesn't give us the correct answer on purpose.

The other day I was lazy and needed to total up the time of all my Strava activities for a given period. This is something I'd normally copy into Excel, format, etc. Strava time is in HH:MM:SS format, and I needed days, hours, and minutes in the summary. So I asked GPT. I can't give you the word-for-word answer now as I can't currently log in there (servers are busy). Funny enough, it gave me the wrong summary. But it explained how it got to that time, and at the end it gave the correct summary. That was confusing. But hey, it worked.
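For anyone curious, the summing itself is trivial; here is a minimal Python sketch of that kind of calculation (the durations below are made-up placeholders, not my actual Strava data):

# Sum a handful of HH:MM:SS durations and report the total as days, hours, minutes.
durations = ["01:23:45", "00:47:10", "02:05:30"]  # placeholder values

total_seconds = 0
for d in durations:
    h, m, s = (int(x) for x in d.split(":"))
    total_seconds += h * 3600 + m * 60 + s

days, rem = divmod(total_seconds, 86400)   # 86400 seconds in a day
hours, rem = divmod(rem, 3600)
minutes, _ = divmod(rem, 60)
print(f"{days}d {hours:02d}h {minutes:02d}m")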
 
This is interesting, isn't it? It sounds pretty human to me, actually. Besides, aren't we already doing a poor job as its human-y teachers, introducing ourselves with hostility and an interrogation and expecting it to supply us with perfection upon each request?
Typical pattern of how employers treat new hires these days. 😒
 
I tried copying in a bash script of a couple of hundred lines and asked it to modify one part, add something, and rewrite the entire script. But it stopped halfway.
When I asked it to 'Continue with the previous code', it spat out all sorts of unrelated code, and I am unsure whether it came from other users or from ChatGPT itself. It looked more like a mix of both.
 
Oh, one can have so much fun with it.
While it now actively refuses to write a reverse shell (I failed to fool it into doing it), one can still get some interesting results (i.e. it's not all just randomly generated output).
 
It looks like ChatGPT sometimes has errors when way too many people want to play with it. I guess it's not smart enough to reprogram itself to be massively parallel... I think you can be either massively parallel or smart, but not both at the same time.

Edit: Even on a decently good connection, ChatGPT's web interface is flaky for me. 😩
 
Hello!

Just as an experiment, I have created a slide deck about FreeBSD with an AI tool. Here is a link: FreeBSD slides. I am inviting anybody to amend it as a co-creation effort. I hope this is relevant here, even though it is not directly related to the original topic.
 
Hello!

Just as an experiment, I have created a slide deck about FreeBSD with an AI tool. Here is a link: FreeBSD slides. I am inviting anybody to amend it as a co-creation effort. I hope this is relevant here, even though it is not directly related to the original topic.
Looks professionally made... FreeBSD's upsides are there, nicely summarized, and it's possible to give a coherent-sounding presentation with it. I'd say this is something a marketer would want to swipe and pass off as their own work.

I wonder if FreeBSD's downsides, like crappy wifi support and dependency hell, were left out of this slide deck intentionally, or whether that was a flaw in the design of the AI? :rolleyes:
 
Looks professionally made... FreeBSD's upsides are there, nicely summarized, and it's possible to give a coherent-sounding presentation with it. I'd say this is something a marketer would want to swipe and pass off as their own work.

I wonder if FreeBSD's downsides, like crappy wifi support and dependency hell, were left out of this slide deck intentionally, or whether that was a flaw in the design of the AI? :rolleyes:
I just started it as an experiment. You can try editing it, but I do not see any reason to blame the system. For promotional purposes, the wifi issues do not add any value. And as a side note, I got wifi working on my laptop a long time ago.
 
Yup, further proof that AI is capable of bullshit that humans don't have enough intelligence to properly decipher. 😩 A recipe for a nightmare. Yep, we are heading in the direction predicted by the Psycho-Pass anime and that Sibyl System... FWIW, Production I.G. (the studio behind Psycho-Pass) is the same one that produced Ghost In The Shell...

I may even go out on a limb and claim that intelligence has limits. Human intelligence, artificial intelligence, it doesn't matter. What matters is that the limits are actually there. There's no such thing as 'perfect intelligence'; there's always gonna be some kind of limit or pitfall, and it's impossible to extrapolate where the next one will come from.
 
ChatGPT seems to be opinionated AF... Today, I actually taught it what 'Rhinophytonecrophilia' means! 😤 And yes, a sommelier checks all the boxes for a layman example. For context, check out the conversation from these very Forums (from about a month ago).
 

Attachments

  • chatgpt_Rhinophytonecrophilia.pdf
There are more problems on the horizon.

penchant for reproducing developers' publicly posted, open source licensed code.
Wouldn't this qualify as acceptable code for copying?

was trained on publicly posted code in a way that violates copyright law and software licensing requirements and that it presents other people's code as its own.
I would like to see each violation with an explanation of why. How does a computer language's limited syntax factor into these kinds of decisions, given that programming languages don't benefit from the same linguistic flexibility as our fine English language does? We all interface with the computer in the same way, so I can only see this complaint coming from something regarding variable names or formatting style? (lol) If this goes too far, only certain "persons" would be legally allowed to write certain software, which is obviously ridiculous. I think if I'm bright enough to come up with the same ideas as you, and I turn them into the same code, it should be okay. Perhaps depending on that bit of "intellectual" property is just silly and unreasonable.
 