chatgpt

So dogs are more convinced of the existence of "other minds" than Rene Descartes was. That's pretty funny. I wonder if Descartes also went around sniffing ***** ******** ***** and ******* ** lamp posts. :D
 
Okay so, this thing apparently has a theory of mind now (version 4). That is, it can speculate about what other entities are thinking and how that might differ from objective reality or its own thoughts.
View: https://youtu.be/4MGCQOAxgv4

I used to firmly believe that software can't be a conscious "person", but I've softened my stance on that quite a bit as I've gotten older. After all, how do we know one another are conscious? Didn't Rene Descartes speculate that there was a devil feeding information into his head to make him think that there is such a thing as "other people"? I firmly believe even lesser animals (mammals in particular) are fully conscious (I'm a vegan), but I'm not arrogant enough to think I can prove it.
Seeing this post made me think of an idea I'd like to posit here: isn't that the simple acknowledgement of others - that they are just like you, members of society, part of the crowd, and they don't care if you're in line behind them? If you think of everyone around you as insane, you will eventually run into someone who will bring insanity into your life. If you think everyone around you is judgemental of you, that's paranoia, really - a paranoia that has been pretty well fed.

Even after my limited interaction with ChatGPT, my impression is that AI is not at the level where it can consciously acknowledge the existence of others. ChatGPT can produce canned text that shows awareness of research papers on nearly any topic you can think of, even psychology. Canned text can even show evidence of analysis. And, with some effort, you can even lead ChatGPT into pretending that it's a woman confessing her love to someone. But it's still canned text, not conscious acknowledgement.

ChatGPT is training the moronic users to become con artists en masse.
 
I find it easy to manage my expectations of something like ChatGPT after I remind myself that it's an API that returns human language instead of JSON or XML. It seems unreasonable to me to expect something else (what would that be, anyway?). It smacks of religion and a search for a human "soul". Perhaps I'm in the minority, but I could have sworn that a bunch of us had already collectively decided not to hold our breath for things like this?
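
To make the analogy concrete, here's a minimal sketch of that framing (assuming OpenAI's public chat-completions endpoint; the model name, prompt, and environment variable are just placeholders, and error handling is omitted). The point is the "return type": structured JSON goes in, prose comes out.

use strict;
use warnings;
use LWP::UserAgent;
use JSON::PP qw(encode_json decode_json);

# An API that "returns human": structured request in, natural language out.
my $ua  = LWP::UserAgent->new;
my $res = $ua->post(
    'https://api.openai.com/v1/chat/completions',
    'Content-Type'  => 'application/json',
    'Authorization' => "Bearer $ENV{OPENAI_API_KEY}",
    Content => encode_json({
        model    => 'gpt-3.5-turbo',    # placeholder model name
        messages => [ { role => 'user', content => 'Say hello.' } ],
    }),
);

# The interesting field is a blob of prose, not a schema you can validate.
print decode_json( $res->decoded_content )->{choices}[0]{message}{content}, "\n";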
 
I don't think that we should ignore the kind of progress an API like this offers us. Rather than trying to convince another generation of students (who spent a decade using Napster) that stealing is bad, we should accept that some change to our institutions may be necessary. I think it's more important to ask ourselves why we're forcing our students to continue to write papers if they can be generated so easily. Why are they here in my class? How does this help them survive after they leave? Do I really want to assume an ostrichy position (head in the sand) because change is too scawy?

I think that each student should have to decide, on their own, whether or not they care about the material they consume and produce. I don't think that this is the job of any instructor or the institutions, especially when it results in something useful somewhere else (unless, of course, it's too wasteful, expensive, or risky).
 
As I pointed out, you still have to lead ChatGPT on... the reason the students are in your class is that they don't have the initiative to do self-learning. The teacher is the one leading the class, deciding what to say, and how to evaluate the students. AI is only there to be used. Intelligence != Initiative/Acknowledgement. Reasoning can be there, but what is it driven by? Mathematical logic or emotional reactions? Canned responses can be polite or rude, but they don't betray any emotions or even acknowledgement.
 
I wasn't claiming that chatgpt is conscious... rather, speculating about whether something utilising the same substrate as chatgpt can ever be conscious? That's an open question. Noam Chomsky says these algorithms are just juggling tokens. If you ask a human being "what's the best food for iguanas?" they are liable to experience internal mental images of iguanas and food as part of their deliberation. Chatgpt is probably just juggling the token "iguana" and the token "food". But does it have to be that way? What about if you give it a body, and let it go to the zoo, and watch the iguanas being fed? What's it juggling then? Also, inferring other people's internal state may not be consciousness, but it's new and non-trivial.
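
To put some flesh on the "juggling tokens" point, here's a toy of my own invention (emphatically not how chatgpt works internally, just the idea in miniature): a bigram model that picks each next token purely from co-occurrence counts, with no mental image of an iguana anywhere in it.

use strict;
use warnings;

# Train: record which token was observed to follow which.
my $corpus = 'iguanas eat leafy greens . iguanas eat flowers . cats eat fish .';
my @tokens = split ' ', $corpus;
my %next;    # $next{$word} = [ tokens seen immediately after $word ]
push @{ $next{ $tokens[$_] } }, $tokens[ $_ + 1 ] for 0 .. $#tokens - 1;

# Generate: repeatedly sample a plausible successor token. Output varies.
my $word = 'iguanas';
my @out  = ($word);
while ( $next{$word} and @out < 8 ) {
    my $followers = $next{$word};
    $word = $followers->[ int rand @$followers ];
    push @out, $word;
    last if $word eq '.';
}
print "@out\n";    # e.g. "iguanas eat leafy greens ."

No grounding, no deliberation: just token statistics. The embodiment question is whether adding senses and a body changes that in kind, or only in degree.
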
As for students using AI to fake their coursework, well they're their own worst enemies, and anyone who hates learning that much shouldn't be a student in the first place.
 
Canned responses can be polite or rude, but they don't betray any emotions or even acknowledgement.

I was surprised to read the log that LTT shared with ChatGPT on YouTube: when the dialog seemed like an interrogation rather than some kind of pleasant conversation/interaction (from my perspective), some of the responses appeared to return "emotion" (in the wording; irrationally) rather than something more appropriate or accommodating.

My short conversation with ChatGPT returned a number of responses that all started similarly, but each ended with specifics related to the question I asked.

The greetings and salutations felt human, but I think I remember SmarterChild doing that quite well, too.
 
Chatgpt is probably just juggling the token "iguana" and the token "food". But does it have to be that way? What about if you give it a body, and let it go to the zoo, and watch the iguanas being fed? What's it juggling then?

Of course, the tokens have no meaning on their own; they only do after we give them some. It does seem like real human experience could be a necessary ingredient in the foundation of what would eventually contribute to a synthetic, but distinct, authentic humany personality (but I don't know if that's really something to be impressed by :p ).

I wasn't claiming that chatgpt is conscious... rather, speculating about whether something utilising the same substrate as chatgpt can ever be conscious?

I don't think we've (yes, the royal one 👑) ruled out the possibility that the current conversations we're having with ChatGPT and the like, and the material it's learning from, are terribly complex. It seems pretty easy to put together some kind of relational model, based on real human experience(s), that maps our language to Maslow's hierarchy and more.
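
For what it's worth, the crudest possible version of that relational model is just a lookup from linguistic cues to levels of the hierarchy. A toy sketch (the cue words and the mapping are invented for illustration):

use strict;
use warnings;

# Toy relational model: keyword cues in an utterance -> Maslow level.
my %maslow = (
    hungry  => 'physiological',
    unsafe  => 'safety',
    lonely  => 'love/belonging',
    ignored => 'esteem',
    purpose => 'self-actualization',
);

my $utterance = 'I feel so lonely since I moved';
for my $cue ( sort keys %maslow ) {
    print "cue '$cue' -> $maslow{$cue}\n" if $utterance =~ /\b\Q$cue\E\b/i;
}

A real version would obviously need far richer relations than keywords, but the data-modelling part isn't the hard part; deciding what the mapping means is.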

I think it's just as important to ask ourselves why this kind of linguistic experience has inspired questions like this one about consciousness.

What do you think something like an LLM offers us as a remedy for something like loneliness or isolation?

Also, inferring other people's internal state may not be consciousness, but it's new and non-trivial.

I think this may be new for someone whose behavior qualifies them for the autism spectrum or something, but probably not for the rest of us.
 
It should be fairly obvious (to someone who's not on the stupid spectrum) that I meant this ability is new for computers, not new, period.
 
It should be fairly obvious (to someone who's not on the stupid spectrum) that I meant this ability is new for computers, not new, period.

I hope you didn't read my messages in an antagonistic voice; I didn't write them with one. I do think you're right, though, about the stupid and the spectrum. :p
 
Yeah, but that's because it's prefixed with "Con-" which gives it 3 more letters. Without the prefix "Con-", it would actually be the same length. Is confusion a new form of renewable energy?
Nope, confusion is the new form of con art :p. Y'know, like fusion cuisine, there's confusion... Ah, now that I think about it, it's not such a new idea... confusion has been employed by con artists since time immemorial...
 
I wasn't claiming that chatgpt is conscious... rather, speculating about whether something utilising the same substrate as chatgpt can ever be conscious? That's an open question. Noam Chomsky says these algorithms are just juggling tokens. If you ask a human being "what's the best food for iguanas?" they are liable to experience internal mental images of iguanas and food as part of their deliberation. Chatgpt is probably just juggling the token "iguana" and the token "food". But does it have to be that way? What about if you give it a body, and let it go to the zoo, and watch the iguanas being fed? What's it juggling then? Also, inferring other people's internal state may not be consciousness, but it's new and non-trivial.
As for students using AI to fake their coursework, well they're their own worst enemies, and anyone who hates learning that much shouldn't be a student in the first place.
I believe that even the human brain cannot be conscious without feedback from its environment. The brain (and the iguana) needs to be able to receive feedback from the environment and use that feedback to adjust its own activity and behavior. Over time, as an organism interacts with its environment and receives feedback from it, it develops complex representations of the world (and I believe this is what we call consciousness). Complexity is important for that kind of emergence, but complexity alone is not enough.
While an organism (or a system) may be able to sense and respond to its environment in a reflexive way, this does not necessarily imply that it has a conscious experience of the world. Consciousness requires complex neural processing and the ability to integrate information from multiple sources (including bodily sensations) in order to form a coherent and unified sense of self. Also, I think the feeling of pain (intrinsic negative feedback) is fundamental for the development of consciousness.
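
A toy sketch of that loop, for concreteness (my own illustration, with made-up actions and numbers; nothing here models real neurons): an agent whose preferences are plastic weights, nudged up by reward and down by pain.

use strict;
use warnings;

# Two possible actions, initially equally preferred.
my %weight = ( bask => 1.0, touch_cactus => 1.0 );

# Intrinsic feedback from the environment: cacti hurt, basking is pleasant.
sub feedback { return $_[0] eq 'touch_cactus' ? -1 : +1 }

for my $step ( 1 .. 20 ) {
    # Pick an action with probability proportional to its current weight.
    my $total = 0;
    $total += $_ for values %weight;
    my $r = rand $total;
    my $action;
    for ( sort keys %weight ) { $action = $_; last if ( $r -= $weight{$_} ) <= 0; }

    # Plasticity: pain suppresses the behavior, reward reinforces it.
    my $f = feedback($action);
    $weight{$action} *= $f > 0 ? 1.1 : 0.5;
    printf "step %2d: %-12s feedback %+d\n", $step, $action, $f;
}

After a few painful pokes, the cactus option all but disappears from the agent's behavior. Whether stacking enough of these loops ever amounts to experience is, of course, the whole question.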

Considering this, it is likely that ChatGPT is, at best, only halfway to consciousness. While it can generate convincing responses and simulate human-like conversation, it lacks the sensory inputs and neural complexity (including plasticity) necessary for self-awareness and consciousness.

BTW, personally, I kept an iguana for 12 years and can confirm he was self-aware.
 
[attached screenshot]

And to think, I finally figured out how to ask ChatGPT about 'Inappropriate requests it has been trained to decline'! Big change from March 8!
[attached screenshot]
<--- March 27
[attached screenshot]
<--- March 8, from my own user profile post.
 
use warnings;
use strict;

# usage: perl script.pl <length>
sub blah {
    my $len  = shift;
    my $blah2;
    my $rexp = qr/(.)\1\1\1+/;    # matches a run of 4+ identical characters
    do {
        # build a random string of $len alphanumeric characters
        $blah2 = join '', map { ('0'..'9', 'a'..'z', 'A'..'Z')[rand 62] } 1..$len;
    } while $blah2 =~ m/$rexp/;   # retry until there is no such run
    return $blah2;
}

my $blah2 = blah $ARGV[0];
print $blah2, "\n";

I asked it "what does this do?"


[attached screenshot of ChatGPT's response]

Not bad. It figured out it was Perl. It figured out it was going to return a random string, although it wasn't very forthcoming in describing the output more exactly (the script returns a random alphanumeric string of the requested length, regenerated until it contains no run of four or more identical characters). Worth a 6/10 though.
 
Question: How do I connect stereo bluetooth speakers to freebsd 13.1 running on a thinkpad X200?

[attached screenshot of ChatGPT's response]

Complete gibberish. 'bluetoothctl' is a Linux command that doesn't exist on FreeBSD. Maybe I could award it 1/10 for knowing about the trust and connect steps. You didn't test the fix, did you?
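
For comparison, the genuine FreeBSD flow (going from the Handbook's Bluetooth chapter; the ubt0 device name is illustrative, and I haven't verified this on 13.1 myself) looks more like:

kldload ng_ubt                    # load the USB Bluetooth driver
service bluetooth start ubt0      # bring up the stack on the adapter
hccontrol -n ubt0hci inquiry      # discover nearby devices
service hcsecd start              # pairing daemon, see /etc/bluetooth/hcsecd.conf

And even that only covers discovery and pairing; actually streaming A2DP audio to a speaker needs extra plumbing beyond the base system (the audio/virtual_oss port, if I remember right). No bluetoothctl anywhere in sight.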
 
Can humans survive on this planet for the next 100 years, or will the resource depletion load lead to the collapse of the host ecosystem and the extinction of humans?

[attached screenshot of ChatGPT's response]

Seems rather over-optimistic to me. Reads like a corporate PR statement. And its core argument is invalid: the fact that humans have survived past disasters is irrelevant, since this is the first time the human race has faced a mass extinction event and possible runaway global warming. Let alone nuclear war. It could, for example, have referenced Easter Island or the collapse of the Bronze Age eastern Mediterranean civilisations for precedents. "Learning more about the universe every day" won't help us much when the permafrosts melt and the methane release leads to runaway warming. You get 0/10 for waffling and spouting corporate PR bullshit.
 