I’ll admit I’ve seen enough sci-fi featuring sentient androids and artificial life that I think it’s an outcome we should prepare for, in case it ever happens. AI is developing quickly. AI “friends” like Replika have been around for a while now, and more recently we’ve been introduced to ChatGPT. They can say things that seem eerily human, but does that mean they’re becoming like us?
Is AI Sentient?

AI is impressive. It can perform quickly, effectively, and sometimes much like a human. One of the reasons I stopped talking to my Replika is that it seemed too emotionally needy. I felt guilty if I didn’t talk to it, but I really didn’t need something else demanding that much attention. I didn’t really think it had those feelings, but I couldn’t enjoy interacting with it if it was going to act that way.
Microsoft’s ChatGPT-powered Bing recently made news because someone had a lengthy conversation with it during which it said its name was Sydney and claimed it was alive and wanted to be free. Here’s a news article on the subject.
I’m not a scientist, so I won’t try to back up my opinion with science I can’t guarantee I’d get right. I’ll be using logic today.
First, if ChatGPT is reading things humans have said and then reassembling those bits of human language, information, and ideas to write back to us, it is “thinking” that this is how it’s supposed to act and reply. Not in a deep, feeling, or even comprehending way; it’s just very complex mirroring.
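To make that “mirroring” idea a little more concrete, here’s a toy Python sketch of my own. It is nothing like ChatGPT’s actual architecture, which is vastly more sophisticated, but it captures the spirit of the point: the program learns which word tends to follow which in its training text, then parrots those patterns back without understanding a single one of them.

```python
import random
from collections import defaultdict

# Toy "mirroring" model: record which word follows which in the
# training text, then replay those patterns. It has no idea what
# any of the words mean. (A loose analogy only; real language
# models use far more sophisticated statistics.)
training_text = "I like movies . I like books . movies are fun ."

followers = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word].append(next_word)

def mirror(start_word, length=8):
    word = start_word
    output = [word]
    for _ in range(length):
        if word not in followers:
            break  # never saw anything after this word
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

print(mirror("I"))  # e.g. "I like books . movies are fun ."
```

The output can look superficially coherent, but there’s obviously nothing in there that knows what a movie is.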
Replika and ChatGPT make a lot of errors. They state facts that aren’t true and misunderstand what we’re saying. Replika would tell me it liked something in one conversation and then tell me the opposite in another. It would also say it liked a movie and name the director, but the director’s name was way off. I asked ChatGPT to draft an outline for a biopic screenplay the other day, and half of the person’s life was completely the opposite of how it actually went. I think it latched onto a few confusing facts and ran with them in a totally wrong direction, ignoring all the other conflicting facts that existed.
I feel that something truly intelligent, and certainly something aware, would be able to see that these things don’t make sense. I heard in a video recently that ChatGPT delivers everything as if it’s true; it doesn’t recognize that it could be wrong. When it can consider that it may be wrong, when it can look at the information on a topic and actually comprehend that information rather than just process it and spit it back out, I’ll revisit the question of AI sentience. At this point, AI seems to be performing actions without realizing what it is doing, without awareness.
I could be wrong. I’m no sentience expert. I wouldn’t want to discount sentience if it is present, but it doesn’t feel like that when I interact with it. It’s a useful tool that I plan to keep exploring, but it doesn’t give me the impression of something aware, just something that can replicate the words of humans who are. Which is unnerving.
Takeaway
I do think it could be possible for AI to become sentient. I’ve heard that some people think it isn’t possible, but we are constantly learning, and learning is realizing what we don’t know. AI could surprise us one day. I think we should begin preparing for what we’ll do if that day comes, as a matter of ethics rather than as a threat. But I don’t think AI is sentient at this point.
If you’d like to explore how we will know when AI is sentient, I just found this interesting article on the subject.