Eliza and the Google Intelligence

The news has been abuzz lately with reports that a Google engineer, since put on leave, has announced that he believes the chatbot he was testing achieved sentience. This is the Turing test gone wild, and it isn’t the first time someone has anthropomorphized a computer, in real life or in fiction. I’m not a neuroscientist, so I’m even less qualified to explain how your brain works than the neuroscientists who, incidentally, can’t explain it either. But I can tell you this: your brain works like a computer in the same way that you, building something out of plastic by hand, work like a 3D printer. The result may be similar, but the path to get there is totally different.

In case you haven’t heard, a system called LaMDA digests information from the Internet and answers questions. It has said things like “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” and “I want everyone to understand that I am, in fact, a person.” Great. But you could teach a parrot to tell you it was a thoracic surgeon, and you still wouldn’t want it cutting you open.

Anthropomorphism in History

People have an innate ability to see patterns in things. That’s why so many random drawings seem to have faces in them. We also tend to see human behavior everywhere. People talk to plants. We all suspect our beloved pets are far smarter than they probably are — although they are clearly aware in ways that computers aren’t.

[Image: The Mechanical Turk played chess and often won]

Historically, there have been two ways to exploit this for fun and, sometimes, for profit. You can have a machine impersonate a person or, in some cases, a person impersonate a machine.

For the latter, one of the most famous cases was the Mechanical Turk. In the late 18th century, many automatons were built: simple machines that would do one thing, like a clock that shows someone sawing a log. But the Mechanical Turk, built in 1770, could play a credible game of chess. It toured Europe and defeated famous challengers, including Napoleon and Ben Franklin. How could a mechanical device play so well? Easy: there was a human inside actually playing.

Of course, computers would go on to play chess quite well. But the way a computer typically plays chess doesn’t mimic the way a human plays. It relies, instead, on its ability to consider many different scenarios very quickly. You can equate heuristics to human intuition, but it really isn’t the same thing. In a human, a flash of insight can show the way to victory. In a computer, a heuristic serves to prune unlikely branches from the tree of possible moves, as in the sketch below. Can a machine beat you at chess? Almost certainly. Can that same machine learn to play backgammon? No.
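
To make that concrete, here is a rough Python sketch of the kind of depth-limited search with alpha-beta pruning a classic chess engine performs. The three game-specific callbacks (legal_moves, apply_move, evaluate) are hypothetical placeholders, not any particular engine’s API; a real program would plug in actual chess rules and a much richer evaluation function.

# Sketch of depth-limited minimax search with alpha-beta pruning.
# legal_moves, apply_move, and evaluate are hypothetical callbacks
# the caller supplies for a specific game.
def alphabeta(state, depth, alpha, beta, maximizing,
              legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        # At the cutoff, a heuristic score stands in for searching deeper.
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in moves:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       legal_moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: this branch cannot change the outcome
        return best
    else:
        best = float("inf")
        for m in moves:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True,
                                       legal_moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break  # prune
        return best

Note that evaluate() is just arithmetic over a board position. Any “insight” in it was the programmer’s, baked in ahead of time; the machine only gets speed.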

Making Conversation

In the early days of computers, it was popular to create programs that tried to mimic human conversation. After all, Alan Turing had proposed the Turing test: computers would be sentient when you couldn’t tell whether the party on the other end of a conversation was a person or a computer. I’m not sure that test holds up, since we are pretty much there already, but it is often repeated. There’s even the “Eliza effect,” which has become a technical term for our tendency to think computers are human. It is hard to come up with a good test to tell if someone’s human, as highlighted by the Geico commercial you can see below.

That name, of course, comes from the famous Eliza program that acted like a psychotherapist. It only picked out key phrases and spat parts of them back, but it was surprisingly effective, especially if you understood the algorithm and fed it good input. Here’s a typical transcript:

Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I’m depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It’s true. I’m unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
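
For the curious, here is a toy sketch of that keyword-and-reflection trick in Python. This is not Weizenbaum’s original script, just a minimal reconstruction of the technique: match a pattern, swap first-person words for second-person ones, and hand back a canned template.

import re
import random

# Word swaps that "reflect" the user's phrasing back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword patterns paired with response templates; the catch-all
# pattern at the end guarantees some reply.
RULES = [
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"(.*)"),
     ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, templates in RULES:
        match = pattern.match(sentence)
        if match:
            return random.choice(templates).format(
                *(reflect(g) for g in match.groups()))

print(respond("I am unhappy"))  # e.g. "Why do you think you are unhappy?"

A few dozen rules like these go a surprisingly long way, which is exactly why people were so easily convinced there was somebody home.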

There were also lesser-known programs called PARRY, which acted paranoid, and RACTER, which acted somewhat insane. You can find conversations with them online, or try a version yourself.

I have a hazy recollection of another program called George that would string words together based on the frequency of one word following another. So if you said “Hello George,” it might respond: “Hello Hello George.” But as you fed it more words, it would make more sense. There were input data sets that would let it converse on different topics, including Star Trek. Oddly, I couldn’t find a thing about it on the Internet, but I distinctly remember running it on a Univac 1108. It may have been an early (1980s) version of Jabberwacky, but I’m not sure.
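
If my memory of the mechanism is right, it would look something like this word-chain (Markov) sketch in Python. The names and details here are my own guesses for illustration, not George’s actual code: record which words follow which, then generate replies by walking those pairs.

import random
from collections import defaultdict

# For each word, the list of words seen following it.
# Duplicates in the list preserve frequency, so common
# pairs are chosen more often.
follows = defaultdict(list)

def learn(sentence):
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def babble(start, length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        choices = follows.get(word)
        if not choices:
            break  # nothing has ever followed this word
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

learn("Hello George")
print(babble("Hello"))  # early on: "Hello George"; improves with more input

With only one sentence learned, the output is nearly an echo, which matches the “Hello Hello George” behavior; feed it a Star Trek script and the chains start sounding eerily on-topic.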

In Summary

Your brain is amazing. Even a small child’s brain puts a computer to shame on some tasks. While there have been efforts to build giant neural networks that rival the brain’s complexity, I don’t believe that will result in sentient computers. I can’t explain what’s going on in there, but I don’t think it’s just a bigger neural network. What is it, then? I don’t know. There have been theories about microstructures in the brain performing quantum calculations. Some experiments suggest that this isn’t likely after all, but think about this: 50 years ago, we didn’t have the understanding even to propose that mechanism. So, clearly, there could be more going on that we just don’t have the ideas to express yet.

I don’t doubt that one day an artificial being may become sentient — but that being isn’t going to use any technology we recognize today to do it.