Ask Hackaday: The Turing Test is Dead: Long Live the Turing Test!


Alan Turing proposed a test for machine intelligence that no longer works. The idea was to have a judge converse over a terminal with both a real person and a computer. If the computer is intelligent, Turing mused, the judge won't be able to reliably tell which is which. Clearly, with the advent of modern chatbots, that test is now broken. Despite the "AI" moniker, chatbots aren't sentient or even pre-sentient, but they certainly seem that way. An AI CEO, Mustafa Suleyman, is proposing a new test: the AI has to take a $100,000 budget and turn it into $1,000,000.

We were a little bemused by this. By that measure, most of us aren't intelligent, either, and it seems like a particularly capitalistic idea. We could probably write an Excel script that studies mutual fund performance and pulls off the same trick, given enough time for the investment to mature. Is it intelligent? No. Besides, even humans who have demonstrated they can make $1,000,000 often sell their companies and start new ones that fail. How often does the AI have to succeed before we grant it person status?
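To put rough numbers on the "given enough time" part, here is a minimal sketch. The 7% average annual return is our assumption for illustration, not anything in Suleyman's proposal; plain compounding does the rest.

```python
# Back-of-the-envelope math for the $100,000 -> $1,000,000 test.
# Assumption: a 7% average annual return, roughly what a broad index
# fund has historically delivered. No intelligence involved, just patience.

def years_to_target(principal, target, annual_return):
    """Count whole years of compounding needed to reach the target."""
    years = 0
    balance = principal
    while balance < target:
        balance *= 1 + annual_return
        years += 1
    return years

if __name__ == "__main__":
    print(years_to_target(100_000, 1_000_000, 0.07))  # about 35 years
```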

Alan Turing never imagined chatbots when he proposed his famous test

But all this raises the question: What is the new test for sentience? We don't have a ready answer, and neither does science fiction. Everything from The Wizard of Oz to Star Trek has dealt with the question of when a robot or a computer is a being and not a machine.

Sentient AI and Pornography

Justice Stewart famously said of pornography, "I know it when I see it." Perhaps the same is true here. You would be hard-pressed to say that Commander Data from Star Trek was not sentient, despite the ongoing in-story debate. But why? Because he could solve problems? Because he could love others? Because he could grow and change?

This is a slippery slope. Solving problems isn't enough; you need to solve problems creatively, and that's tough to define. Dogs love others, but we don't consider them truly sentient. All sorts of plants and animals grow and change, so that's not quite it, either.

Claude Shannon Roots for the Machine

Claude Shannon once said that it was clear machines could think because we are machines, and we think. However, it is far from clear that we understand where that indefinable quality comes from. Shannon thought the human brain had about 10^10 neurons, so if we built that many electronic neurons, we'd have a thinking machine. I'm not so sure that's true.
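For scale, each "electronic neuron" in that argument is a very simple device. Here is a minimal sketch of a McCulloch-Pitts-style unit; the weights and threshold are arbitrary values chosen for illustration, not anything Shannon specified.

```python
# A single "electronic neuron" in the McCulloch-Pitts style: fire if the
# weighted sum of inputs crosses a threshold. The weights and threshold
# below are arbitrary illustration values.

def neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Wired like this, one unit computes a logical AND of two inputs:
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0
```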

That could be akin to someone unfamiliar with the concept of a computer saying, "Of course a motor can do computation: your printer has a motor in it, and your printer clearly does computations." There may be mechanisms at work in our brains that we do not yet fully understand. For example, the idea that our brains may be quantum computers is enjoying a resurgence, although it remains controversial.

IBM and DARPA have been building brain-analog computers for several years as part of project SyNAPSE. At last report, there was still no true electronic brain forthcoming.

Over to You

So what do you think? How can we tell if a computer is sentient or just a stochastic parrot? If you can't tell, does it matter? For that matter, can you prove that any human being other than yourself isn't a clever simulation? Or can you even prove it about yourself? Perhaps we should ask ChatGPT to develop a new Turing test. Or, fearing the implosion of the universe, perhaps not.