Can Chinese Rooms Think?

There’s a tendency, as a machine learning or CS researcher, to get drawn into philosophical debates about whether machines will ever be able to think like humans. The argument goes back so far that the people who started the field had to grapple with it. It’s also fun to think about, especially with sci-fi forever portraying apocalyptic AI-versus-human showdowns, with humans prevailing through love/friendship/humanity.

However, people in these debates tend to wind up talking past each other.

“Machines can never be surprised!” — an argument Turing already dealt with in “Computing Machinery and Intelligence.”

“We can’t even simulate ONE neuron right!” — this somehow weasels past the actual point of the debate, which is “Is it possible at all for a machine to think like a human?”, not “Can a machine think like a human right now?”

Eventually, someone more “well-versed” in philosophy will whip out Searle’s “Chinese Room” argument: Suppose a robot in a room full of cards with Chinese characters on them takes Chinese characters as input and follows its programming to produce Chinese characters as output. It does this convincingly enough to pass the Turing test. Now imagine a non-Chinese speaker who is handed a set of instructions in English, and who follows that instruction book to perform the same function the robot would. Since that person doesn’t understand Chinese but can still perform the task reliably enough to pass a Turing test, it follows, Searle argues, that the robot doesn’t understand Chinese either.
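
To make the setup concrete, here’s a toy sketch of the room as a program. This is my own illustration, not Searle’s formulation: the “rulebook” is just a hand-written lookup table, vastly smaller than anything that could actually pass a Turing test, but the point survives: the procedure manipulates symbols by shape alone and never consults their meaning.

```python
# A toy "Chinese Room": the operator (human or robot) blindly follows a
# rulebook mapping incoming symbol strings to outgoing ones. The entries
# below are invented for illustration; a real rulebook would be
# astronomically larger.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def the_room(incoming: str) -> str:
    """Look up the incoming symbols and copy out the prescribed reply."""
    # No meaning is consulted anywhere -- only the shape of the symbols.
    return RULEBOOK.get(incoming, "请再说一遍。")  # fallback: "Please say that again."

if __name__ == "__main__":
    print(the_room("你好吗？"))  # replies fluently without "understanding" a word
```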

It’s usually at this point that you have part of the crowd nodding along to the argument and another part pointing out a thousand different reasons why it doesn’t map.

“The understanding is the emergent function that the robot and the instruction book together form!”

I think there’s a more fundamental difference behind this polarisation: some believe human intelligence is special, unique to humans. Then there are people like me, who think we’re just glorified machines ourselves.

I graduated with a CS degree, so my close friends are mostly software engineers or researchers. As a result, most of them consider themselves “People of Science”: non-religious, non-spiritual. And yet I’ve noticed that in these debates, some will still resort to arguments like “machines can never be surprised.” I think this betrays a deeper belief they hold, one that contradicts their “science-y” persona.

I’ve developed a line of questioning over the years that, I think, teases out the beliefs of someone I’m discussing the AI apocalypse with.

  1. Do you believe that your/our mind is a result of the physical world?
    This means you don’t think our mind/consciousness comes from some other entity belonging to some other world; it is purely the result of the electrons, neurons and synapses in your head (there is a name for this: physicalism). How that works exactly is not being discussed here, just the belief that your conscious mind is the result of the grey goo inside your skull.
  2. Do you believe phenomena of the physical world can be simulated?
    This is simply asking whether physical phenomena could be reliably replicated in a computer, given sufficient computational power and a sufficient understanding of the universe. How we’d deal with the three-body problem is not being discussed here, just that with enough knowledge and big enough computers, we could do it.
  3. Do you think machines can think like humans?

Now, to me, if you agree to the first two propositions, you should agree to that third one as well: If our mind is of the physical world, and the physical world can be simulated, then our minds can be simulated. Potentially.
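
If you want to see just how little is going on logically, the argument is plain modus ponens. Here’s a minimal sketch in Lean, with the propositions modelled very crudely as predicates of my own naming (nothing here is standard notation):

```lean
-- A crude formalization of the three propositions (names are mine).
-- Phenomenon: the things of the physical world; Mind: our mind, as one such thing.
variable (Phenomenon : Type) (Mind : Phenomenon)
variable (Physical Simulable : Phenomenon → Prop)

-- Prop 1: the mind is a physical phenomenon.
-- Prop 2: every physical phenomenon can be simulated.
-- Prop 3 (the mind can be simulated) then follows by a single application.
example (h1 : Physical Mind)
        (h2 : ∀ p, Physical p → Simulable p) :
        Simulable Mind :=
  h2 Mind h1
```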

Of course, there can be a whole host of reasons why someone might not believe Prop 1 or Prop 2. I’ve heard of neuroscientists who, after years of studying the human brain, increasingly believe that our consciousness may lie on a different plane of existence. I’ve also debated someone who believes it’s impossible for machines to simulate physical reality, and not just because of our currently limited knowledge of the physical world. If that’s the case, at least you don’t wind up debating past each other: you can dig down and ask why they believe these things. Be open to these viewpoints.

As for the rest, those who accept Prop 1 and Prop 2 but reject Prop 3, I take some time to revel in their segfault as they realise there is a contradiction in their own belief system.

I care more about people being consistent within their own system of beliefs than about pushing my we’re-all-meat-robots ideology. After all, I don’t know if I’m right, and I mean, how cool would it be if it turned out our minds are a separate entity, and that we are special?

Comments

  1. What if our current understanding of the physical world is incomplete? Perhaps we are missing some crucial insight as to how physical processes give rise to thinking? Before Turing, people thought that the brain was like a regulator in a steam engine, or like a telephone switchboard (the most complex technology of the day). The interesting thing is that none of these analogies are wrong; they are just incomplete. Perhaps we are just missing insights that are just as fundamental as Turing’s, and perhaps present-day computers have no more chance of consciousness or thinking than a steam engine or a telephone exchange. I think that is the way to understand the Chinese Room argument.

    • This argument means you don’t accept Prop 2.
      It’s possible that our current understanding of the physical world is incomplete, but I’ve tried (probably not very successfully) to get across that the two propositions assume complete knowledge, because the question “Can machines think?” assumes an eventuality, not the present state of things.
      So with complete knowledge of our physical world, can machines simulate it?
