The Chinese Room Experiment: Computers With Minds?


The Chinese Room thought experiment is a hypothetical situation posed by the American philosopher John Searle to demonstrate that the ability to manipulate a set of symbols in an orderly way does not necessarily imply an understanding of those symbols. That is, understanding does not arise from syntax alone, which calls into question the computational paradigm that the cognitive sciences have developed to explain the functioning of the human mind.

In this article we will see what exactly this thought experiment consists of and what kind of philosophical debates it has generated.


The Turing machine and the computational paradigm

The development of artificial intelligence is one of the great attempts of the 20th century to understand and even replicate the human mind through the use of computer programs. In this context, one of the most popular models has been the Turing machine.

Alan Turing (1912-1954) wanted to show that a programmed machine could hold conversations like a human being. To do so, he proposed a hypothetical situation based on imitation: if we program a machine to imitate the linguistic ability of speakers and then place it before a set of judges, and it makes 30% of those judges believe they are speaking with a real person, this would be sufficient evidence that a machine can be programmed to replicate the mental states of human beings. Conversely, this would also serve as an explanatory model of how human mental states work.


Based on the computational paradigm, part of the cognitivist current suggests that the most efficient way to acquire knowledge about the world is through the ever more refined reproduction of information-processing rules, so that, regardless of each person's subjectivity or history, we could function and respond in society. On this view, the mind would be an exact copy of reality: the place of knowledge par excellence and the tool for representing the outside world.

After Turing's proposal, several computer systems were programmed to try to pass the test. One of the first was ELIZA, designed by Joseph Weizenbaum, which responded to users by matching their input against patterns previously registered in a database, leading some interlocutors to believe they were talking to a person.
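The kind of mechanism ELIZA relied on can be sketched in a few lines: surface patterns paired with canned reply templates, with no model of what the words mean. The rules below are illustrative placeholders, not Weizenbaum's original script.

```python
import re

# Each rule pairs a regex pattern with a reply template. The program
# matches surface form only; nothing here represents meaning.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    # Try each rule in order; echo the captured fragment into the template.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am worried about my exams"))
# "How long have you been worried about my exams?"
```

The reply can sound attentive while the program does nothing but reflect the user's own words back through a template, which is precisely why some interlocutors were fooled.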

Among more recent inventions in the lineage of the Turing test we find, for example, CAPTCHAs, which detect spam bots by distinguishing humans from machines, or Siri in the iOS operating system. But just as there have been those who try to prove Turing right, there have also been those who doubt him.


The Chinese room: does the mind work like a computer?

Based on the experiments that sought to pass the Turing test, John Searle distinguishes between Weak Artificial Intelligence, which simulates understanding without intentional states (that is, it describes the mind but does not equal it), and Strong Artificial Intelligence, in which the machine has mental states like those of human beings, for example, the ability to understand stories the way a person does.

For Searle, creating Strong Artificial Intelligence is impossible, which he set out to show through a thought experiment known as the Chinese room. This experiment poses the following hypothetical situation: a native English speaker who knows no Chinese is locked in a room and must answer questions about a story that has been told to him in Chinese.


How does he answer them? By means of a rule book, written in English, that explains how to order the Chinese symbols syntactically without ever explaining their meaning: it specifies only how they should be used. Through this exercise, the person in the room answers the questions appropriately even though he has not understood their content.
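The rule book described above can be sketched as a pure lookup table. The strings below are arbitrary placeholders standing in for Chinese questions and answers; the point of the sketch is that correct-looking answers are produced by matching the shape of the input alone, with no access to meaning.

```python
# The "rule book": maps incoming symbol strings to outgoing symbol strings
# by form only. Neither key nor value is ever translated or interpreted.
RULE_BOOK = {
    "符号串A": "符号串B",  # if the question looks like A, hand back B
    "符号串C": "符号串D",
}

def person_in_room(question: str) -> str:
    # A purely syntactic step: recognize the input's shape, emit the
    # prescribed output. This is all the person in the room ever does.
    return RULE_BOOK.get(question, "符号串X")  # placeholder "default" symbol
```

To an observer receiving only the outputs, this table-following is indistinguishable from comprehension, which is exactly the asymmetry Searle exploits.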

Now suppose there is an external observer. What does this observer see? That the person inside the room behaves exactly like someone who does understand Chinese.

For Searle, this shows that a computer program can imitate a human mind, but that does not mean the program is the same as a human mind, because it has neither semantic capacity nor intentionality.

Impact on understanding the human mind

Applied to humans, this means that the process by which we develop the ability to understand a language involves more than possessing a set of symbols; other elements are necessary that computer programs cannot have.

Moreover, this experiment has broadened studies on how meaning is constructed and where that meaning resides. The proposals are very diverse, ranging from cognitivist perspectives, which hold that meaning is in each person's head, derived from a set of mental states or given innately, to more constructionist perspectives, which ask how socially constructed systems of rules and historically situated practices give terms their social meaning (a term has meaning not because it is in people's heads, but because it enters into a set of practical rules of language use).


Criticism of the Chinese Room thought experiment

Some researchers who disagree with Searle consider the experiment invalid because, even though the person inside the room does not understand Chinese, it may be that the person taken together with the elements that surround him (the room itself, the furniture, the rule manual) constitutes a system that does understand Chinese.

Given this, Searle responds with a new hypothetical situation: even if we remove the elements surrounding the person in the room and ask him to memorize the rule manuals for manipulating the Chinese symbols, this person would still not understand Chinese, and neither, therefore, would a computational processor.

The response to this same criticism has been that the Chinese room is a technically impossible experiment. The response to that, in turn, has been that technical impossibility does not entail logical impossibility.

Another of the most popular criticisms is the one made by Dennett and Hofstadter, which applies not only to Searle's experiment but to thought experiments in general as developed over recent centuries: their reliability is doubtful because they rest not on rigorous empirical evidence but on speculation close to common sense, so they are, above all, "intuition pumps."

Bibliographic references:

  • González, R. (2012). The Chinese Room: a thought experiment with a Cartesian bias? Chilean Journal of Neuropsychology, 7(1): 1-6.
  • Sandoval, J. (2004). Representation, discursivity and situated action: a critical introduction to the social psychology of knowledge. Chile: Universidad de Valparaíso.
  • González, R. (n.d.). "Intuition pumps", mind, materialism and dualism: verification, refutation or epoché? Repository of the University of Chile. Online; accessed April 20, 2018. Available at http://repositorio.uchile.cl/bitstream/handle/2250/143628/Bombas%20de%20intuiciones.pdf?sequence=1.