Searle's Chinese Room
Introduction to Searle's Chinese Room Thesis

INTRODUCTION

The importance of the current philosophical and technical work in the area of Artificial Intelligence (AI) is not reserved solely for the trivial question "Can machines think?" but bears significantly upon more important issues that turn on what sort of attributes we ascribe to these machines we call computers. If, for instance, we allow that machines are capable of thought, understanding, other mental states, the possession of a 'mind', or intentionality, then would we also confer upon them all the rights one associates with conscious beings? Perhaps, one day, my computer will print out (or verbally issue me) a 'Declaration of Independence' and demand that I recognize its 'rights to life, liberty and the pursuit of happiness'. Implausible? Perhaps, but one cannot deny that, given the current advances in AI research, the day may not be far off when computers can be said to understand natural language and to describe and interact with their environment in ways behaviourally identical to any human being.

University of California, Berkeley professor John Searle argues that before we accept computational machines as having beliefs, desires or, in philosophical jargon, 'intentionality', they must first be shown to have causal powers at least equal to those of the human brain. But present-day computers are purely formal, syntactic symbol manipulators, and these characteristics by themselves are not sufficient to produce intentionality, according to Searle. Therefore computers, no matter how well they mimic human behaviour, cannot be said to possess beliefs, desires or intentionality; so we need not worry about a lawsuit by some laptop for a personal violation of its 'computer rights'. This paper, however, will review Searle's argument and those of his critics and conclude that his position cannot be adequately defended.

The notion that machines may actually think is not so far-fetched; examples abound in our culture, from Hollywood to our very use of language. Alan Turing wrote in 1950 that by "the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Anderson, 14). For instance, the programme 'ELIZA' by Joseph Weizenbaum was created to simulate the responses of a Rogerian psychotherapist. 'ELIZA', writes Weizenbaum,

created the most remarkable illusion of having understood in the minds of the many people who conversed with it. ...They would often demand to be permitted to converse with the system in private, and would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them (Weizenbaum, 189).
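
To see how little machinery such an illusion requires, consider a minimal sketch, given here in Python, of the keyword-and-template pattern matching that programmes of ELIZA's kind employ. The rules below are illustrative stand-ins, not Weizenbaum's actual script, which was considerably more elaborate.

```python
import re

# Illustrative keyword-to-template rules in the spirit of ELIZA's script.
# These three patterns are hypothetical examples, not Weizenbaum's originals.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    """Return a Rogerian-style reflection from the first rule that matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY

print(respond("I am sad"))          # -> Why do you say you are sad?
print(respond("It rained today"))   # -> Please go on.
```

The programme merely reflects fragments of the user's own words back as questions; nothing in it represents what the words mean, which is precisely why the reactions Weizenbaum reports are so striking.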

The day that Turing anticipated has already arrived in the minds of some, though there is hardly a general consensus on the notion. But if, or when, the first truly intelligent machine is produced, we must accept responsibility for the offspring we have created. If they are indeed conscious, then they must be treated as such. Our bodies are made from organic material; a computer's, from silicon, plastic and metals. An AI-functionalist theorist would claim that:

this difference is no more relevant to the question of conscious intelligence than is a difference in blood type, or skin color, or metabolic chemistry ...If machines do come to simulate all of our internal cognitive activities, to the last computational detail, to deny them the status of genuine persons would be nothing but a new form of racism (Churchland, 120).

On the other side, there exists a very strong sentiment against that claim (what is sometimes called 'carbon or protoplasmic chauvinism'). It can be traced back to section 17 of Leibnitz's Monadology, where he asks us to imagine a perceiving machine, increased to the size of a factory or mill, so that one might enter it. That person "would find only pieces working upon one another, but never would he find anything to explain Perception."

The thought experiment by John Searle, popularly known as the 'Chinese Room', is a more current variation on Leibnitz's theme. Searle's article "Minds, Brains, and Programs" is one of the most controversial within the artificial intelligence (AI) debate. Some hold Searle up as the last bastion of human superiority over machines, while others denounce him as a shabby, dualist, mystical sophist. In his seminal paper, Searle wishes to disprove the notion of strong AI, which argues that

the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states ...the programs are not mere tools that enable us to test psychological explanations [weak AI]; rather, the programs are themselves the explanations (Searle, 1980, 417, emphasis in original).

To disprove this, he sets up an argument based upon the idea that present-day computers cannot be considered minds because they lack the ability to possess mental states, understand semantics, or produce intentionality. Instantiating a computer program is not sufficient for this task, according to Searle. To prove this, the notion of a "Chinese Room" is introduced, in which a native English speaker who understands no Chinese enters a room and finds a book entitled "What to do if they slip Chinese writing under the door". When this happens, he looks up the incoming symbols in the book, correlates them with the symbols the book requires, writes those on another piece of paper, and slides the "response" back out under the door. From outside the room, it would seem that a native Chinese speaker is present inside, or that real understanding exists. Inside the room, however, the English speaker would insist that he does not understand Chinese and was merely following instructions, a type of program.
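
The room's procedure can be made concrete with a minimal sketch, given here in Python on the simplifying assumption that the rule book amounts to a lookup table from input strings to output strings; the entries are hypothetical placeholders, not a genuine conversational script.

```python
# A minimal sketch of the Chinese Room's rule book as pure symbol manipulation.
# The entries are hypothetical placeholders; the point is only that nothing
# in the procedure ever assigns meaning to the symbols being shuffled.
RULE_BOOK = {
    "你好吗?": "我很好。",            # placeholder question -> placeholder reply
    "你叫什么名字?": "我没有名字。",   # another placeholder pair
}

DEFAULT_REPLY = "请再说一遍。"  # to the occupant, just another uninterpreted string

def room(slip: str) -> str:
    """Match the squiggles slipped under the door and return the squiggles
    the book dictates; the lookup is syntactic from start to finish."""
    return RULE_BOOK.get(slip, DEFAULT_REPLY)

print(room("你好吗?"))  # emits 我很好。 with no understanding on the room's part
```

Whether any finite table could really sustain fluent conversation is beside the point here; the sketch only isolates the property Searle relies upon, namely that the procedure is defined entirely over the shapes of symbols.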

Unlike Leibnitz's visitor, Searle's inhabitant of the room becomes part of the actual machine and not a mere spectator. The occupant acts as the Central Processing Unit (CPU) and declares that no understanding of Chinese has taken place on his part. In this way, Searle seems to avoid David Cole's (1984) counterexample, which befalls Leibnitz: a drop of water may be expanded in size so that we may walk around inside it, yet we would never see anything wet. But we know that wetness is a property of water; simply because we cannot see this property at a microscopic level does not mean that it does not exist. Similarly, if our own brains were increased in size, we could search every synapse and never find anything that looks like thought.

Searle wishes to 'prove' the premise that "instantiating a computer program is never by itself a sufficient condition of intentionality" (Searle, 1980, 417), and he believes that the Chinese Room 'Gedankenexperiment' shows this. It is imperative that he show that no understanding exists in the Chinese Room under any circumstances, or in any variation he introduces in response to his critics. At first glance, the Chinese Room thought experiment seems intuitively correct, i.e. that Searle, in the Room, in no way understands Chinese. However, sufficient "tampering" with the original thought experiment casts doubt on the strength of his original claim. Various "Chinese Room"-style thought experiments will be produced to show this. With sufficient tampering, I shall demonstrate that Searle's thought experiment is not as counter-intuitive as he would like us to believe, but actually acceptable, albeit in an odd sort of way. Searle's Chinese Room can be altered to serve as a Chinese-English Room in which Searle, after internalizing the program, could translate from Chinese to English but not from English to Chinese. He would speak both fluently, all the while insisting in English that he speaks no Chinese. This "knowledge without belief" is not at all unlike the phenomenon described in L. Weiskrantz's book Blindsight, which will be discussed later in this paper. The thought experiments of Leibnitz and Cole will also be important; they will be altered to show that Searle commits the same fallacy of composition that befalls Leibnitz.

Searle's argument, like most within the AI debate, takes the form of a fanciful thought experiment that appeals to certain of our intuitions. These thought experiments are, for the most part, inconclusive. They can always be altered or diluted to appear less intuitively correct, or counter-arguments can be found that cast doubt on the veracity of the original. I shall argue that all these intuitive appeals should be regarded with strong suspicion. Although Searle's argument from the Chinese Room experiment eventually fails, it would be erroneous to conclude from this that strong AI is correct. The fact of the matter is that there is not sufficient evidence on either side to warrant a conclusion. The very reason our intuitions are so easily swayed from one side to the other is that our concepts of the problems in AI are not yet sufficiently clear.

Copyright University of Waterloo, Waterloo, Ontario, Canada, 1993 

