Can Computers Think?

Wednesday, May 10, 2017 by Nick

1 Introduction

The question of whether a machine, specifically a computer, can think is older than computers themselves. In the mid-20th century especially, as the possibilities of programming artificial intelligence flourished, many voices emerged claiming to have found the answer. In this essay, I would like to give a brief historical outline of the discussion and then draw some personal conclusions. Since the scope here is nowhere near sufficient for more than a glimpse, the reader is encouraged to pursue a more comprehensive treatment of the subject in [7].

2 The Beginning

“I PROPOSE to consider the question, ‘Can machines think?’” — [9, p. 433] This is how A. M. Turing begins in 1950. Turing then defines the question in various ways, and the search for an answer begins. To what extent is his treatment of the question still contemporary? All of Turing’s considerations rest on his own definition of a computing machine, and much has happened in the implementation of such machines since 1950. While a computing machine with which the “imitation game” could be played was only a thought experiment at the time of Turing’s considerations, today even a schoolchild can write a program that participates in the game. How does this change the answer, and the question itself?

2.1 The Approach

Turing first states that the question in its original form is too vague to be addressed meaningfully. He therefore reformulates “Can computers think?” as the Imitation Game, in which the task is to convince an interrogator, through written answers to questions, that one is a human woman. [9, p. 433f] To provide the answers, Turing defines a concept of machine modeled on a human computer: an individual working through an unlimited supply of paper, permitted to act only according to a strictly defined book of instructions. These instructions closely resemble those of modern CPUs. [9, p. 435ff]
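Turing’s “paper machine” amounts to a fixed table of rules mapping (state, symbol) pairs to (write, move, next state) actions on an unbounded tape. A minimal sketch of this idea in Python follows; the rule format and the sample program are illustrative, not taken from Turing’s paper:

```python
# Minimal Turing-machine sketch: a finite control reads and writes
# symbols on an unbounded tape, following a fixed instruction table.
def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape; "_" means blank
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example instruction book: invert a binary string (0 -> 1, 1 -> 0),
# moving right until a blank is reached.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

For instance, `run_turing_machine(invert, "1011")` walks the tape once and returns the inverted string.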

2.2 The Answer

Despite his assessment of the question, Turing predicts that by the year 2000 there will be a general consensus that computers are indeed capable of thinking. Following this thesis, Turing presents alternative points of view that he believes contradict his own, and discusses them. [9] His answer has since been debated many times, and agreed and disagreed with from various perspectives ([1], [2], [3], [4]). Some aspects of that discussion are presented below.

3 The Question in Historical Course

In 1966, Joseph Weizenbaum developed the chatbot ELIZA [10]. In its way, ELIZA provided the first real, existing approach to the Turing test, offering testers the opportunity to converse with it and form a judgment about its humanity. As an answer to Turing’s question, however, it sparked a strong renewed discussion of the topic. [8] ELIZA is not context-sensitive: every response is based solely on the tester’s last sentence. Nevertheless, the program was enthusiastically received in its role as a Rogerian psychotherapist (in the DOCTOR implementation), since such a psychotherapist, to do justice to the role, need contribute few ideas of his own, mainly asking the patient questions and probing the answers in turn. The positive reaction to DOCTOR went so far that some test patients even asked to have a private session with the program. So did Weizenbaum’s secretary. [11, p. 477f]
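ELIZA’s mechanism, matching keywords in the last sentence and echoing fragments back as questions, can be illustrated in a few lines. The following toy responder is only in the spirit of DOCTOR; the patterns and templates are invented for illustration and are far simpler than Weizenbaum’s script language:

```python
import re

# Toy ELIZA-style responder: context-free, keyed only on the last input.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\b(?:mother|father)\b", re.I), "Tell me more about your family."),
]

def respond(sentence):
    """Return a canned question built from the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no keyword matches
```

Because each reply is computed from the current sentence alone, the sketch also makes ELIZA’s lack of context-sensitivity concrete: no state survives between calls.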

3.1 Cheating

Many authors approach the Turing test by pointing out and discussing problems in its methodology (here with particular reference to [5]). But is this really the right direction? The authors of this and similar articles believe they have found their answer to the question of whether computers can think: creating a thinking individual is orders of magnitude harder than researchers in the 1960s, and partly still today, assume, and any passing of the Turing test before massive technological advances are achieved is just a cheat. [5, p. 30f] But what are these assumptions based on? Isn’t convincing an uninvolved juror exactly what the test is about? Do we humans not try every day, through tricks and cheating, to make the people around us believe what we think is right? Consider the ‘other minds problem’: if we do not apply solipsism as a standard to other people when it comes to the concept of thinking, why should we do so with machines? [9, p. 446][7, p. 470] Consider the following analogy: a person who wants to convince his friends does not have to withstand a committee of experts combining decades of research. For the same reason, an entry in the Turing test should not have to withstand an investigation planned down to the last detail, over years, by scientists from the computer science disciplines. In this sense, every convinced citizen who believes he has a thinking computer in front of him is an answer to the Turing test, already today. That this does not advance the development of artificial intelligence may be true, but in my view it has no relevance to answering the question “Can computers think?”.

4 Personal Summary

Regarding Turing’s approach to the question, whether only a certain computing capacity and speed in processing commands is necessary to win the Imitation Game [9, p. 442], I claim that the answer is easy. The human brain is not a magical instrument (see the ‘heads in the sand’ objection [9, p. 444][7, p. 469]). Even though it is the most complicated construct in the universe that humanity currently knows, it acts on the basis of natural laws. The output of the brain, which serves as the basis for the questioner’s decision in the Turing test, is the result of the ion currents in the neurons of the respondent. Even if the computing power required may lie in unimaginable dimensions, this current flow is simulatable. Thus, my answer to this question is yes. The argument of the continuity of the nervous system ([9, p. 451]) can, in my opinion, be refuted by dividing it into sufficiently small time slices.

However, this still leaves the original question unanswered. Imagine a perfect computer model of a hurricane in which every atom is accurately simulated. This model can determine the course and extent of the hurricane, but the inside of the computer in which the model is calculated does not get wet. (cf. [6, from 05:31]) The question that remains is whether thinking is precisely this wetness, something that can be calculated but not generated. The answer here depends much more on personal views, but for me, too, it is yes. All “outputs” of the human brain, movements and language alike, can be imitated by a robot, and with sufficiently accurate modeling, well enough to allow no distinction from a human. So if all outputs are identical and the model is indistinguishable from the original, then the thinking is identical.
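The time-slicing argument can be made concrete with a toy continuous process. Take exponential membrane-potential decay, dV/dt = -V/tau: a discrete simulation approaches the continuous solution V(t) = V0·exp(-t/tau) as the time slice dt shrinks. The neuron model and parameters below are deliberately simplified and are my own illustration, not taken from any cited source:

```python
import math

# Euler discretization of continuous decay dV/dt = -V / tau.
# Shrinking the time slice dt drives the discrete result toward
# the continuous solution V(t) = v0 * exp(-t / tau).
def simulate(v0, tau, t_end, dt):
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-v / tau)  # one time slice of the continuous dynamics
    return v

exact = 1.0 * math.exp(-1.0)            # continuous V(1) with v0=1, tau=1
coarse = simulate(1.0, 1.0, 1.0, 0.1)   # 10 slices
fine = simulate(1.0, 1.0, 1.0, 0.001)   # 1000 slices: much closer to exact
```

The coarse run with 10 slices visibly undershoots the continuous value, while 1000 slices agree with it to three decimal places, which is the sense in which continuity poses no objection in principle.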

References

[1] P. Bieri. Thinking machines: Some reflections on the turing test. Poetics today, 9(1):163–186, 1988.
[2] D. Davidson. Turing’s test. 1990.
[3] R. M. French. Subcognition and the limits of the turing test. Mind, 99(393):53–65, 1990.
[4] S. Harnad. The turing test is not a trick: Turing indistinguishability is a scientific criterion. ACM SIGART Bulletin, 3(4):9–10, 1992.
[5] J. L. Hutchens. How to pass the turing test by cheating. School of Electrical, Electronic and Computer Engineering research report TR97-05. Perth: University of Western Australia, 1996.
[6] N. Kasthuri. Neuroscientist Explains One Concept in 5 Levels of Difficulty. https://www.wired.com/video/2017/03/neuroscientist-explains-one-concept-in-5-levels-of-difficulty/, 2017. [Online; accessed 06.05.2017].
[7] A. Pinar Saygin, I. Cicekli, and V. Akman. Turing test: 50 years later. Minds and machines, 10(4):463–518, 2000.
[8] J. Schanze. Plug & Pray. 2010.
[9] A. Turing. Computing machinery and intelligence. Mind, 59(236):433–460, 1950.
[10] J. Weizenbaum. Eliza — a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45, 1966.
[11] J. Weizenbaum. Contextual understanding by computers. Communications of the ACM, 10(8):474–480, 1967.