UBC Theses and Dissertations

Machines Cannot Think (Gell, Robert George, 1966)


MACHINES CANNOT THINK

by

Robert George Gell
B.Sc., University of British Columbia, 1962

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF M.A. in the Department of PHILOSOPHY

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April, 1966

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Philosophy
The University of British Columbia
Vancouver 8, Canada
Date: 29 April 1966

ABSTRACT

This paper is a critical essay on the question "Can machines think?", with particular attention paid to the articles appearing in an anthology Minds and Machines, A. R. Anderson, editor. The general conclusion of this paper is that those arguments which have been advanced to show that machines can think are inconclusive.
I begin by examining rather closely a paper by Hilary Putnam called "Minds and Machines" in which he argues that the traditional mind-body problem can arise with a complex cybernetic machine. My argument against Putnam's is that either there are no problems with computers which are analogous to the ones raised by mental states, or, where there are problems with machines, these problems do not have at bottom the same difficulties that human experience raises. I then continue by showing that a cybernetic machine is an instantiation of a formal system. This leads to a discussion of the relationship between formality and predictability, in which I try to show that some types of machine are in principle predictable. In the next section I attempt to prove that any discussion of outward signs of imitative behavior presupposes that some linguistic theory, such as a type reduction, has been substantiated. The force of this argument is that such a theory has not in fact been substantiated. I offer some general theory about the complexity of concept-property relations.

Finally I give a demonstration that no test or set of tests can be found that will be logically sufficient for the ascription of the concept "capable of thought." If this is successful, then I have shown that no test can be found which, when a machine is built to pass it, is logically adequate for saying that that machine can think. This argument is offered as further criticism of the Imitation Game which A. M. Turing proposed as an adequate test for thinking subjects.
Besides the specific conclusion that insufficient evidence has been offered to say that machines can think, this paper offers a more general conclusion: that most standard problems have at bottom a linguistic difficulty. However, this general conclusion is a broad speculative one of which the work in this paper is only a small exemplification, and as such it reflects mainly the further ambitions of the author.

ACKNOWLEDGEMENT

I wish to thank my two typists, Sue Reeves and April Toupin, and also Steve Porche who proof-read the manuscript and offered some valuable suggestions. Unfortunately I cannot acknowledge the help or encouragement of any of the members of the department in the preparation of this work. I should like to dedicate this thesis to Lois, whose encouragement, though seldom acknowledged, was ever-present.

TABLE OF CONTENTS

Section                                        Page
I.    Introduction ............................ 1
II.   The Analogy Between Men and Machines .... 4
III.  Formality and Predictability ........... 19
IV.   What is Behavior? ...................... 27
V.    A Test for Thinking .................... 39
VI.   Conclusion ............................. 50
Footnotes .................................... 54
Bibliography ................................. 56

SECTION I

INTRODUCTION

This is a paper on the question "Can machines think?" and its general conclusion is negative. It is difficult to give an exact characterization of the problems that philosophers are interested in when they discuss this question. However, it would be fairly safe to say that the problems are those posed by the recent advances in digital and analogue computers.
These machines have been built to perform a great variety of human tasks, and the question naturally arises as to whether or not we must say of some 'super' computer that it thinks. In this respect, of course, it is of interest to consider the definition of a mechanical computer to see if there are any limitations serious enough to justify us in withholding the designation 'capable of thought'. Before we can decide whether or not a machine thinks, a great number of secondary problems must be tackled, and these problems are of wide general philosophic interest. Furthermore the philosophic importance of recent developments in mathematics and physics must also be assessed. So potentially the problem "Can machines think?" could lead us into very general philosophic speculation. However, an article by A. M. Turing in 1950 sparked a whole series of papers in the philosophic journals, some of which were collected by A. R. Anderson in an anthology called Minds and Machines. This paper is a criticism of the main arguments presented by those who feel that machines can think, with particular attention given to those articles in Minds and Machines.
There are several conclusions arrived at in this paper. The argument in the second section attempts to show that there is no serious analogy between men and machines. That is to say, no serious analogy in the sense that those problems which are raised because of the uniqueness of human experience are not raised with very complicated computers. In the third section I show that a Turing machine is formal, and as such is, in the important sense, predictable. The fourth section is an attack upon the possibility of building a computer to imitate human behaviour. The argument is that until certain things are shown about our behavioural concepts, the problem of imitation cannot arise. Of course the force of this argument is that these things have not been shown. Finally in the fifth section I try to show that no test can ever be constructed which will logically be adequate for the application of the concept 'capable of thinking'. This argument is meant to undercut the long debate which has gone on criticizing Turing's "Imitation Game", which was proposed as a test for thinking.

Most of the arguments of this paper are an exemplification of a general philosophic approach. This approach is one in which attention is focused on the concepts that we use. By doing this, attention is drawn to the complexity of these concepts, particularly in their logical structure. It is argued that too little attention is given to the complexity of language, particularly with respect to our behavioural concepts.
At times it is argued that until some problems about the nature of concepts are answered, no decision about the possibility of constructing robots can be made. So in a sense, I think that the general nature of the thesis of this paper can be said to be linguistic. There is also a larger thesis behind this paper, but upon which none of the arguments depend. This is the idea that most questions of philosophic importance can be put in the form of a problem about the logic of concepts. If this is so, and a systematic way can be found for discovering the logic of concepts, then the main problems of philosophy can be solved within a science of language. This paper does not attempt to establish this thesis but rather is meant to be in some small way an exemplification of it. Thus the arguments of this paper try to show that the problems connected with thinking machines can all be given a linguistic interpretation, although no attempt is made to give a method for discovering the logic of concepts. I should mention, finally, that this paper does not reiterate in any detail the arguments which have already been made in the many papers on this subject. In fact it is assumed that the reader is familiar with most of the arguments, and in particular that the reader is very familiar with some specific articles. In some places this paper is an extension of some very thorough work by other philosophers. But in general the criticisms of this paper are very broad and are intended to undercut many of the standard ideas connected with the problem "Can machines think?"

SECTION II

THE ANALOGY BETWEEN MEN AND MACHINES

Hilary Putnam in his paper "Minds and Machines" tries to draw an analogy between the various states of a complex cybernetic machine (called a Turing machine) and the corresponding states of a human being. He maintains that a machine has logical and structural states, just as a human has mental and physical states, and also that those arguments which support the identity or non-identity of mental and physical states show the same thing about logical and physical states. As we all know, a machine is capable of a complete mechanistic (causal) explanation and has no hidden or otherwise mysterious parts. Thus if Putnam can sustain his analogy between men and machines, he thinks that this will go some way (he does not think it would be conclusive) in substantiating a mechanistic thesis. It is my contention in this section that Putnam fails to find the analogy that he is looking for.

Putnam's thesis rests on two main claims. He tries to show that the proposition "I am in state A if and only if flip flop 36 is on" is, from the machine's point of view, synthetic, or, what is taken to be the same thing, at least empirically verifiable. This will make it analogous to the proposition "I am in pain if and only if C-fibres are stimulated" and will depend upon there being different methods of verification of state A and flip flop 36 being on.
His other claim is that the logical-structural distinction is analogous to the mind-body one in that there can be a logical description of the machine's computations just as there is a mental description of human activity. I hope to show that even from the point of view of the machine, the above proposition is not synthetic, and also that the logical-structural distinction is not analogous to the mind-body distinction.

Putnam considers a Turing machine 'T' which can be in a number of states, one of which is named A. As he says, "a Turing machine is a device with a finite number of internal configurations, each of which involves the machine's being in one of a finite number of states, ..." I presume that any particular state of T is defined as a unique combination of certain circuits being activated, certain circuit breakers being open, and certain vacuum tubes operating, and further that other circuits are dead, other circuit breakers are closed, and other vacuum tubes are not operating. It may be the case however that the condition of some components of the machine is irrelevant in the determination of some state, (say) state A. For the discussion in this section, let us define state A as that state of a Turing machine in which flip flop 36 is on and all the other circuits are irrelevant. This last clause, "and all the other circuits are irrelevant", can be expanded into a finite list.
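The device Putnam's quoted definition describes can be made concrete with a minimal sketch: a finite set of internal states, a tape, and a transition table. The particular states and the sample table below are invented for illustration, not taken from Putnam.

```python
# A minimal sketch of a Turing machine: a finite number of internal
# states, a tape of symbols, and a transition table. The states ('A',
# 'HALT') and the toy table are illustrative assumptions only.

def run_turing_machine(table, tape, state, halt_states, steps=1000):
    """Run a one-tape Turing machine; table maps (state, symbol) to
    (new_state, new_symbol, move), where move is -1 (left) or +1 (right)."""
    cells = dict(enumerate(tape))   # sparse tape; blank cells read as '_'
    head = 0
    for _ in range(steps):
        if state in halt_states:
            break
        symbol = cells.get(head, '_')
        state, cells[head], move = table[(state, symbol)]
        head += move
    return state, ''.join(cells[i] for i in sorted(cells))

# A toy table: flip every 0 and 1, halting on the first blank.
table = {
    ('A', '0'): ('A', '1', +1),
    ('A', '1'): ('A', '0', +1),
    ('A', '_'): ('HALT', '_', 0),
}
state, result = run_turing_machine(table, '0110', 'A', {'HALT'})
print(state, result)   # HALT 1001_
```

The point of the sketch is only that the machine has a finite number of named internal states and moves between them deterministically according to its table.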
Instead of specifying whether the other components of the machine should be closed or non-operational, we can say of some circuit which is irrelevant that it can be either open or closed, either operational or non-operational. In this way we can expand the definition of state A into a finite list, such as: flip flop 36 is on, flip flop 1 is either on or off, flip flop 2 is either on or off, etc. As I said, this description is finite because there are a finite number of components in any machine. We can now generalize our description of what a state is by saying that any state of a Turing machine is equivalent to a list of the various components of the machine stating either that they are on, off, or either on or off; active, inactive, or either active or inactive; etc. Thus any particular state could be pictorially represented by a plan of the machine showing the conditions of the various circuits, circuit breakers, tubes, magnetic fields, relays, etc.

We could build into T a sub-machine (sub-T) which could check the condition of the various components of T and which would print out (say) onto the input tape of T the results that it obtained. If we wish to check for the various states that T is in, it will simplify our job considerably if we determine what are the sufficient features of each particular state which differentiate it from all the other states of T. Then we could speak of the sufficient conditions for any particular state.
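The finite-list expansion of state A can be sketched directly: a state is an assignment of "on", "off", or "either" (irrelevant) to every component of a finite machine. The component names and the size of the machine below are illustrative assumptions; only the choice of flip flop 36 follows the text.

```python
# A sketch of a machine state as a finite list: every component is marked
# on, off, or "either" (irrelevant). The hundred flip flops are an
# invented, illustrative machine.

ON, OFF, EITHER = 'on', 'off', 'either'
COMPONENTS = [f'flip_flop_{i}' for i in range(1, 101)]   # a finite machine

# State A: flip flop 36 is on; every other component is irrelevant.
STATE_A = {c: EITHER for c in COMPONENTS}
STATE_A['flip_flop_36'] = ON

def satisfies(configuration, state):
    """Does a total on/off configuration fall under a state description?"""
    return all(state[c] == EITHER or state[c] == configuration[c]
               for c in COMPONENTS)

config = {c: OFF for c in COMPONENTS}
config['flip_flop_36'] = ON
print(satisfies(config, STATE_A))   # True
```

Because the list of components is finite, the expanded definition of any state is finite, just as the text claims.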
There will be many of the configurations of state B which are different from C or D but not from E or F. But that configuration of the various components of T which is sufficient to differentiate some state from all the others will be called the sufficient conditions of that state. Now that we have this machine, we can ask it to verify the statement "I am in state A when and only when flip flop 36 is on." To give a plausible situation for this to arise, imagine that we have just built T and theoretically the position of flip flop 36 should be the sufficient condition for state A. We ask the machine to check (or, as Putnam considers, the machine itself considers checking) the above statement. The method would be theoretically simple. The machine enters state A and sub-T reports the condition of flip flop 36. The machine then enters every other state (which is a finite number) and compares the reports of sub-T on flip flop 36 to the first report. If the subsequent reports are all different from the first one, the proposition is true. There is however a vast practical problem of getting the machine to go through every other state, and making sure that none are missed. However, this aside, the statement seems open to an empirical solution, making it synthetic.

Putnam wants to say that if some bright person raised the question of the identity of state A and flip flop 36 being on, the same objections could be raised against identity in the machine case as are raised in the case of the identity of being in pain and C-fibres being stimulated. In the mind-body case it is argued that since there are different ways of knowing about the states to be identified, the two states could not be identical. These same considerations hold in the machine case. The way that T determines the state of flip flop 36 is from the reports of sub-T, and the way that it determines what state it is in is from the original input order to enter the state. So there are two different ways of knowing about the two states. Thus state A is not identical with flip flop 36 being on. At this point Putnam leaves the reader with the choice of saying either there is a 'mind-body' (or logical-physical state) problem with machines or else the human mind-body problem is merely linguistic.

Before we take Putnam's choice, let us go back and see whether or not the considerations are actually parallel. I gave an example earlier in which the statement "I am in state A if and only if flip flop 36 is closed" was synthetic. But the example I gave to illustrate that was the case of checking the operation of some machine which had just been constructed. It is a normal assumption in the discussion of machines that we are only considering 'theoretical' machines; i.e., those that never have mechanical failures. I assume that Putnam is talking about the same machines that Turing, Church, and Davis were, and these were theoretical machines.
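The checking procedure described above (enter state A, have sub-T report on flip flop 36, then enter every other state and compare reports) can be sketched as follows. The four states and the behaviour of sub-T here are invented for illustration.

```python
# A sketch of the verification procedure: T enters state A, sub-T reports
# the condition of flip flop 36, then T enters every other state and the
# reports are compared with the first. The states and the report
# function are illustrative assumptions.

STATES = {          # each state fixes flip flop 36's condition
    'A': True, 'B': False, 'C': False, 'D': False,
}

def sub_T_report(state):
    """Sub-T inspects the machine's components and reports whether
    flip flop 36 is on in the given state."""
    return STATES[state]

def flip_flop_36_is_sufficient_for_A():
    first = sub_T_report('A')
    # "I am in state A when and only when flip flop 36 is on" holds just
    # in case every other state yields a different report from the first.
    return all(sub_T_report(s) != first for s in STATES if s != 'A')

print(flip_flop_36_is_sufficient_for_A())   # True
```

Because the set of states is finite, the loop terminates; the "vast practical problem" the text mentions is only that a real machine has enormously many states, not that the procedure is ill-defined.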
If T is a theoretical machine, then the case I gave to illustrate the synthetic nature of the statement could not arise. By dealing with theoretical machines we eliminate the possibility of malfunction in the machine, so the problem of seeing whether or not the machine functions as designed cannot arise. But perhaps there is a further sense in which the statement is synthetic. Isn't it an empirical question as to whether or not the position of flip flop 36 is the sufficient condition of state A, i.e. is the position of flip flop 36 the feature of the internal condition of the machine which makes state A different from all other states? But this question is not the original question, but rather the one as to whether "I am in state A if flip flop 36 is closed." This is of course rather obvious because the necessary and sufficient conditions are a complete description of the circuits, circuit breakers, and tubes being on, off, or either, in the proper configuration for state A. That these configurations are the proper ones is not an empirical or synthetic question but rather a question of naming or defining just which configuration would be state A. Since the statement "I am in state A if and only if flip flop 36 is closed" is one about the necessary and sufficient conditions of state A, it is a matter only of the way the machine was set up; i.e., a matter of the initial stipulation.
That the definition of state A is a matter of initial stipulation, though, does not prevent the question about the definition of state A being asked. The machine may consider, or some programmer unfamiliar with T may consider, the truth of the proposal "I am in state A if and only if flip flop 36 is on". This will be a difficult, but not insoluble, problem, but this alone will not show the proposition to be synthetic. The initial assumptions of any system, or the original constructional correspondences of any machine, may be difficult to determine, but this does not prevent them from being stipulations (or axioms or definitions). Thus the fact that there is quite a problem, which one may fail to solve, in ascertaining the initial stipulations of the various states of the machine does not show that these correlations (namings, stipulations) are not analytic.

The main argument however is that the ways of determining state A and the position of flip flop 36 are different, and thus it seems an entirely contingent matter whether or not the two things are identical. "For instance," Putnam says, "the machine might be in state A and its sense organs might report that flip flop 36 was not on." In which case the machine would have to decide whether to say the proposition was false or to attribute the discrepancy to observational error. This problem which Putnam poses for the machine could never arise with a Turing machine, because we are assuming that the machine functions correctly for as long as we want it to.
So there is no possibility of an observational error in a Turing machine, and if there were an 'observation' of flip flop 36 being off when the machine was in state A, then the only conclusion is that the given statement is false. But if the exact problem which Putnam raises cannot arise, still we have the fact that there are two independent ways of verifying each part of the proposition. The way the machine determines the position of flip flop 36 is from the input report of sub-T. But how does T determine which state it is in? The machine determines this from the initial input order which was given it (or even which it gave itself). At no time does the machine directly observe that it is in state A, as Putnam claims. The machine infers from the evidence of the input order to the actual internal configuration. Also the machine infers from the evidence of the input results of sub-T to the actual internal configuration. Thus in determining whether it is in state A or whether flip flop 36 is on, the machine makes an inference from evidence which is presented to it. Although the evidence is different, the method of verification is the same in both cases. Furthermore, since we are dealing with theoretical machines, we assume that no mechanical failures occur and that there have been no mistakes in programming. So, for a Turing machine it is not possible that T be given an order and fail to execute it, or that sub-T report incorrectly. Thus both the input order and the report of sub-T become definitional criteria for state A.
Therefore, if the proposition "I am in state A" means that the machine has been given the order to enter state A, either by itself or by some programmer, and since flip flop 36 being on is a necessary condition for state A, then the proposition "I am in state A if and only if flip flop 36 is on" is analytic for a Turing machine. On both accounts then the case of the machine is different from the human case. The proposition "I am in state A if and only if flip flop 36 is on" is analytic, whereas the analogous proposition "I am in pain if and only if my C-fibres are stimulated" is synthetic. The ways in which the machine verifies both the state it is in and the condition of flip flop 36 are the same, whereas in the human case there is an in-principle difference between the ways of verifying that one is in pain and that one's C-fibres are stimulated. So Putnam has not built an analogous case with Turing machines.

Putnam then turns to showing that the question of whether a machine 'knows' what state it is in is a degenerate question. If he can show that it is degenerate in the way that the similar question about human knowledge of mental states is, this will add more evidence to the analogy between logical states of a machine and mental states of a human. So he compares the two questions "Does the machine 'ascertain' that it is in state A?" and "Does Jones 'know' that he is in pain?" in order to examine questions about the method of attaining knowledge of internal machine states.
He hopes to show that they are both degenerate, but I shall argue that the questions about machine methods are either not degenerate or, if they are, not for the same reasons that questions of method are for mental states. There is one obvious sense in which it can easily be said that the machine computed state A, and that is the case where the machine goes through a series of calculations which terminates in state A. But I take it that Putnam is interested in the case of whether or not a machine can be said to compute that it is in state A from state A alone. Before considering the question, though, we must add one more feature to our machine T, by supposing that whenever the machine is in one particular state (say state A), it prints out the words "I am in state A". This can be done in two ways: either every time we give the machine an instruction to enter state A, we next give it the instruction to print out "I am in state A", or else we can have the machine so constructed that every time it enters state A it also prints out "I am in state A". The question may now arise "Does the machine 'ascertain' that it is in state A?" According to Putnam, 'ascertain' is synonymous with 'compute' or 'work out'; so the question can be rephrased as "Does the machine 'ascertain' (or compute or work out) that it is in state A?" If we have a machine in which a further instruction is given it to print out "I am in state A", then the answer to the above question is yes, and the answer to the further query about how it ascertains or works it out is given by showing the programming required.
In this particular case it is a matter of the insertion of a sub-routine (granted it is a short one of one instruction) after the instruction to enter state A. So if we have this type of machine, the question is not degenerate. But if we have a machine that has built into it a programme such that every time it enters state A it prints out "I am in state A", then the printing out becomes part of the description, and thus a definitional condition, of the machine being in state A. (Mechanical errors are theoretically eliminated.) If this is the case then it loses its analogy with the human situation of someone 'evincing' "I am in pain", for the verbal statement is not part of the description of pain and not a definitional condition of being in pain. The question about the machine ascertaining or computing that it is in state A becomes degenerate because the fact that the machine printed out "I am in state A" is a definitional criterion of the machine's being in state A. Putnam says that the difficulty of degeneracy has, in both cases, the same cause: "namely, the difficulty is occasioned by the fact that the verbal report (I am in state A and I am in pain) issues directly from the state it reports..." But the print-out "I am in state A" is not a report, but a part of what is stipulated as being in state A; reports can be mistaken, but not definitional criteria.
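The two ways of arranging the print-out can be sketched side by side. The class names and methods below are invented for illustration; the contrast they draw (a separate instruction versus a report built into the definition of the state) is the one the text describes.

```python
# Two illustrative variants of machine T, sketched from the text:
# (i) the report is a distinct, further instruction after entering
#     state A, so asking how it is computed is not degenerate;
# (ii) the print-out is part of the definition of entering state A,
#     so the print-out is a definitional criterion of being in state A.

class MachineWithSubroutine:
    """Variant (i): the report is a separate one-instruction sub-routine."""
    def enter_state_A(self):
        self.state = 'A'
    def report(self):                     # given as a further instruction
        return 'I am in state A'

class MachineWithBuiltInReport:
    """Variant (ii): printing is part of what entering state A is."""
    def enter_state_A(self):
        self.state = 'A'
        return 'I am in state A'          # issued by definition, not report

m1 = MachineWithSubroutine()
m1.enter_state_A()
print(m1.report())

m2 = MachineWithBuiltInReport()
print(m2.enter_state_A())
```

In variant (i) one can sensibly ask how the machine worked out its report (by exhibiting the sub-routine); in variant (ii) the question is degenerate, since a print-out that is part of the state's definition cannot be mistaken.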
The question about the machine computing "I am in state A" from state A is degenerate because part of what is set up in this machine as state A is a description of the print-out mechanism printing "I am in state A", and not, as Putnam thinks, because "I am in state A" issues directly from the machine's being in state A. However, the statement "I am in pain", if it is degenerate, is not so for these reasons. In the human case, a person's saying that they are in pain is not a necessary condition either for their knowing themselves that they are in pain or for someone else knowing that they are in pain. The relation between the statement "I am in pain" and the pain is quite contingent, and it is this fact which gives rise, in the human situation, to the question of knowing about the pain in order to 'evince' "I am in pain". This analogous situation does not arise in a Turing machine. So the question of how a machine computes or works out what state it is in is not usually degenerate, but when the question is, it is not degenerate for the reasons that questions of knowing pain (if those questions are actually degenerate) are. To continue his analogy between machines and humans, Putnam shows that there are two types of machine states, logical states and structural states, and that these are analogous to the mental and physical states of human beings. As I mentioned earlier, any theoretical Turing machine is capable of being in a finite number of states, A, B, C, ..., and if the various programmes of this machine are already in memory, then the machine will change from one state to another according to its programming.
But as Putnam says, "a given 'Turing machine' is an abstract machine which may be physically realized in an almost infinite number of different ways," and, for any particular manufactured machine, its physical condition may vary from one condition to another. Thus any actual machine may be in a number of physical or structural states and yet may still be in the same logical state. So any particular machine can be thought of or described as a finite number of logical states or as a number of structural states, and the functioning of the machine can be expressed either entirely in terms of logical states, or again, entirely in structural states. This is, according to Putnam, analogous to the human situation in which the functioning of the human can be explained in terms of mental occurrences (e.g., Freudian explanation) or in terms of physiological changes (e.g., complete behavioural description). In order to assess this analogy, let us backtrack to the distinction between logical and structural states and consider briefly again just what logical states are. When we set up a Turing machine, we said that it could enter a finite number of states, A, B, C, ... etc. These states referred to something more or less explicit; namely, the internal configuration of some hypothetical machine. These states of the machine, A, B, C, ... must be explicit, at least to the extent that we can see that we can build some machine that will enter these states.
Thus if the particular state we are talking about is one in which the machine places the input data into memory space 683, we must be able to show that a machine can be built which will fulfil this function and consequently be able to enter this state. This could be done by laying out on the drafting board the possible configurations of circuits, relays, and vacuum tubes such that any machine which was built from these plans would be able to enter this particular state. This requirement, that the states of a Turing machine refer at least to one possible configuration of a machine, is absolutely essential. Otherwise we would beg the entire question. If we simply said that the machine could fulfil such-and-such a function and we did not specify how this could be accomplished mechanically, then we would simply be saying that machines can do whatever humans can, and I presume that it is just this question of whether machines can do everything humans can do that we are trying to answer. So unless we beg the question, we must be able to specify at least one mechanical configuration of a possible Turing machine for every state that we attribute to machine T. When we say that the internal configuration of state A must be specified, we do not mean that it must be explicitly laid out in every minute detail.
For example, if in specifying state A we say that there must be a circuit joining the scanner to the memory input compartment, we do not specify the length of the circuit, nor the chemical composition of the wire, nor even, for that matter, that it must be a wire which carries the impulse from one to the other. In fact there is no limit to the various ways that such a circuit could be set up. (The circuit is specified by the function (or purpose, or goal), and thus there are an unlimited number of actual mechanical ways of fulfilling the particular purpose. We could also have a messenger boy carry the message, but this would not be a mechanical solution. But we must show that there is at least one mechanical solution.) On the other hand, for any actual machine there will be a complete physical description of the various circuits, relays, tubes, etc., specifying the actual physical make-up of the machine. But these specifications must include at least those specifications which were laid down for the theoretical state. That is, those conditions which we specified for the T machine to be in state A must be included in (or deducible from) the physical specifications of this actual machine, although these physical specifications will also describe many properties which were not included in the theoretical considerations of state A.
Our initial specification of the properties of state A was abstract in the sense that it left open to the engineer building the machine many other properties to be specified before the machine could be built. But the computer's physical or structural description of state A will differ from the theoretical or logical description of state A only in that it describes more properties for the machine. Thus if we think of the structural description as designating a set of properties and conditions of T, the logical description will be a sub-set of these. Now it is usually thought that the difference between mental states and physical states is one of a more serious nature than just that mental states have the same but fewer properties than physical states. It is generally thought that the tests for determining physical properties are not applicable to the properties of mental states. Most of the philosophical speculation of the last few years has been an attempt to find some identity principle between the properties of our mental states and those properties which are objectively attributed to other people. Putnam doesn't even need an identity principle because there is only one type of property. He has failed to find two types of things between which we need to find some bridge or connection.
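The sub-set relation claimed here can be put in a minimal sketch. The property names are invented for illustration; the point is only that the logical description is recovered from the structural one by mere inclusion, with no bridging principle required:

```python
# Invented example: the logical description of state A as a sub-set of
# one engineer's structural description of the same state.

logical_state_A = {
    "scanner_connected_to_memory": True,
    "input_routed_to_memory": True,
}

# The structural description: the same properties, plus many more that
# the abstract specification left open to the engineer.
structural_state_A = {
    "scanner_connected_to_memory": True,
    "input_routed_to_memory": True,
    "wire_material": "copper",
    "circuit_length_cm": 12.5,
    "switching_component": "vacuum tube",
}

def realizes(structural, logical):
    """A structural description realizes a logical one just in case every
    logical property is included in (deducible from) it."""
    return all(structural.get(k) == v for k, v in logical.items())

print(realizes(structural_state_A, logical_state_A))
```

No analogous inclusion holds, on the argument above, between a person's physical description and his mental description; that is exactly the gap an identity principle would have to fill.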
From a complete physical description of a machine we can deduce the theoretical description, but until some identity principle is afforded by Putnam or someone else, we cannot deduce the mental description of a person from his physical condition. This identity principle which would bridge the gulf between mental and physical states may yet be found by philosophers; nevertheless, what is certainly true is that some principle is needed. In the case of a Turing machine there is no principle needed, because Putnam has failed to show that there is a type difference between the properties of logical and physical states. Therefore the difference between a logical and a physical description of a machine is not analogous to the difference between a mental and a physical description of some person's pain (say). Thus I conclude that the logical-structural distinction with machines is not analogous to the mental-physical distinction in the human situation. The conclusion of this section is not that there are no problems to be answered or distinctions to be made with complex Turing machines. The conclusion is rather that the problems raised or the questions asked by a Turing machine about itself are not problems for the same reasons that similar questions about humans are. The machine may ask itself questions of the same form as humans may, but the difficulty is not the same difficulty that a human has.
Similarly, many distinctions can be drawn in dealing with complex machines, but these also, I conclude, are not the same distinctions which philosophers have noted in the human case. Thus the problems which a complex Turing machine might face are not the same as those that humans try to answer, and in this sense the analogy between men and machines fails.

SECTION III

FORMALITY AND PREDICTABILITY

In this section, I wish to show that a Turing machine is a concrete instantiation of a formal system, and as such, is predictable. My demonstration that Turing machines are formal is not unique, but I feel that it is important that it should be shown rather explicitly. Many philosophers have argued that if a Turing machine is formal then Gödel's Incompleteness Theorem can help us to some interesting conclusions about machines. Some, such as Lucas, have argued that the Theorem refutes mechanism; others, such as Putnam and Turing, have argued that the Theorem has no bearing on the interesting philosophic questions. I shall argue, on the other hand, only that Turing machines are formal and that, in the important sense that philosophers have concerned themselves with, these machines are predictable. Before entering the problem of showing any limitations of a Turing machine, we must demonstrate rather clearly that any Turing machine can be represented as a formal system. My demonstration of this is essentially the one used by Martin Davis in the first chapter of his book, Computability and Unsolvability. As I explained in the first section, a machine can be in any one of a number of configurations, q1, q2, q3, ... up to some finite limit.
A tape, divided into discrete units, is fed into the machine, and in each unit there appears a letter of a language comprising a number of symbols, S0, S1, S2, ... up to some finite number. Furthermore, the tape is finite, but can be as long as is needed. One of the essential functions of a Turing machine is that it is able, upon the receipt of a symbol, to change from one state (say) q1 to another state q2. Not only can a Turing machine change states, but it can also change the symbol on the scanned unit, or it can move the tape along so that the next unit is scanned. This possibility of changing can be represented by a quadruple, such as q1 S1 S2 q2. The machine that this is a quadruple of will, if it is in state q1 and is scanning symbol S1, change to state q2, erase S1, and put the symbol S2 in the scanned tape unit. More generally, a quadruple stands for a machine built to carry out any instruction of the following form: when in state qx and the symbol Sm is on the tape unit being scanned, then change to state qy (where y may or may not equal x) and either change the symbol on the scanned tape unit to Sn or else scan the unit to the right or left. If a machine is capable of following out an instruction of that form, then it can be represented by a quadruple. It is important to notice that after the machine has carried out this instruction, it is in the original position again, in that it is in some state with a scanned unit in front of it. Thus the machine is ready to carry out another instruction of the same form. However, if
there is no such instruction built into the machine, then when it reaches that state and symbol, the machine will stop. Thus any machine which goes through a process or series of changes from one position to another can be represented by a series of quadruples. Since the number of states and symbols is finite, the number of quadruples will also be finite. Therefore all the possible movements of the machine can be described by a series of quadruples, so that this series actually defines the machine's possibilities. Any particular Turing machine can be represented, then, by a series of quadruples. But as I said, when the machine has finished one change it is in a position to carry out another. This continuous change of the machine is represented by a series of deductions. If we take the tape to be given for any particular machine, then by knowing which unit the machine will scan first and the state that the machine is in when it begins, we can deduce, using the list of quadruples of that machine, the various steps that the machine will go through to arrive at the answer. So considering the q's and S's as primitive words, and the original tape as initial axioms, and the quadruples as rules of inference, we have constructed an axiomatic system which, with the addition of a few more stipulations, can be made quite formal. And this system represents, in symbolic terms, the various changes that a Turing machine would go through in any actual problem. I shall, in what follows, state this fact rather briefly by saying that a machine is a concrete instantiation of a formal system.
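The deduction of successive configurations from the quadruples can be sketched as a small simulator. This is my own illustration of the quadruple idea, assuming Davis-style quadruples in which the third element is either a symbol to write or a move left or right; the particular states and symbols are invented:

```python
# A minimal Turing-machine simulator: each configuration is 'deduced'
# from the previous one by exactly one quadruple, read as a rule of
# inference. The machine stops when no quadruple applies.

def run(quadruples, tape, state, pos, max_steps=100):
    """quadruples: dict mapping (state, symbol) -> (action, new_state),
    where action is a symbol to write, or 'L'/'R' to move the scanner.
    Returns the final state, the tape, and the deduced configurations."""
    tape = dict(enumerate(tape))      # sparse tape; blank units read 'S0'
    history = []
    for _ in range(max_steps):
        symbol = tape.get(pos, "S0")
        if (state, symbol) not in quadruples:
            break                     # no rule of inference applies: halt
        action, new_state = quadruples[(state, symbol)]
        if action == "L":
            pos -= 1
        elif action == "R":
            pos += 1
        else:
            tape[pos] = action        # overwrite the scanned unit
        state = new_state
        history.append((state, pos))
    return state, tape, history

# Example: in q1 scanning S1, write S2 and enter q2; in q2 scanning S2,
# move right and enter q3; then no quadruple applies, so the machine halts.
quads = {("q1", "S1"): ("S2", "q2"),
         ("q2", "S2"): ("R", "q3")}
final_state, final_tape, steps = run(quads, ["S1"], "q1", 0)
print(final_state)     # q3
print(final_tape[0])   # S2
```

Given the initial tape (the axioms) and the quadruples (the rules of inference), every line of `history` follows from the line before it, which is the sense in which the machine instantiates a formal system.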
Finally, any theorems which apply to formal systems, as formal systems, will also apply to Turing machines. If we consider a computer as a discrete state machine whose motion follows some formal system, then it seems that whatever the machine does is predictable. If we know the initial state of the computer and we know its complete list of quadruples, then we can predict what the machine will do once we see its tape. However, does it follow from the fact that a machine is formal that it is predictable, and further, if the machine is not predictable does this show that it is not formal? Now there are several reasons to suggest that a computer is not predictable. One reason may be that we do not have enough knowledge of the machine. For example, we cannot predict (in general) when a complicated piece of machinery will break down because we don't know enough about the manufacture or structural composition of the various parts. But we attribute the inability to predict simply to our lack of knowledge, which we feel that we could get, given enough time and laboratory space. That is, we hold that for these reasons machines are not 'in principle' unpredictable. However, there are other reasons for the unpredictability of computers which stem from our inability to get knowledge. But this inability is not a practical matter but a theoretical one. I take it that the implication of Heisenberg's Uncertainty Principle is that measurements below fixed amounts are not possible, for the more accurately we measure the position of a particle the more inaccurate will be our measurement of its momentum.
So much so that i f we ever d i d measure the p o s i t i o n of a p a r t i c l e completely a c c u r a t e l y then we would n e c e s s a r i l y have made an i n f i n i t e e r r o r i n : i t s momentum. Thus, considering measurements of the utmost accuracy, we must, i n p r i n c i p l e , have a f i n i t e magnitude of e r r o r , and we are unable to p r e d i c t g r e a t e r anything i n t o an accuracy^than the accuracy of the accumulated e r r o r s . However as I s a i d , we are d e a l i n g w i t h measurements of great accuracy and of course we w i l l be measuring sub-atomic s t r u c t u r e s . For i f we want to make a measurement of something to the greatest accuracy we w i l l have to consider the object as a c o l l e c t i o n of sub-atomic p a r t i c l e s . But i f we consider the object or machine as a macroscopic u n i t , then usihg-macroscopic measuring d e v i c e s , we can, w i t h i n experimental e r r o r , measure, t e s t , and..predict the movements of the mechanism. So i f we spent a great de a l of time t e s t i n g the v a r i o u s parts of some machines, the above reasons would not be s u f f i c i e n t to show that any machine i s i n p r i n c i p l e u n p r e d i c t a b l e i n macroscopic u n i t s . I t i s g e n e r a l l y contended, however, that computers 2 3 i z which c o n t a i n randoming devices'are i n p r i n c i p l e u n p r e d i c t a b l e . I want .to examine two types of randomizers, (a) a counter of the number of radium atoms to have d i s i n t e g r a t e d i n the half-minute previous and (b).the decimal expansion of TT . I take the counter as an example o'f a device which we can never, r e g a r d l e s s of how much-knowledge we had,- p r e d i c t , i . e . the number which the counter has o n - i t at any moment- i s i n p r i n c i p l e u n p r e d i c t a b l e . 
The reason for our inability to predict may be due to the variations which affect the disintegration of radium atoms being of such a small magnitude that the Uncertainty Principle limits our investigation. (This would only show that we cannot investigate the laws governing disintegration, although there may be some.) But granting that there are in the world counters which are unpredictable in the strong sense that no increase in knowledge will ever avail in predicting them, what can we say about computers which contain these devices? Presumably, a computer with a random device will work as follows: the machine is given the instruction to look at the tape unit to the right, and there is no symbol on that unit. The symbol is not written on the unit until the tape is in the scanner, and then the symbol which is written on the unit is determined by the random device. In this way no one could predict how the machine would operate after this instruction, because we could not, in principle, know what symbol would be on the tape until the machine actually did scan the unit. However, this example is just another case of adding more information to the machine during its calculations. We can certainly build machines that will do some calculations and then come to a halt until more information is given to them. This would be the case where the machine works out the initial tape input, and when it stops we alter the tape, which is just the same as giving it a new tape. Then the machine will work again on this problem.
We can make this more sophisticated by having the machine itself add more information to the tape at certain stages of its calculations. And the case of having a randomizing device in the machine is an example of adding more information, but the information cannot be predicted. When we originally thought of the problem of predicting a computer, we were thinking of a machine which was given some calculations to do. In terms of the machine's formal system, the case of predictability arose where we had a finite list of quadruples and a given series of tape expressions. Then it was asked whether or not the machine's movements could be predicted. This is all quite analogous to the human situation where we give someone a problem and then try to figure out what their behaviour will be. But the original problem was not one of trying to predict how a machine would react when given more information later in the problem, information which we could not get ourselves. No one would think that you had shown a machine to be unpredictable if you proved that we cannot figure out in advance how the machine would react when unknown information was fed into it. When we ask whether or not machines are predictable, we are asking whether or not, given a machine and the information fed into it, we can predict the subsequent movements of the machine. The randomizing device feeds information into the machine from within the machine. But I do not think that this changes the case at all. The tape that the machine scans is changed, and that creates a new axiomatic beginning for the machine.
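The point that a randomizing device amounts to feeding in new information can be sketched as follows. The program table and the oracle function are my own invented illustration: the device's outputs may be unpredictable, but once those symbols are fixed, the machine's course is fixed with them, so replaying the same symbols reproduces the same run:

```python
# A sketch of a machine whose blank tape units are filled, at scan
# time, by a device supplying symbols the machine cannot compute.

def run_with_device(program, steps, oracle):
    """program: maps (state, symbol) -> next state. oracle(i) is the
    symbol the device writes onto the i-th scanned blank unit."""
    state, trace = "q1", []
    for i in range(steps):
        symbol = oracle(i)               # new information, supplied at scan time
        state = program.get((state, symbol), state)
        trace.append((symbol, state))
    return trace

program = {("q1", "0"): "q1", ("q1", "1"): "q2",
           ("q2", "0"): "q1", ("q2", "1"): "q2"}

# The device's outputs, once recorded, determine the run completely:
# altered axioms, same formal system.
recorded = ["1", "0", "1"]
first  = run_with_device(program, 3, lambda i: recorded[i])
replay = run_with_device(program, 3, lambda i: recorded[i])
print(first == replay)   # True
```

What is unpredictable here is the oracle, not the machine: given the supplied symbols, every transition is deduced from the program table exactly as before.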
The fact that the source of the information is some device within the physical bounds of the machine does not make the case different from the one where more information is fed in from outside. It may be thought, however, that I am prejudicing the case by making the randomizing device peripheral to the actual machine, and that actually the device can be built into the 'essential' workings of the machine. I myself cannot see how this randomizing effect could be expressed in terms of quadruples and tape expressions except in a way similar to the one suggested above. If we built the device into the essential workings of the machine, then we would not have a computer but rather just a super-randomizer. The purpose of a randomizer is to supply random numbers when the machine requires that type of information, viz. random numbers. Therefore, a computer with a randomizer is still quite predictable as far as its movements are concerned during a problem. It is not predictable, however, if during the problem more unknown information is fed into the machine, but then no one ever thought that a machine was predictable under those conditions. If the type of randomizer is one that selects numbers successively from the decimal expansion of π, then the computer is completely predictable. If we build the machine so that each time it receives an instruction to 'search', it selects the next number successively in the expansion, then the numbers which the computer selects will be random.
However, if we know how many past searches the machine has done, and we know where in the expansion the computer started, then we can calculate the next number and we will know which alternative the machine will follow. Thus there are machines with randomizers which are altogether completely predictable. We can conclude, therefore, from the discussion of the two types of randomizers, that computers with these devices in them are still predictable in the strong sense. Furthermore, the formality of the machine is not upset, because we can easily allow for a change in the input tape, which we said was comparable to the axioms of a formal system. Altering the axioms of a system does not destroy the formality of the system; it just makes a new system that has different theorems.

SECTION IV

WHAT IS BEHAVIOUR?

In his article "The Mechanical Concept of Mind", Michael Scriven presents the following argument: "the outward signs (including speech) are not infallible indications of consciousness. It is therefore quite certain that they are not, ... the same thing as consciousness." This argument is meant to show that consciousness cannot be reduced to outward signs or observable behaviour. Scriven seems to have in mind a distinction between the behavioural and the non-behavioural aspects of man. When he talks about two distinct things, outward signs and consciousness, Scriven seems to be distinguishing between outward observable behaviour and something else which is inner and unobservable. In order to assess this argument which I have quoted, or any others like it, we must make clearer this distinction between outward signs and consciousness.
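The π-expansion randomizer's predictability can be shown in a short sketch. The function name and starting offset are invented; the digit string is a known prefix of the decimal expansion of π:

```python
# The pi-expansion 'randomizer': knowing the starting point in the
# expansion and the number of past searches, an observer can calculate
# the machine's next number in advance.

PI_DIGITS = "14159265358979323846"   # decimal expansion of pi after "3."

def search(start, past_searches):
    """The number the device supplies on the next 'search' instruction."""
    return int(PI_DIGITS[start + past_searches])

# The machine's 'random' choices over five searches, and an observer's
# predictions made entirely from the start offset and the search count.
machine_choices  = [search(3, n) for n in range(5)]
observer_predict = [search(3, n) for n in range(5)]
print(machine_choices)
print(machine_choices == observer_predict)   # True
```

The selected digits pass for random in the sense of lacking any evident pattern, yet every one of them is calculable in advance, which is why this randomizer leaves the machine predictable in the strong sense.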
In particular, it might be asked just what the outward signs are. What are the behavioural aspects of man? More generally, this is just the question "What is behaviour?" When philosophers talk about the possibility of there being mechanical robots around, it seems that they are also using the idea of a robot to mark the distinction between the behavioural aspects of human experience and the non-behavioural aspects. The robot is considered to be able to behave exactly like a person, even, with some writers, to the point of being behaviourally indistinguishable from other people; so that whatever else a man has besides behaviour, that's what makes him different from a robot. No one ever considers actually building a robot, and philosophers are not interested in some supposed future problem of distinguishing actual people from their mechanical robot slaves! When we conceive of mechanical robots, we are just using a conceptual device to mark the distinction between those things which have just behaviour and those which have something else besides. Again, however, before we can consider using this conceptual device of mechanical robots, it is important to determine just exactly what is to be considered as behaviour. It is generally thought that if we could build a robot to imitate any human behaviour, we would not be able to differentiate the robot from other people as far as its behaviour was concerned. However, even if we grant that a machine could be built to imitate any human behaviour, this would not mean that it was indistinguishable from a human. The fact that we can build a robot to imitate any piece of human behaviour does
not prove that we can build a robot to behave the same as a human. We do not usually equate 'acting like' someone else and 'imitating' them. Take the case where X is said to be imitating Y. If we could show that X was unaware of what Y was doing, then we could not say that X was imitating Y. Furthermore, if we are correct in saying that X is imitating Y, then we could correctly attribute some intention to X; namely, the intention to imitate Y. Whereas when we say that so-and-so is acting like another person we are implying only coincidence. Confusing 'acting like' and 'imitating' is tantamount to reducing coincidental behaviour to conventional behaviour, like confusing similar and typical. There is certainly a difference between, on the one hand, two people having similar enough characteristics to be indistinguishable and, on the other hand, people having certain characteristics the same but not having some others. Imitating is a case of having some characteristics the same as whoever is being imitated but not having some further characteristics. Acting like, or being alike, is a matter of doing similar sorts of things, things which are comparable enough to be called the same. So if we allow that imitation of any piece of behaviour is possible, we cannot move immediately to the conclusion that robots and humans are indistinguishable. Thus if we allow that robots can be built that imitate human behaviour, it by no means follows that they are indistinguishable, even behaviourally, from humans.
This claim, that allowing that imitation is possible does not prove that men and robots are indistinguishable, is quite compatible with the evident fact that during a performance an actor may be indistinguishable from (say) someone who is really mad. For to say that an actor is indistinguishable during a performance is to admit (tacitly) that there is a definite limit to the similarities between actors and madmen. But to admit that there are limits is to acknowledge that actors and madmen are readily distinguishable in a larger context. However, there may be cases of imitation which are done so well that one may doubt whether there are any characteristics which the imitator has failed to duplicate; a sort of perfect imitation. But I find this case generally inconceivable, since imitating presupposes a (particular) second-order intentionality on the part of the actor which the person imitated does not have. Unless one held that intentionality was entirely non-behavioural, i.e., had no behavioural manifestations, I cannot conceive of a case of perfect imitation. However, it is certainly the case that if we allow that robots can be built to imitate any piece of human behaviour, we cannot conclude from this that they would be indistinguishable, even as far as their behaviour itself is concerned, from humans. However, let me try to make clearer the distinction that I drew above between the problems of similarity and exemplification.
In problems of similarity we are trying, for example, to determine whether some particular piece of behaviour can be called a smile. We are troubled because we do not have any clear test for determining what constitutes smiling. Or, we may be in doubt about how successful one must be in some proposed test in order to be said to have smiled. This is the problem of trying to find adequate tests for the application of some characteristics to given situations. By an adequate test, I mean one that is successful or positive when we say that the situation has the characteristic, and unsuccessful or negative when the situation does not have the characteristic attributed to it. This means that the statement of the success of an adequate test is logically necessary and sufficient for the statement ascribing the characteristic to the given situation. Thus when we raise questions about adequacy, what is in doubt is the relationship between the characteristics attributed to some situation and the tests done on the situation. However, in the other problem, that of finding typical examples, we may be in doubt as to whether two subjects have the same characteristics because the test we have will not apply to one of them.
Or, if we can see that they both have some characteristic in common, we may try to find some other characteristics which one has and the other has not. This problem may arise, thinking now of robots, when someone says they have produced an example of something, claiming that their product has all the characteristics of the other things. This is the problem of determining the characteristics of any given situation which one may select to examine. Thus there are two distinct problems: that of determining the adequacy of tests and that of determining the various characteristics of given subjects. I do not think that these two problems are unrelated; in fact I shall argue that one presupposes that the other has been answered. It can readily be seen, I think, that in order to answer the question of whether or not some proposed subject is to be admitted to another class of objects as a typical example, we must have some way of determining the characteristics of the members of the group and also of the proposed subject. If the proposed example has all the characteristics of the members of the group (that are relevant to their being a group), then the example becomes a member. But this problem could not be tackled until we have some adequate way of deciding when two subjects have the same characteristics. And this question of adequacy is none other than the first problem we noted, that of determining successful tests for characteristics. Furthermore, unless we thought that the problem of determining success was at least capable of solution, then the second problem could not properly arise.
If we could not in principle find a test for some characteristic, then we could never test some proposed example for that characteristic. The proposal to test some example for a property assumes that there is an adequate test for that property. Therefore to ask the second question presupposes that the first one, of adequacy, can be solved. Furthermore, the second question, of testing examples, could not even arise unless it was at least in principle possible to find a test. For if we know a priori that no test could in principle be found, then the questions about testing subjects for characteristics could not arise. Therefore, before we can answer any questions about building examples with some characteristics, the prior question of the possibility of finding adequate tests must be answered. When we talk about robots and their differences from people, we are wondering whether there are some characteristics which people have that robots do not. This is clearly the second problem; the one of determining the existence of characteristics in various subjects. Similarly, any discussion of the difference between subjects which exhibit outward signs and others which may have more characteristics is again a question of testing some subjects to see if they have the characteristics which other given examples have. Thus to talk about robots and people, or outward signs of behaviour and consciousness, presupposes that the first question is capable of an affirmative answer. That is, it is assumed that we can find adequate tests of behaviour.
In fact, I don't think it would be going too far to say that the use of the conceptual device, the robot, presupposes that behaviour, or examples of behaviour, can be adequately tested for. Thus to assess the opening argument which was used by Scriven, we must examine the possibility of establishing adequate tests for behaviour.

So far I have stated the problem of adequacy in terms of characteristics and tests, and now I would like to restate it in a more general form in order to show the fundamental character of this problem. When we say that some situation has a characteristic we are, speaking more generally, using in a meaningful way some concept to talk about the situation. The attribution of the characteristic 'smile' can be thought of as the meaningful use of the concept 'smile'. Although I by no means intend to equate use and meaning, I do take use to be conclusive evidence that the concept has a meaning. On the other hand, however, when we talk of tests we are, more accurately, talking about the results of tests which indicate the various properties of a situation. The statement of the result of some successful test is a statement saying that a given situation has been tested and found to have a certain property. So the result of a test can be considered as the statement that a given situation has a property. The question of adequacy can now be considered more generally as a problem about the relationship between the meaningful use of a concept in some situation and the results of various tests on that situation.
I shall abbreviate the statement of this problem in what follows to just the problem of the relationship between concepts and properties, but it must be remembered that I am talking about the meaningful use of a concept in some particular situation and the tests which can be done on that situation. I have used Taylor's terminology in talking initially about the adequacy question in terms of tests, but this second formulation of the problem in terms of concepts is the one that Hare uses in his chapter on 'Meaning and Criterion'.[26]

I want to look at the possible relationships between the properties of given situations and the concepts used to talk about these situations. (Note: to talk about does not mean, exclusively, to describe!) There are theoretically quite a number of relationships, and I tend to group them under two main headings: (a) logical and (b) non-logical. The logical relationships are very numerous: (i) a property (p) is necessary and sufficient for the concept (c), (ii) p is necessary for c, (iii) p is sufficient for c, (iv) some group of properties (p_n) is necessary and sufficient for c, (v) p_n is necessary for c, (vi) p_n is sufficient for c, (vii) some of a group of properties (p_(n-k)) are necessary and sufficient for c, (viii) p_(n-k) is necessary for c, (ix) p_(n-k) is sufficient for c. The general form of those relationships which are necessary and sufficient is p_(n-k) ↔ c, where k < n and n ≥ 1, k ≥ 0. Similar general forms can be found for the necessary and for the sufficient relationships.
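The nine relationships enumerated above can be compressed into three general forms. The following is a sketch in modern notation rather than the thesis's own formalism; I write p_{n-k} for a conjunction of n−k members of a group of n properties, so that k = n−1 recovers the single-property cases (i)-(iii) and k = 0 the whole-group cases (iv)-(vi):

```latex
% General forms of the logical relationships between a group of
% properties and a concept c, with n >= 1 and 0 <= k < n.
\begin{align*}
  p_{n-k} &\leftrightarrow c && \text{(necessary and sufficient)} \\
  c &\rightarrow p_{n-k}     && \text{(necessary: the concept guarantees the properties)} \\
  p_{n-k} &\rightarrow c     && \text{(sufficient: the properties guarantee the concept)}
\end{align*}
```

Since k may take any value from 0 to n−1 and n is unbounded, this schema yields the result the text draws next: the number of possible logical relationships is unlimited in principle.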
It is therefore evident that there are, in principle, no limits to the number of logical relationships between properties and concepts. And finally, those properties which satisfy or belong to one of these relationships I shall call a criterion for that concept.

Some properties, however, are only normally adequate for the ascription of some concept. That is to say, when a situation contains a property, or series of properties, the concept is normally applicable. There are exceptional cases, of course, but generally we are justified in using the concept when these properties exist in some situation. The relationship between the properties and the concept is not a logical one, because we are only normally justified in using the concept when the given situation exhibits these properties. This case may arise when the properties are good inductive evidence for the use of the concept. Some properties may be (say) only sufficient, in normal circumstances, for the application of the concept. This means that the relationship between the concept and the property is such that we are not normally justified in arguing from the use of the concept to the results of a test. However, because it is a sufficient relationship, we can generally conclude from the results of a test to the applicability of the concept.
Furthermore, other properties may be (say) necessary, in normal situations, for the application of the concept; in which case we would be justified in concluding from the applicability of the concept to the results of some test. So we can have properties which are, within some normal range of situations, either necessary, or sufficient, or possibly both, for the application of some concept. However, the relationships are not logical in the formal sense, because we can not specify the range of normal situations, nor specify the ranges that will be normal in the future. But in normal situations the properties could be necessary or sufficient or both. These properties which are related to concepts I call (following a modified version of Scriven)[27] indicators. There are thus as many relationships between indicators and concepts as there are with criteria, but the normal relationships are not logical. Therefore it seems that there are an unlimited number of relations between concepts and properties, and even though there are two main divisions in the types of relations, even within these types there is an unlimited number of possible relations.

The question now arises quite naturally as to what types of concepts our behavioural ones are. By behavioural concepts, I mean those concepts which we use when talking about how people behave; such as smile, smirk, grin, and grimace, to mention only a few from the various facial expressions that people adopt.
It seems to me that many of these concepts are of a normal type; that is, that there are normally justifiable indications when people are smiling, but no criteria for smiles. Granting that, at least at present, some of our behavioural concepts are of a normal type, it may be thought that they could all be changed to a logical type; that is, changed in type while the meanings remain the same. In this regard it is interesting to consider, as an example of type reduction, a paper 'Can Humans "Feel"?' by Mr. S. Coval, in which he argues that our behavioural concepts may become logical types as we learn more about the human organism. He argues (roughly) that we will develop behavioural concepts, like 'tired', which will be identified by the cause of the condition of the human. Thus if we could determine the exact tests for the causes of a piece of behaviour, we would have the criteria for the use of that concept. Now this suggests two alternatives: (a) that our present normal behavioural concepts could all be made logical types by finding the tests which are criteria. But here no proof is offered to show that this is possible in principle, and I see no reason to think that all normal type concepts could possibly be made logical types. Or (b) that if we do develop a set of logical type behavioural concepts, we will have two sets which are irreducible, and I do not know what sort of standard we should use to compare them, as they are of different types. Of course these remarks of mine about Mr.
Coval's ideas are by no means meant as a refutation, but on the other hand I do not see why, when we are considering the relations between tests and concepts, we should tackle the question with only one relation in mind, that of the logically adequate. More importantly, however, it is evident from an examination of Coval's paper just where a theory is needed in order to succeed in a reduction. The reductionist must offer either some principle of comparison between concepts which are different in type, or else prove a priori that all concepts we presently use could be made logical in type without change in meaning. In the absence of either of these proofs, we can not conclude that all of the behavioural concepts which we now employ are reducible to a logical type. Therefore we can assume, in the absence of a reductive theory, that our present behavioural concepts do not have criteria.

Where does all this leave Mr. Scriven with his mechanical robots imitating human behaviour? Since a robot is a mechanical device, it can be talked about entirely in terms of a logical type. Nowhere is any proof offered, either by Scriven or anyone else who talks about robots, that our behavioural concepts are all of a logical type. Until they prove that the concepts we use to talk about how humans behave can be reduced to logical type terms, then, I argue, the question of mechanical imitation cannot even arise. Every concept that applies to a machine is of a logical type; probably even of the narrower class of logical types called necessary and sufficient.
Thus if some performance or movement (or action) is to be accomplished by a mechanical device, then the performance must be describable in logical concepts. At present we recognize, talk about, and describe human behaviour using normal type concepts. But the problem of mechanical imitation can only arise when human behaviour is described in logical type terms. Until it is shown that all human behaviour is describable in these types of terms, the problem of imitation does not and cannot arise. Furthermore, the opening argument about the infallibility of outward signs of consciousness does not show that consciousness is something other than behaviour; it only shows that our concepts about consciousness are not of a logical type, but rather are of a normal type! Now it becomes evident that the robot-man distinction is not meant to mark something outer vs. something inner, or to separate outward visible signs from inward private feelings; but rather is meant to mark the distinction between a description of human activities in logical type and non-logical type terms. Or perhaps the robot-man distinction can be thought of as distinguishing those behavioural concepts which are logical from those which are not. Here, of course, the non-logical type of concepts are those that we use to talk about consciousness. The question "What is behaviour?" has become the question "What types of concepts do we use for behaviour?" and now perhaps the fly can get out of the fly bottle.
SECTION V

A TEST FOR THINKING

At the conclusion of his paper "The Imitation Game",[29] Keith Gunderson tempers some of his previous criticisms with the remarks:

Nevertheless...the general question would remain unanswered: what range of examples would satisfy the implicit criterion we use in our ordinary characterization of subjects as "those capable of thought"? A corollary: If we are to keep the question "Can machines think?" interesting, we cannot withhold a positive answer simply on the grounds that it (a machine) does not duplicate human activity in every respect. The question "Can a machine think if it can do everything a human being can do?" is not an interesting question....[30]

However, I do not think that these remarks justify Mr. Gunderson in qualifying his earlier criticisms. I shall argue that the concept "capable of thought" has no logically sufficient criteria. If this is so, then he need not worry about our implicit (logical) criteria for the concept. Mr. Gunderson does not find the question about machines thinking interesting, if we grant that machines can do everything humans do. But I should think that even if a machine could do everything, we would still have sceptical grounds for withholding our mental concepts. Machines are different from humans, and different in a way that other humans do not differ from each other. Since a machine is by definition different from a human, even if a machine could do everything a human does, the question of the relevance of the differences will always arise, and I see no reason to rule it out a priori as uninteresting.
When we build a machine to do everything that humans can, we use different materials to build with. Even when we build a mechanical "brain" we use different materials than those the brain is made of. And because a machine is different from a human in ways that other humans are not, the sceptic can always doubt the validity of the application of mental concepts to machines. Whether or not the sceptic is justified is another interesting question, but one that can always arise with machines despite the fact that they do everything. Gunderson's corollary, that we cannot withhold a positive answer simply on the grounds that machines do not duplicate human activity in every respect, seems to me to fail to notice this ever present sceptical ground. If we could find one activity which no machine could do, and this was a mental activity, then, together with the implicit scepticism, there would be good grounds for withholding a positive answer. This is the reason that some philosophers have been so impressed with Gödel's theorem. Gödel showed that given any particular Turing machine, he could always find a theorem which a human could prove was true but the machine could not. Thus there was at least one mental activity, i.e., proving the Gödelian statement of that machine, which the machine could not do. When you couple this fact with the general differences between machines and humans (or even brains), then there are good reasons for withholding mental concepts (especially thinking) from machines.
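The "Gödelian statement of that machine" can be given a standard formulation; the following is the usual modern statement of the construction, not the thesis's own. For a consistent formal system F (or the Turing machine that enumerates F's theorems), the diagonal lemma yields a sentence G_F satisfying:

```latex
% The Goedel sentence of a system F: G_F "says of itself"
% that it is not provable in F.
F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
% If F is consistent, then F does not prove G_F; so G_F is true,
% yet the machine enumerating F's theorems never outputs it.
```

Strictly, the human prover's advantage here rests on the assumption that F is consistent: the formal result itself asserts only unprovability in F, which is why the philosophical weight placed on it has been disputed.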
Gunderson felt, however, that there was a general unanswered question; namely, what range of examples would satisfy the implicit criteria we use in our ordinary characterization of subjects as "those capable of thought"? It is evident that we use the concept "capable of thought" with some subjects in some situations and not in others. Most people understand the concept and we can use it, generally, unambiguously. That is to say, the concept has a meaning which most people comprehend. Now, granting that a concept has meaning, and further that the meaning can be taught to others, I should say, following Wittgenstein, that there must be paradigm instances of the use of the word. There must be some situations in which the concept is used correctly and we know, generally, which situations they are. The concept has been taught to us and is taught by its use in paradigm situations. However, granting all this, it does not follow that there are criteria, either implicit or explicit, for the use of this concept. More proof must be offered than the fact that the concept is learned in order to prove that meaningful concepts have logically related criteria. Yet the attempt to find a test assumes just this point; namely, that there is some test which satisfies the criteria of the concept "capable of thought". There is, however, no proof offered to show that the concept has criteria. Some people who work with computers contend that they can program a computer to do any task which any person could do. They may be quite justified in this claim. They then argue that if
we show them what the subjects do when we say that they are capable of thought, then they will build a computer to do that job also. However, this line of reasoning presupposes that there are a definite number of specific tasks which, when completed, mean that the label "capable of thought" cannot, in logical consistency, be withheld. But we cannot allow people to argue that because we are testing a machine, the concept must be of a specific type. Rather, it can only be held that if we are ever going to be able to find a test for the application of the concept, then the concept must have criteria. However, if the concept does not have criteria, then we cannot find a logically sufficient test for its application. Scriven thinks that if we refuse to apply our mental vocabulary each time they build a computer to do more human achievements, then we will be making a mistake. He says:

The logical trap is this: no one performatory achievement will be enough to persuade us to apply the human achievement vocabulary, but if we refuse to use this vocabulary in each case separately, on this ground, we will, perhaps wrongly, have committed ourselves to avoiding it even when all the achievements are simultaneously attained.[31]

Scriven seems to think that there are a definite number (namely, all of them) of achievements which one does to qualify for the human-achievement vocabulary. If the number is not definite (and this does not mean the number is infinite), then there is no logical trap. But where is the proof that all of our human-achievement concepts are of a type that have a definite number of criteria? Scriven does not offer one, and I intend to show that none can be given.
I shall argue that the concept "capable of thought" is an evaluative concept which does not have any logically sufficient set of characteristics, so that no test for characteristics of people will ever be found that is logically sufficient. In order to prove this, however, I must first begin by reviewing some of the conclusions that have been reached in the analysis of evaluative language.

In the fifth chapter of The Language of Morals, Hare reformulates Moore's criticism of naturalism in ethics. In doing so, Hare shows that any attempt to reduce our evaluative terms to the statement of a definite set of descriptive characteristics must be in principle mistaken. He states:

Let us generalize. If 'P is a good picture' is held to mean the same as 'P is a picture and is C' (where C is a group of characteristics), then it will become impossible to commend pictures for being C; it will be possible only to say that they are C. It is important to realize that this difficulty has nothing to do with the particular example I have chosen. It is not because we have chosen the wrong defining characteristics; it is because, whatever defining characteristics we choose, this objection arises, that we can no longer commend an object for possessing those characteristics.
[34] (my parenthesis added)

As I said, I accept entirely Hare's proof that if we are to evaluate or commend various subjects for doing or being something, then we must have evaluative concepts which are not just equivalent to an assertion of a definite set of characteristics or properties. It is a fact that we do value and commend, and as long as we continue to, we must have value concepts. Thus, in the absence of any proof a priori that at some time humans will forever stop evaluating, it can be assumed that we must have evaluative concepts. Thus we must have concepts which are not equivalent to the assertion of a set of characteristics. The question now arises as to whether or not, when we say "X can think", we are making an evaluative judgement. In section VIII of his paper Gunderson says:

A final point: the stance is often taken that thinking is the crowning capacity or achievement of the human race, and that if one denies that machines can think, one in effect assigns them to some lower level of achievement than that attained by human beings. But one might well contend that machines can't think, for they do much better than that.[35] (my italics)

If we often say that thinking is the crowning capacity, or the faculty which makes us better, then to say of someone that they think is not only to say that they have some capacity but that they are commendable (or more valuable) because they have it. If we call some capacity the crowning one, we are in effect saying that whoever has this capacity is commendable because of it.
And to offer a reason for commendation is simply to commend someone for the reason offered. However, it cannot be denied that "X has the crowning capacity, viz. the ability to think" and "X can think" are different utterances. It is a generally accepted fact that people can think, and if someone states a fact which everyone knows, then it is generally assumed that he has some other purpose in mind. For example, when I tell my wife, what she already knows, that the house is dirty, I am not just stating a fact but rather I am (say) condemning this condition of the house and thus recommending that she clean it. So if someone states that people (or some person) can think, and we all generally assume this, then we take it that they have some other purpose in mind in uttering the sentence. Now, when we remember that we often consider the ability to think as a reason for commending people, it is not difficult to see that on some occasions at least, the purpose in saying that someone can think is to commend them. For if it is assumed that Lois can think, as it generally is, and we often commend people because they can think, then to say that Lois can think is to commend her because she can think. And I think that the sentence "X can think" has just this use of commendation on some occasions. I want to emphasize that all I wish to establish is that on some occasions the sentence has this use, while not denying that on other occasions the sentence has other uses.
But part of the meaning of the concept, if we judge its meaning by its use, is evaluative, and as such it will have the characteristics which Hare noted about evaluative statements. If we accept the validity of Hare's analysis of our ordinary use of evaluative concepts, then we must conclude that the concept "capable of thought" is not equivalent to the statement of a set of characteristics about humans. The question now arises as to whether or not there is a set of characteristics which is logically sufficient for the ascription of the concept "capable of thought". Since Hare has shown that there is no set which is equivalent, then perhaps there is some set of properties which is sufficient for ascription. In this case we would then set up a series of tests for the properties, and we would have a logically sufficient group of tests which, when a machine passed them, would force us (logically) to say that the machine was capable of thought. Gunderson seems to think that there is such a set when he asks for the range of examples which would satisfy the implicit criteria of the concept. But there is no necessity that meaningful concepts have logically sufficient criteria. I argued in section IV that there was an indefinite number of relations between concepts and properties, some of which were logical and others not. Granted that these relations are conventional ones, this does not show that they must be logical.
The convention could be that some set of characteristics is normally sufficient for the application of the concept, but that we allow exceptional circumstances to justify the withholding of the concept. As these circumstances can be neither specified nor foreseen, it is evident, as I argued in section IV, that the relationship would not be a strict or logical one. What is the relationship, then, between an evaluative concept and the characteristics of situations? Hare argues that if we evaluate something, then we must be prepared to evaluate something relevantly similar the same way, or else offer a justification for not doing so. And he says that the "must" is a logical one in the sense that if one refused to evaluate similarly without offering a justification, then one would have committed a contradiction. Thus to fail to offer reasons is to violate the convention, and this, Hare argues, is to involve oneself in a logical contradiction; but this is far from showing that the conventional relation is a logical one. It shows only that if one violates or refuses to participate in this language convention (after entering it by using an evaluative concept) then one commits a logical fallacy, but the convention itself could just as easily be a normal one as a logical one. If one uses a concept which, as part of its convention, requires a justification in some cases, and one subsequently refuses to acknowledge the demand for a 
justification, then one contradicts oneself, even if the convention is only one of a normal relation between the concept and the characteristics of the situation. But Hare's analysis of the actual conventional relation between evaluative concepts and the various properties of situations was that if an evaluative concept is used to (say) commend a situation, then one must also commend another situation or else justify why one is withholding the commendation. That the situations are both given and numerically distinct is proof enough that there are differences between them, but the convention demands a justification for the relevance of the differences in withholding evaluation. Furthermore, the same characteristic may be relevant in one situation for an evaluation and not in another situation for the same evaluation. But a convention in which some definite set of characteristics is (say) sufficient for the ascription of some concept except in exceptional circumstances, i.e., those circumstances in which justification can be found, is the type of convention I called normal in section IV. The convention for evaluative terms is that the terms must be reapplied or justification offered for not reapplying them, which means that normally they will be used in the same situations, but we allow exceptionally justified situations to be exempt. Therefore I conclude, on the basis of Hare's analysis and the distinctions I drew in section IV, that evaluative terms are of a normal type. Furthermore, since the concept "capable of thought" is an evaluative one, it has this non-logical relation to the characteristics of situations; so that no set of tests for the characteristics of some proposed subject could be logically sufficient for the ascription of the concept. Thus no test or set of tests could, in principle, be found which would be logically sufficient to allow us to say "Machines can think." It may be argued that if we eliminate the evaluative content from the concept of thinking, then we shall be able to find a test. In the case where we find a computer which successfully passes this test, we will then be able to say that it thinks, remembering that this use of think is non-evaluative. There are two replies to this type of criticism. If it is thought that the application of this new concept to mechanical devices is a step forward in the problem of applying mental concepts to machines, then it is a mistake. By cutting out the troublesome part of the concept, one does not thereby make gains but rather one only saves up the trouble until later. In this respect then, to change the concept is only to by-pass the trouble until later while thinking that one is making gains. The second reply is that in considering problems connected with the concept of thinking, the only way to locate the problem is by considering our present concept and its ordinary usage. When the problem "Can machines think?
" was o r i g - i n a l l y proposed, i t was assumed that people were wondering whether or not they could say of machines, what they say of l o t s of other t h i n g s ; namely that they can t h i n k . I f the concept of t h i n k i n g was not the one o r d i n a r i l y used and meaning what we o r d i n a r i l y mean, then what other p o s s i b l e meaning could i t have had? How should we have been able to f i n d any meaning f o r the question, i f the words were not used as we use them i n E n g l i s h ? I f i n • the s o l u t i o n to the problem, we change the meaning of the question, how can i t be argued that the o r i g i n a l question has been answered. Those people who change the concept have not answered the question "Can machines t h i n k ? " but ra t h e r some other problem that they have invented., Gunderson seems to have thought that h i s i n i t i a l ' c r i t i c i s m s of the I m i t a t i o n Game could be countered i f the i m p l i c i t c r i t e r i o n of the concept "capaMe of thought" could be found. He had'argued i n c r i . t i c i s m ? that the I m i t a t i o n Game was only one example, and a multitude of examples were needed to apply the concept. But he thought.that a set of t e s t s could be found which, when s a t i s f a c t o r i l y completed, would be l o g i c a l l y adequate f o r a s c r i p t i o n of the concept. However I have argued against t h i s , that there i s rio set of t e s t s which are l o g i c a l l y s u f f i c i e n t . Gunderson 1s e r r o r seems to have been that he mistook the type of concept that "capable of thought" i s . He thought i t was a l o g i c a l type concept, whereas I have argued that i t i s anormal or n o n - l o g i c a l type. By type I mean type of r e l a t i o n s h i p between the concept and the p r o p e r t i e s of s i t u a t i o n s . 
By mistaking the type of concept, some philosophers have assumed that it had a logically sufficient test and set about finding the test (or tests). However, when we understand what type of concept "capable of thought" is, I have argued, then we can see that the search for a test is in principle futile.

SECTION VI

CONCLUSION

In conclusion, I should like to restate some of the conclusions tentatively arrived at in the preceding sections of the paper. I have argued in section III that there is good evidence that mechanical robots are predictable in the important sense. Furthermore, in section IV, I argued that even using the idea of a robot as just a conceptual device presupposed that certain linguistic problems had been solved which indeed have not been solved. In the preceding section, I tried to show that we could never in principle find a test which was logically sufficient for the utterance "Machine X can think." Together, I think that these conclusions add up to a rather serious critique of the general arguments advanced to show that machines can think. However, I think that there are more far-reaching implications to be drawn from the work in this paper. In order to point out these implications, let us review the sources of error that I suggested other philosophers had made. In arguing against the possibility of imitation, I showed that philosophers had made an error by failing to notice a linguistic question which the whole discussion of imitation presupposed. I then went on to illustrate the complexity of relations that could exist between a concept and the situations in which it is used. 
Finally, I suggested that those philosophers who were concerned with finding a test for thinking had mistaken the type of concept that "capable of thought" is. I have, in fact, been continually trying to show that the sources of error have all been of a linguistic nature. Thus one of the more general conclusions of this paper is that far more attention must be given to language and the various linguistic problems that can arise. What is needed is a systematic method for tackling these problems of language once they have been shown to be behind many of the more traditional problems. But even though we still lack a methodology, there is a great need to focus more attention upon our language, its conventions, and concept types. There is, however, another way of looking at the results of this paper. Much of my work has been in an effort to change the form of the standard problems associated with the question "Can machines think?". For example, I tried to show that the problem of trying to find a test for thinking is just the problem of determining types of concepts. In another section, I showed that the problem of constructing a robot to imitate humans was, at bottom, the problem of type reduction, i.e., the problem of changing a concept of one type to another without change of meaning. In making these changes, I have tried to show that the problems which have bothered philosophers in this area are essentially linguistic in nature; that is, all the problems can be restated as linguistic ones. When I say that I have restated a traditional or standard problem, I do not mean that I have given a synonymous rephrasing of the problem. 
I mean either that the standard problem can be shown to have arisen because of a lack of careful linguistic analysis, or that the traditional problem presupposes that some linguistic theories be substantiated. Or even that it can be shown that the standard problems have as their main difficulty a confusion in types of concepts. If a traditional problem is related to a linguistic one in one of these ways, then we can change it into a problem in linguistics. As I said, I have tried to do just this restating of the problems in the area of minds and machines. The conclusion that I wish to suggest is that if the problems in this area can be restated, then that is some evidence that other problems may also be restatable in this way. I must, however, grant that this paper is not very substantial evidence to suggest the possible scope of this restating programme. It is my belief that most traditional philosophic problems can be restated as linguistic ones. Thus another, more general, conclusion of this paper is to suggest the possibility of a general restatement of traditional philosophic problems. Besides the more general conclusions, there are the specific ones in criticism of the arguments for saying that machines can think. I have argued that there is good evidence to doubt the possibility of an imitating robot, and even if this evidence were missing, the whole argument using imitating robots presupposes a linguistic difficulty which has not been answered. 
The analogy between men and robots, I argued, was empty, and I tried to show that no logically adequate test of thinking can be found; so that these examples of machines playing games were not conclusive but rather only persuasive evidence. As Gunderson said:

    In the end, the steam drill outlasted John Henry as a digger of railway tunnels, but that did not prove the machine had muscles; it proved that muscles were not needed for digging railway tunnels.

There have been many interesting points made in the arguments for thinking machines, and these points have had the effect of making most philosophers expand their concept of what a machine is. However, aside from this merit of the arguments for the affirmative, I have argued that we are still justified in saying that machines cannot think.

FOOTNOTES

1 A.M. Turing, "Computing Machinery and Intelligence", Mind, vol. LIX, No. 236 (1950).
2 A.R. Anderson, ed., Minds and Machines, Englewood Cliffs, New Jersey, Prentice-Hall, Inc., 1964.
3 Ibid., pp. 4-5.
4 Ibid., pp. 72-97.
5 Ibid., p. 75.
6 Another way of putting the preceding remarks is to say that the 'only if' condition in "T is in state A if and only if flip-flop 36 is on" can be cashed into a finite list; such as, flip-flop 1 is either on or off, flip-flop 2 is either on or off, etc.
7 A.R. Anderson, op. cit., pp. 73-74.
8 A.M. Turing, loc. cit.
9 A. Church, Introduction to Mathematical Logic, Princeton, Princeton University Press, 1956.
10 M. Davis, Computability and Unsolvability, New York, McGraw-Hill Book Company, Inc., 1958.
11 A.R. Anderson, op. cit., p. 74.
12 Ibid., p. 81.
13 Ibid.
, p. 77.
14 Ibid., p. 81.
15 Ibid., p. 82.
16 Ibid., p. 43.
17 Ibid., p. 77.
18 Ibid., p. 15.
19 M. Davis, op. cit., Chapter 1.
20 A.R. Anderson, op. cit., pp. 43-59.
21 Ibid., pp.
22 Ibid., pp. 31-42.
23 Ibid., p. 34.
24 C. Taylor, The Explanation of Behavior, London, Routledge and Kegan Paul, 1964, pp. 82-87.
25 R.M. Hare, The Language of Morals, Oxford, at the Clarendon Press, 1961, pp. 94-110.
26 Ibid., pp. 94-95.
27 M. Scriven, "The Logic of Criteria", The Journal of Philosophy, vol. LVI, No. 22, p. 857.
28 S.C. Coval, "Can Humans 'Feel'?", unpublished.
29 A.R. Anderson, op. cit.
30 Ibid., p. 70.
31 S. Hook, ed., Dimensions of Mind, New York, New York University Press, 1960, p. 124.
32 I suspect that Scriven also sees this. Compare his article in The Journal of Philosophy, op. cit., p. 868. I use his argument only as an example of the implied logical trap argument with testing.
33 R.M. Hare, op. cit., pp. 79-93.
34 Ibid., p. 85.
35 A.R. Anderson, op. cit., p. 71.
36 Loc. cit.

BIBLIOGRAPHY

Books

Anderson, A.R., ed. Minds and Machines. Englewood Cliffs, New Jersey, Prentice-Hall, Inc., 1964. (Contemporary Perspectives in Philosophy Series, eds. Joel Feinberg and W.C. Salmon, vol. 1).
Chappell, V.C., ed. The Philosophy of Mind. Englewood Cliffs, New Jersey, Prentice-Hall, Inc., 1962.
Davis, M. Computability and Unsolvability. New York, McGraw-Hill Book Company, Inc., 1958.
Hare, R.M. The Language of Morals. Oxford, at the Clarendon Press, 1961.
Taylor, C. The Explanation of Behavior. London, Routledge and Kegan Paul, 1964. (International Library of Philosophy and Scientific Method, ed. A.J. Ayer).
Wittgenstein, L.
Philosophical Investigations. Trans. G.E.M. Anscombe. Oxford, Basil Blackwell, 1963.

Articles

Albritton, R. "On Wittgenstein's Use of the Term 'Criterion'". The Journal of Philosophy, vol. LVI, No. 22, pp. 845-856.
Scriven, M. "The Logic of Criteria". The Journal of Philosophy, vol. LVI, No. 22, pp. 857-868.
Wisdom, J.O. "A New Model for the Mind-Body Relationship". The British Journal for the Philosophy of Science, vol. 2, No. 8 (1951-52), pp. 295-301.
Wisdom, J.O. "The Hypothesis of Cybernetics". The British Journal for the Philosophy of Science, vol. 2, No. 5 (1951-52), pp. 1-24.
