Problems in Consciousness II

Author: Egbert Griffith
Problems in Consciousness II FUNCTIONALISM & COMPUTATIONALISM

The Reductive Strategy
 The goal is to reduce subjective, non-verifiable, and potentially meaningless mental-state claims to something that is objectively verifiable and has the potential to help explain and predict conscious behavior.

Classical Functionalism
 “Functionalism in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.” (Stanford Encyclopedia of Philosophy)

Classical Functionalism
 Functionalists are indifferent to the material composition of a system.
– It doesn’t matter what a thing is made of or what its structure is.
– What matters is how it works. If two things function identically and one of them is conscious, then the other must also be conscious. Hence, consciousness is defined as “functioning consciously.”

Hilary Putnam & Functionalism
 “I shall, in short, argue that pain is not a brain state, in the sense of a physical-chemical state of the brain (or even of the whole nervous system), but another kind of state entirely. I propose the hypothesis that pain, or the state of being in pain, is a functional state of a whole organism.”

Classical Functionalism – The Mind as a “Black Box”
 Inputs – external stimuli: a question someone asks, something one sees, smells, etc.
 The Black Box – the processing unit, which determines which output is chosen given certain “dispositional states” and certain inputs.
 Output – behavior
 Example:
– The input: Someone asks, “Do you want to go to lunch?”
– Dispositional state: Hungry
– Output: You say, “Yes, good idea!”
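The input/dispositional state/output scheme above can be pictured as a simple function, as in the sketch below. This is only an illustration of the slide’s lunch example, assuming made-up rules and state names (a real functionalist account would specify the whole web of functional roles); the point it tries to capture is that what makes the internal state count as “hunger” is its role in mediating inputs and outputs, not what the box is made of.

    # A minimal, purely illustrative sketch of the functionalist "black box":
    # an input plus an internal dispositional state determines a behavioral output.
    # The rules below are hypothetical, written only to mirror the lunch example.
    def black_box(stimulus: str, dispositional_state: str) -> str:
        """Map an external input plus an internal state to a behavioral output."""
        if stimulus == "Do you want to go to lunch?":
            if dispositional_state == "hungry":
                return "Yes, good idea!"
            return "No thanks, maybe later."
        return "..."  # no rule defined for this input

    # Any system realizing the same mapping is, for the functionalist, in the
    # same mental state, regardless of what it is physically made of.
    print(black_box("Do you want to go to lunch?", "hungry"))      # Yes, good idea!
    print(black_box("Do you want to go to lunch?", "not hungry"))  # No thanks, maybe later.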

Computationalism
 Reduces mental states to computational states.
 Invokes the computer metaphor: Hardware is to Software in a computer as the Central Nervous System (CNS) is to the Mind.
 This metaphor tells us that the mind is nothing over and above the programming dictating how the brain/CNS works.

Computationalism
 “An analogy is sometimes drawn between computer hardware and software, and the systems or substances in which the information-processing functions that constitute the mind are implemented. The exact material out of which the machine hardware is built and the software encoded is largely irrelevant, provided it is capable of running the program. This observation suggests the functionalist explanation of the mind as a living computer… that emphasizes the information-processing algorithms by means of which the mind takes sensory and other information as input, and computes as output verbal and other behavior. Intelligence is a function of the human brain and nervous system, of alien systems with information-processing hardware very different from human brain structures and chemistry, and of computers with the requisite complexity to reproduce the same pattern of input-output transmissions.” (Dale Jacquette, Philosophy of Mind, Oxford University Press)
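The “exact material … is largely irrelevant” point in the passage above is often glossed as multiple realizability: the same input-output function can be realized by physically very different mechanisms. The toy sketch below assumes two invented adder classes purely for illustration; neither is meant as a model of a brain or nervous system.

    # Two different "realizations" of the same input-output function.
    class LookupAdder:
        """Realizes addition with a precomputed table (one kind of 'hardware')."""
        def __init__(self, limit: int = 10):
            self.table = {(a, b): a + b for a in range(limit) for b in range(limit)}

        def add(self, a: int, b: int) -> int:
            return self.table[(a, b)]

    class CounterAdder:
        """Realizes addition by repeated incrementing (a very different mechanism)."""
        def add(self, a: int, b: int) -> int:
            result = a
            for _ in range(b):
                result += 1
            return result

    # Identical pattern of input-output transmissions, so on the computationalist
    # view the difference in internal constitution does not matter.
    assert LookupAdder().add(3, 4) == CounterAdder().add(3, 4) == 7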

Jerry Fodor & Computationalism
 “Associating the semantic properties of mental states with those of mental symbols is fully compatible with the computer metaphor, because it is natural to think of a computer as a mechanism that manipulates symbols. A computation is a causal chain of computer states and the links in the chain are operations on semantically interpreted formulas in a machine code.” (p. 514)

The Turing Test
 Turing (1950) describes the following kind of game. Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person, and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’—but, at least at the beginning of the game, does not know which of the other person and the machine is ‘X’—and at the end of the game says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine. About this game, Turing (1950) says:
 “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Stanford Encyclopedia of Philosophy)
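The structure of the imitation game can be sketched in a few lines. Everything below is a placeholder (the canned answers and the random interrogator stand in for a real human participant, a candidate program, and a human judge); it only shows the protocol: hide which respondent is which behind the labels ‘X’ and ‘Y’, collect answers, and score the interrogator’s guess.

    import random

    def machine_respondent(question: str) -> str:
        # Placeholder: a real candidate program would generate its own answers.
        return "Yes, I play chess, though not very well."

    def human_respondent(question: str) -> str:
        # Placeholder for answers typed by a human participant.
        return "I play chess now and then."

    def imitation_game(questions, interrogator) -> bool:
        """One round: hide who is 'X' and who is 'Y', then score the interrogator's guess."""
        respondents = [machine_respondent, human_respondent]
        random.shuffle(respondents)
        labeled = dict(zip(["X", "Y"], respondents))

        # The interrogator sees only labeled question/answer transcripts.
        transcript = {label: [(q, answer(q)) for q in questions]
                      for label, answer in labeled.items()}

        guess = interrogator(transcript)  # the label the interrogator takes to be the machine
        truth = "X" if labeled["X"] is machine_respondent else "Y"
        return guess == truth

    # An interrogator guessing at random identifies the machine about half the time;
    # Turing's prediction concerned how much better a real interrogator could do.
    print(imitation_game(["Do you play chess?"], lambda t: random.choice(["X", "Y"])))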

Blade Runner (1982), dir. Ridley Scott
 Rachael (Sean Young) is a replicant, though she doesn’t realize it. Except for a very specialized test, the “Voight-Kampff,” she is indistinguishable from humans. Thus she passes the “Turing Test” and functions equivalently to conscious humans.
 http://www.youtube.com/watch?v=ndndERnWew&feature=related

John Searle – Computers cannot be conscious!
 What consciousness is: “Consciousness is just an ordinary higher-level physical property of nervous systems.” (Searle on Thinking Allowed with Jeffrey Mishlove)

John Searle – Computers cannot be conscious!
 Why computationalism won’t work: “Syntax is not semantics.” (Searle on Thinking Allowed with Jeffrey Mishlove)
– Syntax: the rules governing the manipulation of symbols, e.g. grammar and spelling.
– Semantics: what the symbols mean or represent to us.

 “The computer manipulates formal symbols but attaches no meaning to them, and this simple observation will enable us to refute the thesis of mind as program.” (p. 520)

John Searle – Computers cannot be conscious!
 Searle rejects the Turing Test as an adequate test for consciousness. He offers a case where one might pass the Turing Test but still clearly not be conscious. His counterexample is called the “Chinese Room.” He imagines that he is in a room and can converse via a series of blocks that have Chinese symbols printed on them. Searle does not understand Chinese, but he has a manual which instructs him which blocks to produce when presented with certain blocks. On the other side of the room is a person who understands Chinese. With a good enough manual, Searle argues, he would be able to convince the other person that he understands Chinese, but he would in fact understand nothing of the conversation via blocks except that he put block B out when presented with block A.
 “That’s why computers are so marvelous, is that they’re a symbol-manipulating machine. They don’t have to know anything. They don’t have to know what any of these words stand for, what any of these symbols mean.” (Searle on Thinking Allowed with Jeffrey Mishlove)
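Searle’s rule manual can be pictured as a bare lookup table: symbols in, symbols out, with nothing in the system that knows what the symbols are about. The entries below are invented for illustration; the English glosses appear only in the comments, which is exactly the information the room’s occupant lacks.

    # Toy sketch of the Chinese Room: purely formal symbol manipulation.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
        "你会下棋吗？": "会，我喜欢下棋。",  # "Do you play chess?" -> "Yes, I like chess."
    }

    def room_occupant(incoming: str) -> str:
        """Hand out whichever symbols the rule book dictates, with no grasp of their meaning."""
        return RULE_BOOK.get(incoming, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

    # From outside, the exchange can look like understanding Chinese; Searle's point
    # is that inside there is only syntax (rule-following), never semantics.
    print(room_occupant("你好吗？"))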

Two Replies to the Chinese Room
 (1) Some critics concede that the man in the room doesn’t understand Chinese, but hold that at the same time there is some other thing that does understand. These critics object to the inference from the claim that the man in the room does not understand Chinese to the conclusion that no understanding has been created. There might be understanding by a larger, or different, entity. This is the strategy of the Systems Reply and the Virtual Mind Reply. These replies hold that there could be understanding in the original Chinese Room scenario.

 (2) Other critics concede Searle’s claim that just running a natural language processing program as described in the CR scenario does not create any understanding, whether by a human or a computer system. But these critics hold that a variation on the computer system could understand. The variant might be a computer embedded in a robotic body, having interaction with the physical world via sensors and motors (“the Robot Reply”), or it might be a system that simulated the detailed operation of an entire brain, neuron by neuron (“the Brain Simulator Reply”). (Stanford Encyclopedia of Philosophy)

Jerry Fodor’s Response to Searle
 “There is no computation without representation.” (p. 512)
 => A computer cannot run a program unless it “understands” what the symbols in the programming code represent.
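One way to illustrate the slide’s reading of Fodor (this is an illustration, not Fodor’s own example) is that the very same machine state only counts as one symbol or another under an interpretation, so specifying the computation requires specifying what the states represent.

    # The same bit pattern read under three different interpretation schemes.
    bits = 0b1100_0001                                # one stored byte

    as_unsigned = bits                                # as an unsigned integer: 193
    as_signed = bits - 256 if bits > 127 else bits    # as a signed byte: -63
    as_character = chr(bits)                          # as a character code: 'Á'

    # Which symbols the machine is manipulating, and hence which computation it is
    # running, is fixed only relative to an interpretation of its states.
    print(as_unsigned, as_signed, as_character)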

Are We There Yet?
 IBM’s “Watson” and Natural Language
 http://www.youtube.com/watch?v=FC3IryWr4c8
 http://www.youtube.com/watch?v=lIM7O_bRNg

Are We There Yet?
 Nova ScienceNow, “Social Robots” – the very human-like Philip K. Dick robot (see right) and others.
 http://video.pbs.org/video/1801231960/