Epistemology Notes

Chapter 6

Other Minds



 
 

1.  The Problem of Other Minds:  An Overview

1.1  Some Important Questions

        There are a number of important questions that one needs to address in this area.  One of the main ones, obviously, is this:

(1)  How can one justify (a) the belief that the world contains minds other than one's own, and (b) beliefs about the particular mental states that such other minds are in at different times?

        In order to answer question (1), however, there are other questions that must be tackled, such as:

(2)  Can we have non-inferentially justified beliefs about other minds - beliefs that are justified, but not justified on the basis of any other beliefs?

(3)  If all or some of our beliefs about other minds are inferentially justified, what type of evidence is relevant?

(4)  How, in such cases, does one get from the evidence to the desired conclusion?

        What's involved in (4) should be clear in a general way from the earlier discussion of an important pattern of skeptical argumentation.  One wants to know whether the inference is deductive, or whether it involves generalization based upon instances, or whether it is a matter of something like inference to the best explanation.

        What do I have in mind with respect to the question of the type of evidence?  The answer is that philosophers have put forward quite different views concerning what sort of evidence is essential.

        One dispute, for example, concerns the relative roles of (a) evidence concerning behavior, and (b) evidence concerning the constitution of the being in question - what it is made of, and how that material is structured.  (For example, is neurophysiological information relevant?)

        Another important question concerns whether information about oneself is crucial.  Does the justification of beliefs about other minds necessarily start out from information about oneself as a being with a mind, or can one determine whether other beings have minds without appealing to any information about oneself?

        The answer to the question of relevant evidence will also bear upon another important question:

(5)  What is the scope of the justified beliefs that one can have about mental states that are not one's own?  Can one, or could one, for example, have justified beliefs (a) about other humans, both normal humans and humans at various stages of development, or decline?  (b) About various non-human animals?  (c)  About possible extra-terrestrials with a very different physiological makeup?  (d)  About super-computers?  In addition, what about possible non-embodied minds?  Can one, or could one, be justified in believing in, and in ascribing mental states to, non-physical things such as God, or other deities, or angels, etc.?

        This fifth question, in turn, is a point where epistemology and philosophy of mind connect up with another area of philosophy - namely, ethics.  Thus, people who think that there are important questions concerning the ways in which it is morally permissible to treat non-human animals will generally be concerned as to the point at which, as one moves up the ladder from simple, one-celled animals through increasingly complex animals, sentience first appears on the scene, the point at which one first has organisms that have experiences, and that, in particular, are capable of pleasure and pain.  Similarly, comparable questions concerning the type of mental life that humans are capable of enjoying at various points in their development may be important in connection with the question of the moral status of abortion at different stages in the development of the embryo/fetus.

    Similarly, though the question does not yet loom large, what about much more complex computers of the future that may very well be able to interact with us in "human" ways?  Would they be just complex machines that we could, without moral qualms, treat in ways that we treat other machines - dismantling them at any point, throwing them on the scrap heap, etc.?  Or would there be ways of treating them that were morally wrong?  The answer will depend upon whether we think that it is possible that such super-computers might have experiences, and, if we do think that is possible, whether we could have good reason for believing that it was in fact the case.

    What determines what type of evidence is relevant to the question of whether something has a mind, or enjoys a mental life?  The answer is that this depends crucially upon the correct account of the analysis of language about minds and mental states.  So the following questions turn out to be crucial:

(6)  What account is to be given of the very concept of a mind?  And what type of analysis is to be given of statements about different types of mental states?

        What sorts of accounts can one offer of the analysis of statements about mental states?  Basically, there are four main sorts of accounts:

Approach 1:  The Irreducibility Approach - Phenomenalistic Version

        This is the view, first of all, that sentences dealing with mental states (or at least, some mental states) are what one might call "semantically basic".  They cannot be analyzed in terms of any sentences that are not about mental states.  But secondly, the crucial characteristic of sentences about mental states is that they involve the idea of consciousness.

Approach 2:  The Irreducibility Approach - Intentionality Version

        This approach shares with the previous approach the view that sentences dealing with mental states (or at least, some mental states) are what one might call "semantically basic", and cannot be analyzed in terms of any sentences that are not about mental states.  But instead of it being held that it is consciousness that is crucial, the claim is that it is something else - what has been called intentionality - that is crucial to sentences about mental states.

Approach 3:  Analytical, or Logical, Behaviorism

        This is the view that sentences about mental states, rather than being semantically basic, can be analyzed in terms of statements about behavior - either statements about the individual's actual behavior at a time, or else statements about the individual's behavioral dispositions, where an individual's behavioral dispositions are a matter of the behavior that the individual would exhibit if circumstances were different in various ways.  (Note the importance of counterfactuals in this context.)  Illustration:  What is it for Mary to want a cold beer?  What is it for John to believe that the bridge is unsafe?
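        The counterfactual character of the behaviorist analysis can be illustrated with a crude sketch in code (the particular circumstances and behaviors listed are, of course, invented purely for illustration): a desire is identified not with an inner state but with a mapping from hypothetical circumstances to the behavior the individual would exhibit in them.

```python
# A toy sketch of a logical-behaviorist analysis of "Mary wants a cold beer".
# The desire is represented as nothing over and above a bundle of
# counterfactuals: what Mary WOULD do in various hypothetical circumstances.
wants_cold_beer = {
    "offered a cold beer": "accepts it",
    "sees a fridge stocked with cold beer": "opens the fridge",
    "asked what she would like to drink": "says 'a cold beer'",
}

def behavior_in(disposition, circumstance):
    """What the individual would do if the circumstance obtained."""
    return disposition.get(circumstance, "no relevant behavior")

print(behavior_in(wants_cold_beer, "offered a cold beer"))  # accepts it
```

On this picture, to assert that Mary wants a cold beer is just to assert the whole bundle of such conditionals; no claim about any inner state is being made.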

Approach 4:  A Functionalist Analysis

        A functionalist approach agrees with logical behaviorism that language about mental states is not semantically basic.  But the functionalist offers a different account of the concept of the mental, and of specific mental state concepts, than the logical behaviorist.  The key to the functionalist approach is the idea that mental states, rather than being constituted by actual behavior together with behavioral dispositions, are instead inner states that stand in certain causal relations to behavior on the one hand, and to stimulation of the individual on the other.  Thus, as David Armstrong puts it, "the concept of a mental state is primarily the concept of a state of the person apt for bringing about a certain sort of behaviour" - though in the case of some mental states "they are also states of the person apt for being brought about by a certain sort of stimulus."   (A Materialist Theory of the Mind, page 82)

Functionalism and the Irrelevance of the Intrinsic Nature of the States in Question

A key point to keep in mind in understanding functionalism is that what makes a given inner state a mental state of some particular type - such as a state of believing that unicorns have never existed - is simply a matter of how the state in question is related to stimulation of the individual and to the individual's behavioral responses:  the intrinsic properties of the inner state are irrelevant; any state that occupies the same role will be precisely the same sort of mental state.  So from the point of view of functionalism, it could turn out that the mind is identical with the brain, and mental states are identical with states of the brain.  But equally, it could turn out that the mind is some sort of immaterial, Cartesian soul or substance, and that mental states are states of such an immaterial substance.  Interrelationships are everything, as far as the nature of the mental is concerned.

The Computer Program Analogy

        An analogy that is commonly employed to illustrate this, and that is due to Hilary Putnam, involves the distinction between computer hardware and software.  Two computers that are very different as regards their physical makeup may still be running the same program.  One computer could, for example, consist entirely of gears, levers, and other purely mechanical parts, while the other computer was made of electronic chips and circuitry, and they could nevertheless be running the same program.  (The mechanical computer would, of course, be much slower.)  So the answer to a question about what program a computer is running implies nothing about the stuff that the computer is made of, and the same is true with regard to questions about what stage a computer is at in a given program.

        The functionalist approach to mental concepts treats those concepts, then, as referring to elements that, like elements in a computer program, have the significance that they do because of their roles, because of their relationships to other elements, and not as referring to the stuff that those elements are made up of.
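        The analogy can be made vivid with a toy sketch (the two "adders" below are invented for illustration): two machines with entirely different internal workings realize the very same program, in the sense of the same input-output organization.

```python
# A crude sketch of Putnam's hardware/software point: what program a machine
# is running is fixed by its input-output organization, not by its physical
# makeup. The two classes below are invented for illustration.

class GearAdder:
    """Adds by advancing one 'gear tooth' at a time - a mechanical realization."""
    def add(self, x, y):
        total = 0
        for _ in range(x):   # click the gear forward x times
            total += 1
        for _ in range(y):   # then y more times
            total += 1
        return total

class ChipAdder:
    """Adds in a single 'electronic' step - a very different realization."""
    def add(self, x, y):
        return x + y

# Functionally, the two machines are identical: same mapping from inputs
# to outputs, despite wholly different internal constitutions.
assert GearAdder().add(3, 4) == ChipAdder().add(3, 4) == 7
```

What program is being run is settled by the mapping from inputs to outputs, not by whether the realization is mechanical or electronic - which is just the functionalist's point about the irrelevance of the intrinsic nature of mental states.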

        In asking the question of the proper analysis of mental language, however, it is important not to assume that the "same sort" of account should be given for every type of mental state.  Perhaps there are significant differences between mental states, so that, in the case of some mental states, one type of analysis is called for, whereas a different sort of analysis is needed for other types of mental states.  So it's important to ask:

(7)  Are there any significant divisions between types of mental states, in the sense that a very different type of account might have to be given for some types of mental states than others?

The question being raised here is related to the different approaches to the meaning of language about mental states discussed in connection with question (6).  The thought, accordingly, is that it might be the case, for example, that terms referring to some sorts of mental states should be given a phenomenalistic interpretation, while perhaps other terms should be given a functionalist analysis.

But is there any reason for thinking that there may be a significant division of this sort between mental states?  One possible reason for suspecting that that may be the case arises out of a consideration of two very different answers that have been given to the following question:

(8)  What is the "mark" of the mental?  That is to say, what is it that distinguishes states of affairs that are mental states from those that are not?

        The two main answers that have been given to this question are as follows:

(1)  Consciousness is the mark of the mental;

(2)  Brentano's Thesis:  Intentionality is the mark of the mental.

        An intuitive characterization of intentionality is as follows:  (a)  A state is intentional if it "points outside of itself", if it is characterized by "aboutness";  (b)  An intentional state may be "about", it may "point to" something that doesn't exist;  (c)  An intentional state may be about something under one description, but not under another description.

Illustrations

(1)  With regard to the second of these three features, suppose that the basketball coach wants a center who is over eight feet tall.  The mental state of wanting a center who is over eight feet "points outside of itself", but what it is "about" - a center who is over eight feet tall - may not in fact exist.

(2)  With regard to the third feature, suppose that the basketball coach would like to find the tallest person on campus.  Suppose, further, that the tallest person on campus is identical with the coach's most severe critic, and the greatest serial killer of basketball coaches in history.  It may very well not be the case that the basketball coach would like to find his most severe critic, and the greatest serial killer of basketball coaches in history.

1.2  Intensional Language and Intentional States

        Points (b) and (c) in the preceding characterization of intentionality will be clearer, I think, if they are set out against the background of a distinction, within philosophy of language, between extensional and intensional contexts within sentences.

        First, then, the concept of an extensional context.  Consider the sentence:

"Sandra threw the ball to the boy across the street"
Certain terms and expressions within that sentence refer to particular things - namely, the name, "Sandra", and the definite descriptions, "the ball" and "the boy across the street".
Each of the contexts occupied by those three expressions is extensional.  What does that mean?  The answer is that extensional contexts have two central features:

Feature 1:  The Interchange of Co-Referential Terms Preserves Truth-Value

        What does it mean to say that two terms are co-referential?  The answer is that this is just to say that they refer to the very same object.  So suppose that Sandra is Suzanne's older sister.  Then the name "Sandra" and the description "Suzanne's older sister" refer to one and the same person.  So those two expressions are co-referential.  To say that the place occupied by the term "Sandra" in the above sentence is extensional is then to say that if the above sentence is true, then the sentence that results when the term "Sandra" is replaced by the expression "Suzanne's older sister" - namely, the sentence

"Suzanne's older sister threw the ball to the boy across the street"
- must also be true.  Similarly, if the original sentence was false, then the sentence that results from the interchange of the co-referential expressions must also be false.

        The same applies to the other two contexts.  So suppose that the ball in question was the thing that Mark found at the swimming pool, and that the boy across the street is the most radical dude in Boulder.  Then the two expressions "the ball", and "the thing that Mark found at the swimming pool" are co-referential expressions - they refer to one and the same object, and, similarly, for the two expressions, "the boy across the street", and "the most radical dude in Boulder".

        Consider, now, the sentence that results when we transform the original sentence by interchanging all three pairs of co-referential terms:

"Suzanne's older sister threw the thing that Mark found at the swimming pool to the most radical dude in Boulder"
If the original sentence is true, then this sentence must also be true, while if the original sentence is false, then so is this one.  In short, whichever truth-value the original sentence has, the sentence that results from the interchange of co-referential expressions must also have the same truth-value.

Feature 2:  Existential Generalization, or "Quantifying in," is Permissible

        Consider, again, our original sentence:

"Sandra threw the ball to the boy across the street"
If that sentence is true, then one can move from that sentence to related sentences, that will also be true, and that rather than naming or describing the people or things involved, simply assert that there are actual people or objects that stand in the relevant relations.  Thus one can, for example, draw the following conclusion from the original sentence:
"There is some actual person who threw the ball to the boy across the street".
Here the name "Sandra" has been eliminated, and one simply says that there is some actual person who performed the action in question.  Similarly, one might eliminate the description "the ball", and merely assert that there is some actual object that was acted upon in the way described, as in the following sentence:
"There is some actual person, x, and there is some actual object, y, such that x threw y to the boy across the street"
        Finally, the same thing could be done for the expression "the boy across the street":
"There is some actual person, x, and there is some actual object, y, and there is another actual person, z, such that x threw y to z".
To sum up, then:  Extensional contexts are ones where (1) interchange of co-referential expressions cannot possibly alter the truth-value, and (2) existential generalization is necessarily a truth-preserving inference.

        Intensional contexts, on the other hand, are contexts that lack these features.  Consider, for example, the following sentence:

"Bruce believes that Apollo is an admirable god"
Here the context occupied by the name "Apollo" is not an extensional context, as is shown by the fact that, even if the sentence in question is true, the sentence that results when one eliminates the name "Apollo" in favor of existential quantification - namely,
"There is an actual person such that Bruce believes that that person is an admirable god"
- is not true.  So the context occupied by the term "Apollo" is a context in which inference via existential generalization may very well not preserve truth.

        For a context where interchange of co-referential terms may lead to a change in truth-value, consider the following sentence:

"Rip Van Winkle believes that the previous president of the United States is Ronald Reagan"
The place occupied by the description, "the previous president of the United States", is not an extensional context.  For although the expression "the previous president of the United States", and the name, "George Bush", refer to the very same individual, when one makes the relevant interchange in the above sentence, one gets a sentence - namely,
"Rip Van Winkle believes that George Bush is Ronald Reagan"
- that may very well be false, even if the original sentence is true.

1.2.1  The Logical Structure Involved in the Two Features Discussed Above

Feature 1:  Substitution of Co-Referential Terms Preserves Truth-Value

        What is involved here is a matter of the validity of the following inference:
 

                Fa
                a = b
Therefore:      Fb


Feature 2:  Existential Generalization, or "Quantifying In", is Truth-Preserving

        This is a matter of the validity of the following inference:

                Fa
Therefore:      (Ex)Fx

1.2.2  Intentional States and Intensional Contexts

        As the sentences about beliefs illustrate, some sentences attributing mental states to a person involve intensional contexts.  This is true not only in the case of beliefs, but also in the case of desires, thoughts, hopes, fears, etc., as the following sentences illustrate:
 

"Tom wants to find a way of squaring a circle"

"John thinks that the third planet from the sun is Mars"

"Mary hopes that the lost continent of Atlantis will soon be found"

"Some Greeks feared Zeus more than any other god"
 

It is rather natural, then, to relate intentionality to the occurrence of intensional contexts within sentences that are used to attribute mental states to individuals.  But while this is fine as far as it goes, a problem arises if one wants to use intentionality as a mark of the mental.  For then it will not do to appeal to the occurrence of intensional contexts within sentences that are used to attribute mental states, since then one's account would be circular.  Now if the only place where intensional contexts were to be found was in sentences used to attribute mental states, there would be no problem: one could define intentional states as those states that are attributed by means of sentences containing intensional contexts.  But sentences that do not function to attribute mental states also contain intensional contexts:
"It is a logical truth that Bill Clinton is identical with Bill Clinton"

"It is a necessary truth that 9 = 3 x 3".

For while Bill Clinton is identical with the present President of the United States, it does not follow that:
"It is a logical truth that Bill Clinton is identical with the present President of the United States"
Similarly, although it is true that the number 9 is identical with the number of the planets, it does not follow that:
"It is a necessary truth that the number of the planets = 3 x 3".
Some philosophers - perhaps most notably Roderick Chisholm - have tried to show that one can distinguish between those sentences that have intensional contexts and that deal with mental states and those sentences that have intensional contexts but do not deal with mental states, and that one can do this without appealing to the concept of a mental state.  If that is right, then one can make use of the notion of an intensional context to provide an explication of the concept of an intentional state that could, potentially, yield a non-circular mark of the mental.  But that issue is rather complex, and I do not think it worthwhile to pursue it here.

1.3  Is Consciousness the Mark of the Mental?

What is one to say about these two proposed "marks" of the mental?  Consider consciousness.  To claim that consciousness is the mark of the mental is to claim that, at least:

(1)  Only mental states are characterized by the presence of consciousness;

(2)  All mental states are characterized by the presence of consciousness.

        The first of these claims may very well seem unproblematic, and perhaps it is.  In any case, it is best to begin with the second claim.  So let us start by asking: Are all mental states characterized by the presence of consciousness in the sense that all mental states are states of consciousness?  Or is there a difference, for example, between experiencing pain, and having some belief?  Isn't it rather plausible to say that while pain is itself a state of consciousness, having a particular belief is not?

        Even if this were granted, however, it might be contended that all mental states, even if they are not all states of consciousness, must at least involve some relation to consciousness.  So, in the case of beliefs, for example, it might be said that a belief is a disposition to have a certain thought, and that, since thought episodes are necessarily states of consciousness, beliefs are necessarily related to states of consciousness.

        This is a rather natural idea, but some difficulties may arise with respect to precisely what the relevant relation to consciousness is.  For when the "all" claim is modified from the claim that all mental states are states of consciousness to the weaker claim that all mental states are either states of consciousness, or else stand in a certain relation to states of consciousness, a similar modification has to be made in the "only" claim, if one is to have a mark of the mental - some feature that is possessed by all and only mental states.  And then the danger is that, if one doesn't hit on the right relation to consciousness, too many things will qualify as mental states, since there may be states that, intuitively, are not mental states, but which stand in the specified relation to consciousness.

1.4  Is Intentionality the Mark of the Mental?

        What about intentionality?  Is intentionality the mark of the mental?  For that to be so, the following two claims, at least, will have to be true:

(1)  Only mental states are characterized by intentionality;

(2)  All mental states are characterized by intentionality.

        Neither of these claims is unproblematic.  Consider, first, the "all" claim.  Is it true that all types of mental states are characterized by intentionality?  Are all mental states characterized by "aboutness", by "directedness" at something outside of themselves, something that may not exist?  The types of cases that pose a difficulty for this claim are certain cases of sensations.  Is a painful sensation, for example, directed at something outside of itself?  Or consider the rather uninteresting visual experiences that one has when one closes one's eyes:  Are those experiences directed at anything outside of the experiences themselves?  In short, aren't there cases of being aware of mere sensations that have no "aboutness"?

        What about the "only" claim?  Is it true that only mental states are characterized by aboutness?  Or are there other states that are directed at things that may not exist?

(1)  Language, and the question of the source of intentionality;

(2)  Purely physical systems: the case of the heat-seeking missile.

1.4.1   Language, and the Question of the Source of Intentionality

        Many mental states seem to be characterized by "aboutness", by "directedness" at something outside of themselves, and at something that need not exist.  But isn't language also characterized by intentionality?  Isn't an utterance - or, at least, any utterance that makes an assertion - "about" some thing, or some state of affairs, that, generally speaking, lies outside of the utterance, and that may fail to exist?  For an assertion may, in general, fail to be true, and if an utterance is false, then it "points", so to speak, towards a state of affairs that fails to exist.

        But are utterances intentional in the way that at least some mental states are?  Can one relate the intentionality of language to the intensionality of contexts within sentences?  In support of an affirmative answer to this question, one might cite sentences that talk about the meanings of sentences.  Consider, for example, the sentence:

"Bill Clinton ist ein Vater" (in German) means that Bill Clinton is a father.
        The context within this sentence occupied by the second occurrence of the name "Bill Clinton" is not an extensional context, since, though the original sentence is true, if one replaces "Bill Clinton" by the expression, "the present president of the United States", which refers to one and the same individual, the sentence that results is not true.

        Given that language is also, apparently, characterized by intentionality, the question naturally arises as to whether there is any relationship between the intentionality of the mental and the intentionality of language.  In particular, is one of these more fundamental, and the explanation of the other?  (The sun/moon analogy: the sun shines, and the moon shines, but the former is the source of the latter; the moon shines only because the sun shines.)

        This issue was debated, some time ago, by Wilfrid Sellars and Roderick Chisholm.  Chisholm maintained that it was the mental that was the source of intentionality, and that language possessed intentionality only because of its relation to the mental.  Utterances are about things only because they have meaning, and they have meaning only because they are used to express, for example, beliefs.

        Sellars defended the opposite view.  He held that it was language that was primary, and that, rather than sentences having meanings because they expressed thoughts, thoughts themselves involved an internalized use of language, so that the way to explain the intentionality of thoughts was in terms of the intentionality of language.

1.4.2  Purely Physical Systems:  The Case of the Heat-Seeking Missile

        Regardless of which is more fundamental - the intentionality of language, or the intentionality of the mental - one is still left with the question of whether intentionality has to be treated as a brute fact, or whether, on the contrary, some explanation of it can be offered.  And if the latter is the case, if intentionality has, so to speak, some non-intentional source, what is that source?

        A possible answer to this question is suggested by certain purely physical systems - namely, ones whose physical movement is, metaphorically-speaking, "goal-directed".  Consider, for example, the case of a heat-seeking missile - a device that "looks" for objects in its environment whose temperature exceeds a certain amount, and then which, if it "finds" any such object, moves toward the hottest such object, and, when sufficiently close, "engages" in explosive behavior.

        Compare the "behavior" of such a device with the behavior of someone searching for a kangaroo in a certain park.  Just as there may be no kangaroo in the park, so there may be no sufficiently hot object in the vicinity of the heat-seeking missile.  Or compare the situation with that of a coach looking for a seven-foot center on campus.  Just as it may be that the seven-foot center is the best mathematician on campus, and yet the coach is not looking for the best mathematician on campus, so it may be true that although the hottest object in the heat-seeking missile's environment is also the reddest object, the heat-seeking missile is not "trying" to get close to the reddest object in its environment.

        But how is the metaphorical mentalistic talk about "trying", etc., to be cashed out in the case of the heat-seeking missile?  The answer is that it is to be done in terms of the concept of factors that are causally relevant to the missile's movement.  Thus, if A is the hottest object in the missile's vicinity, that feature of A will cause the missile to move towards A.  But if A also happens to be the reddest object in the missile's vicinity, that feature of A will not play a causal role in the missile's movement.  In short, the following inference is not a valid inference:

(1)  The missile's movement towards A is caused by A's being the hottest object around;

(2)  The hottest object around is also the reddest object around;

Therefore:
(3)  The missile's movement towards A is caused by A's being the reddest object around.
        So substitution of co-referential expressions within some causal contexts may result in a change of truth-value.
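        The point about causal relevance can be modeled crudely in code (the objects and their properties below are invented for illustration): the guidance rule consults only temperature, so even when the hottest object happens also to be the reddest, its redness plays no role whatsoever in the selection of the target.

```python
# A toy model of the heat-seeking missile's "goal-directed" behavior.
# Redness never enters the computation: only temperature is, so to speak,
# causally relevant to which object is selected.

objects = [
    {"name": "A", "temperature": 900, "redness": 0.95},  # hottest AND reddest
    {"name": "B", "temperature": 300, "redness": 0.10},
]

def select_target(candidates):
    # The rule reads only the temperature attribute; redness is ignored.
    return max(candidates, key=lambda obj: obj["temperature"])

target = select_target(objects)
print(target["name"])  # A - selected for its heat, not its redness
```

Here "the hottest object" and "the reddest object" pick out the very same thing, A, yet only the former description corresponds to a feature that figures in the selection - which is why substituting one description for the other in a causal claim can change its truth-value.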

        Alternatively, the point can be put in terms of dispositions.  So put, the point will be that the following inference is not valid:

(1)  The missile is disposed to move towards the hottest object around;

(2)  The hottest object around is also the reddest object around;

Therefore:
(3)  The missile is disposed to move towards the reddest object around.
        So substitution of co-referential expressions within some dispositional contexts may result in a change of truth-value.

        At the very least, then, one can see that sentences describing purely physical systems that lack all mental states can involve intensional contexts.  So a non-intentional account of the intentionality of the mental is by no means impossible in principle.  But in addition, if one can maintain that either dispositions, or else causal connections, play an essential role with respect to at least some mental states, then the above example opens up more than a mere possibility.  For then the intensionality either of sentences used to attribute dispositions to an object, or of sentences about causally relevant properties, may be the underlying source of, and the explanation of, the intentionality of the mental, or of those mental states that are characterized by intentionality.

1.5  "That" Clauses and Two Types of Mental States

        Notice that, when one thinks of mental states which it is natural to view as characterized by intentionality, many sentences about such states involve "that" clauses: they are sentences concerning beliefs that something is the case, or desires that something be the case, or hopes that something is the case, and so on.  There are, of course, sentences that attribute mental states that seem to be characterized by intentionality where the sentences don't involve "that" clauses - such as, e.g., "Bruce is looking for a unicorn".  But might it not plausibly be argued that such sentences can always be rewritten so that they do contain "that" clauses?  Thus, for example, can't one replace the sentence "Bruce is looking for a unicorn" by the sentence "Bruce wants it to be the case that he has found a unicorn", or by the sentence "Bruce is trying to make it the case that he has found a unicorn"?

        If this is right, it suggests that all mental states that involve intentionality can be described via sentences that involve "that" clauses.  This in turn means that one way of searching for mental states that are not characterized by intentionality is by searching for mental states that can be described only by sentences that cannot be translated into sentences involving "that" clauses.

2.  The Problem of Other Minds, and the Analysis of Talk About Mental States

        Whether establishing that one can have knowledge of, or justified beliefs about, other minds is a difficult philosophical problem depends upon what the correct account is of the meaning of talk about mental states.  For there are some analyses of mentalistic language that entail that justifying conclusions about the existence of other minds, and about the mental states of others, need not be any more difficult than justifying conclusions concerning physical states of affairs.

        Two theories that lead to this conclusion are (1) analytical behaviorism, and (2) functionalism.  To show that the problem of other minds involves difficulties beyond those involved in justifying beliefs about physical states of affairs, one would need, therefore, to show that neither analytical behaviorism nor functionalism provides a satisfactory account of the meaning of mentalistic language in general.  Here, however, I shall discuss only analytical behaviorism - though the two arguments against analytical behaviorism that I shall discuss can be reformulated, I believe, so that they apply to functionalism.

2.1  Analytical Behaviorism:  Some Objections

        Recall that analytical, or logical behaviorism is the view that sentences about mental states, rather than being semantically basic, can be analyzed in terms of statements about behavior - either statements about the individual's actual behavior at a time, or else statements about the individual's behavioral dispositions, where an individual's behavioral dispositions are a matter of the behavior that the individual would exhibit if circumstances were different in various ways.

        Probably the most common reason offered for rejecting an analytical behaviorist account of mental states is the claim that behaviorism is incompatible with the plain evidence of introspection.  The suggestion is that one is aware, in introspection, of mental states with properties which cannot be identified with behavioral states.

        There are some classical objections to logical behaviorism that ultimately rest upon this appeal to the evidence of introspection.  Nevertheless I think that the classical arguments are much more likely to convince than a bald appeal to introspection, for they provide one with more explicit statements of exactly what is being claimed.

        If logical behaviorism is true, then, if organisms A and B are exhibiting the same behavior, and have the same behavioral dispositions, it necessarily follows that organisms A and B are in the same mental state.  Here are three ways in which one might try to show that this consequence of logical behaviorism is false, and hence that logical behaviorism must be rejected:

(1) One can try to describe a case in which two organisms will be in the same behavioral state, but in different mental states.

(2) One can try to show that, for any behavioral state whatever, it is possible for an organism to be in that behavioral state without being in any mental state, or at least, in any state of consciousness.

(3) One can try to show that one can have knowledge of behavioral states without thereby having knowledge of mental states.

        Let us now consider arguments which exemplify each of these approaches:

(1) The Inverted Spectrum Argument;

(2) The Unconsciousness Argument;

(3)  The Understanding-Sensation Terms Argument.


2.1.1  The Inverted Spectrum Argument

        This is perhaps the most famous objection to analytical behaviorism.  One way of formulating it is as follows:

(1)  Consider the sort of experience you have when you look at something red, under normal conditions of illumination, etc.  It is different from the experience you have when you look at something green under similar conditions, providing that you are not red-green color-blind.  Now can't you imagine what it would be like for ripe tomatoes to give rise, not to the sort of visual experience you now have when you look at them, i.e., a sensation of redness, but instead to the sort of experience that you now have when you look at a ripe lime?  And conversely, can't you imagine what it would be like if the experience you got when you looked at a lime was like the experience you now get when you look at a tomato?

        More generally, one can imagine what one's experiences would be like if one's complete visual spectrum were, so to speak, to flip over.  And not merely can one imagine this; one could make a film which would produce the sort of experiences one would be having if one's visual spectrum were to flip over.

Moreover, this possibility is not merely a logical possibility.  It is also empirically possible, even if not technically possible at present.  For it will surely be possible someday to insert a device in a person that will systematically alter the messages sent from the eyes to the visual centers in the brain, and thus produce an inversion of one's visual spectrum.

(2)  The particular words that one uses to describe the colors of objects, and to describe one's visual experiences, surely do not affect the nature of those experiences.  You might, for example, decide to use the word "green" to describe objects that everyone else uses the word "red" to describe.  This would not affect the way that ripe tomatoes and limes look to you; it would not alter the visual experiences you have.

(3)  Suppose now that you have an identical twin.  In the interests of philosophy, your twin is separated from you, and placed in an artificial environment in which there are no natural objects, such as tomatoes and limes, which tend to be of fixed colors.  Your twin is then taught a language that is just like English, except that the use of color words is switched around.  If you were to see an object which she would label "red", you would label it "green", and so on.

(4)  Suppose that her eyes and the visual portions of her brain are exactly similar to yours.  Then you have excellent reason to believe that the visual experiences she has when she looks at something which you would call "red" and she would call "green" are the same in quality as the experiences you would have when you look at the same object.  For it has been argued that the language one learns to describe visual experiences and the properties thereof does not significantly affect the quality of those experiences.

(5)  An operation is now performed on your twin which results in an inverting of her visual spectrum.  Objects that before the operation she would have labeled "red" she will now label "green".  Hence when you and she now look at the same object, your color experiences will be different from hers, just as your own color experiences of a given object would differ before and after such an operation.

(6) Your twin is now let out of her artificial environment.  The claim is now that although when she looks at a ripe tomato she will have the sort of visual experience that you have when you look at a lime, there will be no relevant difference between you either with respect to actual behavior or with respect to behavioral dispositions.

(7)  Conclusion: One can imagine cases, and in the future will actually be able to construct them, in which one has differences in mental states that are not accompanied by differences in behavioral states.  Hence it cannot be a conceptual truth that mental states are identical with behavioral states.

        The argument can equally well be expressed simply in terms of one's own case.  One way of expressing the argument is in terms of something that might have happened to you in the distant past.  Thus, you can imagine what it would be like for your spectrum to invert right now.  If so, you can equally well imagine what it would be like for your spectrum to have inverted at some time before you learned to speak, say, at the age of one month.  If that had in fact happened, you would be having a different experience when you looked at a ripe tomato from what you now have when you look at one.  But your behavior and behavioral dispositions might well be just the same as they are now.  Hence the properties of one's visual experiences cannot be identified with properties of behavioral states.

        Another way of putting the argument in terms of yourself focuses upon what could happen to you right now.  In this case, we need to consider three possible changes that might occur to you now.  One involves a change to your brain that inverts your visual spectrum.  A second involves a change to the linguistic center in your brain, so that your use of language for visual properties also undergoes a systematic inversion.  And finally, a third change is needed to some of the memories that you currently have - namely, memories concerned with your previous visual experiences and your use of language in relation to those experiences.  The idea is then to consider, first, making these changes one at a time, and then secondly, making the changes all at once.  When the latter is done, it seems very plausible to say that the experiences you have, for example, when you look at a ripe tomato will be different from what they were before the changes, even though, first, you will not be aware of the fact that they are different, and secondly, there will be no changes in your behavior or behavioral dispositions.

2.1.2  The Unconsciousness Argument

        The thrust of this second argument is that one can imagine what it would be like to be in the behavioral states which analytical behaviorists claim are identical with states of consciousness, and yet not to be enjoying the states of consciousness in question. Formulated with respect to visual experiences, the argument might run as follows:

(1) Any person who is not blind can, simply by opening and closing his or her eyes, appreciate the radical difference between having visual experiences and not having visual experiences.

(2) One can also understand what it would be like to have powers of extrasensory perception, that is, to have the ability to form true beliefs about objects in the absence of sensory input.  Imagine now that one has this power in a very high degree, but that it is restricted to information about the visual properties of objects.  That is, you are to imagine that while sitting here, you are capable of describing correctly some scene in New York city, and that you can do it as quickly, and in as much detail, as someone who is there witnessing the scene.

(3) Mightn't one say in such a case that you are somehow able to see what is going on in New York city? No, for it is not that you are having the sort of visual experiences you would be having if you were in New York city.  You are to imagine that the visual experiences you are having are either of your immediate surroundings, or of the sort that you typically have when you close your eyes, or none at all.

(4) If you can imagine what it would be like to have such a power, you can imagine what it would be like to go blind, but to have this ability to correctly describe the visual properties of objects.  Suppose further that your ability to correctly describe the visual properties of objects becomes restricted in scope, so that you can do it only when your eyes are open, and when sufficient light is traveling from the object to one's eyes.  Although blind, you would possess precisely the same capacity for discriminating among objects that people not blind possess.  So it would seem that one can imagine what it would be like to exhibit the same behavior, to have the same behavioral dispositions and capacities - verbal behavior and behavioral dispositions aside - that are associated with a person who can see, and yet not to be having any visual experiences.  If so, the claim that it is a conceptual truth that there is some non-linguistic behavioral state that is identical with the having of visual experiences must be rejected.

(5)  There will, of course, be differences with respect to what one is inclined to say, for one will say that one is no longer having visual experiences.  But there are at least three reasons for thinking that the latter fact cannot save a behaviorist account.  The first is that many animals that cannot use a language certainly have visual experiences, so being inclined to talk about visual experiences cannot be a necessary condition of having such experiences.  Secondly, one can imagine an individual who is disposed to say that he is having certain visual experiences whenever he acquires "corresponding" visual information, or, at least, inclinations to have certain visual beliefs.  (Compare Armstrong's account of what it is to have a visual experience.)  Thirdly, the account that the behaviorist is offering of what it is to have a visual experience would seem to be circular if one says that part of what it is to have an experience is to be disposed to say that one is having a visual experience.

Comment

        This sort of argument can be formulated in much more restricted and much more expansive forms. A more restricted version might focus upon some bodily sensation, like an itch:

(1)  What account can the behaviorist give of something like an itching sensation?  One possible account is to identify the having of an itching sensation in a particular part of one's body with the disposition to scratch that part of one's body.

(2)  People sometimes have compulsions to perform various acts.  Imagine, then, someone who has a compulsion to scratch his typewriter.  Is he experiencing an itching sensation located in his typewriter?  Well, it might be objected that his typewriter is not part of his body.  Suppose, then, that he has a compulsion to scratch his teeth or his fingernails, i.e., parts of the body that one rarely locates itches in.  Do you want to say that if he has a disposition to scratch his tooth, he must be experiencing an itch there?  Can't you imagine, in your own case, what it would be like not to be experiencing an itch anywhere, and yet to have this compulsion or disposition to scratch a tooth?

(3) Now the behaviorist may object that in such a case your verbal behavior, or your dispositions to verbal behavior, will be different from what it would be if you were really experiencing an itch in your tooth. How will it be different? Presumably the difference will be simply that in the one case you will be disposed to say:  "My tooth itches", while in the other you will be disposed to say: "My tooth doesn't itch, but I have this weird disposition to scratch it nonetheless."

(4)  But here, too, this reply seems unacceptable.  In the first place, there does not seem to be any conceptual connection between mental states of the bodily sensation sort and verbal behavior.  Surely animals that are incapable of using a language enjoy bodily sensations such as itches.  Secondly, if the verbal behavior suggested is incorporated into the behaviorist's account of what it is to experience an itching sensation, the account will be circular.  For we are now saying that to experience an itching sensation is to be disposed to scratch and to be disposed to say that one is experiencing an itching sensation.

(5)  Once again, the conclusion seems to be that one could be in the relevant behavioral state, and yet not be having any experience of the sort that is supposed to be identical with that behavioral state.

        A more expansive version of the argument might involve an attempt to show that one can understand what it would be like to be in precisely the same behavioral states that one is in when one is conscious, and yet to be completely unconscious:

(1)  Having been awake at some times, and asleep at others, one appreciates the distinction between being conscious and being unconscious.

(2)  One can imagine oneself doing various things while unconscious, e.g., walking in one's sleep.  But if one can imagine oneself doing something while asleep, why can't one imagine oneself engaged in a much wider range of activities: eating, talking to people, responding to all sorts of stimuli, and so on?

(3)  It is true that if this were to happen, others could not tell by observing one's behavior that one was unconscious.  They would certainly judge one to be completely conscious.  But isn't it possible, nonetheless, that the states of consciousness one would be enjoying would be, not those one is now enjoying, but those that one enjoys while in a dreamless sleep?  If so, it must be possible for one to behave, and to be disposed to behave, just as one would if one were conscious, and yet to be completely unconscious.  Hence states of consciousness cannot be necessarily identical with behavioral states.

2.1.3  The Understanding-Sensation-Terms Argument

        This final argument is essentially just the sort of argument advanced by Frank Jackson that was discussed earlier.  Consider the case of Mary, whose visual experiences are all of the black and white variety, but who has a complete understanding of the behavioral dispositions of those who have experiences of the red variety.  If analytical behaviorism provided a correct account of the meaning of sensation terms, it would follow that Mary was capable of acquiring an understanding of what is meant by an expression such as "experience of the red variety".  But it seems, on the contrary, that Mary cannot acquire an understanding of the meaning of such an expression simply on the basis of her knowledge of the behavioral dispositions that may be associated with someone who is having an experience of the red variety.  So a behaviorist analysis of sensation-terms cannot be correct.

3.  Alternative Accounts of the Justification of our Beliefs about Other Minds

        There are three main types of arguments that have been advanced in an attempt to provide an account of the justification of one's beliefs about other minds which will be compatible with a dualistic view of states of consciousness.  Each argument could, of course, equally well be employed in conjunction with a materialist view of the mind.

3.1  The Argument from Analogy

        Perhaps the most natural account of the justification of one's beliefs about other minds is provided by the argument from analogy.  The basic claim is that beliefs about other minds can be justified by a process of inductive generalization based upon evidence provided by one's own case, and in which observation of one's own mental states plays an essential role.

        The general idea is that one can find out, by a combination of observation and introspection, that there are certain laws relating states of one's body to states of consciousness which hold true at least in one's own case.  One then extrapolates these regularities, and gets laws that apply to other organisms that are similar to oneself in the relevant respects.  The resulting laws will imply that physical objects that are similar in the relevant respects to one's body will probably have states of consciousness associated with them in the way that states of consciousness are associated with one's own body.  And then, since there are in fact physical objects that are similar to one's own body, one can apply the laws in question to these objects, and draw the conclusion that there are states of consciousness, which one notices are not one's own, associated with these other physical organisms.

3.2  The Inference to the Best Explanation

        This second account differs radically from the argument from analogy approach.  The claim that the appropriate reasoning is a matter of inductive generalization from one's own case is rejected.  It is contended, first, that one's justification for believing that others have minds does not rest upon observation of mental states in one's own case, and secondly, that the reasoning is not a matter of inductive generalization, but of what may be labeled "inference to the best explanation".

        The basic idea is that the theory that other physical organisms have minds is the hypothesis that best explains their behavior.  The assumption that physical organisms similar to oneself have minds that stand in certain causal relations to their bodies leads one to expect those organisms to behave in certain ways.  These expectations are generally fulfilled.  It is then contended that there is no other hypothesis of comparable simplicity presently available that will enable one to explain and predict the behavior of such organisms to the same extent.  If so, one is surely justified in accepting the hypothesis that other organisms have minds, on the grounds that it provides the best available explanation of their observable behavior.

3.3  A Non-Analogical Argument Based upon Use of Mentalistic Language

    This third account of our knowledge of the mental states of others centers upon the contention that the critical evidence in support of our beliefs about other minds consists in certain facts about the linguistic behavior of other organisms, specifically:

(1) The fact that other organisms employ, and apparently understand, language that is apparently about mental states;

(2) The fact that other organisms assert (or at least apparently assert) that they enjoy mental states.

The basic suggestion is that it is reasonable, given facts (1) and (2), to believe that organisms of which (1) and (2) are true probably enjoy mental states.

4. The Argument from Analogy

4.1  A Formulation of the Argument from Analogy

        The argument involved in this first approach to the question of the justification of one's beliefs about other minds runs roughly as follows:

(1) There is a certain physical object B (one's own body) such that there is a certain person (namely, oneself) such that one finds that there are correlation laws connecting physical states of B and mental states belonging to that person (oneself).

Comment

        The physical states in question may be either physiological states or behavioral states.  For, on the one hand, one finds that whenever one's elbow is struck at a certain spot, one experiences a certain characteristic sensation.  And on the other hand, one notices that whenever one feels happy there are certain ways in which one is behaving or at least disposed to behave.

(2)  One notices that there are other physical objects that are quite similar to one's body - that is, to physical object B.  These other objects are capable both of similar behavior and of being in similar physiological states.

(3)  Suppose now that one generalizes the correlation laws that one finds are true in one's own case.  What will the resulting laws say?  The answer is that there are different ways in which one might generalize the laws in question.  One possible type of generalization is illustrated by the following:

There is a certain person - namely oneself - such that whenever an elbow on any body similar to body B is struck in a certain spot, that person experiences a certain characteristic sort of sensation.

But this generalization obviously won't do.  For it is simply false that when elbows on bodies other than one's own are struck, one has any sort of sensation.  What one wants is a generalization that says merely that there is someone who has an experience under the circumstances in question:

For every physical object that is similar to physical object B, there is some person such that whenever an elbow on that physical object is struck in a certain place, the person in question enjoys the relevant sort of experience.
(4)  If one now applies such generalizations to physical objects that are similar to one's body, one is led to the conclusion that when such objects are in certain physiological states, or are exhibiting certain sorts of behavior, there is a person associated with those bodies who is enjoying certain mental states.

(5)  But one can tell by introspection that when physical objects other than B (one's own body) are in the relevant physical states, one is not enjoying the correlated mental states oneself.

(6)  Hence one is forced to conclude that there are other persons associated with physical objects resembling one's own body.
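The difference between the rejected and the accepted generalization in step (3) is purely a matter of quantifier scope, which can be made explicit as follows.  (The symbolization is mine, not in the original: Sim(b, B) abbreviates "body b is similar to body B", and S(b, x) abbreviates "whenever an elbow on body b is struck in a certain place, person x enjoys the characteristic sensation".)

```latex
% Rejected: one fixed person (oneself) has the sensation for every
% similar body -- simply false.
\[
  \exists x\, \forall b\, \big( \mathrm{Sim}(b, B) \rightarrow S(b, x) \big)
\]
% Accepted: for each similar body, some person or other has the
% sensation -- the generalization the argument needs.
\[
  \forall b\, \big( \mathrm{Sim}(b, B) \rightarrow \exists x\, S(b, x) \big)
\]
```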

4.2  Physiological States or Behavioral States?

        One issue worth thinking about is whether the argument from analogy should be formulated in terms of physiological states or behavioral states.  A number of writers tend to place the emphasis upon behavioral states.  This is true, for example, of H. H. Price in his article "Our Evidence for the Existence of Other Minds" (Philosophy, 1938), and also of A. J. Ayer in his discussion of the argument from analogy in his book, The Problem of Knowledge.  Thus Ayer says: "Consideration of the physiological resemblances between myself and others plays a secondary role."  (221-2)

        This view seems to me very much open to question.  What one wants to ask is this:  What are the basic correlation laws in one's own case from which one is extrapolating?  Physiological resemblances will play a secondary role, and behavioral resemblances a primary role, only if the laws in question connect up mental states and behavioral states.  But this is surely very doubtful.  The laws running from physical causes to mental effects surely relate brain states as causes to mental states as effects.  But in addition, the laws running in the opposite direction, while they relate mental states as causes to behavioral effects, must surely involve causal processes that run through complex physiological states, including brain states.  And given that this is so, what justification can there be for concluding, by means of the argument from analogy, that something that is behaviorally similar to one, but physiologically radically different - such as a machine - probably enjoys states of consciousness?  For it will be disanalogous in precisely the crucial respect, i.e., with respect to its brain states.

        The upshot seems to be that it is similarity of physiological states that is crucial to the argument from analogy.

5.  Objections to the Argument from Analogy

        I think it is correct to say that the majority of contemporary philosophers feel that the argument from analogy does not provide the property dualist with an acceptable account of the justification of one's beliefs about the states of consciousness of other people.  There are a variety of reasons that have been offered for thinking that the argument from analogy does not do the job.  The most important objections seem to me to be the following:

(1)  The verifiability objection;
(2)  Strawson's objection;
(3)  The checkability objection;
(4)  The objection that the reasoning is inductively unsound;
(5)  The objection that the reasoning lends only very weak support to the conclusion;
(6)  The objection that, though the argument from analogy is in principle sound, it implies that justified beliefs about other minds presuppose detailed neurophysiological knowledge.

        My comments on the first four objections will be relatively brief.  I think that the crucial objections are the last two, and so it is those that I want to examine very closely.

5.1 Objection 1: The Verifiability Objection

        A. J. Ayer, in his famous book, Language, Truth, and Logic, rejected the argument from analogy on the ground that the conclusion that it purports to establish - that there are private states of consciousness that are not one's own - is devoid of factual meaning, since it is unverifiable.  Ayer's view was that it made sense to talk about private states of consciousness in one's own case - since there one can verify that one is in a given state of consciousness - but not in the case of others.

        One might think that the verifiability objection begs the question.  For after all, if the argument from analogy is sound, then one can in principle have evidence that supports the claim that there are mental states that are private and that are not one's own.  So if the argument from analogy is sound, sentences about the private mental states of others are verifiable.

        The problem with this reply is that in setting out the argument from analogy one is implicitly assuming that it is meaningful to speak of private mental states that are not one's own.  One does this, in effect, when one generalizes upon the correlation laws that one discovers in one's own case.  The result is that if one agrees that a sentence is not factually meaningful unless it is verifiable, one cannot employ the argument from analogy to show that sentences about other minds are verifiable, and hence meaningful, since in formulating the argument one is implicitly assuming that sentences about other minds are meaningful.

        What one must do in order to answer the verificationist objection is to examine critically the versions of the verificationist principle which are being appealed to.  This undertaking is out of the question here.  In any case, if the verificationist objection is sound, it is an objection not to the argument from analogy per se, but to dualism.  The verificationist objection does not raise any problems specifically concerned with the justification of one's beliefs about other minds.

5.2  Objection 2: Strawson's Objection

        Strawson's objection to the argument from analogy is somewhat difficult to disentangle.  It is contained in the following passage from his essay "Persons":
 

"To put it briefly: one can ascribe states of consciousness to oneself only if one can ascribe them to others; one can ascribe them to others only if one can identify other subjects of experience; and one cannot identify others if one can identify them only as subjects of experience, possessors of states of consciousness.

"It might be objected that this way with Cartesianism is too short.  After all, there is no difficulty about distinguishing bodies from one another, no difficulty about identifying bodies.  And does this not give us an indirect way of identifying subjects of experience, while preserving the Cartesian mode?  Can we not identify such a subject as, for example, 'the subject that stands to that body in the same special relation as I stand to this one'; or, in other words, 'the subject of those experiences which stand in the same unique causal relation to body N as my experiences stand to body M'?  But this suggestion is useless.  It requires me to have noted that my experiences stand in a special relation to body M, when it is just the right to speak of my experiences at all that is in question.  (It requires me to have noted that my experiences stand in a special relation to body M; but it requires me to have noted this as a condition of being able to identify other subjects of experience, i.e., as a condition of having the idea of myself as a subject of experience, i.e., as a condition of thinking of any experience as mine.)"  (Pages 134-5)


        The crucial argument here seems to be contained within the parentheses.  The following would seem to be a fair statement of it:

(1)  I can't refer to an experience as mine, I can't say that an experience belongs to me, unless I possess the concept of myself as a person.

(2)  I can't possess the concept of myself as a person unless I also have the concept of persons other than myself.

(3)  I can't possess the concept of persons other than myself unless I am able to identify other subjects of experience.

(4)  It follows from (1), (2), and (3) that I can't refer to any experiences as mine unless I know how, in principle, to identify other subjects of experiences.

(5)  If subjects of experiences are conceived of as private entities - either as Cartesian spiritual substances or as bundles of mental states - then one might not know how to identify other subjects of experience, since one might not know how to determine even whether there are other subjects of experience.

(6)  The advocate of the argument from analogy thinks that one can first identify certain experiences as one's own, hit upon the argument from analogy and determine that it is a sound argument, and then use it to identify other subjects of experience.  This presupposes, if one takes the view that subjects of experiences are private entities, that one might know that certain experiences were one's own without knowing how to identify other subjects of experiences. But this is incompatible with the conclusion drawn at step (4).

        What Strawson's argument purports to establish, then, is this:

If subjects of experiences are private entities, the argument from analogy cannot be employed to identify other subjects of experience.

        Strawson's argument is thus directed not against the argument from analogy per se, but against the view that subjects of experiences are private entities.  What it shows, with respect to the argument from analogy, is that the argument is either useless or unnecessary.  For if subjects of experience are private, Strawson's argument purports to show that the argument from analogy cannot be used to show that there are other subjects of experience.  While if subjects of experiences are not private, then one will be able to acquire knowledge about other subjects of experiences in a more direct fashion, without resorting to the argument from analogy.

Comment

    The crucial assumption in this argument is the one made at step (3):

One can't possess the concept of other persons unless one knows how to identify other persons.

        What is the justification of this assumption?  Strawson does not, as far as I can see, offer any satisfactory justification for it.  In a footnote (pages 133-4) he says:

"The main point here is a purely logical one: the idea of a predicate is correlative with that of a range of distinguishable individuals of which the predicate can be significantly, though not necessarily truly, affirmed."

        But there seems to be a crucial ambiguity here.  It is certainly true that it is a logical point about (most) predicates that it is possible for them to apply to more than one individual.  This implies that one cannot have the concept of oneself as a person without having the concept of other persons: the predicate "...is a person" must be capable of applying to more than one individual.  But it is something quite different to claim that one must be able to distinguish different individuals that may fall under a given predicate.  This latter claim does not seem to be a purely logical point.  It appears rather to be a version of the verifiability principle slightly disguised.

        My own conclusion, then, is that Strawson can defend step (3) only by appealing to a version of the verifiability principle.  If verificationism is unsound - if one can understand a sentence without knowing how, even in principle, one could verify it - then Strawson's argument must be rejected.

5.3  Objection 3: The Checkability Objection

        This third objection turns upon what is sometimes referred to as the checkability requirement.  According to this requirement, a conclusion that is arrived at by means of an analogical argument can be viewed as supported by the argument only if it is in principle possible to check up on the conclusion of the argument in a more direct fashion.

        Thus, suppose that one opens a number of green boxes and finds that each box contains a single red marble.  One would seem justified at least to some extent in concluding that a similar green box which one has not yet opened probably also contains a single red marble.  What the advocate of the checkability requirement contends is that it is crucial here that it is possible to check out this conclusion directly, i.e., by opening the box and looking into it to see whether it does in fact contain a single red marble.  If this direct check were not possible, the argument would not support the conclusion.

        But if one is using the argument from analogy to establish that there are private states of consciousness that are not one's own, it is surely impossible to check in a more direct fashion to determine whether the conclusion is correct. The argument from analogy, when used to establish a dualistic conclusion, must thus be rejected as violating the checkability requirement.

        The response to this objection that I would be inclined to make is simply this.  Unless it can be shown that there is some formal requirement for inductive reasoning that is violated by the argument from analogy, one is unwarranted in rejecting the argument.  Given that the premises are true, and that the reasoning is free of fallacies - is formally acceptable - one ought to accept the conclusion.  If the checkability requirement implies instead that the argument should be rejected, that merely provides one with grounds for concluding that the checkability requirement is a bad one.

5.4  Objection 4: The Reasoning is Inductively Unsound

        Some writers have tried to show that the reasoning involved in the argument from analogy is inductively unsound.  Alvin Plantinga, for example, has argued in an article on "Induction and Other Minds" (Review of Metaphysics, 1966) and elsewhere that the argument from analogy violates the following condition that must be satisfied by any acceptable simple inductive argument:

"A simple inductive argument is acceptable only if it is logically possible that its sample class contain a counterinstance to its conclusion"
        The thrust of this requirement is that it is illegitimate to attempt to support a generalization by means of evidence that has been collected in such a way that there was no possibility of the evidence containing a counterinstance to the generalization.

        Imagine, for example, that someone wants to establish that all the fish in some lake weigh more than 20 pounds, but that to do so he uses a net that won't trap fish that weigh less than 20 pounds.  The fact that all the fish he has caught weigh more than 20 pounds surely does not give one good reason for concluding that all the fish in the lake weigh more than 20 pounds.  For he has collected his sample in such a way that it was practically impossible that it contain a counterinstance to the generalization he wanted to establish.  But if this is objectionable, it must surely be even more objectionable to collect one's sample in such a way that it is logically impossible for it to contain a counterinstance to the generalization one wants to establish.
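The bias in the net example can be made vivid with a small simulation (a hypothetical illustration; the weights and numbers are invented for the purpose):

```python
import random

random.seed(0)

# A hypothetical lake: 1,000 fish with weights spread between 5 and 40
# pounds, so that many fish in fact weigh less than 20 pounds.
lake = [random.uniform(5, 40) for _ in range(1000)]

# A net that cannot trap fish weighing less than 20 pounds: the sampling
# method itself filters out every possible counterinstance.
catch = [w for w in lake if w >= 20]

# The sample unanimously "confirms" the generalization...
print(all(w >= 20 for w in catch))  # True, by construction

# ...but the generalization is false of the population being sampled.
print(all(w >= 20 for w in lake))   # False
```

The first result is guaranteed no matter what the lake contains, which is precisely why such a sample lends no support to the generalization.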

        Why does Plantinga think that the argument from analogy violates this requirement?  Essentially because he thinks that the reasoning involved in the argument from analogy is roughly of the following sort:

(1a) Every case of a physical state of type B such that I have determined whether or not it was accompanied by a mental state of type M was so accompanied.

(2a) Therefore it is probable that every case of a physical state of type B is accompanied by a mental state of type M.

- where "determined" means "determined by direct observation".

        Since I can determine by direct observation that a given physical state is accompanied by a mental state, but cannot determine by direct observation that it is not accompanied by a mental state (conceived dualistically), there is no way that (1a) could possibly be false.  (I can determine by observation that a physical state is not accompanied by a mental state belonging to me, but I cannot determine that it is not accompanied by a mental state belonging to someone or other.)

        The reasoning from (1a) to (2a) does violate the requirement stated above. But the reasoning involved in the argument from analogy is not of the sort suggested by Plantinga. It is rather of the following sort:

(1b) Every case of a physical state of type P of physical object B (my body) such that I have determined by direct observation whether or not it was accompanied by a mental state of type M belonging to me was so accompanied.

(2b) Therefore it is probably a law that every physical state of type P of body B is accompanied by a mental state of type M belonging to me.

        But it is certainly possible to falsify (1b) by direct observation, since what one is claiming here is not merely that the physical state is accompanied by a mental state belonging to someone or other, but that it is accompanied by a mental state belonging to oneself.

Conclusion

        Plantinga's criticism is based upon an incorrect formulation of the argument from analogy.  When the argument from analogy is properly formulated, it does not violate the requirement which Plantinga suggests.

5.5   Objection 5:  The Argument from Analogy Lends Only Very Weak Support to the Conclusion

        This objection has been advanced by a number of philosophers. Norman Malcolm, for example, mentions this objection in passing in his article on "Knowledge of Other Minds": "I shall pass by the possible objection that this would be very weak inductive reasoning, based as it is on the observation of a single instance."

        The basic thrust of this fifth objection is thus that the reasoning involved in the argument from analogy is just like the reasoning that would be involved if someone were to open a green box, find that the inside was painted red, and then conclude that probably all green boxes are painted red on the inside. It may be true that one's examining a green box and finding that it is painted red on the inside makes it slightly more probable that other green boxes are also painted red on the inside, but surely it does not make it sufficiently probable for one to be justified in believing that other green boxes are painted red on the inside. Similarly, the fact that one has discovered that one physical object - one's own body - has a mind associated with it is surely not sufficient grounds for concluding that similar physical objects probably have minds associated with them.

        One common reply is that the analogy is unfair, since in the case of the argument from analogy one has observed correlations between physical states and mental states on a number of occasions, so that the reasoning is not really generalization based upon a single instance.  Thus, for example, I have pressed my hands together a large number of times, and each time I have experienced a certain sort of tactile sensation.  So the conclusion that when hands are pressed together, certain tactile sensations are produced rests upon a large number of instances.

        The counter reply then runs as follows.  The suggested reasoning can be compared with that of a person who opens a green box, finds that it's red on the inside, closes it, then opens it again, and finds that it's red on the inside, closes it, then opens it again, and finds that it's red on the inside, etc.  The person who does this grants that if he had only looked in the box once, he would have had very little evidence for the claim that all green boxes are red on the inside.  But now, having looked inside the same green box a number of times and having found it to be red on the inside each time, he is confident that all green boxes are red on the inside.  This is precisely what the defender of the argument from analogy would have us believe is an acceptable pattern of reasoning.  One presses one's hands together and notices a certain tactile sensation.  One presses them together again and notices the same sort of sensation.  One repeats this operation a number of times.  What one is doing each time is simply confirming the fact that there is a mind associated with this body. Why should repeated confirmation of this fact give one good reason to believe that there is a mind associated with some other body, any more than repeated confirmations of the fact that this green box is red on the inside gives one good reason to believe that other green boxes are red on the inside? Surely the reasoning is equally unacceptable in both cases.

        A possible response at this point is that there are a large number of distinct correlations in the case of the argument from analogy.  Thus one notices that certain sorts of stimulation give rise to tactile sensations, other sorts of stimulation give rise to visual experiences, other sorts to auditory experiences, and so on. And within each of these general types, there are more specific types of physical stimulation that give rise to specific sorts of sensations and experiences.  The situation here is quite different from that of the green box example.

        But is this difference really relevant?  It's far from clear that it is.  For it may be the case not only that the inside of the box is red, but that it is smooth to the touch.  Thus whenever one looks inside the box, one sees that it's red, and whenever one feels inside it one discovers that it's smooth.  It's very difficult to see why merely increasing the number of generalizations that are found to be true of a particular object should give one more reason to believe that any of the generalizations are true of other objects.  As a consequence, I don't think that appeal to the large number of generalizations that exist in the mind-body case provides one with an answer to the objection.

        What then is the defender of the argument from analogy to say?  The first thing he or she should do is to try to suggest another model of the sort of reasoning that he or she thinks is involved here.  The critic's model is that of projecting a property or group of properties that one particular object has been found on numerous occasions to have onto other objects that are similar to it.  The defender of the argument from analogy has to come up with some other way of looking at the reasoning in question.

        It seems clear that the way the defender of the argument from analogy wants to view the reasoning is as a matter of extrapolating a law that has been found to be true in one area to other areas.  This process of reasoning is surely a perfectly legitimate one.  Thus, for example, one has no qualms about concluding that laws that have been found to be true on earth are probably true of other parts of the physical universe.  What the defender of the argument from analogy is suggesting is that what one is doing is extrapolating laws that one has found to relate physiological states and mental states in the part of the world that consists of one's body to other parts of the world.  The situation is, it is being contended, precisely parallel to the extrapolation of laws that are discovered on earth to other parts of the universe.  The difference is merely one of degree: one's laboratory is somewhat larger in the latter case than in the case of the argument from analogy.

        It is clear that there is a radical difference between the projection of properties and the projection of laws.  The earth has the property of having an atmosphere that contains oxygen.  This has been confirmed by a tremendous number of observations.  Still, no one would think of projecting this property to other planets and concluding, on that basis alone, that probably other planets have an atmosphere that contains oxygen.  One would not think of doing this even if one had no independent information about the atmosphere of other planets.  In contrast, one is quite willing to generalize upon laws that are found to hold true on earth, applying them to other planets.  Where the law is not a basic law, but a derived law - such as the law describing the behavior of falling objects near the surface of the earth - the law in question will not of course be true on other planets, but there will be a more basic law, from which the law in question can be derived, and which will be true on other planets.  So what we can say is this.  If L is a law that has been found to hold true in a certain area, we are confident that either law L will hold true in other parts of the universe or else there will be some more basic law L* from which L can be derived and which will be true in other areas.  It is this relationship that I have in mind when I speak of extrapolating or generalizing laws from one area to another.

        The distinction between projecting properties and projecting laws thus seems to be a promising line of thought for the defender of the argument from analogy.  The question that remains is whether it can be argued that in the case of the argument from analogy what one in fact has is a projection of laws rather than a projection of properties. This question is unfortunately not as easy to settle as it might at first appear.  The problem is that to attribute a property to an object is often just to say that a certain generalization is true of that object.  Thus consider the assertion that something is green.  It might be argued that to say that an object is green is just to say that it is a law about the object that it appears green to normal observers under standard conditions.  As a consequence, drawing a sharp line between the projection of properties and the projection of laws is more problematical than it first seems.

        Rather than discussing this problem, let me indicate what I think the outcome of such discussion is likely to be.  It seems that in the case of the argument from analogy one is confronted with a choice between the following two views:

(1) There are basic laws of the form: If one has a physical event of type P1, that event will causally give rise to a private state of consciousness of type E1;

(2) There is some new primary property M - intuitively, the property of having a mind - and there are basic laws of the following form: If there is a physical event of type P1 that occurs in a body that has property M, then that will causally give rise to a private state of consciousness of type E1.

        To say that M is a primary property is to say that there are no laws connecting M with the properties that are required for the formulation of theories in physics in such a way that the presence and absence of M can be predicted from information about these other properties.  Thus greenness, for example, is a secondary property, since one can predict whether an object will look green on the basis of information about the molecular properties of its surface, and similarly one can (or will be able to) predict whether a substance will dissolve in water once one has full knowledge of its molecular properties.

        If the property M of having a mind were a secondary property, there would have to be some complex K of physical properties on which it was causally dependent.  There would then be no need to mention property M in one's basic laws.  For one could replace a law such as that expressed by the statement, "If there is a physical event of type P1 that occurs in a body with property M, then that will causally give rise to a private state of consciousness of type E1" by: "If there is a physical event of type P1 that occurs in a body with properties K, that will causally give rise to a private state of consciousness of type E1".  That is to say, if the property of being minded were a secondary property, the basic psycho-physical laws would differ only in complexity from laws of the sort involved with option (1).  They would not contain any reference to some new non-physical property.

*** Which option should one choose: (1) or (2)?

        If (1) is the correct view, then the argument from analogy is in principle sound.  There will be some laws that one can apply to organisms that resemble one's own body, and thus derive the conclusion that there are private states of consciousness that are not one's own.

        If, on the other hand, (2) is the correct view, there will be no laws that one can confidently apply to other physical organisms, since before one could do so one would have to know that the organism in question possessed the non-physical property M of having a mind.  And since M is a primary property, and a non-physical one, there will be no test for its presence available to one.

*** Can the defender of the argument from analogy offer any reason for preferring option (1) to option (2)?

        One possible response is an appeal to simplicity.  That is to say, one can reply that when one has to choose between two hypotheses that are otherwise equivalent, the simpler one should be selected.

        It can then be argued that option (1) is simpler than option (2), in two ways:  First, the laws that are introduced by option (1) are simpler than those introduced by option (2), since they involve reference to one less property.  Secondly, option (2) involves the postulation of a new primary property, whereas option (1) involves no primary properties beyond those that the property dualist must accept to deal with the physical world and with his or her own states of consciousness.

        But there is another point to be made here.  For on the approach that postulates a new primary property, it turns out that that property enters into a very large number of laws, since the number of different sensuous properties that one is aware of is very large, and there will be a separate law for each, linking it to those neural states of affairs that give rise to experiences with that property.  But then what is the explanation of the fact that the new primary property happens to enter into all of these laws?  If no explanation is forthcoming, one has an enormous accident, which is surely very improbable.

        One could, of course, avoid this massive coincidence by postulating more than one new primary property.  One could even postulate one new primary property for each psychophysical law.  But this move, in addition to adding force to the earlier, simplicity objection, doesn't entirely dispose of the coincidence objection.  For now, rather than its being the case that I happen to have a single, new, primary property, it's instead the case that I happen to have a very large number of new, primary properties.  So it seems that one grand accident has been replaced by another.

Conclusions

1.  If simplicity is a legitimate criterion to appeal to in choosing between competing hypotheses, then the defender of the argument from analogy would seem justified in maintaining that the psycho-physical laws that he or she finds are true in the case of his or her own body also apply to other physical objects that resemble that body in the relevant respects. To assume instead that they apply to one's own body, but not to objects that resemble it, necessitates the postulation of a new property to explain why the laws in question are true of one body but not others.

2.  The postulation of a new, primary property gives rise to massive coincidences, since this property enters into a very large number of laws.  This coincidence is avoided if one holds that psychophysical laws do not involve any new, primary property.

5.6  Objection 6:  The Argument from Analogy Presupposes Detailed Neurophysiological Knowledge

        This objection arises very naturally out of the line of thought that led to the conclusion that the crucial physical states that are being correlated with mental states are physiological states, not behavioral ones.  For recall that the argument in support of that conclusion appealed to the idea that if there is a causal connection between mental states and behavioral states, that causal connection must be via physiological states - and, in particular, states of the brain must enter into that causal connection.  But if that is right, then the crucial connections are those between brain states and mental states, and it would then seem that it is causal laws connecting those two sorts of states that one must establish in one's own case, and then apply to others.  But now one is confronted with the objection that one is not presently in any position to establish such laws.  Doing that will require very careful scientific investigation.

        The thrust of the objection, accordingly, is that even if the argument from analogy can in principle justify beliefs about the existence of other minds, the type of self-observation of one's own body that is needed is such that one cannot presently make such observations.  And so, if the argument from analogy is the only way in which one can justify beliefs about other minds, such beliefs are not presently justified.  So skepticism about other minds is the right view now.

        A way of making this objection even more vivid is to consider the situation, say, at the time of Plato and Aristotle.  Then it was not known that it is the brain that is crucial for mental life.  So how could a person living at that time ever have justified his or her belief in the existence of other minds if the argument from analogy was the only possible route?

        One attempt to answer this objection is to say that what one needs to establish is not the precise laws that are involved, but only that there are some laws or other linking some physical states of one's body with one's mental states.  But this response is open to the objection that unless one has established specific laws, one has no real justification for supposing that other humans - let alone animals belonging to other species - are similar to oneself in the respects that are crucial if the laws in question are to be applicable to them.  If one knew the precise laws, then one could check to see whether other people are in the relevant neurophysiological states.  But in the absence of such knowledge, how can one ever be justified in believing that the brains of others are not merely similar to one's own, but share just those properties that enter into the relevant psychophysical laws?

6.  The Inference to the Best Explanation

        This second approach to the justification of beliefs about the mental states of others can be sketched as follows:

(1)  One observes physical organisms around one that exhibit behavior of a complex and sophisticated sort.

(2)  It is not possible to predict how an organism will behave simply on the basis of information about its present environment: different organisms behave differently in exactly the same situation.  Of course one could do a much better job if one had information about the organism's past environment as well as information about its present environment.  But it seems reasonable to believe that the reason that an organism's past environment affects its present behavior is that its past environment gave rise to certain internal states that have played a role in producing the internal states that the organism is currently in.  And thus it seems reasonable to conclude that the theory which best explains the complex and variable behavior of organisms will be one involving reference to internal states that change over time.

(3)  It is perfectly possible that a theory that refers only to internal physiological states will someday provide us with the most adequate explanation of the behavior of humans and other animals.  But at present the hypothesis that best explains at least a certain range of behavior is the everyday theory that involves attributing psychological states such as beliefs, preferences, thoughts, sensations, sense experiences, etc. to other organisms.  This hypothesis that other organisms enjoy psychological states allows one to explain and predict the behavior of other organisms in a wide variety of circumstances, and since there is no other theory available that can explain and predict the same range of phenomena, one is justified in accepting the hypothesis, and thus in attributing mental states to other organisms.

Comment

        There is a different version of this inference to the best explanation line of argument.  The other version is concerned not with what may be the best explanation today (but possibly not tomorrow), but with what will ultimately turn out to be the best explanation.  Thus suppose that it were to turn out that there are brain events that cannot be explained and predicted on the basis of information about earlier physical events, and where the failure of prediction could not be explained as due simply to quantum indeterminacy.  Then it could be argued that one was justified in accepting the hypothesis that there are mental events that interact causally with brain events.  So it would be reasonable to believe that there are other minds.

        This latter version of the argument certainly provides a possible justification for belief in other minds.  But can it be used to show that one's present beliefs about other minds are justified?

        It seems to me that it can't.  For one surely does not have good evidence based on observation of brain events of other organisms that events in their brains cannot be explained and predicted on the basis of prior physical events.  But mightn't one establish this indirectly, by arguing against both materialism and epiphenomenalism, thus establishing interactionistic dualism?  The answer is that even if one could successfully carry out this enterprise, what one would have established is that interactionistic dualism is true in one's own case.  To apply it to the case of other organisms in order to establish that events in their brains do not have purely physical causes would be to argue in a circle, since in applying it to other organisms one is assuming that they have minds.

7.  Possible Objections to the Inference to the Best Explanation Approach

        There are three main reasons, I think, why one might doubt whether the inference to the best explanation approach provides the dualist with a completely satisfactory account of the justification of one's beliefs about other minds.

7.1  Difficulty 1: Machines and Paralyzed Persons

        The inference to the best explanation approach, at least as set out above, considers only the behavior of the organisms in question.  Suppose, then, that it turns out that some of the organisms around one, which one has always taken to be humans, are not humans but robots.  If the justification for believing in other minds is that the hypothesis of other minds provides the best available explanation of the complex behavior of organisms, then given that the behavior of the robots would be indistinguishable from that of genuine humans, wouldn't one be forced to conclude that it was just as reasonable to attribute minds to the robots as to humans and other biological organisms?  But does this agree with one's normal intuitions?  Doesn't one normally think that if one of one's friends were to turn out to be a machine, then it would be less likely that he or she had a mind?

        It is possible that what I am claiming to be one's everyday intuition on the matter simply reflects confusion.  But one can at least say that here is an issue that deserves consideration if one wants to opt for an inference to the best explanation account of knowledge of other minds.

        The case of the robot causes problems because one has similarity of behavior conjoined with an absence of physiological resemblance.  Another case which causes difficulty is the opposite one, where one has no behavior (to speak of), but where one does have physiological similarity.  This is the case of a paralyzed person.  One certainly wants to attribute mental states to a paralyzed person.  But since there is no behavior to be explained, the inference to the best explanation approach won't justify the belief in question.

        A possible rejoinder here is to make use of the argument from analogy to deal with the case of a paralyzed person.  For in using the argument from analogy here one will not be extrapolating from a single case, unless one is the only non-paralyzed person in the world.  There are many non-paralyzed persons around, and one can use the inference to the best explanation approach to establish that they have minds, and then use the fact that the paralyzed individual resembles all of the non-paralyzed persons to argue by analogy that the paralyzed person has a mind too.

7.2  Difficulty 2:  Epiphenomenalism and Knowledge of Other Minds

        If the correct justification of one's beliefs about the mental states of others is provided by the inference to the best explanation approach, it follows that if it turns out to be possible to explain all behavior by reference simply to physical states of affairs, one will no longer be justified in ascribing minds to other organisms.  So assuming that one would still be justified in ascribing mental states to oneself, the result is that future scientific advances might well compel one to become a solipsist, and to believe that the only person that exists is oneself.

        This picture of one's knowledge of the mental states of others getting undercut by future advances in neurophysiology is a somewhat disconcerting one.  The argument from analogy has the advantage that it avoids any such situation, since it is enough for the argument from analogy if there are causal relations running from the physical to the mental.  So if it turns out that epiphenomenalism is true, and that physiological events give rise to states of consciousness, but that the latter have no causal effects upon physical events, this will not affect one's knowledge of other minds at all if such knowledge is based upon the argument from analogy.  Whereas if one's knowledge of other minds rests upon the inference to the best explanation, one will cease to have such knowledge if good evidence in support of epiphenomenalism is discovered.

        It's unclear how much force this difficulty has.  The main question is whether epiphenomenalism is itself a coherent position.  Some philosophers would probably reply to this difficulty by arguing that epiphenomenalism is internally inconsistent.  Others might argue that even if epiphenomenalism is a coherent position, we can know that it is not true, and thus the suggestion that our knowledge of other minds, if based upon the inference to the best explanation, might be undermined at some future date, is not a real possibility.

7.3  Difficulty 3: An Unjustifiably Strong Hypothesis

        The fundamental objection to the inference to the best explanation approach, if it is employed to support the claim that there are private states of consciousness that are not one's own, is that the hypothesis one is using to explain the behavior of other organisms is unnecessarily strong.  There are more modest hypotheses that have the same explanatory power with respect to the behavior of organisms.

        Perhaps the simplest way of bringing this out is by comparing dualism and emergent materialism.  Both positions contend that states of consciousness involve the having of emergent properties.  They disagree about whether the emergent properties are physical.  The question then is this.  If what one is interested in is explaining the behavior of organisms, what difference does it make whether one postulates physical properties or nonphysical properties? All that matters is the causal interrelations among the postulated properties, and their causal relations to behavior.  The upshot is that the claim that the properties are nonphysical properties has no explanatory power.  The theory in question will explain behavior just as well when it is stripped of the assumption that the theoretical states in question are dualistic states.

        This is not to say that the inference to the best explanation supports an emergent materialist view of other minds rather than a dualist view.  For, in the first place, the assumption that the properties in question are physical properties also has no explanatory power.  The hypothesis that one is introducing to explain the behavior of other organisms should thus be neutral on the issue of whether the properties are emergent physical properties or dualistic ones.

        But secondly, the assumption that the theoretical states being postulated involve emergent properties is also unjustified, since it does not add anything to the theory's ability to explain the behavior of other organisms.  Again, all that matters is that the theoretical states which one is postulating stand in certain relations to each other and to behavior.  Whether they involve emergent properties or not has no effect on such causal relations, and hence no effect on their explanatory power with respect to behavior.

        While the inference to the best explanation approach justifies one in postulating some internal states to account for the behavior of other organisms, it would not seem to justify particular hypotheses about the nature of the states in question.  The relevant internal states may turn out to be purely physiological.  Consequently, one is not justified in holding, on the basis of the inference to the best explanation, that the states in question involve the properties that one is aware of, introspectively, in the case of one's own experiences.  And so the inference to the best explanation cannot, it would seem, suffice to justify the belief that there are other people who have experiences in the sense of "experiences" in which it is true of oneself that one has experiences.

        To put this in a slightly different way.  There is a commonsense theory of the mental that involves concepts such as experiences, thoughts, beliefs, desires, and the like.  In that commonsense theory, one assigns a certain content to the term "experience" and to the term "thought" according to which those terms pick out states that have properties that one can be immediately aware of.  So that when one is looking at something green under normal conditions, one's experience has a certain quality that one is directly aware of.  As a result, the concept of an experience is not merely the concept of a mental state that stands in certain relations to other states.  It is also the concept of a type of state that can possess certain intrinsic properties.  But now one can compare that commonsense theory of the mental - call it M - with a competing theory - call it M* - which has the same structure as M, but which rather than postulating experiences and thoughts, postulates experiences* and thoughts*, where these are related to other states in the way that experiences and thoughts are, but differ in that it is not postulated that they involve the intrinsic, phenomenalistic qualities that characterize one's own experiences and thoughts.  The point is now that theory M* is a simpler theory than M, but one that explains the behavior of others just as well as M does.  In addition, theory M* is more modest than theory M, since it is entailed by M, but does not entail M.  So M cannot be true without M*'s also being true, while M* can be true even if M is not true.  Accordingly, given that M* explains the behavior of others as well as M, surely it is M* that one should accept, if it is a matter of an inference to the best explanation of the behavior of others.

8.  A Combined Approach: Physiology and Behavior

        In setting out the argument from analogy I focused attention upon the physiological similarity of other organisms to oneself.  In contrast, the inference to the best explanation, as formulated above, was concerned only with the behavior of other organisms.  I think that it is clear that one can get an account that is much more satisfactory than either of the above two accounts if one takes into consideration both the physiological resemblance of other organisms to oneself and the behavior of those organisms.  The resulting account might be classified either as a version of the argument from analogy - it is in fact essentially the version advanced by John Stuart Mill - or as a version of the inference to the best explanation.

        The argument, viewed as a version of the argument from analogy, can be expressed as follows:

(1)  By a combination of observation and introspection I can establish certain causal laws that hold in my own case.  These laws are of two types: (i) Causal laws relating physiological states as causes to states of consciousness as effects; (ii) Causal laws relating mental states as causes to physiological (or behavioral) states as effects.

(2)  I extrapolate these causal laws from my own case to the case of other organisms.

(3)  When I apply the generalized causal laws of type (i) which result to other organisms, I am led to conclude that other organisms also enjoy states of consciousness.

(4)  Since I know that other organisms enjoy states of consciousness, I can apply the generalized causal laws of type (ii) to them, and derive some predictions about what behavior other organisms will exhibit in certain circumstances.

(5)  When I check out these predictions, I find that they are reasonably accurate a large proportion of the time.

(6)  The fact that the application of generalized causal laws of types (i) and (ii) to other organisms leads to generally correct predictions makes it reasonable for me to believe that those generalized causal laws really are true of other organisms, and hence makes it reasonable for me to believe that other organisms also have minds.
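        The prediction-checking procedure in steps (1) through (6) can be schematized as a simple decision procedure.  (This is a purely illustrative sketch: the function names, the toy "law", and the 0.8 accuracy threshold are my own inventions, not part of the argument itself.)

```python
# Illustrative schematization of steps (1)-(6): extrapolate causal laws
# learned in one's own case, derive behavioral predictions for another
# organism, and accept the laws (and hence ascribe a mind) if the
# predictions are accurate a large proportion of the time.
# All names and the 0.8 threshold are invented for illustration.

def supports_other_minds(own_case_laws, observations, threshold=0.8):
    """observations: list of (circumstances, actual_behavior) pairs
    recorded for some other organism."""
    correct = 0
    for circumstances, actual_behavior in observations:
        # Steps (3)-(4): apply the generalized laws to derive a prediction.
        predicted = own_case_laws(circumstances)
        if predicted == actual_behavior:
            correct += 1
    # Steps (5)-(6): if the predictions check out often enough, it is
    # reasonable to believe the laws hold of the other organism too.
    return correct / len(observations) >= threshold

# A toy law: pain-causing stimulation leads to withdrawal behavior.
toy_laws = lambda c: "withdraw" if c == "painful stimulus" else "no response"
obs = [("painful stimulus", "withdraw"),
       ("painful stimulus", "withdraw"),
       ("neutral stimulus", "no response")]
print(supports_other_minds(toy_laws, obs))  # True: all predictions correct
```

The sketch makes vivid what the argument turns on: the warrant for the conclusion is nothing over and above the predictive success of the extrapolated laws.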

        Viewed as a version of the inference to the best explanation, it might run as follows:

(1)  In the case of other organisms, I find that there are certain causal relations that exist between the physical stimulation of the organism and its behavioral responses.

(2)  I find that the same input-output relations exist in the case of my own body.

(3)  I also find, however, that in my own case the input-output relations are derived from more basic causal relations of two types: (i) those expressed by laws connecting physiological states as causes to states of consciousness as effects; (ii) those expressed by laws connecting mental states as causes to physiological (or behavioral) states as effects.

(4)  The question now is: What hypothesis best explains the causal relationships that I find in my own case, and those that I find in the case of others?  The simplest hypothesis would seem to be that the input-output relations which I find both in my own case and in the case of other organisms are derivative in both cases from more basic causal laws of the sort that I know hold in my own case.

(5)  For what would the alternative be?  Since it is surely reasonable to view the input-output relations as derivative, the only alternative is to say that the intervening causal states are different in the case of other organisms than in one's own case.  But if physiological stimulation of other organisms leads to different internal states than it leads to in one's own case, it would seem to be an incredible coincidence that these different internal states lead to the same behavioral states, and hence to the same input-output relations.

(6)  Suggested Conclusion: One is justified in assuming that other organisms enjoy the same sort of intervening internal states that one enjoys oneself, since otherwise one has no explanation of why the same input-output relations should exist in both cases.  The hypothesis that other people have minds in the sense of having the sort of thing that one has oneself is the best explanation of what one observes in one's own case and in the case of others.

9.  An Argument Based upon the Use of Mentalistic Language

9.1  A Formulation of the Argument

        The final account of one's knowledge of the mental states of others that I want to consider centers around the contention that the basis - or at least one possible basis - of one's being justified in ascribing states of consciousness to other organisms is either the fact that

(1) Other organisms use mentalistic language,
or the fact that
(2) Other organisms appear to assert that they have states of consciousness.
        Michael Scriven, in an article entitled "The Compleat Robot: A Prolegomena to Androidology", advances a version of this argument to support the claim that there are conditions under which one would be justified in attributing experiences, sensations, thoughts, feelings, and so on to non-biological machines.  One reason this sort of argument is very interesting is that if it is sound, it provides one with a method of answering the question of other minds in the case of biological organisms that resemble oneself, in the case of biological organisms that do not resemble oneself, and in the case of non-biological entities - such as robots.  In this respect the argument contrasts sharply with the argument from analogy, and with the argument just set out.  For if the argument from analogy is the only way in which one can justify a belief in other minds, the conclusion would seem to be that even if Martians and robots really do have states of consciousness, there will be no way in which one can justify the belief that they do.

        (Scriven's article is available in Dimensions of Mind, edited by Sidney Hook, pages 113-33.  There is also a reference to the argument in question in the addendum to Scriven's article, "The Mechanical Concept of Mind", in Minds and Machines, edited by Alan Ross Anderson.)

        The basic argument here might be set out as follows:

(1)  Consider any object that is capable of producing what appear to be English sentences.  The object might be either some living organism, such as a man or a Martian, or a robot.

(2)  By carefully observing the sentences that the object utters in different contexts, one can determine whether the object employs some term, such as "table", in the same way as one does oneself.  In the case of low level observational terms such as "red", "circular", "hard", "smooth", and the like, the most important consideration will be whether or not the object applies the terms to the same objects as you do.  If its use of the term does in fact agree with yours, then surely you are justified in concluding that the term has the same meaning for it as it does for you.

(3)  But if one can do this in the case of a term such as "table", why can't one equally well do it in the case of psychological terms such as "... is in pain", "...sees an orange after-image", "...is experiencing giddiness", etc.?

(4)  Suppose then that you have determined how the individual in question uses such psychological expressions, and it turns out that it does assign the same meaning to them as you do.

(5)  Then in order to determine whether or not the individual has sensations and other states of consciousness, all you have to do is to ask it!  If its answer is yes, then you are justified in concluding that it does have a mind.

(6)  It might be objected that one can't be sure that the individual's answer is true.  The reply is simply that it is true that one can't be sure.  However, that should not be disturbing here any more than anywhere else:  certainty about contingent matters is never possible; one has to be satisfied with justified beliefs.  And one can have, here as elsewhere, good reasons to believe that what the individual asserts is probably true.  For one may check up on other sorts of assertions that the individual has made - assertions about physical objects - and if one finds that the sentences the individual utters in these other cases, where they can be independently checked, always, or almost always, turn out to be true, then one is surely justified in believing what it says in cases where there is no possibility of independently checking out the assertion.  Or alternatively, even if it is the case that the individual often utters sentences that it doesn't believe, one may have discovered a device that will function as a lie detector for the individual in question, so that one could use that device to decide whether or not it is telling the truth when it says that it enjoys states of consciousness.

Suggested Conclusion

        If there is some individual - either a living organism or a machine - which satisfies the following three conditions, one is justified in believing that the individual has a mind:

(1) The individual uses psychological language in the way that one uses it oneself;

(2) The individual asserts that it has a mind;

(3) Either the individual generally utters only true sentences, or there is some method of detecting when it is making an assertion that it doesn't believe, and this method indicates that it is not lying when it asserts that it has a mind.
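        The three conditions amount to a single conjunctive test, which can be set out schematically as follows.  (A purely illustrative sketch: the dictionary fields are invented labels for the three conditions, not anything drawn from Scriven's own formulation.)

```python
# Illustrative sketch of the suggested conclusion: one is justified in
# ascribing a mind just in case conditions (1)-(3) are all satisfied.
# The field names are invented labels for the three conditions.

def justified_in_ascribing_mind(individual):
    # Condition (1): uses psychological language as one uses it oneself.
    uses_language_as_we_do = individual["uses_psychological_language_correctly"]
    # Condition (2): asserts that it has a mind.
    claims_a_mind = individual["asserts_it_has_a_mind"]
    # Condition (3): general truthfulness, or an independent lie-detection
    # method that clears this particular assertion.
    truthful = (individual["generally_truthful"]
                or individual.get("lie_detector_clears_assertion", False))
    return uses_language_as_we_do and claims_a_mind and truthful

robot = {"uses_psychological_language_correctly": True,
         "asserts_it_has_a_mind": True,
         "generally_truthful": True}
print(justified_in_ascribing_mind(robot))  # True
```

Setting the test out this way makes clear that nothing in it appeals to physiological resemblance, which is why the argument, if sound, extends to Martians and machines.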

Comment:  Comparison with Scriven's Formulation

        Scriven's statement of the argument differs somewhat from that just given.  Scriven imagines that rather than simply encountering an individual who already speaks a language, one takes an individual that doesn't speak a language, but which has the ability to do so, and proceeds to teach it a language.  Scriven thinks that one must, however, be very careful about how one teaches it terms that refer to experiences, feelings, and the like.  He suggests that what one ought to do is to teach the individual both phenomenalistic talk about states of consciousness, and the behavioristic (or functionalistic/topic neutral) counterpart of the phenomenalistic terms.  In doing this, one should leave it an open question whether the phenomenalistic expressions can be truly applied to the individual learning the language.  In teaching it the use of sensation terms one applies them only to humans.  Only the behavioristic (or topic neutral) counterparts will be applied to the individual who is learning the language. Then, once the individual in question has acquired a language in this way, the idea is to ask it whether it would apply sensation terms - construed phenomenalistically - to itself.

9.2  Possible Objections?

1.  One possible objection is as follows.  How can a person understand a term that refers (in a rigid way) to an emergent property if he has not himself at some time experienced the property in question?  This is just the blind man point: a person cannot understand the expression "sensation of redness" (in the appropriate sense) unless he has enjoyed such a sensation.  So one can't first establish that a person understands the relevant expressions - construed phenomenalistically - and then go on to determine whether they apply, or at least have applied at some time, to him.

This point doesn't seem crucial, however.  For the conclusion that one must simultaneously establish both that a person understands a given sensation-expression and that he has had a sensation of that sort does not undermine the argument.  At most, it would seem, it merely makes it necessary to restate it slightly.

2.  Another possible objection is this:  "How does one know that the person is really interpreting the sensation-expressions phenomenalistically, rather than behavioristically, or functionalistically/topic neutrally?"

A possible response is that one can ask it how it is interpreting the expressions in question.  Suppose that it claims that the meaning it assigns to "sensation of redness" is such that a sensation of redness is not the sort of thing that is publicly observable, even in principle.  Wouldn't that be sufficient reason for concluding that it was assigning the right sort of meaning to the expressions?

        One might object to this, on the grounds that the meaning of an expression is not determined by what a person claims it is; it is determined by how it is used.  The problem is that if the criterion of application of some expression is some private property, there is no possibility - in the present context, where one has not yet justified the belief that there is another mind there - of determining how the expression is being used by telling whether it is applied only when a certain private sensation exists.  But can't one argue that even though an individual's intuitions about how he, she, or it is using a given term are not decisive, still it is reasonable to assign some weight to such intuitions?

        If the entity has been programmed to say this sort of thing, then it would seem reasonable not to assign any weight to the individual's "intuitions".  But the case where the entity has not been programmed in this regard does not seem as clear.  For in the latter case, mightn't it be argued that the best explanation of the entity's claim, for example, that the meaning it assigns to "sensation of redness" is such that a sensation of redness is not the sort of thing that is publicly observable, even in principle, is that the entity says this sort of thing because it has certain experiences of which it is directly aware?

Concluding Comment

        This argument is not, it seems to me, easy to evaluate.  But one point worth keeping in mind is that if it is sound, it enables one to deal with cases - such as Martians who are physiologically very different from us, and electronic machines, which are even more different - that otherwise, given a phenomenalistic view of states of consciousness, seem impossible to handle.