All rights reserved
A DIRECT REALIST ACCOUNT OF PERCEPTUAL AWARENESS
A dissertation submitted to the Graduate School-New Brunswick Rutgers, the State University of New Jersey
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
Graduate Program in Philosophy
written under the direction of
Professor Peter D. Klein
and approved by
New Brunswick, New Jersey
ABSTRACT OF THE DISSERTATION
A Direct Realist Account of Perceptual Awareness
by MICHAEL HUEMER
Dissertation Director: Peter D. Klein
The dissertation presents a direct realist account of our awareness of the external world, embodying two main theses: First, in normal cases, sensory experiences constitute direct awareness of the external world; second, certain beliefs about the external world are prima facie justified by virtue of being based on sensory experiences.
In the first chapter, I explain the concept of awareness and the distinction between direct and indirect awareness. Direct awareness of x is understood as awareness of x which is not based on awareness of anything else, and the "based on" relation is understood as a particular way in which one state of awareness can be caused by another state of awareness when the contents of the two states are logically related.
In chapter 2, I defend a traditional account of perception according to which perceiving can be analyzed into three components: (a) the occurrence of a purely internal mental state, different from belief, called a "perceptual experience", (b) the existence of an external object roughly satisfying the content of the experience, and (c) an appropriate causal connection between the object and the experience.
In chapter 3, I examine the nature of sensory experiences, distinguishing three important aspects of them: their qualia, their representational contents, and their "forcefulness." The content of experience is further divided into conceptual content and non-conceptual content. The attribute of experience by which the objects of experience seem to the subject to be actually present is called "forcefulness."
In chapter 4, I consider how perception leads to knowledge of the external world. I defend an epistemological principle according to which the circumstance of its seeming to S as if P constitutes prima facie justification for S to believe that P. As a result, we are prima facie justified in believing propositions that are entailed by the contents of our experiences.
In the fifth and final chapter, I show how the direct realist
theory developed avoids three important kinds of philosophical
skepticism: first, Hume's external world skepticism; second, the
regress argument of Agrippa; and third, the brain-in-a-vat argument.
I would like to thank the members of my committee, Peter Klein, Brian McLaughlin, Richard Foley, and Richard Fumerton, for their comments and advice on the work in progress. In addition, I would like to recognize the supererogatory efforts made by Professors Klein and McLaughlin towards enabling me to graduate this year. None of these philosophers, of course, is responsible for any mistakes contained in the following work.
TABLE OF CONTENTS
1. The notion of direct awareness
1.1. Awareness in general
1.2. "Awareness" and "knowledge"
1.3. Epistemic dependence
1.4. Direct vs. indirect realism
2. Perception as awareness
2.1. The traditional analysis of perception
2.2. The radical intellectualist account
2.3. Ultra-direct realism
2.4. The content-satisfaction condition
3. The nature of perceptual experience
3.1. Sensory qualia
3.2. Non-conceptual content
3.3. Conceptual content
3.4. The forcefulness of experience
4. Perceptual knowledge
4.1. The justification of perceptual beliefs
4.2. Defense of appearance conservatism
4.3. Objections
5. Direct realism & skepticism
5.1. Hume's problem
5.2. The regress argument
5.3. The brain-in-a-vat argument
5.3.1. Two contemporary responses
5.3.2. What's wrong with these replies?
5.3.3. The direct realist's response
5.3.4. An objection
In the following pages, I have defended a general theory of
perception that answers what are probably the three most important
philosophical questions about perception: (1) What is perception?
(2) What is it that perception makes us aware of? And (3) how does
perception enable us to gain knowledge of the external world? In
very broad terms, the theory I have put forward can be described as
a version of direct realism -- or, if you like, naive realism.
Start with the first question: What is it to perceive? The act of perceiving something involves three elements: first, there is the occurrence of a certain kind of purely internal, mental state, a 'perceptual experience'; second, there is an external phenomenon that roughly satisfies the content of this state; and third, there is a causal relation between the object and the experience.
That leaves the question of what a perceptual experience is.
A perceptual experience is understood as a kind of mental state
different from belief, but nevertheless having representational
content -- i.e., there is a way that a given perceptual experience
represents the world to be. Perceptual experiences also have an
attribute I call their "forcefulness": when one has a perceptual
experience, the objects of the experience always seem to one to be
actually present (this is different from, for example, imagination
and mere supposition, neither of which is forceful). Both of these are
necessary characteristics of perceptual experience. In addition,
I have answered certain currently much-discussed questions about
perceptual experience, namely, whether perceptual experiences have
'non-conceptual content' and whether they have 'qualia'. In both
cases I have answered in the affirmative (see chapter 3).
Second question: What does perception make us aware of?
Philosophers have traditionally given three answers to this:
(i) Direct realism holds that in perception, we are directly aware of the external world.
(ii) Indirect realism (or "representationalism") holds that in perception, we are directly aware only of certain mind-dependent phenomena (e.g., ideas, sense data, appearings), and we are indirectly aware of external objects.
(iii) Idealism holds that in perception, we are directly aware of
mind-dependent phenomena, and we are not aware of anything else.
Idealism is generally regarded nowadays as too prima facie implausible to be considered, so direct and indirect realism remain as the two main alternatives.
Indirect realism has historically been the dominant view among
philosophers, but it really is quite an incredible idea. It seems
to me that I am right now seeing a table in front of me, and thereby
enjoying awareness of that table. This thing of which I find myself
aware, I think, has four sides and a brown surface; it is physically
in front of me; it exists independently of me; I can sit on it and
it will support my weight. It certainly is no mere 'idea' or
'appearance'. Furthermore, there is, as far as I know, no other
relevant thing of which I am enjoying awareness in seeing the table.
There certainly does not seem to be, in addition to the real table,
a second 'table' that exists only in my mind and that I'm also
perceiving, nor is there some thing of a kind radically different
from a table that I'm perceiving or otherwise apprehending when I
see the table. To repeat, it seems obvious that, in seeing the
table, there is exactly one object that I'm aware of, and that
object is a table. Yet it is this seemingly obvious thesis that
Hume, one of the early representationalists, claimed would be "soon
destroyed by the slightest philosophy":
The table which we see seems to diminish as we remove farther
from it: but the real table, which exists independent of us,
suffers no alteration: it was, therefore, nothing but its
image which was present to the mind. These are the obvious
dictates of reason, and no man who reflects ever doubted that
the existences which we consider when we say, this house and
that tree, are nothing but perceptions in the mind, and
fleeting copies or representations of other existences, which
remain uniform and independent.(1)
Admittedly, Hume was one of the more incautious of indirect realists -- surely the indirect realist should not claim that the expression "this tree" typically refers to a perception, rather than to a tree. It is, nevertheless, quite incredible that this thing (the one that I'm now directly aware of, as I (seemingly) view my table) is a mental state or mental object, rather than a table. The story becomes perhaps still more incredible when we hear that I have throughout my life been constantly mistaking mental phenomena for physical objects, and that I have perhaps never once perceived anything without making that mistake. Furthermore, this view generates a problem with respect to our third question, that of how perception enables us to gain knowledge of the external world: if we are only ever directly aware of ideas, how do we know that anything other than ideas exists?
Fortunately, Hume's argument is invalid and his conclusion mistaken. The argument fails because Hume overlooks the possibility that the table we see appears to get smaller but does not actually get smaller -- thus, the real table may, after all, be one and the same with the table we see. In fact, the table we see appears precisely the way one would expect the table to appear, assuming that we did perceive a real table. This tends to confirm that it is the real table we see.(2)
Moreover, we can see that Hume's conclusion is mistaken and
that perception is direct awareness of the external world, by
turning to the analysis of "direct awareness", while keeping in mind
the analysis of perception we have provided. To be aware of a thing
is to have at least a roughly accurate representation of it, where
the accuracy of the representation is not merely accidental (not
merely a matter of chance). To be directly aware of something is
to be aware of it, where one's awareness of it is not based on one's
awareness of something else (this notion is discussed more fully in
chapter 1). Now, in perception, one has a perceptual experience
that represents there to be something having certain physical
characteristics. For example, my current visual experience
represents the table in front of me as being brown and rectangular.
My tactile experience represents it as being hard and smooth. Etc.
The contents of these experiences are at least roughly satisfied by
the real, physical table -- the real table has those characteristics
-- and not by anything else. There is no other brown, rectangular,
etc., object in the offing. Nor do I have any (relevant) second
representational state with another content. (Of course, I might
happen to have a second representational state at the same time, but
that's beside the point -- no such state is required in order for me
to be perceiving the table.) Finally, since my perceptual
experience is caused in the normal way by the real table, the
accuracy of the representation is non-accidental. So my perceptual
experience constitutes direct awareness of the real table, and it
does not constitute awareness of any mental item.
Our third main question was, How does perception enable us to acquire knowledge of the external world? Our perceptual experiences cause us to accept certain beliefs about the external world -- these 'perceptual beliefs' are based on perceptual experiences. What makes it epistemically rational to accept such beliefs?
In chapter 4 I argue for the following epistemological principle: if it seems to S as if P, then S is prima facie justified in believing that P. Thus, the forcefulness of perceptual experience makes our perceptual beliefs justified. I argue that this general epistemological principle underlies our epistemic practices in a very fundamental way -- that in fact there is, excluding such epistemically irrational practices as self-deception or religious faith, no other way of forming beliefs than accepting what seems to oneself to be the case. This principle also underlies our ways of evaluating arguments, or even of identifying what counts as an argument, so that it is impossible rationally to argue against the principle.
I show in the last chapter how my account of perception and perceptual belief avoids three traditional arguments for philosophical skepticism. The first of these is the infinite regress argument, due to Agrippa. It begins with the premise that a person knows a proposition only if he has a reason for believing it. Furthermore, the reason must itself be something he knows to be true, so there will have to be a reason for the reason, and a reason for the reason for the reason, and so on. But no person actually has an infinitely long chain of reasons to support any of his beliefs, and it is not permissible for the chain of reasons to circle back on itself, so ultimately all our beliefs must rest on arbitrary assumptions (claims for which we have no reasons). Hence, all our beliefs are unjustified.
My account blocks the threatened regress of reasons. The series ends when it hits perceptual experience: our perceptual beliefs are rendered justified by our perceptual experiences; hence, they are not mere 'assumptions'. However, since perceptual experiences are not beliefs, it cannot be sensibly asked what one's 'reason for' a perceptual experience is. Perceptual experiences (in normal cases) are indeed a kind of awareness, but they are never a form of knowledge (because they are not beliefs), so the first premise of the regress argument does not apply to them.
The second form of skepticism that my account blocks is one that was mentioned above, in connection with Hume's representationalism -- indirect realist theories face a problem of explaining how one can get from premises about ideas (or sense data, or whatever) to conclusions about the physical world. Hume himself argued that there was no rational way to do it. My direct realist theory has the advantage of avoiding this problem altogether, since certain propositions about the physical world are prima facie justified and hence do not need to be supported with argument.
Third, I consider the brain-in-a-vat argument. In this
argument, one imagines a situation in which scientists have removed
a brain from its body, keeping it alive in a vat of nutrients. They
insert tiny electrodes into the brain, cleverly stimulating the
sensory cortex of the brain in precisely the patterns in which the
sensory cortex of a brain is normally stimulated when a person is
perceiving and interacting with the world. In such a scenario, the
brain would undergo exactly the same kind of experiences that a
person undergoes during normal life -- in fact, the same kind of
experience that you're having right now. This brain would have no
way of knowing that it was merely a brain in a vat; everything would
seem normal. And this raises the question, How do you know that
you're not a brain in a vat, right now? The skeptic will argue
something like this:
1. If you were a brain in a vat, you would be having just the sort of sensory experiences you're having now.
2. Therefore, your sensory experiences are not evidence that you're not a brain in a vat. (from 1)
3. Your sensory experiences are the only evidence you have for claims about the external world.
4. Therefore, you have no evidence that you're not a brain in a vat. (from 2,3)
5. Therefore, you don't know that you're not a brain in a vat. (from 4)
6. Therefore, you don't know that you have a body, that you're
sitting in this room, etc. (from 5)
The indirect realist should be worried by this argument. On his account, what we are directly aware of is merely what sort of experiences we are having, and it is on that evidence (i.e., that we're having such-and-such experiences) that we must try to build our knowledge of the external world. The skeptic is right to argue that the occurrence of these experiences is no evidence against the brain-in-a-vat hypothesis, since those experiences would occur if the brain-in-a-vat hypothesis were true. But on the direct realist's account, what we are directly aware of -- hence, what we may consider as our available 'evidence' -- is the objects of our perceptual experiences, not our experiences themselves. This may seem like a subtle distinction, but it is of the last importance here. The objects of our perceptual experiences, for the direct realist, are external phenomena. Hence, (3) is clearly false. I do have evidence, other than my sensory experiences, relevant to claims about the external world -- namely, I have the actual physical objects and events that I perceive as evidence for such claims. And taking this into account, (4) is certainly false. Among the evidence I have, for example, is the presence of my two hands. This evidence verifies that I am not a brain in a vat, since a mere brain in a vat has no hands.
This sort of refutation of the brain-in-a-vat hypothesis may seem to beg the question; however, it begs the question if and only if ruling out the brain-in-a-vat scenario is a precondition on knowing that one has hands, and that is not a precondition under a direct realist form of foundationalism. As I show in chapter 5, this refutation of the BIV scenario begs the question if and only if one assumes an indirect realist theory of perceptual knowledge; otherwise, the refutation succeeds.
We can see that Thomas Reid's remarks concerning indirect
realism are as appropriate today as they were when he wrote them in
response to the likes of Locke, Hume, and Descartes (imagine "sense
data" substituted for "ideas"):
We shall afterwards examine this system of ideas, and
endeavour to make it appear, that no solid proof has ever been
advanced of the existence of ideas; that they are a mere
fiction and hypothesis ... and that this hypothesis of ideas
or images of things in the mind, or in the sensorium, is the
parent of those many paradoxes so shocking to common sense,
and of that scepticism which disgrace our philosophy of the
mind, and have brought upon it the ridicule and contempt of
sensible men.
My theory of perception, I think, vindicates common sense -- both with respect to the conviction that we are directly aware of physical things when we perceive, rather than ideas or some such, and with respect to the conviction that we know facts about the physical world as a result of perception. And this is among the chief advantages which I would claim for it.
1. THE NOTION OF DIRECT AWARENESS
Stated simply, the thesis of direct realism is that in perception, we have direct awareness of (some parts or aspects of) the external world. The task of the present chapter is to provide an interpretation of this thesis and in particular of the notion of "direct awareness."
Take the easy part first: the "external world" is what exists independent of the mind. Things can be external to one mind but not external to another mind: Boris Yeltsin's beliefs are external for me, but they are not external for Boris Yeltsin, because Yeltsin's beliefs are metaphysically independent of my mind, but they are not metaphysically independent of his mind. In general, what is external for S is what could (metaphysically) exist while S's mind did not. And we can call something "external" without qualification if it is external for everyone. There can be external objects (such as planets and sofas), in addition to external events, external states of affairs, external properties, and so on (fill in the "and so on" with whatever general sorts of things you believe exist). So what the direct realist holds is that there are some things of this very broad kind -- either external objects, or external events, or external states of affairs, etc. -- of which we are directly aware in perception.
The more difficult questions are, first, what is awareness,
and second, what distinguishes direct awareness from indirect
awareness. I won't attempt to provide exact definitions, in terms
of necessary and sufficient conditions, on either of these counts,
but some characterization of these notions short of that can
nevertheless be illuminating.
1.1. Awareness in general
As I use the term, awareness is always awareness of something. And in my view, all of the general metaphysical categories of things that exist are kinds of things of which it is possible to be aware. That is, one can have awareness of objects ("substances" in the metaphysicians' sense), events, universals (if universals exist), tropes (if they exist), facts (if they exist), and so on. Of course, there might be some more specific classes of things of which we couldn't become aware for various non-philosophical reasons (e.g. parallel universes of which we can't become aware due to their inability to interact with our universe). The existence of such objects must, by the nature of the case, remain at best a matter of speculation.
Here are some examples of things that I'm presently aware of: There's a coffee cup on my desk, which I can see, and as a result, I am aware of the coffee cup (a substance). I am also aware of the color of the cup (a trope). If I pick up the cup, I can become aware of its motion (an event). I'm aware of a number of facts about the cup -- I'm aware that it's green (aware of the fact that it's green), I'm aware that it's a cup, and so on. I might also enjoy awareness of more exotic objects, if they exist. If universals exist, then I might have episodes of 'grasping' them, which would be episodes of awareness of the universals, though obviously a different kind of awareness. The same should be said for other abstract objects, such as sets, or propositions, or numbers.
Not all of my awareness is as direct as these examples, of course. I'm also aware of the fact that there are over a billion people in China (and so I am, albeit somewhat tenuously, aware of the people in China themselves), I'm aware of some of the properties of electrons, and I'm aware of the fact that the square on the hypotenuse of any right triangle equals the sum of the squares on the other two sides. These are all certainly cases of indirect awareness, though what that means we will have to discuss further below.
These examples also illustrate that there are different ways of being aware of something -- I can be aware of something by virtue of perceiving it, by virtue of intellectually grasping it (if it is an abstract object), or by virtue of making certain inferences, among other ways.
What can we say about the general nature of this phenomenon --
are there some things that all of these paradigmatic instances of
awareness have in common? I think that in general, we can find the
following three elements in any episode of awareness, namely, when
S is aware of X,
(i) S has a certain kind of intentional (content-bearing) mental state;
(ii) X exists and at least roughly satisfies the content of the state; and
(iii) there is some kind of appropriate connection between X and the
mental state, making it not merely accidental that S is enjoying a
veridical mental state.
The intentional mental state we can call the state of awareness (of course, this means that in some cases, the state of awareness will be a state that might not have been awareness, if the other conditions hadn't been satisfied). The thing that S is aware of we can call the object of awareness, or the object of that mental state. And we can say a mental state is "veridical" if it has an (existent) object that satisfies its content, and "unveridical" otherwise. So all states of awareness are veridical (this is to say that "aware" is a success term), although there are some mental states that are otherwise like states of awareness but for being unveridical. A child who believes in Santa Claus is in a mental state that, to him, probably seems like being aware of Santa Claus. But of course, no one is actually aware of Santa Claus, because there is no such person to be aware of.
Notice that as a consequence of this characterization of awareness, as a mental state related in a certain way to its object, "aware of _______" (where the blank is to be filled in with some referring expression) is an extensional context (for if X stands in some relation to S, then X exists, and if X stands in some relation to S and X = Y, then Y stands in that relation to S). So if I'm aware of this coffee cup and this coffee cup is the 5000th plastic product produced at the Rubbermaid factory in Tacoma, then I'm aware of the 5000th plastic product produced at the Rubbermaid factory in Tacoma, even though I'm not aware that it is the 5000th plastic product produced at the Rubbermaid factory in Tacoma.
Two sorts of apparent counter-examples might be urged against this point and therefore against viewing awareness as a genuine relation between object and subject. The first sort would involve "awareness that ..." Let us suppose that this coffee cup is indeed the 5000th product of the Rubbermaid factory, etc., although I do not know this. Then I am aware that I am drinking from this cup, but I am not aware that I am drinking from the 5000th product of the Rubbermaid factory, etc. Does this show that "aware ..." creates an intensional context? No. What extensionality requires is that if one is aware of X, one is aware of Y whenever Y is identical with X. Now, being aware that I am drinking from this cup is being aware of the fact that I am drinking from this cup. But the fact that I am drinking from this cup is not identical with the fact that I am drinking from the 5000th product of the Rubbermaid factory, etc. So the move from "I am aware that I am drinking from this cup" to "I am aware that I am drinking from the 5000th product of the Rubbermaid factory, etc." does not represent a genuine substitution of identicals in the relevant sense. That is why the move is illegitimate. "aware of the fact that _______ is F" is intensional, even though "aware of _______" is extensional, because substitution of co-referring expressions into the context "the fact that _______ is F" does not guarantee reference to the same fact.
Another, possibly more convincing kind of case involves awareness of properties. Let's suppose that colors are spectral reflectance distributions, so that the color of an object is always identical with its spectral reflectance distribution. Now, I am aware of the color of this cup (the shade of green that it has). But we should at least hesitate to say that I am aware of its spectral reflectance distribution. At least, that doesn't seem like something that perception makes me aware of, even though perception does make me aware of the color of the cup. Does this show that "aware of _______" is an intensional context?
We have to be careful about the phrase "aware of the cup's spectral reflectance distribution." One natural reading of that is that it means aware of which spectral reflectance distribution the cup has, i.e. aware that it has such-and-such spectral reflectance distribution. (Compare: "I know Bill's phone number" means that I know which phone number Bill has, i.e. I know that he has phone number X, for some X.) This would be awareness of a fact. On this reading, I think it is clear that we do not (except perhaps for some scientists) enjoy awareness of the spectral reflectance distributions of the things around us. That is, we do not know what spectral reflectance distributions they have. And certainly we do not know this directly by perception. I think it is also clear that on this reading, we do normally enjoy awareness of the colors of things -- that is, we know which colors things have. But again, this is because, even if colors are identical with spectral reflectance distributions, the fact that an object has a certain color need not be identical with the fact that it has the corresponding spectral reflectance distribution. Notice that, on this reading of "I am aware of the cup's color," "the cup's color" is not being used to directly identify the object of awareness (despite surface appearances). Rather, it is being used to identify a certain determinable, about which the assertion is that I am aware of which determinate value falling under that determinable characterizes the cup. A similar case is "awareness whether": If I say, "I am aware of whether it has rained," I am not saying that there is an object called "whether it has rained" such that I'm aware of that object. Rather, I'm saying that I'm either aware that it has rained or aware that it has not.
The other reading of "aware of the cup's color" is that it is being used to attribute awareness of a trope (a property-instance), so that "the cup's color" really does refer to a certain entity of which I have awareness. But on this reading, I think it is much less clear that I don't have awareness of the cup's spectral reflectance distribution. If that's what colors are, then that is in fact what my visual system is detecting, and making me aware of. If one still wants to insist that I am not aware of objects' spectral reflectance distributions, in the sense of those particular property instances, then one should just reject this particular reductionist account of color.
That's enough to say about condition (ii). It will become clearer later, when we discuss perception in particular, why I say "roughly satisfies" (one can perceive a thing, and thereby be aware of it, while the thing does not exactly satisfy the content of one's perceptual experience, but not if the thing's character is radically mismatched with the content of one's experience).
The qualification, "a certain kind of," in (i) calls for some comment. Not just any intentional mental state is a candidate for awareness. Some kinds of intentional mental states cannot in principle be states of awareness, for reasons other than failure to satisfy conditions (ii) and (iii). For example, a state of believing that P is a candidate for constituting awareness of the fact that P. But a state of entertaining or wondering whether P is not even a candidate for constituting the awareness of the fact that P, regardless of whether it satisfies the other conditions -- a wondering-whether-P is an intentional mental state, and it might have an object that satisfies its content (namely, there might exist the fact that P), and it might even be appropriately connected with the fact that satisfies its content (depending on what kind of connection one requires -- at least there could be a reliable mechanism that leads to wondering-whether-P whenever P holds), but it still wouldn't be the awareness that P. And similarly, a state of imagining X is not a candidate for being the awareness of X, even if X exists, satisfies the content of the imagining, and is appropriately connected with the imagining (so it is non-accidental that I only imagine things that exist). If I am imagining Margaret Thatcher, I am not thereby aware of her. This is not to say, of course, that I may not be aware of her at the time (indeed, plausibly it is a precondition on my imagining her in particular that I be aware of her) -- I may indeed know that I am imagining a person who exists, and so be aware of Mrs. Thatcher, but my imagining cannot constitute my awareness of her. My actual awareness is constituted by the various (true, justified, etc.) beliefs I have about her (or maybe just the belief that she exists), that I bring to mind as I form an image of her.
The only way I have of describing the relevant characteristic of mental states that qualifies them as candidates for being awarenesses -- the characteristic that both beliefs and perceptual experiences have but imaginings and wonderings-whether lack -- is a metaphor, but I think it nevertheless constitutes some clarification of the idea. It is this: certain mental states purport to represent reality. A belief 'purports' that reality satisfies its content, whereas a mere wondering does not. Again, we might say that a belief is 'assertive,' while a wondering-whether is 'neutral.' Likewise, perceptual experience is assertive (it purports to represent reality), while mere imaginings are not. But these metaphors also immediately invite certain misunderstandings that we must explicitly disavow. The metaphor of a belief's making some purport compares a belief to a person who makes an assertion or perhaps to the assertion that a person makes, and this immediately suggests the image of a speaker, a listener, and an utterance the speaker makes, all distinct from one another. But of course there are no such distinctions to be made with respect to beliefs -- there is no distinction between the 'assertion' that a belief makes and the belief itself, and there is no distinction analogous to the speaker/listener distinction. Because of the former point, it might be less misleading to say not that a belief makes a certain purport, but that the believer is purporting that reality is a certain way in having the belief, and likewise that I am purporting that reality is a certain way in having perceptual experience (which is not to say that in having perceptual experience I am believing that reality is a certain way; belief and perceptual experience are two different 'assertive' states). 
The only point of the metaphor is really this: there is a distinction between assertions and other, non-assertive speech acts (such as questions), and this is analogous to the distinction between beliefs and non-assertive propositional attitudes (such as wonderings-whether), and to the distinction between perceptual experiences and other non-assertive mental representations of particulars (namely, imaginings). The assertiveness of perceptual experience will call for further discussion below when we come to the analysis of perceptual experience.
We now turn to condition (iii), the connection between a state
of awareness and its object. The need for this condition is shown
by examples like the following:
(a) The case of veridical hallucination: suppose that excessive
doses of LSD are causing me to hallucinate a spider crawling on my
desk. As it happens, there is a spider crawling on my desk, but
that isn't what is causing it to look to me as if there is a spider.
It's the abnormal drugs in my brain that are causing me to have this
experience (so the experience would be the same even if there
weren't any spider). I think we have to allow the possibility of
such cases -- whatever abnormal brain processes cause hallucinations
could be going on and causing perception-like experiences while, by
coincidence, there also existed in reality an object of the kind
that the subject is hallucinating. In this case, I would not be
aware of the spider on the desk (nor would I be seeing it), even
though the spider would be satisfying the content of my experience.
(b) The lucky guess: suppose a gambler standing at the roulette
wheel just 'feels' that the ball will land on black. He bets a lot
of money on it, saying, "I just know it will land on black." It is
generally agreed that this does not constitute his genuinely knowing
that the ball will land on black (that is, assuming that he does not
really possess psychic powers), and likewise it does not constitute
his being aware of the fact that the ball will land on black. Of
course, the gambler might get lucky -- the ball might in fact land
on black. But he still was not aware that it was going to; he
merely guessed and got lucky.
(c) The fortuitous leap of faith: Suppose that a number of people
have come to believe, by a leap of faith, in an invisible unicorn
that roams the surface of Mars. That is, these people have no
cogent evidence of the existence of such a being, and have never had
any contact with it, but they have chosen to believe in it by a
sheer act of will (you may object to the notion of choosing to
believe, but certainly there is some phenomenon called taking a
'leap of faith,' evidenced by some religious beliefs, so assume that
that phenomenon is going on). It might happen, as chance would have
it, that there really is an invisible unicorn roaming the surface
of Mars. Still, their belief by virtue of faith does not constitute
awareness of the unicorn, nor awareness of the fact that some such
being exists.
I assume that the reader has the same intuitions about these cases. These sorts of cases not only illustrate the need for condition (iii), but also suggest some ways of interpreting it, i.e. some appropriate kinds of 'connections.' In the first case, the natural analysis is that the veridicality of the experience is 'accidental,' or merely a matter of chance, because the thing that satisfies the content of the experience (the spider) does not cause the experience.
The second case differs in that a belief is involved. However, if one thinks of the gambler's feeling as an experience similar to perceptual experience, then it can seem that this case, too, is a case of veridical hallucination, in which case the causal analysis may seem appropriate here too -- or, better yet, an analysis in terms of reliability. The reliability analysis says that it is accidental that the gambler's belief is true because the sorts of feelings that he is relying on are not generally reliable. The reason I say the reliability analysis may be better than a causal analysis is that, since the gambler's belief is about the future, the thing that makes it true could not possibly cause the feeling that he has; yet we seem to have some sort of idea of what it would be to possess psychic powers in the form of precognition, such that one has to stipulate (in order to get the desired intuitive verdict) that the gambler in case (b) does not possess such powers. What it would be to possess precognition might be to possess some faculty that is reliable in producing the feeling that X is going to happen only when X is something that is really going to happen. The lucky gambler's problem is that he doesn't have such a faculty. This is why the feeling that the ball is going to land on black does not constitute awareness that the ball will land on black. The explanation of why the gambler's belief that the ball will land on black does not count as a state of awareness may simply be that his belief is based on the feeling, and the feeling is not a state of awareness.
Regardless of whether one views the gambler's feeling as analogous to a hallucination and accepts the above analysis, there is in addition, at least, a second reason why the gambler's belief is not a state of awareness. This is that it is epistemically irresponsible for the gambler to hold this belief (not to mention imprudent to bet on it -- and we regard it as foolish to bet on it because it is foolish to hold the belief). There is no reason, or no adequate reason, to think that the ball is going to land on black, and in the context of background knowledge that is available to any mentally competent adult in our society, it is highly unlikely that a given individual's feeling that a ball on a roulette wheel is going to land on black constitutes an extra-sensory perception. (Among other things, if ESP did exist, it would probably have been scientifically validated by now.) Because there is no good reason to think that the ball will land on black, it is a matter of chance that the gambler gets it right.
Case (c) lends itself best to a justification- or epistemic-responsibility-based analysis. That is, since the believers in this case clearly have no epistemic justification for thinking that there is an invisible unicorn on Mars, if they turn out to be right, then they're just lucky. But this case could also be analyzed in terms of reliability -- leaps of faith are not, in general, a reliable way of getting the truth.
One could also, at least in cases (a) and (c), appeal to a counterfactual analysis: what makes it accidental that my perceptual experience in case (a) is veridical is that I would have had the experience even if there were no spider on my desk, and what makes the truth of the beliefs in case (c) accidental is that the subjects would have held those beliefs even if there were no invisible unicorn on Mars. (Case (b) is more problematic for this approach, since to apply this approach to case (b) would require one to evaluate a backtracking counterfactual.)
So we've seen four apparent ways of failing to satisfy condition (iii) on awareness: an intentional mental state can be accidentally veridical by reason of lacking justification (if the state is a belief), by lacking a causal connection with its object, by being unreliable (in the sense that states of this kind, or states produced in this manner, do not generally tend to be veridical), or by being such that the state would have occurred regardless of whether its content had been satisfied. I am not going to attempt to say which of these accounts is the correct account of the non-accidentality condition for awareness in general. That project, of course, is the generalized Gettier problem (generalized because it is the problem for awareness rather than just knowledge). I leave it open that different kinds of awareness might satisfy the non-accidentality condition in different ways, and I also leave it open that more than one of these versions of the condition might apply simultaneously. To illustrate what I mean by this, consider one of the things that I said I was aware of earlier. I am aware of the fact that there are over a billion people living in China. I am aware of this partly in virtue of my believing that there are over a billion people living in China. It is particularly plausible in this case that my belief, to constitute a state of awareness, needs to be epistemically justified (I need to have adequate reason for believing that there are over a billion people in China). Perhaps it also needs to be formed by a reliable method. Perhaps it also needs to be caused by the fact that there are over a billion people living in China. Or perhaps its just having one of these characteristics is sufficient (maybe there is a specific one that it needs to have, or maybe any of them will do). About all of that I remain noncommittal.
What I am interested in is two particular species of
awareness: our episodes of perceiving our environment, and our
perceptual knowledge of our environment -- that is to say, the
awareness of our environment that, I claim, is constituted by our
perceptual experiences, and the awareness of certain facts about our
environment that is constituted by our perceptual beliefs. I will
say below how I think the first kind of state satisfies the non-accidentality condition so as to qualify as awareness, and with
regard to the latter state, I will explain how it satisfies what I
take to be the most difficult version of the non-accidentality
condition. It is not difficult to account for the reliability of
our perceptual beliefs, nor for their causal connection to the facts
that make them true, nor for their satisfaction of the counterfactual condition. Of the requirements that might plausibly be
imposed for our perceptual beliefs to count as awareness, the one
that it is most difficult to account for their satisfying (the one
for which the rough account is not immediately obvious) is the
requirement that they should be epistemically justified. So I will
explain how I think perceptual beliefs do satisfy this requirement.
1.2. Awareness and knowledge
So far I've skirted the issue of the relationship between awareness and knowledge, though what I've said should make it clear that there is some close relation between knowledge of the kind epistemologists are usually interested in (i.e. propositional knowledge) and awareness. The relation, I maintain, is that of genus to species, with knowledge being the species. We can now see how knowledge exemplifies the general characteristics of awareness that I've identified above. In the case of knowledge, the state of awareness is a belief. The object of awareness is a fact corresponding to (satisfying) the propositional content of the belief -- i.e., if the content of the belief is that P, then the object of awareness is the fact that P. For a belief to be true is just for there to exist a fact corresponding to it in this sense, so my condition (ii) on awareness, as applied to belief, gives the truth condition for knowledge.
All of the versions of the non-accidentality condition that I made use of above were originally proposed by philosophers as conditions for knowledge -- I have simply generalized them. Where Nozick proposes that S knows that P only if, if P weren't true, S wouldn't believe that P, we can generalize this to: S is aware of X only if, if X didn't exist, S wouldn't be in the intentional state that he's in. Where the reliabilist says that S knows that P only if the mechanism by which S formed the belief that P is reliable (in the sense that it tends to produce mostly true beliefs), we could propose more generally that S is aware of X only if the mechanism that produced S's mental state whose object is X is reliable (in that it tends to produce mostly veridical states). It is straightforward how the causal account generalizes. Again, I do not mean to take a stand on any of these particular analyses of knowledge, and I don't even mean to imply that there must be a unique one of them that is correct for all kinds of knowledge.
The one analysis (or family of analyses) of non-accidentality that does not generalize is the analysis involving justification. From "S knows that P only if S is justified in believing that P," I cannot generalize to a possible condition for awareness in general. The reason for this is that the notion of epistemic justification only applies to beliefs. Mental states other than beliefs cannot be epistemically justified or unjustified (for example, if I am having a certain sensation, it doesn't make sense to ask whether I'm epistemically justified in having that sensation). What this means is just that we will have to use some other account of non-accidentality for non-belief forms of awareness (we will return to this point in chapter 2), but it may still be true that epistemic justification is a necessary condition on knowledge. One consequence of this point is that it is possible for a philosophical skeptic to argue that we lack knowledge about the external world, without denying that we have perceptual awareness of (i.e., perceive) the external world, on the ground that beliefs have to satisfy a more stringent requirement to count as awareness than perceptual experiences do.
We can summarize the relation between knowledge and awareness
by saying simply that knowledge is the awareness of facts. So when
I said that I am aware of the fact that there are over a billion
people living in China, that was just to say that I know that there
are over a billion people living in China.
1.3. Epistemic dependence
As I briefly mentioned above, sometimes states of awareness
(or mental states that are candidates for awareness) are based on
other states of awareness (or awareness candidates). Let's look at
some examples of this relation before trying to characterize it in
general terms:
(a) I believe that there are well over a billion people in China. Why do I believe this? Because earlier during the writing of this chapter, I looked up China in a recent almanac, and I then saw that the almanac listed for the population of China a number beginning with a 1 and a 2 and having ten digits. Upon seeing this, I believed that the almanac reported that the population of China was well over a billion. So my belief that China contains well over a billion people is based on the belief that the almanac reported that China contains well over a billion people. Since each of these beliefs does as a matter of fact constitute knowledge, we can also say: my knowledge that there are well over a billion people in China is based on my knowledge that the almanac reported that there were well over a billion people in China.
The based-on relation for beliefs is the most-discussed form
of the relation among philosophers. The relation can also hold
between non-belief intentional states, however:
(b) Return to my example of the gambler who believes that the ball on the roulette wheel will land on black. Why does he believe this? Because he has a feeling that it will. So his belief that the ball will land on black is based on his 'feeling' or hunch (scare quotes around "feeling" because it's not a feeling in the sense of either an emotion or a tactile sensation).
We can see that the feeling and the belief are two different
states, because it would be possible for the feeling to exist
without the belief -- for example, if the gambler were more
reasonable, he might have the feeling that the ball will land on
black but still doubt whether it will, because he might tell himself
that he had no good reason to trust this feeling.
(c) When I converse with other people, I am to some extent aware of their emotional states, by means other than their reporting those states. How am I aware of them? Well, in part because people have facial expressions and tones of voice that reflect their emotional states, and I perceive those expressions and tones of voice. So my awareness of their emotional states is based on my awareness of their facial expressions and certain tonal qualities in their voices.
This is not to say that my awareness of people's emotional states is based on beliefs about their facial expressions and tones of voice. In many cases I do not have the relevant beliefs at all, because I do not possess the concepts that would be required to classify complex tonal qualities and combinations of movements of facial features, and/or because I do not take notice of those properties. For example, I may sense that Sally is tired by hearing a certain quality in her voice -- call it tonal quality Q1. At the same time, I may not possess any concept for picking out Q1 and distinguishing it from other qualities. Furthermore, if I did possess such a concept, I might even then fail to believe that Sally's voice had Q1, despite my hearing that quality. In all probability, Q1 would be a very complex and subtle characteristic. The concepts required for identifying it might be complicated and abstract mathematical ones (like the concepts required for identifying a voiceprint), so that it would not be at all obvious at first glance that Sally's voice had Q1. Still, my hearing that quality might in fact be how I know when Sally is tired.
Nor need my awareness of other people's emotional states
consist in beliefs either. We can show that my sensing of a
person's emotional states is distinct from my beliefs about their
emotional states by an argument parallel to the one used in (b)
above -- namely, that it is possible for me to have the sense that
another person is in a certain mental state without believing that
they are. For example, suppose I believe (whether correctly or not)
that the person I am observing is an actor. Suppose that this
person is putting on a great show of grief. I may believe (or even
know) that in fact he is not unhappy at all, while still feeling a
strong impression of his unhappiness -- he seems unhappy.
What we have here are three examples of the 'basing' relation
for states of awareness. In example (a), we have a belief based on
another belief. In (b), there is a belief based upon a non-belief
mental state. And in (c), there are non-belief mental states based
on other non-belief mental states. We can make the following
general observations about the relation:
(i) The based-on relation holds between mental states of the kind that are candidates for awareness.
This is partly a terminological point. Sometimes we speak of
facts or other objects of awareness as bases for states of
awareness. For instance, if asked to explain the basis for my
belief that there are over a billion people in China, I may equally
say:
"My belief that there are over a billion people in China is based on the belief that the 1996 almanac reported that there were over a billion people in China," or
"My belief that there are over a billion people in China is based on the fact that the 1996 almanac reported that there were over a billion people in China," or
"My belief that there are over a billion people in China is
based on the 1996 almanac's report."
Although all of these sound superficially as though they are identifying radically different things as standing in the same relation to the same belief, I think that they are merely notational variants. There is a certain relation that can hold between one mental state, A, and another mental state, B. When this relation holds, sometimes we describe the situation by saying that B is based on A, and sometimes we describe it (using "based on" in a slightly different sense) by saying that B is based on the thing that is the object of A. We can describe the relationship between these two senses of "based on" as follows: B is based on x in the second sense if and only if there is some mental state, A, such that B is based on A in the first sense, and x is the object of A.
Although these two ways of using the terminology are equally
acceptable in standard English, it is necessary for our purposes to
settle on one usage, so as to avoid confusion -- in a discussion of
direct realism, it is particularly important to avoid any possible
sources of confusion between mental states and their objects. I
will therefore from here on out use "based on" only in the first
sense -- that is, the sense which allows mental states to be based
only on other mental states. This usage has one important advantage
over the other usage: it allows us to characterize a relation that
may hold between unveridical mental states. For example, suppose
that Sam believes that he will have eternal life, (partly) because
he believes that God has promised eternal life to all Christians.
But suppose that Sam is mistaken -- God has not, in fact, made any
such promise. Then it would be incorrect to say Sam's belief that
he will have eternal life is based on the fact that God has promised
eternal life to all Christians, or that it is based on God's promise
to all Christians, because there is no such fact, and there is no
such promise. We can only describe the situation if we adopt the
first way of speaking and say that Sam's belief that he will have
eternal life is based on his belief that God has promised, etc. (We
could try instead speaking of the 'apparent fact' or 'ostensible
fact' that God has promised eternal life to all Christians, but this
really seems like just another way of referring to Sam's belief,
since "apparent" would have to mean apparent to Sam, not apparent
to the speaker.)
(ii) The basing relation is a form of epistemic dependence.
What I mean by this is that, when B is based on A, B's status as awareness depends on A's status as awareness. Intuitively, the basing relation is the relation whereby one mental state can transmit its favorable epistemic status to another mental state. If the first state doesn't have any favorable epistemic status (i.e. it doesn't count as awareness), then it can't transmit any (so the second state won't count as awareness). The 'favorable epistemic status' of a state of awareness amounts to its characteristics of being veridical and appropriately connected with its object, as discussed in section 1.1. So another way to put the point is this: If B is based on A, then B will count as awareness only if A is both veridical and appropriately connected with its object.
Thus, in light of the fact that my belief that there are over a billion people in China is based on my belief that the 1996 almanac reported that there were over a billion people in China, we can say that I know that there are over a billion people in China only if I not only believe but know that the 1996 almanac reported that there were over a billion people in China. If it turns out that my belief about the almanac was either false or unjustified (assuming this is my sole basis for my belief about the population of China), we would not say that I knew the population of China.(4)
We can say the same about example (b): given that the gambler's belief that the ball will land on black is based on a feeling, he knows that the ball will land on black only if the feeling constitutes awareness. If the gambler doesn't actually possess extrasensory powers (so the feeling is not genuine awareness), then he doesn't know (isn't aware) that the ball will land on black. And of course the same goes for cases like (c): if my perceptual experiences of people's facial expressions and voices are not genuine awarenesses (if they're hallucinatory), then I also will fail to be genuinely aware of their emotional states.
Notice that I have not said that, if B is based on A, then if A counts as awareness, B will count as awareness; I have only said that B will not count as awareness unless A does. I have only asserted a necessary condition for B's counting as awareness. There are two reasons why it would be a mistake to make the sufficiency claim: first, because A's being a state of awareness and B's being based on A are not sufficient to guarantee that B is veridical; and second, because even if B is based on A, A is a genuine state of awareness, and B is veridical, all this still is not sufficient for B to be non-accidentally veridical. Both points can be made using an example of Goldman's: Suppose that Henry is driving through an area where there are a lot of phony barns. These phony barns look just like real barns from the road, but in fact they are just façades, with no rear walls or interiors. Henry sees something that looks like a barn, and he believes that there is a barn there. We can now imagine two cases: First, suppose that the barn-like object is a barn façade. In that case, Henry is seeing (hence, is aware of) the barn façade. The barn façade roughly satisfies the content of Henry's visual experience (by having the right size, shape, location, and distribution of colors), and is appropriately connected with Henry's visual experience (by causing it in the normal way). Henry's visual experience is a state of awareness (albeit not the awareness of a barn), and his belief that there is a barn there is based on his visual experience, but his belief is not a state of awareness, because it is false. This sort of example can also be devised for beliefs arrived at through non-demonstrative inference.
Second, we can imagine that the barn-like object is a real
barn, although there are plenty of barn façades in the vicinity, and
if Henry were to see one of them, he would mistake it for a barn.
In this case, Henry's visual experience is a state of awareness (he
is aware of the barn), his belief that there is a barn there is
based on it, and his belief is true, but his belief still does not
amount to awareness, or knowledge, due to the possibility of there
having been a barn façade present instead of a real barn.(5)
(iii) Inference is a special case of basing.
Just as knowledge is the form of awareness which is
constituted by beliefs, inference is the form of the basing relation
in which a belief is based on another belief -- that is, to infer Q
from P is to base a belief that Q on a belief that P. This is
illustrated in example (a). It is interesting that just as
knowledge is the most-studied form of awareness in philosophy,
inference is the most-studied form of the basing relation in
philosophy.
(iv) When B is based on A, the content of A must be relevant to the content of B, such that the existence of something satisfying the actual content of A either entails or makes highly probable, or appears to the subject to entail or make highly probable, the existence of something satisfying the actual content of B.
This relation is most apparent in example (b), for the content of the gambler's feeling is the same as the content of his belief, so of course the existence of something satisfying the actual content of his feeling would guarantee that something satisfies the actual content of his belief. There is an equally strong connection in cases of deductive inferences -- i.e., when I deduce one belief from another, the content of the former is typically related to the content of the latter in such a way that if the latter is true, the former must be.
The "appears to the subject" clause is needed to account for cases in which a person commits a fallacy. In such a case, a person may base a belief that Q, for example, on a belief that P, even though in fact P does not entail or render probable Q. Still, P will at least appear to the subject to entail or render probable Q.
The beginning clause in my statement of (iv) about 'relevance' is not meant to be redundant. It is meant to account for the fact that one cannot, for example, base a belief in the four-color theorem on the belief that today is Tuesday -- even though, technically, one might say the fact that today is Tuesday guarantees (entails) that the four-color theorem is true. (In this sense, any proposition whatever guarantees that the four-color theorem is true, since the four-color theorem is necessarily true.) The relevance condition stipulates that the entailment must hold in virtue of the relevance between the contents of A and B.
The 'making probable' clause applies to both examples (a) and (c). That the almanac reports that there are over a billion people in China does not strictly guarantee that there are over a billion people in China, but it does make it highly probable, in the context of my background knowledge. Similarly, people's having certain facial expressions and tones of voice makes it highly probable that they have certain emotional states, given normal circumstances.
I characterized the probability relation differently for the two cases, and this is something that requires comment. In both cases, the probability relation was itself relative to something (i.e., something other than the mental state that the other state is based upon) -- in the first case, I said it was relative to background knowledge. In the latter case, I said it was relative to normal conditions. I think the motivation for the relativization in either case is clear enough. I don't want to say that, completely a priori, the fact that an almanac reports that the population of China is greater than a billion makes it highly probable that the population of China is greater than a billion. Abstracting from our background knowledge about almanacs, statistics-gathering practices in our society, and so on, I have no idea whether the existence of the almanac's report would confirm the proposition that it reports or not. Likewise, the occurrence of certain facial movements only makes it highly probable that a person is feeling anger in context of certain general but contingent facts about humans.
There is a rationale for relativizing to background knowledge in the first case and to normal conditions, rather than background knowledge, in the second. In the first case, we have inferential basing -- i.e., a belief based on another belief. The inferential basing relation needs to be able to transmit justification. That is, inference is a way (at least potentially, if everything goes right) of justifying a belief. But the existence of a probability relation between (the content of) belief B and a set of facts of which I (the believer) am entirely unaware would not contribute to the justification of B. B could, however, be rendered justified by being highly probable relative to another belief of mine and my background knowledge.
The same consideration does not apply to case (c). There is no requirement that non-inferential basings should transmit justification -- in fact, they cannot do so, because non-belief mental states are incapable of being either justified or unjustified. Furthermore, we should not make use of the notion of background knowledge in case (c), if we want to preserve that as an instance of awareness, because the relevant 'background facts' that make it highly probable that a person with certain facial expressions has certain emotions are likely to be unknown to me. What sort of facts would these be? Perhaps facts about the way human beings are generally 'wired up,' together with facts about the sort of conditions that usually obtain in our environment. I am, however, woefully ignorant of physiology (as are most people), and certainly don't know anything that would be adequate to underwrite the sort of probability relation in question. Or perhaps we should look to something simpler, such as the fact that people with facial expression F usually have emotion E. This too, I am afraid, is something I would be unaware of in many if not most cases. You might say, I know that smiles indicate happiness and frowns indicate unhappiness. But I know very little beyond that (and similar simplistic criteria), and this sort of principle is too crude and simple to do justice to my in-practice ability to distinguish a variety of different emotional states on the basis of facial expressions. I do not know what facial expression indicates tiredness, or annoyance, or what distinguishes a happy smile from a nervous smile. I am able to tell when a person is tired, or annoyed, etc., but I do not know how I do it. If someone were to propose to me that when people are tired, their eyes diverge slightly, and this is part of how we are able to tell when others are tired, I would have to say, "That may very well be; I do not know."
Because of these considerations, we should say that when B is
based on A, if B is a belief, then the existence of something
satisfying A makes probable the existence of something satisfying
B, given background knowledge of the subject; and if B is not a
belief, then the existence of something satisfying A makes probable
the existence of something satisfying B, given normal conditions.
We note that normal conditions are not necessarily conditions that
the subject knows or believes to be normal, or even believes to
obtain.
(v) Basing is a species of causal relation.
When B is based on A, A causes or causally sustains B, so that if the subject weren't in or hadn't been in state A, he wouldn't be in state B. This is clear in each of my examples:
(a) My belief that the almanac reports that there are over a billion people in China causes me to believe that there are over a billion people in China, and that is why, if I hadn't believed that the almanac reported that there were over a billion people in China, I wouldn't have believed that there were over a billion people in China. (In actual fact, my belief is also partly based on memory of statements I have heard from other people over the years. But we are pretending that my knowledge of the almanac report is my sole basis.)
(b) The gambler's feeling that the ball will land on black causes him to believe that the ball will land on black, and if he hadn't had the feeling, he wouldn't have had the belief.
(c) My seeing people's facial expressions and hearing their voices causes me to be aware of their emotional states. If I didn't see their facial expressions or hear their tones of voice, then I wouldn't be aware of their emotional states. (It is difficult to judge a person's mood from written correspondence precisely because much of our awareness of others' emotional states is based on awareness of their facial expressions and tones of voice. Of course, some of it is also based on what they say.)
We can appreciate the importance of condition (v) by considering cases where it fails. Here's a simple case involving beliefs: I'm in a math class, and the teacher informs me of the Pythagorean Theorem. This immediately causes me, let's suppose, to believe the Pythagorean Theorem, because I trust my teacher. There are in fact many different ways of proving the Theorem mathematically, though I do not know any of them (in the sense that I have not seen them worked out and could not produce any of them). Some of these ways involve only using certain simple arithmetical and geometrical premises which I do know. From a purely logical standpoint, it might be said that these items of knowledge that I have are grounds for the Pythagorean Theorem, and of course they are better grounds than the knowledge of a teacher's testimony. But I do not think anyone would say they are in fact my grounds for believing the Pythagorean Theorem, because the beliefs comprising them were not active in the process by which I formed my belief in the Pythagorean Theorem. My actual basis for belief is my belief that the teacher has said the Pythagorean Theorem is true.
(v) is also what distinguishes genuine reports of reasons for belief from rationalizations -- if a belief that P has not played any role in either producing or sustaining my belief that Q, then if I cite P as my reason for Q, even if I do happen to believe P and even if P is a good reason for Q, we call this "rationalizing."
Note that it is the causal connection, not the counterfactual connection, that is fundamental and necessary. That is, the reason why most of the time, when B is based on A, B counterfactually depends on A is that whenever B is based on A, A causes B. It is possible to have a case in which B is based on A even though, if the subject hadn't been in state A, he would still have been in B -- just suppose that if S hadn't been in A, he would have been in some other cognitive state, C, which would have generated B instead (the case of the pre-empted cause). But it is not possible to have a case in which A is the basis for B while A plays no causal role in producing or sustaining B.
The causal condition is equally important in cases of basing not involving beliefs. Return to example (c). Suppose that research shows that when human beings get tired, a number of perceptible things happen to their faces. Let's suppose that their eyes diverge slightly and their lower lips sag just a little. And suppose we find that both of these changes are easily detectable by other humans' visual systems. It seems clear that, even once that much has been discovered, there remains an open question: the perception of which of these changes, if either (or both), forms the basis for our sense that the person undergoing them is tired? The question seems to turn on which of the changes is such that perceiving it plays a role in bringing about the perception of tiredness. It could be that we perceive the eye change but that it doesn't have anything to do with our sensing the person's emotions, for example.
We should be careful now about the difference between a cause and the cause of a state of awareness. It may turn out that there are several different states of awareness that contribute to sustaining state B. For example, there are by now many different causes of my belief in the Pythagorean Theorem -- I have heard it asserted or assumed by many different people and textbooks, I have seen a visual demonstration of it in a museum and another on a television program, and I've seen at least one or two proofs of it. Each of these experiences might be called a cause of my belief, though none of them is the cause, because none of them clearly stands out from the rest as the most important. Shall we say that my belief is based on one or more of these experiences? Well, none of these experiences is the basis for my belief, but my belief is partly based on each of them, and we can say that my belief is (without qualification) based on the totality of the experiences.
We might now worry about deviant causal chains. Observations (i), (iv) and (v) are each necessary for a state B to be based on A, but they are not sufficient, because of the problem of deviant causal chains. If we add in (ii), we might have a sufficient set of conditions, but (ii) isn't really one of the conditions on the basing relation in the same sense that (i), (iv), and (v) are -- once we've stated (ii), we can still wonder under what conditions B's epistemic status does depend on A's epistemic status, and the other conditions are supposed to answer that, at least partly.
Here is an example of a deviant causal chain: Return to my learning of the Pythagorean Theorem. Suppose that earlier in the year, this same teacher has taught me certain axioms of arithmetic and geometry. These axioms are propositions from which the Pythagorean Theorem follows, although just as before, I have not actually seen any of the proofs of the Theorem, and my belief is proximately caused by the teacher's testimony and based on my trust in his veracity. We have agreed that in this case, my belief in the Pythagorean Theorem is not based on the mentioned axioms of geometry and arithmetic. But now add this detail: suppose that the teacher would not have taught me the Pythagorean Theorem unless I had first demonstrated mastery of the material presented earlier in the course. He has very accurate testing techniques, and he never teaches the Pythagorean Theorem to anyone who has not first learned the axioms of arithmetic and geometry. Thus, my knowledge of the axioms of arithmetic and geometry has (indirectly) caused me to believe the Pythagorean Theorem, by causing the teacher to tell it to me. Still, my beliefs in the axioms are not the basis for my belief in the Pythagorean Theorem. My sheer trust in the teacher's veracity is.
We could equally well imagine that the teacher won't teach students the Pythagorean Theorem unless they have first mastered a history lesson, so my knowledge of history would cause me to believe the Pythagorean Theorem -- here the failure of the one belief to be based on the other is even clearer, if that's possible. But to return to the above case: Condition (v) is satisfied: my belief in the axioms causes (albeit indirectly) my belief in the Pythagorean Theorem, explaining why if I hadn't believed the axioms, I wouldn't have come to believe the Pythagorean Theorem. Condition (iv) is satisfied: the axioms are logically relevant to the Pythagorean Theorem and in fact entail it. Condition (i) is unproblematic: beliefs are candidates for awareness (in the shape of knowledge). We can even make a go at condition (ii): suppose that the teacher's testing techniques are very good at distinguishing genuine understanding from mere rote repetition, so that if I hadn't really known but merely believed the axioms, the teacher would not have proceeded with the later course material. Still, my belief in the axioms isn't the basis for my belief in the Pythagorean Theorem.
Here is why I said (ii) might give us a sufficient set of conditions: although it's true in a sense that in this case, my warrant for believing Q depends on my warrant for believing P (where Q is the Pythagorean Theorem and P is the appropriate conjunction of axioms), it can be argued that this is not the relevant sense of "dependence." The dependence involved in the case is causal dependence, whereas what (ii) intends is constitutive dependence. That is, what (ii) requires is not that my warrant for believing P causes me to get warrant for believing Q, but rather that my warrant for believing Q depends on my warrant for believing P in the sense that my warrant for believing Q partly consists in my having warrant for believing P. And intuitively, this 'consists in' relation does not hold in the case at hand. Now, that states the meaning of (ii) for the case of beliefs based on other beliefs (warrant being a property of beliefs). We can convert it into a general statement about awareness easily enough: just as epistemologists use "warrant" (following Plantinga) for the property that converts true belief into knowledge, we could introduce a term for the property that converts a veridical intentional state into awareness -- I have above referred to it as "connectedness," "non-accidentality," and "positive epistemic status." We can then say that when B is based on A, where A and B are any kind of awareness, B's positive epistemic status partly consists in A's positive epistemic status. Consider how this principle would apply to example (c): What makes my awareness of Sally's voice non-accidentally veridical (I mean what makes the veridicality non-accidental) is that my auditory experience as of Sally's voice is caused by Sally's voice. 
Now what makes my awareness of Sally's tiredness non-accidentally veridical is also, in part, the fact that my auditory experience as of Sally's voice is caused by Sally's voice -- if my auditory experience were a veridical hallucination, then I would not be being aware of Sally's emotional state. This is not because the fact that my auditory experience is caused by Sally's voice causes my veridical sense of Sally's tiredness to be appropriately connected with its object; it is rather that the fact that my auditory experience is caused by Sally's voice (and hence amounts to hearing her voice) partly constitutes the appropriate connection between Sally's tiredness and my sense of Sally's tiredness. The appropriate connection between A's sense of B's emotional state and B's actual emotional state partly consists in the fact that A is perceiving some of the physical features of B that manifest B's emotional state (as opposed to hallucinating them).
I'm not going to press this as a satisfying answer to the deviant-causal-chains problem because I think a genuine analysis of the basing relation would require an answer to the question, "Under what conditions, exactly, does the positive epistemic status of A partly constitute the positive epistemic status of B?" and I do not have an answer to that question, other than to cite conditions (iv) and (v) -- which we have seen are insufficient. We can get closer to a sufficient set of conditions by stipulating that the causal mechanism connecting A to B has to be internal to the subject's mind -- that rules out my case with the math teacher. But there could still be (increasingly farfetched) cases of internally deviant causal chains: suppose that when I learn the axioms of arithmetic and geometry, I am so pleased with my intellectual progress (at having learned such interesting truths), that I get into a mood of reckless intellectual self-confidence, in which I decide that the next proposition I think up will surely be true and important. As it happens, the next proposition that I think up is one that (unbeknownst to me) follows from these axioms -- the Pythagorean Theorem -- and I immediately endorse it. So my knowledge of the axioms of arithmetic and geometry causes me to accept the Pythagorean Theorem, and the axioms entail the theorem, but it isn't the case that my belief in the theorem is based on my knowledge of the axioms. Again, the latter beliefs don't cause the former belief in the right way.
I don't have an answer to this analytical problem, but nothing
crucial turns on the exact analysis of deviancy in causal chains,
and I also do not take the lack of such an analysis as an obstacle
to accepting that cognitive basing is a kind of causal relation.
Deviants crop up wherever causal relations are involved: for an
action to be intentional, it must be caused by an intention in an
appropriate way; for a person to perceive an object, the object must
cause an experience in an appropriate way; for a person to knock
over an object, the person must cause the object to fall in a
certain sort of way; for a person to break a vase, he must cause the
vase to break in a certain way. Deviant causal chains can appear
in any of these cases -- there can even be cases of causing a vase
to break without breaking the vase (suppose I hire Joe to throw the
vase on the floor: have I then broken the vase?), although there is
no doubt that some sort of causal analysis of "breaking" is
correct.(6) The moral is that a problem of deviant causal chains is
what we should expect (even) if a causal account of cognitive
"basing" is correct.
1.4. Direct vs. indirect realism, at last
All of the above has been by way of putting us in a position to appreciate my definition of direct realism: direct realism is the thesis that perception constitutes direct awareness of the external world. That is: in perception, we are aware of certain parts or aspects of the external world, and our awareness of these things is not based on our awareness of anything that's not in the external world. In contrast, idealism holds that in perception, we are aware of some internal (mind-dependent) phenomena, and we are not aware of the external world, since (abstract objects aside) there is no external world to be aware of. And indirect realism holds that either in perception or as a result of perception, we are aware of the external world, but our awareness of the external world (abstract objects aside) is always based on our awareness of something internal.
The way I've just defined the distinction between direct and
indirect realism is not the only way of doing so, of course. It's
not the only reasonable way of drawing the distinction, and it's not
the only way that has been used by writers on perception. Since
"direct realism" is a philosophers' term of art, there isn't really
any unique right way of defining it, but some ways are more useful
than others (mainly by virtue of making the view designated by the
term more interesting). So I propose to look at a few other ways
of drawing the distinction to point out how they differ from mine:
(1) Direct realism is the view that in at least some cases, we directly perceive the external world, where the notion of directness should be understood causally. That is, according to direct realism, the causal link between an object and our perceptual experience of the object is direct, in the sense that there are no intermediate causes between the object and the experience.
There's no need to dwell on the impropriety of this definition
for my purposes. The definition could be appropriate if one wants
"direct realism" to designate a clearly unacceptable view (as
perhaps the term "naive realism" is meant to imply), but not if one
wants it to designate a view of perception that has actually been
held by philosophers. The falsity of 'direct realism' in the above
sense is hardly a recent or surprising discovery of science or
philosophy. The role of the brain in producing (e.g.) visual
experiences may be regarded as a relatively recent and perhaps
surprising scientific discovery, but the rough role of light (i.e.
that light must travel between the object perceived and our eyes in
order for us to see) has been known at least for centuries (it is
why you don't see things if you interpose an opaque object between
them and your eyes), and it would be inconsistent with 'direct
realism' construed in this way. But any view that can be refuted
by the fact that you can't see objects through an opaque screen is
not very interesting. (Granted, a bit more than that might be
required, to show that the reason you can't see through an opaque
screen is that it doesn't transmit light, but I think the point
remains that this form of 'direct realism' isn't very interesting.)
On the other hand, we can also find ways of defining the
direct/indirect distinction that make indirect realism too easy to
refute, such as:
(2) Indirect realism is the view that we never really see, feel, taste, or otherwise perceive anything external, although we nevertheless know that some such things exist. Instead, we only truly see, feel, etc. sense-data or ideas in the mind. Direct realism is the view that we can perceive external objects or events.
This is the form of indirect realism that Berkeley, by and by,
manages to saddle Hylas with:
Hyl. Properly and immediately nothing can be perceived but ideas. All material things therefore are in themselves insensible, and to be perceived only by their ideas.
Phil. Ideas then are sensible, and their archetypes or originals insensible.
There is no need to recount all the fun that Philonous goes on to have from there.(7)
Again, I think this characterization of the distinction just
makes it too obvious which view is correct. The indirect realist
-- even a sense data theorist -- needn't be committed to saying that
we see sense data, still less that we don't see physical objects
(suppose that seeing X is being aware of X on the basis of one's
awareness of a visual sense datum of X: then we don't see visual
sense data, since we don't have visual sense data of visual sense
data, and we do see physical objects, since we are aware of them on
the basis of awareness of visual sense data of them). Whatever
one's account of seeing is, it had better wind up that typical
middle-sized dry goods, such as sofas and elephants, are visible --
and I'm inclined to insist, in addition, that it had better not
turn out that sensations or ideas in the mind are visible.
(3) Indirect realism is the view that, in the situations in which we normally say we believe that certain physical objects are present in our environment because we see or otherwise perceive them, we are actually inferring that such objects are present, from our beliefs about what internal states we are enjoying. Direct realism is the view that we typically form physical-object beliefs without inferring them from beliefs about our mental states.
This version of 'indirect realism' just gets the phenomenology
of perception wrong, in a fairly obvious way. Introspection reveals
that we normally don't go through processes of inferring when it
comes to simple physical-object beliefs about our immediate
environment. When I see a coffee cup in an everyday context, I do
not think to myself, "I am now having a visual experience as of a
coffee cup. It is highly probable that if I have a visual
experience as of a coffee cup, then there is a coffee cup present.
Therefore, (probably) there is a coffee cup here." Instead, when
I see the cup I just straightaway take it for granted that there is
a cup there. There are even some people (philosophers, mainly) who
would deny that there are such things as 'visual experiences' (not
that they would deny that we see things, but they would deny that
seeing can be analyzed in terms of enjoying a mental state called
a 'visual experience' -- see chapter 2 below), and yet these
philosophers still know (by seeing) that there are physical objects
around them. There are even more people who, while knowing lots of
things about their physical environment, entertain no opinion about
such things as visual experiences, perhaps not even having the
concept 'visual experience,' or 'tactile experience,' etc. They
just don't think about those things. Unless one is either a
philosopher or a cognitive psychologist, one just doesn't have much
occasion to form or apply such concepts. But again, a view that can
be refuted by (approximately) the fact that you can see that there's
a cup on the desk without having the concept of a visual experience,
is not very interesting.
(4) Direct realism is the view that we are sometimes noninferentially justified in believing a proposition asserting the existence of a physical object.(8) Indirect realism is the view that we are sometimes inferentially justified in believing a proposition asserting the existence of a physical object, but we're never noninferentially justified in believing such a proposition.
This pair of definitions is reminiscent of (3), but not the same as (3), for this reason: According to Fumerton, the claim that S's justification for believing P is inferential does not entail that S's belief that P was actually the product of an inference; there is a derivative sense of inferential justification, having to do in part with a belief's being caused by an experience, such that a belief can be inferentially justified on the basis of an experience, without the belief having been inferred from anything.(9)
How does this formulation differ from mine? One difference is perhaps only verbal: Fumerton and I both agree that a belief might be based on some mental state that's not a belief (such as an experience), where this involves the mental state's causing the belief in an appropriate way. He calls this relationship a form of "inferential justification," while I reserve any term involving a cognate of "infer" for a relation between beliefs. But Fumerton may be using "on the basis of" in the way that I said I was not going to -- i.e., the possibility he allows may only be what I would express by saying that a belief can be based on beliefs about experiences.
Second, Fumerton's version of direct realism is about beliefs,
while I have spoken more broadly of awareness. So my direct
realist, unlike Fumerton's, is not committed to saying that there
are any beliefs that aren't based on anything. Instead, he says
that there are some states of awareness of the external world that
aren't based on anything. Whether this includes beliefs is left
open. Also, my indirect realist is not forced to posit an extra
sense or extra form of "inferential justification" -- although he
would have to posit other forms of 'basing' besides inferential
basing.
My formulation is meant to retain the epistemological spirit
of (3) and (4) while allowing the indirect realist (at least prima
facie) to escape the obvious psychological objections to (3). On
my formulation, the indirect realist must hold that when perceiving,
people are in some manner aware of certain internal states or
events, but the indirect realist need not hold that people believe
these internal events are going on or have concepts of these
internal events. The indirect realist also doesn't have to hold
that any inferring is going on, though he does have to hold that our
awareness of the external world is in some manner based on our
awareness of the internal phenomena. My formulation is thus
designed to give the epistemological indirect realist a break.
(5) Indirect realism is the view that we perceive external things at least in part by virtue of having certain internal mental states that 'represent' them, in the sense of having intentional contents that external things can satisfy or fail to satisfy. Direct realism is the view that people perceive things without having any such states.
It should be clear that this formulation is very different from my own, since on my conception of awareness, being aware of something always entails having a mental state with content, and yet I haven't said anything to imply that 'direct realism' is incompatible with our having awareness (as it would be on the present definition of "direct realism," if that conception of awareness is correct). If one describes such a mental state as a "representation," then being aware of something always involves having a mental representation. It should also be clear that it is not true, on my view, that being aware of something always involves being aware of a mental representation of that thing. For, given the conception of awareness at hand, to be aware of a mental representation would involve having a mental representation of a mental representation, and it is surely false that whenever S is aware of X, S has a mental representation of a mental representation of X. In fact, nothing I've said so far entails that anyone is ever aware of mental representations (except that I do, by talking about mental representations, at least imply that I am, presently, aware that they exist).
To make the issue clearer: according to formulation (5), the
'indirect realist' holds that there's a mental state such that (a)
when you're perceiving a table (where "perceive" is a success term),
you're in that state, (b) you perceive the table at least in part
by virtue of being in that state, but (c) it is metaphysically
possible for you to be in that state while there is no table. The
'direct realist,' under definition (5), holds that there is no such
state; he says that the only mental state that you're in when
perceiving a table, such that you perceive the table by being in
that state, is a state of perceiving the table -- the things I've
been calling "perceptual experiences" don't exist.(10) We'll discuss
this kind of 'ultra-direct realism' further in chapter 2, where I'll
give reasons for rejecting it.
(6) Direct realism holds that there are some external objects that we see, such that we don't see them in virtue of seeing anything else. Indirect realism holds that there are some mental objects that we see not in virtue of seeing anything else, and there are also some external objects that we see, but we always see external objects in virtue of seeing things distinct from them.(11)
This is Frank Jackson's formulation of the issue. One respect in which it differs from mine is that Jackson restricts his concern to visual perception, while I am concerned with perceptual awareness in general. Another difference is that Jackson commits the indirect realist to the view that we can sometimes see mental phenomena. Thus, consider the possible sense-data theorist mentioned above under (2), the one who holds that seeing X should be defined in terms of (among other things) having a visual sense-datum of X: on Jackson's formulation, this would actually be incompatible with indirect realism, because it implies that (given that we don't have visual sense data of any mental phenomena) we don't see any mental phenomena, and it implies that when we see physical objects, we sometimes don't see them in virtue of seeing anything else (given that we don't see any non-physical things). This apparent deficiency in the formulation could be remedied by replacing "see" with "apprehend" or "become aware of," thus: the direct realist maintains that we sometimes are aware of certain physical phenomena not in virtue of being aware of anything else; the indirect realist maintains that we're sometimes aware of certain mental phenomena not in virtue of being aware of anything else, and we're also sometimes aware of physical phenomena, but we're always aware of physical phenomena in virtue of being aware of other things (meaning: for each X such that X is physical and someone is aware of X, there exists a Y such that he's aware of X in virtue of being aware of Y. Y might be physical or non-physical).
More importantly, my formulation is closer to the
epistemological concerns of (4) as a result of using the 'based on'
relation rather than the 'in virtue of' relation. These relations
are importantly different. My distinction between direct and
indirect awareness stems from the observation that sometimes a
mental state gets to count as awareness (in part) in virtue of being
appropriately related to, including being appropriately caused by,
another state of awareness. Jackson's distinction between mediate
and immediate awareness (assuming he would allow the substitution
of "be aware of" for "see") stems from the observation that
sometimes a given mental state gets to count as the awareness of X
(in part) in virtue of being the awareness of Y. So my 'basing'
relation is a kind of causal dependence ("B is based on A" implies
"A causes B"),(12) while Jackson's 'in virtue of' relation is a kind
of constitutive dependence: "S φs in virtue of ψing" is more like
"S's ψing counts as (or constitutes) S's φing" than like "S's ψing
causes S's φing." This comes out in his definitions of "in virtue
of":
An A is F in virtue of a B being F if the application of "----
is F" to an A is definable in terms of its application to a B
and a relation, R, between As and Bs, but not conversely.
This A is F in virtue of this B being F if (i) an A is F in
virtue of a B being F (as just defined), (ii) this A and this
B are F, and (iii) this A and this B bear R to each other.(13)
So, perhaps surprisingly, it would be possible to be a direct
realist in my sense while being an indirect realist in Jackson's
sense, and it would be possible to be a direct realist in Jackson's
sense but an indirect realist in my sense. To illustrate, consider
what would be (on my definition) a paradigmatic indirect realism:
suppose a philosopher believes that we have knowledge of the
external world, and the way we know about the external world is
always by inferring propositions about the external world from our
beliefs about the character of our sense data. (Although I have
said that my formulation does not require the 'indirect realist' to
hold this extreme position, certainly this would be one possible
form of indirect realism.) Nevertheless, suppose this philosopher
doesn't think that we are aware of external objects in virtue of
being aware of our sense data, because he denies that being aware
of a physical object can be defined in terms of (or is constituted
by, or anything like that) being aware of a sense datum that has a
certain relation to the physical object. Note that this latter
belief would not be at all an unnatural addition to the first part
of the theory, as we see by comparing other instances of inferential
knowledge. Suppose that I see a soaking wet person enter the room,
and this prompts me to infer (from the belief that a soaking wet
person has entered the room) that it is raining outside. I then,
of course, have two distinct beliefs, which are causally connected.
In this case, we should not say any of the following:
My knowledge (or belief) that a wet person has entered the room constitutes knowledge that it's raining outside.
My knowledge (or belief) that a wet person has entered the room counts as knowledge that it's raining outside.
Knowing that it's raining outside is definable in terms of knowing that a wet person has entered a room.
I know that it's raining outside in virtue of the fact that I
know that a wet person has entered the room.
Any of these would be unnatural, and incorrect, descriptions of the
relation between my two items of knowledge (I take it that all of
the above are similar, if not equivalent, claims -- all describe a
relation something like constitution). On the other hand, the
following would be appropriate descriptions of the case:
My knowledge (or belief) that it's raining outside is based on my knowledge (or belief) that a wet person has entered the room.
My knowledge (or belief) that it's raining outside is caused
by my knowledge (or belief) that a wet person has entered the
room.
So it would be natural for the sense-datum foundationalist I imagine, who holds that all external-object beliefs are inferred from sense-datum beliefs, to expressly deny that we're aware of the external world in virtue of being aware of sense data (the connection is not that close), while holding that our awareness of the external world is, instead, based on our awareness of sense data. This would make the theory a form of indirect realism in my sense, but (oddly enough) direct realism in Jackson's sense.
We could also imagine a sense datum theorist who would be an indirect realist in Jackson's sense but a direct realist in my sense. This philosopher would hold that we enjoy states of awareness of sense data and that these states also sometimes count as awareness of physical objects in virtue of some relation holding between the physical objects and the sense data (much as perceiving Bob's head can count as perceiving Bob, in virtue of a certain relationship between the head and Bob), and he would deny that these states cause us to be aware of physical objects. This position is at least consistent, and the first part of it may actually require the second part (a 'counts as' relation may be incompatible with a causal relation).
I'm not going to argue that either Jackson's or my formulation of the issue is intrinsically better than the other. Rather, we should merely distinguish the two formulations as addressed to different concerns. In fact, the difference will not prove very important, because the account of perception that I defend will be a form of 'direct realism' on either definition of the term, and my criticisms of what I call "indirect realism" will also apply to the view of perception that Jackson has developed. I will be criticizing the idea that in perception, we normally enjoy awareness of our own minds (or mind-dependent phenomena), which would be common ground between the 'indirect realists' of the two preceding paragraphs.
2. PERCEPTION AS AWARENESS
In the last chapter, we developed a general conception of what it
is to be aware of something. The next task is to exhibit perception
as a form of awareness of the external world, under that conception.
2.1. The traditional analysis of perception
The conception that I'm going to call "the traditional
analysis of perception" is probably common ground between my version
of direct realism and a number of forms of indirect realism, though
I shan't worry about arguing that the analysis really is a
traditional one (let alone the traditional one). The conception is
this: For an observer, S, to perceive a physical object, X, four
things have to happen:
(i) S must have a certain kind of mental state called a 'perceptual experience';
(ii) X must exist (this is at least part of what is meant by saying that "perceive" is a success verb);
(iii) X must at least roughly satisfy the content of the experience; and
(iv) X must cause S to have the perceptual experience.
Condition (iii) is probably the most controversial and least worthy of inclusion in something called "the traditional analysis," for reasons that will become clear below.
We can see how, on the above analysis, perception is a form of awareness. The state of awareness is the perceptual experience, conditions (ii) and (iii) secure that the state is veridical (condition (ii) being redundant, of course), and condition (iv) constitutes the 'appropriate connection' between the intentional state and its object for this type of awareness.
I'm not going to spend much time on (ii). That I'll take for
granted. Some doubt about it could be raised in cases like the
following:
(a) Joe is experiencing double vision, when he holds his hand in front of his face. We might describe this situation by saying, "Joe sees two hands, where there really is only one." But it isn't the case that there exist two hands such that Joe sees them.
(b) Suppose that I'm seeing an after-image on the wall. Is it the case that there exists an after-image, such that I'm seeing it on the wall? This seems questionable.
(c) I'm being subjected to brain surgery while I'm still conscious.
The doctor stimulates certain parts of my brain and asks me to
"describe what I see." I say something like, "Okay, I see some
yellow splotches sliding across the ceiling." Do the yellow
splotches exist? Suppose I go on to report, "Ooh, now I'm seeing
some flying pink elephants." Certainly there don't exist any flying
pink elephants such that I'm seeing them.
Examples like this are a little less convincing if one uses
"perceive" instead of "see," though this may just be because "see"
is a more commonly-used word. These examples do not trouble me,
however. I'm not interested in arguing that there is no non-success-verb use of "see" in English (the so-called 'phenomenal
sense' of "see"), and my view of perception will leave room for a
straightforward analysis of 'phenomenal seeing': to phenomenally see
an F is merely to have a visual experience, where "an F" figures in
the content of the experience. What I maintain is just that there
is a use of "see" in which it is a success verb (the 'relational
sense' of "see"). One can see this by reflecting on hypothetical
exchanges such as this:
A: "I think I saw King Ulgar last night."
B: "You couldn't have seen King Ulgar last night; he died over
a week ago."
B's response seems perfectly logical. Well, A could still claim to
have seen Ulgar, if what he thinks he saw was the corpse (or perhaps
a photograph of Ulgar, etc.); but otherwise not. An even more
forceful response (if true) would be something like:
B: "You couldn't have seen King Ulgar. King Ulgar is a
fictional character that I made up."
Similar lessons apply to "perceive" as to "see" (I take it that seeing is always a form of perceiving).
I also will not discuss (iv) at length. We saw the motivation for that condition in section 1.1, with the case of veridical hallucination. That was the case where I have an LSD-induced hallucination of a spider crawling on my desk, and by chance, there happens to be a spider right there, but I'm not seeing it.
(iv), of course, raises the issue of deviant causal chains. For example: suppose the proverbial brain surgeon sees a cat, and this causes him to stimulate my brain in such a way as to produce a visual experience of a cat. Then the cat has caused my visual experience, and the other conditions mentioned above are satisfied, but I'm still not perceiving the cat. Because of cases like this, we can't say that (i)-(iv) are a sufficient set of conditions for perceiving. There must be the right (non-deviant) kind of causal chain. But I'm going to pass over the issue of exactly what the right kind of causal chain is.
In the following sections, then, I will defend conditions (i) and (iii).
2.2. Perceptual experience & the pure intellectualist account
I have been using the concepts of 'perceptual experiences,' 'visual experiences,' and the like up till now without explicitly defining them. My earlier discussion of what I called "ultra-direct realism" (formulation (5) in section 1.4), however, gives a clue. According to the traditional analysis, whenever a person is perceiving, there is a purely subjective (i.e., purely internal) mental event or state that occurs, that is a component of the event of perceiving, and that is the only purely internal component of that event. In other words, when you're perceiving anything, there's a certain mental state that you're in, such that your perceiving is partly constituted by your being in that state, and such that that state could have existed without there being any external object that you're aware of. That mental state (and any mental state intrinsically similar to it) I call a perceptual experience. Of course, I cannot make there be such a state by stipulation, so if I am mistaken, and there is no such state, then there are no perceptual experiences. "Visual experience," "auditory experience," and so on, will be understood similarly. That is, 'visual experience' is defined to be the purely internal state that is a component in the process of seeing things, 'auditory experience' is the purely internal state that is part of hearing things, and so on.
'Perceptual experience,' then, is not to be confused with perception. It follows from the above definition (together with the traditional analysis) that it is possible to have a perceptual experience without perceiving and therefore without having a perception -- for, according to the above definition, it is possible to have a perceptual experience without there being any object that you're perceiving, but according to the traditional analysis (condition (ii)), it is not possible to perceive without there being an object that you're perceiving. "Perceptual experience," then, should not be heard like "experience of perceiving," but rather like, "experience subjectively like perceiving."
It also should not be thought that the existence of 'perceptual experiences' in my sense is automatically common ground to all people who believe that there's such a thing as perception. The view I characterized as "ultra-direct realism" holds that we often perceive external objects, but we don't have any perceptual experiences (in my sense of the term). The ultra-direct realist, in contrast to myself, holds that the only mental (or quasi-mental?) event going on during a perception of a table that's a part of the perceiving, is a perception of a table, and that is an event that cannot exist in the absence of a table.(14)
Aside from the above definition, there are two ways of illustrating what I mean by "perceptual experience" and why perceptual experience is important to the analysis of perception. One is to compare two different events that both involve perceptual experience and notice what they have in common. This will be my approach in the next section. The other, the approach of the remainder of this section, is to contrast our perceptual knowledge as it actually is with the way our knowledge of the external world would be or might be if we didn't have perceptual experiences.
This latter way of illustrating the importance of the concept of perceptual experience is inspired in part by Laurence BonJour's coherentist-friendly account of observation. Without (hopefully) simplifying too much, BonJour says that what happens during observation is that you just find yourself believing a bunch of things about your environment. These beliefs are characterized by (1) being extremely specific and detailed, (2) being 'cognitively spontaneous' in the sense that they are not the product of inference or deliberation, and (3) being peculiarly difficult to resist. This is what distinguishes observation beliefs from other beliefs.(15) BonJour's account is motivated partly by his view that beliefs can only be based on other beliefs -- accordingly, other kinds of states of awareness don't have much of a role to play in his epistemology. In fairness, BonJour doesn't actually deny the existence of such things as sensations and perceptual experiences. He concedes that there might be, accompanying our cognitively spontaneous beliefs, such things as 'sense impressions' or 'sensa,' but he remains noncommittal.(16) However, David Armstrong and George Pitcher have each defended the extreme intellectualist position, the position that perception can be analyzed in terms of belief and dispositions to believe as the sole mental components. Armstrong argues, with a few qualifications, that "Perception is nothing but the acquiring of knowledge of particular facts about the world, by means of the senses."(17)
I do not think I am alone in feeling something missing from this account of observation. There doesn't seem to be any seeing or feeling stuff going on in this picture.
I think we can imagine beings for whom the purely intellectualist account would be true. We might even be able to imagine the case so that they would have knowledge (and not merely beliefs) about the external world. But to imagine it is immediately to see that, phenomenologically, our cognitive relation to the world is not at all like that. I suppose that 'observation' would seem pretty strange for such beings -- you just suddenly find yourself believing a bunch of complicated propositions for no reason. Perhaps certain actual cases in which people have 'premonitions' conform to the intellectualist model (with just the difference that Armstrong would have us have very complex and detailed premonitions all the time), and they are for that reason (among other reasons) often regarded as strange and inexplicable.
What is wrong with the account, of course, is that it attempts to account for perceptual knowledge without involving perceptual experiences. I don't just spontaneously think that there's a green coffee cup on my desk for no reason. I think there's a green coffee cup here because I'm having sensory experiences of a certain sort.
So far, I haven't given any real argument -- nothing beyond a simple appeal to introspection -- for believing in this element of perception that BonJour and Armstrong allegedly neglect. In fact, I consider the evident inadequacy of a purely intellectualist account to be evidence of the reality of perceptual experience, rather than the other way around. But more of an argument can be fashioned, from attention to cases in which we 'don't believe our eyes.' For example, suppose that I am veridically seeing a pink rat on a table, but for whatever reason (my awareness of the improbability of the event, my awareness of my present state of inebriation, etc.), I don't believe my eyes -- i.e., I do not believe that there is a pink rat there. Still, it seems clear that there is something going on in my mind, something that would induce more gullible individuals to accept the existence of a nearby pink rat, something seemingly pink-rat-representing. It is, surely, a visual experience of a pink rat. And once one accepts this, it becomes difficult to see a reason for declining to accept that such states are present also in the cases in which I do believe my eyes. And then it is very implausible to deny to these states any role in explaining observational knowledge.
BonJour does not discuss how he would analyze cases of this kind. One possibility would be to move from actual beliefs to dispositions or 'inclinations' to form beliefs of certain sorts -- perhaps perceiving partly consists in having those kinds of states.(18) I think this suggestion gets things backwards, explanatorily, or at the least neglects to provide an explanation: what gives me a disposition or inclination to think there's a pink rat on the table is the experience I'm having. After all, I don't just suddenly have these dispositions to believe, out of the blue, any more than I suddenly have actual beliefs out of the blue. More to the point, this suggestion doesn't seem to do any better justice to the phenomenology of ordinary perception than the actual-belief theory -- it still doesn't sufficiently differentiate ordinary perceiving from having complicated premonitions or intuitions.
It is true that, if either of these forms of intellectualism
(i.e., the actual-belief theory or the dispositional theory) were
true, we could still say that there were perceptual experiences of
a sort -- for the beliefs or the dispositions to believe that we
formed would then be perceptual experiences under my definition.
In complaining that the intellectualist account leaves out
perceptual experience, I do not mean that it entails some such
proposition as "There are no perceptual experiences." Rather, what
I mean is this: there are certain states that we in fact have that
are perceptual experiences, and the intellectualist account leaves
those states out. It is because a purely intellectualist account
of perception is so transparently inadequate that it provides a
useful illustration of the significance of perceptual experience.
2.3. Against ultra-direct realism
2.3.1. The argument from hallucination, round 1: explaining subjective indistinguishability
The main argument for the existence of perceptual experiences
is the argument from hallucination.
(i) Imagine that a person, S, is veridically seeing a pink rat on
the table before him, where everything is normal, so the rat looks
pink and ratlike. S is then in a state that he could describe as
follows: "It looks to me as if there is a pink rat on the table."
(ii) Now imagine, instead, that S, under the influence of
hallucinogenic drugs, diabolical neurosurgeons, or the like, is
standing in front of a table with nothing on it, seeing the table,
but hallucinating a pink rat on the table. Suppose that his
hallucination is so vivid and detailed that it seems to him just as
if there is a pink rat on the table. S is very much tempted to
think that there's a pink rat there and that he is seeing it.
It seems obvious that these two scenarios have something in common, something going on in S's mind and having to do (in some way or other) with pink-rathood. This common element, the traditional analysis says, is a perceptual experience (specifically, a visual experience) of a pink rat (the common element is by definition a perceptual experience, if it exists).
The point is not that typical drug-induced hallucinations, the hallucinations of the mentally ill, or other hallucinations that people have in the actual world are usually indistinguishable (by the subject) from veridical perception. The claim isn't even that it is nomologically or technologically possible to produce such hallucinations in human subjects. In fact I think it is nomologically possible for there to be hallucinations that are subjectively indistinguishable from perceptions (because I think it is nomologically possible for a brain to be stimulated in the same way it is stimulated during perception by something that is not being perceived), but that claim is not required by the argument. The metaphysical possibility of hallucinations that are indistinguishable from perceptions is all the argument requires. The reason for this is that the modality intended in the definition of "perceptual experience" (I stipulated that perceptual experiences are states that 'could' exist in the absence of external objects of awareness) is metaphysical. And I have no doubt that this is at least metaphysically possible. I therefore ask the reader to suppose that S is having a hallucination of that sort in case (ii).
So far, my argument relies on an appeal to introspection or intuition (more precisely, perhaps, imaginative projection) at a crucial stage -- it just seems that there is some element common to cases (i) and (ii). I don't regard this as a flaw in the argument -- insofar as the issue is one of what mental states we have, I think introspection is precisely the appropriate method to rely on. This does not mean I think an appeal to introspection such as I have just made immediately and irrevocably settles the matter, however -- the ultra-direct realist may make an effort at explaining away the appearance. That is, he can try to present reasons for thinking the appearance must be misleading (in this case, that cases (i) and (ii) don't really have anything in common) and an explanation of why things seem the way they do (i.e., why it seems as if the cases do have something in common). But the presumption is against him.
Still, it's best to bolster the intuitive judgement with more of an argument: In case (i), it looks to S as if there is a pink rat on a table in front of him. And in case (ii) as well, it looks to S as if there is a pink rat on a table in front of him. Therefore, cases (i) and (ii) exhibit a common state (or event) -- namely, its looking to S as if there is a pink rat on a table before him. This state is clearly at least partly mental, because it is logically impossible for it to look to S as if there is a pink rat, etc., unless S has a mind, and it is purely internal, since it is metaphysically possible for it to look to S as if there is a pink rat, etc., even while there's no external object that S is aware of (especially, no rat). Therefore, its looking to S as if there's a pink rat on a table in front of him is a perceptual experience, if nothing else is. (It's still possible, consistent with this argument, that its looking to S as if there's a pink rat, etc., is not a perceptual experience but rather only a component or aspect of a perceptual experience. Recall that a perceptual experience is the maximal purely internal component in perception.)
There are two possible ways for the ultra-direct realist to respond to this, each very similar to the other. I don't see it as a plausible option to deny that case (i) involves its looking to S as if there's a pink rat, etc., nor to deny that case (ii) involves its looking to S as if there's a pink rat, etc. So the ultra-direct realist, in order to maintain that there is no state common to the two cases, must claim either that "it looks to S as if there's a pink rat..." is ambiguous, so that it means different things depending on whether it is applied to a case in which S is seeing or a case in which S is hallucinating, or that its meaning is intrinsically disjunctive (in a non-trivial sense).
What makes it implausible that the expression is ambiguous is the fact that S can be in a situation in which all he knows is that it looks to him as if P, but he doesn't know whether he's seeing or hallucinating. Suppose S is in fact in case (i), but he doesn't know that he's seeing a pink rat. He isn't sure; he thinks he might be hallucinating. All he knows is what he expresses by, "It looks to me as if there is a pink rat on a table in front of me." According to the ambiguity theory, S either doesn't mean anything by this, or else doesn't know what he means. According to the ambiguity theory, the words, "It looks to me as if there is a pink rat..." can either mean something like, "I'm seeing something as a pink rat..." or mean something like, "I'm having a hallucination as of a pink rat..." But S doesn't mean to assert either of these things, since he doesn't believe either of these things. So either his words wind up taking on one of those meanings anyway (most likely the first), in which case he doesn't know what he means, or they take on neither of these meanings, in which case S doesn't mean anything. Either of these alternatives is absurd.(19)
The disjunctive theory is a bit less objectionable. On this account, what S would mean in saying, "It looks to me as if there's a pink rat" is something like, "Either I'm seeing something as a pink rat, or else I'm hallucinating a pink rat."(20) The expression isn't ambiguous on this view, since it always means the disjunction. Consequently, I can't argue that on this view S wouldn't know what he meant; he could know that he meant the disjunction, and he just doesn't know which disjunct it is that makes his disjunctive belief true. Against this view, though, I would argue that the traditional analysis provides a more natural, more straightforward explanation of how S comes to know that either he's seeing something as a pink rat or he's hallucinating a pink rat. The proponent of the traditional analysis is in a position to offer the following simple and natural explanation: S is directly acquainted with a state (a visual experience) that is common to both pink-rat sightings and pink-rat hallucinations.(21) Therefore, S can know that one of those two things is going on, without knowing which.
Clearly the ultra-direct realist can't offer that explanation, for he can't accept that S is acquainted with a visual experience. What might the ultra-direct realist say S is acquainted with? A pink rat? But this so far does not explain why S winds up thinking that he's either seeing or hallucinating a pink rat, but not that there's a pink rat. A belief that one is seeing a pink rat might plausibly be partly explained by one's acquaintance with a pink rat, but since pink-rat hallucinations don't involve anything remotely similar to pink rats, it's puzzling why a disjunct mentioning a possible pink-rat hallucination should need to be added to a belief that's based on an episode of acquaintance with a pink rat.
A similar complaint could be lodged against the suggestion that S is directly acquainted with a seeing of a pink rat. This would not explain his merely concluding that either there's a sighting or there's a hallucination going on. It is as if a person were to become acquainted with a giraffe and thence come to the conclusion that either a giraffe or a coffee table was present.
What about this suggestion: S is acquainted with an episode of seeing a pink rat, but, although veridical pink-rat sightings are utterly different from pink-rat hallucinations, pink-rat sightings can often seem like pink-rat hallucinations (or: they appear similar, although they're not really similar). So we introduce a second-order appearance -- there are two different kinds of "it appears as if P" states (the perceptions and the hallucinations), and the two kinds of appearings themselves appear similar. Is this an adequate explanation? Undoubtedly explanations of this sort are often correct. For example, suppose I know that wooden decoy ducks typically look very much like real ducks, even though in reality they are very different from real ducks. This could explain my concluding, upon seeing a certain (real) duck, that there's either a decoy or a real duck present.
However, I think some explanation of the similar appearance is called for. Granted that wooden decoys are not much like real ducks (they're not alive, they don't have any internal organs, they can't fly, etc.), why do they sometimes seem like real ducks? One possible answer is that they produce intrinsically similar visual experiences -- but let us hold off on that answer, which is obviously unfriendly to the ultra-direct realist. Let us look for a less philosophical explanation, more on the level of common sense. Wooden ducks sometimes seem like real ducks because (a) they have certain qualities in common with real ducks (such as shape, size, and color), and (b) often in our acquaintance with particular ducks, it is only those qualities that we can see, and not the qualities wherein the real ducks differ from the fakes. Therefore, it is readily understandable (and consistently with direct realism) that decoy ducks can seem to be real ducks, so as to be mistaken for real ducks.
Could we similarly explain how perceptions can seem like hallucinations (and vice versa), so as to be mistaken for hallucinations? This would require positing some common characteristic(s) between a perception and a hallucination. Notice that a common characteristic is not the same thing as a common component, so this move wouldn't require immediate capitulation to the traditional conception. For example, a blue car and a blue ocean don't have any common component or common type of component (suppose the car doesn't contain any water), but they do have a common characteristic, blueness.
However, this move would also require saying that, when S is acquainted with this particular episode of seeing, he is only acquainted with those aspects of the seeing that are common to seeing and hallucinating, but not any of the aspects wherein seeing differs from hallucinating (such as being veridical) -- that would explain his inability to distinguish this seeing from a hallucination. But it is also part of the ultra-direct realist's story to claim that, in a case of genuine seeing, one is directly acquainted with external objects. Hence, S is directly acquainted with the pink rat. But if S is directly acquainted with the pink rat, mustn't S eo ipso be acquainted with a feature of the situation that differentiates seeing from (non-veridical) hallucination, since no rat would be present in a hallucination? Why doesn't S's acquaintance with the rat immediately tip him off that this is a case of seeing?
Besides, what would be the common characteristics between the hallucinating and the perceiving? The traditional analysis has obvious suggestions to make: they have the same qualia, and/or they have the same content. Could the ultra-direct realist say this? Only with some discomfort, I think. The qualia move would require leaning on a tenuous distinction between states and qualities -- insisting that qualia are not states or events (so as to qualify as components in an event of perceiving) but rather properties of states or events, and then insisting that there can be categorially different states that exhibit precisely the same qualia. And both McDowell and Johnston claim that in a case of normal perception, the actual physical object (e.g., the pink rat) or the physical fact (e.g. that there is a pink rat there) is the content of the perception,(22) so they could not claim that the content of a nonveridical hallucination could be the same as that of a normal perception.
For all these reasons, it is very difficult for the ultra-direct realist to argue that the reason hallucinations can be
mistaken for perceptions and vice versa is that we are aware, in a
case in which we are prone to such an error, of some characteristics
common to the two kinds of state while being unaware of any
characteristics that would serve to differentiate them.
2.3.2. The argument from hallucination, round 2: the causal argument(23)
Consider the plausible "same causes-same effects" principle of metaphysics: if an event or state of affairs, D, is the immediate cause of an event, E, then whenever a state qualitatively identical with D occurs (in relevantly similar conditions), an effect qualitatively identical with E will occur. So suppose there is a chain of causes, in which A causes B, which causes C, which causes D, which causes E, so that D is the immediate cause of E (the last member of the series before E). As long as (an event qualitatively like) D is reproduced, E must follow, regardless of whether (events qualitatively like) A, B, or C have gone before.
The principle needs a qualification to protect it from Cambridge properties and other relational properties. Suppose that a worm crawls to the edge of a certain table. Suppose also that a qualitatively indistinguishable worm crawls to the edge of another table, where the second table has a soda can at one edge of it but the first table doesn't. In the second case but not the first, a result of the worm's action will be that it comes to be, say, ten feet away from a soda can. So in a sense, we have the same causes but different effects in the two cases. To avoid examples such as this, the "same causes-same effects" principle can be modified to exclude relational properties. That is: if D is the immediate cause of E and E is a change in the intrinsic properties of something, then whenever a cause qualitatively indistinguishable from D occurs, an effect qualitatively identical to E must occur.(24)
Now consider how this principle bears on our case (ii), the
case of the vivid hallucination. Suppose, as it is plausible to
suppose, that the hallucination (unlike the perception) is a purely
internal mental state. This state is immediately caused by certain
changes in S's brain state. Those brain processes are the same ones
that occur during a veridical perception of a pink rat, the only
difference between the cases being in what goes on before the brain
events. Therefore, the same internal mental state that occurs
during hallucination must also occur during normal perception.
Therefore, the ultra-direct realist is faced with a dilemma: either
he grants that this internal mental state is part of the episode of
perceiving, in which case he gives in to the traditional analysis,
or he insists that the perception is an entirely separate event.
But the latter alternative is highly implausible. During normal
perception, there are not two separate appearings going on -- during
a perception of, say, a pink rat, there are not two different kinds
of "looking as if there's a pink rat" states going on
simultaneously, one a perceiving and the other a state intrinsically
like vivid hallucination. So the disjunctive theory has to be rejected.
2.3.3. Johnston's alternative
Mark Johnston has a response to the above argument. Johnston's alternative, roughly, is to deny that there are such things as hallucinations, as traditionally conceived.
A hallucination is supposed to differ from an illusion in that in hallucination, no external object of awareness exists, whereas in illusion, there is an external object of awareness, but it appears to have some properties different from those it actually has. Thus, if I see a white rat that looks pink, that would be an illusion, because there is an external object that I'm perceiving. But if I seem to see a pink rat when there's nothing (appropriately causally related to my experience) there at all, then I'm hallucinating.
Johnston argues that we can treat the cases usually thought of as hallucinations, rather as cases of extreme illusion, in which the object of awareness appears both to be located in a different place from where it actually is and to have properties radically different from its actual properties. Consider S's (alleged) pink-rat hallucination again. Johnston would say that this (alleged) hallucination is not a purely internal mental state, as we supposed in the argument of the last section; rather, it is a kind of illusory perception.(25) Because the hallucination is not an intrinsic property or state of S's mind, the 'same causes-same effects' principle, which we qualified to exclude relational properties, does not apply to it.
Well, if the putative hallucination is actually an illusory perception, what is it an illusory perception of? What is it that is appearing to S to have properties other than its actual properties? Johnston offers two suggestions. The first suggestion is that S is perceiving his own internal state -- not his internal mental state, of course (which would apparently introduce just the sort of mental objects Johnston is concerned to avoid), but his internal physical state.(26)
The second suggestion is that S is enjoying awareness of a complex, uninstantiated universal, something like 'pink rat over there' (but more complex, to include all of its apparent visible attributes). Notice that 'pink rat over there' is a universal, specifically a type of thing. In this case, it would be an uninstantiated universal, because there wouldn't be any particular that is a pink rat over there.
Many readers, I suspect, will find both of these suggestions counter-intuitive. But it is important to see what necessitates such moves. In order to avoid the argument from hallucination, the ultra-direct realist is forced to deny the existence of genuine hallucinations and to construe putative hallucinations as an unusual species of the perceiving relation. Otherwise, he would have to admit that hallucination was an internal mental state, and then he would have to admit that that same state must also occur during normal perception, because of the causal argument rehearsed above.
Let's try to articulate why Johnston's suggestions are counter-intuitive, and let us begin with the 'internal state' account. I suggest that our intuitions follow my third condition on perception ((iii) in the 'traditional analysis'), as well as the more general condition on awareness discussed in chapter 1, namely that our being aware of something requires at least a rough match between the nature of the thing and the content of our representation of it.
I will defend this principle at greater length in the next section. For now, let's just focus on how Johnston's proposals violate it. It is immediately obvious that, in the case of the pink rat hallucination, there is nothing about one's internal state that is remotely like what appears to the subject to be there -- there is nothing remotely like a pink rat physically inside of one. And it is worth pointing out that I need appeal here only to the weakest form of the content-satisfaction condition. What I mean by that is that anyone sympathetic to the idea that there needs to be at least some degree of similarity, if only a very minimal degree of similarity, between the nature of an object of awareness and the way the object appears, would surely agree that this case violates the principle, if any case does. (True, there are some pink things inside a person's body, but I could easily have chosen an example where the object's apparent color would be unlike the color of anything in one's body, and where the hallucinated object would be in other ways as different from the agent's internal state as anything visible could possibly be.) If Johnston would maintain that this case can be the awareness of the subject's internal state, then he must reject any sort of content-satisfaction condition on awareness.
It may seem that the case is otherwise with the complex-universals account. It may, indeed, seem that the 'pink rat over there' universal is very similar to a pink rat over there. But it is not. The complex universal is, if anything, even more radically different in nature from what the hallucination is ostensibly of, for they are not even in the same metaphysical category. The experience ostensibly represents a pink rat -- that is what appears to the subject to be present. The complex, uninstantiated universal is about as different from a pink rat as anything can be. The uninstantiated universal, for instance, is not pink, it is not alive, it doesn't have fur on it, and it isn't located over there (in the place where a pink rat seems to be). It isn't even located in space-time, if some realist philosophers are to be believed. So, again, it seems that Johnston would have to reject even a minimal content-satisfaction condition on awareness.
He would also have to reject the causal condition on perceptual awareness, because the complex universal does not stand in any sort of causal relation to the subject of the hallucination (or to anything else). It is doubtful that universals ever cause anything. Armstrong maintains that particulars have the causal powers they do in virtue of the universals they instantiate,(27) but that is not the same as saying that the universals, themselves, have causal powers, and in any event, even Armstrong would not maintain that uninstantiated universals give causal powers to anything.
So of the four conditions on perception I listed in the
traditional analysis, Johnston must reject (i), (iii), and (iv), if
he would defend the uninstantiated-universals account of
hallucination. That leaves only (ii), the condition that the object
of awareness should exist. This leaves things looking as if almost
anything goes -- as if a perception or state subjectively like
perception could count as awareness of just about anything. There
might be some necessary conditions on perception other than those
given in my 'traditional analysis,' but (iii) and (iv) are the most
obvious candidates for necessary conditions on perception, and it
is far from clear what should replace them.
2.3.4. What the argument doesn't show
It is important to distinguish the conclusion of section 2.3 from a more radical conclusion that the argument from hallucination is sometimes deployed in support of. I have only used the argument from hallucination to demonstrate the existence of perceptual experience, and that is all I think it does show. Some philosophers, on the other hand, have taken the argument from hallucination to show that some form of indirect realism must be true. We will examine that argument in a future chapter. For now, what is important to notice is that I have not so far said anything that would commit me to endorsing the argument from hallucination as it is deployed in support of indirect realism. I have argued that our awareness of the world is partly constituted by the occurrence of certain purely internal mental states called "perceptual experiences." That is a world away from arguing that our awareness of the world is either constituted by or based on our awareness of those purely internal mental states. In introducing 'perceptual experience,' I have not introduced an intermediary object of awareness, standing between us and the world; what I have introduced is simply part of an analysis of what it is to be aware of the world.
I fear that a confusion on this score is a major cause of the
resistance to the traditional analysis of perception, on the part
of ultra-direct realists such as John McDowell. McDowell's
discussion of the argument from hallucination leaves the intended
conclusion of the argument unclear, seemingly hovering between the
moderate and radical positions, perhaps embracing both. Just after
mentioning the question of how we can ever know we're not
hallucinating, McDowell writes:
An objection on these lines would be appropriate if I were
aiming to answer traditional sceptical questions, to address
the predicament of traditional philosophy. That is the
predicament in which we are supposed to start from some anyway
available data of consciousness, and work up to certifying
that they actually yield knowledge of the objective world.(28)
This makes it sound as though what McDowell is concerned to avoid
is indirect realism -- the view that our knowledge of the external
world is always based on our awareness of our own mental states.
Lower down, he says the argument from hallucination "is supposed to
show that the genuinely subjective states of affairs involved in
perception can never be more than what a perceiver has in a
misleading case." If by "genuinely subjective," McDowell means what
I call "purely internal" (i.e., mental and capable of existing in
the absence of external objects), then the conclusion under
discussion is just that there are perceptual experiences. But
McDowell goes on to warn, "This strains our hold on the very idea
of a glimpse of reality . . . We cannot have the fact itself
impressing itself on a perceiver. This seems off key,
phenomenologically, and we can resist it if we can so much as
comprehend the idea of a direct hold on the facts." Now it sounds
as if we're talking about indirect realism again. But McDowell does
not explain why the existence of perceptual experiences entails or
suggests that "we cannot have the fact itself impressing itself on
a perceiver." Perhaps the fact's impressing itself on a perceiver
just is its causing (in an appropriate way) a 'matching' perceptual
experience in the perceiver. McDowell's own choice of metaphors
seems to lend itself naturally to such an interpretation. Consider
how a seal impresses itself on a lump of wax. The seal's impressing
itself on the wax consists in the seal's causing (in an appropriate
way) a certain mark in the wax that matches the mark on the seal.
Why may not the external world's impressing itself on a perceiver
consist in the world's causing in the perceiver a veridical
perceptual experience (of course in this case, there is a different
sort of 'matching' relation called for)? No one would think to
insist that for a seal to truly impress itself on the wax in a full-blooded sense, the impression so produced must be metaphysically
inseparable from the seal, or that the seal must be a constituent
in the impression. Nor is it clear why McDowell's image of
"openness to the world" should be thought to preclude the
traditional analysis of perception. Perhaps our openness to the
world consists in our receptivity to just such impressions.
2.4. The content-satisfaction condition
To recapitulate, we said that the traditional analysis of
perception embodies four claims:
(i) that there are such things as 'perceptual experiences' (as defined in section 2.2),
(ii) that for any S and X, S perceives X only if X exists,
(iii) that S perceives X only if X roughly satisfies the content of S's perceptual experience, and
(iv) that S perceives X only if X causes S's perceptual experience
(in an appropriate way).
I have mostly taken (ii) and (iv) for granted. Sections 2.2 and 2.3 have been concerned with defending (or at least explicating) (i), as against two rival accounts of perception. It is now time that we turn to condition (iii), the content-satisfaction condition, which as we saw played a crucial role in our criticism of Johnston's version of direct realism.
The underlying motivation for condition (iii) is the idea that awareness should be something we could think of as cognitive contact with reality. It doesn't seem that an intentional state can bring one into contact with reality by radically misrepresenting reality. McDowell's metaphor of 'openness to the world' can also be invoked here. It doesn't seem that we can be open to the world by virtue of having representations that fail to correspond even roughly to anything in it.
On a less metaphorical note, the content-satisfaction
condition accounts for our reluctance to attribute awareness (or
specific forms of awareness, such as seeing) in a number of cases.
(a) Suppose that there's a cat on the table in front of me.
Suppose that somehow or other, the cat's presence is causing me to
have an experience such that it looks to me as if there's a green
elephant charging into the living room. In that case, we would not
say that I am seeing (or otherwise becoming aware of) the cat, and
I do not think this has anything essentially to do with the nature
of the causal chain linking the cat to my experience. It doesn't
matter what the causal chain is like; a charging-green-elephant
experience can't count as awareness of a cat sitting on a table.
(b) Consider a case of normal perception, such as my seeing a cat in front of me. Why does my experience count as seeing the cat, but not count as seeing (or otherwise apprehending) brain processes in my visual cortex? The brain processes, of course, are at least as much causally responsible for my perceptual experience as the cat is.
Similarly, why doesn't my experience count as seeing my retinal image, the state of the electric and magnetic fields between the cat and my eye, or even the light rays incident on the surface of the cat? In all of these cases, I think it is difficult to find the causal chain 'deviant.' The way those other things cause my perceptual experience does not seem fundamentally different from the way the cat causes my perceptual experience. So it is difficult to account for these instances of non-perception using only condition (iv).
The content-satisfaction condition produces straightforward answers to these questions. None of those other things comes close to satisfying the content of my perceptual experience, which is a perceptual experience of a cat. Put another way: it looks to me as if there's a cat in front of me; it does not look to me as if there is a brain state, retinal image, complex state of electric and magnetic fields, or set of light rays, either in front of me or anywhere else.
The failure of satisfaction is most obvious for the brain state, light rays, and electro-magnetic states. My visual experience represents something of a certain color, such as orange. The brain state or process doesn't have a color (the brain has a color, but a brain process doesn't). The incident light rays may be colored, but they (typically) won't be the color of the cat. And the state of the electric and magnetic fields between the cat and my eyes probably doesn't have a color, unless it is identical with the colored light rays traveling between the cat and my eyes (even then, it is probably not colored in the same sense). Moreover, in any case none of these things is remotely cat-shaped.
The retinal image is the only item on our list that seems to match the content of the experience in terms of color and shape. In spite of this, however, the retinal image falls short in other, crucial respects. For one, the retinal image is much smaller than the object my visual experience purports to represent. It is also not close to being in the right location. The retinal images are two-dimensional, and there are two of them, whereas I seem to see a single, three-dimensional object. The retinal images don't move the way the object I see moves; if I turn my head to the left, the retinal images move to the left, but I don't experience the object of sight moving to the left. Most importantly, my visual experience essentially represents something (ostensibly) physically external to me, something in front of my eyes.
It is not that perception can never be of something that is
located elsewhere than where it appears. In some cases, an
experience can count as of a certain particular, even though that
particular is not where the experience represents it to be. In some
cases also, an experience can count as being of a certain
particular, even though that particular is not the shape the
experience represents it to be. And the same goes for color, or any
other given property. But what condition (iii) intends is that if
enough goes wrong in the representation of an object, the
representation ceases to be of that object. In the case of the
retinal image, there are two special factors that make the would-be
misrepresentation of the position of the object ("would-be" because
it would be a misrepresentation, if the object of the experience
were the retinal image, but it is not if only the cat is the object
of the experience) particularly important. The first is that the
distinction between physically internal and physically external
objects (objects in one's body versus objects outside one's body)
is particularly important to us. For this reason, even if my eye
is very close to the cat, so the difference in position between the
cat and the retinal image is only a few inches, it is still a
significant difference -- although a difference of a few inches
between positions external to my body would not be so significant.
The second factor is that in this case there simply is a much better
cat-shaped candidate for object of awareness, which neither contains
as a part nor is contained by the retinal image, and that is the
cat. The existence of an object that comes much closer to
satisfying the content of the experience makes the shortcomings of
a given candidate object of awareness more significant. If there
were no actual cat, then it might be plausible to say that I was
seeing the retinal images, or perhaps the cat-shaped light patterns
on the surface of my eyeballs.
Intuitions in favor of the content-satisfaction condition can be strengthened by noticing that a similar principle applies to thought and language. Thoughts about a given object have the same kind of content-satisfaction condition as I am claiming perceptions of objects have. That is, if a conception of an object gets too much wrong about the nature of the object, then it ceases to be a conception of that object. Likewise, the word associated with that conception ceases to refer to the object.
For example, suppose an anthropologist reports to me that he has discovered a tribe where the people all believe that the sun is a powerful deity. So far, I might say, "That's interesting. What else do they believe?" Suppose he goes on to explain that, unlike us, they do not believe that the sun rises and sets daily. They don't think that it is round or yellow, nor that it radiates heat or light. Instead, they think that the sun is a small tree, living on a faraway mountainside. By this time, I would be ready to insist that whatever it is the tribesmen have those beliefs about (if indeed there is anything their beliefs are about), it is not the sun, and whatever word they are using that the anthropologist has interpreted as referring to the sun, it does not refer to the sun. (It is interesting that some beliefs are more important than others: the observational beliefs (that the sun is round, yellow, and radiates light and warmth) are much more important than the 'theoretical' beliefs about whether the sun is a deity, a ball of burning gases, or whatever.)
In a similar vein, if I report to you, "I saw an orange cat
sitting on a table today, but at the time I saw it, it looked to me
like a charging green elephant," you can respond sensibly, "Whatever
you saw -- if indeed you saw anything -- it can't have been an orange
cat sitting on a table."
But now we have to consider a very serious family of
objections, based on the phenomenon of illusion. Sometimes objects
appear to us in ways other than they are. This is not per se a
problem for condition (iii), since condition (iii) does not require
an exact correspondence between intentional content and reality.
It leaves some room for illusion. The Müller-Lyer illusion, for
example, or the mild color illusions (if illusions is what they are)
caused by nonstandard lighting conditions are easily assimilated.
But sometimes objects appear ways that are radically different from
how they actually are.
(a) An example from Johnston is that one might enter a dark room,
see a coiled rope in the corner, but see it as a snake.(29) Snakes
are really not very much like ropes, considered overall.
(b) A similar problem arises for my example involving beliefs,
because I conceded that the natives could believe about the sun that
it was a deity, but all in all, the sun really is not very much like
a deity (even a round, yellow, heat-and-light-giving deity). I said
that observational beliefs were more important than 'theoretical'
beliefs about the sun, but I did not explain why, and if overall
match between the nature of the object and the way it is represented
is at issue, the theoretical beliefs would seem to be, if anything,
more important than observational beliefs. The sun is objectively
more similar, overall, to a tree than to a deity.
These observations are not to be disputed. Accordingly, condition (iii) needs to be modified to accommodate them. Consider Johnston's rope again. Our idea in modifying condition (iii) will be that the experience should count as awareness of the rope due (in part) to the fact that the rope has approximately the right shape, color, and location, and that these are the properties that count in this situation. Similarly for the case of the natives and the sun. Their conception will count as being of the sun (at least in part) due to the fact that they identify the sun as the source of the light and heat shining down on them in the daytime and identify its approximate shape, direction, and color and the times at which it is visible. In this case, we want to say, those are the properties of the sun that count.
The task, then, is to identify what's special about those properties, so that we may restrict condition (iii) to that sort of property. An initial thought is that they are in some sense the most directly observable properties. This is the idea that I shall take up.(30)
Well, in what sense? Recall Jackson's distinction between direct and indirect perception, from section 1.4. Since I've already used the terms "direct" and "indirect" for something else, let's use "primary" and "secondary" to mark Jackson's distinction (with the intended implication that the 'secondary' kind of awareness is somewhat second-rate qua awareness -- it is a more tenuous sort of contact with reality). So you're aware of something in the primary sense if and only if you're aware of it, not in virtue of being aware of anything else. My claim is that condition (iii) holds true of primary awareness.
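The definition of primary awareness just given can also be put schematically (again, the predicate symbols are introduced here only for clarity, not as part of the official apparatus):

```latex
% S is aware of x in the primary sense iff S is aware of x, and
% S's awareness of x does not obtain in virtue of S's awareness of anything else:
\mathit{PrimAware}(S, x) \;\leftrightarrow\;
  \mathit{Aware}(S, x) \,\wedge\,
  \neg \exists y \,\, \mathit{InVirtueOf}\bigl(\mathit{Aware}(S, x),\, \mathit{Aware}(S, y)\bigr)
```

On this rendering, the restricted content-satisfaction thesis is that condition (iii) governs PrimAware, while secondary awareness inherits its objects through the in-virtue-of relation.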
Once we've made this distinction, it is easy to think of other
sorts of counter-examples to the unqualified version of (iii):
(c) Suppose that I see Bob in virtue of seeing Bob's head. (It's just sticking up over the window pane.) Arguably, the content of my visual experience is satisfied by the head, and only by the head. My visual experience doesn't represent the rest of Bob.
It could be argued here that I at least take there to be a
whole human body attached to the head, and so the whole of Bob
satisfies the content of that state. But let's specify the case
otherwise. Suppose that Bob is a member of an alien species that
looks very different from humans (including that their heads look
very different from any part of a human body), and I am completely
unfamiliar with this species. So when I see the spiny, green lump
that is in fact Bob's head sticking up over the window pane, I do
not take there to be any larger body attached to it. I have no
expectations at all about what may or may not be attached to that
thing. Nevertheless, I am in fact seeing Bob's head and so, in
virtue of that fact, seeing Bob. I don't know that I'm seeing Bob,
but I am. Bob the alien doesn't come very close to satisfying the
content of my visual experience, since Bob is much larger than his
head, and he has lots of interesting features that the head doesn't have.
(d) On second thought, I only see Bob's head in virtue of seeing
the portion of the surface of Bob's head that is facing me. The
content of my visual experience, arguably, is satisfied by the
facing portion of the surface, and not by the whole head. (If the
reader thinks there is a state of taking there to be a whole head
attached to the surface, again, we can specify the case so that I
do not take the surface to be attached to any particular sort of
interior and back portion. I am still seeing the head, by virtue
of seeing the facing portion of the surface.)
Notice that cases (c) and (d) do not involve illusion. They
only involve seeing-in-virtue-of. This suggests that our
modification to condition (iii) was on the right track. But now,
to see whether the qualified version of (iii) is correct, we need
to examine what the primary objects of awareness are. There are at
least these four kinds of secondary perception:
(1) One can perceive X by virtue of perceiving an (important) part
of X (e.g., S can see Bob by seeing Bob's head, but not by seeing
some unimportant part, such as a single hair of Bob's).
(2) One can perceive X by virtue of perceiving the surface of X
(perhaps this is a special case of (1)).
(3) One can perceive X by virtue of perceiving an effect of X (e.g.,
S can hear the piano by hearing the sounds produced by the piano).
(4) One can perceive X by virtue of perceiving a property of X
(e.g., S can see the cat by seeing the cat's shape).
Each of these suggests a candidate for the primary objects of perception. Focusing on (1) might lead to the (abortive) suggestion that the primary objects of perception are the smallest parts of whatever we perceive -- abortive because if there are any such smallest parts, they are elementary particles, which we do not perceive. Alternatively, it might lead to the suggestion that the primary objects of perception are the smallest perceptible parts of whatever we perceive -- which is also problematic, because there don't seem to be any smallest perceptible parts. I say this partly because there is no definite cut-off point for the size of an object that we can perceive, but even if there were, there would be no privileged way of deciding which collection of pieces of that size we are perceiving (there are infinitely many ways of dividing a given area into, say, 1 mm² pieces).
(2) gives rise to the suggestion that perhaps the primary objects of at least visual and tactual perception are surfaces of material objects. This has a lot of initial plausibility, but I think there's a better candidate.
(3) invites us to look for items last in the causal chain leading up to our perceptual experiences. In the case of vision, one might be tempted to posit light rays as the primary objects of perception, or even sense data, if one goes to extremes. In saying that (3) 'suggests' such views, I do not mean that there is a good argument from (3) to the conclusion that we primarily perceive light rays or sense data. I have previously explained why (due to the content-satisfaction condition), we do not normally see light rays, and later I will argue that we do not perceive anything like sense data. I just mean that thinking about (3) naturally leads one to think of the theory that we primarily perceive light rays or sense data.
Finally, (4) leads to the suggestion that we primarily perceive the sensible properties of things.
Of course, there could be many kinds of object of primary perception, but nevertheless, I believe that only certain properties, relations, and (possibly) actions of external things are primarily perceived. When I say "properties" in this context, I mean particular property-instances (i.e. tropes), not universals. So one can primarily perceive the orangeness of this cat, but not simply orangeness in the abstract.
To make this thesis plausible, I'll explain how the view accounts for our perception of each of the kinds of things mentioned above and give examples that the other mentioned views as to the primary objects of perception would have difficulty with.
Sometimes you see a physical object by seeing an important, three-dimensional, proper part of it -- as in the case of seeing Bob by seeing his head, or upper torso, etc. Other times you see a physical object but not by seeing a three-dimensional proper part of it (as when you see Bob unobstructed, from his head to his toe). In either of these cases, there's a three-dimensional object that you see not by seeing any other three-dimensional object, and about this thing, you see it by seeing its surface. You see its surface by seeing the portion of its surface facing you. And you see the portion of its surface facing you by seeing the shape (in three-dimensional space) and/or color of that portion of the surface.
Notice that it would be inappropriate to try to turn this around. It sounds right to say that whenever you see a surface, you do so by virtue of seeing its color and/or its shape. But it is wrong to say that you see the color or shape of a surface by virtue of seeing the surface. As a result, the view that we primarily see only surfaces could not account for the fact that we see the properties of surfaces. In fact, there does not seem to be any plausible candidate for a thing by seeing which we see the properties of surfaces.
The same goes for feeling objects -- when one feels a material object, one usually does so by feeling a certain three-dimensional, proper part of it (unless it is so small that part of one's skin covers its entire surface). E.g., one feels the elephant by feeling its side. When one does so, one feels the three-dimensional part (or the whole object) by feeling its surface, which one does by feeling the portion of the surface in contact with one's skin. And one feels the portion of surface by feeling its texture, its temperature, the pressure it exerts on one's skin, and/or its shape. Again, one could not turn this around. One could not say that we feel the temperature of the surface by feeling the surface. (Be careful, though -- in one sense, we feel the temperature of a surface by feeling (i.e. by means of touching) the surface. What is wrong is to say that we are aware of the temperature of the surface by virtue of being aware of the surface.)
Perceiving X by virtue of perceiving Y is transitive: if I perceive Bob by perceiving Bob's head, and I perceive the head by perceiving its facing surface, then I perceive Bob by perceiving the facing surface of Bob's head. So whenever one sees a material object, one does so by seeing the properties of the facing portion of its surface. And whenever one tactually perceives a material object, one does so by tactually perceiving the properties of the portion of its surface in contact with one's body.
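The transitivity claim can be displayed schematically (writing 'PerceivesBy(S, x, y)' for 'S perceives x by virtue of perceiving y'; the notation is introduced only for exposition):

```latex
% Transitivity of perceiving-by-virtue-of:
\forall x \,\forall y \,\forall z \,
  \bigl(\, \mathit{PerceivesBy}(S, x, y) \,\wedge\, \mathit{PerceivesBy}(S, y, z)
    \;\rightarrow\; \mathit{PerceivesBy}(S, x, z) \,\bigr)
```

In the Bob example, the chain runs from Bob through his head and its facing surface down to the shape and color of that surface, which are the primary objects.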
I also mentioned the case of hearing a piano by hearing the sounds produced by the piano. We hear the sounds by hearing the pitch and/or loudness of the sound waves. A similar thing will be said about the objects of the other senses. We smell things by smelling the qualities of odors (e.g., smelling the sourness of the odor). We taste things by tasting their sweetness, sourness, or ... (etc.) Any view that does not take qualities of things as objects of primary awareness will have difficulty explaining how we are aware of the qualities of things. And once we accept qualities, relations, and actions of things as objects of primary awareness, we can account for the awareness of everything else that is perceived.
Now, here are some other, somewhat more exotic (alleged) perceivables to consider:(31)
- Can we perceive holes, like a hole in a piece of swiss cheese? Yes, one perceives the hole by seeing the shape of a certain part of the surface of the cheese.
- Can we perceive spaces, like the space between my hands? On a relational theory of space, yes, one perceives the space by perceiving the spatial relation between the hands. On an absolute theory of space, probably not (because the space between my hands is constantly changing, as I hurtle through space, and I am unable to distinguish one portion of absolute space from another).
- Can we perceive events, like a flash of lightning or a clap of thunder? Yes, one perceives the lightning by seeing the color, position, and brightness of the light that is given off during the flash. One perceives the clap of thunder by perceiving pitch and loudness properties of the sound waves that are given off, as well as their direction.
- Can one perceive a rainbow? Yes, one perceives the rainbow by seeing the colors of light coming from certain directions. (The perception may be considered an illusion, though, because it looks as if there is a rainbow-shaped object in that location, which there is not.)
- Can one perceive a shadow? Yes, one sees a shadow cast on a surface by seeing the brightness of a certain part of the surface, or perhaps the brightness of the light reflected by a certain part of the surface.
- Do we see after-images? In my view, no -- after-images are a kind of hallucination. (But note that we can 'see' after-images in the phenomenal sense. See above, p58.)
- Can we see pictures on a movie or television screen? Yes, we see those by seeing their colors and shapes. There are two plausible views as to what that consists in. One is that we see the colors and shapes of the images by seeing the colors and shapes of certain areas of the screen. This view involves holding that the regions of the screen have the colors they appear to have when the movie projector is operating, rather than remaining white or light grey, as they are during normal lighting conditions. The other view is that we see the colors and shapes of images on the screen by seeing the colors of the light reflected from certain regions of the screen, and the shapes of those regions of the screen. So if a red square is projected on the movie screen, then I see the redness of the light reflected from a certain square region of the screen, and I see the squareness of the region that is reflecting red light.
To return to the objection with which we began: one perceives the coiled rope in the corner as a snake. The rope doesn't come close to satisfying the snake-representing part of the content of the visual experience. However, the rope is not what is primarily perceived. What is primarily perceived is the shape, color, and relative position of the facing surface of the coil of rope, and those things do satisfy the content of the visual experience. So the modified version of (iii) escapes counter-example. But let's take some more examples along the same lines, to see if the new principle really handles all the cases of illusion.
- Suppose I'm wearing night-vision goggles, which make everything appear green (they also distort the brightnesses of things). In that case, nothing in my field of vision satisfies the content of my visual experience in respect of color, although I do still see things. This isn't a problem, because I can still see the material objects in my field of vision by virtue of seeing the shapes of their facing surfaces. This case actually provides some support for condition (iii), because we would not say that in this case I am seeing the colors of things, and this is explained by the content-satisfaction condition.
- Take the infamous straight stick that appears bent in water. It fails to satisfy the content of my visual experience in respect of shape, but I'm still seeing it. That's okay, because I am seeing it by virtue of seeing the top (dry) portion of it and the underwater portion of it. Each of these portions is correctly represented as to shape and color, although their spatial orientation relative to one another is misrepresented.
- But those are easy cases. Let's look for trouble here. Let's suppose I'm looking at myself in a 'funhouse' mirror that distorts my image in multiple ways -- it makes me look fat in some places, ridiculously thin in others, and also curves my image over in a wide arc to the left. Nevertheless, it seems that this still counts as seeing myself.
It could be argued that my visual experience is correctly representing my mirror image, and that I only see myself by virtue of seeing the mirror image. Rather than contest the ontology of images, let us switch to an example that wouldn't involve images.
- Suppose instead that I am looking at Sue, and that very unusual atmospheric conditions (extremely localized pockets of very hot air, etc.) are causing similar distortions in her appearance by bending the light rays traveling between Sue and my eyes in various ways -- they make her look too fat in some places and too thin in others. Furthermore, I'm also wearing the famous tinted glasses that make everything appear green. Thus, it seems, I am aware of neither Sue's color nor her shape. Am I still seeing Sue? Am I seeing anything?
One thing we don't want to do here is resort to 'images,' because there's no natural place for Sue's image to reside, unless it is either a retinal image or a mental image -- and that would strongly suggest that we see such 'images' during normal vision as well.
But even in this apparently extreme example, there are important properties of Sue that I am aware of and that are correctly represented in my visual experience -- for instance, that Sue has two arms attached to her upper torso, a head attached to a neck, etc. That is to say, in general, the topological relations between the parts of Sue are preserved.
Of course, some properties of Sue will always be correctly represented (such as the property of occupying space), so there might be a question as to the limits of the sort of move I'm making. In all of these examples, I identify some properties of the object of perception that correspond to aspects of the content of the subject's experience, and claim that the subject is primarily aware of those properties in accordance with my analysis of perception, and that the subject is only secondarily aware of the object. Intuitively, this move is only plausible where some reasonably significant and specific properties of the object can be identified, not merely a property like 'being a material object' or 'occupying space.' And it is open to question whether the topology of Sue's facing surface is a significant enough property to plausibly claim that I am aware of Sue by being aware of those properties and relations.
That it is really a significant and reasonably specific set of properties, I think, is shown by this fact: it would be possible, without any special knowledge or training, to recognize Sue on the basis of those properties. If I'm running around the funhouse looking into these various mirrors that distort images in all sorts of unpredictable ways, and I see a particular image in a mirror but I can't directly see the object being reflected, I could still recognize who it was. I could still tell whether I was seeing the reflection of Sue, or me, or a cat.
Note that philosophers who would find a counter-example to the qualified version of (iii) would have to hold either that it is possible to be aware of X without being aware of anything significant about it (i.e. any significant properties of it), or that it is possible to be aware of a property of X without coming close to correctly representing that property. The latter option would require holding, for example, that you can be aware of the redness of something, even though the object looks green to you, or aware of the squareness of it, even though it looks round to you, etc. And either option would seem to defeat the point of claiming awareness of things. Johnston himself recognizes something of this:
An important part of what seems to make sensing intrinsically valuable is that as well as providing us with propositional knowledge about just which properties objects have, sensing also acquaints us with the nature of the properties had by those objects. It reveals or purports to reveal what those properties are like.(32)
Indeed it is hard to make sense of having genuine acquaintance with things without being acquainted to some extent with their natures and so with some of their properties.(33)
In other words, one reason for accepting some form of condition (iii) is that otherwise, it would not matter whether we were 'aware' of things or not, and the concept of awareness would be largely uninteresting, since our being 'aware' of all sorts of external things would be compatible with the world being completely different from how it appears.
- Now what if we try to make an even more extreme example, by having even Sue's topological properties radically misrepresented? To forestall the possibility that I see Sue merely by seeing some significant part of Sue (such as her head), we should have to also let the topological relations between the parts of each significant part of Sue be misrepresented (or unrepresented). And now what would the visual experience be like?
It might be something like this: Sue's presence before me causes me to have a visual experience of a tree, or an unrecognizable green blob. But in this sort of case, I would feel little hesitation in simply denying that I'm aware of Sue any longer. Rather, this would just be a case of Sue's causing me to hallucinate -- not in principle different from the eccentric brain surgeon causing me to hallucinate by sticking electrodes in my brain, or a bit of LSD causing me to hallucinate.
One final note about the content-satisfaction condition: the condition is still qualified by the term "roughly." This qualification is needed because, while I don't think that a state in which a red object looks green to you can count as your being aware of its color, I do think that you can be aware of a red object's color in the face of relatively mild color illusions (e.g., when it appears a different shade of red). Otherwise, we might have to say that people are hardly ever aware of the colors of things.
"Roughly" is a very vague term. Some readers will want to construe the roughness very liberally, while others will want to construe it more restrictively. It isn't necessary to remove the vagueness, for two reasons. First, because "aware" is itself a vague term. Ordinary usage does not determine a definite cut-off point in the progression from the case where Sue is perceived perfectly normally to the case where Sue causes a visual experience of a tree. Nevertheless, there are clear cases -- the former case is clearly awareness, and the latter case is clearly hallucination.
Second, the use to which I have put the content-satisfaction condition, in arguing against Johnston's view of hallucination, and the use to which I will put it later, in arguing against indirect realism, only require a very liberal content-satisfaction condition. If even a very liberal content-satisfaction condition is accepted, a visual experience of a pink rat cannot be construed as (primary) awareness of an ensemble of universals, nor of a brain process, since the properties of those things are not remotely like the properties of a pink rat (or, more precisely, of the facing surface of a pink rat). A strict content-satisfaction condition will, of course, support the argument even more clearly.
Nor would there be comfort for Johnston in supposing the experience to be merely secondary awareness of an ensemble of universals or an internal physical state, since an object of primary awareness would still have to be found. Finding an object of secondary awareness would really have done no work. And without resorting to such posits as sense data or 'intentional objects,' it will not be possible to find any appropriate object remotely satisfying the content of the nonveridical hallucination.
3. THE NATURE OF PERCEPTUAL EXPERIENCE
In the last chapter, I defined the notion of a perceptual experience and assigned it a prominent role in the analysis of perception. Clearly, to have a theory of perception, we need to know more about the nature of perceptual experience.
A typical perceptual experience, such as an experience of feeling a pen in one's hand, has three distinct, intrinsic aspects:
(i) It has certain sensory qualia (such as the qualia of the sensations of pressure, warmth, and smoothness);
(ii) It has a certain representational content (such as a content ostensibly representing a pen or a pen-shaped surface); and
(iii) It has a degree of 'forcefulness' (to be explained below).
The choice of the word "aspects," as opposed to "components," is deliberate. A perceptual experience is a state that simultaneously has all three of the properties listed above. It is not, for example, a collection of three states, one a quale, one a content, and one a forcefulness.
The word "typical" is also deliberate, because there are some perceptual experiences that fail to satisfy (i). However, all perceptual experiences have the properties listed in (ii) and (iii); these are, in my opinion, essential properties of perceptual experience, in the sense that no state lacking (ii) or (iii) could be considered a perceptual experience.
3.1. Sensory qualia
Some philosophers make a distinction between sensation and perception,(34) and some of them would have said, in place of (i), that perceptual experiences contain sensations. On this view, a sensation would be a component in perception, and perhaps a perceptual experience would be a collection of sensations, or perhaps it would consist of sensations together with some other sort of state.
In my view, a sensation is a certain sort of state having qualia (other kinds of states can have qualia as well, such as emotions). Some sensations, besides having qualia, also have (representational) contents. For example, a sensation of pressure has both a quale and a content. It has a content, since the sensation is capable of counting as awareness of some pressure against one's skin. There may also be sensations lacking contents, such as sensations of pleasure or pain -- we can leave that open for now. All of this, I think, corresponds to how the word "sensation" is ordinarily used. It also seems that "sensation" is used primarily to refer to tactile and taste sensations. It's natural to speak of sensations of warmth or pressure, but it's odd to speak of 'sensations' of loudness or redness. Philosophers are so much in the habit of speaking that way that they may be insensitive to the strangeness of it, but one never hears about people having 'the sensation of red' in ordinary contexts. One does hear about having -- or rather, feeling -- sensations of warmth in ordinary contexts. And it would be even more odd to say that one either feels or sees the sensation of red.
Misusing the word "sensation" is part of an unfortunate tradition in the philosophy of perception and philosophy of mind -- a tradition that Locke inaugurated by using the word "idea" to cover the likes of perceptual experiences, sensations, concepts, emotions, and memories; and Hume continued by using the word "perception" similarly. In spite of the dangers inherent in misusing language, however, I propose to allow the philosophical use of "sensations," to cover the qualia-bearing states involved in all forms of perception, not just those involved in feeling and tasting. This is, at least, not as serious a misuse of language as the examples from Locke and Hume, and there does not seem to be any other ordinary term that would be appropriate for this kind of state.
A sensation that has content is a perceptual experience. For instance, a pressure sensation purporting to represent something touching and pressing on one's skin is a kind of perceptual experience. But if, say, sensations of pain lack representational content, then they are not perceptual experiences.
Well, what is a quale, anyway? A quale is a certain kind of property of certain mental states. Roughly speaking, a quale is the intrinsic, qualitative character of a sensation or emotion. Another way of explaining the idea is to say that the quale of a mental state is what it's like to be in that state. In the case of emotions and tactile sensations, that can be rephrased as "how it feels" to have such a mental state. This doesn't work for other sorts of sensations. For example, describing the quale of a certain olfactory sensation as "what it smells like" flops, because sensations don't smell like anything. An odor or object that gives off an odor can smell a certain way (it can smell like a rose, smell like soap, etc.), but a sensation can't. The reason for this asymmetry is that "feel" can be used to mean "have (as one's mental state)," in addition to being able to be used to mean "perceive by the sense of touch," whereas "smell" can only be used to mean "perceive by the sense of smell." To feel a pain is not to perceive a pain by the sense of touch; rather, to feel a pain is just to have the pain. But to smell X is always to perceive X by the sense of smell. Consequently, while you can 'feel' tactile sensations, you cannot smell olfactory sensations. You also cannot see visual sensations, taste taste sensations, or hear auditory sensations.
The existence of qualia is, in my view, simply a datum of introspection. It is, for example, immediately obvious that a sensation of warmth feels a certain way, and an itch feels another way. No argument for that is either required or possible. Note that I have not, however, taken any stand regarding whether or not a physicalist or reductionist account of qualia might be given.(35) Whether such an account is possible is beyond the scope of this dissertation. I have, of course, ruled out eliminative materialism by speaking of qualia. But then, I suppose that any talk about such things as 'perceiving' and 'knowing' rules out eliminative materialism, and there is no great loss there.
I take there to be distinctive qualia associated with temperature and pressure sensations (as already mentioned), sensations of texture (e.g. of smoothness or roughness), color sensations (of red, etc.), sound sensations (e.g. of ringing or squeaking), taste sensations (of sweetness, etc.), and smell sensations (e.g. of the smell of a rose) -- but not with 'sensations' of shape, size, or location. I don't believe there is such a thing as a sensation of squareness, a sensation of smallness, or a sensation of 'hereness.' This isn't for any deep, theoretical reason -- it isn't, for example, that I think it is theoretically impossible to have a sensation of a primary quality. It is just that when I reflect, I do not believe I can identify the quale associated with, for example, square things. There is, surely, an introspectible difference between seeing, say, a red square and seeing a circle of the same shade of red. But I think this difference can be regarded as purely a difference of intentional content.
However, on the basis of our definitions of "qualia," it is not immediately obvious that this claim rules out there being a quale associated with seeing squareness. It's not obvious that what sort of content one entertains when having a certain mental state is not already part of "what it's like" to have that mental state, or part of the "qualitative character" of it. So to rephrase my observation: I do not believe there is any difference between the qualia of a visual experience of a red square and the qualia of a visual experience of a red circle apart from the difference in the contents of these experiences.
This raises the question of whether a similar claim might be made about all aspects of all perceptual experiences. Could it be maintained, for example, that the 'qualitative' difference between visually experiencing a red circle and visually experiencing a green circle is also entirely a matter of the difference in the intentional contents of the experiences?(36) Since I do not believe this is the case, I will give an argument on behalf of qualia as distinguishable from intentionality, for the case of color experience. The argument will be extendible, I believe, to the perception of all secondary qualities.
The argument depends on a variant of the inverted color spectrum thought experiment, but this time I want the reader to imagine that the objective physical phenomena are 'inverted,' rather than imagining inverted qualia. We can call this the "inverted wavelength spectrum" thought experiment, in contrast with the 'inverted color qualia spectrum' thought experiment. That is, imagine a world in which our visual experiences are qualitatively the same as in the actual world, but the light reflectances of things (along with transmission spectra and emission spectra) are inverted. For example, in the actual world, we have a certain sort of experience when our eyes are exposed to light of around 700 nanometers; in this other possible world, we would have that sort of experience (as classified by its quale) when exposed to light of around 400 nanometers (this being the opposite end of the visible spectrum). Here I'm simplifying the matter of what causes color experiences (ignoring metamerism, contrast effects, and such). But the simplification is harmless, serving only to make the argument clearer. All I really need is any alternative logically possible way in which our color experiences might have been caused (consistent with their being veridical perceptions). The thought experiment need not even involve the electromagnetic spectrum -- it could be about a world with an utterly different physics. However, it's easiest to imagine the wavelength spectrum inversion.
It is difficult to argue that my wavelength inversion hypothesis is logically impossible, because one only has to reflect on the epistemic position of human beings prior to the twentieth century to find individuals for whom the hypothesis could have been a live possibility. For all they knew, it might have been the actual world. To maintain that no suitable hypothesis of the kind I'm asking us to entertain is logically possible would be to maintain that either (a) individuals who lived prior to certain developments of twentieth-century physics did not know what color experience was like, or (b) those individuals could not have consistently (i.e. consistently with what they knew) entertained any other hypotheses as to the explanation of our color experiences than the actual, modern explanation -- i.e., what the sensation of red is like logically fixes what wavelength of light red things must reflect.
Suppose someone argues that the spectrum inversion hypothesis is comparable to a hypothesis in which, say, water was NaCl and table salt was H2O. This 'molecular inversion' hypothesis is metaphysically impossible -- which is perhaps enough to make it illegitimate to entertain counter-factual reasoning about it -- but yet there was a time at which what we knew about water and salt was not sufficient to rule it out. There might even have been someone who entertained the possibility that water had the chemical formula NaCl -- and who had no way of knowing that it wasn't the case. But this doesn't show that there's a possible world in which water has the formula NaCl. Is the spectrum inversion hypothesis analogous? No, because the sort of appearance/reality distinctions we need to exercise in order to make sense of this example do not apply to the color example. Imagine a science fiction author at a time before the chemical compositions of salt and water have been discovered. The author, let's suppose, comes up with the basic ideas of atomic theory, as a possible set of future scientific developments. And he entertains the possibility that water might have the chemical formula H2O. Then he entertains the possibility that water might have the chemical formula NaCl. And nothing he knows rules out either hypothesis. Why does this not show that in some sense or other, there's a possible world in which each of these hypotheses is true? The good Kripkean answer is that the so-called "world in which water has the chemical formula NaCl" has been misdescribed (owing to a confusion about philosophical semantics). Because "water" is a rigid designator, the possible world we (and the science fiction author) have in mind is not properly described as "a world in which water has the chemical formula NaCl." Rather, it's correctly described something like this: "a world in which something with the appearance of water (something that looks like, acts like, etc., water) has the chemical formula NaCl." The confusion between these two things leads to the misleading appearance that it's possible to imagine a world in which water is NaCl.
This sort of thing can't be said with any plausibility about the spectrum inversion hypothesis. It can't be plausibly claimed that the world in which ostensibly the quale of the sensation of red is caused by a different external phenomenon has been misdescribed and that what we are thinking of is really a world in which something having the appearance of the quale of the sensation of red is caused by a different external phenomenon, because of course, whatever has exactly the appearance of a certain quale just is that quale.
Note that I have not said anything about whether the objects reflecting 400 nm light in this other world are red, violet, or neither color. (In the actual world, such objects are violet.) It is consistent with the possibility of my scenario that "red" is a rigid designator, so that the objects in the inverted world that cause experiences qualitatively like our experiences of red are not, in fact, red. I also have not, yet, ruled out the view that the 'qualitative character' of our color experiences is exhausted by their intentional contents. I have merely asked the reader to suppose that the qualitative character of our experience remains constant but the physical phenomena are altered. (This supposition is equally intelligible for aspects of experience that I agree are exhausted by intentional content. For instance, I could ask the reader to imagine a world in which our visual experiences are qualitatively the same as in the actual world, but the shapes of things are systematically altered.)
Next, we need to be careful about distinguishing between the content of color experience(37) and its object, and correspondingly between a theory of the content of color experience and a theory of colors. The object of a color experience is whatever phenomenon in the world satisfies the content of the color experience and appropriately causes it.
We need to make this distinction because it is possible to have two or more philosophers who agree about the nature of colors but disagree about the content of color experience. Suppose one philosopher believes that visual experience has propositional content, and that a visual experience of a red surface has for its content something like (the proposition), "There is a surface having whatever property normally causes me to experience sensory quale Q1," where Q1 is an independently-identifiable mental quality. Suppose another philosopher holds that the experience instead has the content, "There is a surface having whatever property normally causes most humans to experience sensory quale Q1." A third philosopher holds that a visual experience of a red surface has for its content the proposition, "There is a surface having spectral reflectance distribution D1," for a certain value of D1. But suppose that all three philosophers agree that D1 is the property that normally causes all normal humans to experience Q1. In that case, all three philosophers would agree that D1 is the object of the sensation of red (they will also agree that the object of the sensation of red is the property that normally causes most humans to experience Q1, since the property that normally causes most humans to experience Q1 is identical with D1, and "the object of M is _____" is extensional). However, they disagree about the content of the sensation of red, because the three philosophers identify three different modes of presentation of D1.
Now I want to make two distinctions among theories of the content of color experience, by way of dividing the possible theories into four mutually exclusive and jointly exhaustive categories. The distinctions are, first, between subjective and objective theories of content, and second, between internalist and externalist theories of content.
A subjective theory of content is one that holds that the content of a color experience makes mention of something mental. I will assume that the only plausible candidates for types of mental phenomena that would be mentioned in the contents of a visual experience are perceptual experiences, sensations, and properties of perceptual experiences and sensations. For example, the first two philosophers mentioned above (who each advert to quale Q1) would be subjective theorists. An objective theory of content is one that holds that the content of a color experience does not contain mention of anything mental.
A few further clarifications about what these labels cover are needed. First, the issue of whether color terms (or concepts, or experiences) designate rigidly is orthogonal to both the subjective/objective distinction and the internalist/externalist distinction. Suppose a philosopher held that a visual experience of a red surface has a content like, "There is a surface having whatever property actually normally causes most humans to experience sensory quale Q1," with the "actually" used to rigidify the description. This would be an instance of a subjective theory, no less than the previous examples. Objective theorists also have an option of rigidifying, if they pick out colors by contingent modes of presentation.
Second, the distinction between subjective and objective theories of content is not the distinction between subjective and objective theories of color. A subjective content theorist can be an objective color theorist, for, as we have already seen, a theorist who holds that the content of the sensation of red adverts to a quale can still hold that red is a spectral reflectance distribution, which is surely something objective.
Third, there are several different ways in which a subjective content theorist might hold mental phenomena to be implicated in the contents of color experiences. A subjective theorist could hold that the content of the sensation of red is something of the form "whatever categorical physical property stands in relation R to sensory quale Q1." Any choice for R will preserve the theory's status as a subjective content theory. Or a subjective theorist could hold that the content of the sensation of red is something more like, "the disposition to cause Q1." A subjective theorist could even hold that the content is simply, "Q1" (thus requiring red to be a property of mental states). What is common to subjective content theories is only that they all hold that visual experience picks out the property red by virtue of the relation that red stands in to some experience. It doesn't matter for my classification what relation is used.
Now to the internalist/externalist distinction. An internalist theory of content holds that what content a color experience has supervenes on the intrinsic characteristics of the experiencer (and of his internal states and processes). An externalist theory of content denies this. A typical externalist theory would be the view that what content a visual sensation has is determined by what is the normal cause of that sensation.(38)
Now let's consider what each type of theory might say about
the relationship between color qualia and the contents of color
experiences. Could a theory of any of these types maintain that
color qualia are nothing over and above color-experience contents?
(i) Subjective content theories:
For any subjective theory, the question arises as to what sort of mental state, or property of a mental state, should be mentioned in describing the content of a sensation of red. The only plausible candidates are mental states or mental properties that normally exist when people are seeing red things, and normally don't exist when people aren't seeing red things -- which is to say, really, the only plausible candidates are the visual experience of red and properties of the visual experience of red. (One could have a subjective theory where red is picked out by its relation to the emotion of happiness, but that just wouldn't be plausible.)
Suppose the property of redness is picked out by its relation to experiences of a certain kind. The kind of experience that redness stands in a special relation to will have to be identified by some property -- i.e., the kind will be "experiences having feature F," for some value of F. So, whether the subjective content theorist relies on a special relation between redness and a kind of experience or a special relation between redness and a property of experiences, he will still need to identify some special mental property that visual experiences of red have and visual experiences of (say) green lack (so as to enable the two kinds of visual experiences to be satisfied by differently-colored objects). What properties does a visual experience of red have that a visual experience of green doesn't have? I can think of two candidates: it has a certain content, and it has a certain quale.
Suppose we choose to use the distinctive content of visual
experiences of red. In that case, we have a transparent circularity
problem, for the account will now be something like this:
The content of a visual experience of a red surface is (the
proposition) that there is a surface having the property that
stands in relation R to experiences that have the content
The content of a visual experience of a red surface is (the
proposition) that there is a surface having the property that
stands in relation R to the content (...).
where the ellipsis needs to be filled in with the content of visual experiences of red -- i.e., the very content that we were trying to give an account of. Of course, it doesn't matter whether visual experiences are taken to have propositions for contents. What matters is only that the content of a visual experience of red would, on the present proposal, somehow involve reference to the content that visual experiences of red have.
Suppose we choose to use the distinctive quale of experiences
of red. In order for this to avoid the above problem, the quale
would have to be taken to be something over and above the content
of these experiences. Which is just the conclusion I am arguing for.
(ii) Objective, externalist theories:
An objective content theory holds that the content of experiences of red only mentions objective (non-mental) properties, such as spectral reflectances and/or surface properties that explain spectral reflectances. And an externalist, objective theory holds that they have this content in virtue of something other than just the intrinsic properties of the experiences -- most likely in virtue of the causal relations the experiences stand in to instances of the objective properties. So what should the externalist objectivist say about the contents of visual experiences in my inverted wavelength world? Well, the inverted wavelength world is Twin Earth for color.(39) That is, it's a world in which the external facts have been 'switched,' so the experiences in that world that are intrinsically like experiences of red in our world do not have the same content that experiences of red in our world have. Instead, they would have the content that experiences of violet have in our world. Experiences with the same etiology will have the same content -- so the experiences caused by 700 nm light in both worlds will have the same content (a content satisfied, in both worlds, by the objects that predominantly reflect 700 nm light).(40)
However, ex hypothesi the visual quale caused by 700 nm light
in the inverted world is different from the visual quale caused by
700 nm light in our world. Therefore, color qualia are not entirely
accounted for by the contents of visual experiences.
(iii) Objective, internalist theories:
What would an objective, internalist theorist say about the inverted world? Just the reverse of what the externalist says -- the objective internalist should say that the experiences caused by 700 nm light in the inverted world have the same content as the experiences caused by 400 nm light in our world, and vice versa, because they are intrinsically qualitatively identical; what differs is only the external facts, which, according to the internalist, are irrelevant to determining content.
My argument here is reminiscent of (part of) Putnam's famous Twin Earth argument -- specifically, the part where Putnam argues that the internalist would have to hold that Oscar's and Twin Oscar's "water" thoughts have the same content, despite the differences in their environments. But notice that the reply that John Searle gives to Putnam's argument is not available against my argument. Searle proposed that the content of Oscar's "Here is water" belief is something along the lines of "Here is some stuff of the same kind that normally causes experiences of type F in me," and the content of Twin Oscar's "Here is water" belief is "Here is some stuff of the same kind that normally causes experiences of type F in me."(41) The two contents are different insofar as there is a different "me" involved, but it could be argued that this difference is allowable within an internalist theory. However, notice that this sort of move, if applied to the case of color perceptions, would entail a subjective content theory, a theory wherein the contents of color experiences contain mention of experiences. Since we are here examining objective, internalist theories, no Searlean moves are available, so it must be maintained that the qualitatively indistinguishable visual experiences in the two possible worlds have the same content.
Now, the content of sensations of red in the actual world is a content that objects reflecting 700 nm light satisfy but objects reflecting 400 nm light do not satisfy. (I.e., it's satisfied by red things and not by violet-colored things.) That is, in the actual world, if you see something that reflects predominantly 400 nm light and you somehow are experiencing the sensation of red instead of violet, then you are experiencing an illusion. When individuals in the inverted world have the sensation qualitatively identical to our sensation of red, by hypothesis, their sensation has the same content. Therefore, their sensation, too, has a content that objects reflecting 700 nm light satisfy and objects reflecting 400 nm light do not satisfy. But, unlike in our world, that sensation is regularly caused by objects reflecting 400 nm light. Therefore, when they have that sensation, their experience is regularly illusory. Since the same argument could be made for any other color experience, all of their color experiences are illusory (with the possible exception of some that occur in certain nonstandard lighting conditions, or when their brains are operating abnormally!)
We should verify the validity of this argument by applying it to various versions of objective internalism. Suppose the objective internalist holds that the content of an experience of a red thing (in the actual world) is that there is something having spectral reflectance distribution D1. Then the qualitatively identical sensation in the inverted wavelength world also has the content that there is something having spectral reflectance distribution D1, but in the inverted world there would normally not be anything having D1 in the offing when that sensation is going on, so the experience is regularly illusory in the inverted world.
Suppose the objective internalist holds that an experience of a red thing has the content that there is something having such-and-such surface properties (these including such things as texture and chemical makeup, which explain spectral reflectances). In that case, again, we can imagine a world in which the surface properties that underlie dispositions to reflect light are also 'switched,' and we will have to conclude that qualitatively identical sensations in the other world are regularly illusory.
Suppose the objective internalist holds that an experience of a red thing has the content that there is something having property F, where F is some property over and above the 'scientific' properties of physical objects -- i.e., over and above spectral reflectances, the surface properties that scientifically explain spectral reflectances, and the like -- so that F would not, or would not necessarily, get 'switched' in the world with the inverted wavelengths. (F would just be redness, in this theory -- just as in the first theory D1 is redness and in the second theory a certain kind of categorical surface property is redness. The theory really simply consists in denying that redness is reducible to scientific properties.) In that case, since we know that the scientific properties of external objects are the cause of color experiences in us, there would be no reason to believe that F has anything to do (causally) with the color experiences we have, and F seems to have no other explanatory function either. But if we have no reason to believe that an object's being F explains our having experiences of red, nor anything else, then there really is no reason to believe that anything is F.
Now, what is so bad about holding that the people in the inverted world are subject to regular illusions? The same verdict, of course, would have to be pronounced on the people in all the other possible worlds where color experiences have a different cause than in our world. Think back to before the connection between wavelength and color was discovered. Think back to before it was even known that there were light waves. And consider the set of all the logically possible explanations for color experience that people might have entertained, consistent with what color experiences are like for us. It is very hard to believe that of all those possible explanations, only one enables our experiences to be veridical, and that one (luckily enough) is the actual explanation -- it had to turn out that experiences of red were caused by 700 nanometer electromagnetic waves (and no other wavelength) if our experiences were not to be deceptive.
I find this unbelievable. It seems entirely arbitrary to select out the actual world explanation as the one in which color experiences are veridical. If there is to be but one way of being caused that makes color experiences veridical, all the other ways making it illusory, there is no reason to think that our color experiences are veridical.
There remains one theory about colors that I have not
addressed in this argument, and that is eliminativism. I have
assumed that things have colors, and they generally have the colors
they appear to us to have. An eliminativist denies this.(42) I do
not have an argument against color eliminativism (other than a
general methodological presumption that eliminativism should be
treated as a last resort) and an eliminativist philosopher need not
accept my conclusion about color qualia.
Thus my defense of qualia as something over and above intentional content: objective internalism is implausible, and both subjective theories and objective externalist theories have to accept color qualia as something over and above intentional contents. I have mentioned that parallel arguments could be constructed for other secondary qualities. But I do not think they could be constructed -- or at any rate, they would not be very convincing -- for primary qualities. I think they would break down in the last stage, when dealing with internalist objectivism. An 'inversion' of primary qualities does not seem as indifferent as an inversion of secondary qualities. If we had discovered, about our visual experiences of round things, that they were caused by square things, we would have concluded that our visual experiences were illusory. But it is not the case that if we had discovered, about our visual experiences of red things, that they were caused by objects reflecting predominantly 400 nm light waves (purple things), we would have concluded that our visual experiences were illusory. In that sense, the switch of spectral reflectance properties is indifferent.
It is difficult to think of a consistent way of systematically
'inverting' shapes, but suppose that we can think of one. It seems
implausible to insist that the people in the inverted wavelength
world are suffering sensory illusions while we are not. But it does
not seem at all implausible to insist that people in a world with
inverted shapes are suffering sensory illusions. If they have an
experience intrinsically like our experience of a round thing, when
the object causing it is square, it is reasonable to say that their
experience is an illusion.
3.2. The non-conceptual content of experience
The representational content of experience is a many-splendored thing.
As I use the term "content," the phrases "intentional content" and "representational content" are redundant. A mental state has a content if and only if it potentially represents something.
Why say "potentially"? Because "represents" might be taken as a success verb, so that a state could only "represent" if there existed an object represented by it; whereas a state can have content even if nothing really exists that the state is of. For example, suppose I imagine a green dragon. My imagining is an intentional mental state. Depending on how one wants to use the word "represent," one might say that this state fails to represent anything. If it were to represent something, it would have to represent a green dragon, but since there aren't any green dragons to be represented, one might say, it can't represent one. This would be taking "represent" as a success term, and treating "represents _______" as an extensional context.
There isn't anything obviously wrong with this way of speaking. However, there is still at least a sense in which my mental state does "represent" a green dragon. We might say, it has a sort of potential for representing (in the success verb sense) a green dragon. It has the right intrinsic character to represent a green dragon, and it would represent one if, so to speak, the world cooperated.
This makes it sound as if the success-verb sense of "represent" is prior to, or more fundamental than, the other sense. Actually, I don't believe this (and there would be difficulty identifying the sense in which, for example, the concept of the largest prime number potentially represents anything). I think the non-success-verb sense is more fundamental. But this question of priority is not something we need to resolve. It is only important to note the distinction and decide how we want to use "represent." I will use "represent" in the second sense, the sense in which "represents _______" is intensional, not extensional. So "X represents Y," as I use the term, does not entail that Y exists. For the other sense of the word, we can use "refer" instead. So "X refers to Y" does entail that Y exists. In "X refers to _______," what fills in the blank names the object of X. In "X represents _______," what fills the blank specifies the content of X.
There are at least two ways that a mental state can have content -- it can have conceptual content (roughly: have content by means of concepts), or it can have non-conceptual content. Assume that mental state M has content C. Then M has C conceptually (or: C is a conceptual content of M) if and only if M's having C depends on the subject's possessing at least one concept whose content is part of C. M has C non-conceptually (or: C is a non-conceptual content of M) if M's having C does not depend on the subject's possessing any concept whose content figures in C.(44)
For example, consider a paradigm of a state with conceptual content. Suppose S believes that there is a cat on the table. The fact that S's belief has the content that there is a cat on the table depends on the fact that S has the concept of a cat. If S did not have the concept of a cat, his mental state could not have the content that there is a cat on the table (indeed, this particular mental state of his couldn't even exist). And the concept of a cat has a content that figures in (is part of) the content, there is a cat on the table. This is why the belief has the content that there is a cat on the table conceptually, rather than non-conceptually.
In contrast, I claim, perceptual experiences possess non-conceptual content. More about this below.
A few things about the definition of non-conceptual content are noteworthy. First, notice that the conceptual/non-conceptual distinction is not a distinction between kinds of contents, but rather a distinction between ways in which a content could be entertained. Thus, it is possible to have two states with the same content, where one state has this content conceptually, and the other has this content non-conceptually. A possible example would be seeing that there is a red square here, and believing that there is a red square here (admittedly, it is subject to debate whether the seeing-that has non-conceptual content, and whether it has the same content as the belief).
Second, note that the definition does not rule out that a mental state could have a conceptual content and a non-conceptual content simultaneously. If a single state can have two contents at once, it might conceptually represent P and non-conceptually represent Q at the same time.
Third, note that the definition does not require non-conceptual contents to be ineffable, or incapable of being captured using public language or concepts. The distinctive thing about having non-conceptual content is that it doesn't require the subject to have concepts; this does not mean that the content couldn't be captured using concepts, supposing that there happens to be someone who possesses the appropriate concepts. Having a non-conceptual content also doesn't require you to lack any concepts, any more than it requires you to have them. For instance, if a certain visual experience non-conceptually represents there to be a red square here, then the subject of the experience need not have the concept of red, nor the concept of a square, in order to have this visual experience with this content. But of course, he may have those concepts anyway.
With this background in place, the argument for the non-conceptual content of experience will be relatively simple. To begin with, why think that perceptual experiences have content? Because perceptual experiences can be of things (in the sense of "of" that indicates intentionality). I can have a visual experience of a tomato, an auditory experience of a symphony, and so on. Furthermore, perceptual experiences represent things as being certain ways -- e.g., a visual experience of a tomato will typically represent a tomato as being red and round.
Okay, but why think that this content is non-conceptual? Because there are ways that our experience represents things as being, such that we don't have concepts of those ways. For example, suppose Joe (an average person) sees a cloud in the sky. His experience represents the cloud as being a certain, very specific shape. But, unless one wants to be extremely liberal about what counts as a "concept," Joe does not have a concept of precisely that shape. He does have concepts of such well-known geometric shapes as round, square, and trapezoidal, but let us suppose that the cloud doesn't have any of those shapes. It has an irregular shape -- let's call it S1 -- that Joe has never seen before. Joe also has some vaguer shape concepts such as elongated or puffy, but none of these is specific enough to do justice to the content of the experience. He may see a number of elongated clouds, but they nevertheless all look differently-shaped, so the specific shape that he sees this cloud as having isn't captured by such a general concept.(45)
This argument raises the question of what a concept is, and what is required to "have a concept" of something. I do not have a theory of concepts to present. Nevertheless, I think it is clear just on the basis of an intuitive idea of "having concepts" that Joe does not (or at any rate, need not) have a concept of the shape S1. He not only doesn't have a word for S1 (which would perhaps be sufficient for having a concept of it); he has no special familiarity with it, having never seen it before; he has never before thought about it; he is not taking special notice of S1 even now (as he has the visual experience); and he will immediately forget about it when he turns away. If we say Joe has a concept of S1, it seems, we must be claiming either that Joe automatically has a concept of every shape, or else that Joe has a concept of every shape he (veridically) perceives as soon as he perceives it (since the fact that he's presently seeing S1 is the only thing special about Joe's relation to S1). Even prior to having a specific theory of concepts, each of those claims seems far too liberal vis-a-vis what it takes to have a concept -- and the former idea in any case would make it appear that one gets the concept of a shape by perceiving something as having that shape, rather than its being the case that perceiving something as having a shape depends on having the concept of that shape.
The fact that Joe lacks the concept of S1 establishes that his visual experience's representing something to have S1 can't depend on Joe's having the concept of S1. But it could still be argued that there are other concepts whose contents figure in the content of Joe's visual experience and that are required for his experience to have the content it does. For example, perhaps Joe's experience is representing that there is a cloud of shape S1 over there, and perhaps its having this content depends on Joe's having the concept of a cloud, even though it does not depend on Joe's having the concept of S1. I find considerable plausibility in this suggestion -- normally, when one sees a cloud, one will see it as a cloud, and this does seem to depend on one's first having the concept of a cloud. Similarly, if one sees something as a letter "A", this depends on one's having the concept of the letter "A". The best response to this objection, I think, is simply to modify the example, stipulating that Joe does not have the concept of a cloud. Suppose that the unfortunate Joe has so far spent his entire life indoors, having never seen the sky, having never heard of the sky, clouds, or other related phenomena. When he is first let out of his confinement, he happens to look up and see a cloud. Of course, he doesn't see it as a cloud, since he has no concept of a cloud. He has no idea what he's seeing. Nevertheless, his visual experience represents something of shape S1, and now it cannot be argued that the visual experience's having the content it has depends on Joe's having either the concept of S1 or the concept of a cloud.
But suppose one argues that Joe has the concept of a "something"; that his experience has a content like, "Something is of shape S1"; and that his having this concept is necessary to his experience's having the content it has -- again, by way of showing how the experience might have conceptual content. It is more difficult to describe the case so that it appears Joe lacks the concept "something" than it is to describe the case so that Joe lacks the concept "cloud". But it seems that there are some creatures, such as animals and small children, who do not have the abstract concept "something". So imagine that one such creature -- a cat, perhaps -- is looking at the sky and veridically seeing the S1-shaped cloud. It is open to question whether the cat has any concepts, but if it does, it probably is limited to relatively concrete ideas like "female cat" and "mouse", rather than more abstract concepts such as "something" or "existence". Nevertheless, the cat's experience represents the world to be a certain way (namely, containing an S1-shaped thing). And there is surely something important that the cat's experience has in common with normal human experience -- the visual experience of a human in analogous circumstances also represents the world to be that way. Because an experience of the same type as Joe's visual experience, with the same content, can be had by a creature who does not possess any (relevant) concepts, it appears that the intentionality of Joe's experience does not depend on his possession of concepts (even though he does possess some). The concepts seem to be irrelevant to the representational capacities of Joe's experience.
John McDowell has criticized the above form of argument.(46)
Unfortunately, neither McDowell nor Evans (to whom McDowell is
principally responding) undertakes to explicitly define the notion
of non-conceptual content, so we must infer what McDowell
understands 'non-conceptual content' to be from the objections he
makes to the notion. In response to Evans' observation that our
color-experience is much more fine-grained than our color concepts, McDowell writes:
It is possible to acquire the concept of a shade of colour,
and most of us have done so. Why not say that one is thereby
equipped to embrace shades of colour within one's conceptual
thinking with the very same determinateness with which they
are presented in one's visual experience ...? In the throes
of an experience of the kind that putatively transcends one's
conceptual powers ... one can give linguistic expression to a
concept that is exactly as fine-grained as the experience, by
uttering a phrase like "that shade", in which the
demonstrative exploits the presence of the sample.(47)
This passage strongly suggests that the notion of 'non-conceptual content' to which McDowell objects is not the notion I have defended. I do not think it is the notion Evans defended either, but we need not examine that exegetical question.
McDowell thinks the fact that "most of us" have the concept of a shade of color and therefore could meaningfully formulate the expression "that shade", poses a threat to the idea that color experience has non-conceptual content. And this would be a serious threat if the idea was that color experiences have a kind of content that is intrinsically ineffable -- that is, a content that by its nature cannot be stated or conceptualized. However, it poses no threat to the idea that color experiences have a kind of content that does not depend on the subject's possessing concepts adequate to formulating that content. On the contrary, McDowell's implicit admission that not all of us have the concept of a shade of color supports the claim that color experiences have non-conceptual content in my sense, for it shows that having concepts adequate to representing a certain color is not required for having an experience representing that color. Of course, I do not deny that having concepts adequate to representing a certain color is compatible with having an experience representing that color.
McDowell continues:

Now it is true that the fine-grained capacities I have
appealed to have a special character, which is marked by how
demonstrative expressions would have to figure in linguistic
expressions of them. But why should that prevent us from
recognizing them as rationally integrated into spontaneity in
their own way ...? Why, in fact, are they not so much as
considered in Evans's argument, and in the appeal by many
others to the consideration about fineness of grain that
drives Evans's argument?(48)
Perhaps because these philosophers never meant to deny that
perceptual experiences can be "rationally integrated into
spontaneity." By "rationally integrated into spontaneity," I
believe McDowell means something like "standing in relations of
support or disconfirmation to beliefs."(49) As we will see in chapter
4, I am very far from denying that experiences can stand in such
relations to beliefs.
3.3. The conceptual content of experience
In addition to their non-conceptual content, most of our
experiences have conceptual content. The clearest example is the
experience of seeing or hearing a word. When I see a certain sign,
I see the color patches on it as the word "exit". This is
phenomenologically different from merely (veridically) seeing the
color patches (compare the experience of seeing a word in a language
that does not share the alphabet of your native language). In
seeing the patches as the word "exit", I am recognizing a subtle
property distinct from (though perhaps supervening on) their shape
and color -- as is shown by inspection of the following examples:
[Figure: the word "exit" written in two markedly different styles of lettering.]
The above two shapes are very different, considered merely as shapes; however, no person who has learned to read (English) can avoid seeing each of them as the word "exit". There is, in other words, a way that both of those patterns look, to a person familiar with the English alphabet.
Seeing something as the word "exit" clearly amounts to having a kind of mental content (because the thing in question is represented as being a certain way). This content belongs to the visual experience, rather than, for example, to a belief about the object. This is shown by the fact that one will continue to see an appropriately-shaped pattern as a word even if one lacks any relevant belief. Suppose I spill some ink and the ink splotches just happen by chance to form shapes similar to one of the instances of "exit" shown above. Suppose I hold certain views about language such that I am convinced that in such circumstances, the ink splotches can not actually count as words. I would nevertheless find it impossible not to see them as the word "exit".
Wittgenstein's famous duck-rabbit(50) makes the same point. There is a change in one's experience when one goes from seeing it as a duck to seeing it as a rabbit, and this obviously does not involve anything like going from believing that the thing is a duck to believing that it is a rabbit. (Interestingly, it seems impossible not to see the shape one way or the other.)
Equally clearly, this content is conceptual. A person's seeing the ink patterns shown above as instances of the word "exit" clearly depends on his having a concept of the word "exit" -- a person who has never heard or seen that word before would not see those patterns as the word "exit". Of course, one might say that, given a general knowledge of the alphabet, even a person unfamiliar with English would see those patterns as a word. But this, too, clearly depends on his having certain concepts -- the concept of the letter "e", the concept of the letter "x", and so on -- so again the presence of conceptual content is demonstrated.
Likewise, seeing Wittgenstein's picture as a duck depends on having the concept of a duck -- a person who had never heard of ducks could not see it as a duck.
I am not here claiming that all aspect-seeing evinces conceptual content. Consider the case of the Necker cube -- perhaps seeing the cube as angled downward does not depend on having any concepts adequate to characterizing that spatial orientation. All I am claiming is that there are cases of aspect-seeing, such as those mentioned above, that do evince the conceptual content of experience.
3.4. The forcefulness of experience
The phenomenology of perceptual experience is not merely that of possible objects presented for our contemplation; perceptual experience is not, as it were, neutral about the character of the world. Objects are presented to us in experience as real. That is to say, when one has a perceptual experience as of a green cup, a green cup seems to actually be present then and there. This quality I refer to as the forcefulness of perceptual experience, and it is a necessary feature of all perceptual experience. It is this latter fact that gives rise to the expression, "Seeing is believing." It is not literally the case, of course (pace Armstrong and Pitcher), that to see a thing is to form beliefs about it. Rather, the point of the expression is that it is very difficult not to form beliefs about something that one sees, because when one sees (more generally, when one perceptually experiences), the world very much seems to one to be a certain way.
This characteristic is intrinsic to the experience and is usually impervious to one's beliefs. For example, consider the Müller-Lyer illusion.
The top line looks (hence, seems) longer than the bottom line. Even if one knows for certain that the top line is the same length as the bottom line, it still looks longer. Now, this is not to say that perceptual seemings can never be affected by one's beliefs. However, the example shows that the seeming is a state different from belief -- it can seem to you as if P even if you do not believe that P; indeed, even if you are absolutely convinced that not-P. Of course, seeming does have something to do with belief -- people normally believe that things are the way they seem.
We can see that forcefulness is a distinguishable feature of experience, over and above the features previously discussed, by comparing perceptual experience with imagination. Suppose you have a very vivid and detailed imagination, and you imagine an orange cat on a table. This is nevertheless very different from seeing or even hallucinating a cat on a table -- there is no danger, for example, that you would ever mistake your imagining a cat for seeing one. This is not merely because imagination is typically less vivid and detailed than perceptual experience (and therefore presents different contents). It is, more importantly, because imagination is, as it were, neutral vis-a-vis the reality of its objects. You may happen to believe that the object you imagine exists, but your imagination does not add any force to that proposition; your imagining the cat doesn't make it so that a cat seems to be present. In contrast, your perceptually experiencing a cat (whether perceiving or hallucinating) does make it seem as if a cat is actually present. That is why, even if a particular imagining had the same content as a perceiving, it could still never be mistaken for a perceiving.
We have seen above that it is possible for it to seem to S as if P while S does not believe that P. This is enough to show that the state of its seeming to S as if P is not a belief. A further interesting question is whether S can believe that P while it does not seem to S as if P. At first blush, it might seem that the same example will serve to support an affirmative answer here -- while looking at the Müller-Lyer lines, I believe that the lines are the same length even though it does not seem as if they are. However, we need not describe the case this way. Instead, we can say (more accurately, I think) that there are two seemings here -- an immediate perceptual seeming and another, more intellectual seeming. So I might say, "Visually it seems to me as if P (i.e., it looks as if P), but intellectually it seems to me as if not-P." After all, this is a very famous illusion, and it certainly seems to me ridiculous to suppose that the experts are all either mistaken or lying when they report that the lines are really the same length. If I measure the lines, it will also clearly seem to me that the result of the measurement is to be believed. Therefore, it seems to me as if the lines must be of the same length, despite how they look. Therefore, this is not a case in which I believe that P while it doesn't seem to me as if P; it is a case in which I believe that P while it seems to me as if P and it also seems to me (on another level) as if not-P.
Still, I think there are genuine cases of belief without seeming. In cases of self-deception, one believes something not because it seems true but because one wants it to be true, or one wants to be the kind of person who believes that, or some such non-epistemic reason. This frequently occurs when the thing one wants to believe does not seem true and does seem false. (Recall Tertullian's famous remark, "I believe it because it is absurd": this might be paraphrased, "I believe it because it seems false.")
We will see in the next chapter that the forcefulness of perceptual experience has a profound epistemological significance.
4. PERCEPTUAL KNOWLEDGE
We have seen in the last two chapters how perception gives us awareness of the external world in the form of perceptual experiences. This awareness is direct, because there are no other states of awareness that our perceptual experiences could plausibly be said to be based on. However, this is not the only way in which perception makes us aware of the external world. In particular, the process of perception puts us in a position to obtain knowledge (awareness in the form of beliefs) of the external world. How this works is the topic of the present chapter.
Perceptual knowledge is knowledge that is held in the form of perceptual beliefs. Perceptual beliefs are beliefs that are directly based on perceptual experiences. (X is directly based on Y only if there is no third state, Z, such that X is based on Z and Z is based on Y -- this will rule out, for example, the belief that there are atoms from counting as a 'perceptual belief', though based on experience). For example, if I have a visual experience of a cylindrical, blue thing, this experience will typically cause me to believe that there is a cylindrical, blue thing there. Clearly the contents of the two states are logically related, and very strongly at that -- the content of the experience cannot be satisfied unless there is a cylindrical, blue thing there. Pursuant to the argument of section 3.2, we should not say that the two states have the same content, for the content of the experience is something much more specific (so specific and detailed, in fact, that I cannot adequately describe it -- the words "a cylindrical, blue thing" are much too vague and general to do justice to my visual representation). Nevertheless, the content of the experience entails that of the belief, for the properties attributed by the belief are merely determinables of which the properties represented in the experience are determinates.
The epistemic status of the belief clearly depends on that of the experience -- if the experience is not an awareness (i.e., if I am hallucinating) then the belief cannot be either (i.e., I don't know that there is a cylindrical, blue thing). So we can see that this example satisfies the conditions for 'basing' set forth in section 1.3 -- the belief really is based on the experience.
4.1. The justification of perceptual beliefs
The most interesting question about perceptual knowledge concerns its justification.(51) What makes our perceptual beliefs justified? The answer defended in this chapter is that perceptual beliefs are (in normal cases) justified by virtue of the fact that they are based on perceptual experiences.
Why should this be? We have just seen that perceptual beliefs are based on perceptual experiences (more precisely, since 'perceptual beliefs' are defined to be based on perceptual experiences, what we have seen is that there are perceptual beliefs about the external world). But how does this make them justified?
It makes them justified because of the following epistemological principle, which I will call the principle of Appearance Conservatism (the idea being that we should 'save the appearances'):

(AC) If it seems to S as if P, then (by virtue of that fact) S is at least prima facie justified in believing that P.
Before we discuss the defense of AC, it is well to make some clarifications. First, note that here, to say S is justified (or prima facie justified) in believing that P does not imply that S actually believes that P, but only that justification (or prima facie justification) for believing P is available to S, whether or not he actually exercises it. Thus, if S is a philosophical skeptic who, while seeing a cat, refuses to accept that there is a cat in front of him, S is still prima facie justified in believing that the cat is there.
Second, the notion of 'prima facie justification' invoked here has two important components: (a) that when a belief is prima facie justified, it does not require an argument, or support by other beliefs, in order to be justified; but (b) when a belief is merely prima facie justified, its justification may be defeated by countervailing evidence. For example, if I seem to see a pink rat on the table, then (according to AC) I have some justification for believing that there is a pink rat on the table, without the need of any argument for that belief. But this does not necessarily mean that I am justified sans phrase in believing there is a pink rat on the table. If I have (or epistemically should have) a well-supported background belief that pink rats do not exist, then this would overcome the initial degree of warrant for the proposition that there is a pink rat on the table, making me, all things considered, unjustified in believing there is a pink rat there.
Notice that the definition of a prima facie justified belief does not say that such a belief need not be justified by anything; it just says that such a belief need not be justified by any other beliefs. A prima facie justified belief might still depend for its justification on another state of awareness, such as a perceptual experience.
AC obviously licenses our ordinary perceptual beliefs in normal circumstances, given the forcefulness of perceptual experience. It explains why it is rational to form beliefs based on one's perceptual experiences but not rational to form beliefs based on, say, one's imaginings. Notice, too, that AC does not make the justification for perceptual beliefs depend on the veridicality of the experiences. If S has a perceptual experience of an F, then it will seem to S as if there is an F, so S will be prima facie justified in believing that there is an F -- regardless of whether there actually is an F or not, and regardless of whether S's experience is a perception or a hallucination. Thus, in the famous brain-in-a-vat scenario, we should say that the brain has all the same justified beliefs that we do -- it just happens to have a lot of false, justified beliefs. I take this to be intuitively the right thing to say.
Lastly, it is important to understand that AC does not provide that from "It seems to me as if P" one may infer (even non-demonstratively) "P". AC has nothing to do with inferential justification, and it says nothing about S's knowing or believing that it seems to S as if P. S does not become justified in believing that P, under principle AC, by justifiably believing that it seems to him as if P. Rather, according to AC, it is the fact that it seems to S as if P that makes S justified in believing that P. This is important, for if AC were a principle of inferential justification, one might think the inference in question involved a suppressed premise something like, "If it seems to me as if P, then P," which would stand in need of justification.
4.2. In defense of appearance conservatism
AC is a fundamental epistemological principle. When seen in the right light, its truth is self-evident. Nevertheless, it is not always immediately appreciated upon contemplation, and some things can and should be said to place the principle in the right light. Why would one think that its merely seeming as if P justified me in thinking that P?
The first thing we need to get clearly in focus in order to appreciate AC is the notion of epistemic justification at work here. We are talking about justification in the 'internalist' sense, not 'externalist' justification (which might amount to reliability(52)). That is, the relevant sense of "justification" is justification from the subject's own point of view. In contrast, Goldman's sort of 'justification' (if it is a kind of justification at all) can be viewed as objective justification, or justification from a third-person point of view. AC would seem unmotivated at best if one substituted an objective sense of justification.
The distinction can be illustrated with an analogy to practical justification. In Steinbeck's novel The Pearl, the hero, a fisherman, discovers an enormous pearl. He thinks that the pearl will make him and his family wealthy; however, through the rest of the story, the fisherman is beset by people trying to cheat him or rob him, highwaymen pursuing him, and so on. By the end of the book, it is clear that the pearl has been much more trouble than it's worth. An English teacher of mine once posed the following essay question: "True or false: when the fisherman discovered the pearl, he should have just thrown it back into the ocean." Some students answered "true", citing all the trouble it would have saved him. However, the teacher claimed the correct answer was "false", because the fisherman had no way of knowing all the trouble that was going to befall him. After all, for all he knew, it was going to make him rich; it would have been insane to throw it back into the ocean.
What we have here are two different senses of the word "should". From the third-person point of view (the viewpoint of the omniscient observer), the fisherman 'should' have thrown the pearl back into the ocean -- that is the action that would actually have had the best results. However, from the fisherman's point of view, it would have been irrational to throw the pearl back; the rational pursuit of his interests involved taking the pearl and trying to sell it, even though this course of action turned out not to be in fact in his interests. In that sense, the fisherman 'should' have done just as he did.
The relationship between true belief and epistemically justified belief is like the relationship between action that maximizes utility and action that maximizes expected utility: maximizing expected utility just is what it is to rationally pursue utility itself. But the maximization of expected utility can sometimes lead to terrible results. We can even imagine a world populated entirely with expected-utility maximizers who somehow, through bad luck, always wind up shooting themselves in the foot. Likewise, accepting epistemically justified beliefs (and not accepting unjustified ones) is what it is to rationally pursue truth. But justified beliefs can sometimes turn out to be radically mistaken, and we can even imagine a world populated with rational brains in vats who hold little else but justified, false beliefs.
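For reference, the decision-theoretic notion invoked in this analogy can be stated explicitly (this is the standard textbook definition, not something the argument itself supplies): an agent's expected utility for an action is the credence-weighted average of the utilities of its possible outcomes,

```latex
EU(a) \;=\; \sum_{o \in O} \Pr(o \mid a)\, u(o)
```

The rational agent chooses the action that maximizes EU(a) relative to her own credences Pr; nothing guarantees that the outcome actually realized scores high on u. That gap between the expectedly best and the actually best action is exactly the gap the analogy exploits between justified belief and true belief.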
With this said, we can now see the plausibility in an account of rationality put forward by Richard Foley. According to Foley, "a decision is rational as long as it apparently does an acceptably good job of satisfying your goals."(53) The qualifier, "apparently", makes room for the possibility of rational decisions that turn out, unluckily, to thwart one's goals and also introduces relativity to a point of view into the concept of rationality. Foley goes on to explain that epistemic rationality is rationality in the pursuit of one's epistemic goal, one's goal of believing truths and avoiding error.(54) Now, surely, if it seems to S as if P, then accepting P will appear, from S's point of view, to further S's goal of believing truths and avoiding error. Thus, if it seems to S as if P, then it will be (to use Foley's terminology) 'egocentrically, epistemically rational' for S to believe that P. This is essentially what AC says.(55) Clearly, from my own point of view, insofar as I aim to believe truths, it makes sense to accept the things that seem to me to be true. What else?
The second thing that will help to bring the self-evidence of AC into focus is reflection on the way you actually do form beliefs (omitting the cases of self-deception and leaps of faith, which admittedly are cases of epistemically irrational belief). When you are conscientiously seeking to know, you weigh a proposition in your mind. What determines whether you accept it or not? Well, in some cases you consider an argument for it. But this will help only if you accept the premises of said argument. How do you decide whether to accept a premise of an argument? Clearly you do not, in real life, demand an infinite series of arguments (nor, I think, would even this help). Rather, you consider the premise on its own: if it is sufficiently obvious (to you) you accept it; otherwise, not. Moreover, it is difficult to imagine what possible alternative a person critical of this procedure could have in mind. Should you, perhaps, accept the propositions that seem false instead? Surely this cannot be rational. Accept all and only the propositions that are in fact true? But acting on this advice is the same thing as accepting the propositions that seem true to you. (I have a friend who is having difficulty with fruit names. I tell him to put all the fruits that he takes to be apples into a basket. Then I empty the basket, start over, and tell him this time, put all the fruits that are in fact apples into the basket. What could I be thinking?) Or perhaps you should accept nothing at all? But half of your epistemic aim is to gain true beliefs. When P seems to you to be true and there are no grounds for doubting it (no defeaters), what more are you looking for? This is as good as it gets. Why should you withhold P (in the internalist sense of "should") -- because it is still possible that P is false? 
But this seems to be just an extreme and irrational prejudice for the "avoiding error" part of your epistemic goal, as against the "believing truths" part -- you would not let the mere possibility that P is true suffice for you to accept it, so why let the mere possibility that P is false suffice for you not to accept it?
Consider, in fact, the argument you have just read. No doubt some philosophers will accept it, while others will not. Which ones will accept it? The ones to whom it seems correct, of course. Even if you do not accept it, you still will be thinking in accordance with AC -- the difference will merely be that to you, it does not seem correct. There is no escape from the reliance on how things strike you.
Because of this fact, there is a kind of transcendental argument for AC, for all argumentation presupposes AC in a certain sense.
Consider the following argument in favor of philosophical skepticism:
(A) 1. In order to know that P, S must have an adequate reason for believing P (where S is any person and P any proposition).
2. Any adequate reason for S to believe that P must be something that S knows to be the case.
3. S cannot have an infinite regress of adequate reasons.
4. A series of adequate reasons cannot be circular.
5. Therefore, S cannot know that P.(56)
This argument, on the face of it, carries some force. Some work will need to be done to respond to it, to show, in the face of the argument, how we can have knowledge. But now compare the following 'argument' for the same conclusion:
(B) 1. 3=5.
2. Therefore, S cannot know that P.
This 'argument' carries no force at all, so that it is not even clear whether it deserves to be considered an argument. This form of skepticism does not call for a response; there is no philosophical work to be done in explaining how knowledge is possible in the face of it. Similarly, consider the following 'argument':
(C) 1. There are seventeen inhabited planets in the Andromeda galaxy.
2. If there are seventeen inhabited planets in the Andromeda galaxy, then S knows that P.
3. Therefore, S knows that P.
Just as (B) does not call for a response, (C) does not count as a response. Just as we don't need to take argument (B) seriously, the skeptic does not need to take argument (C) seriously.
Why? What is the difference between (A) and (B)? The difference is not that (B) has a false premise, for (A) has a false premise too (which one, we will see later). The vast majority of arguments presented throughout the history of philosophy have contained one or more false premises (for philosophical arguments are almost always valid, but their conclusions are almost always false). Still, they have not been bad in the way that (B) and (C) are. What we are trying to account for is not why arguments (B) and (C) are unsound, but why (B) and (C) are not serious arguments at all; why do they not have a place in the discussion of philosophical skepticism?
Notice that in a way, this issue is prior to the question of whether philosophical skepticism is true. For unless we (at least implicitly) have a way of distinguishing what counts as a real argument from what doesn't, we cannot even approach the issue, philosophically. This doesn't mean the skeptic would 'win' dialectically; it means the practice of dialectic could never get started. We would have no conception of what it would be to motivate skepticism, or anything else (nor, of course, could this fact itself motivate skepticism).
What is wrong with argument (B) is that, not only does it have a false premise (like most philosophical arguments), but its premise doesn't even appear true; indeed, it seems obviously false. Hence, it doesn't even give us a prima facie reason for doubting that S knows that P. Similarly, the premises of (C), while not obviously false, do not particularly seem true either. Hence, we do not take them even prima facie as grounds for thinking S knows that P. This is why neither (B) nor (C) requires philosophical discussion or response.
The above illustrates the way in which AC is presupposed in the practice of dialectic: what we count prima facie as motivating a position is a matter of what at first seems true. Clearly the intuitively plausible (true-seeming) premises of (A) are taken as prima facie justified; else the argument would give no motivation at all to skepticism. (Also clearly, but less importantly for our present purposes, they are taken as only prima facie justified, or else discussion of the issue would terminate once the validity of the argument was agreed upon.) If AC is false, then dialectic as we practice it is fundamentally and intrinsically irrational. In calling it 'intrinsically' irrational, I mean to imply that the problem would be irremediable; there is nothing that could be recognized as a form of dialectic or reasoning that would not be irrational. As a result, it is impossible (coherently) to argue against AC. In engaging in argumentation, one is presupposing that there is a distinction between real arguments and mere random strings of sentences such as (B) and (C). If AC is false, there is no such distinction. Thus, the transcendental argument for AC.
The account of perceptual knowledge I have just given -- namely, that perceptual beliefs are justified by virtue of being based on perceptual experiences, where these experiences are themselves neither justified nor unjustified -- has been described by some philosophers as "the myth of the given."(57) This doctrine of 'the given' is supposed to face the following dilemma: Either the basic cognitive states (in my case, 'perceptual experiences') have propositional content, or they do not. If they have propositional content, then they are the sort of things that can be true or false; therefore, they are the sort of things that can be justified or unjustified; therefore, they stand in need of justification. If they lack propositional content, then they do not stand in need of justification; however, they also cannot confer justification on our beliefs, for only a proposition can stand in relations of confirmation or entailment to a proposition.(58)
The second horn of the dilemma is meant for those who would rest with mere qualia as the basic mental states conferring justification on our beliefs or who would attribute non-propositional content to experiences. Clearly, the first horn of the dilemma is meant for my view, for, while I attribute non-conceptual content to perceptual experiences, this content is still propositional insofar as our experiences represent the world as being a certain way.(59) My visual experience might, for example, represent it as being the case that there is a thing of shape S and color C, where S and C are highly specific properties for which I have no concepts.
In response to the objection, I grant that perceptual experiences can have something like truth and falsity (I call it "veridicality" and "unveridicality" -- the important point is that the content of the experience can be satisfied or not). But I reject the inference that therefore, they can be justified or unjustified. Neither BonJour nor McDowell presents an argument for the claim that if a cognitive state can be correct or incorrect, then it can be justified or unjustified.
The claim seems false, when we consider the case of perceptual experiences. There are, of course, many kinds of mental states that can be and frequently are, in ordinary life, spoken of as justified or unjustified. Beliefs can be justified or unjustified. One can say, "Joe has good reason for believing that" or "Belief in God is irrational." Emotions can even be justified or unjustified. One can say, "Joe has every right to be angry after what you did" or, "There's no reason to be sad." (This is not exactly epistemic justification, but it is a kind of justification.) But one never says, "Joe has good reason to hear that noise" or, "It is irrational of you to have a visual experience of a pink rat" or, "Sue has every right for it to seem to her that there is a cat here." These statements seem nonsensical, like category errors.
Now, BonJour did not exactly claim that perceptual experiences can be justified or unjustified. What he claimed was that any state that has representational content can be justified or unjustified. But (a) it is obvious that there are perceptual experiences. (b) It is obvious that they have representational contents. A visual experience of a cat on the mat represents there to be a cat on the mat. (I have defended both of these points previously.) And (c) as we have just seen, it makes no sense to speak of a perceptual experience as justified or unjustified. From these three facts, it follows that there are states with representational content that can be neither justified nor unjustified. It seems difficult to reject (a) or (b),(60) so BonJour will seemingly have to say that perceptual experiences can be justified or unjustified.
This response may seem like begging the question on my part. I have done little more than to assert that justification doesn't apply to perceptual experiences. But it does at least seem that way -- my examples above show that -- and BonJour has done nothing more than to assert that justification does apply to perceptual experiences (insofar as they have content). So I think the charge of begging the question applies rather to BonJour's objection to the doctrine of the given than to my response.
There is more to say about why justification does not apply to perceptual experiences -- we can explain why justification does not apply to experiences, although the truth of this explanation is, I think, less certain than the simple observation that justification just does not apply to experiences. The explanation is something like the following: The notion of "justification" is evaluative. Moreover, it is a particular kind of evaluative notion: it is a term of praise, while "unjustified" is a term of blame. The import of calling a proposition unjustified is that one (in some sense) shouldn't accept it; the import of calling Joe's anger unjustified is that Joe (in some sense) shouldn't be angry; just as the import of calling an action unjustified is that one shouldn't perform it. The reason justification does not apply to perceptual experience, then, is that it makes no sense to say of someone that they should or shouldn't have a given perceptual experience. When placed in appropriate circumstances, I will just automatically have the visual experience of a cat. To praise or blame me for this would be comparable to praising or blaming me for having (naturally) brown hair.
One can say some loosely related things. I could be blamed for believing what I (seem to) see. I could be blamed for placing myself in the circumstances in which I have a visual experience of a cat. My visual experience could be described as faulty by reason of being unveridical. My perceptual mechanisms might be called defective if, for example, they are abnormally prone to illusion and hallucination. But all of these things need to be distinguished from blaming me for having the experience.
The account given above is very liberal -- whenever it seems to you as if P, you're prima facie justified in believing that P. But whenever you believe that P, it seems to you as if P. To people who believe in astrology, for example, it seems that astrology is a reliable guide to life. So how is it possible, on this account, for any belief not to be justified?
There are two answers to this. First, the principle of appearance conservatism only describes prima facie justification, not justification simpliciter. A person can very easily be prima facie justified in believing that P but overall unjustified in believing that P. This would be the case if there are other (for him) prima facie justified propositions that significantly disconfirm P (and there's no reason to prefer P over these other propositions). For instance, our background knowledge of astronomy indicates that it is unlikely that the locations of the planets have any measurable influence on human affairs; the planets being inanimate objects, it is still less likely that any influence they had would be meaningful (e.g., giving one generally good luck on a certain day, or directing one to find one's intended mate); finally, modern society contains a great many tricksters and con artists who have an interest in fooling people. These are the sorts of considerations that one would raise to cast doubt on the supposition that astrology is a reliable guide to life.
Second, as I mentioned above, it is not the case that whenever S believes that P, it really seems to S as if P.
The analogy between epistemic justification and practical justification is again instructive. There is a plausible view according to which S cannot be blamed for performing an action that he sincerely (and without negligence) believed to be right (recall the example of the fisherman who discovers the pearl(61)). This might make one wonder how it is possible for an action to be irrational, for surely S will always choose what seems to him the best course of action? However, pace Socrates, it is false that whenever S chooses A, A seems to S to be good, or the best available choice. There is weakness of will: sometimes S intentionally selects the worse option. This, of course, is (practically) irrational. Similarly, there are the phenomena of self-deception and 'leaps of faith': sometimes S accepts what does not seem to be the case, and this is (epistemically) irrational. This typically happens in the case of religious beliefs, political ideologies, beliefs about oneself (especially about one's own abilities and virtues), and 'metaphysical' beliefs that have an emotional appeal (such as the belief in reincarnation or ESP).
Appearance conservatism is reminiscent of epistemic conservatism, the view that whenever S believes that P, the fact that S believes that P provides at least some degree of justification for P. Richard Foley has criticized this principle,(62) so it may be worthwhile to consider whether his criticism can also be applied to appearance conservatism.
Foley produces the following thought experiment. Imagine that S is considering some proposition, P. S's evidence for P is almost strong enough to make it rational for him to believe P -- it is just barely more rational for him to withhold judgement than to accept P. Suppose, however, that S accepts P anyway. If epistemic conservatism is true, then as soon as S formed this irrational belief, it would become rational -- for, in addition to the evidence he already had for P, S now has, counting in favor of P, the fact that he believes it, which pushes P over the threshold for rational belief.(63) But this does not seem right.
Now, it might seem that this argument can be applied to appearance conservatism, with the following slight redescription of the case: S misconstrues his evidence, overlooks some of his evidence, or commits some fallacy, which causes it to seem to him as if P. As a result, he accepts that P. Even if the degree of prima facie justification resulting from its seeming to S as if P is very small, when added to the evidence S originally had, it would push P over the threshold of rational belief. Does this constitute a counter-example to appearance conservatism?
No, it does not constitute a counter-example to appearance conservatism, because the result in this case is correct: unless and until S discovers the fallacy in his reasoning, he is justified in believing that P. I think this is quite plausible, even if we consider a more extreme case, a case in which S's evidence did not truly support P at all. Suppose that S is a mathematician who is wondering about the truth of some mathematical propositions. He initially has no reason for affirming or denying any of them (he has no intuition about them, has not heard of any arguments for or against them, etc.). He starts by trying to prove Q. And he succeeds. His proof is valid, he knows the axioms, and everything else goes right, so he knows that Q. Next he turns to P. He again produces what appears (just as clearly as in the case of Q) to be a proof of P, which also starts from axioms that he knows to be true. However, S's argument for P contains a subtle fallacy of which S is unaware, even though he exercised the same due care in constructing this argument that he used in his deliberations about Q. Hence, the argument in some sense does not genuinely support P, even though it makes it seem to S as if P.
I take it that S is justified in believing Q. After all, this is the paradigmatic case of justified belief (for an internalist); if this doesn't count as justification, I don't know what would. Given this, what should we say about P? What attitude would it be rational for S to take towards P: should he accept P, withhold P, or deny P?
It is clear that S should not deny P. And it is hard to see
how it could be rational for him to withhold P either, given that
it would not be rational to withhold Q (refusal to believe can be
irrational, just as belief can) and the argument for P seems just
as compelling. If S declines to accept P, we might well ask him why
he accepted Q but not P. Of course, we, omniscient observers, can
supply a reason for not accepting P. But what we are concerned with
is internal justification; thus, what rationale would S be in a
position to supply for his refusal to accept P? Not the fact that
his argument for P contains a fallacy, for he has no inkling of
that. He has no more reason to suspect the argument for P than he
has to suspect the argument for Q. Of course, S may know, as a
general matter, that people, even mathematicians, are prone to
occasional errors in reasoning. However, unless this fact is to
preclude one from ever obtaining knowledge through reasoning, there
must be some practicable precautions such that their exercise
enables the arguer to justifiably accept the conclusion of an
argument (these precautions might include, for example, checking the
proof over at least once or, for difficult proofs, showing it to
another expert). We stipulate that such necessary caution was
exercised in the case of the proof of Q, and it seems clear that it
might also have been exercised in the case of P. I conclude that
it would be irrational for S not to accept P; thus, S is justified
in believing that P.
Our last objection is adapted from I.T. Oakley's criticism of the notion of foundational beliefs. Oakley imagines a person, S, blind from birth, who suddenly gains the use of his eyes.(64) One of the first things S sees is a piece of paper. Suppose that as a result, it seems to him as if there is a piece of paper there. Oakley says that this individual, at this point, has no justification for believing either that his visual judgements are reliable or that they are not. He invites us to conclude that S is not justified in believing that there is a piece of paper there. This cannot be attributed to the presence of defeaters for the belief that there is a piece of paper there, because S has no reason for thinking his vision is not reliable. According to Oakley, this shows that the visual, perceptual belief that there is a piece of paper there (even in normal cases) depends for its justification on the belief that one's vision is reliable. Thus, it is not foundational. I think Oakley would endorse the following criticism of my view: S needs justification for believing that his 'seemings' are reliable (i.e. that when it seems to him as if P, P is probably true), so its merely seeming to S as if P does not give S any foundational justification for believing that P.
I think this objection is mistaken and possibly rests on a confusion about the concept of foundational justification. The foundationalist, as Oakley recognizes, does not need to claim that our 'foundational' beliefs are initially certain. That is not part of the concept of a foundational belief. What Oakley may not realize is that the foundationalist also need not claim that foundational beliefs have an initial degree of warrant sufficient (in the absence of defeaters) to make them count as knowledge. In fact, all the foundationalist need claim is that there are beliefs that have some initial degree of justification. In particular, the appearance conservative need only claim that its seeming to S as if P produces some degree of prima facie justification for believing that P. The mutual agreement of several such prima facie justified beliefs can then increase their degree of justification to the level required for knowledge.(65) In fact, this supposition fits well with how we should naturally treat Oakley's case: the newly sighted man would probably form rather tentative visual beliefs at first, but when he found that these beliefs tended to cohere with each other and with the beliefs produced by the other senses, he would quickly become more confident.
The intuition that S has at least some justification for thinking that P as a result merely of its seeming to him as if P can be pumped by imagining a slightly different case. Imagine again that S has suddenly gained the use of his eyes for the first time in his life, through an act of God, while walking down the street. Again, he has no particular reason either for asserting or for denying that his newly-gained faculty is reliable (unless, of course, the proposition that one's senses are reliable is automatically, foundationally justified). Now suppose that the first thing S sees is what appears to be a large truck headed for him. What would it be rational for S to do? Jump out of the way, presumably. But if the circumstance of its visually appearing to S that a truck is headed for him did not provide him with at least some grounds for thinking a truck was headed for him, then there would be no reason for jumping out of the way. There was no reason for S to jump out of the way before the visual experience of the truck occurred (assume he did not hear the truck or in any other way sense it), and if S's visual experience really carries no information about whether a truck is approaching, his attitude towards the possibility of a truck hitting him should remain completely unchanged.
Notice that I am not arguing that in order to have reason to jump out of the way, S must know or even be justified in believing that a truck is approaching. But S must have at least some reason for thinking a truck is approaching, since he clearly had no reason for jumping out of the way before gaining his eyesight (when he clearly also had no reason for believing a truck was approaching). This is all I need to uphold the principle of appearance conservatism.
5. DIRECT REALISM AND SKEPTICISM
In this final chapter, I examine the bearing of the version
of direct realism I have developed on three important skeptical
problems. These skeptical problems I call the regress argument,
Hume's problem, and the brain-in-a-vat argument. We will see that
my direct realist theory provides a natural response to each form
of skepticism.
5.1. Hume's problem
Of the three arguments we have to consider, David Hume's
argument for external world skepticism is the one most obviously
related to the issue of direct vs. indirect realism. Hume argued
essentially as follows:(66)
1. The only things of which we are directly aware are our own mental states.
2. No conclusion about physical things can be deduced from premises about mental states.
3. No conclusion about physical things can be inductively inferred from premises about mental states.
4. Hence, we have no knowledge of physical things.
Earlier theorists have tended to assume that we have foundational knowledge of something if and only if we are directly aware of it, so in Hume's view, (1) secures that we do not have foundational knowledge of physical facts. (2) and (3) together secure that we do not have inferential knowledge of physical facts, for induction and deduction are the only kinds of inference there are. Finally, since foundational knowledge and inferential knowledge are the only kinds of knowledge there are, it follows that we have no knowledge of the physical world.
Idealists such as Berkeley would reject premise (2): since physical things are just collections of ideas, facts about physical things certainly can be deduced from facts about ideas. This move, however, does not seem plausible. Still less plausible than the thesis of idealism itself -- that there are only minds and ideas -- is the assertion that idealism stands in opposition to skepticism(67). Pace Berkeley, by "a physical object" I do not mean a collection of ideas (nor does any normal English speaker), so if the idealist holds that all we know are minds and ideas, he is ipso facto holding that the conclusion, "We have no knowledge of physical things," is true. In short, this is not a way of avoiding skepticism at all, but just a way of relabelling a skeptical thesis.
Indirect realists will reject (3). Hume, of course, had a further argument to give in support of (3). In Hume's view, induction is a process of generalization from particular premises. We observe a certain regularity to hold in a limited number of cases, or we observe some of the A's to have a certain characteristic; and we infer that the regularity will also hold in the unobserved cases, or that the unobserved A's will also have the characteristic. If one observes a large number of white swans, one may inductively infer that all swans are white. If one observes a large number of bodies falling when dropped, one may infer that all bodies fall when dropped. But one could not inductively infer that all swans are white from premises about, say, black cats. Nor could one make the inference that all bodies fall when dropped on the basis of observations of bodies that weren't dropped or that did not fall. Now, according to Hume, the only premises we get to start with when we are seeking knowledge of the physical world are premises about our own mental states. So the only things that could be inductively inferred would just be generalizations about mental states -- mental states of type A always have feature F, mental state A is always followed by mental state B, etc. One never gets the chance, so to speak, to step outside one's head in order to observe the correlation between mental events and physical events, so one never has the data required to determine that perceptual experiences are indeed caused by physical events.
One can certainly challenge Hume's account of induction -- one could argue that there is another kind of non-deductive inference which does not amount to generalization from particular (or singular) premises (whether one calls this third kind of inference a form of 'induction' or not does not matter). One could proceed to argue that it is through this sort of inference that beliefs about the external world are justified.(68) The viability of this project is, I think, open to debate (my own judgement is that no one has given anything but the sketchiest account of how such an inference would go). There is, however, a much simpler and more straightforward approach available.
The direct realist will reject premise (1). We are directly aware of physical things in perception, and we have foundational knowledge of physical facts, so Hume's problem, the problem of how you can ever 'get outside your head,' doesn't get off the ground.
Again, Hume had a further argument to present in favor of (1).
It is a version of the argument from illusion:
This very table, which we see white, and which we feel hard,
is believed to exist, independent of our perception, and to be
something external to our mind, which perceives it... But this
universal and primary opinion of all men is soon destroyed by
the slightest philosophy, which teaches us, that nothing can
ever be present to the mind but an image or perception... The
table, which we see, seems to diminish, as we remove farther
from it: but the real table, which exists independent of us,
suffers no alteration: it was, therefore, nothing but its
image, which was present to the mind.(69)
The argument seems to be something like the following:
1a. Either we are directly aware of the real table, or we are directly aware of a mental image of the table (exclusive or).
1b. The thing we are directly aware of seems to diminish in size as we move away from it.
1c. The real table does not diminish in size as we move away from it.
1d. Therefore, the thing we are directly aware of is not the real table. (from 1b and 1c)
1e. Therefore, we are directly aware only of a mental image of the
table. (from 1a and 1d).
I think that if Hume has established (1e), then we might as well
grant him the full generalization,
1. The only things of which we are directly aware are our own mental states.
However, Hume has not established (1e). There are quite a few things wrong with the argument. First, if the main argument is correct -- so (4), "We never have knowledge of physical things," is true -- then we could not know the premises of the argument for (1). For (1c) is a premise about a physical thing. According to Hume, we have no justification at all for accepting (1c). However, Hume might respond by saying that what he is doing is providing a sort of reductio ad absurdum of our ordinary, common sense beliefs. These beliefs include (1c) and they also include the negation of (4). So I think Hume can escape this problem.
The second problem is that (1b) is not true. When I move away from a table, it does not seem to me that the table is shrinking. It just seems that the table is getting farther away. I can imagine what it would be like for the table to shrink while not getting farther away, and I can imagine what it would be like for the table to shrink while also getting farther away at the same time. In either case it would look as if the table was shrinking, but it certainly does not look that way presently (when it is merely getting farther away). Hume might respond by changing the case. Take the case of the infamous straight stick that appears bent while half-submerged in water. Here I think it is clear that the thing one sees really does seem to be bent.
However, there remains a third problem. The inference to (1d) is invalid. If the real table does not diminish in size and the thing we are directly aware of does, then it follows that the real table is not identical with the thing we are directly aware of (by Leibniz' law). But if the thing we are directly aware of merely seems to diminish, then that conclusion does not follow, for it may well be that the thing seems to diminish but does not actually diminish.(70) Consider again the straight stick that appears bent while half-submerged in water. The only thing we see is the (real) stick, which is straight, but appears bent. There is no contradiction in that, so there is no need to posit a second stick. Of course, Hume might try just revising his premise, to claim that the stick we see is bent. But then there is no reason to accept the premise. The direct realist will simply reject that we are directly aware of something that is bent; instead, we are directly aware of something that falsely appears bent.(71)
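The logical point here can be made explicit. The following schematic contrast is my own gloss on the text (the notation is not the author's), using D for "actually diminishes" and S for "seems to diminish," with t the thing we are directly aware of and r the real table:

```latex
\begin{align*}
&\text{Valid (Leibniz's law):} && D(t),\ \neg D(r) \ \vdash\ t \neq r \\
&\text{Hume's actual premises:} && S(t),\ \neg D(r) \ \nvdash\ t \neq r
\end{align*}
% The second inference fails because S and D are distinct predicates:
% it is perfectly consistent that t = r, with S(r) and \neg D(r) both
% true -- i.e., the real table seems to diminish without actually
% diminishing. No second, mental table is needed.
```

The first inference instantiates Leibniz's law; the second merely resembles it, which may explain the argument's surface plausibility.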
Why did Hume think that (1d) followed from (1b) and (1c)
anyway? Perhaps he had a suppressed premise, something like
1f. If one is directly aware of X, then X is the way it appears.
which is a strong version of the content-satisfaction condition we
put forward in section 2.4. But notice that if Hume's argument were
sound, a parallel argument could be constructed to prove that we are
not directly aware of a mental image after all:
5. The thing we are directly aware of seems to be in front of us, in physical space.
6. The mental image (if there is one) is not in front of us, in physical space.
7. Therefore, the thing we are directly aware of is not the mental image.
Or perhaps even better:
8. The thing we are directly aware of seems to be the real table.
1f. If one is directly aware of X, then X is the way it appears.
9. Therefore, the thing we are directly aware of is the real table.
Hume, it seems, has already granted premise (8) in the first sentence of the passage quoted above.
What is wrong with these arguments? Apparently, that they rely on a too-strong content-satisfaction condition for direct awareness. An object of direct awareness need not be the way it appears in every respect; it only need be the way it appears in some important respects. Hume must grant this point, because he cannot allow the arguments for (7) and (9) to go through. But then there remains no reason why we may not be directly aware of the real table, or the straight stick, despite the minor (alleged) illusions.
A last move for the Humean would be to turn to the case of hallucination -- e.g., I hallucinate a pink rat on the table. In this case, what is the direct object of awareness? Since no physical object remotely like a pink rat is present, the object of direct awareness must be a mental object, the image of a pink rat. Since, as we argued in section 2.3, the same kind of mental state occurs during a normal perception as during a hallucination, it becomes at least plausible that during normal perception one is also directly aware of a mental image.
The correct response to the question, "What is the direct
object of awareness in hallucination?", however, is not "a mental
image," nor is it "some physical thing." The correct response is
"nothing," because a hallucination is not a state of awareness at
all. "Awareness" is a success term, so a radical misrepresentation
such as a hallucination does not constitute the awareness of
anything. To seek the object one is aware of when one hallucinates
is comparable to seeking the fact one knows when one has a false
belief. To conclude that this object must be a mental object
because no physical object will do, is comparable to positing a
special kind of 'false fact' that one can know in having a false
belief because no actual fact will do.
5.2. The regress argument
The regress argument, due originally to Agrippa, made a brief
appearance in the last chapter:
1. In order to know that P, S must have an adequate reason for believing P (where S is any person and P is any proposition).
2. S's reason for believing P is adequate only if it is something S knows.
3. S cannot have an infinite regress of reasons.
4. A series of adequate reasons cannot be circular.
5. Therefore, S cannot know that P.(72)
If S has a series of reasons standing behind his belief that P, then this series must have one of three structures: either it terminates in some 'foundational propositions' (which is ruled out by premises (1) and (2)), or it stretches back infinitely (which is ruled out by (3)), or it goes in a circle at some point (which is ruled out by (4)). Since we have ruled out each of these possibilities, we must conclude that S cannot have a series of adequate reasons for his belief that P.
Before discussing the direct realist response, let us say some more to motivate each of the premises of this valid argument. Suppose I claim that the world is going to come to an end in the year 2000. You ask me why I believe that, and I say, "Oh, no reason. I just believe it." In this case, you would conclude that I do not know that the world is going to come to an end in the year 2000. It seems as if the reason for this is (1): I don't know that the world is going to come to an end because I have no reason for believing it.
Suppose again that I claim the world is going to come to an end in the year 2000 and you ask me why I believe this. This time, I say I believe this because I think the Plutonians are going to launch a lethal nuclear strike in that year.(73) You ask me why I believe the Plutonians are going to launch the nuclear strike. I say, "Oh, no reason. I just believe it." Again, you would likely conclude that I do not know that the world is going to come to an end in the year 2000. This time, I have a reason for believing it, but my reason is something that I do not know. Thus, premise (2) accounts for this case.
Next, suppose that when asked why I think the world will come to an end, etc., I report, "I believe the world will come to an end in the year 2000, because the world will come to an end in the year 2000." This doesn't seem like an adequate reason. Nor does my situation seem to be helped if I make the circle bigger. Suppose I say, "I believe the world will come to an end in the year 2000 because the Plutonians are going to launch a lethal nuclear strike in that year, and I believe the Plutonians are going to launch a lethal nuclear strike in the year 2000 because the world is going to come to an end in that year." This is no better, and it won't help if I incorporate 3 beliefs, or 4, etc., into the circle.
Lastly, it does not seem plausible that any ordinary, finite human mind has an infinite series of reasons for any of its beliefs. Note that, according to (1), for S to know that P, there must be an adequate reason for believing P which is S's actual reason for believing it. This is important, because you might think there is no difficulty in the mere existence of an infinite series of facts each confirming the next.
The most common response to the argument is to reject premise (1). Most people will say there are certain foundational propositions for which we do not need reasons in order to know. But the skeptic will certainly want to know what makes these foundational propositions so special. In other words, suppose that A is some foundational proposition. And let B = the proposition that there is a purple elephant roaming the surface of Pluto. If we may assume A to be the case without having any reason for it, why may I not also just assume B to be the case? There is, by hypothesis, no more reason for accepting A than there is for accepting B, so what makes A relevantly different from B? It would seem that there are just two possibilities: either nothing differentiates them, or there is some feature, F, that A has but B lacks (or that B has but A lacks).
Suppose nothing differentiates them. In that case, it seems pretty clear that since B is unjustified, A is unjustified too, and we do not know it to be true. We surely do not want to say that any arbitrary proposition is justified.
Suppose, then, that A and B are differentiated by some feature F. To take the classic example, A might be a 'clear and distinct' perception, while B is not.(74) In that case, it appears that A is not foundational after all -- i.e., not lacking in any reason for its acceptance -- for the reason for accepting that A is the fact that A has the feature F. This seems to constitute a reductio ad absurdum of the idea of foundational knowledge: from the assumption that A is foundational, we seem to be led to the conclusion that A is not foundational after all.(75)
Now, my direct realist view may be regarded as a rejection of either premise (1) or premise (2) in the regress argument, depending on how one wants to construe the notion of a 'reason for believing.' The regress argument assumes (even the argument I gave for (2) assumes) that a reason for believing P must itself be something that S believes. But in my view, there are at least two different kinds of state of awareness, beliefs and experiences, and experiences can support beliefs, just as beliefs can support other beliefs. However, experiences are not items of knowledge and are neither justified nor unjustified (as discussed in section 4.3, under objection #1). Hence, the threatened regress ends at the point where our beliefs make contact with our perceptual experiences.
This response can be seen as a rejection of (1) if one construes "reason for believing" in such a way that reasons must always be beliefs -- in that case, I will say that those beliefs which are based directly on perceptual experiences do not have 'reasons' for them, but they may nevertheless constitute knowledge. And I would respond to the supposed reductio of foundationalism given above as follows. The difference between one of these perceptual beliefs and the belief that there is a purple elephant on Pluto is simply that the former are, but the latter are not, based on experience. Suppose that S is having a perceptual experience of a cat on the mat, and not of a purple elephant on Pluto. Then S is prima facie justified in believing that there is a cat on the mat but not prima facie justified in believing that there is a purple elephant, under the account of justification for perceptual beliefs I have given. Nor can we now claim that "S is having a perceptual experience of a cat on the mat" is S's reason for believing that there is a cat on the mat. For we are presently assuming that 'reasons' are beliefs, and on my account, S need not believe that S is having a perceptual experience of a cat on the mat in order to gain the postulated prima facie justification for thinking there is a cat on the mat; rather, S need only have that perceptual experience. To appreciate this point, it is crucial that one distinguish the reason why S is justified in believing P (i.e., the fact that explains why S's belief is justified) from S's reason for believing P. On the present account, S has no such 'reason' (i.e., no further belief from which P is inferred), but this does not mean there is no explanation for why S's belief counts as justified. The occurrence of the relevant sensory experience is that explanation.
On the other hand, suppose one construes "reason for
believing" in a broad sense, so that both experiences and beliefs
can count as 'reasons.' In that case, my account is a
straightforward rejection of (2). S's reason for believing P might
be an experience. Since knowledge is a form of belief and
experiences are not beliefs, S's reason for believing P would not
be an item of knowledge. The argument I gave above in favor of (2)
fails against this account. There, I imagined that my reason for
thinking the world will come to an end in the year 2000 is that the
Plutonians are going to launch a lethal nuclear strike against the
Earth in that year. When pressed as to why I believe the latter
proposition, I admit that it was just a whimsical hunch. However,
this case is not comparable to the kind of case that my account
allows. My account does not allow that S's adequate reason for
believing P might be an unjustified belief of S's. It allows,
rather, that S's reason for believing P might not be a belief at
all. And if we imagine that sort of case, I think the intuitions
will go my way: suppose S believes that there is an elephant in the
room, because he has a visual experience of one (it doesn't matter
whether the experience is a perception or a hallucination). Then
intuitively, S's belief is (at least prima facie) justified.
5.3. The brain-in-a-vat argument
The brain-in-a-vat (BIV) argument is usually formulated as follows:
1. If S is justified in believing P and P entails Q, then S is justified in believing Q.(76)
2. I'm not justified in believing that I'm not a brain in a vat.
3. That I have a body entails that I'm not a brain in a vat.
4. Therefore, I'm not justified in believing that I have a body.
Of our three skeptical arguments, this is perhaps the one to which
the relevance of direct realism is least obvious. In order to see
how my version of direct realism bears on this form of skepticism,
we will first need to examine two contemporary objections to the
brain-in-a-vat argument. We will see that the BIV argument can and
should be reformulated in such a way as to avoid these objections.
However, we will also be able to see that the reformulated argument
presupposes a form of indirect realism, and that my account of
perceptual knowledge is immune to the argument.
5.3.1. Two contemporary responses
Premise (1) of the skeptical argument is called "the Closure
Principle" -- the principle that the set of propositions one is
justified in believing is closed under entailment. This principle
is highly plausible intuitively, but some epistemologists have
challenged it.(77) Fred Dretske cites the following case: Imagine
you're at the zoo. In a pen clearly marked "zebras", you see some
black-and-white-striped, equine animals. It seems that you have
good reason to believe that those animals are zebras. Surely their
zebra-like appearance counts strongly in favor of their being
zebras, as does their being in the zebra pen at the zoo. Now, their
being zebras entails that the animals are not mules that have been
cleverly disguised by the zoo authorities to look like zebras. Are
you justified in believing that the animals are not cleverly
disguised mules? Dretske says no:
If you are tempted to say "Yes" to this question, think a
moment about what reasons you have, what evidence you can
produce in favor of this claim. The evidence you had for
thinking them zebras has been effectively neutralized, since
it does not count toward their not being mules cleverly
disguised to look like zebras.(78)
Dretske views this as a counter-example to the Closure Principle: you are justified in believing that the animals are zebras, that they are zebras entails that they are not cleverly disguised mules, but you are not justified in believing that they are not cleverly disguised mules.
What can we say against this? Can we defend the intuition behind the Closure Principle? The Closure Principle holds that when S is justified in believing P and P entails Q, S is justified in believing Q. There are at least two reasons why this might be the case. One reason would be that the very same justification S has for P also counts as justification for Q -- i.e., whatever evidence supports P would also support Q when Q is a logical consequence of P. If that were generally true, then the Closure Principle would follow. But Dretske's example effectively refutes this idea: it shows that what is evidence for P need not be evidence for every logical consequence of P. The fact that the animals in the pen look like zebras is evidence that they are zebras, but it is not evidence that they aren't mules cleverly disguised to look like zebras.
However, another reason for believing the Closure Principle is this: if S is justified in believing P and P entails Q, then P, itself, constitutes an adequate reason for believing Q. The idea is simply that deduction is an epistemically permissible way to expand your corpus of beliefs. This idea is probably the real source of the intuition in favor of closure. What Dretske says about his zebra-in-the-zoo case does not address this idea; what he says is only that the evidence you have for thinking the animals are zebras is not evidence against their being cleverly disguised mules. That much seems clear. But Dretske doesn't explain why the fact that the animals in the pen are zebras wouldn't be a sufficient reason for thinking that they're not cleverly disguised mules, given that you justifiably believe that the animals are zebras.
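The two candidate grounds for closure just distinguished can be put schematically (a sketch in my notation, not the author's; "J_S" abbreviates "S is justified in believing," and "Supports" and "Reason" are glosses introduced here):

```latex
% Closure Principle (the principle itself):
%   if S is justified in believing P, and P entails Q,
%   then S is justified in believing Q.
(\mathrm{CP})\qquad J_S(P) \wedge (P \vDash Q) \;\rightarrow\; J_S(Q)

% Ground 1 (evidence transfer) -- refuted by the zebra case:
%   whatever evidence supports P supports every consequence of P.
(\mathrm{G1})\qquad \mathrm{Supports}(E, P) \wedge (P \vDash Q) \;\rightarrow\; \mathrm{Supports}(E, Q)

% Ground 2 (deductive expansion) -- untouched by Dretske's reply:
%   a justified P is itself an adequate reason for its consequences.
(\mathrm{G2})\qquad J_S(P) \wedge (P \vDash Q) \;\rightarrow\; \mathrm{Reason}(P, Q)
```

The zebra case is a counter-example to (G1) but, as the next paragraph argues, not to (G2).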
Peter Klein has pressed the above point.(79) However, Klein argues that, even though the above reply enables the skeptic to defend premise (1) against Dretske's attack, it nevertheless leaves the skeptic with a problem in defending premise (2). To defend premise (2), what the skeptic has to argue is that I have no available source of justification for the proposition that I'm not a brain in a vat. In defending the Closure Principle, we have just said that when P entails Q and P is justified, P is itself an adequate source of justification for Q -- deductive arguments are a way of justifying propositions. So the skeptic must argue that, among other things, I don't have that kind of justification for believing I'm not a brain in a vat. Since, as premise (3) now assures us, the proposition that I have a body would provide just that sort of adequate justification for thinking I'm not a brain in a vat, the skeptic would have to argue that I don't have that proposition in particular available as a source of justification for thinking I'm not a brain in a vat.
What this means is that, in order to establish premise (2), the skeptic would first have to establish that I'm not justified in believing that I have a body, since that belief, if justified, would be one adequate source of justification for the claim that I'm not a brain in a vat. But that I'm not justified in believing that I have a body is just the conclusion of the argument. So it seems that the skeptical argument begs the question -- one of its premises can't be established unless the conclusion is established first.(80)
To put the point another way:(81) suppose I start out thinking that I'm justified in believing I have a body, and the skeptic then proposes to argue me out of this position. He starts by informing me that the Closure Principle is true, because it is epistemically permissible to add to your body of beliefs the deductive consequences of any of your justified beliefs. The skeptic then asserts that I have no available justification for believing I'm not a brain in a vat. I naturally reply, "Yes I do, because I justifiably believe I have a body, which entails that I'm not a brain in a vat, and you just told me that it is epistemically permissible to add to my belief system the deductive consequences of any of my justified beliefs." What will the skeptic say? Why is this not an adequate source of justification for thinking I'm not a brain in a vat? Because I'm not justified in thinking I have a body? But that's just the conclusion the skeptic is trying to establish; I'm not going to grant that off hand. So the skeptic needs some other argument for the claim that I'm not justified in thinking I have a body. But if he has such an argument, then he doesn't need to use the brain-in-the-vat argument to begin with, because he would have an independent argument for the same conclusion.
In short, Dretske's response to the argument is this: Okay, I
don't know whether I'm a brain in a vat, but that doesn't matter;
I still know that I have a body. And Klein's suggestion is this:
Suppose we grant the Closure Principle. Then the skeptic's claim
that I'm not justified in believing I'm not a brain in a vat just
begs the question.
5.3.2. What's wrong with these replies?
In spite of what we have said above in the way of philosophical analysis, I think intuition still balks at these responses. It seems as if there must be something wrong with them. It doesn't seem right that I can just admit that I don't know whether I'm a brain in a vat and go on claiming to know all the things I have hitherto thought I knew. But nor does it seem right that the fact that I have two hands could be an adequate proof that I'm not a brain in a vat.
Let's try to articulate why the responses seem wrong.
Consider the following two, possibly analogous cases:
Case (i) (the courtroom case): Imagine that S is on trial for murder. The prosecution offers as evidence the fact that S's blood was found at the scene of the crime along with the victims' blood. The best explanation for this, they say, is that S cut himself while stabbing his victims. The jurors find this argument initially persuasive. However, the defense attorney offers an alternative hypothesis: perhaps S is innocent, and the blood was planted at the crime scene by overzealous police officers seeking to frame S.
We can imagine how jury members might react to the defense hypothesis. Some jurors might feel that, being unable to rule out the alternative hypothesis, they should acquit S. Jurors favoring conviction might argue that the defense hypothesis should be rejected because it requires an improbable conspiracy on the part of the police department, because the police had no motive to frame S, and so on. But one thing that a jury member could not be expected to say is the following: "Okay, I agree that we have no reason for rejecting the defense hypothesis. For all I know, S was framed by the police. But I still think we should convict S anyway, because we know he did it." Another thing that a juror would probably not come up with is this: "The defense attorney's claim that we can't rule out his hypothesis begs the question, because if we know S is guilty, then we can rule out the defense hypothesis."
The first of these unsatisfactory responses parallels
Dretske's response to the skeptic. The second parallels Klein's
response. If either of these responses were offered, they would
probably be met with looks of puzzlement from the other jury members.
Case (ii) (the scientific case): Two scientists are arguing over the interpretation of quantum mechanics. Physicist A proposes the Copenhagen interpretation, noting that it accounts for a number of weird experimental results. The Copenhagen interpretation is the received view. Physicist B then proposes Bohm's interpretation of quantum mechanics, which is incompatible with the Copenhagen interpretation, noting that Bohm's theory accounts for all of the same experimental results. Now A might be expected to object that Bohm's theory conflicts with relativity, or that it is somehow less parsimonious than the Copenhagen interpretation. But one thing A would probably not say is the following: "Okay, I agree that I can't rule out Bohm's theory; for all I know, that may be the right interpretation. But nevertheless, I still know that the Copenhagen interpretation is correct." Nor could we expect A to resort to the following objection: "Your claim that I can't rule out Bohm's theory begs the question, because if I know the Copenhagen interpretation is right, and Bohm's theory conflicts with the Copenhagen interpretation, then I can rule out Bohm's theory."
Again, both of these would strike us as illogical replies; yet
they are, respectively, analogous to Dretske's and Klein's responses
to the brain-in-a-vat argument. If Dretske's or Klein's response
to the brain-in-a-vat argument is correct, then one of these absurd
replies should be correct in the courtroom case and the scientific case.
Dretske, of course, would challenge the analogy. It is not
his view that, in order to know something, one never needs to rule
out alternative possibilities. Rather, his view is that there are
certain kinds of alternatives that one needs to rule out (call them
the "relevant alternatives") in order to know something, and there
are other kinds of alternatives that one does not need to rule out
(the "irrelevant alternatives").(82) Dretske would claim that the
brain-in-a-vat hypothesis is an irrelevant alternative, but the
defense hypothesis in the courtroom case and Bohm's theory in the
scientific case are each relevant alternatives. Of course, this
claim remains only a promissory note until it is explained what
makes an alternative "relevant". According to Dretske, an
alternative is relevant only if it is genuinely possible, in a certain objective sense:
[T]he difference between a relevant and an irrelevant
alternative resides, not in what we happen to regard as a real
possibility (whether reasonably or not), but in the kind of
possibilities that actually exist in the objective situation.(83)
Dretske doesn't give a precise analysis of the sense of "possible" he is invoking here, but his discussion makes it clear that it is a sense stronger than logical possibility, stronger than physical possibility, and non-epistemic.(84) Whether something is genuinely possible is supposed to be independent of our beliefs, evidence, and/or knowledge.
Dretske might argue that the brain-in-a-vat hypothesis is an irrelevant alternative, on the grounds that it is not, in his sense, genuinely possible for me to be a brain in a vat (perhaps because no one possesses the technology for keeping a disembodied brain alive, nor for stimulating it in the right ways). As a result, it is not a condition on our knowing about the external world that we rule out the brain-in-a-vat hypothesis.
However, Dretske would have difficulty distinguishing this
case from the scientific case. In the scientific case, we have two
competing physical theories. If one of these theories is true, then
the other one is not only false, but physically impossible. If
Bohm's theory is true then, for example, it is physically impossible
for particles to have indeterminate positions, as required by the
Copenhagen theory. This is typical of cases of competing scientific
theories. Thus, Dretske's account would imply that the two
hypotheses in the scientific case are not both relevant
alternatives; whichever theory is false is an irrelevant
alternative, because it is not genuinely possible. And therefore,
Dretske's theory really would license the conclusion that one could
know the Copenhagen interpretation to be true (assuming that it is
true) even though one has no reason to reject Bohm's interpretation.
5.3.3. A reformulation of the argument and the direct realist's response
This casts doubt on the validity of Klein's and Dretske's replies. However, all we've done so far is to pump the intuition that there is something wrong with those replies to the skeptic. We haven't actually explained what is wrong with them. Klein's response, at least, seems to work against the skeptical argument as formulated, so we need to reexamine the skeptic's argument.
The problem is that, as we formulated and defended the Closure
Principle, your having justification for the claim that you're not
a brain in a vat would be a result of your having justification for
the claim that you have a body. But what the skeptic wants to say
is that your having justification for the claim that you're not a
brain in a vat is a precondition on your having justification for
the claim that you have a body -- that you need to first be in a
position to know you're not a brain in a vat in order to be
justified in believing that you have a body. So the Closure Principle,
(1) If S is justified in believing P and P entails Q, then S is
justified in believing Q,
doesn't do justice to the skeptic's motivating idea.
Of course, it would not be acceptable to merely substitute the
following, logically stronger principle:
(5) If P entails Q, then a precondition on S's being justified in
believing P is that S be justified in believing Q.
This principle has no intuitive plausibility. For one thing, it would entail that a precondition on being justified in believing P, for any P, is that one be justified in believing ~~P, and also a precondition on being justified in believing ~~P is that one be justified in believing P; so one could never be justified in believing anything. No, the skeptic needs to say something more specific about the relationship between the brain-in-a-vat hypothesis and the claim that I have a body than that one entails the negation of the other. The skeptic needs to formulate an epistemological principle weaker than the absurd principle (5) above but still entailing that ruling out the brain-in-a-vat hypothesis is a precondition on knowing I have a body. At the same time, we want this epistemological principle, whatever it is, to account for our intuitions about the courtroom case and the scientific case discussed above.
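The circularity just described can be displayed explicitly (my schematization, using the notation introduced above; the phrase "is a precondition on" is read as in the text):

```latex
% Principle (5): if P entails Q, then being justified in
% believing Q is a precondition on being justified in believing P.
(5)\qquad (P \vDash Q) \;\rightarrow\; \big(J_S(Q) \text{ is a precondition on } J_S(P)\big)

% Since P \vDash \neg\neg P and \neg\neg P \vDash P, (5) yields both:
J_S(\neg\neg P) \text{ is a precondition on } J_S(P)
\qquad\text{and}\qquad
J_S(P) \text{ is a precondition on } J_S(\neg\neg P)

% Each state presupposes the other, so neither can ever obtain:
% on (5), one could never be justified in believing anything.
```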
So here's what we're looking for: we want an epistemological principle that, first of all, shows why in the courtroom case, we cannot merely grant that the defense hypothesis of a police conspiracy may be true and still claim to know that S is guilty. It should at the same time explain why, presumably for the same reason, we cannot merely grant that Bohm's interpretation of quantum mechanics may be correct and still claim to know that the Copenhagen interpretation is the right one. The Closure Principle, of course, would satisfy this desideratum. But second, we want the principle to explain why in the courtroom case, the defense attorney's argument does not beg the question, and in the scientific case, the physicist criticizing the Copenhagen interpretation is not begging the question either. This is where the Closure Principle falls short, because it does not tell us why the received view in these cases couldn't count as a source of justification for rejecting the rival hypotheses. Finally, we want our epistemological principle to explain why one might think the brain-in-a-vat argument was sound. We don't actually want to make the brain-in-a-vat argument out to be sound; in fact, it is a bonus if we can explain why the brain-in-a-vat argument is not sound, even though it might reasonably appear so.
Now consider the following epistemological principle, which I
will call the "Preference Principle" (because it concerns the
preference of one hypothesis over another):
(6) If E is any evidence and H1 and H2 are two incompatible
explanations for E, then S is justified in believing H1 on the
basis of E only if S has an independent reason for rejecting H2.
In this context, an "independent reason" means a reason distinct from H1 and not justified, either directly or indirectly, through H1. So the idea is that when you're faced with two competing explanations of certain data, you can't accept the one explanation until you have first ruled out the other. One's reasons for rejecting H2 might include a priori reasons, such as that H2 is significantly less simple than H1, as well as empirical reasons.
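Schematically, the Preference Principle with its independence requirement comes to this (my notation; "Expl," "Reason," and "Indep" are glosses introduced here, not the author's):

```latex
% Preference Principle (6): given two incompatible explanations
% of the evidence E, accepting one on the basis of E requires an
% independent reason against the other.
(6)\qquad \mathrm{Expl}(H_1, E) \wedge \mathrm{Expl}(H_2, E) \wedge \neg(H_1 \wedge H_2)
    \;\rightarrow\;
    \Big( J_S(H_1 \text{ on the basis of } E)
          \rightarrow \exists R\, \big[\mathrm{Reason}_S(R, \neg H_2)
          \wedge \mathrm{Indep}(R, H_1)\big] \Big)

% where Indep(R, H1) holds iff R is distinct from H1 and is not
% justified, either directly or indirectly, through H1.
```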
Notice how the Preference Principle is weaker than the principle (5) that we rejected above. (5) would require us to be able to rule out each logical contrary of H1 (in the sense of having reason to accept its negation), in order to be justified in accepting H1. Thus, for example, we would have to be able to rule out (~H1 & Q), where Q is any arbitrary proposition, as a precondition on being justified in accepting H1. But the Preference Principle doesn't demand this. It only concerns the alternative explanations of the data. If H1 is an explanation of E, (~H1 & Q) will not generally be an explanation of E. For instance, Newton's Theory of Gravity (along with background assumptions) is an explanation for the fact that things fall to the ground when dropped. But the proposition, "Newton's Theory of Gravity is false and my socks are white" is not an explanation for the fact that things fall to the ground when dropped. So in order to accept the Theory of Gravity, we are not required to have an independent reason for rejecting, "the Theory of Gravity is false and my socks are white." This is fortunate, since the only reason I in fact have for rejecting that proposition is the Theory of Gravity itself.
The Preference Principle seems plausible intuitively, and it satisfies our desiderata. In the courtroom case, the hypothesis that S is guilty and the hypothesis that S was framed by the police are two competing explanations for the fact that S's blood was found at the crime scene, so we cannot accept that S is guilty on the basis of that evidence unless we rule out the other hypothesis.(85) Also, relying on the Preference Principle, the defense attorney is not open to a charge of begging the question. To assert that we have no reason for rejecting the defense hypothesis may require begging the question, because in order to establish that we have no such reason, one must first establish that we don't know S is guilty. However, in applying the Preference Principle, the defense attorney only need assert that we have no independent reason for rejecting the defense hypothesis, i.e., no reason that is independent of the claim that S is guilty. And to argue that we have no reason independent of the claim that S is guilty for rejecting the defense hypothesis clearly does not require one to first establish that we don't know that S is guilty. Similarly, for the scientific case, we have two competing hypotheses, so according to the Preference Principle we must rule out Bohm's theory before we can accept the Copenhagen theory, and we must do so on grounds independent of the Copenhagen theory.
Now when we turn to the brain-in-a-vat argument, we can see
why the argument would appear to be sound and non-question-begging
-- if one accepts a form of indirect realism. If one accepts that
beliefs about the external world are hypotheses for which the
evidence is that we have certain sorts of sensory experiences, then
the Preference Principle comes into play. Frank Jackson states this
view particularly clearly:
Our beliefs about objects, all of them (including the ones
about causal links between sense-data and objects), form a
theory, 'the theory of the external world', which is then
justified by its explanatory and predictive power with respect
to our sense-data.(86)
Our ordinary, common sense beliefs about the external world, on the one hand, and the brain-in-a-vat hypothesis, on the other hand, are then two competing explanations for the same data. Therefore, just as in the courtroom case and the scientific case, we must rule out the brain-in-a-vat hypothesis in order to be justified in accepting our common sense beliefs about the external world on the basis of that data. So the indirect realist is faced with the responsibility of refuting the brain-in-a-vat hypothesis.
On the other hand, we can also see why we need not accept the brain-in-a-vat argument with its skeptical conclusion -- if we adopt the account of perceptual knowledge put forward in chapter 4. According to my account, perceptual beliefs about the external world are foundational; they are not hypotheses posited to explain anything. Of course, some beliefs about the external world are hypotheses posited to explain evidence, such as atomic theory or electromagnetic theory; but immediate perceptual beliefs such as "Here is a red, round thing" are not. So the direct realist is in a position to make a principled distinction between, on the one hand, the courtroom case or the scientific case, where alternative hypotheses do need to be ruled out; and, on the other hand, the case of our ordinary perceptual beliefs. In the courtroom case and the scientific case, we really do have hypotheses posited to explain certain data, and as a result, the justification of a particular hypothesis depends upon a claim of superiority for that hypothesis over the alternative explanations.
Furthermore, we can now explain simply how I know I'm not a
brain in a vat. When I look at my two hands, for example, I am
prima facie justified in believing that I have two hands. That
belief, in turn, entails that I am not a brain in a vat. Notice
that what is a question-begging argument for the indirect realist
is not question-begging for the direct realist. For the indirect
realist, the argument just proposed is circular, because I have to
start with the mere fact that I have certain sorts of experiences.
From there, I don't have any way of getting to the claim that I have
two hands except by ruling out the alternative explanations of those
experiences. So I can't use the fact that I have two hands to rule
out skeptical alternatives. But the argument is not circular as
proposed by the direct realist, because I'm allowed to start from
the claim that I have two hands. I'm not required to give an
argument for that, so in particular I do not have to give an
argument for it that presupposes the conclusion that I'm not a brain
in a vat. The conclusion that I'm not a brain in a vat can be
justified by a linear argument starting from foundational premises.
5.3.4. An objection
The direct realist line gets us out of the skeptical problem. But does it perhaps prove too much? There are some circumstances in which we genuinely need to consider alternative 'hypotheses' to our perceptual judgements. We do not want our epistemological theory to rule out all such circumstances automatically. We don't want our response to the brain-in-a-vat argument to turn into a recipe for dogmatism with respect to perceptual beliefs.
Here is an example of the sort of circumstance I have in mind. Suppose I am driving late at night. There's a stone wall running along the side of the road. And suppose I seem to see a ghostly white figure at the side of the road walk through the stone wall, at a place where there is no opening. Now I can consider a few different hypotheses. One possibility is that I just saw a ghost walk through a wall. Another possibility is that there was actually an opening in the wall that I somehow did not see, and I saw a person who was walking through it. And a third possibility -- the 'skeptical' hypothesis if you like -- is that there was neither person nor ghost there at all, and I merely hallucinated it. In this circumstance, it seems that I should weigh the advantages and disadvantages of the possible explanations for my experience, as the Preference Principle would suggest. In fact the rational conclusion seems to be the 'skeptical' one.
But wouldn't my direct realism enable me to resist this, just as it enables me to resist the brain-in-a-vat argument? Suppose I say that I have foundational knowledge that the white figure just walked through the wall, and since this entails that I did not merely hallucinate the figure, I can easily rule out that skeptical hypothesis. Isn't this comparable to claiming that since I have foundational knowledge that I have two hands, I can rule out the brain-in-a-vat hypothesis?
The key to unraveling this objection is the notion of prima facie justification. My direct realism does not hold that perceptual beliefs have a kind of justification that is immune from countervailing considerations. It holds that the justification attaching to immediate perceptual beliefs is, while foundational, nevertheless defeasible justification. The idea here is similar to the legal concept of presumption: perceptual beliefs may be presumed true unless and until contrary evidence appears. As long as there are no special grounds for doubting a given perceptual belief, it retains its status as justified, but when other, justified or prima facie justified beliefs start disconfirming it, the presumption in favor of the perceptual belief can be defeated and the perceptual belief can wind up unjustified.(87)
This is the case in the example just described. I have a certain degree of prima facie justification for thinking the ghostly figure just walked through the wall. As I might say, if I didn't know better, I would naturally (and reasonably) assume that that is what happened. However, I have a large body of background knowledge, which indicates among other things that people generally can't walk through walls and that ghosts probably don't exist, and this defeats my justification for thinking the figure walked through the wall. It seems clear that this must be the right analysis of the case -- as opposed to the view that we always need to rule out the possibility of hallucination before accepting perceptual beliefs -- because in cases in which there is no evidence either for or against the hypothesis of hallucination (e.g., if I had merely seen a rabbit sitting by the side of the road), our default assumption is that things are the way they appear.
Now the brain-in-a-vat hypothesis is different. There are no
grounds for suspecting that I'm a brain in a vat, in the way that
there are grounds for suspecting that my seeming ghost sighting is
a hallucination. So the presumption in favor of my perceptual
belief that I have two hands, for example, remains undefeated, and
this belief is therefore available for constructing an argument
against the brain-in-a-vat hypothesis.
Armstrong, David M. Perception and the Physical World (London:
Routledge & Kegan Paul, 1961).
Armstrong, David M. Universals and Scientific Realism (Cambridge:
Cambridge University Press, 1990).
Audi, Robert. Belief, Justification, and Knowledge (Belmont, CA: Wadsworth, 1988).
Austin, John L. Sense and Sensibilia (Oxford: Clarendon Press, 1962).
Ayer, Alfred Jules. The Problem of Knowledge (New York: St.
Martin's Press, 1956).
Berkeley, George. Theory of Vision and Other Writings (New York:
BonJour, Laurence. The Structure of Empirical Knowledge
(Cambridge: Harvard University Press, 1985).
Crane, Tim. "The Nonconceptual Content of Experience," in The
Contents of Experience, ed. Tim Crane (Cambridge: Cambridge
University Press, 1992), pp. 136-57.
Dancy, Jonathan. "Arguments from Illusion," Philosophical
Quarterly 45 (1995): 421-38.
Descartes, René. Meditations on First Philosophy, in The
Philosophical Writings of Descartes, vol. II, ed. Cottingham,
Stoothoff, and Murdoch (New York: Cambridge University Press, 1984).
Dretske, Fred I. "Epistemic Operators," Journal of Philosophy 67 (1970): 1007-23.
Dretske, Fred I. "The Pragmatic Dimension of Knowledge,"
Philosophical Studies 40 (1981): 363-78.
Evans, Gareth. The Varieties of Reference (Oxford: Clarendon Press, 1982).
Foley, Richard. "Epistemic Conservatism," Philosophical Studies 43 (1983): 165-82.
Foley, Richard. Working without a Net (New York: Oxford University Press, 1993).
Fumerton, Richard. "Inferential Justification and Empiricism,"
Journal of Philosophy 73 (1976): 557-69.
Fumerton, Richard. Metaphysical and Epistemological Problems of
Perception (Lincoln, NB: University of Nebraska Press, 1985).
Gettier, Edmund L. "Is Justified True Belief Knowledge?" Analysis
23 (1963): 121-3.
Goldman, Alvin I. "Discrimination and Perceptual Knowledge,"
Journal of Philosophy 73 (1976): 771-91.
Goldman, Alvin I. "What Is Justified Belief?" in Justification and
Knowledge, ed. George Pappas (Dordrecht: Reidel, 1979), pp. 1-23.
Hardin, C.L. Color for Philosophers (Indianapolis, IN: Hackett, 1988).
Huemer, Michael. "Probability and Coherence Justification,"
Southern Journal of Philosophy 35 (1997): 463-72.
Hume, David. Enquiry Concerning Human Understanding, ed. L.A.
Selby-Bigge and P.H. Nidditch (Oxford: Clarendon Press, 1975).
Hyman, John. "The Causal Theory of Perception," Philosophical
Quarterly 42 (1992): 277-96.
Jackson, Frank. Perception: A Representative Theory (Cambridge:
Cambridge University Press, 1977).
Jackson, Frank. "Epiphenomenal Qualia," Philosophical Quarterly 32 (1982): 127-36.
Johnston, Mark. The Manifest (manuscript).
Klein, Peter. "Skepticism and Closure: Why the Evil Genius
Argument Fails," Philosophical Topics 23 (1995): 213-36.
Langsam, Harold. "The Theory of Appearing Defended," Philosophical
Studies 87 (1997): 33-59.
McDowell, John. "Criteria, Defeasibility, and Knowledge,"
Proceedings of the British Academy 68 (1982): 455-79.
McDowell, John. Mind and World (Cambridge, MA: Harvard University Press, 1994).
Nagel, Thomas. "What Is It like to Be a Bat?" in Mortal Questions
(Cambridge: Cambridge University Press, 1979), pp. 165-80.
Oakes, Robert. "Representational Sensing: What's the Problem?" in
New Representationalisms, ed. Edmond Wright (Brookfield, VT:
Ashgate Publishing Co., 1993).
Oakley, I.T. "An Argument for Skepticism Concerning Justified
Beliefs," American Philosophical Quarterly 13 (1976): 221-8.
Pitcher, George. A Theory of Perception (Princeton, NJ: Princeton
University Press, 1971).
Pollock, John. Contemporary Theories of Knowledge (Savage, MD:
Rowman & Littlefield, 1987).
Putnam, Hilary. "Brains in a Vat" in Reason, Truth, and History
(Cambridge: Cambridge University Press, 1981), chapter 1.
Reid, Thomas. Inquiry and Essays (Indianapolis: Hackett, 1983).
Robinson, Howard. "Physicalism, Externalism, and Perceptual
Representation" in New Representationalisms, ed. Edmond Wright
(Brookfield, VT: Ashgate Publishing Co., 1993).
Searle, John R. Intentionality (Cambridge: Cambridge University Press, 1983).
Sellars, Wilfrid. "Empiricism and the Philosophy of Mind,"
Minnesota Studies in the Philosophy of Science, vol. I, ed. H.
Feigl and M. Scriven (Minneapolis: University of Minnesota Press,
1956), pp. 253-329.
Sextus Empiricus. Outlines of Pyrrhonism, in The Skeptic Way, tr.
Benson Mates (New York: Oxford University Press, 1996).
Tye, Michael. "Visual Qualia and Visual Content," in The Contents
of Experience, ed. Tim Crane (Cambridge: Cambridge University
Press, 1992), pp. 158-76.
Vogel, Jonathan. "Cartesian Skepticism and Inference to the Best
Explanation," The Journal of Philosophy 87 (1990): 658-66.
Wittgenstein, Ludwig. Philosophical Investigations (New York: Macmillan, 1968).
1. Hume, p. 152.
2. As Thomas Reid noted (Reid, pp. 178-9). See section 5.1 for more discussion.
3. Reid, p. 15.
4. This is, of course, the point illustrated by Gettier's (1963) famous examples.
5. Goldman (1976), pp. 772-3.
6. I owe this observation to Brian McLaughlin.
7. Berkeley, p. 239. Berkeley uses "perceive" in a broad sense, to encompass any kind of awareness of something, and "sense" to mean see, feel, hear, etc. Robert Oakes (1993, p. 75) protests vigorously against Berkeley's representation of indirect realism.
8. Fumerton's (1985, p. 73) definition of "epistemological naive realism." The following definition of indirect realism is my interpolation.
9. Fumerton (1985), pp. 50, 77, and Fumerton (1976).
10. See Robinson's (1993) formulation of "Naive Realism" (p. 104).
11. See Jackson (1977), introduction & chapter 1. He uses "representationalism" in place of "indirect realism."
12. Earlier, I insisted that the interpretation of condition (ii) on basing required a constitutive, rather than causal dependence (see section 1.3, p. 41). Does that conflict with the present point? No, because that dependence had different relata: when B is based on A, the positive epistemic status of B constitutively depends on A's causing B and A's having positive epistemic status. B's existence causally depends on A, while B's epistemic status constitutively depends on B's relation to A plus A's epistemic status.
13. Jackson (1977), p. 18 (emphasis Jackson's).
14. Notable ultra-direct realists include McDowell (1994, pp. 111-13; 1982); Johnston (chapter 7); Langsam; and Hyman.
15. BonJour (1985), chapter 6.
16. BonJour, p. 129. See also note 15 in chapter 6, where BonJour comments on the role that sensations might play in epistemology.
17. Armstrong (1961), chapter 9. The context makes clear that he takes knowledge to be a form of belief. Pitcher defends the thesis in his (1971), chapter II.
18. So argue Armstrong (1961), pp. 84-7; and Pitcher (1971), pp. 91-3.
19. This argument is inspired by Jackson (1977), pp. 75-6. Unfortunately, Jackson tries to use his analogous argument to show that expressions like "bright yellow" apply in the same sense to after-images and hallucinations as to physical objects.
20. On the disjunctive theory, see McDowell (1982) and Dancy (1995).
21. Because of the possibility of illusory pink-rat sightings, in which one sees a pink rat but it doesn't look pink-ratlike, strictly speaking we should say the state is common to pink-rat hallucinations, veridical (non-illusory) pink-rat sightings, and illusory sightings of things other than pink rats that nevertheless look pink-ratlike. But for the sake of simplicity, I will ignore the case of visual illusion (as opposed to hallucination) below, and pretend the only alternatives to consider are veridical seeing and hallucinating.
22. McDowell (1994, pp. 26-7).
23. The argument of this section comes from Langsam.
24. The example and its remedy are from Langsam.
25. Johnston, chapter 7. I use the word "perception" hesitantly, because Johnston does not; he speaks of "visual awareness" of things and of things "appearing to" subjects. I assume these amount to perception for three reasons: he uses the same terminology when speaking of cases of normal perception, illusions are generally cases of perception, and he does not take time out to specify that these kinds of illusions are not perceptions. However, nothing in my subsequent argument would be damaged by replacing "perception" with "visual or auditory or (etc.) awareness".
26. Johnston says that the object of awareness should be left vague in this way, rather than being specified as S's brain state. The reason for this need not concern us.
27. Armstrong (1990), vol. 2, pp. 43-7, 153-6.
28. McDowell (1994), p. 112.
29. Johnston, chapter 7.
30. The solution I adopt here was suggested to me by Brian McLaughlin.
31. Some of these are from Austin, pp. 4, 8.
32. Johnston, chapter 7. I take his "acquaintance" to be a variant on "awareness."
34. See Reid (Inquiry VI.xx), pp. 83-9.
35. See Nagel (1979) and Jackson (1982) for the classic discussion of this issue.
36. As Michael Tye (1992) has argued.
37. The phrase "color experience" is misleading, since it suggests that we're talking about a kind of experience that simply represents color, whereas it is one and the same visual experience that represents to us colors, shapes, and spatial locations all at once. What I mean is "visual experience, considered insofar as it represents color."
38. Compare Putnam's (1981) causal theory of 'meaning.'
39. See the original Twin Earth thought experiment in Putnam (1981).
40. This is assuming a causal theory of content. I don't know of any other externalist theories of content that would be plausible for the case of color experience. But if there is one, we could always construct a thought experiment in which whatever external factors the theory identifies as determining content would be similarly 'switched.'
41. Searle (1983), chapter 8.
42. See Hardin (1988) for a defense of eliminativism about colors.
43. 1995, p. 61.
44. See Crane (1992, especially p. 143), for a similar definition.
45. See Evans, p. 229 for a similar argument regarding color experience.
46. McDowell (1994), pp. 56-8.
47. McDowell (1994), pp. 56-7.
48. McDowell (1994), p. 58.
49. This interpretation is based on the concerns McDowell discusses earlier in the book. Admittedly, "rationally integrated" and "spontaneity" are obscure phrases.
50. In his Philosophical Investigations, part 2, XI.
51. I am assuming here without argument that knowledge requires justification. But even if it does not, the question of how our perceptual beliefs are justified is still interesting.
52. See Goldman (1979).
53. Foley (1993), p. 8.
54. Foley (1993), chapter 1, section 3.
55. AC is more qualified than the conclusion that falls out of Foley's account of rationality, for AC ascribes prima facie justification, rather than justification sans phrase. Why is this? Because, in putting forward AC, I have taken account of the possibility that it might seem to S as if P (on one level, or in one way) and also seem to S as if not-P (on another level, or in another way). Foley seems to have been using "apparently" in the sense of an all-things-considered appearance.
56. This is a cleaned-up version of the argument in Sextus Empiricus, I.164-74. See also Oakley.
57. The term derives from Sellars' incomprehensible article; however, I will focus on the relatively clear arguments of McDowell (1994) and BonJour.
58. See BonJour (chapter 4, especially p. 78) and McDowell (1994, p. 7).
59. Propositions are abstract entities, not beliefs. They are ways things might be, or potential facts. There can be unconceived and even inconceivable propositions, just as there can be unconceived and inconceivable facts.
60. See my remarks on this, p. 62.
61. p. 140.
62. Foley (1983). The source of this criticism is somewhat surprising, given my earlier use of Foley's (1993) views to support appearance conservatism.
63. Foley (1983), pp. 176-7.
64. Oakley, p. 223. I have modified Oakley's scenario (used to criticize Ayer) slightly to render it as a criticism of my own view.
65. Note that coherence alone, in the absence of some initial degree of warrant, won't do the job. I have defended this in my (1997).
66. See the Enquiry Concerning Human Understanding, section XII, part 1. The same argument is discussed in Ayer, pp. 81-7.
67. See Berkeley, Principles of Human Knowledge, §§36-40.
68. See Jackson (Perception, pp. 141-7) and Vogel.
69. Hume, p. 152.
70. Compare Thomas Reid's excellent response to Hume (Reid, pp. 176-9). Reid goes farther, claiming that the experiment with the table confirms that it is the real table we see, because it appears in just the way one would expect a real table to appear, assuming one could perceive a real table (p. 179).
71. Compare Austin's classic response to the argument (chapter III).
72. See Sextus Empiricus, I.164-74 and I.T. Oakley.
73. The example is from Fumerton (1985), p. 39.
74. See Descartes, Meditation IV.
75. A similar argument appears in BonJour, pp. 31-2.
76. This principle will need some qualifications to protect it from easy counter-examples. For instance, if Q is a necessary truth, then P entails Q no matter what P is, yet it does not seem that, merely by knowing that the sky is blue, I am justified in believing Gödel's Incompleteness Theorem. We can avoid this problem by restricting the application of the principle to contingent propositions. We can further qualify the principle by restricting it to cases in which S is able to see that P entails Q. On this, see Klein, p. 215. These qualifications don't affect the skeptical argument or the responses to it discussed below.
77. See Audi, p. 77 and Dretske (1970). But note that Dretske is discussing the Closure Principle for knowledge rather than for justification.
78. Dretske (1970), p. 1016 (emphasis Dretske's).
79. Klein, pp. 218-22.
80. Ibid., pp. 226-8.
81. I should note that, in conversation, Professor Klein has denied that the argument of this paragraph is equivalent either to that of the preceding two or to the argument he has presented. I have been unable to see the significant difference, but I think that the argument of this paragraph is worth considering regardless (if you like, consider it the argument of a hypothetical philosopher, Klein2).
82. Dretske (1981).
83. Dretske (1981), p. 377 (emphasis Dretske's).
84. Suppose I say, "The roads were covered with snow, so I couldn't make it to class." This seems to be the sort of modality Dretske has in mind. Of course, the roads' being snow-covered doesn't make it logically or nomologically impossible for me to come to class. Nor am I making an epistemic claim -- I'm not just saying that I know I didn't make it to class. Many uses of modal terms in ordinary language are like this.
85. More precisely, the Preference Principle requires that we have reason (that is, have an available justification) for rejecting the alternative hypothesis. It doesn't require that one have actually entertained each alternative hypothesis. It may be true that one must actually entertain and reject each alternative, but that is something the skeptic need not commit to at this point.
86. Jackson (1977), pp. 143-4 (emphasis Jackson's).
87. John Pollock has also defended this view (Pollock, pp. 29, 175-9).