The problem of induction and the problem of Cartesian/brain-in-the-vat skepticism have much in common. Both are instances of a general problem of defeasible justification. I use the term "defeasible justification" to refer to a relation between a piece of evidence, e, and a conclusion, h, such that
(1) e provides sufficient support to h for one to know h on the basis of e;
(I am assuming that knowledge is some form of justified belief) -- this is to say that by "e justifies h" I do not mean merely that e provides some support to h, but that it provides enough support to h to give h the level of justification implicated in knowledge (which I suppose to be high, but not so high as to preclude the following point) -- but
(2) e does not entail h.
Both (1) and (2) are essential characteristics of defeasible justification. Another way to state condition (2) is to say that even though e justifies h, the justification is capable of being defeated by the addition of new information, i.e.,
(3) There exists some other proposition (not necessarily a true one), d, such that (e & d) is logically possible and (e & d) does not support h.
To see that (2) and (3) are equivalent,
(i) assume that (2) holds. Then, trivially, (e & ¬h) is logically possible and (e & ¬h) does not support h. So (3) is satisfied.
(ii) assume (2) does not hold. Then e does entail h. Therefore, e conjoined with anything else also entails h (since (e & d) can only be a logically stronger proposition than e) and therefore supports h to the highest degree. So (3) is not satisfied.
This is a very pervasive form of justification. Most of our beliefs, if they are justified, are justified defeasibly. My belief that there are people in Montana, my belief that the world is at least 100 years old, my belief that the earth orbits the sun -- all are based on evidence that does not entail them. I believe, for instance, that there are people in Montana because I have heard many people say things about Montana and about the people living there, perhaps also because I have seen books that say similar things. I cannot now recall all of the evidence on which I have come to believe that there are people in Montana, but it was all of this sort -- that is to say, defeasible evidence, the sort that does not entail its conclusion. I may even have been in Montana and seen many people there, but even that does not entail that there are, now, any people in Montana. Nevertheless, I normally feel perfectly justified in asserting, without any doubts or hesitation, that there are people in Montana, and even that I know that there are people in Montana; and all of us routinely feel perfectly justified in asserting similar things -- that is to say, other propositions for which we have only defeasible justification.
To remove two possible obstacles to the acceptance of this point, let me note, first, that I am using "evidence" in a particularly broad sense. I use "evidence" to refer to whatever facts, or apparent facts, may support (in an epistemic sense) a given belief for a given person. I am willing to accept the charge of using the word "evidence" in a non-standard sense. It may sound stilted and improper in an ordinary context to say that I have evidence that I live on College Avenue, for example,(1) but I think it is clear that there are some facts of which I am aware that render my belief that I live on College Avenue justified. Second, the point may also be pressed that a given piece of evidence does not typically provide support to a conclusion all by itself, but only in the context of certain background knowledge. However, in this case, I would, by stipulation, count the relevant portions of the background knowledge as part of the 'evidence' for the conclusion. e, therefore, may be an extremely complex conjunction of facts. It may even include among its conjuncts many propositions of which I am not conscious at the time I make the inference to h (perhaps because I have forgotten them or because I typically just take them for granted, etc.). Now it may be doubted whether, once this concession has been made, defeasible justification is any longer so common. To return to my earlier example, although I believe there are people in Montana on the basis of second-hand reports of this, it may be part of my background knowledge that the people from whom I heard this knew whether there were people in Montana and were not lying, etc.; and perhaps when this background knowledge is added into my 'evidence,' my evidence does entail that there are people in Montana.
I would question this suggestion, since it is not part of my background knowledge that people never lie or that they always know what they are talking about in demographic matters. However, even if this were the case, I should simply locate the defeasible justification required for my knowledge one or more stages back -- i.e., I should wonder what justifies my belief that people don't lie, and so on. It is clear that at some point, and probably sooner rather than later, I will come to a step in the justificatory ancestry of my belief that is defeasible, so if the problem of defeasible justification can be made out in a general way, it will after all threaten the vast majority of our putative knowledge.
Now let's reflect on how Nelson Goodman's 'grue' problem works.(2) Every emerald that we have observed so far has been green, so we naturally think on this basis that all emeralds are green. Now let "grue" be defined as follows:
[x is grue] if and only if [(x is first observed before the year 2000 and x is green) or (x is not first observed before the year 2000 and x is blue)].
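The two clauses of the definition can be made vivid in a short Python sketch. The `Stone` type and its fields are illustrative assumptions of mine, not anything in Goodman; the point is only that "grue" is a perfectly well-defined predicate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Stone:
    color: str                      # "green" or "blue"
    first_observed: Optional[int]   # year first observed; None if never observed

def is_grue(x: Stone) -> bool:
    # Goodman's biconditional, clause by clause.
    observed_early = x.first_observed is not None and x.first_observed < 2000
    return (observed_early and x.color == "green") or (
        not observed_early and x.color == "blue")

# Every green emerald examined before 2000 is grue...
assert is_grue(Stone("green", 1995))
# ...but so is a blue emerald first dug up in or after 2000,
assert is_grue(Stone("blue", 2005))
# while a green emerald first observed in 2000 or later is not grue.
assert not is_grue(Stone("green", 2005))
```

Note that, by the first clause, every observation we have actually made verifies "grue" and "green" alike.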
Goodman notes that it is equally true to say that every emerald we have observed so far has been grue, and so one might wonder why we don't conclude, inductively, that all emeralds are grue. Although Goodman himself does not intend to use this example to motivate skepticism, it is not hard to see a skeptical argument just around the corner: How do we know the grue hypothesis is not true, rather than the green hypothesis? If the grue hypothesis were true, all of our observations would have been just as they in fact were. And if we do not know that the grue hypothesis isn't true, then we don't know that the emeralds we dig out of the ground in the year 2000 and after won't be blue, rather than green. Further: do we have any reason at all for thinking the grue hypothesis isn't true? If we have no reason for thinking the grue hypothesis isn't true, then (the skeptic would argue) we have no reason for thinking the emeralds we dig out of the ground in the year 2000 and after will be green, rather than blue. If this is the case, it seems that our evidence does not justify us in believing that all emeralds are green. And of course, the same applies no matter what property is substituted for green, what type of object is substituted for emeralds, and what subset of the unobserved objects of that type is substituted for objects not observed before the year 2000.
Now, there is a certain sense in which the evidence,
e1 = All observed emeralds are green.
can be proven to support the hypothesis
h1 = All emeralds are green.
-- namely, that e1 raises the probability of h1 (i.e., P(h1|e1) > P(h1)).(3) However, it can also be shown that e1 'supports'
h2 = All emeralds are grue.
in the same sense. As long as h has positive initial probability and e has initial probability less than 1, if h entails e, e will always raise the probability of h, for:
P(h|e) = P(h & e) / P(e),
whence, if h entails e (since h = (h & e)),
P(h|e) = P(h) / P(e),
whence, if P(e) < 1,
P(h|e) > P(h).
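The derivation can be checked numerically. The two prior values below are illustrative assumptions only, chosen so that 0 < P(h) ≤ P(e) < 1 (entailment forces P(h) ≤ P(e)).

```python
# If h entails e, then h is equivalent to (h & e), so P(h & e) = P(h),
# and the definition of conditional probability gives P(h|e) = P(h)/P(e).
p_h = 0.2   # illustrative prior probability of the hypothesis
p_e = 0.8   # illustrative prior probability of the evidence

p_h_given_e = p_h / p_e   # = P(h & e) / P(e), using P(h & e) = P(h)

assert p_h_given_e > p_h  # conditioning on e raises h's probability
print(p_h_given_e)        # 0.25
```

Since the grue hypothesis h2 entails e1 just as well as h1 does, the same arithmetic 'confirms' h2.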
But I think there clearly is a sense in which we think e1 supports h1 but does not support h2.
Let's try to articulate that sense. I suggest that the reason we feel reluctant to say that e1 supports h2 or constitutes evidence for h2 is that, even though e1 raises the probability that all emeralds are grue, e1 does not raise -- intuitively, it lowers -- the probability that the emeralds that won't be observed until the year 2000 are grue. In other words, although e1 raises the probability of h2, it lowers the probability of a certain logical consequence of h2. It's convenient to introduce the notion of the 'excess content' of a hypothesis to explain this. Intuitively, the 'excess content' of a hypothesis is what the hypothesis says that goes beyond the data. For example, in the case of h1, the hypothesis is that all emeralds are green, the data is that all observed emeralds are green, so the excess content is
x1 = All unobserved emeralds are green.
In contrast, the excess content of h2 is
x2 = All unobserved emeralds are grue.
Now the difference between h1 and h2 is that e1 raises the probability of the excess content of h1, but it lowers the probability of the excess content of h2 -- at least, that is what we intuitively think. I have not given any argument for the belief that e1 raises the probability of x1 and lowers the probability of x2, though, and indeed it is very difficult to do so. I have merely stated the sense in which we ordinarily think that h1 is 'supported' and h2 not. The problem is to justify this belief.
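The intuition, though not its justification, can at least be modeled. In the toy Bayesian sketch below, the prior weights are pure assumptions of mine that build in a bias toward color-uniformity across observed and unobserved emeralds; that such a bias is exactly what stands in need of justification is the point of the problem.

```python
# Four exhaustive world-states (observed color, unobserved color), with a
# prior that deliberately favors uniformity. The weights are illustrative.
priors = {
    ("green", "green"): 0.3,   # all emeralds green
    ("blue",  "blue"):  0.3,   # all emeralds blue
    ("green", "blue"):  0.2,   # the 'grue' world
    ("blue",  "green"): 0.2,   # the mirror-image 'bleen' world
}

def prob(pred, dist):
    return sum(p for world, p in dist.items() if pred(world))

def e1(w): return w[0] == "green"   # all observed emeralds are green
def x1(w): return w[1] == "green"   # excess content of h1
def x2(w): return w[1] == "blue"    # excess content of h2 (grue)

# Condition on e1 by renormalizing over the worlds it leaves open.
p_e1 = prob(e1, priors)
posterior = {w: (p / p_e1 if e1(w) else 0.0) for w, p in priors.items()}

assert prob(x1, posterior) > prob(x1, priors)   # e1 raises P(x1): 0.6 > 0.5
assert prob(x2, posterior) < prob(x2, priors)   # e1 lowers P(x2): 0.4 < 0.5
```

With a uniform prior over the four worlds, by contrast, conditioning on e1 leaves P(x1) and P(x2) untouched at 1/2 each -- which is precisely the skeptic's position.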
The notion of 'excess content' is easy to define for traditional inductive hypotheses -- where the hypothesis is that all A's are B and the evidence is that some subset of the A's are B, the excess content is simply that the rest of the A's, i.e. the ones outside that subclass, are also B. The concept of excess content is more problematic for other hypotheses (what's the 'excess content' of "there are people living in Montana" relative to the evidence that I have heard many people say that there are people living in Montana? Perhaps just that there are people living in Montana.) It is not important at the moment, however, to precisely define "excess content" for such cases (it's not even important that the notion apply to such cases, for we shall generalize our problem in a moment).
A recipe for gruesome hypotheses
Now here is the recipe for creating grue-like problems, for any inductive hypothesis. The hypothesis, h, is equivalent to (e & x), where e is the evidence and x is the excess content of h. A grue-like hypothesis is generated by simply conjoining e with a different and incompatible excess content. For instance, if h is "All ravens are black," this is equivalent to "All observed ravens are black, and all unobserved ravens are black." We may then introduce the competing hypothesis, "All observed ravens are black, and all unobserved ravens are red," or "All observed ravens are black, and all unobserved ravens are yellow," etc. (Goodman's 'grue' predicate is just a way of formulating a hypothesis of this kind without the explicit conjunction.) In the simplest and least imaginative case, the competing hypothesis can be simply (e & ¬x). (e & ¬x) is guaranteed to be consistent, provided that the justification for h was defeasible. Now, this recipe automatically gives you an alternative hypothesis, call it h', such that
(4) h' is logically possible,
(5) h' is incompatible with h,
and
(6) if h' were true, our evidence would be exactly the way it in fact was.
From (4) and (6), the inductive skeptic will attempt to argue that
(7) We have no reason for rejecting h'.
And from (5) and (7), she will wish to conclude that
(8) We do not know that h.
The problem of induction, on one reading, is to show that it is more rational to accept h, given e, than to accept h'.
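Modeling propositions as sets of possible worlds -- an illustrative choice of mine, not the paper's machinery -- the recipe and properties (4) through (6) can be checked mechanically. The particular sets below are arbitrary stand-ins for e and x.

```python
# Propositions as sets of the worlds in which they hold; entailment is
# the subset relation, conjunction is intersection.
worlds = set(range(8))
e = {0, 1, 2, 3}          # worlds where the evidence holds
x = {0, 1, 4, 5}          # worlds where the excess content holds
h = e & x                  # h = (e & x)

# Defeasibility: e does not entail h (some e-worlds fall outside h).
assert not (e <= h)

# The least imaginative rival: h' = (e & not-x).
not_x = worlds - x
h_rival = e & not_x

assert h_rival              # (4) h' is logically possible (nonempty)
assert not (h & h_rival)    # (5) h' is incompatible with h
assert h_rival <= e         # (6) in every h'-world the evidence holds
```

Property (6) is what 'neutralizes' the evidence: since h' entails e, the evidence cannot discriminate against it.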
Now, readers of Descartes will at this point be experiencing a sense of déjà vu. The Cartesian skeptical problem (in one version) goes roughly like this: It's logically possible that I should be a brain in a vat (BIV), and if I were a brain in a vat, everything would appear to me the way it presently does (by stipulation; this is part of the BIV hypothesis). Therefore, I have no reason for thinking I'm not a brain in a vat. But my being a brain in a vat is incompatible, for example, with my having two hands. So I don't know that I have two hands.
Here, the appearances or sensory experiences I am having take the place of 'evidence,' and the competing 'hypotheses' are that I have two hands, and that I'm a BIV.
Now here is an initial formulation of a general skeptical argument that applies to any defeasibly justified belief. Let e1 be the evidence that (ostensibly) defeasibly justifies a conclusion, h1. Let h2 be a contrary of h1 that also entails e1. There always exists such a contrary, given that e1 doesn't entail h1, since at the very least, there is always (e1 & ¬h1). The skeptical argument has three major premises:
(9) Given any evidence, e, and hypothesis h, in order to justifiably infer h from e, one must have at least some reason independent of h for rejecting each of the alternatives incompatible with h in which e would be true.
(10) For any propositions e and h, if h entails e, then e is not a reason for rejecting h.
(11) For any propositions e and h, e can furnish a reason for accepting or rejecting h only if e itself is justified.
These premises will have to come under scrutiny below, but prima facie they seem plausible. By a reason 'independent of h,' I mean a reason other than h and which is not justified either directly or indirectly by h (for instance, (e & h) would be disallowed). Now from just these three premises a fairly radical skeptical conclusion will follow, for by (9), in order to justifiably infer h1 from e1, we must have reason for rejecting h2 as an alternative to h1. We are not allowed to invoke h1 itself as such a reason. And by (10), e1 cannot constitute a reason for rejecting h2. So we require some further reason, over and above e1. But even if we get that far, we're still not done. For let the reason for rejecting h2, whatever it is, be e2. Now either e2 together with e1 entail that ¬h2, or we have only a defeasible reason for rejecting h2.
But we can neglect the first case, since by hypothesis we have only defeasible justification for h1 available to us. If we were to have available to us evidence which entails the negation of each alternative incompatible with h1, then we would in fact have indefeasible justification for h1. Therefore, there must exist alternatives to h1 which we have at best defeasible justification for rejecting. We may therefore assume h2 to be one of the latter.
Now, if (e2 & e1) does not entail ¬h2, then we may repeat our previous steps. Let h3 be a hypothesis which is incompatible with ¬h2 and which entails both e1 and e2. (Again, in the simplest and least imaginative case, we can let h3 = (e1 & e2 & ¬h1).) We need a reason for rejecting h3, as a precondition both on justifiably believing h1 and on justifiably rejecting h2. Neither e1 nor e2 nor (e1 & e2) can furnish such a reason, from (10). Hence, we must have a further reason, call it e3 . . .
Now since each reason in the series is a further precondition on being justified in believing h1, the regress appears to be vicious. But let us even suppose it is not, and that a subject may have an infinite series of reasons of this sort. Let e∞ be a proposition describing all the evidence independent of h1 that we have available to us (possibly an infinitary conjunction). Either e∞ entails h1 or it does not. If it does, then we have available indefeasible justification for h1. If it does not, then we need a reason for rejecting a hypothesis, h∞, which entails e∞ and is incompatible with h1 (again, we may take (e∞ & ¬h1) for example). But by hypothesis, e∞ conjoins all the evidence we have. Since h∞ entails e∞, it also entails a proposition stating any evidence that we have available to us. Hence, by (10), no evidence we have available to us is a reason for rejecting h∞. Hence, by (9), we cannot justifiably infer h1 from e1.
Thus, either we have indefeasible reasons for h1 available to us, or we are not justified in inferring h1 from e1. Hence, defeasible justification is impossible.
Notice that the above is just a generalization of the brain-in-a-vat style argument. The BIV hypothesis works by 'neutralizing' all our evidence by building that evidence into the hypothesis -- i.e., we build into the BIV hypothesis that the evil scientist stimulates your brain in exactly the way inputs from normal physical objects stimulate normal brains. This also brings out what the BIV argument (more precisely, the general strategy of skeptical argument that the BIV argument exemplifies) demands of us: the Cartesian style of skeptical argument demands of us indefeasible justification for our beliefs. It demands that we have evidence which entails the existence of external objects, and all the rest of the commonsense beliefs we would defend. In the light of this, it is not surprising that Descartes' own answer to his skeptical problems employs a deductive argument culminating in the existence of external objects. This is the only sort of answer that could be given, consistent with premises (9), (10), and (11).
Premise (11) is the least subject to question and calls for no more than brief comment. The proposition that there is life on Mars, for example, cannot serve, for me at least, as a reason for rejecting anything, because I have no reason to believe that there is life on Mars. Note that it is not because it isn't true -- even if, unbeknownst to me, there really is life on Mars, that fact still doesn't give me a reason for believing or disbelieving anything (or for doing anything else for that matter) until I become aware of it, or at least justifiably believe it.
(10) may seem slightly more debatable. The reasoning behind it is simple and intuitive, however: If h entails e, then if h were true, e certainly would be true. Therefore, how can the truth of e be a reason for rejecting h? More tendentiously: how can a symptom of the truth of h count against h? If anything, it would count in favor of h.
Perhaps a counter-example to (10) can be found in the case of some contradictory or necessarily false hypotheses. Take h = (e & ¬e), for example. (e & ¬e) entails e, but e might still be regarded as a reason for rejecting (e & ¬e), being a conclusive reason for denying the second conjunct. To accommodate this, let us reformulate (10) as follows:
(10.1) For any propositions, h and e, if h is logically possible and h entails e, then e is not a reason for rejecting h.
The rest of the skeptical argument is unaffected.
The other thing that should be said on behalf of (10) was already mentioned in section II: namely, that as long as 0 < P(e) < 1, if h entails e, e raises the probability of h. It therefore seems especially clear that e can't be a reason for rejecting h. And even if P(e) = 0 or 1, in no event can e lower the probability of h.
Premise (9) seems the most likely target for an attack. (9) is, of course, reminiscent of the so-called 'closure principle,' which says roughly,
(12) If S knows that p, and p entails q, then S knows that q,(4)
so we should consider whether any of the recent objections to the closure principle tell against principle (9). These objections are of two general sorts: first, those that rely on a particular analysis of "knowledge" and show that closure fails for that conception of knowledge; and second, those that give what are intuitively supposed to be counter-examples to (12).
Now, the first class of objections revolve around externalist analyses of "knowledge" -- analyses that don't include "justification" (or at least, not in the traditional sense). This includes Nozick's counter-factual analysis and Dretske's 'relevant alternatives' account.(5) These objections do not apply directly to (9) because (9) does not mention knowledge. (9) mentions justification, and its truth is unaffected by the supposition that justification isn't really part of knowledge.
However, it may be objected that in the initial characterization of defeasible justification, we assumed that knowledge was some form of justified belief (recall condition (1)), and that this assumption can be questioned. The assumption is, however, entirely dispensable and was made merely for convenience. (1) could as easily have read,
(1.1) e provides sufficient support to h that one could be completely justified in believing that h on the basis of e.
or even,
(1.2) e provides sufficient support to h that it is more reasonable to believe h, given e, than to withhold judgement about whether h.
The point of (1), (1.1), and (1.2) was simply to make clear that I meant "justification" not in the very weak sense of e's just providing some support to h, but in the sense of e's providing enough support to make h a justified belief. If we could be sure this confusion would not arise, we might as well restate (1) simply as
(1.3) e justifies h,
and the problem of defeasible justification stands regardless of whether justification is part of the analysis of knowledge.
We now consider two alleged counter-examples to the closure principle that may be relevant to (9). The first is due to Dretske:
You take your son to the zoo, see several zebras, and, when questioned by your son, tell him they are zebras. Do you know they are zebras? Well, most of us would have little hesitation in saying that we did know this ... Yet, something's being a zebra implies that it is not a mule and, in particular, not a mule cleverly disguised by the zoo authorities to look like a zebra. Do you know that these animals are not mules cleverly disguised by the zoo authorities to look like zebras? If you are tempted to say "Yes" to this question, think a moment about what reasons you have, what evidence you can produce in favor of this claim. The evidence you had for thinking them zebras has been effectively neutralized, since it does not count toward their not being mules cleverly disguised to look like zebras.(6)
Dretske's argument, in short, is this: You know that the animals are zebras; you don't know that they aren't cleverly disguised mules; therefore, the closure principle is false.
This strikes me as pure question begging, vis-a-vis skepticism. Surely no skeptic would grant the first premise, "You know the animals are zebras." For one thing, propositions about the external world like "those animals are zebras" are called into question by the Cartesian skeptical argument and by the more general problem of defeasible justification. Moreover, Dretske's "cleverly disguised mule" hypothesis is so perfectly analogous to the evil genius and brain-in-a-vat hypotheses that surely anyone who appreciated the force of those arguments would see Dretske's hypothesis as just one more skeptical scenario, with the one difference that it applies specifically to the proposition, "Those animals are zebras," rather than to all propositions about the external world. This last is the only difference between Dretske's scenario and the usual Cartesian skeptical scenario -- but that does not make the skeptical argument any less plausible. Anyone who found the general skeptical argument persuasive would be just as inclined to argue: I don't know that those aren't cleverly disguised mules; therefore, I don't know that they are zebras. Dretske might just as profitably have adduced as a counter-example to the closure principle: "I know that I have two hands. I don't know that I'm not a brain in a vat. Therefore, the closure principle is false."(7)
The second alleged counter-example to (12) is due to Robert Audi:
I add a column of figures, check my results twice, and thereby come to know, and justifiably believe, that the sum is 10,952. As it happens, I sometimes make mistakes, and my wife (whom I justifiably believe to be a better arithmetician) sometimes corrects me. Suppose that, feeling unusually confident, I now infer that if my wife says this is not the sum, she is wrong. But even though I know and justifiably believe that this is the sum, can I, on this basis, automatically know or justifiably believe the further proposition that if she says that it is not the sum, she is wrong? Suppose my checking just twice is enough to give me the minimum basis for justified belief and knowledge here. Surely I would then not have sufficient grounds for the further proposition that if she says the answer is wrong, she is wrong.(8)
I believe Audi's example trades on the notorious inadequacy of the material conditional as an analysis of the "if...then" relation. If "If my wife says 10,952 is not the sum, she is wrong" is interpreted as a material conditional, then it does follow from "The sum is 10,952." However, in this case, I would say Audi is justified in believing that conditional, because all it means is, "Either my wife doesn't say that 10,952 is not the sum, or my wife is wrong." He is justified in believing this, because he is justified in believing the first disjunct -- i.e., he is justified in believing that his wife will not say that 10,952 is not the sum. Since by hypothesis he knows that the sum is 10,952, he would also justifiably expect his wife, if she were to add the numbers, to come up with 10,952 (though of course there would be some possibility that she wouldn't).
Clearly, however, what Audi wants us to imagine, what is suggested by the example, is that when he says, "If my wife says the sum is not 10,952, then she is wrong," he is in a state of mind such that, if he hears his wife say, "The sum is not 10,952," he will then respond, "You are wrong." But being justified in believing the material conditional, "If p then q," is not equivalent to being in a position such that, if you come to justifiably believe that p, you will then be justified in concluding that q. Audi's being justified in believing that if his wife says the sum is not 10,952, she is wrong, does not mean that, if his wife says the sum is not 10,952, Audi will be justified in thinking that she is wrong. After adding the column of figures, Audi is justified in believing the conditional because he's justified in denying its antecedent. If his epistemic position changes such that he comes to justifiably believe the antecedent, then he will of course have to revise his opinion of the conditional; he will not simply continue to accept the conditional and thence accept its consequent. So, in upholding the closure principle, we are not putting Audi in a position to contradict his wife, and we are not telling him to place greater confidence in his calculations than in his wife's. Audi's acceptance of the material conditional, "If my wife says the sum is not 10,952, she is wrong" is perfectly consistent with his holding his wife to be more likely correct about the sum than himself. And I think it is only the superficial impression that the case is otherwise that makes it seem as if Audi can't justifiably believe that if his wife says the sum is not 10,952 she is wrong.
We may note further that if the conditional is read as a subjunctive conditional ("If my wife were to say the sum isn't 10,952, she'd be wrong"), then it does not follow from the proposition that the sum is 10,952.
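The truth-functional point can be spelled out mechanically. The propositional encoding below is an illustrative sketch; the variable names are mine, not Audi's.

```python
def material(p: bool, q: bool) -> bool:
    """The material conditional 'if p then q', i.e. (not p) or q."""
    return (not p) or q

# Truth-functionally, 'if p then q' just is the disjunction '(not p) or q':
for p in (True, False):
    for q in (True, False):
        assert material(p, q) == ((not p) or q)

# Given that the sum really is 10,952, any assertion to the contrary is
# false; so the material conditional 'if my wife says it is not the sum,
# she is wrong' comes out true however she behaves:
the_sum_is_10952 = True
for wife_denies_sum in (True, False):
    wife_is_wrong = wife_denies_sum and the_sum_is_10952  # her denial contradicts a truth
    assert material(wife_denies_sum, wife_is_wrong)
```

The conditional is thus true in Audi's situation chiefly because its antecedent is (justifiably expected to be) false -- which is the diagnosis offered above.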
Now what can be said positively on behalf of principle (9)? The intuitive idea is this: when we draw a non-deductive conclusion from certain evidence, we are choosing among various competing alternatives. There are many possibilities consistent with the evidence. In selecting one of them to believe, we are at the same time rejecting all the others. We can therefore be justified in accepting h1 only if we are justified in rejecting all the alternatives to h1. If any other logically possible alternative is equally good as or better than h1, then we should either select that alternative or at the least suspend judgement. Thus, there must be a reason why we select h1 rather than h2, for each alternative h2, if we are to be justified in inferring h1.
Furthermore, principle (9) is very strongly connected with the seemingly much weaker and unobjectionable thesis that it is possible to have a theory of confirmation. For, provided that there exist some set of complete and correct criteria of confirmation, there will have to be some (possibly complex, possibly conjunctive) sufficient condition on confirmation that h1 satisfies and h2 does not, and a (possibly complex, possibly disjunctive) necessary condition on confirmation that h1 satisfies and h2 does not. In other words, there must be some facts about h2 in virtue of which h2 is not confirmed. Whatever these facts are, they may serve as the 'reason for rejecting h2' posited in principle (9). To deny (9), then, is to say that there need be no explanation of why h1 is confirmed by e and h2 is not; there is nothing about h1 and h2 that accounts for this difference.
Now, I think the above rationale for principle (9) is something the skeptic has to say. I do not think the skeptic can make her case with the weaker principle (12). (12) can easily enough be defended by a very simple consideration: when p is justified and p entails q, p itself constitutes a justification for q; therefore, S has a justification for q; therefore, S is justified in believing q. (This is the essential point -- the concept of 'knowledge' need not be invoked.) But this consideration does not motivate a skeptical argument -- it merely invites the conclusion that our commonsense beliefs justify the negations of all the contraries of them.(9) The skeptic must somehow argue that elimination of all the contrary alternatives to h is a precondition on (N.B. not a consequence of) justifiably believing h. Principle (12), or rather, the rationale for it I just gave, merely provides us a means of expanding our knowledge. It is principle (9) that places an obstacle in the way of knowledge, and it is this that the skeptic needs. This is to say that I think not only is the argument I have presented a skeptical argument; it is the skeptical argument.
The rationale we have stated for (9), however, suggests a reformulation of it. It appears what we need is not so much a reason for rejecting h2 simpliciter, but a reason for preferring h1 over h2. Since this may not be the same thing, let us reformulate the principle accordingly:
(9.1) Given any evidence, e, and hypothesis, h, in order to justifiably infer h from e, one must have at least some reason independent of h for preferring h over each of the alternatives incompatible with h in which e would be true.
We will have to accordingly reformulate (10) like so:
(10.2) For any propositions, h1, h2, and e, if h2 is logically possible and h2 entails e, then e is not a reason for preferring h1 over h2.
I think the same rationale stands for (10.2) as for (10.1): that is, we may reflect that if h2 were true, e would certainly hold. Therefore, how can the fact that e does hold be a reason for preferring some alternative over h2? How can a necessary consequence of h2 be a reason for believing something else rather than h2? In the best case, h1 would also entail e; but even then it seems that e is merely neutral between h1 and h2.
What are our options now? The first option, which no doubt some will press, would be to accept the skeptical conclusion: there is no such thing as defeasible justification, because one can never be justified in rejecting all the alternative hypotheses. This option is, in my opinion, clearly the last option to choose, given the apparent omnipresence of defeasible justification we noted in section I. I will therefore not discuss it further.
A second possible reply for the case of perceptual knowledge would be to question the assumption that perceptual beliefs are based on evidence of some kind. Where this is not a superficial, semantic point about the use of the word "evidence," it would involve a direct realist theory of perception. However, while such a theory may well save our perceptual beliefs about the external world, it would do nothing to protect the vast majority of our commonsense beliefs about the world, of the sort mentioned in section I. It would leave our knowledge of the external world restricted to our current and perhaps remembered observations of our immediate environment. Even if successful as far as it goes, therefore, it does not constitute much of a solution to our problem.
A third possibility is to say that we may be justified in rejecting an alternative, h2, without having a reason for rejecting it. This would be something analogous to foundational justification, but more like foundational unjustification. Or, taking into account our formulation (9.1), we might be justified in preferring one alternative over another without having any reason for preferring it -- i.e., one alternative is, so to speak, just self-evidently better than another. Perhaps "All emeralds are green" is just self-evidently preferable to "All emeralds are grue." This possibility, though it may offend the sensibilities of most philosophers, deserves to be taken seriously. Why it would probably offend most philosophers' sensibilities appears when we attend to the ubiquity of the foundational justification (or foundational preferability) it would require in order to successfully dispatch the problem of defeasible justification. Because the skeptical problem can be raised in any case of defeasible justification, the element of foundational preferability must make its appearance every time we infer a conclusion defeasibly. Furthermore, as we said in section IV above, the approach appears to entail that a theory of confirmation is impossible, to the extent that a theory of confirmation would provide reasons for preferring one alternative over another. These are at least somewhat unsettling consequences, but not absurd (like the first option).
The fourth response is an objection, or at least qualification, to principle (10). (10) is perhaps implausible for the case where e is a necessary truth. For in that case, any h automatically entails e; therefore, (10) would say that a necessary truth can never be a reason for rejecting anything (or preferring anything over anything else), which is implausible. This also sounds very close to the thesis that a necessary truth can never be a reason for accepting anything (if it can't be a reason for rejecting ¬p, this is very close to its not being a reason for accepting p). Furthermore, one of the reasons we gave for (10) fails in this case -- namely, the probabilistic argument which says that if e raises the probability of h, e isn't a reason for rejecting h -- because that argument was predicated on P(e) being less than 1, whereas the probability of any necessary truth = 1. Thus, it seems appropriate to qualify (10) like so:
(10.3) For any propositions, h1, h2, and e, if h2 is logically possible, e is contingent, and h2 entails e, then e is not a reason for preferring h1 over h2.
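The probabilistic argument alluded to above can be set out explicitly. The following is my own reconstruction (a standard Bayesian computation, not given in this form in the text), showing both why a contingent consequence confirms its hypothesis and why the argument lapses when e is necessary:

```latex
\begin{align*}
h \vdash e \;&\Rightarrow\; P(e \mid h) = 1
  \;\Rightarrow\; P(h \mid e) \;=\; \frac{P(h)\,P(e \mid h)}{P(e)}
  \;=\; \frac{P(h)}{P(e)} \;\geq\; P(h),\\
&\text{with strict inequality just in case } P(h) > 0 \text{ and } P(e) < 1.\\
&\text{If $e$ is a necessary truth, then } P(e) = 1 \text{ and } P(h \mid e) = P(h)\text{:}\\
&\text{$e$ leaves the probability of $h$ unchanged, and the argument for (10) lapses.}
\end{align*}
```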
Now this qualification aids the anti-skeptic if and only if the reasons we have for preferring our defeasible conclusions over skeptical alternatives consist in necessary truths. The skeptic would probably reply that a necessary truth can never by itself be a reason for accepting or rejecting a contingent proposition. We said against the unqualified version of (10) that it was implausible to hold that a necessary truth can never be a reason for rejecting anything, but this is primarily implausible because necessary truths are often reasons for rejecting other propositions which are logically impossible. (For instance, that nothing is both blue and red can be a reason for rejecting the proposition that Sue's cat is both blue and red.) The thesis that a necessary truth is never a reason for preferring one contingent proposition over another is more acceptable, and also seems supported by our previous argument for (10): Of each of two contingent propositions, it is true to say that if it were the case, e would certainly hold, where e is a necessary truth, so it doesn't seem as if e's holding in fact is a reason for preferring one of the contingent propositions over the other. Note that this argument does not work when the alternatives considered are not both contingent, for in that case (since they are contraries) one of them is logically impossible, and it therefore won't make sense to speak of what else would be the case if it were true -- i.e., if a logical impossibility were to hold. Thus, the skeptic does seem to have a principled reason for holding that e can't be a reason for preferring h1 over h2 where h1 and h2 are contingent and h2 entails e, while not holding the implausible thesis that a necessary truth is never a reason for preferring anything. That is, the skeptic can make the distinction she needs to make to preserve her argument in a non-arbitrary, non-ad hoc manner.
This doesn't end the matter, however. The skeptical premise,
(10.4) For any propositions, h1, h2, and e, if h2 is contingent and h2 entails e, then e is not a reason for preferring h1 over h2.
seems plausible when considered purely in the abstract, as we have done. However, consider examples of likely candidates for reasons for preferring one hypothesis over another. Simplicity is often suggested in this connection -- that is, the fact that h1 is the 'simpler' hypothesis in some sense may be a reason for preferring h1 over h2 (why this is so is difficult to say, but it is equally difficult to resist the sense that it is a reason for preferring h1). But if h1 is simpler than h2, then it is plausibly a necessary truth that h1 is simpler than h2 (assuming that simplicity is an objective characteristic of propositions). To take another instance, if there is such a thing as logical probability in Carnap's sense, then if h1 has a higher logical probability than h2, this seems to be a reason for preferring h1 over h2; but it is also a necessary truth that h1 has a higher logical probability than h2. In the light of this, (10.4) seems to be false, in spite of what we have said in its defense.
Now, if we take this route, we will get an interesting result. If the relevant necessary truth(s) can be known a priori, then it appears that we can also have a priori knowledge (or at least justification) for synthetic, contingent truths.(10) For if e2 is a reason for preferring h1 over h2, then it appears to be a reason for thinking that if either h1 or h2 is the case, h1 is the case. Now the proposition, if either h1 or h2 is the case, h1 is the case, is contingent, but it is apparently justified by an a priori truth. And whatever is justified by an a priori truth is justified a priori. Thus, there are contingent, a priori justified beliefs.
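The route from a necessary reason to contingent a priori justification can be laid out schematically. This is my reconstruction of the reasoning just given, not a formalization supplied by the text:

```latex
\begin{align*}
&\text{1. $e_2$ is a necessary truth, knowable a priori (e.g., that $h_1$ is simpler than $h_2$).}\\
&\text{2. $e_2$ is a reason for preferring $h_1$ over $h_2$.}\\
&\text{3. So $e_2$ is a reason for believing: if either $h_1$ or $h_2$ is the case, $h_1$ is the case. (from 2)}\\
&\text{4. That conditional is contingent: it is false in $h_2$-worlds and true in $h_1$-worlds,}\\
&\quad\text{and both kinds of world exist, since $h_1$ and $h_2$ are contingent contraries.}\\
&\text{5. Whatever is justified by an a priori truth is justified a priori.}\\
&\text{6. Hence the conditional is a contingent proposition justified a priori. (from 3, 4, 5)}
\end{align*}
```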
The last response we have to consider probably involves the most insight into the trick of the skeptical argument as I stated it. The rationale for (9) that I provided depends on the idea that in accepting a hypothesis, h1, we are choosing h1 over all of its alternatives, h2, h3, and so on. The argument also depends on treating the alternatives we are choosing from as consisting of all the logical contraries of h1. And indeed, any alternative to h1 is a contrary of h1, but is it the case that any contrary of h1 is an 'alternative' to h1 in the relevant sense? The argument for (9) sounded plausible when stated in the abstract, but to take a specific instance of it: When we infer that all ravens are black from the fact that all observed ravens have been black, are we really choosing this hypothesis over, for example, the hypothesis, "Some ravens are not black, and there is life on Mars"? It seems unnatural, to say the least, to regard the alternatives among which we have to choose as including
All ravens are black.
Not all ravens are black, and there is life on Mars.
Not all ravens are black, and 2+2=4.
Not all ravens are black, and today is Tuesday.
etc.
even though the last three are logical contraries of the first. One cannot, then, in general, generate a commensurable alternative to a hypothesis simply by slapping an arbitrary conjunct onto the negation of the hypothesis. And arguably, that's just what I did above, for I assumed that (e1 & ¬h1) and (e1 & e2 & ¬h1), etc., could be counted as among the alternatives to h1.
I stress that the point I am making here is not merely a form of the "relevant alternatives" account of knowledge of Dretske or Stine. Intuitively, the genuine 'alternatives' to "all ravens are black" include "all ravens are red," "90% of ravens are black," and the like, but do not include the likes of "all ravens are red and there is life on Mars" because of the artificiality of the division of the space of logical possibilities that the latter implies, and the artificiality of contrasting "all ravens are black" with the latter sort of 'alternative' rather than the former. The latter sort of contrary doesn't count as an 'alternative' to h1 in the sense that it isn't to be regarded as among the possibilities one is choosing from. (Analogously, if I choose to eat a red M & M, I may choose this as opposed to eating a yellow or green M & M, but not, properly speaking, as opposed to 'not eating a red M & M and believing the theory of relativity,' or 'not eating a red M & M and becoming a physician.') On the other hand, Gail Stine's notion of an 'irrelevant alternative' is of one that isn't supported by any evidence,(11) and Dretske's notion is of one that isn't objectively possible given the circumstances.(12) Moreover, the irrelevance is not in the sense of its being inappropriate to draw a contrast between those alternatives and the given hypothesis -- i.e., its being inappropriate to classify the alternatives in that way -- but rather in the sense that among all the alternatives that we can contrast, some of them need not be ruled out in order to 'know' the hypothesis in question. The observation I'm making doesn't have anything to do with the analysis of "knowledge." So this is an entirely different issue.
Now this quasi-logical point can be made into a diagnosis of the skeptical argument if we can produce criteria for appropriate contrasting of alternatives, or at least generate an intuitive sense of the inappropriateness of the skeptical hypotheses as alternatives to their commonsensical contraries. I have not done the former, but I shall make some programmatic remarks about it below. The latter, I think, has already been done for my 'cheap' skeptical hypotheses of the form "(e & ¬h1)." It doesn't seem fair to simply stick a statement describing our evidence onto what is already a contrary of h1 that we may already have grounds for rejecting, and call that a new alternative that we need grounds for rejecting. This seems in the same vein as arbitrarily conjoining "Today is Tuesday" to "All ravens are red" and calling that a new alternative.
But what about the more traditional skeptical possibilities? Take "All emeralds are grue" first. Is this a natural alternative to contrast with "All emeralds are green"? When we conclude that all emeralds are green, are we choosing this belief over the possibility that all emeralds are grue? It's plausible to say that we are not. We are choosing "All emeralds are green" as opposed to "Some but not all emeralds are green," "Only 50% of all emeralds are green," "No emeralds are green," and the like. And perhaps we are choosing, "All emeralds are green" as opposed to "All emeralds are blue," "All emeralds are red," and "All emeralds are transparent," but not as opposed to, "All emeralds observed before the year 2000 are green, and all emeralds not observed before the year 2000 are blue." If my method of merely conjoining an evidence statement to the denial of the hypothesis we are usually inclined to infer (i.e., merely taking (e & ¬h)) is not a legitimate method of generating commensurable alternatives to the hypothesis, the grue hypothesis hardly seems different. Essentially, it just takes a contrary of the excess content of the hypothesis (namely, "All unobserved emeralds are blue"), and slaps on to it a statement of our evidence. It's the latter, seemingly capricious addition that prevents us from saying the evidence counts against the grue hypothesis, but without that we would have no hesitation in saying we have evidence against it (i.e., we have evidence that not all unobserved emeralds are blue) -- at least, not as far as any skeptical argument we've so far discussed goes. So it's the very thing that entitles the skeptic to claim (6) that if h2 were true, our evidence would be the same as it actually is, that also makes it unnatural to describe us as choosing between h1 and h2.
Next consider the brain-in-the-vat hypothesis. This hypothesis does little more than the grue hypothesis to earn its keep. The hypothesis that you are just a brain is plausible as a commensurable alternative to the hypothesis that you have a body. However, the BIV skeptic can't get by merely by proposing that you're a brain -- it's surely not the case that if you had only a brain and no body, you'd be having exactly the experiences you're presently having. No, you'd be dead. Nor is it even enough to propose that you're a brain being kept alive in a vat of nutrients (you still wouldn't be having experiences similar to normal). Nor yet that you're a brain being kept alive in a vat of nutrients and stimulated by a mad scientist. None of these is enough. The skeptic cannot get her needed premise, viz., that all your experiences would still be exactly the same, without going the whole way and directly stipulating that this is the case. That is, where e is a complete and accurate description of all your sensory experiences, the skeptical hypothesis must take the form: "e, and you're a brain being kept alive in a vat of nutrients and stimulated by a mad scientist." Thus, in spite of the apparent artificiality of my 'cheap' method of generating skeptical hypotheses deployed in section III, the BIV scenario is barely distinct from it -- barely different from merely conjoining the denial of your commonsense beliefs to a statement of your alleged evidence. The main difference between my method and that of both the grue hypothesis and the BIV hypothesis seems to be this: whereas I conjoin the evidence statement to the simple negation of the commonsense hypothesis, the latter two each conjoin the evidence statement to a more specific, logically stronger contrary of the commonsense hypothesis.
This suggests at least one condition for 'appropriate contrasts': h2 forms an appropriate contrast to h1 only if h2 contains no 'superfluous' information. The rough idea is that h2 must not contain information that goes beyond the part of it that conflicts with h1. We can regard a proposition, h2, as 'containing' another proposition, p, or having p as a 'part' of its content, just in case h2 is logically equivalent to (p & q), for some q. And we can then say that a component, p, of h2 is superfluous vis-a-vis the contrast with h1 just in case (i) p doesn't follow from the other conjunct, q, (ii) p does not conflict with h1, and (iii) q does conflict with h1. Under this suggestion, the BIV alternative would have to be pared down to "I'm a brain in a vat" (or perhaps just "I'm a brain"), this being the minimal part of the hypothesis required to form a contrast with the belief that I have a body; the skeptic would not be permitted to add into it the stipulation that all my sensory experiences are exactly as in the actual world. Similarly, the grue hypothesis would have to be pared down to "All emeralds not observed before the year 2000 are blue," but would not be permitted to include "All emeralds observed before the year 2000 are green," as this would be superfluous.
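The proposed criterion, and its application to the 'cheap' hypotheses of the form (e & ¬h1), can be stated compactly. In this sketch I read 'conflicts with' as joint inconsistency, which is my gloss rather than the text's:

```latex
\begin{align*}
&\text{$p$ is a superfluous component of } h_2 \equiv (p \wedge q),
  \text{ vis-a-vis the contrast with } h_1, \text{ iff}\\
&\quad \text{(i) } q \nvdash p, \qquad
  \text{(ii) } (p \wedge h_1) \text{ is consistent}, \qquad
  \text{(iii) } (q \wedge h_1) \text{ is inconsistent}.\\[4pt]
&\text{Applied to } h_2 = (e \wedge \neg h_1), \text{ with } p = e \text{ and } q = \neg h_1:\\
&\quad \text{(i) } \neg h_1 \nvdash e \text{ (in the typical case)}, \qquad
  \text{(ii) } (e \wedge h_1) \text{ is consistent (indeed, $e$ supports $h_1$)},\\
&\quad \text{(iii) } (\neg h_1 \wedge h_1) \text{ is inconsistent (trivially)}.\\
&\text{Hence $e$ is superfluous, and } (e \wedge \neg h_1)
  \text{ fails to form an appropriate contrast with } h_1.
\end{align*}
```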
One thing that must be clarified about this proposal is that it does not purport to provide an account of confirmation, or a criterion for when a hypothesis is confirmed by some piece of evidence. It does not attempt to carry out the seemingly impossible task set by Goodman, that of giving a general criterion distinguishing projectible from unprojectible predicates. All I have argued, with respect to Goodman's problem, is that "All emeralds are grue" does not provide an appropriate contrast to "All emeralds are green." But it is equally true to say that "All emeralds are green" does not provide an appropriate contrast to "All emeralds are grue," so I have not stated a point of distinction between the two hypotheses -- their relation is symmetric. And consequently, I haven't provided positive reason for thinking that "All emeralds are green" is a superior hypothesis to "All emeralds are grue." Rather, the point of my criterion of appropriate contrasts is solely to stall a skeptical argument -- it is to show that, if we think that our evidence confirms "All emeralds are green," the skeptic has at least not provided adequate grounds for denying this. The lack of evidence against the incompatible grue hypothesis is, at least, not sufficient grounds for saying that we are not justified in accepting the green hypothesis.
The essence of the problem of defeasible justification is that, for any defeasible conclusion, there exist alternative hypotheses not only compatible with but entailing the evidence. This is a necessary characteristic of defeasible justification. Because of this, the skeptic is able, in principle, to 'neutralize' any evidence we might bring against such alternatives by building the evidence into her hypotheses. There is no comfort to be hoped for from trying to refute skeptical alternatives derived from such a strategy, unless we were to turn our ostensibly defeasible justification into indefeasible justification. If, therefore, we accept just two general epistemological principles, a skeptical conclusion appears inevitable: namely, (i) that justifiably believing that h on the basis of e requires some grounds for rejecting each contrary of h, and (ii) that a logical consequence of a hypothesis is never a ground for rejecting the hypothesis.
There are two arguments to be considered on behalf of (i): First, one might argue that, since a valid deductive argument from a known or justifiably believed premise constitutes a paradigm case of justification and h always entails the denial of each contrary of h, anyone who justifiably believes h ipso facto has a justification ready for believing the denial of each contrary of h. We pointed out, however, that this argument gives no comfort to the skeptic, because it merely licenses us to expand the corpus of our justified beliefs by adding the logical consequences of our defeasibly justified beliefs; what the skeptic really needs is the full principle (9), that justifiably believing that h on the basis of e requires some grounds independent of h for rejecting each contrary of h -- a principle not supported by this argument. The actual logic of the skeptical argument, then, turns on the idea that in inferring a hypothesis from evidence, one is choosing it from a set of alternatives, choosing this one hypothesis as opposed to each of the alternative hypotheses. It therefore seems that to be justified in inferring the hypothesis is to be justified in preferring it over every alternative. And therefore it seems as if one must have a reason for preferring the given hypothesis over every alternative.
There were also two considerations on behalf of (ii): First, that if e is a logical consequence of h, then if h were true, e would certainly be true. Second, if e is a logical consequence of h, in the usual case, this will mean that e raises the probability of h, according to any coherent subjective probability distribution. Both of these seem quite incompatible with e's being evidence against h.
(i) and (ii) are conjointly unacceptable, because they lead to the consequence that there is no such thing as defeasible justification. I stress that the conclusion they lead to is not merely that we can never have justification which is both defeasible and makes us absolutely certain of the conclusion (where 'absolutely certain' is understood in an extremely strong sense). That, of course, would be a scarcely interesting conclusion. Rather, the conclusion they lead to is that any belief which is justified is indefeasibly justified -- that is to say, deductivism.
I have not claimed to solve this problem. However, I have outlined three main solutions that seem to me worth pursuing. Any solution must reject either (i) or (ii) and produce an answer to the argument(s) offered on its behalf. The first solution worth considering would reject (i) while seemingly preserving the intuition behind it. This solution would be to hold that it is possible to be justified in rejecting an alternative -- or rather, in preferring one alternative over another -- without having a reason for doing so, in much the same way as foundationalists hold that it is possible to be justified in accepting a proposition without having a reason for accepting it. The second solution would reject (ii), arguing that it doesn't apply when the reason is a necessary truth, and that we always have available metaphysically necessary reasons for rejecting some of the alternatives to our defeasible conclusions (these necessary truths must, of course, function as defeasible reasons for rejecting those alternatives, given that the alternatives are logically possible). Finally, one might reject (i) and respond to the argument on behalf of it by denying that every logical contrary of a hypothesis is one of the alternatives over which it is chosen.
1. On this, see J.L. Austin, Sense and Sensibilia (Oxford: Clarendon Press, 1962), pp. 115-8.
2. "The New Riddle of Induction," Fact, Fiction, and Forecast (Cambridge: Harvard University Press, 1955), chapter 3.
3. David Stove, in particular, makes much of this point. See The Rationality of Induction (Oxford: Clarendon Press, 1986), chapter 5; and Probability and Hume's Inductive Skepticism (Oxford: Clarendon Press, 1973), chapter 5, section iii.
4. 'Roughly' because the principle needs some qualification and reformulation to avoid obvious objections (perhaps for "S knows that q" we should read "S can know that q," and perhaps for "p entails q" we should read "S knows that p entails q.") We won't be concerned with these details here.
5. See Robert Nozick, Philosophical Explanations (Cambridge: Harvard University Press, 1981), chapter 3; and Fred Dretske, "The Pragmatic Dimension of Knowledge," Philosophical Studies 40 (1981): 363-78.
6. "Epistemic Operators," Journal of Philosophy 67 (1970): 1007-23, p. 1016.
7. In fairness to Dretske, he realizes that some further reason for rejecting closure is required. I think that the reason he provides -- essentially, an argument from analogy to other operators (already a fairly weak form of argument) -- is neutralized by the rationale for (9) that I provide below, as it would be neutralized by any rationale for (9) that made use of intuitions specifically about justification.
8. Belief, Justification, and Knowledge (Belmont, Calif.: Wadsworth, 1988), p. 77.
9. As Peter Klein has argued in his "Skepticism and Closure: Why the Evil Genius Argument Fails," Philosophical Topics 23 (1996): 215-38.
10. Kripke (Naming and Necessity (Cambridge: Harvard University Press, 1972), pp. 54-7) has already shown, arguendo, that there is contingent a priori knowledge of things like "S is one meter long," where S is the standard meter; but this is because it is analytic and stipulated to be true.
11. Gail Stine, "Skepticism, Relevant Alternatives, and Deductive Closure," Philosophical Studies 29 (1976): 249-61.
12. "The Pragmatic Dimension of Knowledge," op. cit.