Philosophy Department, University of Colorado
Boulder, Colorado 80309-0232
Education
B.A., Philosophy, University of California, Berkeley, 1992
Ph.D., Philosophy, Rutgers University, New Brunswick, 1998
Employment
University of Colorado, Boulder
Assistant Professor of Philosophy, 1998-2005
Associate Professor of Philosophy, 2005-2011
Professor of Philosophy, 2011-present
Areas of Specialization
Epistemology, ethics, meta-ethics, political philosophy.

Courses Taught
Introduction to Philosophy
Philosophy and Society
Philosophy & Science Fiction
Major Social Theories
Economics, Game Theory, & Rational Choice
History of Science (ancient & medieval)
Philosophy of Science
Abstracts of Written Work
Ethical Intuitionism (Palgrave Macmillan, 2005)
I defend a form of ethical intuitionism, according to which (i) there are objective moral truths; (ii) we know some of these truths through a kind of immediate, intellectual awareness, or “intuition”; and (iii) our knowledge of moral truths gives us reasons for action independent of our desires. I confront the major objections to this theory, arguing, among other things, that intuitionists have the resources to explain moral disagreements and to offer a reasonable approach to resolving some of them. The major alternative theories, including subjectivism, nihilism, non-cognitivism, and reductionism, are shown to face compelling objections.
Skepticism and the Veil of Perception (Rowman & Littlefield, 2001)
I develop a direct realist theory of perceptual awareness, according to which (i) perception constitutes direct awareness of the external world, rather than of mere mental representations, and (ii) perceptual experiences provide us with non-inferential knowledge of (some of) the features of this world. I defend a general foundationalist principle on which appearances confer prima facie justification for belief, and I confront the four main arguments for philosophical skepticism, showing that they are ineffectual against the direct realist.
“Against Equality and Priority” (Utilitas, forthcoming)
I start from three premises, roughly as follows: (1) that if possible world x is better than world y for every individual who exists in either world, then x is better than y; (2) that if x has a higher average utility, a higher total utility, and no more inequality than y, then x is better than y; (3) that "better than" is transitive. From these premises, it follows that benefits given to the worse off contribute no more to the world's value than equal-sized benefits given to the better off.
“Phenomenal Conservatism and Self-Defeat: A Reply to DePoe” (Philosophical Studies, 2011)
John DePoe has criticized the self-defeat argument for Phenomenal Conservatism. He argues that acquaintance, rather than appearance, may form the basis for non-inferentially justified beliefs, and that Phenomenal Conservatism conflicts with a central motivation for internalism. I explain how Phenomenal Conservatism and the self-defeat argument may survive these challenges.
“Does Probability Theory Refute Coherentism?” (Journal of Philosophy, 2011)
Recent results in probability theory have cast doubt on the coherence theory of justification, allegedly showing that coherence cannot produce justification for beliefs in the absence of foundational justification, and that there can be no measure of coherence on which coherence is generally truth-conducive. I argue that the coherentist can reject some of the assumptions on which these theorems depend. Coherence can then be held to produce justification on its own, and truth-conducive measures of coherence can be constructed.
“Epistemological Egoism and Agent-Centered Norms” (Evidentialism and its Discontents, 2011)
Agent-centered epistemic norms direct thinkers to attach different significance to their own epistemically relevant states than they attach to the similar states of others. Thus, if S and T both know, for certain, that S has the intuition that P, this might justify S in believing that P, yet fail to justify T in believing that P. I defend agent-centeredness and explain how an agent-centered theory can accommodate intuitions that seem to favor agent-neutrality.
“The Puzzle of Metacoherence” (Philosophy and Phenomenological Research, 2011)
Moore’s paradox supports the principle of “metacoherence”, i.e., that if one categorically believes that P, one is committed to accepting that one knows that P. The principle raises puzzles about how, when one has justification for P, one also has justification for the claim that one knows P. I reject a skeptical answer as well as a bootstrapping answer, and I suggest that we typically have independent justification for the claim that we know P.
“Lexical Priority and the Problem of Risk” (Pacific Philosophical Quarterly, 2010)
Lexical priority theories hold that some practical reasons have infinitely greater weight than others. This includes absolute deontological theories and axiological theories on which some goods are categorically superior to others. These theories face problems in cases in which there is a non-extreme probability that a given reason applies. Lexical priority theories are in danger of becoming irrelevant to decision-making, becoming absurdly demanding, or generating paradoxical cases in which each of a pair of actions is permissible yet the pair is impermissible.
“Is There a Right to Immigrate?” (Social Theory and Practice, 2010)
Immigration restrictions violate the prima facie right of potential immigrants not to be subject to harmful coercion. This prima facie right is not neutralized or outweighed by the economic, fiscal, or cultural effects of immigration, nor by the state's special duties to its own citizens, or to its poorest citizens. Nor does the state have a right to control citizenship conditions in the same way that private clubs may control their membership conditions.
“A Paradox for Weak Deontology” (Utilitas, 2009)
Deontological ethicists generally agree that there is a way of harming others such that it is wrong to harm others in that way for the sake of producing a comparable but greater benefit for others. Given plausible assumptions about this type of harm, this principle yields the paradoxical result that it may be wrong to do A, wrong to do B, but permissible to do (A and B).
“Explanationist Aid for the Theory of Inductive Logic” (British Journal for the Philosophy of Science, 2009)
The theory of logical probability stands in need of a way of constraining initial probabilities. One popular constraint, the Principle of Indifference, admits of multiple interpretations, some of which lend support to inductive reasoning, and some of which instead support inductive skepticism. I propose a way of incorporating the notion of explanatory priority into the Principle of Indifference. This approach vindicates inductivism against skepticism.
“When Is Parsimony a Virtue?” (Philosophical Quarterly, 2009)
I review four accounts of the virtue of parsimony in empirical theorizing and consider how each might apply to two prominent appeals to parsimony in the philosophical literature: those made on behalf of physicalism and on behalf of nominalism. None of the accounts of the virtue of parsimony extends naturally to either of these philosophical cases. This suggests that, in typical philosophical contexts, ontological simplicity has no evidential value.
“Values and Morals: Outline of a Skeptical Realism” (Philosophical Issues, 2009)
I propose a skeptical form of moral realism, according to which, while there are objective values, many of the evaluative properties appealed to in common sense moral thinking, particularly “thick” evaluative properties, may be illusory. I suggest that “immorality” may be an example of a thick evaluative term that denotes no real property.
“Singer’s Unstable Metaethics” (Singer under Fire, 2009)
I address the tension in Singer’s meta-ethical views between his sympathy with non-cognitivism and his sympathy with Sidgwick-style intuitionism. To maintain his normative ethical views, Singer should reject non-cognitivism, because a non-cognitivist cannot plausibly defend utilitarianism. It remains difficult for an intuitionist to embrace utilitarianism, but this is the meta-ethical theory that holds out the most hope for the utilitarian.
“In Defence of Repugnance” (Mind, 2008)
I defend the “repugnant” conclusion that, given any world full of happy people, a world containing a sufficient number of people with lives barely worth living would be better. I defend a variant of Parfit’s Mere Addition argument, I propose three additional arguments for the conclusion, I review the problems facing alternative theories of population ethics, and I argue that intuitions opposing the repugnant conclusion should not be trusted.
“Revisionary Intuitionism” (Social Philosophy & Policy, 2008)
I argue that, given evidence of the factors that tend to distort our intuitions, ethical intuitionists should disown a wide range of common moral intuitions, and that they should typically give preference to abstract, formal intuitions over more substantive ethical intuitions. In place of the common sense morality with which intuitionism has traditionally been allied, the suggested approach may lead to a highly revisionary normative ethics.
“Epistemic Possibility” (Synthese, 2007)
Seven accounts of epistemic possibility are criticized and a new account is proposed, making use of the notion of justified dismissal of a proposition. The new account explains intuitions about puzzling cases, upholds plausible general principles about epistemic possibility, and explains the practical import of epistemic modality judgements. I suggest that such judgements serve to assess which propositions are worthy of further inquiry.
“Compassionate Phenomenal Conservatism” (Philosophy & Phenomenological Research, 2007)
I defend the principle of Phenomenal Conservatism, on which appearances of all kinds generate justification for belief. I argue that there is no reason for privileging introspection or intuition over perceptual experience as a source of justified belief; that those who deny Phenomenal Conservatism are in a self-defeating position, in that their view cannot be both true and justified; and that the demand for a metajustification for Phenomenal Conservatism is easily met, or is unfair or question-begging.
“Weak Bayesian Coherentism” (Synthese, 2007)
Recent results in probability theory have cast doubt on coherentism, purportedly showing (a) that coherence among a set of beliefs cannot raise their probability unless individual beliefs have some independent credibility, and (b) that no possible measure of coherence makes coherence generally probability-enhancing. I argue that coherentists can reject assumptions on which these theorems depend, and I derive a general condition under which the concurrence of two information sources lacking individual credibility can raise the probability of what they report.
“Moore’s Paradox and the Norm of Belief” (Themes from G. E. Moore, 2007)
Reflection on Moore’s Paradox leads us to a general norm governing belief: fully believing that p commits one to the view that one knows that p. I sketch conceptions of both the nature of belief and the nature of knowledge that account for this norm. The norm for belief revealed by Moore’s Paradox leads us to an appreciation of the deep philosophical significance of the concept of knowledge.
“Phenomenal Conservatism and the Internalist Intuition” (American Philosophical Quarterly, 2006)
Externalist theories of justification create the possibility of cases in which everything appears to one relevantly similar with respect to two propositions, yet one proposition is justified while the other is not. Internalists find this difficult to accept, because it seems irrational in such a case to affirm one proposition and not the other. The underlying internalist intuition supports a specific internalist theory, Phenomenal Conservatism, on which epistemic justification is conferred by appearances.
“Is Critical Thinking Epistemically Responsible?” (Metaphilosophy, 2005)
Three ways of approaching controversial issues are: (i) To accept the conclusions of experts on their authority; (ii) to evaluate the relevant evidence/arguments for ourselves; and (iii) to simply withhold judgement. The received view recommends strategy (ii). But (ii) is normally epistemically inferior to (i) and (iii), since we are justified in believing that it is less reliable at producing true beliefs and avoiding false ones.
“Logical Properties of Warrant” (Philosophical Studies, 2005)
Taking “warrant” to be the property that when conjoined with true belief yields knowledge, I establish three facts about warrant: (1) Warrant is not a unique property; multiple non-equivalent properties satisfy the definition of “warrant.” (2) Pace Trenton Merricks, warrant need not entail truth; some warrant properties entail truth, while others do not. (3) Warrant need not be closed under entailment, even if knowledge is.
“Elusive Freedom? A Reply to Helen Beebee” (Philosophical Review, 2004)
I defend my earlier argument for incompatibilism, against Helen Beebee’s reply. Beebee’s reply would allow one to have free will even though nothing one does counts as an exercise of that freedom, and would grant one the ability to do A even when one’s doing A requires something to happen that one cannot bring about and that in fact will not happen.
“Non-Egalitarianism” (Philosophical Studies, 2003)
Equality of welfare among persons has no intrinsic value. This follows from three axiological principles: (i) a principle of the indifference of the distribution of utility across time within an individual’s life, (ii) a strong supervenience principle for value, and (iii) a principle of the additivity of value across disjoint time periods. (iii) is the most likely target for attack by the egalitarian; but the rejection of (iii) creates decision-theoretic paradoxes.
“Causation as Simultaneous and Continuous” (Philosophical Quarterly, 2003)
We propose that all actual causes are simultaneous with their direct effects, as illustrated by both everyday examples and the laws of physics. We contrast this view with the sequential conception of causation, according to which causes must precede their effects. We find that the key difference between the two views lies in differing assumptions about the mathematical structure of time.
“Arbitrary Foundations?” (Philosophical Forum, 2003)
Foundationalism has often been charged with the defect of endorsing “arbitrary” foundations. On the most obvious interpretations of the term “arbitrary,” this objection transparently begs the question. A more sophisticated interpretation reveals the objection as resting on a conceptual confusion between reasons why a belief is justified and reasons that the believer has for the belief.
“Is There a Right to Own a Gun?” (Social Theory and Practice, 2003)
Individuals have a significant prima facie right to own firearms, deriving from both their recreational and their self-defense uses. This right is not overridden by the social harms of private gun ownership, which have been greatly exaggerated and are probably considerably smaller than the social benefits. Furthermore, the harms would have to be at least several times greater than the benefits in order to render gun prohibition permissible.
“Fumerton’s Principle of Inferential Justification” (Journal of Philosophical Research, 2002)
Against Richard Fumerton, I argue that to be justified in believing P on the basis of E, one need not first be justified in believing that E makes P probable. Fumerton’s view rests on a level confusion and leads to skepticism about inferential justification. Instead, Fumerton’s examples can be accommodated by the principle that in order for S to be justified in believing P on the basis of E, it must be true that E makes P probable.
“The Problem of Defeasible Justification” (Erkenntnis, 2001)
Inductive skepticism and brain-in-a-vat skepticism are instances of a general skepticism about defeasible justification, which can be derived from three premises: (1) To be justified in believing H on the basis of E, one must have grounds for rejecting every alternative to H; (2) if E does not entail H, then there are scenarios in which E is true but H is false; (3) if a hypothesis entails E, then E is not grounds for rejecting that hypothesis. I discuss the skeptic’s defense of these premises and some ways of responding to the skeptic.
“Van Inwagen’s Consequence Argument” (Philosophical Review, 2000)
Peter van Inwagen’s argument for incompatibilism uses a sentential operator, “N”, which can be read as “No one has any choice about the fact that . . . .” I show that, given van Inwagen’s understanding of the notion of having a choice, the argument is invalid. However, a different interpretation of “N” can be given, such that the argument is clearly valid, the premises remain highly plausible, and the conclusion implies that free will is incompatible with determinism.
“Naturalism and the Problem of Moral Knowledge” (Southern Journal of Philosophy, 2000)
Ethical naturalists interpret moral knowledge as analogous to scientific knowledge and not dependent on intuition. For their account to succeed, moral truths must explain observable phenomena, and these explanations (i) must be better than any explanations framed in non-moral terms, (ii) must not rely on ad hoc posits about the causal powers of moral properties, and (iii) must not presuppose the existence of an independent means of awareness of moral truths. No moral explanations satisfy these criteria.
“Direct Realism and the Brain-in-a-Vat Argument” (Philosophy & Phenomenological Research, 2000)
The brain-in-a-vat argument for skepticism is best formulated using, not the closure principle, but the principle that in order to be justified in believing H on the basis of E, one must have grounds for preferring H over each alternative explanation of E. This formulation foils Dretske’s and Klein’s anti-skeptical replies. However, the strengthened argument remains impotent against a direct realist account of perceptual knowledge.
“The Problem of Memory Knowledge” (Pacific Philosophical Quarterly, 1999)
When one recalls that P, how is one justified in believing that P? I refute three natural answers: a memory belief is not justified by a belief in the reliability of memory; a memory experience does not provide a new, foundational justification for a belief; and memory does not merely preserve the same justification a belief had when first adopted. Instead, the justification of a memory belief is a product of both the initial justification for adopting it and the justification for retaining it provided by memory experiences.
“Probability and Coherence Justification” (Southern Journal of Philosophy, 1997)
Suppose two witnesses independently report some observation. Laurence BonJour argues that the agreement of the reports is evidence of their truth, even if neither witness has any credibility individually; similarly, the coherence of several empirical beliefs is evidence of their truth, even if no belief has any foundational justification. Using probability theory, I show that this is mistaken: the agreement of the witnesses’ reports raises the probability of what is reported only if an individual report raises the probability of its content.
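The probabilistic point can be illustrated with a simple Bayesian calculation. This is a hypothetical numerical sketch, not taken from the paper: it assumes the two reports are conditionally independent given the hypothesis H and given its negation, and the probability values are illustrative only.

```python
def posterior(prior, like_h, like_nh, n_reports):
    """P(H | n agreeing reports), by Bayes' theorem, assuming the reports
    are conditionally independent given H and given not-H."""
    num = prior * like_h ** n_reports
    den = num + (1 - prior) * like_nh ** n_reports
    return num / den

prior = 0.3  # illustrative prior probability of H

# Case 1: no individual credibility -- P(report | H) = P(report | not-H).
p1 = posterior(prior, 0.8, 0.8, 1)
p2 = posterior(prior, 0.8, 0.8, 2)
# A single report leaves P(H) at the prior, and so does agreement:
# p1 == p2 == 0.3.

# Case 2: slight individual credibility -- P(report | H) > P(report | not-H).
q1 = posterior(prior, 0.8, 0.6, 1)
q2 = posterior(prior, 0.8, 0.6, 2)
# Now one report raises P(H), and agreement raises it further:
# q2 > q1 > 0.3.
```

Under these independence assumptions, agreement amplifies only whatever evidential force an individual report already has; if each report is individually worthless, their concurrence adds nothing.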
“Rawls’s Problem of Stability” (Social Theory and Practice, 1996)
Rawls addresses the problem of the stability of his conception of justice by arguing that it could become the focus of an “overlapping consensus,” in which individuals with diverse moral, philosophical, and religious views all accept the Rawlsian conception for different reasons. Using the example of Christian fundamentalists, I show that, subject to constraints that Rawls himself delineates, no such consensus is possible.