Research

For other repositories of my work, see: ResearchGate, PhilPapers, Explore Bristol Research, Google Scholar, Academia.

Publications

In preparation

What is justified credence?

In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn’s is a version of process reliabilism (Dunn, 2015) while Weng Hong Tang offers a version of indicator reliabilism (Tang, 2016). As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts explains how justification is connected to anything of epistemic value. We will call this the Connection Problem. I begin by describing Dunn’s process reliabilism and Tang’s indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn’s reliabilism and Tang’s. Thus, I reach the top of the same mountain as well. Having done that, I consider some objections to the account I propose and some consequences of it.

PDF

What we talk about when we talk about numbers

In this paper, I describe and motivate a new species of mathematical structuralism, which I call Instrumental Nominalism about Set-Theoretic Structuralism. As the name suggests, this approach takes standard Set-Theoretic Structuralism of the sort championed by Bourbaki and removes its ontological commitments by taking an instrumental nominalist approach to that ontology of the sort described by Joseph Melia and Gideon Rosen. I argue that this avoids all of the problems that plague other versions of structuralism.

PDF

An accuracy-dominance argument for conditionalization (with R. A. Briggs)

Epistemic decision theorists aim to justify Bayesian norms by arguing that these norms further the goal of epistemic accuracy—having beliefs that are as close as possible to the truth. The standard defense of probabilism appeals to accuracy-dominance: for every belief state that violates the probability calculus, there is some probabilistic belief state that is more accurate, come what may. The standard defense of conditionalization, on the other hand, appeals to expected accuracy: before the evidence is in, one should expect to do better by conditionalizing than by following any other rule. We present a new argument for conditionalization that appeals to accuracy-dominance, rather than expected accuracy. Our argument suggests that conditionalization is a rule of diachronic coherence: failing to conditionalize is not just a bad response to the evidence; it is also inconsistent.
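To give a rough gloss on the two norms at issue (my notation, and a simplification rather than the paper’s own formulations): write $I(c, w)$ for the inaccuracy of a credence function $c$ at a possible world $w$. The accuracy-dominance defence of Probabilism says that any non-probabilistic $c$ is dominated, i.e. there is a probabilistic $c^*$ with $I(c^*, w) < I(c, w)$ for every world $w$. Conditionalization, the rule whose status is at issue here, is the update rule

\[
c_{\mathrm{new}}(X) \;=\; c_{\mathrm{old}}(X \mid E) \;=\; \frac{c_{\mathrm{old}}(X \wedge E)}{c_{\mathrm{old}}(E)}, \qquad c_{\mathrm{old}}(E) > 0,
\]

to be followed on learning evidence $E$.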

PDF

Internalism, externalism, and the KK principle (with Alexander Bird)

This paper examines the relationship between the KK principle and the epistemological theses of externalism and internalism. There is often thought to be a very close relationship between externalism and the rejection of the KK principle and between internalism and its acceptance. How strong are the connections? The stronger proposals are: externalism entails the denial of the KK principle; internalism entails the truth of the KK principle. We will consider a number of problems for the theses as stated; we will present two ways of amending them so that they avoid these problems.

PDF

Forthcoming

Epistemic Utility and the Normativity of Logic

Logos and Episteme

How does logic relate to rational belief? Is logic normative for belief, as some say? What, if anything, do facts about logical consequence tell us about norms of doxastic rationality? In this paper, we consider a range of putative logic-rationality bridge principles. These purport to relate facts about logical consequence to norms that govern the rationality of our beliefs and credences. To investigate these principles, we deploy a novel approach, namely, epistemic utility theory. That is, we assume that doxastic attitudes have different epistemic value depending on how accurately they represent the world. We then use the principles of decision theory to determine which of the putative logic-rationality bridge principles we can derive from considerations of epistemic utility.
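As an illustration of the kind of principle at issue (my example, and not necessarily one of the principles the paper evaluates): a simple candidate bridge principle for credences says that logical consequence constrains comparative confidence,

\[
\text{if } A \models B, \text{ then rationality requires } c(B) \ge c(A),
\]

a constraint that any probabilistically coherent credence function satisfies automatically.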

PDF

Book symposium on Accuracy and the Laws of Credence

Philosophy and Phenomenological Research

  • Précis and replies (PDF)
  • Contribution from R. A. Briggs
  • Contribution from Jim Joyce
  • Contribution from Matt Kotzen

Transformative experience and the knowledge norms for action: Moss on Paul’s challenge to decision theory

in Lambert, E. and J. Schwenkler (eds.) Transformative Experience (OUP)

L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul’s challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul’s behalf (Moss 2017) — we’ll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss’ novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss’ reply and argue that it fails. I argue first that it fails as a reply made on Paul’s behalf, since it forces us to abandon many of the features of Paul’s challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul’s intentions.

PDF

Aggregating incoherent agents who disagree

Synthese

In this paper, we explore how we should aggregate the degrees of belief of a group of agents to give a single coherent set of degrees of belief, when at least some of those agents might be probabilistically incoherent. There are a number of ways of aggregating degrees of belief, and there are a number of ways of fixing incoherent degrees of belief. When we have picked one of each, should we aggregate first and then fix, or fix first and then aggregate? Or should we try to do both at once? And when do these different procedures agree with one another? We focus particularly on the final question.

PDF

The Principal Principle does not imply the Principle of Indifference

The British Journal for the Philosophy of Science

In a recent paper in the British Journal for the Philosophy of Science, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson (henceforth HLWW) argue that the Principal Principle entails the Principle of Indifference. In this paper, I argue that it does not. Lewis’ version of the Principal Principle notoriously depends on a notion of admissibility, which Lewis uses to restrict its application but never defines precisely. HLWW do not give a precise account either, but they do appeal to two principles concerning admissibility, which they call Condition 1 and Condition 2. There are two ways of reading their argument, depending on how you understand the status of Conditions 1 and 2. Reading 1: The correct account of admissibility is determined independently of these two principles, and yet these two principles follow from that correct account. Reading 2: The correct account of admissibility is determined in part by these two principles, so that the principles follow from that account but only because the correct account is constrained so that it must satisfy them. HLWW then show that, given an account of admissibility on which Conditions 1 and 2 hold, the Principal Principle entails the Principle of Indifference. I will argue that, on either reading of the argument, it fails. I will argue that there is a plausible account of admissibility on which Conditions 1 and 2 are false. That defeats the first reading of the argument. I will then argue that the intuitions that lead us to assent to Condition 2 also lead us to assent to other very closely related principles that are inconsistent with Condition 2. This, I claim, casts doubt on the reliability of those intuitions, and thus removes our justification for Condition 2. This defeats the second reading of the HLWW argument. Thus, the argument fails.

PDF

On the accuracy of group credences

in Szabó Gendler, T. & J. Hawthorne (eds.) Oxford Studies in Epistemology volume 6

We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgment aggregation principle, Linear Pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this paper, I give an argument for Linear Pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
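For readers unfamiliar with the principle, Linear Pooling in its standard form says (my notation): given individuals $1, \dots, n$ with credence functions $c_1, \dots, c_n$ and non-negative weights $w_1, \dots, w_n$ summing to 1, the group’s credence in any proposition $X$ is the weighted average

\[
c_G(X) \;=\; \sum_{i=1}^{n} w_i\, c_i(X), \qquad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1.
\]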

PDF

Making things right: the true consequences of decision theory in epistemology

in Ahlstrom-Vij, K. & J. Dunn (eds.) Epistemic Consequentialism (Oxford: Oxford University Press)

In his 1998 paper, ‘A Nonpragmatic Vindication of Probabilism’, Jim Joyce offered a novel argument for the credal principle of Probabilism. In this paper, I consider an objection to Joyce’s argument that has been raised by Hilary Greaves (‘Epistemic Decision Theory’, Mind, 2013); and I try to answer that objection.

PDF

2016

Book symposium on Accuracy and the Laws of Credence

Episteme 14(1):1-69

  • Précis and replies (PDF)
  • Contribution from Fabrizio Cariani (PDF)
  • Contribution from Sophie Horowitz (PDF)
  • Contribution from Ben Levinstein (PDF)
  • Contribution from Julia Staffel (PDF)

Illness as transformative experience (with Havi Carel and Ian James Kidd)

The Lancet 388(10050):1152-53

Imagine that you need to decide whether to adopt a child or not. It’s the only avenue to parenthood that is open to you. If you adopt a child, you will become a parent. You will experience the (currently unknown) highs and lows of being a parent. If you decide not to adopt, you will never know what being a parent is like. The decision you are asked to make is doubly risky. This problem has been discussed recently by philosopher L A Paul in her book Transformative Experience. Paul suggests that experiences such as becoming a parent are doubly transformative. First, they are epistemically transformative: you can only learn what it is like to be a parent by becoming one. Second, experiences such as becoming a parent are existentially transformative: you don’t know how such an experience will change you and your preferences. We suggest that serious illness is a transformative experience and that Paul’s framework usefully characterises central aspects of it.

PDF, Journal

The population ethics of belief: in search of an epistemic Theory X

Noûs doi: 10.1111/nous.12164

Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she’s also got credences in 999 million other propositions. Phoebe’s credences are all very accurate. Each of Daphne’s credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe’s or Daphne’s?

It is clear that this question is analogous to a question that has exercised ethicists over the past thirty years. How do we weigh a population consisting of some number of exceptionally happy and satisfied individuals against another population consisting of a much greater number of people whose lives are only just worth living? This is the question that occasions population ethics. In this paper, I go in search of the correct population ethics for credal states.

PDF, Journal

Jamesian epistemology formalised: an explication of ‘The Will to Believe’

Episteme 13(3):253-268

Famously, William James held that there are two commandments that govern our epistemic life: Believe truth! Shun error! In this paper, I give a formal account of James’ claim using the tools of epistemic utility theory. I begin by giving the account for categorical doxastic states – that is, full belief, full disbelief, and suspension of judgment. Then I will show how the account plays out for graded doxastic states – that is, credences. The latter part of the paper thus answers a question left open in (Pettigrew 2014).

PDF, Journal

Accuracy and the Laws of Credence (Oxford: Oxford University Press)

In this book, we offer an extended investigation into a particular way of justifying the rational principles that govern our credences (or degrees of belief). The main principles that we justify are the central tenets of Bayesian epistemology, though we meet many other related principles along the way. These are: Probabilism, the claim that credences should obey the laws of probability; the Principal Principle, which says how credences in hypotheses about the objective chances should relate to credences in other propositions; the Principle of Indifference, which says that, in the absence of evidence, we should distribute our credences equally over all possibilities we entertain; and Conditionalization, the Bayesian account of how we should plan to respond when we receive new evidence. Ultimately, then, the book is a study in the foundations of Bayesianism.

To justify these principles, we look to decision theory. We treat an agent’s credences as if they were a choice she makes between different options. We give an account of the purely epistemic utility enjoyed by different sets of credences. And we appeal to the principles of decision theory to show that, when epistemic utility is measured in this way, the credences that violate the principles listed above are ruled out as irrational. The account of epistemic utility we give is the veritist’s: the sole fundamental source of epistemic utility for credences is their accuracy. Thus, this is an investigation in the version of epistemic utility theory known as accuracy-first epistemology. The book can also be read as an extended reply on behalf of the veritist to the evidentialist’s objection that veritism cannot account for certain evidential principles of credal rationality, such as the Principal Principle, the Principle of Indifference, and Conditionalization.

Publisher, Webpage

Remaking the elite university: An experiment in widening participation in the UK (with Josie McLellan and Tom Sperlinger)

Power and Education 8(1):54-72

This article analyses and critiques the discourse around widening participation in elite universities in the UK. One response, from both university administrators and academics, has been to see this as an ‘intractable’ problem which can at best be ameliorated through outreach or marginal work in admissions policy. Another has been to reject the institution of the university completely, and seek to set up alternative models of autonomous higher education. The article presents a different analysis, in which the university is still seen as central and participation is seen as an aspect of pedagogy rather than as an administrative process. This is illustrated through a description of how a Foundation Year in Arts and Humanities was conceived, designed and implemented at the University of Bristol. This model is used to consider the problems, risks and successes in challenging received notions of how (and whether) widening participation can be achieved, and whether it can reach those who are currently most excluded from elite universities, such as those without qualifications. The article suggests how academics can utilise their expertise to solve key challenges faced by universities and reclaim autonomy in central aspects of university administration. At the same time, it demonstrates how change to the current model of student recruitment can also bring welcome – and transformative – change to the nature of elite higher education institutions in the UK and elsewhere.

PDF, Journal

Accuracy, Risk, and the Principle of Indifference

Philosophy and Phenomenological Research 92(1):35-59

In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? The Principle of Indifference gives a very restrictive answer. It demands that an agent with no evidence divide her credences equally over all the possibilities she entertains. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, I offer a novel argument for the Principle of Indifference. I call it the Argument from Accuracy.
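In the simplest finite case, the uniform distribution mentioned above amounts to this (my notation): if the agent entertains exactly the possibilities $w_1, \dots, w_n$, then the Principle of Indifference requires her initial credence function $c_0$ to satisfy

\[
c_0(w_i) \;=\; \frac{1}{n} \qquad \text{for each } i = 1, \dots, n.
\]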

PDF, Journal

Review of John P. Burgess’ Rigor and Structure

Philosophia Mathematica 24(1):129-136

In this review, I focus on the possibility of giving a precise account of informal mathematical proof; and the lesson that Burgess draws from the indifference that mathematicians have towards questions about the subject matter of their discipline.

PDF, Journal

Review of L. A. Paul’s Transformative Experience

Mind 125(499):927-935

In this review, I focus mainly on Paul’s own solution to the problems that she raises for orthodox decision theory; and I consider the possibility of an alternative solution, which I originally proposed in ‘Transformative Experience and Decision Theory’ (Philosophy and Phenomenological Research, 2014).

PDF, Journal

2015

Risk, rationality, and expected utility theory

Canadian Journal of Philosophy 45(5-6): 798-826

There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk. And I focus on one particular alternative to the orthodoxy, namely, Lara Buchak’s risk-weighted expected utility theory. I will show that the orthodoxy can be made to accommodate all of the preferences that Buchak’s theory can accommodate.

PDF, Journal

Epistemic utility arguments for Probabilism (revised version)

in Zalta, E. (ed.) Stanford Encyclopedia of Philosophy

A survey article on epistemic utility arguments for Probabilism.

Website

Transformative experience and decision theory

Philosophy and Phenomenological Research 91(3):766-774. (Contribution to book symposium on L. A. Paul’s Transformative Experience)

I have never eaten Vegemite—should I try it? I currently have no children—should I apply to adopt a child? In each case, one might imagine, whichever choice I make, I can make it rationally by appealing to the principles of decision theory. Not always, says L. A. Paul. In Transformative Experience, Paul issues two challenges to decision theory based upon examples such as these. I will show how we might reformulate decision theory in the face of these challenges. Then I will consider the philosophical questions that remain after the challenges have been accommodated.

PDF, Journal

Pluralism about belief states

Proceedings of the Aristotelian Society (Supp. Vol.) 89(1):187-204 (Contribution to a symposium on Hannes Leitgeb’s Humean thesis on belief at the Joint Session of the Aristotelian Society and Mind Association 2015)

With his Humean thesis on belief, Leitgeb (2015) seeks to say how beliefs and credences ought to interact with one another. To argue for this thesis, he enumerates the roles beliefs must play and the properties they must have if they are to play them, together with norms that beliefs and credences intuitively must satisfy. He then argues that beliefs can play these roles and satisfy these norms if, and only if, they are related to credences in the way set out in the Humean thesis. I begin by raising questions about the roles that Leitgeb takes beliefs to play and the properties he thinks they must have if they are to play them successfully. After that, I question the assumption that, if there are categorical doxastic states at all, then there is just one kind of them—to wit, beliefs—such that the states of that kind must play all of these roles and conform to all of these norms. Instead, I will suggest, if there are categorical doxastic states, there may be many different kinds of such state such that, for each kind, the states of that type play some of the roles Leitgeb takes belief to play and each of which satisfies some of the norms he lists. As I will argue, the usual reasons for positing categorical doxastic states alongside credences all tell equally in favour of accepting a plurality of kinds of them. This is the thesis I dub pluralism about belief states.

PDF, Journal

Accuracy and the belief-credence connection

Philosophers’ Imprint 15(16):1-20

Probabilism is the thesis that an agent is rational only if her credences are probabilistic. This paper will be concerned with what we might call the Accuracy Dominance Argument for Probabilism (Rosenkrantz, 1981; Joyce, 1998, 2009). In this paper, I wish to identify and explore a lacuna in this argument that arises for those who take there to be (at least) two sorts of doxastic states: beliefs and credences.

PDF, Journal

What chance-credence norms should not be

Noûs 49(1):177-196

A chance-credence norm states how an agent’s credences in propositions concerning objective chances ought to relate to her credences in other propositions. The most famous such norm is the Principal Principle (PP), due to David Lewis. However, Lewis noticed that PP is too strong when combined with many accounts of chance that attempt to reduce chance facts to non-modal facts. Those who defend such accounts of chance have offered two alternative chance-credence norms: the first is Hall’s and Thau’s New Principle (NP); the second is Ismael’s General Recipe (IP). Thus, the question arises: Should we adopt NP or IP or both? In this paper, I argue that IP has unacceptable consequences when coupled with reductionism, so we must accept NP alone.

PDF, Journal

2014

Deference done right (with Mike Titelbaum)

Philosophers’ Imprint 14(35):1-19

There are many kinds of epistemic experts to which we might wish to defer in setting our credences. These include: highly rational agents, objective chances, our own future credences, our own current credences, and evidential (or logical) probabilities. But how, precisely, ought we defer to these experts? Exactly what constraint does a deference requirement place on an agent’s credences at a particular time?

In this paper we consider three possible answers, inspired by three different principles that have been proposed for deference to objective chances. We consider how these options fare when applied to the other kinds of epistemic experts mentioned above. Besides assuming a baseline probabilism about rational credences, we are particularly interested in the following two desiderata:

  • A deference principle should be consistent with both the agent’s and the experts’ updating by Conditionalization.
  • A deference principle should permit agents to have various kinds of doubts about what’s rationally required.

Of the three deference principles we consider, we argue that two of the options face insuperable difficulties meeting these desiderata. The third, on the other hand, fares well — at least when it is applied in a particular way.

PDF, Journal

Two types of abstraction for structuralism (with Øystein Linnebo)

Philosophical Quarterly 64(255):267-283

If numbers were identified with any of their standard set-theoretic realizations, then they would have various non-arithmetical properties that mathematicians are reluctant to ascribe to them. Dedekind and later structuralists conclude that we should refrain from ascribing to numbers such ‘foreign’ properties. We first rehearse why it is hard to provide an acceptable formulation of this conclusion. Then we investigate some forms of abstraction meant to purge mathematical objects of all ‘foreign’ properties. One form is inspired by Frege; the other by Dedekind. We argue that both face problems.

PDF, Journal

2013

Accuracy and Evidence

Dialectica 67(4):579-96

In ‘A Nonpragmatic Vindication of Probabilism’, Jim Joyce argues that our credences should obey the axioms of the probability calculus by showing that, if they don’t, there will be alternative credences that are guaranteed to be more accurate than ours. But it seems that accuracy is not the only goal of credences: there is also the goal of matching one’s credences to one’s evidence. I will consider four ways in which we might make this latter goal precise: on the first, the norms to which this goal gives rise act as ‘side constraints’ on our choice of credences; on the second, matching credences to evidence is a goal that is weighed against accuracy to give the overall cognitive value of credences; on the third, as on the second, proximity to the evidential goal and proximity to the goal of accuracy are both sources of value, but this time they are incomparable; on the fourth, the evidential goal is not an independent goal at all, but rather a byproduct of the goal of accuracy. All but the fourth way of making the evidential goal precise are pluralist about credal virtue: there is the virtue of being accurate and there is the virtue of matching the evidence and neither reduces to the other. The fourth way is monist about credal virtue: there is just the virtue of being accurate. The pluralist positions lead to problems for Joyce’s argument; the monist position avoids them. I endorse the latter.

PDF, Journal

A New Epistemic Utility Argument for the Principal Principle

Episteme 10(1):19-35

Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that paper assumed (roughly) that a credence in a proposition is better the closer it is to the objective chance of that proposition. In this paper, I present an epistemic utility argument for Probabilism and the Principal Principle that retains Joyce’s assumption rather than the alternative I endorsed in the earlier paper. I argue that this results in a superior argument for these norms.

PDF, Journal

Epistemic utility and norms for credence

Philosophy Compass 8(10):897-908

Beliefs come in different strengths. An agent’s credence in a proposition is a measure of the strength of her belief in that proposition. Various norms for credences have been proposed. Traditionally, philosophers have tried to argue for these norms by showing that any agent who violates them will be led by her credences to make bad decisions. In this article, we survey a new strategy for justifying these norms. The strategy begins by identifying an epistemic utility function and a decision-theoretic norm; we then show that the decision-theoretic norm applied to the epistemic utility function yields the norm for credences that we wish to justify. We survey results already obtained using this strategy, and we suggest directions for future research.

PDF, Journal

Introducing…Epistemic Utility Theory

The Reasoner 7(1):10-11

A very brief overview of accuracy-based arguments for credal principles.

PDF

Review of Mark Colyvan’s An Introduction to the Philosophy of Mathematics

Bulletin of Symbolic Logic 19(3): 396-397

PDF, Journal

2012

Accuracy, Chance, and the Principal Principle

Philosophical Review 121(2):241-275

In “A Nonpragmatic Vindication of Probabilism,” James M. Joyce attempts to “depragmatize” de Finetti’s prevision argument for the claim that our credences ought to satisfy the axioms of the probability calculus. This article adapts Joyce’s argument to give nonpragmatic vindications of David Lewis’s original Principal Principle as well as recent reformulations due to Ned Hall and Jenann Ismael. Joyce enumerates properties that a function must have if it is to measure the distance from a set of credences to a set of truth values; he shows that, on any such measure, and for any set of credences that violates the probability axioms, there is a set that satisfies those axioms that is closer to every possible set of truth values. This article replaces truth values with objective chances in this argument; it shows that for any set of credences that violates the probability axioms or the Principal Principle, there is a set that satisfies both that is closer to every possible set of objective chances and similarly for Ned Hall’s New Principle and Jenann Ismael’s Generalized Principal Principle. Along the way, the article provides new arguments for some of Joyce’s central conditions on distance measures, and it answers two pressing objections to Joyce’s strategy.
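For reference, Lewis’s Principal Principle in one standard formulation (my notation; questions about admissibility are set aside) requires an initial credence function $c$ to satisfy

\[
c\big(A \,\big|\, \langle ch(A) = x \rangle \wedge E\big) \;=\; x
\]

whenever $E$ is admissible and the conditional credence is defined. The article’s adaptation, roughly, measures the distance of a credence function from the possible chance functions rather than from the possible truth-value assignments.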

PDF, Journal

Indispensability arguments and instrumental nominalism

Review of Symbolic Logic 5(4):687-709

In the philosophy of mathematics, indispensability arguments aim to show that we are justified in believing that mathematical objects exist on the grounds that we make indispensable reference to such objects in our best scientific theories (Quine, 1981a; Putnam, 1979a) and in our everyday reasoning (Ketland, 2005). I wish to defend a particular objection to such arguments called instrumental nominalism. Existing formulations of this objection are either insufficiently precise or themselves make reference to mathematical objects or possible worlds. I show how to formulate the position precisely without making any such reference. To do so, it is necessary to supplement the standard modal operators with two new operators that allow us to shift the locus of evaluation for a subformula. I motivate this move and give a semantics for the new operators.

PDF, Journal

Identity and Discernibility in Philosophy and Logic (with James Ladyman and Øystein Linnebo)

Review of Symbolic Logic 5(1):162-186

Questions about the relation between identity and discernibility are important both in philosophy and in model theory. We show how a philosophical question about identity and discernibility can be ‘factorized’ into a philosophical question about the adequacy of a formal language to the description of the world, and a mathematical question about discernibility in this language. We provide formal definitions of various notions of discernibility and offer a complete classification of their logical relations. Some new and surprising facts are proved; for instance, that weak discernibility corresponds to discernibility in a language with constants for every object, and that weak discernibility is the most discerning nontrivial discernibility relation.
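For reference, the notion of weak discernibility at issue here can be glossed as follows (a standard definition in this literature; the paper’s own formal definitions may differ in detail): objects $a$ and $b$ are weakly discernible in a language just in case some formula $\varphi(x, y)$ of that language holds of the pair $\langle a, b \rangle$ but fails of $\langle a, a \rangle$, that is,

\[
\varphi(a, b) \wedge \neg \varphi(a, a).
\]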

PDF, Journal

2011

An Improper Introduction to Epistemic Utility Theory

in Regt, Henk de, Stephan Hartmann, and Samir Okasha (eds.) EPSA Philosophy of Science: Amsterdam 2009 (Springer)

A survey of accuracy-based arguments for Probabilism and Conditionalization.

PDF

Probability

in Horsten, L. and R. Pettigrew (eds.) Continuum Companion to Philosophical Logic (Continuum Press)

An introductory survey article on different interpretations of probability.

PDF

Category theory as an autonomous foundation (with Øystein Linnebo)

Philosophia Mathematica 19(3):227-254

Does category theory provide a foundation for mathematics that is autonomous with respect to the orthodox foundation in a set theory such as ZFC? We distinguish three types of autonomy: logical, conceptual, and justificatory. We argue that, while a strong case can be made for its logical and conceptual autonomy, its justificatory autonomy turns on whether or not mathematical theories can be justified by appeal to mathematical practice. If they can, a category-theoretical approach will be fully autonomous; if not, the most natural route to justificatory autonomy is blocked.

PDF, Journal

2010

An Objective Justification of Bayesianism II: The Consequences of Minimizing Inaccuracy (with Hannes Leitgeb)

Philosophy of Science 77: 236-272 (Chosen for the Philosophers’ Annual 2010)

In this article and its prequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we make the norm mathematically precise; in this article, we derive its consequences. We show that the two core tenets of Bayesianism follow from Accuracy, while the characteristic claim of Objective Bayesianism follows from Accuracy together with an extra assumption. Finally, we show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed, and we describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.

PDF, Journal

An Objective Justification of Bayesianism I: Measuring Inaccuracy (with Hannes Leitgeb)

Philosophy of Science 77: 201-235

In this article and its sequel, we derive Bayesianism from the following norm: Accuracy—an agent ought to minimize the inaccuracy of her partial beliefs. In this article, we make this norm mathematically precise. We describe epistemic dilemmas an agent might face if she attempts to follow Accuracy and show that the only measures of inaccuracy that do not create these dilemmas are the quadratic inaccuracy measures. In the sequel, we derive Bayesianism from Accuracy and show that Jeffrey Conditionalization violates Accuracy unless Rigidity is assumed. We describe the alternative updating rule that Accuracy mandates in the absence of Rigidity.
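The quadratic measures referred to here generalize the familiar Brier score. In the simplest case (my notation), for a credence function $b$ defined on propositions $A_1, \dots, A_n$ and a world $w$ whose truth-value assignment $v_w$ gives 1 to truths and 0 to falsehoods, the inaccuracy of $b$ at $w$ is

\[
I(b, w) \;=\; \sum_{i=1}^{n} \big( v_w(A_i) - b(A_i) \big)^2.
\]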

PDF, Journal

Modelling Uncertainty: Review essay on Huber, F. and C. Schmidt-Petri (eds.) Degrees of Belief

Grazer Philosophische Studien 80: 309-316

The book under review provides a stimulating, informative, and focussed collection of new articles that survey a topic in formal epistemology that is fast becoming one of the central topics in mainstream epistemology. The twelve articles, as well as Huber’s excellent Introduction, address the following two questions:

  1. How should we model or represent an agent’s epistemic state?
  2. What constraints does rationality impose on an agent’s epistemic states thus
    modelled?

I treat each of these questions in turn, and conclude with a detailed consideration of an argument from Joyce’s article.

PDF, Journal

The foundations of arithmetic in finite bounded Zermelo set theory

in Hinnion, R. and T. Libert (eds.) One Hundred Years of Axiomatic Set Theory, Cahiers du Centre de Logique 17: 99-118

In this paper, I pursue a logical foundation for arithmetic in a variant of Zermelo set theory that has axioms of subset separation only for quantifier-free formulae, and according to which all sets are Dedekind finite. In section 2, I describe this variant theory. And in section 3, I sketch foundations for arithmetic in that theory and prove that certain foundational propositions that are theorems of the standard Zermelian foundation for arithmetic are independent of it. An equivalent theory of sets and an equivalent foundation for arithmetic were introduced by Mayberry and developed by the current author in his doctoral thesis. In that thesis and in the joint paper with Mayberry to which it gave rise, the independence results mentioned above are proved using proof-theoretic methods. In this paper, I offer model-theoretic proofs of the central independence results using the technique of cumulation models, which was introduced by Steve Popham, a doctoral student of Mayberry from the early 1980s.

PDF

2009

On interpretations of bounded arithmetic and bounded set theory

Notre Dame Journal of Formal Logic 50(2): 141-152

In ‘On interpretations of arithmetic and set theory’, Kaye and Wong proved the following result, which they considered to belong to the folklore of mathematical logic.

Theorem. The first-order theories of Peano arithmetic and Zermelo-Fraenkel set theory with the axiom of infinity negated are bi-interpretable.

In this note, I describe a theory of sets that is bi-interpretable with the theory of bounded arithmetic, IΔ0 + exp. Because of the weakness of this theory of sets, I cannot straightforwardly adapt Kaye and Wong’s interpretation of the arithmetic in the set theory. Instead, I am forced to produce a different interpretation.

PDF, Journal

Aristotle on the subject matter of geometry

Phronesis 54: 239-260

I offer a new interpretation of Aristotle’s philosophy of geometry, which he presents in greatest detail in Metaphysics M 3. On my interpretation, Aristotle holds that the points, lines, planes, and solids of geometry belong to the sensible realm, but not in a straightforward way. Rather, by considering Aristotle’s second attempt to solve Zeno’s Runner Paradox in Book VIII of the Physics, I explain how such objects exist in the sensibles in a special way. I conclude by considering the passages that lead Jonathan Lear to his fictionalist reading of Met. M 3, and I argue that Aristotle is here describing useful heuristics for the teaching of geometry; he is not pronouncing on the meaning of mathematical talk.

PDF, Journal

2008

Platonism and Aristotelianism in Mathematics

Philosophia Mathematica 16(3): 310-332

Philosophers of mathematics agree that the only interpretation of arithmetic that takes that discourse at ‘face value’ is one on which the expressions ‘N’, ‘0’, ‘1’, ‘+’, and ‘×’ are treated as proper names. I argue that the interpretation on which these expressions are treated as akin to free variables has an equal claim to be the default interpretation of arithmetic. I show that no purely syntactic test can distinguish proper names from free variables, and I observe that any semantic test that can must beg the question. I draw the same conclusion concerning areas of mathematics beyond arithmetic.

PDF, Journal

Drafts that will probably remain drafts

Accuracy-domination arguments and credences as estimates of truth-values

Branden Fitelson has recently raised an intriguing objection (Fitelson, 2012) to Jim Joyce’s accuracy-domination arguments for probabilism (Joyce 1998, 2009). He adapts an objection raised by David Miller against accounts of verisimilitude that make it a measure of the accuracy of a theory’s predictions (Miller, 1975). As Joyce presents his accuracy domination argument, it is based on a conception of credences as estimates of truth-values; and Fitelson’s objection is based on an alleged analogy between an agent’s estimate of a quantity (such as a truth-value) and a scientific theory’s prediction of the value of a quantity. I will offer two responses to Fitelson’s objection: I will show that, even if the alleged analogy does hold, it does not undermine Joyce’s argument; then I will argue that the analogy does not hold.

PDF

Self-locating beliefs and the goal of accuracy

The goal of a partial belief is to be accurate, or close to the truth. By appealing to this norm, I seek norms for partial beliefs in self-locating and non-self-locating propositions. My aim is to find norms that are analogous to the Bayesian norms, which, I argue, only apply unproblematically to partial beliefs in non-self-locating propositions. I argue that the goal of a set of partial beliefs is to minimize the expected inaccuracy of those beliefs. However, in the self-locating framework, there are two equally legitimate definitions of expected inaccuracy. And, while each gives rise to the same synchronic norm for partial beliefs, they give rise to different, inconsistent diachronic norms. I conclude that both norms are rationally permissible. En passant, I note that this entails that both Halfer and Thirder solutions to the well-known Sleeping Beauty puzzle are rationally permissible.
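In the non-self-locating case, the expected inaccuracy invoked here has the familiar form (my notation; the paper’s point is that the self-locating framework admits two non-equivalent analogues of this definition). Where $b$ assigns credences to the worlds $w$ and $I(b, w)$ measures the inaccuracy of $b$ at $w$, the expected inaccuracy of $b$ by its own lights is

\[
\mathrm{Exp}_b\big(I(b)\big) \;=\; \sum_{w} b(w)\, I(b, w).
\]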

PDF

My doctoral thesis

Natural, rational, and real arithmetic in a finitary theory of finite sets

Supervisor: John Mayberry

In the first part of this thesis, I describe a finitary theory of finite sets called Euclidean Arithmetic, which was introduced by Mayberry in his The Foundations of Mathematics in the Theory of Sets. In the second part, I develop the theory of simply infinite systems in this theory. Since Dedekind’s Isomorphism Theorem for Simply Infinite Systems does not hold in Euclidean Arithmetic, there is a vast and varied fauna of simply infinite systems with different and surprising properties. I describe many of these systems; I detail the properties they have and the relations that hold between them; and I give recipes by means of which new systems may be defined from old, and from which systems with desired properties may be defined. In the final part of this thesis, I describe a novel version of infinitesimal analysis in an intuitionistic extension of the basic theory of Euclidean Arithmetic. I prove analogues to many of the standard elementary theorems of real analysis and the calculus.

PDF

Research projects

Epistemic Utility Theory: Foundations and Applications

European Research Council

January 2013 – December 2016

This project aims to apply the powerful tools of decision theory to provide novel arguments for the epistemic norms that we take to govern what it is rational to believe; and to discover new epistemic norms. We treat the possible epistemic states of an agent as if they were epistemic actions between which that agent must choose. And we consider how we should measure the purely epistemic utility of being in such a state. We then apply the general apparatus of decision theory to determine which epistemic states are rational in a given situation from a purely epistemic point of view; and how our epistemic states should evolve over time. This allows us, often for the first time, to give formal justifications of epistemic norms without appealing to pragmatic considerations that seem intuitively irrelevant to the norms in question. These formal arguments have the great advantage that their assumptions are made mathematically precise and their conclusions are deduced from their assumptions by means of a mathematical theorem. We call their study epistemic utility theory.

The Scientific Approach to Epistemology

Leverhulme Trust

June 2015 – May 2018

The scientific approach to epistemology is a novel and exciting methodology. It supplements traditional methods, such as conceptual analysis, by means of three innovative techniques: (i) formal mathematical modeling, e.g., by means of Bayesian nets; (ii) empirical experimentation; (iii) computer simulations, e.g. agent-based models. These techniques extend the range of questions that can be fruitfully asked, make the addressed problems more precise, and gauge theoretical results against empirical or simulation-based tests. Apart from incorporating insights from formal sciences such as mathematics, statistics, and computer science, the scientific approach makes (socio-)epistemological research more relevant to related disciplines, e.g. psychology or sociology.
