My concern in this paper is with a certain pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: if we want to be fallibilists about justification, then we shouldn’t demand that something be *certain* – that we *completely* eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty, then what could it possibly require if not epistemic probability? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of *normalcy* that I take to be irreducible to notions of statistical frequency or predominance.
15:40 to 16:40
Colloquium
Not recorded
Justification, Normalcy and Evidential Probability
Martin Smith