I've recently discovered some really interesting papers on how to think about belief in a future with branching time. Folks are interested in branching time as it (putatively) emerges out of "decoherence" in the Everett interpretation of standard quantum mechanics.
The first paper linked to above is forthcoming in BJPS, by Simon Saunders and David Wallace. In it, they argue for a certain kind of parallel between the semantics for personal fission cases and the semantics most charitably applied to language users in branching time, and claim that this sheds light on the way that beliefs should behave.
Now, lots of clever people are obviously thinking about this, and I haven't absorbed all the discussion yet. But since it's really cool stuff, and since I've been thinking about related material recently (charity-based metasemantics, fission cases, semantics in branching time) I thought I'd sit down and figure out how things look from my point of view.
I'm sceptical, in fact, about whether personal fission itself (and the associated de se uncertainty about who one will be) will really help us out here in the way that Saunders and Wallace think. Set aside for now the question of whether, faced with a fission case, you should feel uncertain about which fission-product you will end up as (for discussion of that question, on the assumption that it's indeterminate which of the Lewisian continuing persons is me, see the indeterminate survival paper I just posted up). But suppose that we do get some sense in which, when you're about to fission, you have de se uncertainty about where you'll be, even granted full knowledge of the de dicto facts.
The Saunders-Wallace idea is to use this de se ignorance to explain the ignorance we'd have if we were placed in a branching universe and knew what was to happen on every branch. We'd know all the de dicto truths about multiple futures---and we would literally be about to undergo fission, since each of us would be causally related in the right kind of ways to multiple person-stages in the different futures. So---they claim---ignorance of who I am maps onto ignorance of what I'm about to see next (whether I'm about to see the stuff in the left branch, or in the right). And that explains how we can get ignorance in a branching world, and so lays the groundwork for explaining how we can get a genuine notion of uncertainty/probability/degree of belief off the ground.
I'm a bit worried about the generality of the purported explanation. The basic thought is that a complete story about beliefs in branching universes is going to have to justify degrees of belief in matters that happen, if at all, long after we would go out of existence. And so it just doesn't seem likely that we're going to get a complete story about uncertainty from considerations of uncertainty about which branch I myself am located within.
To dramatize, consider an instantaneous, omniscient agent. She knows all the de dicto truths about the world (in every future branch) and also exactly where she is located---so no de se ignorance either. But still, this agent might care about other things, and have a certain degree of belief as to whether, e.g., the sea-battle will happen in the future. The kind of degree of belief she has (and any associated "ignorance") can't, I think, be a matter of de se ignorance. And I think that, for events that happen, if at all, in the far future, we're relevantly like the instantaneous omniscient agent.
What else can we do? Well---very speculatively---I think there's some prospect of using the sort of charity-based considerations David Wallace has pointed to in the literature to get a direct, epistemic account of why we should adopt this or that degree of belief in borderline cases. The idea would be that we *minimize inaccuracy of our beliefs* by holding true sentences to exactly the right degrees.
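Here's a minimal sketch of how that could go, assuming (these are my assumptions for illustration, not anything from the papers) a quadratic, Brier-style measure of inaccuracy and a semantics that assigns each sentence a truth-degree between 0 and 1:

```latex
% Minimal sketch: quadratic (Brier-style) inaccuracy, assumed for illustration.
% Suppose the semantics assigns sentence S the truth-degree t \in [0,1],
% and an agent believes S to degree b. Measure the inaccuracy of that
% belief by
\[
  I(b, t) = (b - t)^2 .
\]
% Then \partial I / \partial b = 2(b - t), which vanishes exactly when b = t;
% and since I is convex in b, believing S to exactly its truth-degree t
% uniquely minimizes inaccuracy: ``holding true sentences to exactly the
% right degrees''.
```

Everything then hangs on where the truth-degrees come from, which is what the first caveat below is about.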
A first caveat: this hangs on having the *right* kind of semantic theory in the background. A Thomason-style supervaluationist semantics for the branching future just won't cut it, nor will MacFarlane-style relativistic tweaks. I think one way of generalizing the "multiple utterances" idea of Saunders and Wallace holds out some prospect of doing better---but best of all would be a degree-theoretic semantics.
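For concreteness, here is one shape such a degree-theoretic clause might take (my reconstruction for illustration, not anything in the Saunders-Wallace paper): the truth-degree of a future-tensed sentence at a moment is the total weight of the branches through that moment on which the sentence comes true, where the weights, in the Everettian setting, would plausibly be the quantum-mechanical branch weights.

```latex
% One illustrative degree-theoretic clause (my reconstruction).
% m is the moment of evaluation, H(m) the set of branches (histories)
% through m, and w a normalized weight function on H(m); a branch h
% "verifies" Will-phi when phi is true at some point on h later than m.
\[
  [\![\, \mathrm{Will}\,\phi \,]\!]_m
    \;=\; \sum_{\substack{h \in H(m) \\ h \text{ verifies } \mathrm{Will}\,\phi}} w(h),
  \qquad
  \sum_{h \in H(m)} w(h) \;=\; 1 .
\]
% Unlike a Thomason-style supervaluational clause, which gives future
% contingents truth-value gaps, this delivers the intermediate
% truth-degrees that the accuracy argument above needs as its target.
```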
A second caveat: what I've got (if anything) is epistemic reason for adopting certain kinds of graded attitude. It's not clear to me that we have to think of these graded attitudes as a kind of uncertainty. And it's not so clear why expected utility, as calculated from these attitudes, should be a guide to action. On the other hand, I don't clearly see an argument that they *don't* or *shouldn't* have this pragmatic significance.
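For what it's worth, the pragmatic proposal at issue (again my gloss, reusing the notation from the semantic sketch above) would be to let the graded attitudes play the usual credence role in expected utility calculations:

```latex
% Expected utility with truth-degrees playing the credence role (my gloss).
% a is an action, U(a, h) the utility of performing a given that history h
% is how things go, and w, H(m) are as in the semantic sketch above.
\[
  EU(a) \;=\; \sum_{h \in H(m)} w(h)\, U(a, h) .
\]
```

The open question is then why maximizing this quantity should constrain rational action, if the w(h) aren't genuinely degrees of uncertainty.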
So I've written up a little note on some of these issues---the treatment of fission that Saunders and Wallace use, the worries about limitations of the de se defence, and some of the ideas about accuracy-based defences of graded beliefs in a branching world. It's very drafty (far more so than anything I usually put up as work in progress). To some extent it seems like a big blog post, so I thought I'd link to it from here in that spirit. Comments very welcome!
Update: Oh, and a wordle abstract: