Thursday, August 30, 2007

Simplicity and possibility

I find mereological nihilism an attractive view. All there are are simples: there’s no mereological complexity to the world. I feel no need to say as a result of this that there are no tables or chairs, provided we don’t take those claims to be perspicuously describing fundamental reality. ‘There are tables’ might be a true sentence of English; but it is being made true not by a mereologically complex object, but simply by a collection of simples arranged a certain way. (See my paper and Robbie’s paper on fundamentality.)

I’m also attracted to the view that there aren’t really any structural universals. All there are are the perfectly natural basic universals. That’s not to say that there’s no methane; it’s just to say that claims about methane will be made true not by a structural universal METHANE but, ultimately, by the pattern of instantiation of the basic (let us suppose) universals HYDROGEN and CARBON.

One type of argument you hear against views like this is that we have to believe in the complex things because there might not be the entities at the bottom level. So we have, e.g., Sider and Armstrong, arguing that we’ve got to believe in mereologically complex entities because there might be no simples, i.e. the world might be gunky. And we have, e.g., Lewis and Armstrong arguing that we’ve got to believe in structural universals because there might be no basic ones. (I’m using a bit of poetic license here: Lewis didn’t really believe in structural universals – but he thought this was the best argument to believe in them.)

There are two ways to read the complaint. One is to read the ‘might’s in the above as meaning metaphysical possibility; the other is to read them as meaning epistemic possibility.

I find the former form of the argument unconvincing. For starters, the metaphysical possibility of gunky worlds, or worlds with infinitely descending chains of structural universals, is far from a datum. (See this paper and this paper by Robbie, which attempt to explain away the illusions of possibility in each case.) But also, even if these are genuine possibilities, I only see a reason to believe that there might have been mereological complexity and structural universals; I don’t see any reason to think that the world actually contains either kind of complex entity. The two positions I confessed my attraction to are claims about how the world actually is, not how it must have been; the possibility of infinite complexity doesn’t give me any reason to accept the actuality of infinite complexity. (See my paper on the contingency of composition.)

What if the ‘might’s are read as epistemic modality? There the complaint is that we have no right to reject the existence of mereologically complex objects or structured universals because we have no guarantee that there are in fact the mereological simples, or basic universals, that there would need to be.

This is, I think, how Armstrong intends the objection (at least sometimes). As he sees it, I think, we’ve got no right just to assume that there are simples or basic universals. That would be a priori ontology, and therefore suspicious! We shouldn’t build theories on the assumption that there are the entities at the bottom level, then, and this means we have to allow that there are the complex entities.

I’ve heard something like that argument from quite a few people, but I don’t find it at all moving. Yes, there might not be any simples or basic universals. My theory might be wrong! There’s no a priori guarantee that there are simples or basic universals. So what? There’s no a priori guarantee that there are complex objects or structural universals either, so where’s the asymmetry? In accepting the two theories above I close off the epistemic possibility that there are no simples or basic universals, but in accepting Armstrong’s theory I close off the epistemic possibility of there being no complex objects or structural universals: why is one better than the other? Every theory closes off epistemic possibilities, unless it is a theory that tells us nothing about the world. So why is it a good objection to the above theories that their truth requires the existence of entities that we have no guarantee exist? Sure, I have no guarantee that there are simples or basic universals. It’s a hypothesis that there are; that hypothesis will then be judged just like any other: on the balance of costs and benefits.

Why might you think there was an asymmetry between reliance on the existence of the simple ontology and reliance on the existence of the complex ontology? You might think that there is an a priori guarantee of the existence of the complex ontology but not of the simple ontology. Why? Well, in the case of mereology, the existence of the complex objects is guaranteed by the axioms of classical mereology, but the existence of simples is not. But that’s not convincing. The question then is simply: why believe in the axioms of classical mereology? They close off epistemic possibilities as well. To claim that they’re a priori looks no better to me than the claim that it’s a priori that there are simples. Assume the axioms of classical mereology and construct your theory on that basis by all means; but then I have as much right to do the same with the assumption that there are the simples – and then to the victor the spoils.
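
To make the contrast vivid, here is a rough sketch of the relevant axioms, in my own notation rather than anything drawn from the authors above, writing $x \sqsubseteq y$ for ‘x is a part of y’ and $x \sqsubset y$ for proper parthood:

$\forall xx\,\exists y\,\mathrm{Fu}(y,xx)$ – unrestricted fusion: any things whatsoever have a fusion, so complex objects are guaranteed as soon as there are two or more disjoint things.

$\forall x\,\exists y\,(y \sqsubseteq x \wedge \neg\exists z\, z \sqsubset y)$ – atomicity: every object has a simple part. This is a further axiom, not a theorem of the classical ones.

$\forall x\,\exists y\, y \sqsubset x$ – gunk: every object has a proper part, which is consistent with the classical axioms.

Nothing in the first formula settles whether the second or the third holds; that is the alleged asymmetry. But the point stands: accepting the first formula is itself just another hypothesis.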

Perhaps the asymmetry is meant to be that the hypothesis that there are the complex objects is empirically sensitive in a way the hypothesis that there are the simple objects isn’t. But I can’t see any reason to think that that is true. If anything, it’s the other way round: there would be no observable difference in the world were there complex objects as opposed to simples arranged a certain way, but if scientists are unable to split the lepton (or whatever), that gives us some reason to believe that leptons are mereologically simple. (I don’t really believe that, but some people do.)

So where’s the asymmetry? Any suggestions? Is the metaphysician who relies on the existence of simples doing anything worse than the metaphysician who relies on the existence of complexity?

Friday, August 17, 2007

PPR back up?

I'm not sure I've seen this advertised around the place, but PPR appears to be back up and running. The website for submissions is a little hard to find at the moment, since it's swamped on Google by the (I believe) out-of-date site at Brown. Anyway, the new site, I think, is here.

Emergence, Supervenience, and Indeterminacy (x-posted from T&T)

While Ross Cameron, Elizabeth Barnes and I were up in St Andrews a while back, Jonathan Schaffer presented one of his papers arguing for Monism: the view that the whole is prior to the parts, and the world is the one "fundamental" object.

An interesting argument along the way was that contemporary physics supports the priority of the whole, at least to the extent that properties of some systems can't be reduced to properties of their parts. People certainly speak that way sometimes. Here, for example, is Tim Maudlin (quoted by Schaffer):

The physical state of a complex whole cannot always be reduced to those of its parts, or to those of its parts together with their spatiotemporal relations… The result of the most intensive scientific investigations in history is a theory that contains an ineliminable holism. (1998: 56)


The sort of case that supports this is when, for example, a quantum system featuring two particles determinately has zero total spin. The issue is that there also exist systems that duplicate the intrinsic properties of the parts of this system, but which do not have the zero-total-spin property. So the zero-total-spin property doesn't appear to be fixed by the properties of its parts.
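
For concreteness, here is the standard textbook illustration, in my own sketch rather than Schaffer's or Maudlin's presentation, writing $\lvert\uparrow\downarrow\rangle$ for ‘particle 1 is spin-up and particle 2 is spin-down’:

$\lvert\psi_{\mathrm{singlet}}\rangle = \tfrac{1}{\sqrt{2}}(\lvert\uparrow\downarrow\rangle - \lvert\downarrow\uparrow\rangle)$, which has total spin 0.

$\lvert\psi_{\mathrm{triplet}}\rangle = \tfrac{1}{\sqrt{2}}(\lvert\uparrow\downarrow\rangle + \lvert\downarrow\uparrow\rangle)$, which has total spin 1.

In both states each particle, taken on its own, is in the same maximally mixed state, $\rho_1 = \rho_2 = \tfrac{1}{2}I$; so the intrinsic spin properties of the parts are duplicated, yet only the first system has the zero-total-spin property.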

Thinking this through, it seemed to me that one can systematically construct such cases for "emergent" properties if one is a believer in ontic indeterminacy of whatever form (and thinks of it the way that Elizabeth and I would urge you to). For example, suppose you have two balls, both indeterminate between red and green. Compatibly with this, it could be determinate that the fusion of the two is uniform; and it could be determinate that the fusion of the two is variegated. The distributional colour of the whole doesn't appear to be fixed by the colour-properties of the parts.
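
The structure of that example, on my own formalization, with $D$ for ‘determinately’, $R$ for ‘is red’ (and non-red amounting to green in this toy setup), and balls $a$ and $b$:

Atomic facts, the same in both scenarios: $\neg D Ra \wedge \neg D \neg Ra$, and likewise for $b$ – each ball is indeterminate between red and green.

Scenario 1: $D(Ra \leftrightarrow Rb)$ – determinately the balls match, so the fusion is determinately uniform.

Scenario 2: $D(Ra \leftrightarrow \neg Rb)$ – determinately the balls differ, so the fusion is determinately variegated.

The two scenarios agree on all the atomic facts about $a$ and $b$ taken individually, so the distributional colour of the fusion is not fixed by those atomic facts.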

I also wasn't sure I believed in the argument, as posed. It seems to me that one can easily reductively define "uniform colour" in terms of the properties of a thing's parts: to have uniform colour, there must be some colour such that each of the parts has that colour. (Notice that here, no irreducible colour-predications of the whole are involved.) And surely properties you can reductively define in terms of F, G, H are paradigmatically not emergent with respect to F, G and H.
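
Spelling out that definition in the notation used above (again, my gloss): a whole $w$ has uniform colour just in case $\exists c\,\forall x\,(x \sqsubseteq w \rightarrow x \text{ has colour } c)$ – the definiens mentions only the parthood structure and the colours of the parts.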

What seems to be going on is not a failure of the properties of the whole to supervene on the total distribution of properties among its parts, but rather a failure of the total distribution of properties among the parts to supervene on the simple atomic facts concerning the parts.

That's really interesting, but I don't think it supports emergence, since I don't see why someone who wants to believe that only simples instantiate fundamental properties should be debarred from appealing to distributions of those properties: for example, that they are not both red, and not both green (this fact will suffice to rule out the whole being uniformly coloured). Minimally, if there's a case for emergence here, I'd like to see it spelled out.

If that's right, though, then applications of supervenience tests for emergence have to be handled with great care when we've got things like metaphysical indeterminacy flying around. And it's just no longer clear whether the appeal to the quantum case with which we started is legitimate or not.

Anyway, I've written up some of the thoughts on this in a little paper.