In his APA comments on Jonathan Schaffer, Ross raises a question about Jonathan's views on the applicability of Ockham's razor. The question arises if you buy into some robust distinction between "fundamental" and "derivative" existents. Candidate fundamental existents: quarks, electrons, maybe organisms (or maybe just THE WORLD). Candidate derivative existents: weirdo fusions, impure sets, maybe tables and chairs (or maybe everything except THE WORLD).
Let's call the idea that "derivative" as well as "fundamental" entities are (thump table) existing things the expansivist interpretation of the fundamental/derivative distinction. Call the idea that only the fundamental (thump table) exists the restrictivist interpretation of that distinction.
Jonathan's position is that Ockham's razor, rightly understood, tells us to minimize the number of fundamental entities. Ross's idea (I think?) is that this is right iff one has a restrictivist understanding of the fundamental/derivative distinction. But Jonathan, pretty clearly, has an expansivist understanding of that distinction: he doesn't want to say that the only thing that (thump table) exists is the world, just that the world is ontologically prior to everything else. So if Ross is right, Jonathan's application of parsimony is in trouble.
I can see what the idea is here: after all, understanding parsimony as the instruction to minimize (thump table) existents, or to minimize the (thump table) kinds of existents, is surely close to the traditional understanding. Whereas the idea that we need only minimize (kinds of) existents of such-and-such a type seems to come a bit out of the blue, and at a minimum we need some more explanation before we could accept that revision to our theoretical maxims.
However... One thing that seems important is to consider what sort of principles of parsimony might be present in more ordinary theorizing (e.g. in the special sciences). The appeal of invoking parsimony in metaphysics is in large part that it's a general theoretical virtue, applicable in all sorts of areas that are paradigms of good, productive fields of inquiry. Now, theoretical virtues in the sciences are not a topic that I'm in a position to speak with authority on. But one thing seems to me important in this connection: if you think that the entities of the special sciences aren't fundamental entities, then principles of parsimony restricted to the fundamentals aren't going to be in a position to give you much bite. (NB: I think that this was raised by someone in comments on Jonathan's paper in Boise, but I can't remember who it was...).
If that's right, then whether you're an expansivist or a restrictivist about the fundamental/derivative distinction seems beside the point. Any theorist who gives a story about what the fundamentals are that's unconstrained by what the special sciences say is going to be in trouble with the idea that principles of parsimony should be restricted to constraints on fundamental existents: for such principles of parsimony won't then be able to get much bite on theorizing in the special sciences. I'd like to think that quarks, leptons etc. are going to populate the fundamental, rather than Jonathan's WORLD. So this point bites me as much as it does Jonathan.
There's plenty of room for further discussion here, particularly the interaction of the above with what you take to be evidence for some entities being fundamental. E.g. if you thought that various types of emergentism in special science would be evidence for "higher level" fundamental entities, then maybe the above parsimony principle would still have application to special sciences: it'd tell you to reduce the number of emergent entities you postulate (i.e. it'd be a methodological imperative towards reductionism).
Also, it seems to me that there is something to the thought that some entities are simply "don't cares" when applying parsimony principles. If I'm concerned with theorizing about the behaviour of various beetles in front of me, I care about how many kinds of beetles my theory is giving me, but not about how many kinds of mathematical entities I need to invoke in formulating that theory. Now, maybe that differential attitude can be explained away by pointing to the generality of the mathematics involved (e.g. that total science is "already committed to them"). But one natural take would be to look for restrictions to principles of parsimony/Ockham's razor, making them sensitive to the subject-matter under investigation.
To speculate wildly: If principles of parsimony do need to be sensitized in this way, and if the study of what fundamentally exists is a genuine investigation, maybe the principle of parsimony, in application to that study, really would tell us to minimize the number of, and kinds of, fundamental entities we posit.