


"...the value of d(wi) reflects the degree of similarity between a world wi and w with 1 corresponding to maximal similarity and 0 corresponding to maximal dissimilarity."

We can't be sure that d(wi) will select the right worlds. Many worlds won't be selected because those worlds are very dissimilar to wi. But those worlds needn't be dissimilar in ways that matter. For instance, a world in which lakes are filled with twater and the sky reflects green will (likely) be quite dissimilar to ours. But such worlds are not dissimilar in a way that matters to the stability of Smith's belief that (say) barns are sometimes found on farms.
Apart from this, it is very hard to understand an ordering of similarity, simpliciter. How similar a world is to wi depends on what actually happens in wi. So consider a world w in which a nuclear blast destabilizes Smith's belief. Is w relevantly similar to wi? Will w be selected by the d-function for wi? Who knows? It depends on whether a nuclear blast in fact happens in wi. So we have no idea which worlds are most similar to wi.



Thanks for the comment; it's a good point. Question: In the particular case that you consider (viz., cases where microstructural differences don't matter), can't we treat macroscopic identity as defining a set of equivalence classes on worlds? In that case, couldn't I just ignore microstructurally dissimilar but macrostructurally identical worlds without loss?
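The equivalence-class idea can be made concrete with a toy sketch. All names here are hypothetical illustrations, not anything from the original proposal: worlds are modeled as feature records, and partitioning by macroscopic identity collapses microstructurally different but macroscopically identical worlds into one class.

```python
# Toy sketch (all names hypothetical): model worlds as records with macro- and
# micro-level features, then partition them into equivalence classes under
# macroscopic identity, so micro differences (e.g. H2O vs XYZ) are ignored.
from collections import defaultdict

def macro_classes(worlds):
    """Group worlds into equivalence classes keyed by macroscopic features."""
    classes = defaultdict(list)
    for world in worlds:
        classes[world["macro"]].append(world)
    return dict(classes)

worlds = [
    {"name": "w1", "macro": "lakes-blue", "micro": "H2O"},
    {"name": "w2", "macro": "lakes-blue", "micro": "XYZ"},   # a twater world
    {"name": "w3", "macro": "lakes-green", "micro": "H2O"},
]

classes = macro_classes(worlds)
# w1 and w2 land in the same class: their microstructural difference
# makes no difference to the partition.
```

On this picture, "ignoring microstructurally dissimilar but macrostructurally identical worlds" just means working with the quotient: one representative per class.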



That's an interesting question, if I'm tracking you. If you stipulate that microstructural differences are irrelevant, you are effectively claiming that such differences can never be relevant to the stability of a belief. Here's my bet: given a little ingenuity and a little time, we'll construct a case in which those differences do matter to some case of belief stability. Doesn't that sound right?


Last point on this (for me). Suppose you're right that in this context it is macroscopic similarity that matters (as I think it does). That won't help in providing a general definition of the function d; in any case, I can't see how it might help. You need a non-ad hoc way of selecting the worlds most similar to wi. Sometimes macroscopic features of worlds will matter, sometimes microscopic features, sometimes both. A formal representation of this (although I'm all for it!) will be a very complex matter.


You are right that different factors will come into play in different cases. So it seems pretty clear that I can't build in a stipulation that microstructural features are irrelevant. Maybe the moral is this: in cases where only macro characteristics matter, we will get the same results whether or not we treat macro-counterparts as nearby. If so, then the definition will work (in these sorts of cases) as it stands. Take your original example. There will be a problem excluding twater worlds from the set of nearby worlds iff (i) there is a twater world which should count intuitively against the stability of the belief and (ii) there is no water world which counts against the stability of the belief. But I don't see how this can be. If the twater worlds are relevant, this can only be because the water/twater distinction is irrelevant. But if it is irrelevant, then any twater world which should intuitively count against the stability of the belief ought to have a counterpart water world which also counts against it.

I hasten to add that I am by no means confident of the above claim, but I don't see a counterexample up front.


I suppose the simplest way in which microfeatures become relevant is to consider the stability of a person's beliefs about the microfeatures of his world. That seems perfectly possible, and microfeatures ought then to be relevant to belief-stability, no?



First, I think we agree that we don't want the stability of our beliefs to be an issue just because there are TE-type (Twin Earth) externalist scenarios around in logical space. So, for instance, we don't want my belief that there is water in this glass to be unstable just because there are TE worlds in which it is twater. Typically in these cases, we want TE worlds to count as sufficiently dissimilar that they are not nearby. On this count the present proposal seems to get things right. Specifically, in typical cases, a difference in microstructure between a world w and the actual world will be sufficient for counting that world as not nearby.

Now there might be cases where we have an unstable belief which is explicitly about microfeatures. For instance, a case in which a person is vacillating between microstructural choices (Is it H2O or XYZ?). Suppose we have such a case. Now the fact that XYZ worlds are in fact quite dissimilar to H2O worlds doesn't in itself mean that they won't get counted as nearby worlds for the case under consideration (since the degree of similarity required for inclusion in the nearby worlds will vary with context). [Aside: it is a kind of interesting feature of the definition that when the relevance and similarity requirements are vacuous, the stability operator turns into a logical necessity operator.] I am guessing that in such cases, the relevant worlds will be picked out via macrofeatures and the similarity requirement will be extremely lax.
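The aside about the stability operator collapsing into a necessity operator can be illustrated with a minimal sketch. Everything here is a hypothetical model, not the actual definition under discussion: stability is rendered as truth of the belief at every "nearby" world, where nearby means d(w) meets a context-set threshold.

```python
# Toy sketch (all names and the d-function are hypothetical): a belief is
# "stable" iff it holds at every world whose similarity score d(w), a value
# in [0, 1], meets the context-set threshold for counting as nearby.
def stable(belief, worlds, d, threshold):
    nearby = [w for w in worlds if d(w) >= threshold]
    return all(belief(w) for w in nearby)

# When the similarity requirement is vacuous (threshold = 0), every world
# counts as nearby, so stability collapses into truth at all worlds in the
# space: in effect, a logical necessity operator.
worlds = [0.2, 0.5, 0.9]           # numeric stand-ins for worlds
d = lambda w: w                    # similarity is just the world's score here
belief_high = lambda w: w > 0.1    # true at every world in the space
belief_low = lambda w: w > 0.4     # false at the most dissimilar world

assert stable(belief_high, worlds, d, threshold=0.0)    # "necessary": holds
assert not stable(belief_low, worlds, d, threshold=0.0)
assert stable(belief_low, worlds, d, threshold=0.4)     # stable once the
                                                        # dissimilar world is
                                                        # excluded as not nearby
```

The sketch also shows the contextual point: raising the threshold shrinks the set of nearby worlds, so a belief unstable under a vacuous requirement can be stable under a stricter one.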

But I will have to think more about the examples. Without a concrete case in front of me, I just can't see if there are enough resources to handle things or not.
