This is a little thought experiment I ran by a few people a while back and would love to get as much feedback as possible on it.
Kurt, a theoretical mathematician, has just come up with a long and complicated proof for the proposition that p which involves several wholly novel logical techniques. The difficulty of the proof is such that Kurt can only grasp the cogency of the argument when he is at his absolute sharpest. Suppose that upon finishing the proof at t1 Kurt is at his sharpest, sees how the argument goes, and consequently comes to believe that p. Immediately thereafter, at t2, Kurt’s mental focus deteriorates slightly (unbeknownst to him). At this point, he begins to doubt the cogency of the argument and, as a result, ceases to believe that p. Immediately thereafter, at t3, Kurt takes a shot of espresso, his mental acuity goes back up, and he once again comes to believe that p. Shortly thereafter, at t4, the caffeine jolt wears off and Kurt once again loses faith in his argument. (Iterate this vacillation as many times as you like.) Finally, at tn, Kurt has gone through the proof a sufficient number of times that, even at those times when he can’t actually see how the proof goes, he continues to believe that p on the basis of those occasions when he is functioning at a higher mental level.
Question: Does Kurt know his conclusion at time t1?
There seem to be three responses: (1) maintain that Kurt knows that p at t1, fails to know it at t2, knows it at t3, fails to know it at t4, … , knows it at tn. (2) Kurt doesn’t know it at any time prior to tn because he isn’t, in fact, justified. (3) Kurt has a justified true belief at t1 and t3, but doesn’t know it until tn.
Option (1) requires that knowledge be able to flicker in and out of existence over very short intervals of time. This seems to me like an immensely unattractive position. [UPDATE: In any event, it seems to me intuitively wrong to say that he knows at t1.] Option (2) requires that justification at a time be a function of our level of mental acuity at other times. Specifically, it requires that my being justified in believing that p at t is not simply a matter of my understanding at t how my evidence supports my belief, but also a matter of whether my mental acuity is such that at times other than t I understand how my evidence supports this belief. This also seems unlikely. After all, if Kurt had been compelled to act on his belief at, say, t3, we would hardly be inclined to say that he was not justified in so acting.
Option (3) requires merely that justified true belief is not sufficient for knowledge – something we already know to be the case from Gettier. But how are we to diagnose the failure? Unfortunately for my larger project, the case is equivocal between the instability of belief and the instability of justification (both of which seem to be lost at t2 and t3). One thing that tilts the example slightly in favor of the instability of belief is that it is not implausible to claim that Kurt would be justified in believing that p at, say, t2 on the basis of his being justified at t1. So we might be inclined to say that, at t2, Kurt is justified in believing that p, though he doesn’t in fact believe it then. If so, then it looks as if the JTB analysis can fail, not only because of instability in justification, but also because of instability in belief.
I went for (1) intuitively. A couple of comments:
Perhaps grasping a proof takes time. This could increase the amount of time it takes for knowledge to come and go--if, at all the even-numbered t's, Kurt has to go through his proof again.
Also, Hawthorne's practical environment view (in Knowledge and Lotteries) has knowledge coming and going quickly as your practical focus changes.
I should note that intuitions about knowledge come cheap for me, because I don't think knowledge is very important as a fundamental epistemological concept (as opposed to justification). Mathematical knowledge is a bit of a hard case here, though, because having proved that p is much more important than having a certain degree of justification for belief that p is true. I don't have a settled view on that.
Here's a question: At t3 Kurt tells Emmy that he has proved p. Does Emmy know that p?
Posted by: Matt Weiner | May 26, 2004 at 11:36 AM
It might be that the situation is not sufficiently articulated to yield a reliable intuition. Suppose we add the following: (i) Kurt knows that his acuity oscillates, and he knows when he is mentally acute and when he isn't; (ii) Kurt's *recollection* about his prior mental acuity is reliable in his non-acute (and acute) states, and he knows this; (iii) Kurt believes correctly at t1 that he proved p. If these are true, then it is hard to see why Kurt would not know that p at both acute and non-acute points. He can't do the proof at non-acute points, but he recalls correctly at such points that someone has done the proof correctly (viz. himself). In the same way that I know that arithmetic is incomplete (i.e., I know that someone has done that incompleteness proof correctly), Kurt knows that p in his less acute moments.
Posted by: MJ | May 26, 2004 at 03:41 PM
Wait! I see now that you expressly deny part of my (i). You assume that Kurt does not know when he is in a non-acute state. But so long as his recollection of having done the proof correctly is reliable (and he knows it's reliable), it seems to me that he does know. But re-reading the brief example, you might be implicitly denying that his recollection is reliable (or that he knows that it's reliable) in those non-acute states. I just can't tell.
Posted by: MJ | May 26, 2004 at 03:54 PM
MJ--
The intention was to set up the case in such a way that, although K (nonfactive reading) remembers at t2 doing the proof accurately at t1, he doubts that he really did it accurately at t1. I suppose that this requires that he not be aware of the shift in his cognitive functioning. We might even suppose that at t2 he comes to reasonably but wrongly suspect that he made some error in the proof. So I guess I want to develop the case in a way that denies condition (iii) as well. What he believes at t2 is that he thought at t1 that he proved that p, not that he proved that p. Does that help?
Posted by: marc | May 26, 2004 at 05:02 PM
__What he believes at t2 is that he thought at t1 that he proved that p, not that he proved that p. Does that help?__
Let me see. At t2 to the query "did you prove p at t1" Kurt says "I think so". But to the query "Are you certain?" he says "No, I'm not certain."
So can Kurt know that he proved p without being certain that he proved p? I'm guessing that almost everyone will say yes. In any case, I would.
Posted by: MJ | May 26, 2004 at 07:24 PM
__Let me see. At t2 to the query "did you prove p at t1" Kurt says "I think so".__
I intended the case in such a way that K's response would be, "I thought so, but now I'm not sure--in fact, I kinda doubt it." As stated in the original case, K doesn't believe that p at t2, so it would hardly be reasonable to expect him to know that he proved that p!
Posted by: marc | May 26, 2004 at 08:07 PM
__As stated in the original case, K doesn't believe that p at t2, so it would hardly be reasonable to expect him to know that he proved that p!__
I don't know. K temporarily fails to remember the proof. Does he know that p? Well suppose Smith knows his phone number at t1. But at t2 he is so drowsy that he (temporarily) can't recall the number. An hour later he recalls the number again. This temporary cognitive impairment might be similar to K's. Perhaps K's lapses are due to drowsiness or a temporary chemical imbalance or poor nutrition (perhaps K starves himself now and then out of a fear of being poisoned). In any case, though Smith temporarily cannot recall his phone number, I'd urge that he knows the number. So it might also be that though K temporarily cannot recall how the proof for p goes, he knows how it goes.
Posted by: MJ | May 27, 2004 at 07:33 AM
I'm not a theoretical mathematician or [professional] philosopher but I am a programmer. Perhaps a description of a scenario I run into quite often can shed some light on why I would lean towards Option (1).
Occasionally I'm asked to write something complex. It usually requires an extended time of superb concentration to produce a solution that works. The nice thing about software development, however, is that things can be tested to work.
After some time of neglect (I like to think of it as unblemished functionality) I often forget how my own code works! Especially if it is a program involving a few thousand lines of code and I'm off working on other things.
It's funny how we create things and forget them subsequently - we remember the substance of the work but forget the details. Politicians forget bureaucracies, linguists forget languages, philosophers forget books, and so the pattern goes.
If I'm asked to troubleshoot or enhance the code I've forgotten about I have to reenter the state of superb concentration I had when writing it in order to really get how things worked in the first place. When I have a firm clarity of the type I had when it was originally written, I can add or remove or edit with a good understanding.
I suspect that a lot of "knowing" a truth or fact outside of ourselves is how well we can document our understanding of it. The disheveled mathematician is surely grasping for a pen and paper in his musty office as his theory dawns upon him.
Lastly, just like a programmer who writes a library and then uses it over and over with the assumption that it works, a lot of "understanding" works that way. We quickly seek applications for our understandings and the foible is seeing everything as a nail after we've conceived a hammer.
Posted by: David Seruyange | May 27, 2004 at 08:25 AM
Like Matt, I too plunk for option 1. There's more on why -- including another thought experiment intended to motivate the choice -- at http://philonous.typepad.com/musings_from_the_lehigh_v/. Cheers!
Posted by: j.s. | May 27, 2004 at 09:59 AM