Saturday 1 March 2008

On the Logic of Knowing

In epistemic logic it has become common to construe formulas of the form Kᵢφ, read “agent i knows φ”, in modal terms. Most notably, Hintikka has suggested the following definition of knowledge:
(*) M, s ⊨ Kᵢφ iff for all t such that t ~ᵢ s: M, t ⊨ φ
According to (*), an agent i knows that φ in a situation s, on a model M, if and only if φ is true at every situation t that i considers indistinguishable from s. On this approach an agent i is said to know a fact φ just in case φ is true at all the worlds she considers possible (given her current information).
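To fix ideas, here is a minimal sketch of (*) in Python over a toy two-world model; the worlds, agent name, and proposition below are my own illustration rather than anything in Hintikka's presentation, and propositions are modelled simply as the sets of worlds at which they are true.

```python
# A toy epistemic model for evaluating (*). Propositions are sets of worlds;
# indistinguishable[i][s] is the set of worlds t with t ~_i s.

class EpistemicModel:
    def __init__(self, worlds, indistinguishable):
        self.worlds = worlds
        self.indistinguishable = indistinguishable

    def knows(self, agent, prop, world):
        """(*): K_i phi holds at s iff phi is true at every t with t ~_i s."""
        return all(t in prop for t in self.indistinguishable[agent][world])


# Two worlds the agent cannot tell apart; p is true only at w1.
M = EpistemicModel(
    worlds={"w1", "w2"},
    indistinguishable={"i": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}},
)
p = {"w1"}
print(M.knows("i", p, "w1"))  # False: w2 is indistinguishable from w1 and falsifies p
```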

However, intuitively, there seem to be certain sceptical scenarios which, though remote, we still feel inclined to hold as possible (given our current information). Nevertheless, we seem to feel that these possibilities do not preclude our knowledge of propositions that are incompatible with the sceptical scenarios envisioned. For example, I take myself to know that my laptop is currently sitting on the desk in my apartment (I am currently in the Butler Library computer lab), although I consider it possible that someone broke into my apartment and stole it twenty minutes ago. Since I am in no way inclined to believe that this possibility changes the fact that I know that my laptop is on the desk in my apartment, these two claims must, by my lights, be compatible. But if (*) is correct, then it seems that I do not count as knowing that my laptop is on the desk in my apartment.

One response to the above objection would be to distinguish between logical possibility and serious possibility. The defender of (*) may grant that I consider it logically possible that my laptop has been stolen, but then point out that I do not consider this a serious possibility. The key to distinguishing between the two draws on intuitions from Action Theory. (I take Action Theory to minimally entail the claim that knowledge cannot be analysed independently of epistemic actions. For our present purposes it won't be necessary to appeal to anything beyond this minimal claim.) Roughly, a serious possibility is one which is sufficient for generating a change in an agent’s behaviour. If I considered the possibility that my laptop had been stolen a serious one, the thought goes, then I would not be sitting calmly in the computer lab typing up a philosophy blog post. Rather, I would probably be running home to double-check that my computer is still there.

We may construe the notion of serious possibility along probabilistic lines by specifying a threshold at which a logical possibility becomes a serious possibility. For example, I may regard some possibility as serious only when I assign it a (subjective) probability above 0.3. The present suggestion allows us to reconcile (*) with the intuition described above. If we construe possibility along probabilistic lines, then my earlier admission that the probability that my laptop was stolen twenty minutes ago is greater than zero merely registers that the theft is a logical possibility. However, since I take that probability to be well below 0.3, the theft does not amount to a serious possibility for me. That is, it is not the type of possibility that would prompt any changes in my behaviour. Since the type of possibility implicated in (*) is actually serious possibility, the defender of (*) may, without contradiction, say that I do not consider it possible (i.e., seriously possible) that my laptop was stolen twenty minutes ago.
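As a rough illustration of how the threshold proposal might work, here is a hedged Python sketch in the style of the earlier toy model; the 0.3 cut-off comes from the example above, while the particular worlds and credence values are invented for the purpose.

```python
# Evaluate (*) over serious possibilities only: a world counts as serious
# when the agent's subjective probability for it meets the threshold.

THRESHOLD = 0.3

def serious_worlds(accessible, credence, threshold=THRESHOLD):
    """Filter the indistinguishable worlds down to those taken seriously."""
    return {w for w in accessible if credence[w] >= threshold}

def knows_seriously(accessible, credence, prop, threshold=THRESHOLD):
    """(*) restricted to serious possibilities."""
    return all(w in prop for w in serious_worlds(accessible, credence, threshold))


# s = the laptop is on the desk, t = the laptop was stolen twenty minutes ago.
accessible = {"s", "t"}
credence = {"s": 0.95, "t": 0.05}   # theft is logically possible but not serious
laptop_on_desk = {"s"}

print(knows_seriously(accessible, credence, laptop_on_desk))  # True
```

On this rendering, the mere logical possibility of theft (a credence above zero) does not block knowledge, since only worlds above the threshold enter into the evaluation of (*).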

One problem with framing serious possibility in the Action Theoretic terms adumbrated above is that it makes an agent’s behaviour the litmus test for what she does or does not know. This seems problematic since agents often do not act in ways that reflect their knowledge. For example, suppose I suffer from obsessive compulsive disorder and I find myself plagued by thoughts of having left my gas stove on. I may be aware of my neurosis, and therefore dismiss my fears as idle paranoia, and yet decide to return home to check on the stove. I believe that any general account of knowledge should allow that, despite my neurosis, I do know that my gas is turned off. Admittedly, such actions on my part would count as irrational. However, it does not seem as though the irrationality of my behaviour should undermine my knowledge. This suggests that a plausible behavioural litmus test must be framed only in terms of rationally consistent behaviour. But if we already have some criterion for which actions count as rational, then it is this criterion, and not the agent’s behaviour per se, that determines what constitutes a serious possibility. In my next post on this topic I will suggest an alternative framework for differentiating between logical and serious possibility that is free of the foregoing problematic behaviourist assumptions.

