Course:Phil440A/Notes


Some very brief notes on issues that arose during the classes.

*Dretske*

~ Note the connection with perception: when you see a thing you don't have to see all of it or know what it is. This is like the woman on the bus who saw that D's brother was being a pain, without knowing that it was D's brother. (She knew by seeing that D's brother *was a pain*, but she didn't know by seeing that *D's brother* was a pain.)
~ He is very vague about what makes an alternative relevant. Relevant for whom - the person in question, or the person judging what she knows? Thought to be relevant, or really relevant? (Suppose the person has silly ideas about what is relevant.) The same questions arise about what counts as an alternative.
~ There's an obvious link to skepticism, and one idea that surfaced in the class was that whatever philosophical line we take on knowledge, we are going to have to reject, as confused, some ordinary claims about what and when people know.

*Lewis* (both weeks)

~ There's a huge and basic contrast between Lewis and Dretske over what is now called 'closure': believing (and knowing) the consequences of what you believe (and know). For Dretske it will vary from case to case whether someone believes, or knows, something that follows from what she believes, even when she knows that it follows. But for Lewis it is built into his definition of a proposition that a belief in something (belief being a relation between a believer and a believed proposition) just is the same belief as a belief in anything logically or necessarily equivalent to it.

"Did you know that Sally believes - knows, in fact - that e^(πi)= -1 and cats chase mice?"
"And she's only in grade 2, what a smart kid?"
"But anyone who believes that cats chase mice believes that cats chase mice and e^(πi)= -1 . They're the same proposition, so the same belief."

There are a lot of smart children out there, on this way of counting beliefs.
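
A minimal sketch of why this follows, on the assumption (standard for Lewis, though not stated in the note above) that a proposition is just the set of possible worlds at which it is true:

    \{\, w : \text{cats chase mice at } w \,\} \;=\; \{\, w : \text{cats chase mice at } w \text{ and } e^{\pi i} = -1 \text{ at } w \,\}

since e^(πi)= -1 holds at every possible world. One set of worlds, so one proposition, and so - on this way of counting - one belief.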

~ Lewis' paper is built around a simple idea: the variations in context that make "know" sometimes very demanding and sometimes fairly weak (if there are such variations) are a special case of the variation in the force of "all" and related words ("everything", "something"), so that "after the party, everyone was happy" says just that the people at the party were happy, not that all six billion of us were. "Know" is a special case of this because it means - for Lewis - that all relevant alternatives have been excluded.

~ The rules he gives are not rules of good, rational thinking for the person herself. They are rules that specify which alternatives we - the people discussing her - have to check she has ruled out before we can truly attribute knowledge to her.

~ L's contextualism comes from his rule of attention. It says that when A and B are discussing C's beliefs they will count her belief as knowledge only if her experience rules out all the alternatives, except those that the rules allow *A and B* to ignore. So it is an 'ascriber-relative' contextualism, not a 'subject-relative' one.

~ I take this to be plausible when A and B are discussing their own beliefs, and less so when they are discussing some other person's, especially when that other person is distant in space or time. I used the example of two criminals discussing whether the police know that they committed the crime. It doesn't seem that one of them can produce a far-out possibility that the police will not have ruled out (that fingerprints do not uniquely identify people, perhaps), and conclude that the police are still ignorant. That seems like a way to get arrested.

~ At the end he makes some brief remarks on how talking of knowledge is a shortcut. The real business of science, he says, lies in excluding alternatives. So - I take it - by suitable use of context we can identify the kinds of alternatives that a particular style of science excludes. Interesting, but he doesn't elaborate on it.

*Hypothesis testing* (both weeks)

The points that seemed to me most interesting were these:
~ The conclusions about which tests in the tea experiment were significant (showed better-than-chance performance) and which were not were themselves not obvious, though the arguments were simple. So we need a theory of experimental method to guide us. And - a Staley & Cobb-like point - it is possible to do a good experiment but not know it is good (or to do a bad one and think it is good), so the grounds for one's conclusion that are given by the experiment, and the grounds that are given by knowing that it is a good experiment, are separate, and can be hard to weigh against each other.
~ The second part of the point above has an interesting comparison with knowledge based on testimony: you can have a belief based on reliable testimony but have weak grounds for knowing that it is reliable. Externalist and internalist dimensions of justification.
~ It is not obvious where the probabilities assigned by the null hypothesis come from. A priori (pure math)? General background beliefs about the world? An arbitrary alternative hypothesis? (See the worked example after this list.)
~ Real epistemology is needed to clarify what one concludes from a test. (Fisher says just that the null hypothesis is to be rejected, if appropriate; but this isn't adequate.) How do we combine the results of tests of different severities (different significance levels)? How do we weigh these results against prior plausibility, inductive evidence, explanatory power, and so on? Such an epistemology might give rationales for the 'rules' of experiment, regarding randomization, double/triple blinding, sample sizes, control groups, and so on.
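
A worked example may make these last two points more concrete. The sketch below is only illustrative: it assumes the classic design of Fisher's tea-tasting experiment (eight cups, four with milk poured first, and the taster asked to pick out those four), details that are not given in the notes above. Under the null hypothesis that the taster is merely guessing, the probabilities fall out of counting over the assumed design; whether that makes them a priori, or dependent on background beliefs about the setup, is exactly the question raised above.

    from math import comb

    # Assumed (classic) design of Fisher's tea-tasting experiment: 8 cups,
    # 4 with milk poured first, and the taster must say which 4 those are.
    # These details are an illustrative assumption, not taken from the notes.
    CUPS = 8
    MILK_FIRST = 4

    def p_at_least(k: int) -> float:
        """Probability, under the null hypothesis of pure guessing, that the
        taster identifies at least k of the milk-first cups correctly.
        The number of correct picks follows a hypergeometric distribution."""
        total = comb(CUPS, MILK_FIRST)  # 70 equally likely ways to pick 4 cups
        favourable = sum(
            comb(MILK_FIRST, j) * comb(CUPS - MILK_FIRST, MILK_FIRST - j)
            for j in range(k, MILK_FIRST + 1)
        )
        return favourable / total

    print(p_at_least(4))  # perfect score: 1/70  ~ 0.014
    print(p_at_least(3))  # three or more right: 17/70 ~ 0.243

On this assumed design a perfect score is significant at the conventional 0.05 level, while three out of four correct is not - the kind of non-obvious verdict the first point in this list gestures at.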

*Pragmatic encroachment* (Fantl & McGrath)

The argument to focus on has the form

If I know P then O is the best act for me
O is not the best act for me (because of the risks)
therefore: I do not know P

This creates a danger that considerations about what risks someone faces will determine what she knows. Consider this parody of the argument

If P is true then O is the best act for me
O is not the best act for me
therefore: P is not true

But it is surely crazy that what risks someone faces should determine what is true.
One might defend the first premise of the parody with "I should have bought gold" considerations. (If gold is going to rise, then buying it is the best thing for me to do.) This points to an ambiguity in "best act": best in the light of how things actually turn out, as opposed to best given what one has to go on at the time. So now the interesting question is whether we can find a similar ambiguity in the use of "best" in the earlier argument.

*Second-order knowledge*

~ Connection with Gettier cases: since so many varied things can go wrong with a justified true belief to prevent it from being knowledge, it is hard to have evidence that none of them has occurred. But you would have to have this to know that you know something. Looking at the field with the sheep, you can know that there is a sheep in the field just by looking, but to know that you know, you have to know, for example, that the surrounding fields do not have lifelike sheep models in them.
~ Knowing that someone else knows: as long as we accept that one can only know truths, it is hard to see how knowing that someone knows something could fail to amount to knowing it yourself. But sometimes it seems fishy. The best way out is to emphasise warrant, as K & P do. For though I now know that Fermat's Last Theorem is true, by knowing that Wiles knows it, I know it on very different grounds from the ones he has.
~ The class produced nice simple examples where one person knows that another knows something (and herself knows it only by knowing that the other knows). You see someone looking round a corner and see them see something startling. You see someone's face light up and she calls BINGO.

*Accomplishment*

One more idea, which helps connect this material to the themes of the course. Examples suggest that action based on belief that is not knowledge - even true belief - is not accomplishment. And it is plausible that we base our assessments of competence on accomplishment. (In judging whether someone is a good person to entrust with a task, you check what they have accomplished in the area, not which of their schemes in that area happened to succeed.) So this is one reason we need the concept of knowledge: as input to judgements of accomplishment, which are inputs to judgements of competence. This should help shape our intuitions about when someone has knowledge, too.

The last paragraph helps with Williamson's question about the explanatory force of attributions of knowledge. True belief without knowledge does not explain why someone accomplished what they tried to do (at most they luckily got what they wanted), and so it is not an input into saying what skills or capacities they may have.
Another way of putting the point: mere true belief does not explain how someone did what they did (even when it explains that they did it - explains the action that resulted).

Often when we get a richer explanation of an action than simple attribution of beliefs and desires gives, what else we add (use of reliable testimony, tracking skill, whatever) also gives knowledge. The fact of knowledge does not by itself add the additional explanatory power, but it results from the factors that give the better explanation.