forum 7: week of 27 Feb - pragmatic encroachment
I hope you've all got my email with reading suggestions. Contact me if you have not.
On page 564, in the last whole paragraph, F & McG state their assumptions. You may have worries about (1), fallibilism. But that's going down a well-explored route. (Comments welcome, all the same.) I think (3) is the assumption doing the most work. Think about it: is it as obvious as it might seem?
((2) is important too. Worth pausing to think out what it is saying.)
I am writing to endorse Nicole Jinn's February 16 posting on the PHIL 440A non-standard topics: Experiment [Staley], March 6 and 8; and Experimental Design [Fisher], March 20 and 22.
Nicole's February 9 Course Talk posting reads:
Last Thursday (February 9), Dr. Morton briefly went over the remaining topics that he plans to cover in this course. Among them, he mentioned that the two least connected with his overall motive for this course are the readings by Staley (6-8 March) and Fisher (13-15 March). I am curious as to whether anyone in this course (among the participants) has objections to doing either of these two readings. If so, please be honest about your objections and I will try to consider them to the best of my ability. While you decide what objections you may have to those two readings, I just want to make it known that you may expect to see me give short presentations on Fisher's reading on one or both days during that week. As much as these two topics (or readings) are least connected with this course, they are (ironically) the two topics of most interest to me, if that makes sense to any of you.
Nicole Jinn, 02:07, 16 February 2012.
forum 5: week of 6 Feb - Hawthorne and lotteries
Dr. Morton's November 17, 2011 E-mail questions to PHIL 440A course registrants on Non-Standard Topics:
I would like to spend some time on the following non-standard topics. Do you have any background or interest?
- the design of experiments & the philosophy of experimentation
- the link between grounds for knowledge and reasons for action
My November 17, 2011 E-mail reply to Dr. Morton:
Thank you for your E-mail on PHIL 440. I think I can claim background in the design of experiments. My current focus is zero carbon dioxide emissions, and the deployment of the plant to implement it. Your non-standard topics are of great interest.
I think I rather agree with (2), almost unreservedly. And (3) does seem pragmatically straightforward. But I cannot seem to make the three statements lead directly to the conclusion reached. They seem to me to lead to a justification in -doing- but not a difference in -knowing-. (1) doesn't lead to an alteration of knowledge, but an alteration of surety beyond knowledge. "Do you know that?" "Yes." "Are you SURE?" "Sure ENOUGH." is not changed to "Are you SURE?" "You're right, I don't know." It's rather "Are you SURE?" "No, I'm not sure, but I -think- so." I'm not sure (pun not intended) whether it could be made more convincing with a bit of rewording, though.
I don't think they explained away uncertainty as definitively as they hoped to. Unless I am mistaken, they conclude that if you know reason r, then, no matter the risks, the possibility that not-r is irrelevant. To me their reasoning about the big O went nowhere, so their conclusion about r is just something they said at the end. Risk will always be a factor in my decisive use of knowledge: having "not-r" in the back of your mind does not subdue knowledge, and cutting out the possibility of "not-r" can only stifle your scope of awareness.
Generally, I don't have a problem with any of the claims or #3 in particular; however, the reductio argument used by Fantl & McGrath to arrive at (3) confused me, but on a purely pragmatic level and only in DeRose's Bank examples (p. 564, paragraphs 2 and 3) they used to explain it. In my interpretation, option O (in case A) is "waiting until tomorrow to deposit the check instead of going in and double-checking whether the bank is open". The authors claim that "he will know that going in to check further will have a worse outcome". I realize the low stakes of case A, but it escapes me why improving one's epistemic position concerning the bank's hours is ever a worse option. Perhaps it is not important in this particular situation (hence the low stakes), and maybe it will take up a couple of minutes of the individual's time, and maybe the clerk will be rude or the hours sign will be unintelligible; but overall, knowing the hours will maybe save this person from attempting the bank line-up some Friday nights in the future! I agree with the authors that you are still, in fact, justified in doing O; but I don't think the other option is objectively worse.
(1) is fairly unproblematic for me. (2) and (3) seem quite related to each other in that both apply in cases where there is a lack of certainty. Indeed, (3) is the assumption doing the most work. (3) is also the most problematic for me, for the following reasons: (a) supposing that one knows "that O is best" is a huge leap for me, because (b) "that O is best" is arbitrary: what does it mean for O to be best? (This question is NOT answered in the Fantl and McGrath article we are reading.) Especially when we assume a lack of certainty, the "best" option need not be lopsided in the sense that all other options are "much worse" than the "best" option, whatever that may mean. In other words, the "best" option may not be much better than the second-best option (i.e., the first option may beat the second by a very narrow margin, a close call). This is why taking the "maximally" likely option is not always optimal in a probabilistic setting. Hence, I do not buy Fantl and McGrath's argument for "if you know that O is best, you are justified in doing O" (page 568), because I almost never know for sure "that O is best"! Establishing the truth of "O is best" is difficult, and the authors (Fantl and McGrath) seem to have swept this important point under the rug.
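The point that the maximally likely option need not be the optimal one can be made concrete with a toy decision problem. All probabilities and payoffs below are invented for illustration; they do not come from the Fantl and McGrath paper.

```python
# Three hypothetical options: each has a probability of success
# and a payoff if it succeeds (numbers are made up).
options = {
    "A": {"p_success": 0.90, "payoff": 10.0},  # most likely to succeed
    "B": {"p_success": 0.60, "payoff": 16.0},  # riskier, bigger payoff
    "C": {"p_success": 0.55, "payoff": 17.6},  # nearly tied with B on expected value
}

# Expected value of each option: probability of success times payoff.
expected = {name: o["p_success"] * o["payoff"] for name, o in options.items()}
# A -> 9.00, B -> 9.60, C -> 9.68 (approximately)

most_likely = max(options, key=lambda n: options[n]["p_success"])
best_ev = max(expected, key=expected.get)

print(most_likely)  # the option with the highest chance of success: "A"
print(best_ev)      # the option with the highest expected value: "C"
print(round(expected["C"] - expected["B"], 2))  # a narrow margin over second best
```

Here the option most likely to succeed ("A") is not the one with the best expected outcome, and the "best" option ("C") beats the runner-up by a sliver, which is exactly the situation in which claiming to *know* "that O is best" looks like a stretch.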
Personally, I have only minor qualms about (1) and (3). My problem with the second claim is the condition "if the stakes are high enough". I still don't buy that high-stakes situations should have an effect on "certainty", implying that they also have an effect on knowledge (or else what would that certainty pertain to?). I'm inclined to agree with Ange that the entire argument seems to be about pragmatics rather than epistemics.
In support of the Jeremy Fantl and Matthew McGrath paper's commitment to pragmatic encroachment in the absence of certainty in knowledge, I offer the example of Winston Churchill.
In the book titled Troublesome Young Men: The Rebels Who Brought Churchill to Power, author Lynne Olson describes how a group of young Tory members of Parliament, in May 1940, toppled the British Prime Minister Neville Chamberlain, the leader of their own party, from power.
Chamberlain had an overwhelming parliamentary majority. He had declared war on Nazi Germany eight months earlier, upon the Nazi invasion of Poland. The young dissidents used a major British military setback in Norway, and the speech of the leader of their dissident group, to motivate the British House of Commons to reassert itself as the guardian of democracy. The result was that Churchill became Prime Minister on May 10, 1940.
In the book titled Five Days in London: May 1940 (1999), author John Lukacs describes the five days from Friday, May 24, 1940 through May 28, 1940. On May 28, Churchill had won a struggle with his War Cabinet. He declared that England would go on fighting, no matter what happened. No matter what happened, there would be no negotiating with Hitler.
On page two, Lukacs writes, “Then and there he saved Britain, and Europe, and Western civilization.”
In 1943 the United States War Department produced a factual film titled The Battle of Britain. In June 1940 the Nazi army had 100 fully equipped divisions lined along 2,000 miles of the European coast, from Norway into France, for the planned invasion of Britain. Britain had less than one fully equipped division. The Nazi air force outnumbered the British air force ten to one, in both aircraft and pilots.
I think Churchill satisfies (1) in having no certainty, (2) in the high stakes making a difference, and (3) in justification, with his personal commitment to resist the influence of the appeasers.
(3) says that if "option O will have the best outcome of all your available acts, then you are justified in doing O." It seems to hold when the act affects only yourself, as in the Bank example, where staying in line would be the best option if the stakes were high. But take another example, where your sister breaks her leg after falling through a crack in the middle of a frozen lake. The best option would be to bring her back before she freezes, taking the risk of walking across the lake (knowing there is a chance it will crack again). Even though the stakes are high in this example, the best decision seems to be to rescue her, whereas if it were only a matter of walking across the lake yourself, the best option would be to stay put. Perhaps I am missing the nature of the stakes, or of the best option (or this may even be a question of ethics). Perhaps (3) is justified only when the individual alone is under consideration.
Assumption 3 is plausible. If I know O is the best option, then I ought to do O. This is because I have the reason that O is the best option, where having such a reason is bearing some sufficient epistemic relation to that reason.
I'm not convinced that this relation is a relation of 'knowing that', however. When I know r and r is a reason for doing O, then I ought to do O. But the relation I bear to r may just as well be 'believing that' or 'judging it to be the case that'.
I wanted to raise a further issue on the topic of deception, or rather on the issue of how humans have different levels of trust in different situations. Some people take risks; some people are confident in what they "know". For me, I would not cross the frozen lake if the only thing I had to gain was time, since if I was wrong I could die. "Reckless Rick", on the other hand, would claim that he knew he would not fall through the ice, so he crossed the lake. Both Rick and I had the same information, but for him it was knowledge and for me it was not. To me this seems to be the effect of removing certainty from the prerequisites for knowledge: once you don't have to be certain, it becomes a matter of opinion whether you know something or not.
I am sympathetic to the pragmatic approach to resolving paradoxes, such as the snowmobile example discussed in class. While I see how it can be troubling to epistemologists, I think it still offers an intuitive description of how the concept of knowledge is actually applied in real-world situations. I could very well imagine myself saying, well, I know we're going to have class next week. But if someone asked, "would you bet your life on it?", I would retract the earlier statement. Well, maybe I don't know it; I merely think it to be probable (but not probable enough to warrant risking my life).
Perhaps the reason epistemologists have trouble with this conclusion has more to do with the word "know" than with any actual disagreement over how people behave. We've ascribed so many things and connotations to knowing something that knowledge has been virtually elevated to the pantheon of the immortals. But the conclusion shows that knowledge is not only mortal, but subjective as well. Perhaps we need to find another word for the doubleplusgood knowledge that philosophers describe.
I disagree with the claim that knowledge is subjective. Beliefs are subjective, but knowledge is not necessarily subjective (and the two terms--belief and knowledge--are not interchangeable, at least in statistics or applied mathematics). The type of knowledge I'm thinking of is scientific or experimental knowledge: "The growth of knowledge, by and large, has to do not with replacing or amending some well-confirmed theory, but with testing specific hypotheses in such a way that there is a good chance of learning something--whatever theory it winds up as part of" (page 56, "Error and the Growth of Experimental Knowledge" by Deborah Mayo). My main point is that these specific hypotheses need not be subjective, unless the scientific models themselves are subjective. However, I don't want to think that the scientific models themselves are subjective. Otherwise, the entire pursuit of science would be subjective--there would be no objectivity in science, but I do not think that is the case! Does anyone else believe that there is (at least some) objectivity in science???
The more I read about the role of stakes in regard to knowledge, the more I question whether it is actually knowledge that is being influenced in these cases. It is apparent that stakes do play a large role in the outcomes of these scenarios, but I wonder if high stakes have more of a role in changing the way that one acts, rather than one's knowledge. I am suggesting that maybe these stakes can impact the way in which people choose to act without truly weakening their knowledge. Is it possible that these stakes are causing people to act contrary to what they actually know? In the case of the ice thickness, it seems that the person knows that the ice is thick enough, but something like their conscience, or gut, leads them to act in opposition to this knowledge.
I have no problem with claims (1) or (2). (3) 'If you know O will have the best outcome you should do O' is where I identify a problem. It seems like an oversimplification that requires some clarification.
There are many factors that need to be considered in deciding which option will be best, which is why some examples are so problematic for the argument. (3) relies on the assumption that in every given case there will be an option that will undeniably lead to the best outcome, but there are no universal criteria for what makes an outcome the best.
Is the best option the one that is most likely to have a favourable outcome? This can't be it since what is most probable is not always the rational choice to make.
What counts as the 'best option' also varies across viewpoints and with the amount of evidence available. Is the best option objective, based on the evidence that would be available to an omniscient observer? Or is the best option subjective, based only on the evidence of whoever is making the decision? And if it is subjective, is it based only on evidence available in the split second before the decision must be made? These issues need to be resolved before the third assumption is permissible.
In Jim's Churchill example, a crucial point that needs to be made is that Churchill was bluffing. That is to say, deceit played a role in the forming of a historic outcome. In the Churchill example it is a passive, tacit form of deceit (a lie by omission, non-disclosure). Closer to home, a more active example of deceit is provided by the tale of one of Tecumseh's tactics in the War of 1812-14. The great First Nations leader, in collaboration with the British general Brock, was able to convince a large attack force of American troops stationed at their fort in Detroit that the Canadian defence forces were mightier in number than they indeed were. After having Brock send the Americans a letter declaring that 5,000 Canadians were on the way, Tecumseh had his small band, upon their arrival, circle the fort single file through a clearing. He then had them double back through the woods to repeat their passage through the clearing again and again, giving the Americans the impression that they were vastly outnumbered. Subsequently, the Americans under General Hull sent out a white flag and surrendered Fort Detroit, suffering at that time their greatest loss of territory to a foreign power, and affecting the course of the war. Relating these historic examples to point (2): the stakes being high would surely have to include the active and even probable likelihood that deceit will be involved in affecting outcomes, given that warfare is a life-and-death struggle in which the stakes are dramatically heightened.
I sort of agree with (1), since in certain less serious contexts you can say you know something without really being sure that you do, without having looked at all the reasons for and against it and made an educated assertion. (2) definitely makes a lot of sense, because I think at any point there can always be doubt, even if it's tiny or microscopic; for all the things we know, we may one day be proven false. The degree to which we think we know something depends greatly on the level of the stakes, and if we are uncertain and the stakes are high, this will greatly affect what we choose to do. (3) is kind of common sense: if you know option O is best, of course you will do O. The question really is how certain you are about O being best; that is really the question we are asking. And I do think how certain you are is determined by what is at stake. Fantl and McGrath said that if the man knew the bank was open Saturday, then he should go back Saturday even if the stakes are high, if he does in fact know this. I agree: if you KNOW it is open, then you should go back; even if the stakes are high, it doesn't change the fact that you know it. But the problem, I think, is that knowledge is not concrete, or better yet, beliefs are not concrete. We can have beliefs and be certain of them, but maybe we can never have true knowledge, because even if we think we know something we could turn out to be wrong. And could we ever say we knew something and then we didn't? So the question is how certain we are right now: what reasons do we have for believing what we do right now? If the stakes are low we don't need that much justification, but if they are high we need more. This clears up the bank situation at least a bit. When the stakes are low, he says, "I know the bank will be open because it was last week, so I'll come back"; but when the stakes are high, he says, "Well, they could change the hours, or maybe I could even be mistaken," so the evidence is no longer good enough with the high stakes, and he goes into the bank.
Did he ever know? Did he know, in the high-stakes case, that the bank was open on Saturday? I think he knew on some level, but not a level high enough... so I guess I think knowledge is more of a continuum and less a binary category.