Course talk:Phil440A


forum 11: week of 26 March - knowledge and accomplishment

The parallel might be wrong, spurious or misleading. One way to test its robustness is to push the analogy further. Where does it break? Are there analogs of Gettier cases for accomplishment? Yes, I think, and I can explain, but I'm bothered by more specific forms of the question: are there analogs of fake barn cases? If Williamson is right about everyday explanation of action - and it is a controversial view - then something similar should be arguable for accomplishment. But is it?

AdamMorton16:04, 24 March 2012

hah: no replies (yet). Come to class and all will be explained.

AdamMorton21:29, 26 March 2012

I'm not sure if I understood the question correctly, but here's my attempt to answer it: I get the impression that one of the possible analogs of the fake barn cases has already been discussed in the paper. It's the killer flu case, where the subject has been planning to murder someone, but instead sneezes on them and ends up killing them that way rather than by the initially intended method. This doesn't appear to be a genuine case of accomplishing accomplishment: the intended result has been achieved, but because it was achieved by means other than the ones planned in advance, it is not a case of AA; it seems instead to have been achieved with some element of luck.

Olsy03:24, 27 March 2012
 

I think that Williamson's argument is more satisfying when applied to accomplishment rather than knowledge. I'm not sure if I'm convinced that persistence could be part of the distinction between knowledge and something lesser like true belief. But classifying accomplishment with respect to persistence seems more plausible. For knowledge, I think it's more likely that the distinction comes earlier - possibly from the cause of the persistence (in other words how the knowledge was acquired).

AlexanderBres22:48, 1 April 2012
 

A similarity I notice between accomplishment and Williamson's explanation of our actions is the role of stakes. It seems that the more difficult something is to accomplish, the more willing we are to call it an accomplishment. If accomplishing something will lead to something significant, such as a large cash prize, then we would consider achieving this to be an accomplishment. The same can be said for Williamson's example of the burglar; the burglar is willing to ransack the house if he knows that the diamond is somewhere in the house, but his strong desire to possess the diamond is necessary for him to search the house; knowledge alone would not be enough. In both cases it seems that one's motivation to achieve whatever is at stake plays a crucial role.

Andreaobrien23:05, 3 April 2012
 

Three quotations from two individuals with accomplishment: [1] No bastard ever won a war by dying for his country. He won it by making the other poor dumb bastard die for his country.

[2] Lead me, follow me, or get out of my way.

[1] and [2], General George S. Patton

[3] The inherent vice of capitalism is the unequal sharing of blessings; the inherent virtue of socialism is the equal sharing of miseries.

[3] Winston Churchill

JamesMilligan04:47, 27 March 2012

The reference for accomplishment in the preceding quotations is Dr. Morton’s paper titled Accomplishing Accomplishment, page two, last paragraph, on success: “I have in mind getting what you want because of your efforts.” To me, Patton and Churchill are strong examples of accomplishment by human effort. In the Hollywood film Patton, General Patton is portrayed as the only general the Nazis were afraid of. Patton successfully led the US Third Army across France in World War II to attack Nazi forces. Churchill led Britain to fight the Nazis, no matter what happened. Churchill’s effort bought a year for the United States to prepare to enter WWII. Do the quotations of Patton and Churchill achieve philosophical knowledge because of the accomplishments of Patton and Churchill?

JamesMilligan07:24, 28 March 2012

It seems to me that, at any rate, all this talk of accomplishment rests on a certain unspoken assumption of free will. It seems somehow silly to suggest that anything could actually be accomplished without free will, in that accomplishment seems different from a success event merely occurring. You have to mean it, and to have effectuated an outcome based on your intentions. What if your intentions are not your own, or are at least pre-determined by causes and events before you were ever born? If that were true, does it still make sense to talk about accomplishments, aims, reality, desires in the way that we have used them in class? How are our desires different from a rock's desire to fall down a cliff, or from the universe's desire for maximum entropy and minimum enthalpy? Are those cases different? If so, how?

Edward10:33, 29 March 2012
 
 

Your definition of accomplishment seems to be "An event that is the consequence of X's actions, where X is the one who accomplishes". With this definition X does not have to be aware of what they accomplish; they just have to be one of the reasons the event takes place. However, for X to accomplish their accomplishment they must actively strive for a certain event to take place and thus be the cause of that event's existence. I feel like saying accomplishing an accomplishment is confusing, just like how saying knowing you know seems confusing. I would say directly accomplish and indirectly accomplish would better capture the meaning (if I even understand it). Once it is seen in those terms it seems clear to me that an indirect accomplishment is not really an accomplishment, since you cannot take credit for its completion. If someone wildly throws a dart and it happens to hit a bulls-eye on a nearby wall, people would pat her on the back for her luck, whereas if she had been practicing throwing darts all morning in order to accomplish the same thing purposely, people would praise her for her skill and effort. In this case the difference seems to be skill and effort, without which I do not believe you can call something an accomplishment. But maybe this is just turning into an argument about language and I'm missing the point...thoughts?

ThomasMasin18:33, 29 March 2012

To consider an individual to be one who has accomplished something is to give them credit for it. The credit is due to the degree that they have moved beyond the influence of other persons or contextual influences on their claimed success. Only to that extent, which it seems to me would be difficult to estimate, could it be truly said to be an effort based on free will.

Take the case of Churchill mentioned by Jim above, for instance: his biographer William Manchester mentions a little known fact in "The Last Lion" (the official biography) concerning Churchill's decision to oppose Hitler. A minor Soviet diplomat residing in England on the eve of Britain's involvement in hostilities approached Churchill and convinced the staunchly conservative Brit, who at one point referred to the Soviet Union as a "Jewish empire," to reconsider his neutral stance. Ultimately, as we now know, Sir Winston abandoned his neutrality and the rest, as they say, is history. The official (whose name, I'm embarrassed to say, I don't recall, and have so far failed to google) returned to the USSR a short time later and disappeared, a victim of Stalin's paranoia. By way of this example, I think it can be said that Churchill's accomplishment was, to some degree, creditable to another, and was not therefore entirely of his own making. It was therefore a conditioned response to the problem before him, one which he significantly contributed to solving.

I used to get into some heated rows with a friend who was a great fan of the great-man view of history; I would take great delight in pointing to Tolstoy's portrait of Napoleon in "War and Peace." Contrary to a portrayal of Napoleon as a great accomplisher, Tolstoy likens the conquering general to a chip of wood carried along on the great river of history, and further to a helpless prisoner of massive forces surrounding him and of which he is largely unaware. Indeed, Tolstoy presents him as the least free of individuals. This seems to connect with Quine's theory of established (i.e. locked-in) truths as those closest to the centre of an interlocking web of current opinions. Conversely, the 'truths' most subject to change are at the outer edge, where current experience is most directly or immediately encountered. In Tolstoy's great novel, the accomplishing heroes tend to be humble, almost comic figures who have no sure guides in their attempts to come up with solutions to perplexing, because novel, problems. In a word, they must innovate, thus accomplishing accomplishment.

Robmacdee01:25, 30 March 2012

Rob, your reference to a minor Soviet diplomat is of interest. I have not read Manchester's book; the second edition appears to be dated 1988. Five Days in London: May 1940, by historian John Lukacs (236 pages), was published in 1999 by Yale University Press. Lukacs calls himself an uncategorized historian, with one advantage over many British historians: his familiarity with documents and other materials relating to Hitler, in this case especially in 1940. Any further information you can think of to relate Manchester's discussion of Churchill's decision to fight against Hitler's Nazi forces will be appreciated.

JamesMilligan04:21, 3 April 2012
 

I agree that the difference seems to be skill and effort, without which I do not believe you can call something an accomplishment, at least in some cases. But how does this difference turn into an argument about language? And the element of luck is what makes something that seems like an accomplishment not an accomplishment. Or am I wrong about what makes something an accomplishment?

NicoleJinn07:43, 2 April 2012

It is necessarily a discussion about language, since a common understanding of the term itself, one we can agree on for the sake of discussion and clarity, is required. Luck, skill and effort are important ingredients of accomplishment, but another term needs to be added to complete that list: attribution. Accomplishment implies a claim; it is someone's accomplishment. This someone can be an individual or a group, but ownership is implied, because accomplishment is a particularizing and polar term and concept, further implying competition. Success is opposed to failure. This is also true of negative accomplishments, as in crimes, where blame is attributed to someone. Blame is placed on a designated accomplisher, just as credit is given in cases of positive attainment. These concepts are firmly established, even foundational, in our language and culture, to the degree that they function as unquestioned norms. Inflated claims to expertise and celebrity on one side, or to scapegoating on the other, raise further questions around how we understand and then define personal identity, and whether reputation can really be regarded as a discrete, private property. We live in the culture of the signed work. Who accomplished the great cathedrals of the Middle Ages? Who accomplished the pyramids? Trout would undoubtedly credit the chrono-synclastic Infundibulum :-)

Robmacdee20:20, 2 April 2012

To me, in Dr. Morton’s paper titled Accomplishing Accomplishment, page two, last paragraph, on success, the definition “I have in mind getting what you want because of your efforts” has a clarity and a concrete connotation. Brute effort. The online dictionary defines accomplish as "to succeed in doing," and Merriam-Webster as "to bring about by effort."

JamesMilligan21:49, 2 April 2012
 
 
 

Clearly, when we say someone has accomplished something we mean that they intended to do such and such an act and that it was due to some action they took, and not merely due to quiescence or chance. Take, for example, the case of the man trying to murder someone who doesn't get to do it the way he intended; the victim nevertheless dies from catching the flu. So although the man had the intention and did the action, the victim did not die due to that action but rather due to another cause, one the murderer did not even intend. So the question is: did he accomplish the death of his victim? I think it is difficult to say that he didn't, because although he did not achieve his desired result by the method he took, the result was achieved. The way to answer this question about accomplishments, I think, comes back to intention or action: which of the two matters? I would think that according to the criminal law it is intent that matters. If the man intended to murder and did, but not due to his action, would he be guilty of murder or not? And again that is hard to answer, because if he intended to kill the man but the man died from some cause totally irrelevant to his action, like a heart attack the next day, maybe he would not be found guilty; but if the man died from a heart attack caused by fright, I think he should be guilty. However, I can see that although we can accomplish a desired result, to say that you have accomplished an accomplishment seems to translate to you DOING something which has directly caused the result. So to AA you must have intent and also an action which directly leads to your result.

ShivaAbhari09:35, 3 April 2012
 

When we usually say accomplishment, I think what we refer to is AA. AA seems to mean that you intended to do something a certain way and the act is done with controlled orchestration. To just A seems to concern only the consequence you attempt to bring about, while AA focuses more on intention.

KevinByrne04:59, 14 April 2012
 

forum 9: week of 12 March: Fisher and the design of experiments

I encourage all participants to use this space for communicating ANY difficulties that arise during your reading of Fisher BEFORE next Tuesday (March 12), and I will be sure to incorporate your requests into my presentation!

NicoleJinn23:48, 8 March 2012

Endorsing what Nicole said: let's get really clear about the idea here, as epistemology as well as take-it-or-leave-it scientific method. So let's have a variety of questions, problems, puzzles. One issue that I think is central is this:
When we do an experiment the results are interesting if they are surprising. Take 'surprising' as 'improbable'. Improbable given what, given which assumptions about probabilities? (You're testing a coin for bias so you do a long series of tosses, and they're mostly heads. This is not surprising if we assume that it is biased, but is if we assume that it is fair. But we don't want to assume either: that's just what we want to find out.) Different attitudes to this separate different philosophies of testing.
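
To make the 'improbable given what' point concrete, here is a minimal sketch in Python; the 100 tosses, the 70-head count and the 0.8 bias figure are made-up illustrative numbers, not anything from the reading. The same run of tosses comes out as wildly improbable under the fairness assumption and as unremarkable under the bias assumption.

 from math import comb
 def prob_at_least(k, n, p):
     # Probability of at least k heads in n tosses if each toss lands heads
     # with probability p (a simple binomial model).
     return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
 n, k = 100, 70  # illustrative data: 70 heads in 100 tosses
 # The same data, evaluated under two different assumptions:
 print(prob_at_least(k, n, 0.5))  # well below 0.001: surprising if the coin is fair
 print(prob_at_least(k, n, 0.8))  # close to 1: unsurprising if the coin strongly favours heads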

AdamMorton18:22, 11 March 2012

In response to Dr. Morton's question, from the Fisher reading, as well as some rudimentary knowledge of statistics (most of which is also based on Fisher's work, obviously), it follows that results of experiments are interesting if they are statistically significant, i.e. it is improbable that these results have occurred purely by chance. Given this, we are functioning under the assumptions that we, as experimenters, have considered every possible combination of the results before the experiment has been conducted, and that it would be highly unlikely that the data that would falsify the null hypothesis occurred by accident. This would also depend on the sample size used in the experiment, and Fisher stresses the importance of this by stating, "The odds could be made much higher by enlarging the experiment, while if the experiment were much smaller, even the greatest possible success would give odds so low that the result might, with considerable probability, be ascribed to chance."

Nicole, I found this reading to be a lot more accessible than last week's (and I wouldn't be surprised if others felt the same way), and some basic knowledge of statistics has primed my understanding of Fisher this time, so I haven't encountered any difficulties.
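
To see the arithmetic behind that quoted remark about the size of the experiment, here is a minimal sketch in Python. It assumes the tea-tasting design Fisher discusses in that chapter (equal numbers of cups of each kind, with the subject told how many there are of each); the particular cup counts below are just illustrative.

 from math import comb
 def p_all_correct(cups_per_kind):
     # Chance of sorting every cup correctly by pure guessing (the null hypothesis)
     # when there are cups_per_kind cups of each of the two kinds.
     return 1 / comb(2 * cups_per_kind, cups_per_kind)
 for k in (2, 3, 4, 6):
     print(f"{2 * k} cups: a perfect score by luck has probability 1/{comb(2 * k, k)} = {p_all_correct(k):.4f}")
 # With 8 cups (4 of each kind) a perfect score has probability 1/70, about 1.4%, already
 # below the 5% level; with 12 cups it drops to 1/924. With only 4 cups even a perfect
 # score (1/6) could quite plausibly be luck, which is the point of "if the experiment
 # were much smaller ... the result might ... be ascribed to chance".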

Olsy05:08, 12 March 2012
 

What I got from reading about experimental design in this reading, and also from some previous knowledge I have from taking research methods, is that there is always a possibility that your results could be due to chance or to other factors not accounted for. You can demand a smaller p-value, or make your experiment larger, or even increase the reliability by test-retest methods, but though the possibility of the results being due to chance will get smaller and smaller, you can never know for certain that your results indicate an actual effect. Another factor, besides the possibility of the results being due to chance, is that no matter how hard you try to control the noise and other third variables that could be contributing to your results, you still could have failed to take something into account without being aware of it, and your results could have been the effect of that. That is why in science we take all these precautions, but you can never KNOW for certain that your hypothesis is true; that is why you always say "based on the data we can conclude so and so" and can never say that it IS so and so for certain. I think the experimental method yet again reveals the interesting aspect of knowledge: we can never fully, 100 percent, know anything. We think we know, but we can come to find out later that we were that one percent chance, or that there was another variable we didn't account for, or some other factor that distorted our knowledge.

ShivaAbhari07:14, 16 March 2012
 

I have a lot of respect for experimental design and the amount of knowledge we can gain from it. My biggest uncertainty (I realize this would be a lot more useful if I had posted it prior to Nicole's presentation) is: what made researchers settle on the standard 5% level of significance that is used? And although this is the most common significance level, what circumstances lead researchers to sometimes use a 1% significance level?

Andreaobrien22:23, 16 March 2012

Thanks for your question, Andrea. Both the 5% and 1% significance levels are common. Most of those who think 5% is too high tend to go with 1% for the significance level, because a lower significance level is (supposedly) more desirable. Whether 5% is too high a significance level is incredibly subjective, and more often than not relies on factors specific to the discipline under investigation that I am unable to speak to here. In any case, this is the view on significance levels among researchers that is most articulated. The 5% significance level has become known as a standard because of how frequently it has been used, and because some journals have made it a rule for publications. That is, if results don't reach the 5% significance level, then those studies are generally not accepted in those journals. I am quite skeptical myself about the merits of this idea of the 5% significance level as the standard, and support a view on significance levels that does not agree with the most articulated view I mentioned earlier. Thus, I am not quite the right person to ask what circumstances lead researchers to sometimes use a 1% significance level. Needless to say, the debate on how to correctly interpret significance levels is nowhere near reconciliation, though one of the goals of my term paper for PHIL 440 is to shed light on this topic. Also, I found one article on the internet that tries to address the question of why 5% is a common significance level: http://www.jerrydallal.com/LHSP/p05.htm. I hope this helps. Let me know if you have any more questions on this topic.

NicoleJinn00:49, 17 March 2012
 
 

In your presentation, would you discuss the first sentence on page 12 of Sir Ronald A. Fisher’s book, under the heading "6. Interpretation and its Reasoned Basis": “In considering the appropriateness of any proposed experimental design, it is always needful to forecast all possible results of the experiment, and to have decided without ambiguity what interpretation shall be placed upon each one of them.” How does this statement accommodate experimentation to learn experimental development results, or new theory? Also of interest is your discussion of the concept of the Null Hypothesis, and its purpose.

Would you also include in your presentation the Boeing Advanced Quality System Tools manual:

http://www.boeingsuppliers.com/supplier/d1-9000-1.pdf

Your discussion of Section 1.17, pages 208 to 214 [pages 211 to 218 in the pdf copy], on Statistically Designed Experiments, and of any other Boeing content you would like to relate to Sir Ronald Fisher’s The Design of Experiments, will be appreciated.

JamesMilligan05:53, 12 March 2012
 

Thank you to the ones who have contributed to this forum so far. As I prepare for the presentation tomorrow, I should note that I will not be accepting comments/questions after 9:30 pm tonight. If anyone has any burning questions between now and tomorrow morning, it's best to ask them now (or before 9:30 pm tonight). Remember, the more comments I receive, the more likely I am to satisfy the needs of the audience. Hence, the presentation will go much more smoothly if everyone's considerations are taken into account.

NicoleJinn01:48, 13 March 2012

Thank you to everyone for all the questions yesterday - it is great to see that some of the participants do have interest in this subject of experiments and the design of them! Also, any feedback pertaining to my presentation would be greatly appreciated.

NicoleJinn03:23, 15 March 2012

Although I was familiar with a lot of the material you presented from statistics courses, I thought you did a good job bridging the gap between epistemology and philosophy of science. The handout you prepared was also very helpful for following along. Well done.

AlexanderBres04:27, 20 March 2012
 
 

Now that my presentation has been finished, I will accept any additional questions anyone may have pertaining to Fisher's work on significance tests and design of experiments! I will be sure to answer your questions to the best of my ability.

NicoleJinn03:21, 15 March 2012

Nicole,

Would you comment on Fisher's statements on page 7, under the heading "4. The Logic of the Laboratory": "Inductive inference is the only process known to us by which essentially new knowledge comes into the world." And on page 8: "Experimental observations are only experience carefully planned in advance, and designed to form a secure basis of new knowledge; that is, they are systematically related to the body of knowledge already acquired, and the results are deliberately observed, and put on record accurately."

JamesMilligan06:14, 15 March 2012

In reply to the concern over the need to know in advance all possibilities in order to learn something from an experiment: I think the problem might lie in the fact that in the paper there is no distinction between 'learning' and 'contributing to scientific knowledge'. We may well learn that under certain experimental conditions a possibility that we hadn't foreseen does in fact obtain, and use this result as a basis for further investigation. But for the purposes of gleaning some legitimate scientific knowledge, those results are irrelevant because they don't substantiate either of the hypotheses in the experiment.

MclarenThomas16:29, 15 March 2012

Thomas, in the 2011 NOVA film series The Fabric of the Cosmos, physicist Dr. Leonard Susskind argues from the perspective that there are 10^500 different string theories. He claims this is exactly what cosmologists are looking for. This fits with the idea of a multiverse: a huge number of universes, each different. In his 2006 book titled The Cosmic Landscape, Dr. Susskind, on page 381, distinguishes between the terms he uses in the book, landscape and megaverse. In the film Dr. Susskind used multiverse in place of megaverse. On the megaverse [multiverse] he wrote, “The megaverse [multiverse], by contrast, is quite real. The pocket universes that fill it are actual existing places, not hypothetical possibilities.”

I think any testing devised in this area challenges Dr. Fisher’s requirement that we forecast all possible results.

JamesMilligan06:59, 22 March 2012

The specific film in the 2011 NOVA series that refers to Dr. Leonard Susskind is titled Universe or Multiverse? (DVD, Koerner Library, QB 981 F135 2011).

JamesMilligan20:27, 23 March 2012
 
 
 

One individual asked me after class about "the black swan problem", and whether the Fisherian way of testing hypotheses would relate to that. Before I respond to this question, I should clarify that "the black swan problem" is about falsification of hypotheses (http://en.wikipedia.org/wiki/Falsifiability#Inductive_categorical_inference) as a 'solution' to the problem of induction - at least that is how I take it. Now that we know somewhat what the black swan problem refers to, my short answer to the question is that Fisher's significance tests do NOT provide a means to falsify hypotheses. Yes, Fisher says that there's a chance at disproving the null hypothesis (and that the null hypothesis could never be proved), but this does NOT (necessarily) mean that the primary objective of significance tests is to falsify the null hypothesis! My longer answer follows, if anyone cares to read it. Deborah Mayo would agree with me that significance tests should NOT be used to falsify hypotheses in the way Popper describes falsification. In fact, I quote verbatim an excerpt from Mayo's 1996 book, Error and the Growth of Experimental Knowledge (page 2):

For Popper, learning is a matter of deductive falsification. In a nutshell, hypothesis H is deductively falsified if H entails experimental outcome O, while in fact the outcome is ~O. What is learned is that H is false. ... We cannot know, however, which of several auxiliary hypotheses is to blame, which needs altering. Often H entails, not a specific observation, but a claim about the probability of an outcome. With such a statistical hypothesis H, the nonoccurrence of an outcome does not contradict H, even if there are no problems with the auxiliaries or the observation.

As such, for a Popperian falsification to get off the ground, additional information is needed to determine (1) what counts as observational, (2) whether auxiliary hypotheses are acceptable and alternatives are ruled out, and (3) when to reject statistical hypotheses. Only with (1) and (2) does an anomalous observation O falsify hypotheses H, and only with (3) can statistical hypotheses be falsifiable. Because each determination is fallible, Popper and, later, Imre Lakatos regard their acceptance as decisions, driven more by conventions than by experimental evidence.

Mayo later states in the same book, "A genuine account of learning from error shows where and how to justify Popper's 'risky decisions.' The result, let me be clear, is not a filling-in of the Popperian (or the Lakatosian) framework, but a wholly different picture of learning from error, and with it a different program for explaining the growth of scientific knowledge" (page 4, emphasis mine). In other words, Popperian falsification is NOT the right way to think about hypothesis tests! Hypothesis tests, whether Fisherian or from the Neyman-Pearson methodology, are NOT about falsifying statistical claims. Hence, the Fisherian way of testing hypotheses does NOT apply to the black swan problem. I quoted Deborah Mayo's view on Popper's falsification to show that my view against Popper's falsification came from her work. My term paper for PHIL 440 will address how to correctly interpret Fisher's significance tests. I hope this helps. Are there any questions about my answer to this person's question?

NicoleJinn04:36, 16 March 2012

[Image: picture.jpg]

Nicole, do the chart above and the associated text that follows below, copied from the experiment described on page 248 of the Boeing Advanced Quality System Tools document, satisfy your concept of significance test methods, and the significance test methods of Deborah Mayo and Sir Ronald A. Fisher?

Robust design: testing process parameters Parts in a heat-treat process were experiencing unpredictable growth, causing some parts to grow outside of the specification limits and be rejected as scrap. It was surmised by the engineering team that irregular growth was due to the orientation of the part in the oven and the part’s location in the oven. Since it was desirable to heat treat a maximum number of parts in each oven load, it was important to be able to determine a set of heat-treat processing conditions that would result in minimum growth for heat-treated parts in both a horizontal and vertical orientation, and at both the top and bottom locations in the oven.

Four process factors were identified: hold temperature, dwell time, gas flow rate, and temperature at removal. The team defined two settings for each of the process factors. The experiment used eight runs of the oven, as shown in figure 2.7 (a fractional factorial design, that is, a particular selection of half of the 16 possibilities defined by all combinations of the process factors at two settings). For each oven run, parts were placed at both the top and the bottom of the oven and in both orientations.

The experimental results indicated an unsuspected effect due to oven location, with parts in the bottom of the oven experiencing less growth than those in the top of the oven. The analysis indicated that a particular combination of hold temperature and dwell time would result in part growth that is insensitive (or robust) to part orientation and part location. Furthermore, the experiment indicated that temperature at removal did not affect part growth, leading to the conclusion that parts could be removed from the oven at a higher temperature; thus resulting in savings in run time.
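
To illustrate the "particular selection of half of the 16 possibilities" mentioned in the excerpt, here is a minimal sketch in Python of one conventional way to build such a half-fraction, using the defining relation D = ABC. The assignment of the four factors to the letters A-D, and the choice of defining relation, are illustrative assumptions on my part rather than details taken from the Boeing document.

 from itertools import product
 # Coded settings: -1 (low) and +1 (high). Assumed assignment (for illustration only):
 # A = hold temperature, B = dwell time, C = gas flow rate, D = temperature at removal.
 # A full factorial over A, B, C gives 8 runs; setting D = A*B*C then selects
 # 8 of the 16 possible combinations of all four factors (a half-fraction).
 runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
 print("run   A   B   C   D")
 for i, (a, b, c, d) in enumerate(runs, start=1):
     print(f"{i:>3} {a:>3} {b:>3} {c:>3} {d:>3}")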

JamesMilligan02:13, 17 March 2012

Unless I have access to the analysis of variance (ANOVA) table, I cannot comment on the last paragraph (pertaining to what the results indicated, or the conclusion that was drawn from the experiment). Also, I should mention that ANOVA (e.g., see http://www.stat.columbia.edu/~gelman/research/unpublished/econanova.pdf) is a separate technique in itself, distinct from the type of significance tests that Fisher introduced in Chapter 2 of his book (i.e., the reading for this past week). To answer your question, page 248 of the Boeing Advanced Quality System Tools document does not satisfy my concept of significance test methods, insofar as it does not relate at all to the significance test introduced in my presentation earlier this week. I hope this helps. If you have any more questions pertaining to the example you gave me from the Boeing Advanced Quality System Tools document, then I suggest we do not communicate on this forum but in other ways that will not disturb the focus of the discussions in this course.

NicoleJinn20:41, 17 March 2012
 
 
 

This comment might be a little late but I just wanted to rephrase my question because I am a notorious mumbler. The significance level corresponds to the probability that your results in an experiment were obtained by chance (the coin landed heads 5 times in a row). The P-value is the probability that your null hypothesis (this coin is unfair and obtains heads more often than tails) is true. Your results are significant if the P-value is less than the significance level. That would then mean that if there was a higher probability that your 5 coin flips were by chance than the probability that the coin was unfair, your results would be significant in disproving the null hypothesis (that it was a fair coin). ...Well now that I've written it out it makes sense to me. I'll post this on the forum anyway in case anyone else had trouble. I was confused because I didn't understand that the goal was to disprove the null hypothesis and show that it was all due to chance instead.
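
For reference, the textbook convention takes the null hypothesis to be the fair coin, and the p-value to be the probability of a result at least this extreme if that null is true; the result is called significant when the p-value falls below the significance level chosen in advance. Here is a minimal sketch in Python for the five-heads-in-a-row example (the numbers are just that example, nothing more):

 from math import comb
 def p_value_heads(heads, tosses):
     # One-sided p-value: probability of at least this many heads
     # in this many tosses if the coin is fair (the null hypothesis).
     return sum(comb(tosses, k) for k in range(heads, tosses + 1)) / 2**tosses
 alpha = 0.05             # significance level chosen in advance
 p = p_value_heads(5, 5)  # five heads in five tosses: (1/2)**5
 print(p)                 # 0.03125
 print("significant" if p < alpha else "not significant")
 # 0.03125 < 0.05, so at the 5% level the run counts as evidence against the
 # fair-coin hypothesis; at a 1% level it would not.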

ThomasMasin18:46, 15 March 2012
 

After the discussion last class on the difficulty of attaining knowledge of causation, I've been wondering why we place such emphasis on causation anyway. Since the dawn of philosophy in Ancient Greece, philosophers have been searching for the arche, the cause, of things. But causation is incredibly difficult - if not outright impossible - to know for certain. I could observe two events and find their correlation. This is a pretty plain-vanilla observation of the physical world.

But causation is a special form of correlation. And getting there takes a quantum leap in effort, whereas the marginal utility gained is relatively much smaller. For our pragmatic interests, the two aren't that different. Suppose we know there's a correlation between greenhouse gas emissions and global warming such that if we decrease our greenhouse gas emissions, the temperature will stabilize or go down. Great. Now we have a powerful tool for action and policy. It's not really necessary (for our practical concerns) to know whether the decrease in temperature is really brought about by the decrease in greenhouse gas emissions, or by some unknown and unconsidered third factor.

If it's not our practical concerns that are driving us, then could our search for causal chains and causes be theoretical in nature? That presents a problem as well. You can never get inside causes, examine them, and say for sure. The system is simply too complex to allow you to draw that conclusion most of the time. Besides, there's also the theoretical worry that causes can never be known for sure. Causes operate in the physical world. There is no cause in theories and formulae. I can't say "1" is caused by "1+1", or that the Earth is caused by the Sun just by looking at general relativity. I can't remember where I read it, but someone pointed out that no matter how long you observe a watch from the outside, you can never tell for sure how it works. You can only guess, make predictions, see if those predictions materialize, and if not, refine your theory and repeat. The universe is rather like a large watch, of which you will never see the inside.

So if it's not practical or theoretical, why are we so obsessed with causes?

Edward03:47, 19 March 2012

Edward, I agree with you to some degree on questioning why we (scientists, philosophers of science) are obsessed with causes. While the idea of causation has been around for a long time, the idea of correlation representing causation has not been around for that long! Karl Pearson insisted that correlation is fundamental to science, and that correlation is to replace causation. Hence, the idea about causation being a special form of correlation was driven primarily by Karl Pearson in the late 19th/early 20th century. From my (limited) exposure to studying the history of statistics, it seems that Pearson's argument about correlation being fundamental to science has gotten out of hand, with numerous scientists not using the concept properly and almost always misinterpreting the inference from observed correlation. In other words, Pearson presented his argument for correlation replacing causation, and many scientists have since been misguided in their use of Pearson's correlation coefficient, treating statements of causation as the norm (or de facto standard) when inferring correlations. I say that they (scientists) have been (and still are) misguided because that's what Pearson taught them, yet no one questioned the grounds for inferring those causal statements from correlations until much later (e.g., since Nancy Cartwright came along in the 1980s)! Even though I am unable to (fully) answer your question, I hope that I have been able to shed light on the issue in an effective manner. I am able to give insight into this question, even though it is not really related to the reading on Fisher, because of my research interests in (probabilistic) causal inference. Yes, causal inference is a whole separate topic in itself, distinct from design and analysis of experiments! I hope that everyone can see by now that there are several sub-fields within the domain of statistics as a discipline. There's so much work to be done in accounting for the philosophical issues surrounding statistical techniques, it's not even funny!
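
As a small illustration of the gap between the two concepts, here is a minimal sketch in Python with simulated data (nothing from Pearson or Cartwright): two variables come out strongly correlated only because both depend on a hidden third factor, so the correlation coefficient on its own says nothing about one causing the other.

 import random
 random.seed(0)
 # A hidden common cause Z drives both X and Y; neither causes the other.
 z = [random.gauss(0, 1) for _ in range(10000)]
 x = [zi + random.gauss(0, 0.3) for zi in z]
 y = [zi + random.gauss(0, 0.3) for zi in z]
 def pearson(a, b):
     # Pearson correlation coefficient of two equal-length sequences.
     n = len(a)
     ma, mb = sum(a) / n, sum(b) / n
     cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
     va = sum((ai - ma) ** 2 for ai in a)
     vb = sum((bi - mb) ** 2 for bi in b)
     return cov / (va * vb) ** 0.5
 print(round(pearson(x, y), 3))  # close to 0.9: a strong correlation, even though
                                 # by construction X has no causal effect on Y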

NicoleJinn00:04, 21 March 2012
 

Not sure if this reply is counted too, so here I am, replyin'

KevinByrne06:56, 20 March 2012

It seems data are more verifiable, or convincing, if the probability falls below a smaller significance level within experiments. As discussed in class, it seems the likelihood of results being accepted or examined by other scholars of the given area would be higher if they were not 'surprising', or did not appear to have resulted by chance (as we accept probability being small and incremental vs. too 'perfect' or 'surprising'). Albeit, perhaps at the same time this represents an inexactitude in the progress of the research and results; perhaps it only points in a direction, and reveals the quality of the research (substantial sample and randomization). Significant gaps exist within scientific experiments, although the peer-reviewed structure seems to determine both the design of the experiment and its acceptance or validity. It seems the strongest structures are ones with small probabilities and small significance levels, with modest results, that aim to clarify an area of research.

DorothyNeufeld02:26, 26 March 2012
 
 

forum 10: week of March 19 - second order knowledge

A detailed question and a broader one.
A) You can see why someone might think that if they know that 63+12=75, they will know that they know it. For they can check all the steps of the calculation and conclude that it was correct. (I don't think this is right, but you can see how it is attractive.) So could we make a parallel argument that if someone knows something much more ordinary, like that there is a computer screen in front of them, then they will know that they know it?
B) The more interesting questions about second order knowledge do not concern whether you know that you know that P, but whether you can know which of your beliefs are knowledge, or know how much you know about a topic. ("I have a lot of opinions about religion, but how much of this is really knowledge is unclear to me." People often say things like this.) How do these topics relate to the much drier ones discussed in the paper?

AdamMorton19:03, 17 March 2012

What is an example of knowing something without knowing that one knows it? Not noticing that one knows? (What do "fail to register" and "fail to believe" really mean?) It seems to me that one can only know something first hand, exclusive of second order knowledge, immanently, or instantaneously. If any perception is held even for a moment in memory, it is by definition reflected upon, and thus known in the second sense of being a known known (shades of Rumsfeld :-) This would mean that there are really two meanings contained in the single term "know": one meaning is experiential and the other is held belief.

It is, perhaps, psychologizing, but nevertheless tempting to presume that since S knows something at some level, then she must know that she knows unconsciously ("lapse of attention", p. 590). This might perhaps suggest a belief that is unacceptable, and, in being denied, thus occludes access to second order knowledge. (How does ISD, or Internal Self Deception, sound? Or, borrowing from Heidegger's tortured prose, undisclosing?) This would be to argue that KK's failure would rest on deliberate, though perhaps reflexive and involuntary, unknowing, especially in consideration of the first sentence, "Knowledge involves belief." In the case of children and animals, we might also add that they are not at a stage or level of sufficient guile to tactically avoid and thus successfully undisclose unpleasant or undesirable realities (as in: I just can't face the fact that the zebras are really painted mules).

Regarding section 5: it might seem that the collectivised helping-each-other-out communitarian example of Bob and Jack would solve things in the case given, and that there would be a positive outcome. Unfortunately, the reverse can also be true: the denial of inconvenient belief can also be collectivized, leading to collectively reinforced, rational, yet monstrously delusional prosecutions of policy, internally held by sufficient numbers of individuals and based on 'proven' facts, with disastrous results.

I have a question about the term 'warrant'. Is it a foundational(ist) term? "Warrant for belief" seems to me to involve some sort of idea of permissibility, a kind of etiquette for belief. Does this suggest then that there are some beliefs which cannot be allowed without a warrant, and what constitutes unwarranted? It means a permit, or a guarantee. By whom and by what authority? It has a whiff of dogma to it.

Robmacdee22:24, 17 March 2012

Robert asked about the term "warrant". Partly it's just a fancy word for "justified". But there are three other ideas connected with it. (They're different, so I avoid the word.)
1) externalists started using the word "justified" with an externalist flavour. So if you get your belief in a way that is usually reliable, even if it is failing to give you a true belief in this case, they call it justified. Internalists said "To hell with this; we'll use our own word, and we won't let them take it away from us."
2) it's part of a fake solution to the existence of Gettier cases. Knowledge is not the same as justified true belief, so we declare that it is the same as true belief with warrant. That is, all warranted belief needs to become knowledge is truth. But then it is utterly unclear how we are to define warrant.
3) There might be something in someone's situation that makes it reasonable to hold a belief, even though it is not part of what would traditionally justify it, for example the fact that the person who told it to you is trustworthy. Then we call this part of your warrant for the belief. (This is inconsistent with 1) - you can't have both motives for using the term. See how it's confusing.)

AdamMorton02:04, 18 March 2012

Re: SECOND-ORDER KNOWLEDGE Christoph Kelp and Nikolaj J.L.L. Pedersen

Excerpts: Knowledge involves belief. Belief is a propositional attitude, i.e. an attitude that a subject holds towards a proposition

principle of knowledge transmission: (KTP) K_S K_R P → K_S P

KTP has some prominent advocates—Hintikka (1962), to mention just one. However, even if we suppose that advocates of KTP are right in maintaining that the principle holds, it is important to avoid confusion about what the principle says. In particular, although it is natural to read KTP as saying that subject S knows that P by knowing that subject R does so, the specific warrant involved in R’s knowledge is not automatically transmitted to, or inherited by, S. This can be so even if S is fully aware of what the source of R’s knowledge—and warrant—is.

I like the reference to Hintikka.

Between Staley and Cobb on the one hand, and Kelp and Pedersen on the other, it appears possible to bring internalist knowledge of beliefs that include theology, space, time, quantum mechanics, and multiverses together, under the externalist scrutiny of justification in scientific enquiry, with the help of Hintikka's logic.

JamesMilligan07:35, 20 March 2012
 

I just wanted to talk a bit about what we discussed today, the KK principle - that you can know something because you know that someone else knows it. A part of me wants to say that this can't be a case of knowledge, because in order to KNOW something you must have reasons or evidence for it. But then again, how can we personally experience everything and know everything only through personal experience? Obviously we gain much of our knowledge through testimony. So if S knows that R knows that P, it is not necessarily a case in which S doesn't have evidence or proof. I think S can in fact have evidence without personal experience. The fact that S even says I KNOW that R KNOWS means that obviously R is a legitimate source and expert, or has enough evidence, for S to be certain or to KNOW that R does in fact know it. So although S himself doesn't have the evidence and/or did not collect it personally, if he is in fact saying that he KNOWS R KNOWS it, it is because he KNOWS that R HAS THE EVIDENCE. If R was just saying something which S was not sure about, or S wasn't sure that R really knew it or had good enough reason to know it, then S would never say I KNOW. I feel like just the wording makes this statement correct, because we are not saying that S believes that R has some reasons to know P; rather we are saying that S KNOWS (he is certain because R is a reliable source etc.) that R KNOWS (he has facts and evidence and experience) that P. Therefore I do think that S can in fact know that P also. His evidence may not be PERSONAL experience, but his evidence is the fact that he can in fact trust R as a legitimate and reliable source.

ShivaAbhari03:19, 23 March 2012

I agree that it is appealing to be able to use testimony to gain knowledge. However, I think this forces us to ask other questions, such as: how do we know that one's source of information is reliable? Can we assume that if they have been a reliable source of information in the past, we can trust them to be a reliable source in this case as well? Or is it a matter of checking to see that they have evidence to support their testimony? I guess I'm a little sceptical of second-order knowledge; when I think about how confidently one can know something based on testimony, my instinct is that one needs to see the evidence that the other person has, which leads us back to first-order knowledge.

Andreaobrien05:44, 23 March 2012
 
 
 

I think this question fundamentally rests on one's definition of knowledge. Most conceptions of knowledge involve truth and belief + some other factor. But what is it to believe in something? Must we be conscious of every belief, or are there areas (e.g. intuitions) which we do in fact make use of and believe in, but do not consciously reflect on? Take the example of the chicken-sexer that we discussed in 220. The chicken-sexer knows the gender of chicks. His ability to pick out chicks based on their gender is overwhelming and is not something that can randomly occur. Let's call the chicken-sexer Bob. Bob definitely knows the sex of the chicks. We know that Bob knows the sex of the chicks because of objective evidence, such as the fact that his ability is more than mere chance. If we can say we know, then Bob should know of his ability as well. After all, he should know himself better than we do. But does Bob really know that he knows how to sex chickens? There seems to be something lacking altogether in granting that Bob knows that he knows. He has intuitions, ungathered thoughts. Can we say he really knows?

Edward03:45, 19 March 2012
 

I think the second question takes us down the thorny path of the ever-present issues with self-knowledge. First, how do we come by any knowledge about ourselves? Is it by the incorrigible, infallible methods of introspection advocated by Descartes, or by a more modern behaviorist account of observing our own actions using the same methods as we do for the actions of others? This accounts for the externalist/internalist debate mentioned numerous times in the paper. Since there are so many ways in which our intuitions about our own skills and knowledge (like the fact that everyone thinks they are smarter AND a better driver than the average person) turn out to be erroneous, I'm sure that there are many cases of misattribution of beliefs or of the amount of knowledge actually possessed by an individual. Now the tl;dr bit: The Visual Cognition lab at UBC has recently published a paper on the ideomotor responses used in answering trivia questions with a Ouija board. First, the participants were asked to answer a few dozen questions like "Is the capital of Brazil Rio de Janeiro?" online. They also had to indicate whether they knew the answer, or were "just guessing". Some time later, they came into the lab and had to answer similar questions, this time using the Ouija board and blindfolded. Furthermore, at the start of the experiment, the subjects were told that there would be another participant using the Ouija planchette; this participant was actually a confederate, who took their hands off the planchette after the participant was blindfolded. Overall, there was a significant increase in the percentage of correct answers given using the Ouija board, especially for the "just guessing" questions. Part of the hypothesis proposed by the authors is that the reduced responsibility (since the participants thought there was another person involved) made them guess the right answers more readily! This is a perfect real-life example of how, many times, we don't really know what we know. There is so much information that we acquire every day in many ways, and only some of it is available for conscious retrieval. This already places our knowledge of our own knowledge (pardon the pun) under a big question mark. I apologize for the wall of text. The study I referred to can also be found here: http://www.sciencedirect.com/science/article/pii/S1053810012000402

Olsy06:17, 19 March 2012

Did they not perhaps consider whether people looked up the answers to questions of which they were unsure?

AngeGordon04:29, 20 March 2012

Ange, I asked the same question during the presentation! The experiment design somewhat ensured that this would be unlikely: there were 80 questions in the first presentation, and the participants did not receive feedback on whether or not their answers were correct. Out of those 80, only a subset of 8 questions was randomly selected for the next phase, "subject only to the constraint that for each participant, there was one question in each of the eight category combinations: 2 question polarities (correct answer is "yes"/"no") x 2 answer confidence levels (known/guessed) x 2 answer correctness levels (right/wrong)". So it's basically very unlikely, but I wouldn't be surprised if that did happen at least for some of the participants (the very curious ones ;))

Olsy08:08, 20 March 2012
 
 

How to know which beliefs are knowledge, and how much on a topic one actually knows, has got to be one of the great mysteries. It seems very much intertwined with Illusory Superiority--average drivers think they're above average (as Olsy mentioned) because people have an inability to notice their own flaws. If those who actually know anything also know that they don't know everything, and therefore could be wrong, then they may keep quiet and let those of us who just like the sounds of our own voices make incorrect assumptions about our own depth of knowledge, which leads the non-confident expert to accept the idea that their -actual knowledge- is not knowledge at all. Best said, I think, that R cannot know anything based solely on the internal warrant held by S.

AngeGordon04:42, 20 March 2012
 

I found the point about human infants and animals to be interesting, because it seems that if those requirements for second order knowledge are pushed back and applied to people in the past, adults can't have it either. Then the question becomes: where is the forefront of knowledge that allows for second order knowledge, and does one even exist at all?

KevinByrne06:53, 20 March 2012

I think the forefront that you are thinking of would be the development of meta-awareness, which could be defined in different ways. According to Kelp and Pedersen, children lack the ability to grasp the concept of knowledge (i.e. knowing about knowing) at a young age. Presumably this skill develops as the brain matures during early childhood. I'm no developmental psychologist but I'd guess that this skill begins to develop around age 3 or 4. I would say that adults (and children who have matured to the point of developing metacognition) can possess second-order knowledge while very young children cannot. I think the KK-principle needs some tweaking to get around this objection.

AlexanderBres02:18, 24 March 2012
 

One thing I would like to talk about tomorrow, if possible, is the second-order knowledge that is derived from testimony. Kelp and Pedersen seem to assume that the person who obtains the knowledge (the testifyee) can justify that knowledge internalistically, which I disagree with. Since the justification in such a second-order case would have to be the claim that the testifier is reliable (and the evidence why), a way in which this could be strictly internalistic evades me.

ZacharyZdenek05:31, 22 March 2012

Like Andrea mentioned, second-order knowledge calls into question the source of the information itself. I also feel the reasoning behind any knowledge attribution can never fully be known, thus putting the KK principle under scrutiny. I feel it is the reasoning itself, tied with belief, which governs the accuracy (and inaccuracy) of a knowledge claim, as it can steer both highly irrational claims, such as thinking we're better drivers as Olsy mentioned, and rational assertions due to personal experience (although others' experience of this knowledge will never be equivalent). Interestingly, it is thought that our behaviour and our dispositions in our interactions or relationships with others are often better predicted by our close acquaintances than by ourselves, as they have a more accurate judgment of us than our own interpretation of our experience provides. Another interesting account of self-knowledge is the behaviour of a 'memory', which becomes altered and changes every time we think of the memory again, skewing the accuracy and interpretation at any given moment, almost distorting the 'perfection' of the experience at the given time and place. This calls into question the relationship between what it means to remember and the level of self-knowledge, or knowledge claims, at any given point in time. (Hopefully this was relevant.)

DorothyNeufeld02:55, 26 March 2012
 

I am NOT convinced that we could make a "parallel" argument that if someone knows something much more ordinary, like that there is a computer screen in front of them, then they will know that they know it. Knowledge considered to be "ordinary" seems to have very little overlap, if any, with knowledge gained through science or experiments (i.e., in the "scientific setting"). Hence, we must treat the two domains of knowledge separately because second-order knowledge seems much more plausible in the ordinary setting than in the scientific setting. In other words, just because one claims to know something (i.e., first-order knowledge) in the scientific setting does NOT mean that the same person knows (for sure) that they know that scientific fact. If a scientist really possesses second-order knowledge in the scientific setting then it's a rare or unusual case. To go from first to second-order knowledge in the scientific setting requires "justification" in the sense of Staley and Cobb, which definitely does not seem like an easy task, at least at first glance. Whereas the requirements for "justification" in the ordinary setting seem to be much looser and far less strict. This is the barrier I see in making the "parallel" argument mentioned in Dr. Morton's first question.

NicoleJinn02:51, 26 March 2012
 

All this talk of the importance of knowing that you know something seems to undermine the value of knowledge in itself. All this paper seems to do, for me, is shift the importance of "knowing something" to "knowing that you know something". This essentially makes knowing something meaningless unless you can claim that you know that you know it. I feel like there has to be a better way to describe the phenomenon discussed in this paper; I just can't seem to think of one.

ThomasMasin18:14, 29 March 2012
 

forum 8: week of 5 March: tests and evidence


First of all, I should say that I am even less sure than usual which parts of this reading may be difficult for you, and for which reasons. So be uninhibited in asking for clarification, on this forum and in class.
Staley and Cobb state their central claim as follows:
"... justification in science is externalist* in character insofar as the evidential relations that are of concern in addressing the problem of misleading evidence are objective ..., and internalist* in character insofar as addressing the problem of justification requires the capacity to access and provide reasons that support one’s inferences from the data."
You might disagree with this either because (a) you are not convinced that scientific tests and the process of checking them are objective and truth-conducive, or (b) you suspect that expecting scientists/us to understand and manage the ways conclusions are inferred from the data is asking too much.
Does either of these attract you?

AdamMorton23:36, 3 March 2012

I am interested in both (a) and (b), but from a particular perspective which involves a question of definition. Are we talking about truth-seeking as pragmatic, which to me would mean seeking victory in securing a desired outcome (as in the warfare examples posted by Jim and myself in the last forum)? That would be a competitive model, more subjective in nature; it would necessarily involve the practice of deceit, and would be a war of contending truths in which ultimately might is right. Or are we talking about the seeking of some sort of common ground, which would involve an unavoidable and contingent ethical component and the requirement to disclose (to use Al Gore's phrase) inconvenient truths, regardless of the implications for or against self-interest?

Robmacdee20:06, 5 March 2012
 

Could we spend two minutes on the ES account and the mis-specified model? I know that stuff is more the science end of this paper, but I think it would help me understand things a bit better.

WilliamMontgomery22:32, 5 March 2012

Glad to hear that someone other than me is interested in the science-y stuff! Maybe I can briefly go over the ES account and mis-specified models with you (and anyone else who is interested) after class?

NicoleJinn00:27, 6 March 2012

I'm also interested in the mis-specification testing, especially in the context of respecifying a statistical model (on p. 9). From the way the authors put it, it almost sounds like one should look for a statistical model that fits the data in a way that confirms the hypothesis, even if several other models have already failed to provide good results. I may be misunderstanding this, but, from personal experience, when the data fail to be informative it seems counterintuitive to "assess a candidate model considered for use in drawing a primary inference"; that just sounds like trying to make the data provide the results I wanted, instead of whatever I actually got.
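To make my worry concrete, here is a rough sketch of how I picture a mis-specification check followed by a respecification (the data, the models, and the residual check are all invented by me; this is not Staley and Cobb's own procedure):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 1, x.size)  # the true process is quadratic

def residuals(degree):
    coeffs = np.polyfit(x, y, degree)      # fit a polynomial model of the given degree
    return y - np.polyval(coeffs, x)

for degree in (1, 2):
    res = residuals(degree)
    # crude mis-specification check: is there curvature left over in the residuals?
    r, p = stats.pearsonr(x**2, res)
    print("degree", degree, "residual-curvature r =", round(r, 2), "p-value =", round(p, 4))

The linear model leaves systematic structure in its residuals, and the quadratic respecification removes it. If that is all respecification amounts to, then maybe it is about whether the model captures the data-generating process at all, before any primary inference is drawn, rather than about hunting for the results I wanted; but I would still like this spelled out in class.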

Olesya07:14, 6 March 2012
 
 

I am probably over-simplifying (in which case I assume I missed something important), but I really don't see why there needs to be an entire paper for the simple statement that "you cannot use evidence to support a claim if you don't believe it or have access to it, and/or everyone else who may have to use that claim doesn't have access to that evidence." If I cannot logically and consistently set out my evidence then I don't actually have an argument at all, right?

AngeGordon06:15, 6 March 2012

What you call the "simple statement" is not so simple, for the following reasons: (1) the scientific/experimental setting is evidently different from the traditional epistemological/philosophical setting; (2) defining accessibility in the various contexts is not quite as simple as you would (like to) think, due to the nontrivial differences between the scientific/experimental setting and the traditional philosophical setting. Putting those two reasons together: justification and accessibility are key concepts that have not received much attention in science, and appropriate definitions of those key concepts really need to be spelled out in detail in order for us to make any sense of the differences mentioned in my first reason (1).

You say, "If I cannot logically and consistently set out my evidence then I don't actually have an argument at all, right?" My response is: The big problem that still exists today is that it's not generally agreed upon what counts as "good" evidence vs. "bad" evidence! This lack of agreement could play a role in what the requirements are to have an argument! Hence, once again, it does not seem quite as simple (in my opinion) as you think it is.

NicoleJinn07:08, 7 March 2012
 

No, none of these attract me! I agree with their (Staley and Cobb's) conclusion that justification, in the scientific setting, requires both internalist* and externalist* elements. The only problem I have with their argument is in the introduction, when they claim that their main argument "would plausibly apply equally well to other objective theories such as likelihood accounts (Royall 1997; Lele 2004; Sober 2008), objective Bayesian theories (Jaynes 2003; Williamson 2008), or Peter Achinstein's explanatory-probabilistic hybrid (Achinstein 2001)." (emphasis mine)

The reason that their claim in the introduction is so troubling for me is that I actually don't count the likelihood account (by Richard Royall) as an objective theory at all, and at the same time I have many doubts about the Bayesian theories (both objective and subjective)! In my term paper for PHIL 520 (Probability, Confirmation and Representations of Uncertainty), I explained and gave reasons for rejecting Royall's likelihood account, and instead strongly advocated Mayo's error-statistical account, especially the concept of severity. Also, I noticed that Ange brought up in class discussion earlier today the case when someone does not understand statistics/probability very well. My response to that charge is: "The ES account is meant to apply not only in cases where one quantitatively evaluates these probabilities using a statistical model of the data-generating process, but also in experimental settings that take a more casual or intuitive approach to statistical analysis." (emphasis mine, section 3 in Staley and Cobb, shortly after they introduce the error-statistical (ES) account)

Lastly, the fact that many people who use statistical methods do not have a solid grasp of them demonstrates the need to more carefully study the philosophical aspects of statistical methods! This is precisely the research area I am most interested in, and that Deborah Mayo is heavily involved in (with Aris Spanos).

NicoleJinn01:52, 7 March 2012

In the PHIL 440A March 1, 2012 lecture, in terms of epistemology, an expression was uttered to the effect that most of the things one believes on the authority of wise old men in holy books have been discredited. Is there a way, in the method of Kent Staley and Aaron Cobb in their paper titled Internalist and Externalist Aspects of Justification in Scientific Inquiry, to couple the externalist aspects of justification in scientific inquiry with the internalist aspects of beliefs based on the authority of wise old men in holy books?

JamesMilligan05:47, 7 March 2012

I am not sure if I understand your question. It seems to me that the question you are asking relates to the topic of the relation between Science and Religion, which is another area I am truly interested in. (Yes, I have multiple research interests in several different areas!) The short attempt to answer your question is: justification and beliefs are separate concepts, and I think those concepts should be treated separately. Thus, when you talk about coupling externalist aspects with internalist aspects, these aspects should all be part of the same concept, whether it be justification or belief.

NicoleJinn07:17, 7 March 2012

(a) Scientific tests may be unable to test objectively, since it is in their nature to be conducted with non-objective aims. I feel that, yes, this is to a certain extent impossible to avoid. Behind every experiment is an original purpose, which indicates the direction the scientist predicts, and this purpose seems to conflict with the notion of a purely truth-conducive procedure. There is also the acknowledged gap in our ability to access the entirety of the data (the Higgs boson?) at certain points in time. Externalist frames on scientific results seem constrained by time, and only in subsequent tests will it become evident whether the experiment is truth-conducive or not, relative to our capacity to access reasons that support the data.

DorothyNeufeld06:56, 8 March 2012
 

My first link is via the notion of epistemic possibility, to link internalism based on authority, with the internalism based on scientific experimentation. Staley and Cobb, page 22, include: … “Hintikka, whose (1962) provides the origins for contemporary discussions, there takes expressions of the form ‘It is possible, for all that S knows, that P’ to have the same meaning as ‘It does not follow from what S knows that not-P.’12.” My second link is physicist Freeman Dyson’s personal theology as expressed in the following excerpts from a Polkinghorne book review: The universe shows evidence of the operations of mind on three levels. The first level is elementary physical processes, as we see them when we study atoms in the laboratory. The second level is our direct human experience of our own consciousness. The third level is the universe as a whole. Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind. I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind. We stand, in a manner of speaking, midway between the unpredictability of atoms and the unpredictability of God. Atoms are small pieces of our mental apparatus, and we are small pieces of God's mental apparatus. Our minds may receive inputs equally from atoms and from God. This view of our place in the cosmos may not be true, but it is compatible with the active nature of atoms as revealed in the experiments of modern physics. I don't say that this personal theology is supported or proved by scientific evidence. I only say that it is consistent with scientific evidence.

I am myself a Christian, a member of a community that preserves an ancient heritage of great literature and great music, provides help and counsel to young and old when they are in trouble, educates children in moral responsibility, and worships God in its own fashion. But I find Polkinghorne’s theology altogether too narrow for my taste. I have no use for a theology that claims to know the answers to deep questions but bases its arguments on the beliefs of a single tribe. I am a practicing Christian but not a believing Christian. To me, to worship God means to recognize that mind and intelligence are woven into the fabric of our universe in a way that altogether surpasses our comprehension. When I listen to Polkinghorne describing the afterlife, I think of God answering Job out of the whirlwind, “Who is this that darkeneth counsel by words without knowledge?… Where wast thou when I laid the foundations of the earth? Declare, if thou hast understanding…. Have the gates of death been opened unto thee? Or hast thou seen the doors of the shadow of death?” God’s answer to Job is all the theology I need. As a scientist, I live in a universe of overwhelming size and mystery. The mysteries of life and language, good and evil, chance and necessity, and of our own existence as conscious beings in an impersonal cosmos are even greater than the mysteries of physics and astronomy. Behind the mysteries that we can name, there are deeper mysteries that we have not even begun to explore.

My third link subjects Dyson’s personal theology of mind, at the quantum level, to the externalist aspect of justification in scientific inquiry of Staley and Cobb, page 10, defined as: “Externalism*: the assertion of an experimental conclusion (h) is justified if and only if that which justifies h is truth-conducive.” Quantum physics is claimed to be the most tested theory, and never to have failed a test.

JamesMilligan08:53, 8 March 2012
 

Parallel Submission on Staley and Cobb paper titled Internalist and Externalist Aspects of Justification in Scientific Inquiry.

The purpose of this submission is to confirm that any Internalist belief based on authority can be tested, on the basis of the notion of epistemic possibility, as the Internalist component of Staley and Cobb's Internalist and Externalist aspects of justification in scientific inquiry.

Assumption: The *method* of believing things because they come from wise old men and holy books has been discredited (and might be undiscredited); and, the method of believing things because they come from wise old men and holy books has only been discredited in that we now need other reasons if we are to believe them. [assumption source, Dr. Adam Morton]

Question 1

Can the internalist *method* of believing things because they come from wise old men and holy books be included in the Internalist method of Kent Staley and Aaron Cobb, in their paper titled Internalist and Externalist Aspects of Justification in Scientific Inquiry? The basis of this question is Staley and Cobb's use, page 22, of the notion of epistemic possibility: …“Hintikka, whose (1962) provides the origins for contemporary discussions, there takes expressions of the form ‘It is possible, for all that S knows, that P’ to have the same meaning as ‘It does not follow from what S knows that not-P.’12.” If the internalist *method* of believing things because they come from wise old men and holy books cannot be included in the form of Hintikka’s construct, why not?

Question 2

Can physicist Freeman Dyson's personal theology be included in the Internalist method of Kent Staley and Aaron Cobb's paper?

Freeman Dyson's personal theology is expressed in the following excerpts from:


[1] Dyson’s acceptance speech for the Templeton Foundation 2000 Prize for Progress in Religion. Retrieved from: http://www.edge.org/documents/archive/edge68.html

[2] Dyson’s The New York Review of Books, book review of physicist Sir John Polkinghorne’s book titled The God of Hope and the End of the World.

Retrieved from : http://www.nybooks.com/articles/archives/2002/mar/28/science-religion-no-ends-in-sight/?pagination=false

Sir John Polkinghorne was awarded the Templeton Foundation 2002 Prize for Progress in Religion.

Dyson excerpts:

[1] My personal theology is described in the Gifford lectures that I gave at Aberdeen in Scotland in 1985, published under the title, Infinite in All Directions. Here is a brief summary of my thinking. The universe shows evidence of the operations of mind on three levels. The first level is elementary physical processes, as we see them when we study atoms in the laboratory. The second level is our direct human experience of our own consciousness. The third level is the universe as a whole. Atoms in the laboratory are weird stuff, behaving like active agents rather than inert substances. They make unpredictable choices between alternative possibilities according to the laws of quantum mechanics. It appears that mind, as manifested by the capacity to make choices, is to some extent inherent in every atom. The universe as a whole is also weird, with laws of nature that make it hospitable to the growth of mind. I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be either a world-soul or a collection of world-souls. So I am thinking that atoms and humans and God may have minds that differ in degree but not in kind. We stand, in a manner of speaking, midway between the unpredictability of atoms and the unpredictability of God. Atoms are small pieces of our mental apparatus, and we are small pieces of God's mental apparatus. Our minds may receive inputs equally from atoms and from God. This view of our place in the cosmos may not be true, but it is compatible with the active nature of atoms as revealed in the experiments of modern physics. I don't say that this personal theology is supported or proved by scientific evidence. I only say that it is consistent with scientific evidence.

[2] I am myself a Christian, a member of a community that preserves an ancient heritage of great literature and great music, provides help and counsel to young and old when they are in trouble, educates children in moral responsibility, and worships God in its own fashion. But I find Polkinghorne’s theology altogether too narrow for my taste. I have no use for a theology that claims to know the answers to deep questions but bases its arguments on the beliefs of a single tribe. I am a practicing Christian but not a believing Christian. To me, to worship God means to recognize that mind and intelligence are woven into the fabric of our universe in a way that altogether surpasses our comprehension. When I listen to Polkinghorne describing the afterlife, I think of God answering Job out of the whirlwind, “Who is this that darkeneth counsel by words without knowledge?… Where wast thou when I laid the foundations of the earth? Declare, if thou hast understanding…. Have the gates of death been opened unto thee? Or hast thou seen the doors of the shadow of death?” God’s answer to Job is all the theology I need. As a scientist, I live in a universe of overwhelming size and mystery. The mysteries of life and language, good and evil, chance and necessity, and of our own existence as conscious beings in an impersonal cosmos are even greater than the mysteries of physics and astronomy. Behind the mysteries that we can name, there are deeper mysteries that we have not even begun to explore.

Question 3

Can Dyson's personal theology of the operations of mind, at the quantum level, be included in Staley and Cobb's method of Externalist aspects of justification in scientific inquiry? Staley and Cobb, page 10, define the Externalist aspect of justification in scientific inquiry as:

“Externalism*: the assertion of an experimental conclusion (h) is justified if and only if that which justifies h is truth-conducive.”

Can Freeman Dyson's concept of mind, at the quantum level, qualify as Externalist justification in scientific inquiry, as truth-conducive by Staley and Cobb's method, if confirmed by quantum mechanics research? As a theory, quantum mechanics is claimed to be the most tested theory, and never to have failed a test.


Six selected quotations, on the claim that quantum mechanics is the most tested theory, and has never failed a test, are as follows:


[1] “Yes, the Theory of Relativity (just like the Theory of Quantum Mechanics too) can be physically tested: you can demonstrate its truth, by means of the apparent impossibility of ever proving it untrue. This is to say that science has indeed put relativity to many, many tests: those trying to prove it incorrect (when appropriately applied). Irrefutably, the Theory of Relativity (again, just like the Theory of Quantum Mechanics) has NEVER once failed ANY test that science has EVER subjected it to – NOT A Single One – making it as true as anything in the universe can ever be, because no one has ever successfully demonstrated, or better stated, no one has ever even come close to demonstrating, its incorrectness - not even once.”


Chongo, in collaboration with Jose. January 2010. Conceptual Reality, preface. Retrieved from http://chongonation.com/nutshell.htm, March 10, 2012.


[2] “Quantum Mechanics has been around since the thirties and is the basis of essentially all modern physics. It is a clean theory and has been tested, retested, and verified more than any other physical theory in history.”


The Mathematician. April 18, 2010. Ask a Mathematician / Ask a Physicist. Retrieved from http://www.askamathematician.com/?p=2310, March 10, 2012.


[3] “Quantum mechanics has never been shown to be incorrect and has never failed experimentally.”

Hooper, Dan. Fermilab. Quantum Physics. Slide 72. Retrieved from:

https://docs.google.com/viewer?a=v&q=cache:so_g035GLTUJ:smp.fnal.gov/slides/hidden/DanHooperQuantumMechanics.ppt+Hooper,+Dan+Quantum+Mechanics+slide+presentation&hl=en&gl=ca&pid=bl&srcid=ADGEESih4RlVUe7Prhgj5m8YUofB_8H1yjiCGvFvch03UDifQ59FA4j4PnkVsgWN59gb2stPJb81xZdGihnw8Zng6eBL_llmGM6yLCgTmJ-fm7eQNWuVLfi8ueWTW7yT0roiZExlgqbb&sig=AHIEtbTDeo5ArkdDfUSWmALLWWzJJEc4AQ March 10, 2012.

[4] “Quantum theory works. It never fails.”

Bjorken, James. The Future of the Quantum Theory. Beam Line. Summer/Fall 2000. Page 2.


Retrieved from: http://www.slac.stanford.edu/pubs/beamline/30/2/30-2-bjorken.pdf March 10, 2012.


[5] “Since its final formulation in terms of Schrodinger wave mechanics, quantum mechanics has claimed to have never failed any conceivable experimental test [1].” Reference [1] = A. Peres, Quantum Theory: Concepts and Methods (Kluwer Academic Publishers, Dordrecht, 1995).


Budiyono, July 24, 2009. The most probable wave function of a single free moving particle. Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan.


Peres, Asher, Quantum Theory: Concepts and Methods. Kluwer Academic Publishers, Dordrecht / Boston / London, 1993.


[6] “In fact, Feynman once wrote, ‘I think I can safely say that nobody understands quantum mechanics.’ But quantum physics agrees with observation. It has never failed a test, and it has been tested more than any other theory in science.”


Hawking, Stephen; Mlodinow, Leonard, 2010. The Grand Design. Bantam Books. New York. Page 74.

JamesMilligan03:07, 12 March 2012

Jim, I appreciate you contributing to the forum. However, what you're asking and writing about does not relate very much to Fisher's reading. Hence, I suggest you focus on Fisher's book, as that is the topic for this week. Focusing on Fisher and the design of experiments will make this week go much more smoothly, because I will be able to accommodate more requests from the audience if they are willing to tell me what they have trouble understanding.

NicoleJinn04:36, 12 March 2012
 
 
 
 

This whole week was a little confusing for me, but I'm not sure why. Hopefully next week's reading will clarify things for me a bit. Something about Staley's style made the paper difficult to read.

ThomasMasin04:51, 9 March 2012

Thomas, exactly what part of Staley's argument was most confusing for you? Or is it not possible for you to pinpoint one thing that was most confusing this week? The reason for asking is that if what confused you this week directly relates to next week's reading, then I will try to address your confusion in my presentation next Tuesday.

NicoleJinn06:11, 9 March 2012

I also found this reading to be rather confusing. I don't know how significant this part of the reading is, but I was confused by the idea of a degree of security in forming inferences. Staley & Cobb said that researchers can have secure inferences without having to state how secure they are. I don't quite understand why this is; it seems to weaken the strength of internalism. If you could clarify this a bit more, that would be helpful!

Andreaobrien23:26, 9 March 2012

Andrea, security is a concept that is very much related to Fisher's work but is not something that Fisher (explicitly) addressed. I will see if I can say at least a few words about security, in the context of Fisher's reading. Thanks for your comment! To everyone else: keep these comments coming! My suggestion to all of you is to start looking ahead to next week's reading (Fisher) in light of the concepts discussed in this week's reading.

NicoleJinn00:03, 10 March 2012
 
 
 

My issue with Staley and Cobb's paper is that in attempting to apply internalism and externalism to scientific methodology they change the definitions (internalism* and externalism*) to the point that they are almost unrecognisable from their original forms.

I am mostly concerned with internalism*. I took internalism in standard epistemology to mean that justification is based on internal evidence (i.e. evidence must be directly available to and recognized by the subject). Staley and Cobb frame internalism* around an epistemic community or epistemic situation (which could contain many subjects) which, to me, contradicts the whole idea of internalism. Their internalism* also entails security and being able to defend an assertion based on "internal" evidence and collaboration within a community. This seems to go against the internalist idea. They assume that everyone within a "relevant epistemic community" holds the exact same evidence and will operate in identical ways to defend their claims.

I do see a relationship between what I took externalism to be in conventional epistemology (that justification depends on whether the evidence is objectively true, and the evidence need not be directly accessible to the epistemic subject) and what Staley and Cobb call externalism*, in that knowledge or assertions of experimental conclusions are judged on their "truth-conduciveness". I just don't see the necessity of making the distinction when all that changed was "knowledge claim" to "the assertion of an experimental conclusion."

The argument that justification in science is both internalist and externalist is problematic for me because of my understanding of internalism and externalism. To me they are mutually exclusive options, and instead of changing the definitions entirely, Staley and Cobb might as well have used their own terms to describe their notions of internalism* and externalism*.

AlexanderBres20:44, 9 March 2012
 

Both parts of what they are saying make sense to me, because I believe that in scientific experimentation one's results must not only explain what is going on in the world but also come from a properly set-up experiment. In experimental method we speak of internal and external validity. Internal validity means your experiment is set up in such a way that it protects you from coming to a wrong conclusion (things such as having a control group and using random assignment all help to increase your internal validity). External validity, on the other hand, concerns things such as having an experiment that resembles what actually goes on in the world, a truth or reality component. For example, just because people in an experiment act in a certain way, does the same go for the world, when there is no experiment and they are not in a lab? Both components are necessary and neither alone is sufficient to tell us whether the answer we got from the experiment is in fact correct. So the externalist says that your results must be able to apply to real-world situations, while the internalist says that you must have reason to defend that, and that reason can come from an experimental design that has been conducted properly.
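As a rough illustration of the internal-validity side, here is a small sketch of random assignment (the participant names and the fifty-fifty split are made up by me, just to show the idea):

import random

def randomly_assign(participants, seed=0):
    # Shuffle a copy of the participant list and split it in half.
    # Random assignment supports internal validity: on average the two groups
    # are comparable before any treatment is applied, so a difference observed
    # afterwards is less likely to be a wrong conclusion about the treatment.
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomly_assign(["participant%d" % i for i in range(1, 21)])
print("treatment group:", treatment)
print("control group:", control)

External validity is then the separate question of whether what happens to these (made-up) participants in the lab resembles what goes on outside the lab.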

ShivaAbhari04:37, 13 March 2012
 

"Truth-conducive" sounds like a powerful term, and a good feature for an experimental design to have, perhaps the best feature. But it rings hollow in this article because what it means is never really explained. How do you know? Do you look at the truth and then see if your experiment matches it? I just don't quite get it.

KevinByrne07:07, 20 March 2012
 

forum 7: week of 27 Feb - pragmatic encroachment

I hope you've all got my email with reading suggestions. Contact me if you have not.

On page 564, in the last whole paragraph, F & McG state their assumptions. You may have worries about (1), fallibilism. But that's going down a well-explored route. (Comments welcome, all the same.) I think (3) is the assumption doing the most work. Think about it: is it as obvious as it might seem?

( (2) is important too. Worth pausing to think out what it is saying.)

AdamMorton23:38, 23 February 2012

I am writing to endorse Nicole Jinn's February 16 posting on the PHIL 440A Non-Standard Topics: on Experiment [Staley] March 6,8; and, Experimental Design [Fisher] March 20, 22.

Nicole's February 16 Course Talk posting reads:

Last Thursday (February 9), Dr. Morton briefly went over the remaining topics that he plans to cover in this course. Among them, he mentioned that the two that are least connected with his overall motive for this course are the readings by Staley (6-8 March) and Fisher (13-15 March). I am curious as to whether anyone in this course (among the participants) has objections to doing any of these two readings. If so, please be honest about your objections and I will try to consider them to the best of my ability. While you decide what objections you may have to those two readings, I just want to make it known that you may expect to see me give short presentations on Fisher's reading on one or both days during that week. As much as these two topics (or readings) are least connected with this course, they are (ironically) the two topics of most interest to me, if that makes sense to any of you.

NicoleJinn02:07, 16 February 2012

Dr. Morton's November 17, 2011 E-mail questions to PHIL 440A course registrants on Non-Standard Topics:

I would like to spend some time on the following non-standard topics. Do you have any background or interest?
- the design of experiments & the philosophy of experimentation
- the link between grounds for knowledge and reasons for action

My November 17, 2011 E-mail reply to Dr. Morton:

Thank you for your E-mail on PHIL 440. I think I can claim background in design of experiments. Current focus is 0 carbon dioxide emissions, and the deployment of the plant to implement it. Your non-standard topics are of great interest.

JamesMilligan08:07, 26 February 2012
 

I think I rather agree with (2), almost unreservedly. And (3) does seem pragmatically straight-forward. But I cannot seem to make the three statements lead directly to the conclusion reached. They seem to me to lead to a justification in -doing- but not a difference in -knowing-. (1) doesn't lead to an alteration of knowledge, but an alteration of surety beyond knowledge. "Do you know that?" "Yes." "Are you SURE?" "Sure ENOUGH." is not changed to "Are you SURE?" "You're right, I don't know." It's rather "Are you SURE?" "No, I'm not sure, but I -think- so." I'm not sure (pun not intended) if it could be more convincing with a bit of rewording, though.

AngeGordon04:30, 28 February 2012
 

I don't think they explained away uncertainty as definitively as they hoped to. Unless I am mistaken, they conclude that if you know reason r, then, no matter the risks, the possibility that not-r is irrelevant. To me their reasoning about the big O went nowhere, so their conclusion about r is just something they said at the end. Risk will always be a factor in my decisive use of knowledge: having "not-r" in the back of your mind does not subdue knowledge, and cutting out the possibility of "not-r" can only stifle your scope of awareness.

KevinByrne06:39, 28 February 2012
 

Generally, I don't have a problem with any of the claims or #3 in particular; however, the reductio argument used by Fantl & McGrath to arrive at (3) confused me, but on a purely pragmatic level and only in DeRose's Bank examples (p. 564, paragraphs 2 and 3) they used to explain it. In my interpretation, option O (in case A) is "waiting until tomorrow to deposit the check instead of going in and double-checking whether the bank is open". The authors claim that "he will know that going in to check further will have a worse outcome". I realize the low stakes of case A, but it escapes me why improving one's epistemic position concerning the bank's hours is ever a worse option. Perhaps it is not important in this particular situation (hence the low stakes), and maybe it will take up a couple of minutes of the individual's time, and maybe the clerk will be rude or the hours sign will be unintelligible; but overall, knowing the hours will maybe save this person from attempting the bank line-up some Friday nights in the future! I agree with the authors that you are still, in fact, justified in doing O; but I don't think the other option is objectively worse.

Olesya07:34, 28 February 2012
 

(1) is fairly unproblematic for me. (2) and (3) seem quite related to each other in that both apply in cases where there is a lack of certainty. Indeed, (3) is the assumption doing the most work. (3) is also the most problematic for me, for the following reasons: (a) Supposing that one knows "that O is best" is a huge leap for me, because (b) "that O is best" is arbitrary: what does it mean for O to be best? (This question is NOT answered in the Fantl and McGrath article we are reading.) Especially when we acknowledge a lack of certainty, the "best" option need not be lopsided in the sense that all other options are "much worse" than the "best" option, whatever that may mean. In other words, the "best" option may not be that much better than the second-best option (i.e., the first option may beat the second by a very narrow margin, a close call). This is why taking the "maximally" likely option is not always optimal in the probability setting. Hence, I do not buy Fantl and McGrath's argument or reasoning for "if you know that O is best, you are justified in doing O" (page 568), because I almost never know for sure "that O is best"! Establishing the truth of "O is best" is difficult, and the authors (Fantl and McGrath) seem to have swept this important point under the rug.
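A tiny made-up example of what I mean by the maximally likely option not being the optimal one (the probabilities and payoffs are mine, not anything from Fantl and McGrath):

# Each action is a list of (probability, payoff) pairs.
actions = {
    "A": [(0.6, 10), (0.4, -50)],  # its single most probable outcome pays 10
    "B": [(0.5, 8), (0.5, 4)],     # no outcome here is as good as 10
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

for name, outcomes in actions.items():
    most_probable_payoff = max(outcomes, key=lambda pv: pv[0])[1]
    print(name, "most probable payoff:", most_probable_payoff,
          "expected value:", expected_value(outcomes))

# A's most probable outcome beats anything B offers, yet B's expected value (6.0)
# is far better than A's (-14.0): picking the "maximally likely" winner is not the
# same as picking the best option.

And even when one option does come out on top, it may do so by a very narrow margin, which is why I do not think we can simply help ourselves to knowing "that O is best".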

NicoleJinn18:22, 28 February 2012
 

Personally, I have only minor qualms about 1 & 3. My problem with the second claim is the condition "if the stakes are high enough". I still don't buy that high-stakes situations should have an effect on "certainty", implying that it also has an effect on knowledge (or else what would that certainty pertain to). I'm inclined to agree with Ange that the entire argument seems to be about pragmatics rather than epistemics.

ZacharyZdenek19:08, 28 February 2012

In support of the Jeremy Fantl and Matthew McGrath paper's commitment to pragmatic encroachment, in the absence of certainty in knowledge, I offer the example of Winston Churchill.

In the book titled Troublesome Young Men: The Rebels Who Brought Churchill to Power, author Lynne Olson describes how a group of young Tory members of Parliament, in May 1940, toppled the British Prime Minister Neville Chamberlain, the leader of their own party, from power.

Chamberlain had an overwhelming parliamentary majority. He had declared war on Nazi Germany eight months earlier, upon the Nazi invasion of Poland. The young dissidents used a major British military setback in Norway, and the speech of the leader of their dissident group, to motivate the British House of Commons to reassert itself as the guardian of democracy. The result was that Churchill became Prime Minister on May 10, 1940.

In the book titled Five Days in London: May 1940 (1999), author John Lukacs describes the five days from Friday, May 24, 1940 through May 28, 1940. On May 28, Churchill had won a struggle with his War Cabinet. He declared that England would go on fighting, no matter what happened. No matter what happened, there would be no negotiating with Hitler.

On page two, Lukacs writes, "Then and there he saved Britain, and Europe, and Western civilization."

In 1943 the United States War Department produced a factual film titled The Battle of Britain. In June 1940 the Nazi army had 100 fully equipped divisions lined up along 2,000 miles of the European coast, from Norway into France, for the planned invasion of Britain. Britain had less than one fully equipped division. The Nazi air force outnumbered the British ten to one, both in aircraft and in pilots.

I think Churchill satisfies [1] in the absence of certainty, [2] in the stakes making a difference, and [3] in the justification given by his personal commitment to resist the influence of the appeasers.

JamesMilligan07:59, 29 February 2012

(3) says that if "option O will have the best outcome of all your available acts, then you are justified in doing O." That seems right when it is a matter of doing something affecting only yourself, as in the bank example, where staying in line would be the best option if the stakes were high. But take another example: say your sister falls through a crack in the middle of a frozen lake and breaks her leg. The best option would be to bring her back before she freezes, taking the risk of walking across the lake (knowing there is a chance it will crack again). Even though the stakes are high in this example, it seems the best decision would be to rescue her, whereas if it were only a matter of walking across the lake individually, the best option would be to stay put. Perhaps I am missing the nature of the stakes, or of the best option (or this may even be a question of ethics). Perhaps (3) is justified only when the individual alone is under consideration.

DorothyNeufeld07:55, 1 March 2012
 
 

Assumption 3 is plausible. If I know O is the best option, then I ought to do O. This is because I have the reason that O is the best option, where having such a reason is bearing some sufficient epistemic relation to that reason.

I'm not convinced that this relation is a relation of 'knowing that', however. When I know r and r is a reason for doing O, then I ought to do O. But the relation I bear to r may just as well be 'believing that' or 'judging it to be the case that'.

MclarenThomas08:19, 1 March 2012
 

I wanted to raise a further issue on the topic of deception, or rather on the issue of how humans have different levels of trust in different situations. Some people take risks; some people are confident in what they "know". For me, I would not cross the frozen lake if the only thing I had to gain was time, since if I was wrong I could die. "Reckless Rick", on the other hand, would claim that he knew he would not fall through the ice, so he crossed the lake. Both Rick and I had the same information, but for him it was knowledge and for me it was not. To me this seems to be the effect of removing certainty from the prerequisites for knowledge. Once you don't have to be certain, it becomes a matter of opinion whether you know something or not.

ThomasMasin19:23, 1 March 2012
 

I am sympathetic to the pragmatic approach to resolving paradoxes such as the snowmobile example discussed in class. While I see how it can be troubling to epistemologists, I think it still offers an intuitive description of how the concept of knowledge is actually applied in real-world situations. I could very well imagine myself saying, well, I know we're going to have class next week. But if someone asks, "would you bet your life on it?", I would retract the earlier statement. Well, maybe I don't know it; I merely think it to be probable (but not probable enough to warrant risking my life).

Perhaps why epistemologists have trouble with this conclusion has more to do with the word "know" than with any actual disagreement over how people behave. We've ascribed so many things and connotations to knowing something, and knowledge has been virtually elevated to the pantheon of the immortals. But the conclusion shows that knowledge is not only mortal, but subjective as well. Perhaps we need to find another word for the doubleplusgood knowledge that philosophers describe.

Edward06:01, 2 March 2012

I disagree with the claim that knowledge is subjective. Beliefs are subjective, but knowledge is not necessarily subjective (and the two terms--belief and knowledge--are not interchangeable, at least in statistics or applied mathematics). The type of knowledge I'm thinking of is scientific or experimental knowledge: "The growth of knowledge, by and large, has to do not with replacing or amending some well-confirmed theory, but with testing specific hypotheses in such a way that there is a good chance of learning something--whatever theory it winds up as part of" (page 56, "Error and the Growth of Experimental Knowledge" by Deborah Mayo). My main point is that these specific hypotheses need not be subjective, unless the scientific models themselves are subjective. However, I don't want to think that the scientific models themselves are subjective. Otherwise, the entire pursuit of science would be subjective--there would be no objectivity in science, but I do not think that is the case! Does anyone else believe that there is (at least some) objectivity in science???

NicoleJinn07:43, 2 March 2012
 

The more I read about the role of stakes in regard to knowledge, the more I question whether it is actually knowledge that is being influenced in these cases. It is apparent that stakes do play a large role in the outcomes of these scenarios, but I wonder if it might be that high stakes have more of a role in changing the way that one acts, rather than their knowledge. I am suggesting that maybe these stakes can impact the way in which they choose to act without truly weakening their knowledge. Is it possible that these stakes are causing people to act contrary to what they actually know? In the case with ice thickness, it seems that the person knows that the ice is thick enough, but something like their conscience, or gut, leads them to act in opposition to this knowledge.

Andreaobrien23:26, 2 March 2012
 

I have no problem with claims (1) or (2). (3) 'If you know O will have the best outcome you should do O' is where I identify a problem. It seems like an oversimplification that requires some clarification.

There are many factors that need to be considered in deciding which option will be best, which is why some examples are so problematic for the argument. (3) relies on the assumption that in every given case there will be an option that will undeniably lead to the best outcome, but there are no universal criteria for what makes an outcome the best.

Is the best option the one that is most likely to have a favourable outcome? This can't be it since what is most probable is not always the rational choice to make.

What the 'best option' is also varies across viewpoints and the amount of evidence available. Is the best option objective, based on the evidence that would be available to an omniscient observer? Or is the best option subjective, based only on the evidence of whoever is making the decision? And if it is subjective, is it based only on the evidence available in the split second before the decision must be made? These issues need to be resolved before the third assumption is permissible.

AlexanderBres23:52, 3 March 2012

In Jim's Churchill example, a crucial point that needs to be made is that Churchill was bluffing. That is to say, deceit played a role in the forming of a historic outcome. In the Churchill example it is a passive, tacit form of deceit (lie by omission, non-disclosure). Closer to home, a more active example of deceit is provided by the tale of one of Tecumseh's tactics in the War of 1812-14. The great First Nations leader, in collaboration with the British general Brock, was able to convince a large attack force of American troops stationed at their fort in Detroit that the Canadian defence forces were mightier in number than they indeed were. After having Brock send the Americans a letter declaring that 5,000 Canadians were on the way, Tecumseh had his small band, upon their arrival, circle the fort single file through a clearing. He then had them double back through the woods to repeat their appearance of passing through the clearing, over and over, giving the Americans the impression that they were vastly outnumbered. Subsequently the Americans, under General Hull, sent out a white flag and surrendered Fort Detroit, suffering at that time their greatest loss of territory to a foreign power, and affecting the course of the war. Relating these historic examples to point (2), the stakes being high would surely have to include the active and even probable likelihood that deceit will be involved in affecting outcomes, given that warfare is a life-and-death struggle in which the stakes are dramatically heightened.

Robmacdee19:22, 5 March 2012
 

I sort of agree with (1), since in certain less serious contexts you can say you know something without really being sure that you do, without having looked at all the reasons for and against it and made an educated assertion. (2) definitely makes a lot of sense, because I think at any point there can always be doubt, even if it is tiny or microscopic; for all the things we know, we may one day be proven false about some of them. The degree to which we think we know something depends greatly on what is at stake, and if we are uncertain and the stakes are high, this will greatly affect what we choose to do. (3) is kind of common sense: if you know option O is best, of course you will do O. The real question is how certain you are about O being best, and I do think how certain you are is determined by what is at stake. Fantl and McGrath said that if the man knew the bank was open Saturday, then he should go back Saturday even if the stakes are high. I agree: if he does in fact know it is open, then he should go back, and even high stakes don't change the fact that he knows it. But the problem, I think, is that knowledge is not concrete, or better yet, beliefs are not concrete. We can have beliefs and be certain of them, but maybe we can never have true knowledge, because even if we think we know something we could turn out to be wrong; and could we ever say we knew something and then we didn't? So the question is how certain we are right now, and what reasons we have for believing what we do right now. If the stakes are low we don't need that much justification, but if they are high we need more. This clears up the bank situation at least a bit. When the stakes are low he says, "I know the bank will be open because it was last week, so I'll come back," but when the stakes are high he says, "Well, they could change the hours, or maybe I could even be mistaken," so the evidence is no longer good enough given the high stakes, and he goes into the bank. Did he ever know? Did he know, in the high-stakes case, that the bank was open on Saturday? I think he knew on some level, just not a level high enough... so I guess I think knowledge is more of a continuum and less a binary category.
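Here is a rough way of putting the "more justification when the stakes are high" idea into numbers (the gains and losses are invented by me, not taken from the paper):

def probability_needed(gain, loss):
    # Act on the belief only when p * gain > (1 - p) * loss.
    # Solving for p gives the smallest probability of being right
    # at which acting on the belief is worthwhile.
    return loss / (gain + loss)

# Low stakes: coming back Saturday only saves a short wait in line.
print(probability_needed(gain=1, loss=1))    # 0.5  -> modest evidence is enough
# High stakes: if the cheque is not deposited in time, disaster.
print(probability_needed(gain=1, loss=99))   # 0.99 -> you need to be nearly certain

On this picture the evidence stays the same while the threshold for acting moves with the stakes, which fits my feeling that knowledge, or at least warranted action, sits on a continuum rather than in a binary category.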

ShivaAbhari05:36, 9 March 2012
 

forum 5: week of 6 Feb. Hawthorne and lotteries

Puzzles about belief and knowledge in connection with lotteries have been around for a while. But Hawthorne's contribution was to point out how similar some of the issues are to issues about skepticism. The common pattern is something (A) that we readily admit to knowing but which has as a consequence something (B) that we are reluctant to say we know. For this to be so, there have to be examples of A that do look like knowledge, and of B that don't.
So, which examples work for you?
And what conclusions about knowledge and the way we organize our beliefs does this push you towards?

AdamMorton04:38, 5 February 2012

If my desk, which I sat my computer upon last week and then typed an email at, turns out to be a loaf of bread, which I then find, to my horror, has been sliced up and turned into sandwiches, am I to believe it was not (a) a real desk but merely a 'desk facade', or, alternatively and pre-sandwichhood, that it was (b) not really a food item, since it was obviously functioning as a desk last week? (I have the photos on my cell to prove it :-) If I am caught in an earthquake and masonry and bricks are falling all about me, and I use a salad bowl for a safety helmet, is it (a) a safety helmet or (b) a salad bowl? Knowing there is 'something', we call it by how it functions as a piece of, and in, the big picture we call the world. As for lotteries: why would someone buy a lottery ticket if they were sure they wouldn't win? I can think of one possible and quite plausible reason. Lotteries raise money for some cause which a ticket buyer may support, but only as one among many contributors. Winning may be seen by the buyer as a harmless fantasy which can be indulged in, and serves the seller as a promotional nudge in the direction of chipping in to a cause which every ticket holder can feel is of collective benefit to all concerned. There are two opposed, conflicting motives for participating in the game. The prize offered in a sense uses self-interest against itself, to achieve a larger goal. That would be the 'real', sophisticated game, as opposed to the more naive game of thinking that it's all about me. It serves as a method of meeting social need where the credo is looking out for number one. Winning would be the exception which proves the rule.

Robmacdee21:48, 6 February 2012

It seems as if Hawthorne addresses the tension between our intuitions and probabilistic reasoning about the future. The lack of linkage between an 'ordinary proposition' and a lottery proposition explains the impossibility of knowledge of the future. Hawthorne's use of divisions of epistemic space in his reasoning seems unsatisfying, for some reason, as a conclusive argument for refuting knowledge of the future or knowledge through deductive reasoning. Overall I agree with Hawthorne in rejecting parity reasoning, but the rejection seems incomplete. Hawthorne also seems to neglect to mention, regarding the lottery propositions, that speakers are actually aware they do not know their lottery propositions.

DorothyNeufeld02:41, 7 February 2012

I concur with your point about Hawthorne's rejection of parity reasoning being incomplete! What's more, he mentions duplicate reasoning only IN PASSING, without going into its details (because it is "not our main topic here")! The reason I dislike his only mentioning duplicate reasoning and not going into the details is that I agree with duplicate reasoning - this is the position I would take. Also, I just want to disagree with DeRose's comment on probabilistic thoughts being forced upon us. Anyone else disagree with DeRose's comment (top of page 26 in the chapter we are reading)?

Dorothy, why do you think Hawthorne's use of the divisions of epistemic space in his reasoning is unsatisfying as a conclusive argument for refuting knowledge of the future or knowledge through deductive reasoning? It seems like you are on to something, and I just want to see if you can spell it out in more detail. After all, this forum is one place where we can share our ideas with one another.

NicoleJinn01:00, 8 February 2012
 
 

Hawthorne (at least in the sections that we are reading) doesn't really address the issue of positive claims of knowledge that can counter the B claims before they are even brought up, as in the example of "faux zebra-stunt-double mules" and a visiting zoologist that DeRose brings up in his paper. The visiting zoologist has enough knowledge in his field to be able to tell that the creature in front of him is indeed a zebra, and not a mule disguised as a zebra, even if the latter possibility is brought to his attention. The zoologist's belief concerning the genuine nature of zebras would then be a sensitive one. Consequently, (B) is still knowledge in the case of the zoologist, because it would be a lot harder to fake specific characteristics of a zebra aside from its peculiar coloration. Granted, this could lead down a slippery slope of measuring expertise in certain fields, which could then bring us back to justified true beliefs, but I think this issue is an important one in DeRose's distinction between "simple skeptics" and "AI skeptics", and, since Hawthorne takes DeRose's argument into account, it is interesting that he doesn't mention such a crucial part of it in his paper.

Olesya08:33, 7 February 2012
 

It seems to me that the reason we are so hesitant to admit that we know we will not win the lottery is that we are putting effort into the process. Our goal in buying a lottery ticket is to win, so we refuse to admit defeat by claiming that we know we will lose. This contrasts with the case of the African safari because in that case we are not making an effort to save up money or find a higher paying job. I think if someone played the lottery daily, with the goal of going on an African safari in mind, they would not say that they knew they would not go on an African safari.

ThomasMasin20:21, 7 February 2012
 

So he's got: A (for sure), A->B, not-B (not for sure). In relation to just the safari heart-attack scenario, A is knowledge about a plan you have made, so that knowledge is self-dependent. B is not a piece of knowledge that depends on your volition, so it is of a different quality; it is more alien to your perspective than A is. This is the reason A is proposed with confidence while B is proposed with apprehension. The middle inference still makes sense, but it does not create a logical paradox by transitioning certainty into uncertainty; A and B have their respective qualities all along.

KevinByrne04:11, 8 February 2012
 

I cannot bring myself to approve of most of the examples that Hawthorne supplies beyond, perhaps, his example including eating salmon. There seems to me a slippery trick in connecting a "now" proposition with a "then" proposition as we continually do. The little word "will" makes such a difference, as does the difference between have and am! That I have hands is indisputable, regardless of whether or not I am also a brain in a vat. That I will tend the dog tomorrow does not hinge on whether or not I have a heart attack tonight. I understand that in epistemology we're not interested in collapsing wave fronts and following branches of time, but each of Hawthorne's examples seems to do just that without actually solving the issue of how to travel forward and back along the correct timeline and stay in the correct possible world!

AngeGordon04:56, 9 February 2012
 


The examples that work for me as examples that look like knowledge, and as examples that don't look like knowledge, are those expressed in paragraph 563 of On Certainty, a publication of material that Dr. Ludwig Wittgenstein wrote on twenty sheets of foolscap and in small notebooks during the last year and a half of his life:

563. “One says ‘I know that he is in pain’ although one can produce no convincing grounds for this.—Is this the same as ‘I am sure that he…’?—No. ‘I am sure’ tells you my subjective certainty. ‘I know’ means that I who know it, and the person who doesn’t are separated by a difference in understanding. (Perhaps based on a difference in degree of experience.) “If I say ‘I know’ in mathematics, then the justification for this is a proof.

“If in these two cases instead of ‘I know’, one says ‘you can rely on it’ then the substantiation is of a different kind in each case. “And substantiation comes to an end.”

JamesMilligan07:36, 9 February 2012
 

Can we please make sense of the closure principles in Hawthorne's paper?

WilliamMontgomery17:13, 9 February 2012
 

I found most of Hawthorne's examples to be fairly similar with some small variations. They all seemed to serve the same purpose of demonstrating what he sees as our intuitive ways of reasoning (duplicate and parity reasoning). If we are to accept these types of reasoning as intuitive, which seems to me permissible in most cases, it leads to the conclusion that people are just inherently bad at probabilistic reasoning. Assuming it is equally unlikely that I will go on an African safari next week as it is that I will win the lottery, a subject should be just as willing to say that they know either case will or won't obtain.

AlexanderBres22:10, 12 February 2012
 

The philosophers we have read thus far in the course seem to be extremely focused on common-sense intuitions, and hold them up as the standard against which philosophical arguments are to be judged. This is especially true of Hawthorne, and this, I think, leads to the main problem I and others have with his argument.

To put it succinctly, the whole argument seems unnecessary. Sure, he points out some epistemological quirks in our intuitions. Our intuitions aren't at all coherent in many cases, and they tend to deliver different verdicts as the circumstances vary. People seem to disclaim certain knowledge involving probabilities while whole-heartedly embracing other cases.

But while I think such observations are interesting, I can't seem to tease further philosophical implications out of them. Can't the simple explanation - that people are incompetent at estimating probabilities, and that they are pushed and tugged in all directions randomly by their unconscious - be sufficient? After all, we don't make such a big fuss over other gross misestimations by people.

I seem to recall that Professor Morton mentioned in class that philosophy is all about the price of your belief. Well, in this case we've simply established a gross mis-pricing, and either one price has to rise or the other has to fall.

Edward04:29, 13 February 2012
 

I feel like there is a difference between the example of winning a lottery and the examples of having a heart attack or inheriting money from a dead family member, and I am going to explain why. Hawthorne uses the safari example and says that because I know I will not have enough money to go on the safari, I thereby know that I will not win the lottery. And since no one can know this, I cannot know that I will not have enough money to go on the safari. In fact I MAY win the lottery and go on the safari. I don't think someone who has purchased a lottery ticket can actually say they KNOW they will not win, because they MAY, and hence they cannot draw conclusions that depend on not winning (like not going away on a safari). What I mean is that a person who has bought a lottery ticket therefore cannot know that they will or will not have money to go on a safari.

However, I don't think it is the same with the heart-attack situation. If I plan on going away to the safari tomorrow, I can SAY so regardless of the fact that I may have a heart attack and actually not go, because in the position I was in when I made that assertion I had every right and reason for making it and KNOWING it. If we had to consider all the possibilities and things that could happen to undermine our knowledge, then we would not have knowledge of anything - even the zoologist could in fact be deceived if someone had drugged him or if someone was just that good at disguising a mule. So the state you are in when you make an assertion of knowledge matters. If you have no reason to think you will have a heart attack tomorrow, or no reason to think it is a mule, you can say you know - or at least you are in a better position than the person who has bought a lottery ticket, because that person, in the very situation in which they are about to assert that they will not go on the safari, knows they have bought a ticket, so they must keep that in mind and therefore cannot conclude that they will or will not go on the safari.

In order to say you know something, you must consider all the facts present to you NOW. One can never predict the future, so anything is possible, but I think we can say we know things (based on our reasons, if nothing out of the ordinary happens) and later turn out to be wrong. But if we were wrong because there were facts which we didn't consider and should have - facts which, had we considered them, would have stopped us believing it - then I don't think we can say that we in fact did know it; we did NOT know it. Hope that makes some sense :s

ShivaAbhari00:13, 8 March 2012
 

Having gone over the paper again tonight, I cannot help but still be almost entirely befuddled by what the authors are trying to say. Either their thesis is horribly complex and I do not understand any of it, or (more likely) they have taken obfuscation to a completely new level. Sometimes, when I read their paper, I get the eerie impression that their veneer of semantic acrobatics hides a pretty simple truism about science: you do not know what you do not know, you do not know what you do not believe, etc. If that were the case, it's just basic epistemology. They could have done just as well with a Venn diagram showing beliefs, truth, and knowledge.

Edward05:33, 8 March 2012
 

forum 6: week of 13 Feb - K & practical interests

I assume you've all got my message changing the reading from Stanley to Russell & Doris.
Does the bank example work for you?
~ If it does, what feature of the example makes you think that knowledge depends on the practical situation of the knower?
~ If it doesn't, what do you think is fishy about it?

AdamMorton01:49, 11 February 2012

The bank example doesn't work for me. In the paper Knowledge by Indifference, the authors Russell and Doris begin with the question: is it harder to acquire knowledge about things that really matter to us than it is to acquire knowledge about things we don't much care about? They also claim Stanley's thesis conflicts with several traditional, and quite plausible, epistemic principles. Under heading 2, Indifference Cases, subheading Richboy, they include the propositions that money may buy the instruments of knowledge, and that money buying knowledge may not offend as much as money buying love. I think they lack examples of knowledge, or epistemic circumstances, that relate to what is feasible in lovemaking and entrepreneurship.

JamesMilligan07:55, 11 February 2012

I don't really know if I agree with Stanley; I'm still digesting. (I feel like a boa constrictor that's just been fed a very large and indigestible goat. Takes a while.) If anything seems fishy to me it's the argument against him. Maybe I'm missing a beat here, but in the banking story it is not, as far as I can see, the money that is knowledge-making, but rather what the subject concerned knows about his or her relationship to the money. Money buys power, and one can know this. The argument starts with the premise that access to money is either necessary or not (high or low stakes) - that is to say, whether one needs access to money, in this case via the bank, or has some other fallback (Richie's other source, or the couple's winning ticket), or denies the necessity altogether (Ded's getting out of Dodge).

It seems to me that two things are being confused in the banking story. One is the epistemic consideration regarding privilege or indifference, and the other is an ethical consideration of them. It looks to me like the authors are conflating these two philosophic approaches into one. Richie's decision to use his knowledge that his privilege may be called upon to save him may be morally reprehensible if acted upon, but it is outside the epistemic test; it is an ethical question. If I have insider knowledge that buying certain stocks on the market will enrich me, that is knowledge. Whether it is an unfair practice to act upon that knowledge is an ethical question. Surely this is outside the categorical box within which the epistemic equation is held. In the case of Ded, the slacker, his indifference does not necessarily demonstrate a lack of knowledge or judgement, except perhaps in a (rather snotty, I might add) judgement on the part of the narrator. On the contrary, his indifference might be seen as a willingness to pay a known price, in terms of his status and identity, by means of a quite conscious, defiant refusal of the claim that money makes to direct his life. Just as apartheid may serve as an advantage to an affluent Afrikaner in a racist society, the Stanley argument doesn't ethically justify Richie (money) or Ded (indifference) by saying that their advantage exists; it simply says that they know they have it, in proportion to the stake that they hold. Whether they should exercise their advantage involves an ethical decision on their part.

As for the lottery-win variation, I hate to have to bring it up, but doesn't this bring in tracking in some way, as in the change over time from high to low stakes as they go from no dough to winning ticket? (I have, admittedly, a very limited grasp of the theory.)

Robmacdee23:55, 13 February 2012
 

can we talk about term papers next class?

WilliamMontgomery02:05, 16 February 2012
 

When you ask if the bank example works, it really comes down to "works for whose theory?" I think Stanley's example of high and low stakes works wonderfully to exemplify IRI as an aid to theories of knowledge rather than as a complete theory in and of itself. The other examples in this week's paper I find less compelling... because they're being used as a complete theory, maybe... because I can refute them in a simple sentence - "urgency can diminish knowledge, indifference cannot increase it" - definitely.

AngeGordon05:00, 14 February 2012
 

My favorite article so far: quirky examples, clear, and an entertaining tone. As was written, IRI is not a theory of knowledge. In every case the characters were all equally justified in their belief, because they all based their belief on the same evidence. As was further written, IRI is a constraint on knowledge. The probability that the belief is true, multiplied by an inverse measure of how much the agent cares about the outcome of acting on the assumption that the belief is as true as knowledge can be, equals the psychological comfort the agent feels in trusting their knowledge claim to function for them in the desired way. This article isn't about knowledge; it's about how we feel.

KevinByrne05:35, 14 February 2012
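A minimal sketch of one possible reading of the verbal formula in the post above. The function, the 0-1 scales, and the reading of "an inverse amount of care" as (1 - care) are illustrative assumptions of mine, not anything from the article or the post:

def comfort(p_true, care):
    # Scale the probability that the belief is true by how little the agent
    # cares about the outcome of acting on it (both inputs on a 0-1 scale).
    return p_true * (1.0 - care)

# Same evidence, same probability that the belief is true; only the stakes differ.
print(comfort(0.9, 0.2))  # low stakes: about 0.72 - fairly comfortable acting on the belief
print(comfort(0.9, 0.9))  # high stakes: about 0.09 - same evidence, far less comfort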
 

I ma-a-a-ay have accidentally read both papers assigned for this week, so this might explain my stance on the bank example better. The following seems counter-intuitive: if in the example of Richboy or Hannah's lottery case, having money improves one's epistemic condition and not having it, consequently, worsens it, then by the same logic, Ded would still be in a worse epistemic position than all the obscenely wealthy people. But he is just as well-off, according to the IRI, as a result of his lack of care about the issue. Therefore, a high enough level of indifference should then be equivalent to Hannah's winning the lottery; maybe if she took some Valium instead of braving the line on Friday, it would improve her epistemic position! This seems wrong, because the reason why the lottery improves the couple's epistemic position is that it offers a solution to their financial pickle. In that case, however, why does it make sense for a deadbeat Ded to have more knowledge with little vested interest, seeing how his financial situation is no better than Sarah & Hannah's in the High Stakes example?

Olsy18:02, 14 February 2012
 

I liked the bank example as a way to approach the issue of indifference in knowledge, but I think the exact knowledge that caring or indifference is supposed to produce was presented in a misleading way in this paper. In the example of Trust Fund Richie not caring about whether the bank is open on Saturday, he chooses not to go to the bank on Friday because he knows that even if the bank is closed on Saturday, he won't face severe consequences. So when he decides not to go to the bank on Friday, he doesn't have any knowledge that the bank will be open on Saturday; he simply has knowledge that the consequences won't affect him. This indifference does not reduce or increase his knowledge of whether the bank will be open. However, in the high-stakes example with Hannah and Sarah - when Hannah decides that she does not know whether the bank will be open on Saturday, and she chooses to go to the bank - the consequences of not having the cheque deposited in time do weaken her knowledge. The high stakes in this case prevent her from being able to know with complete certainty that the bank will be open on Saturday, and cause her to go to the bank on Friday instead. For these reasons, I think that high stakes can reduce one's knowledge, but I do not think it follows that low stakes increase one's knowledge.

Andreaobrien23:06, 14 February 2012
 

I think the bank example does - to a large degree - successfully illustrate the point about epistemic priorities, and helps to highlight the meta-epistemological concern of whether something's properly being called knowledge depends on its importance to us.

First, I just want to go back to that picture we had in class today. There were two parallel lines, with the endpoint of each being knowledge and desire respectively. I think in order for knowledge to really be prioritized, it needs to be subsumed under desire. So the picture would be: desire/sentiments -> goals -> steps needed to satisfy that goal (e.g. evidence, facts, other observations) -> beliefs of how to achieve said desires -> finally, actions.

Save for that initial set of conditions - i.e. what we termed desires/sentiments - it seems natural to suppose that every step in that long chain is rational. If rationality were constantly and consistently applied, however, we would find that it will in some cases conflict with our epistemological goals and ideals, and this is where the notion of prioritization comes from.

Prioritization is the rational provision of knowledge. Oftentimes it is good and preferable to know. Knowledge enables us to act in order to bring about the outcomes we desire. If one desires to make steam, for example, it is necessary to first possess the knowledge that water makes steam when heated, and that water can be heated by building a fire underneath it. Without such pieces of knowledge, we would not be able to effectuate our initial desire to make steam.

Since knowledge and desire often coincide, it is difficult to know which causes which. And while it is true that they often occur in mutually beneficial and sustaining cycles, there is still a master and a slave in the relationship.

Just as Hume supposes that reason and rationality are slaves to our moral sentiments, the same can be said of knowledge. When knowledge is indeed assumed to be subsumed under the aegis of desires, its subservience becomes clear.

And so it is clear that when there are divergences between the two, we should favor desire over knowledge, since desire is its master. For example, we desire to save time, and so seek the shorter of two paths. We have the background knowledge that they are roughly the same length, but that one takes approximately 5 minutes less than the other to complete. But determining which path is the shorter one actually requires 20 minutes of calculations and measurements, and thus, in order to actually satisfy our initial desire to save time, it is necessary to remain ignorant of some values in our temporal calculus.

Edward06:20, 15 February 2012
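A purely illustrative way to put the two-paths trade-off above in toy arithmetic: the 5- and 20-minute figures are from the post, while the net-benefit framing and the Python sketch are my own addition, not part of the argument itself:

time_saved_by_knowing = 5   # minutes gained by taking the shorter path (figure from the post)
cost_of_finding_out = 20    # minutes of measuring and calculating (figure from the post)

# The knowledge costs more time than acting on it could ever save,
# so remaining ignorant better serves the original desire to save time.
print(time_saved_by_knowing - cost_of_finding_out)  # -15 minutes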
 

In order to answer your question of whether the bank example works or not, I would need a less ambiguous explication of "serious epistemic possibility" - what Russell and Doris give is not sufficient! I mean, whether something is a "serious epistemic possibility" or not makes all the difference for deciding what counts as knowledge. At least that is what I seem to get from Russell and Doris. Hence, without an alternative explication of "serious epistemic possibility", I cannot say whether the bank example works for me or not.

As things currently stand, I have much trouble grasping the following two statements (from Russell and Doris, page 14): "As the various stakes cases seem to show, interest destroys knowledge and indifference creates it." "conscientiousness impedes the attainment of knowledge and dogmatism supports it."

The reason these two statements are so troubling for me is that I would say, with a fairly high level of confidence, that the opposite is true in scientific practice (especially statistics!): interest creates knowledge but indifference destroys it; conscientiousness does not impede the attainment of knowledge, yet dogmatism has the capability to hinder it.

NicoleJinn01:45, 16 February 2012

The bank example seems inherently unsatisfying and contradictory. It may be framed under a consistent logic, although it appears implausible. Stanley appears to be tying knowledge to practical importance, and relating the two in this way seems improper as an argument. Also, given that the bank example violates the "stability condition", Russell and Doris's argument that knowledge should be resistant to fluctuation tells against Stanley. Introducing the lottery situation heightens this violation of the stability condition, further weakening Stanley's argument.

DorothyNeufeld06:43, 16 February 2012
 

The bank example brings to light a social phenomenon but not really anything about knowledge itself. I would elaborate but I have to go to this class in 5 minutes so we can talk there.

ThomasMasin20:26, 16 February 2012
 

I agree with Russell and Doris that the bank example gives rise to too many problems to be convincing. It seems counterintuitive to me to think that when knowledge is less important to a subject from a practical standpoint, the subject is more likely to have it. Maybe if I read Stanley's arguments rather than just a summary of his position I would understand his reasoning better. That said, I am more inclined to accept Doris and Russell's cases of Ded and Richie as problems for Stanley, since according to his conditions things like having money would affect one's having knowledge. It seems especially problematic that the conditions for having knowledge would depend on luck, which seems to follow from Stanley's bank example.

I also agree with Russell and Doris' consideration of the problem of a dogmatic scientist being more likely to have knowledge than a properly conscientious scientist.

I prefer the contextualist approach of framing epistemic conditions in terms of relevant alternatives rather than interest relativity.

AlexanderBres06:07, 17 February 2012
 

I think the bank example works. Although it sounds kind of strange to say that we know more about the things we don't care much about and know less about the things we care a lot about, I think Stanley clears that up with the bank example. It makes sense that when we have more at stake we would be less willing to say we know something 100 percent, but when there really isn't that much at stake we can say with maybe less evidence that we know it. Situational factors play a major role in what we know or think we know. I feel like this is a recurring theme in the way that I think: it is really quite difficult to define knowledge as a concrete, never-changing thing. I think knowledge depends largely on time and place and situation. And Stanley's high- and low-stakes cases explain exactly what we do ourselves in everyday life. If you have a quiz that is not for marks, you are far more likely to just go with your gut, but when you have a final exam worth a lot, you spend a lot more time making sure you know the answer you think you know. You doubt yourself a lot more on the final; afterward you think, was I correct? But it is very unlikely you will think that much about the quiz that's not for marks. Stanley tries to explain knowledge by looking at how people actually act in differing situations, and he makes sense of why we think we know some things in some situations and doubt them in other (high-stakes) situations.

ShivaAbhari04:46, 8 March 2012
 

Course topics - objections to Staley or Fisher?

Last Thursday (February 9), Dr. Morton briefly went over the remaining topics that he plans to cover in this course. Among them, he mentioned that the two that are least connected with his overall motive for the course are the readings by Staley (6-8 March) and Fisher (13-15 March). I am curious as to whether anyone in this course (among the participants) has objections to doing either of these two readings. If so, please be honest about your objections and I will try to consider them to the best of my ability. While you decide what objections you may have to those two readings, I just want to make it known that you can expect to see me give short presentations on Fisher's reading on one or both days during that week. As much as these two topics (or readings) are the least connected with this course, they are (ironically) the two topics of most interest to me, if that makes sense to any of you.

NicoleJinn02:07, 16 February 2012

I am quite pleased to see, so far, that there are no objections to the readings by Staley or Fisher. As I prepare for my presentation next Tuesday, I would truly appreciate it if all of you (participants) would tell me about ANY concepts you are having trouble comprehending in Chapter 2 of "The design of experiments", or anything in the first TWO chapters of Fisher's "The design of experiments" that is unclear to you! The reason for this request is that your comments would give me a much better idea of what I should focus my presentation on, so that I am able to better meet the needs of the audience. So, please, feel free to tell me about ANY difficulties that arise during your reading of Fisher BEFORE next Tuesday and I will try my best to accommodate them in my presentation!

NicoleJinn01:15, 7 March 2012
 

forum 1, week of Jan 8, Dretske

This paper is a model of a once-dominant style of pure philosophical analysis. It is plunging into the deep end for you, I know. Much of the later reading will be more digestible. You are likely to think "where's the epistemology here; where's the concern with human knowledge, and the standards we can set and fail to meet?" But it is in fact a good case where on reflection you may conclude that there are some rather deep thoughts about these things, carried by observations about language. In particular, the Zebra example has inspired many reactions. It seems to many to give a handle on skepticism that wasn't available before. So read that again. Then read the example about his brother on the bus. Are they making the same point? Then go back to the beginning and the claim that "knows that .." is not a fully penetrating operator. (That's a terminology that has not caught on, incidentally. What we say now is that "know" is not closed under logical consequence. Or we insist that it is.) How does that link to Zebra-type examples? Now go to the end of the paper. There he is trying to say why all this happens, and his explanation is in terms of "relevant alternatives". What does that amount to, really? Can you put it in your own terms?

Now you are ready to contribute to the forum. I'm going to ask some questions below. Write an answer to one of them. It doesn't have to be careful or one you are convinced of, just something to discuss. Or write a reaction to someone else's answer. Or continue a discussion started by other people's answers and reactions.
Questions:
- does the fact that you haven't excluded the possibility that the zebras are painted mules show that you don't know that they are zebras?
- why is it so shocking (many philosophers do find it shocking, still) to claim that you can know A, know that if A is true B has to be true, but not know B?
- suppose knowing something is excluding *relevant* alternatives to it. What could *relevant* mean?

AdamMorton20:12, 7 January 2012

In reference to question 2 ("why is it so shocking…"), I allude to Ludwig Wittgenstein, Philosophical Investigations, Third Edition, Blackwell Publishers, 2001, §111, page 41: "The problems arising through a misrepresentation of our forms of language have the character of depth. They are deep disquietudes; their roots are as deep in us as the forms of our language and their significance is as great as the importance of our language.------Let us ask ourselves: why do we feel a grammatical joke to be deep? (And that is what the depth of philosophy is.)" Jan Willem Wennekes, in his Master's thesis, titled WITTGENSTEINIAN ARGUMENTS AGAINST A CAUSAL THEORY OF REPRESENTATION, dated August 2006, UNIVERSITY OF GRONINGEN, FACULTY OF PHILOSOPHY, states in Chapter 4, A Critique of the Causal Theory of Representation, page 63:

"…Dennett and Dretske are convinced they have a causal, empirical problem at hand while Wittgenstein is convinced that the problem is conceptual: it is the result of misunderstanding the forms of our language."

I agree with Wennekes, and with Wittgenstein.

JamesMilligan05:49, 9 January 2012


If one can be reasonably confident that any statement of certain knowledge that one may, at present, claim to be true will be disproved and held to be false at some future time, given our track record so far, has one then admitted to an absolutely skeptical position? The question occurred to me when I read the reference to Wittgenstein, depth, and language ambiguities. I'm new to the language of philosophy, and to Wittgenstein for that matter, so I hope I will be forgiven if I illustrate my thoughts by way of a detour through literature, with a few theological organ notes thrown in.

James Joyce made a career out of playing with people's misunderstandings of language. He presents an image, through his writing, of The Fall (as in Original Sin) as being misunderstood, in that it is typically seen as an account of a one-time-only event which happened at the beginning of human experience and has resulted in our present 'fallen' state. According to Joyce, the Fall is better understood as an ongoing experience. (The concept of Original Sin, it will be remembered, is consequent on, and enjoined with, the quest for knowledge. Our Father, we are told, apparently had an issue with this.) That is to say, or so the story goes, we are in the midst of falling. In this allegory involving gravity, we know that we move, always, toward knowledge, and this movement toward it is perhaps the only certainty, outside of immediate sensation, that we can have. We don't know if it's a bottomless fall, but it's certainly been deep, and it has a direction, more or less certain, which is to say it continues on into depth.

When we invest in a belief, and consequently act upon it, we can reasonably expect from our past experience that there will be surprising side effects. These unforeseen developments are corrective and have the ultimate effect of changing, somewhat paradoxically, our initiating belief. Even so, the initiating belief still stands as the foundation for its replacement. The question I have is whether we actually achieve progress in this pursuit (e.g. Columbus pursued an ever-receding horizon in the expectation that he would find India and found America instead; he pursued an intended and expected goal and instead achieved an unintended one, which in a very real sense changed the meaning of the experiment), or whether the pursuit is circular, as Joyce seemed to believe, influenced as he was by the ricorso theory of Giambattista Vico, who saw history and the pursuit of knowledge as a recurring cycle with progressive stages within an evolving circular transit.

Robmacdee21:32, 23 January 2012
 
 

While thinking about the third question, I was reminded of another issue where the definition of relevance is crucial: the frame problem in AI, where one needs to be able to represent the logical effects of an action without representing a multitude of irrelevant information along the way. One of the suggested attempts at solving this problem was Jerry Fodor's (I believe) appeal to relevance of information, or the "context" in which the AI has to operate in any given situation. Following his logic, exclusion of relevant alternatives could then entail thinking of as many alternatives as are physically possible (or conceivable in a possible world, if we are to be Lewisian) in the given set of circumstances, and further, thinking of a reason why what we know is different from these alternatives and is therefore better suited to this context. However, this solution, both in the present situation and in the frame problem, carries a risk of an infinite regress of "relevance of relevant contexts": how exactly can we tell what the "given set of circumstances" actually consists of, and do we know enough about this context in order to make judgments on its relevance? I tend to agree that even the mention of relevance may be a slippery slope in epistemological questions, because the assumption that we can, in fact, make judgments on what's relevant could lead to bias.

Olsy06:46, 10 January 2012
 

I cannot help but feel that "relevant" is pre-defined by Dretske's style of writing. His tendency towards slightly absurd situations (clever lighting, costumed mules) set in commonplace examples (paint, zoo) draws an obvious contrast between the possible and the likely. It's the likely part that I think is "relevant". Because we can say "this is a zebra" while excluding the obvious "this is a rhino" without having to change any other explanation or retrofit the premisses inherent in our beliefs as to why it is a zebra, it is a "relevant alternative" for the explanation. In order to object to a green wall, one must first presuppose that there is a likelihood that the wall is cleverly lit... yet in terms of argument or further analysis, there is no -relevant- reason to presuppose this. While not as analytical a consideration as is given above in Olsy's thoughts, I feel that this use - that in order to converse or think about an issue one must only consider the likely, i.e. the relevant, possibilities - is implied by Dretske's common speech.

AngeGordon07:31, 10 January 2012
 

In reference to question 1: along the lines of my comment in class on Tuesday, when we make a claim we overlook and take for granted our background knowledge and beliefs. When a person claims to know that that is a zebra in the zoo, they overlook all of the beliefs they already have - perhaps they have good reason to trust zookeepers, maybe they read an article about this zebra a few months back, perhaps they also see themselves as good judges of zebras, etc. These subconscious conditions on the agent's justification can show how the agent comes to believe that they know without fully processing their claim. But is this rational? The agent has yet to thwart the skeptic's argument, but would the skeptic still find it necessary to resort to the "mischievous demon" argument if the agent did fully articulate his claim? I guess what I'm trying to say is: I understand how epistemic operators are semi-penetrating, but I put forward the question of whether a combination of operators can become fully penetrating.

Also, can "know" just be a high level of belief? Is to know to be, say, 99% certain?

WilliamMontgomery23:03, 11 January 2012
 

For question 1, the fact that one has not excluded the possibility that the zebras are painted mules shows that the claim "I know that these animals are zebras" is not carefully backed up. By carefully backed up, I mean there is no track record of examining the animals more closely, checking with the zoo officials, or some other kind of 'evidence' of performing tasks that would give more confidence in, and trust of, the reasons why the claim should be regarded as true. On the other hand, whether not excluding the possibility that the zebras are painted mules is equivalent to claiming "you don't know that they are zebras" depends on how one defines what it is to 'know' something. I do not take 'know'ing something to be binary (i.e., 'yes' or 'no') - I would attach a degree (between 0 and 1) to how confident that person is that these animals are zebras. Hence, I would answer yes to the first question as follows: the fact that you have not excluded the possibility that the zebras are painted mules shows (with degree p) that you don't know that they are zebras, where 0 ≤ p ≤ 1. The reason for including a degree of 'confidence' (or something along those lines) in 'know'ing a claim is my background in statistics and probability theory, as well as the existence of a recent epistemological movement towards Bayesian methods. However, it should be noted that this epistemological movement comes with numerous philosophical problems and is nowhere near consensus - the following link gives a small glimpse of the lack of consensus on using Bayesian methods (just in case anyone is interested): http://errorstatistics.blogspot.com/2011/12/jim-berger-on-jim-berger.html#disqus_thread

NicoleJinn23:33, 11 January 2012

If there are a number of propositions bearing on knowing that the animals are zebras, each known to a degree between 0 and 1 - such as the 36-proposition example Dr. Morton used in lecture - does the probability of the conclusion reflect the cumulative effect of moving from one proposition to the next, each with a probability between 0 and 1 and each assumed to be less than 1? In a list of 36 related propositions, for example, does one arrive at a residual quantified probability, again between 0 and 1, for the conclusion drawn from the 36 propositions?

JamesMilligan06:37, 12 January 2012

I am not sure if I understand the question you are asking (e.g., what you mean by a "residual quantified probability"). Nevertheless, here is my attempt at answering it: when the propositions are not known with certainty, the probability of obtaining the conclusion is not necessarily a linear combination of the probabilities of the propositions. In other words, the propositions are not necessarily related linearly (or in some 'straightforward' fashion, to use non-mathematical terms) to the conclusion when the probability of the propositions is less than 1 (i.e., when the propositions are NOT known with certainty). The reason there is no straightforward relation between the propositions and the conclusion in a probabilistic setting is that all kinds of alternative conclusions can come up, with varying probabilities attached to them, IF the propositions are not known with certainty. I hope this answered your question. If not, I will see if I can come up with an example to present tomorrow in front of the entire class, just in case anyone else has a similar question.

NicoleJinn07:09, 12 January 2012

The question relates to the overall impact of 36 propositions, each at less than 1 probability. The example is 36 propositions. If the propositions form a linear combination, and the second proposition is dependent on or influenced by the first proposition, does the probability of the first proposition, at less than 1, become the basis on which to apply the probability calculation for the second proposition? The residual probability would then be the probability calculated for the conclusion after 36 propositions, each successive calculation starting from the reduced probability of the preceding proposition. If there is no dependence or sequence in the 36 propositions, what methods may be used to select a probability from the 36 propositions in order to quantify the probability of the conclusion?

JamesMilligan23:03, 12 January 2012

In defining residual probability, you presuppose that each successive proposition has a probability less than its precursor. However, I don't think this presupposition must hold - each proposition has a probability that is not necessarily related to that of the proposition before it, even if the 36 propositions are in a sequence. In reference to your first question, I will not be able to explain what a linear combination is to the layperson - knowledge of mathematics, particularly linear algebra, is needed to understand this. Despite my background in probability theory, I am unable to answer your last question on which methods to use in selecting a probability for the conclusion. To point you in the right direction: the notion of statistical dependence or independence between the propositions themselves, or between any one of the propositions and the conclusion, pretty much governs which methods to use. Lastly, I will have to end this 'conversation' because 1) the content is not interesting enough to everyone else in the course (PHIL 440), and 2) this topic of the probability of the conclusion being related to the probability of the propositions is not something that this course will cover, beyond what has been written here. Hence, this 'conversation' is now closed.

NicoleJinn02:25, 13 January 2012
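A purely illustrative sketch of the arithmetic at issue in the exchange above, under simplifying assumptions of my own (not anything claimed in the thread): that the 36 propositions are independent, each held to the same degree, and simply conjoined.

# If each of 36 independent propositions is held to degree 0.99, the degree
# warranted for their conjunction is the product of the individual degrees.
p_each = 0.99   # degree of confidence in each individual proposition
n = 36          # the 36 propositions from the lecture example
print(p_each ** n)   # about 0.70: high confidence in each, noticeably less in all of them together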
 
 
 
 

"suppose knowing something is excluding *relevant* alternatives to it. What could *relevant* mean?" Dretske would probably say that a relevant alternative to something you know would be information that would prove your knowledge false if it was true. For example: A relevant alternative that you are excluding in order to know that the zebras in the zoo are in fact zebras is that they are painted mules. I think that a relevant alternative to a situation should have some evidence for its possibility for it to be considered a real alternative. By this I mean one does not normally actively exclude the fact that what one is looking at is just an imitation of what it looks like. Why shouldn't the zebras be zebras? This thought normally does not even cross our minds unless we have been reading too much philosophy. For Dretske's example to be a relevant alternative I would have to see something like black and white house paint in the zebra pen. That would at least give me a reason for considering the possibility that things might not be as they appear. There are usually relevant alternatives to many things we think we know. Evidence that shows us we may be wrong but that we decide to exclude because it is not strong enough to change our minds. In short, "relevant" just means "less-likely". There is a reason to believe in this alternative, the fact that something is "possible" should not make it a faire contender for an alternative to our knowledge.

ThomasMasin13:49, 12 January 2012

I agree with this idea that relevant alternatives should be supported by some kind of empirical evidence. If one were to take seriously any possible alternative then there would be very little one could claim to actually know. Even a priori knowledge could be questioned if one were to believe that there was some kind of evil demon operating solely to trick them. If empirical evidence were not enough to give us true knowledge, then one would be forced to say that 'I think that' or 'It is likely that' the zebras are not painted mules. I do not think that this particular example can be dealt with by saying it is a problem of semantics. I think one can say that they literally know that the zebras are zebras by applying these standards of empirical evidence to rule out alternatives, such as painted mules.

Andreaobrien00:58, 14 January 2012

I agree that there would be very little one could claim to actually know, IF we have to consider all possible alternatives. However, one fundamental problem is that empirical evidence may or may not be enough to give us true knowledge, depending on the notion of evidence used. Yes, there is no consensus yet on what the heavily-used concept of evidence is! Besides, what are these standards of empirical evidence that you have in mind? Are they related at all to the definition of "evidence" given by Richard Royall and Steven Goodman? (e.g., see http://www.ncbi.nlm.nih.gov/pubmed/3189634 or http://www.botany.wisc.edu/courses/botany_940/06EvidEvol/powerpoints/Evidence.pdf)

NicoleJinn02:02, 14 January 2012
 

I agree. Dretske's examples of relevance seem self-defeating at times. The Zebra case is especially troubling because it involves intentional deception in a setting where there is usually no intent to deceive. The zebra-painter is almost as evil as Descartes' demons. The reliance on this sort of example is emblematic of a deeper problem in his theory, in that it is too exclusive and unnecessarily limits knowledge to a subset of what we usually take knowledge to include. Too much error-avoidance leads to unnecessary ignorance.

Edward07:03, 31 January 2012
 

Question 1: The claim is that epistemic operators are not fully penetrating to all consequences of them being zebras. If we know that they are zebras we do not necessarily know that they are not painted mules. And yet to 'know' something may be said to be a very strong claim according to this picture. If I had not considered that they were painted mules, then as far as my epistemic state is concerned, they could be painted mules. And if it is possible in this epistemic sense that they are painted mules, then I can't claim to know, in that same sense, that they aren't, and therefore I can't claim to know that they are actually zebras.

Dretske views this to be a mistake. It does not follow from the fact that we had not considered them to be painted mules that we don't know that they are zebras. We can know that they are zebras and not know that they are not painted mules.

If it were the case that we were being duped, then we wouldn't have known that they were zebras, because the claim that they are zebras would be false. It is surely possible that they are animatronic displays, that they are painted mules, that I am having a dream, that a higher being is deceiving me, etcetera. But is the point of claiming that we know something to show that nothing of the sort is possible? Some alternatives can't even be assigned a probability, as in the case of there being a deceiver, and yet the answer to whether or not there is a deceiver is entirely relevant to the question of whether we know there are zebras (or anything at all, for that matter), and there is no way to exclude it. Would our not having considered there being a deceiver imply that we didn't know that there were zebras rather than painted mules? I don't think so. But it's a very difficult question, and I don't know how to answer it without getting into the nitty-gritty details of what counts as relevant information.

I do think that knowledge has weaker conditions than we like to let on. When we claim to know something, we tend to have a set of positive claims to back it up, even though some not-claims that are not even thought about are entirely relevant and, strictly speaking, don't allow us to know anything at all until they are known. Hope that makes sense.

MclarenThomas18:33, 12 January 2012
 

In response to question 3, 'suppose knowing something is excluding *relevant* alternatives to it. What could *relevant* mean?'

I take 'relevant', based on Dretske's arguments, to mean in this case something fairly intuitive in terms of everyday language. In other words, a relevant alternative would be a possibility that, if brought up in conversation, would not elicit some degree of surprise or confusion in whoever is being spoken to.

To clarify this I will offer an example similar to Dretske's example of Brenda ordering cake. Say we know that Joe purchased ice cream from the ice cream truck since we can exclude relevant alternatives. An example of a relevant alternative would be that 'Joe purchased a popsicle from the ice cream truck' since it is a plausible circumstance. It needs to be eliminated before we can know that he purchased ice cream. An example of an irrelevant alternative circumstance would be the possibility that 'Joe walked up to the ice cream truck and requested a haircut'. This possibility would certainly not be thought of as a common request for an ice cream salesman and if I told someone it was the case they would likely be moderately surprised or confused. Thus it would not fit into the category of relevant alternatives.

This example doesn't clarify the exact definition of 'relevant' with respect to Dretske's argument but it is an illustration of what I take him to mean.

AlexanderBres22:51, 17 January 2012

For question 2: the fact that it could be so shocking suggests that the epistemic operators are not part of the presupposition. Since they are tied within the statement "the roses are wilted", through Dretske's use of contrastive consequences, the statement appears to articulate the qualitative predicates "roses" and "shrubs" as instances of the broader concept of plants. It seems as if, yes, there is a small degree of knowledge, acting as what seems to be an anchor. However, I lean towards James's argument that language itself determines how we form these ascriptions of knowledge.

DorothyNeufeld04:28, 18 January 2012
 

I said this in class. The formula a, if a then b, seems to be something Dretske can't manipulate if he considers them not knowledge but truths, whether anyone has ever conceived of them or not. If there is a, there is b. But to speak about the formula like that is to take an impossible perspective, because gaining the perspective dissolves it, similar to the elusiveness of knowledge that Lewis talks about. Once you say that someone knows a, and that person also knows that if a then b, the formula becomes subjective, and like all things subjective, like all things ever said, it can be wrong. So sure, you can know a and know if a then b and not know b, but you would either end up with a 'false' (the word loses a lot of meaning here) belief that you know b, or the reasoning that got you there would have fallen prey to the same problem.

KevinByrne02:10, 20 January 2012
 

Q: does the fact that you haven't excluded the possibility that the zebras are painted mules show that you don't know that they are zebras? A: I agree with NicoleJinn when she says that there may be degrees of knowledge. For example, to say you know something in ordinary conversation may be different from saying you know something in an academic scientific journal. Why? I think it can be understood through the value of the stakes. For example, saying you know those are zebras to your 3-year-old child may not have that detrimental an effect if in fact they were just imposter zebras after all. But to say this in an academic journal, which people may read and formulate their own knowledge from, could perhaps have a bigger impact. Also, your reputation is at stake, so you want to double-check that what you are saying is in fact correct. But the question is whether or not you can ever be 100 percent certain, and I don't think that you can. All that you can do is whatever you can possibly think of in order to make sure your assertion is true, given the evidence you can collect at the time. So knowledge, I think, is to know something with all the evidence given at a certain time. You can know something but later realize that you are wrong. The question is: did you ever know it at all - were you always wrong? Or maybe you knew it and now you don't? This is a difficult question which I am still puzzled by....?

ShivaAbhari17:01, 2 March 2012
 

forum 4: week of 30 Jan: DeRose on skepticism

Remember that we are only meeting on Thursday, so we have to cover DeRose in one class. We will focus on how his contextualism is different from Lewis's, and whether it has advantages. Note that like Lewis, and unlike Dretske, DeRose resists the idea that you can fail to know consequences that you see following from things you know. In fact he hates this idea. (I don't.)
A basic question: how is his diagnosis of the appeal of skepticism different from Lewis's?
A subtler one: how can "she knows it" vary from one conversation to another on DeR's approach and is it different from the variations L allows? (I don't know the answer to this one. But it's an important question if we want to understand what useful work "knows" does, besides giving employment to philosophers.)
Are there useful comparisons with other context-dependent words? Flat, big, here, left, starboard, bank: which is "know" most similar to?

AdamMorton01:32, 28 January 2012

move-it-to-the-top reply, as before.

AdamMorton00:04, 2 February 2012
 

In the paper titled Solving the Skeptical Problem, Dr. Keith DeRose begins with a skeptical hypothesis: "I am a bodiless brain in a vat who has been electrochemically stimulated to have precisely those sensory experiences I've had, henceforth a 'BIV'". The concept of a brain in a vat as an example for philosophical discussion is one that I have difficulty relating to. If the subject could be someone such as the living Dr. Stephen Hawking, I think I could relate better to Dr. DeRose's philosophical discussion of skepticism. I admire the quality of philosophical discussion Dr. Ludwig Wittgenstein achieves in section 243 of the Philosophical Investigations: "A human being can encourage himself, give himself orders, obey, blame and punish himself; he can ask himself a question and answer it. We could even imagine human beings who spoke only in monologue; who accompanied their activities by talking to themselves.—An explorer who watched them and listened to their talk might succeed in translating their language into ours. (This would enable him to predict these people's actions correctly, for he also hears them making resolutions and decisions.) But could we also imagine a language in which a person could write down or give vocal expression to his inner experiences—his feelings, moods, and the rest—for his private use?——Well, can't we do so in our ordinary language?—But that is not what I mean. The individual words of this language are to refer to what can only be known to the person speaking; to his immediate private sensations. So another person cannot understand the language." (PI §243)

JamesMilligan07:16, 2 February 2012
 

As it is 1 AM, I won't be too ambitious and will just take on the basic question. As I've understood it, Lewis' opinion on skepticism was that such a view would often leave the ascribers and the subject in question with either infallible knowledge or none at all. This is his segue into alternatives to skepticism: fallibilism and contextualism. DeRose doesn't seem to think that skeptics demand infallible knowledge. Rather, he thinks that by means of AI, a skeptic raises the conversational standards by which the proposition is judged as knowledge. DeRose then specifies the level to which these standards need to rise by introducing the Rule of Sensitivity, and develops the rest of his argument from there. I think the most interesting point of this paper is the fact that DeRose bases his contextualist argument only on the 2nd premise and the (not-) conclusion; he says that "(2) is true regardless of its epistemic standard". (2) states, "If I don't know that not-H, then I don't know O." He does specify that, in his view, whatever warranted assertability is ascribed to not-H is also ascribed to O. Even if it is explicitly said that warranted assertability cannot be mistaken for knowledge, wouldn't that be an example of how you can fail to know consequences that you see following from things you know, if not-H is somehow chosen to be a consequence of O?

Olsy09:19, 2 February 2012
 

I appreciate DeRose's technique insofar as he applies only the two rules/actions to reach his explanation, as compared to Lewis's list of rules to determine relevancy, but I wish he'd written -out- his thoughts instead of using shorthand references...I'm still not sure I've got the argument straight, since I had to go back to check which hypothesis he was continually referring to!

AngeGordon17:30, 2 February 2012
 

I don't really see any differences between DeRose and Lewis. They still have the same problems, in that they seem to be explaining HOW we use the term "knowledge" in day-to-day life versus how we use it in philosophy. What I would be interested in knowing is at what point one draws the line between a possibility plausible enough to be considered and one so implausible that it is ridiculous and should not even be taken into account when we are asking whether or not we know something.

ThomasMasin23:19, 4 February 2012
 

I think that the difference between DeRose and Lewis is almost trivial. It is true that they started out with different premises and worked onwards from there. But since their overarching theory is so similar (dare I say identical?), it's natural that they end their epistemological investigations in largely the same spot. DeRose is more acutely aware of the problem posed by skepticism (or at least he appears to be), but he still doesn't offer much more beyond what Lewis said. There seems to be no good and non-circular way of discerning relevant arguments and objections from the silly and facetious. If they can't legitimately exclude these examples, they certainly can't prevail over skepticism. The result would be a systemic collapse in their theories (if such problems go unaddressed).

Edward03:44, 5 February 2012

"The AI skeptic's mentioning of the BIV hypothesis in presenting the first premise of AI makes that hypothesis relevant."Isn't that a little too mind-over-matter? If it is pointing to something which is arguably actually, materially true then surely it would be true whether it was mentioned ( Rule of Relevance opposed to Rule of Accommodation ) or not mentioned.( i.e. would be relevant in either case, we just didn't realize it until it was brought up ) In other words, isn't all this only relevant in cases which can't be proven either way, which is to say, imponderables, cases where we can't advance our understanding without some unforeseen relevant disclosure. If something is, indeed, decided to be unmentionable ( the emperor's new clothes ) or "far-fetched" as the author would have it, but subsequently is nevertheless found to be true, is it still, or was it ever, really correct to call it far-fetched? What if the example is not so much far-fetched as it is inconvenient, which is to say, beyond discourse comfort levels? Is there a philosophy of manners?

Robmacdee00:22, 6 February 2012

That's a good point. I can't help thinking that both DeRose and Lewis are resigned to allow that while highly skeptical ascribers of knowledge raise the bar for the subject significantly, if the ascribers were a pair of idiots, to put it bluntly, a lot of things could be counted as knowledge that DeRose and Lewis would be unhappy with. Consider someone who thinks cellphones cause cancer because of some alien chip implanted within (the government is, of course, also culpable). Two ascribers with similarly outlandish beliefs about alien activity might hold that that individual's belief does in fact count as knowledge, whereas the average person, while perhaps agreeing that evidence does suggest a link between cellphones and cancer, would think that the alien-believer's justification falls well short of the mark.

Also, I'm kind of concerned about the consequences for the concept of truth inherent in a contextual approach to knowledge ascription, but I might just save that for my paper.

ZacharyZdenek06:13, 6 February 2012
 

I'm inclined to agree that DeRose and Lewis seem to present a very similar version of contextualism. The focus of their arguments is the main difference. While Lewis is primarily concerned with explaining his rule-set regarding the criteria for relevant alternatives and proper knowledge ascription, DeRose is more concerned with his contextualism as a direct reply to sceptical arguments (he continually explains and refers to the appeal of AI). The main difference between the two is that Lewis uses rules to classify appropriate relevant alternatives in a conversational context while DeRose adds to his solution the idea of sensitivity.

AlexanderBres01:52, 7 February 2012
 

The most interesting thing for me about DeRose is that if I were a BIV and thought I had hands, then in the right context I could still know that I have two hands. To me this gives weight to empirical evidence, in the sense that what I experience is so rich with sensation that I can't argue against it. This makes me think of what Robert was saying about true replicas: there is this separation between my sensory experiences and actuality, but to experience them feels so real.

WilliamMontgomery08:08, 6 February 2012
 

I find DeRose's approach quite different from Lewis's; Lewis presents all these rules, whereas DeRose focuses on the idea of sensitivity. DeRose also points out the need to identify the mechanism by which the skeptic at least threatens to raise the standards for knowledge, which is something I did not see in Lewis's approach to contextualism. In spite of these differences, what both approaches have in common is that their use of "knowledge" is ambiguous or arbitrary (to me).

NicoleJinn09:22, 7 February 2012
 

His use of the BIV illustrates the source of his contextualism, which is also pretty much the source of everything. We are disconnected from reality in one way or another, so that is the context we think from. Our thoughts are relational, and become more relational the farther out you follow them. No one has all-encompassing knowledge, so there is always an additional context for a thought to relate to. But if we never held something as a piece of knowledge, because of these attributes that arise from the mind-reality disconnect, we would never get anywhere. We make a working (and totally revisable) figure of information with which we proceed into the vastness of context.

KevinByrne04:40, 8 February 2012

I disagree that BIV is pretty much the source of everything! Also, I think the mind-reality disconnect is important to consider. What justification do you have for the idea that "no one has all encompassing knowledge, so there is always an additional context for a thought to relate to"?

NicoleJinn07:47, 8 February 2012

Your first line seems to answer itself, perhaps unknowingly, because by "everything" I meant discourse. I didn't mean that BIVs make reality; I meant that because there is a disconnect from reality, illustrated by the BIV example, there are grounds for disagreement about what the qualitative properties of reality are. I assume there is something out there, but the indirectness of that assumption sets the stage for discourse. By "everything", I meant the history of distinction. My justification for what you quoted would be this test: ask the all-purpose question "do you know everything?" The only answer that I think anyone should give is no, which means there is further unexplored context to which the knowledge they currently hold can be related.

KevinByrne05:10, 9 February 2012

"She knows it" can vary with DeRose, in my understanding that the the standards must remain low, in ordinary conversational contexts. DeRose appears to base his argument purely in mirroring the skeptic strategy, which does make a satisfying point, however seems to be constrained based on how it is tailor-made for the skeptical argument. It is constrained in the sense that higher positive epistemic standard are not addressed, or fully elaborated, as he emphasizes low standards in reaction to the skeptical arguments. A person in a low epistemic condition would be accepted in saying he knows B, whereas the same person in a high epistemic standard position would be incorrect in saying he knows B.

To DeRose, it is a matter of the sensitivity of the argument, which still appears ambiguous (although it seems to rely heavily on spatio-temporal contexts; beyond this it still seems elusive). His truth-conditions for knowledge ascriptions appeal to the implicit sensitivity to the context. Lewis, like DeRose, emphasizes language; however, Lewis' Rule of Accommodation cannot explain a rise in epistemic standards when an expert claims to know, or distinguish the AI skeptic from the simple skeptic. As this rule operates on suggestion, does the suggestion itself, in raising epistemic standards, defeat Lewis's argument against the skeptics? (I am trying to read into what DeRose was saying about Lewis.)

DorothyNeufeld03:23, 28 February 2012
 
 
 

Q: How can "she knows it" vary from one conversation to another on DeR's approach Are there useful comparisons with other context-dependent words? Flat, big, here, left, starboard, bank: which is know most similar to? According to DeR to know something depends completely on the context within which you say know. What sort of conversation you are having perhaps. If it is a casual one or a debate with an important figure. All this can effect what you think you know and what you are justified in thinking you know something. For example he talks about the zebra and mule example: He says an ordinary person might not be able to say that they KNOW that it is in fact a zebra and not a painted mule but a zoologist who knows these animals and their attributes very well can perhaps answer the question: is this a zebra? and be justified in saying that they do KNOW this. So the person is also part of the context differing people with differing knowledge of the topic can either have the justification to say they know or don't know something. He talks about the word flat on page 8 and says that although the desktop may not be perfectly flat in some circumstances depending on the definition of flat we are giving in the conversation we may or may not call the desktop flat. All these words including knowledge are context dependent and i think this was a good exaple to use to explain how knowledge can vary within contexts.

ShivaAbhari18:11, 1 March 2012
 

forum 2: week of 16 Jan - Lewis

Lewis is trying to reconcile skepticism and fallibilism. The result is contextualism: what someone can be truly said to know in one conversational context is different from what they can be truly said to know in a different context. Two things are worth noting:
- this is not just that standards of evidence or confidence are higher in some contexts: it goes deeper (read carefully to see how)
- the way a person's evidence can rule out a possibility is rather unusual. Try to get a grasp of what this amounts to.
Now some questions for you to react to.
~ Lewis gives a number of "rules" for what possibilities can be ignored (so they don't have to be eliminated). Is this just an arbitrary list, a grab-bag, or is there some system to it? Is he just trying to come up with some rules that give the answer he wants?
~ What do you make of the rule of attention? That's what really gives his contextualism. Is it well motivated, or just pulled out of a hat?

AdamMorton05:59, 14 January 2012

In response to question 1, I feel Lewis' set of rules is not arbitrary. I feel as if he is explaining in detail how people think, not necessarily how we ought to, even though he does produce a theory of how we ought to. I feel his comment on how epistemology and skepticism are too strict for any agent to fully obtain knowledge leaves agents open to fallacious reasoning. Beyond that, his suggestions of ignoring and contextualizing your perceptions also leave agents open to fallacious reasoning. So for me he removes the hard rock of fallibilism and replaces it with an abyss, pitting the agent between too-strict skepticism and too-weak contextualism.

Thoughts?

WilliamMontgomery01:34, 17 January 2012

I agree with this. Although Lewis' rules are an interesting thing to think about, they do not accomplish Lewis' goal of "just barely" dodging both skepticism and fallibilism. As mentioned above they are an insight into how we manage to claim we have knowledge with so many strange possibilities that we are wrong (like the Gettier cases or the evil demon). However, we are still wrong about what we claim to be knowledge all the time. If my professor tells me that he drives the red car in the parking lot I would immediately run home and tell all my friends that I finally found out which car was his. Later I would learn that my professor lied because he was embarrassed of his gross, old brown car and wanted to impress me. We already know that we are right about our knowledge most of the time, the problem is we still don't really know at what point can we claim we have knowledge. Lewis' rules still contain the possibility that we will ignore something important accidentally and thus make our "knowledge" false.

ThomasMasin18:58, 18 January 2012
 

In relation to the second question, I have tried finding Lewis's solution to actual possibilities (that is, possibilities that actually obtain) that the subject just happens not to know about, and therefore cannot turn his or her attention to. Could one still properly ignore them? That is, the subject may be informed about all other relevant alternatives except for this one actual possibility; but the subject never had a chance to even conceive of this possibility, and therefore it has never been brought to his or her attention. Would the two rules then be in conflict? It seems possible (pun not intended) for this situation to occur without breaking any of the other rules that Lewis proposes. Or am I missing something here?

Olsy07:32, 17 January 2012

As a parallel in epistemological thinking between Ludwig Wittgenstein and David Lewis, I would like to refer to the paper presented on April 16, 2010 by postdoctoral researcher Giacomo Sillari, of the University of Pennsylvania. The event was the Synthese Conference, at Columbia University. The title of the conference was Epistemology and Economics.

The title of Dr. Sillari's paper is: Rule-following as coordination: A game-theoretic approach. A few excerpts of Dr. Sillari's paper are as follows:

Make the following experiment: say "It's cold here" and mean "It's warm here". Can you do it? Ludwig Wittgenstein, Philosophical Investigations, §510. I can't say "it's cold here" and mean "it's warm here"—at least, not without a little help from my friends. David Lewis, Convention.

"In fact, a different way to state the claim of this article is to say that Wittgensteinian rule-following deals with situations identifiable insofar as there is a custom. Thus, while not all rules are interpretable as Lewis-conventions, all rules pertinent to Wittgensteinian rule-following involve a conventional element and hence can be analyzed as pertaining to situations in which individual preferences regarding their actions are conditional. Such situations are consistent with Lewis's analysis of convention in terms of coordination and in fact, as the rest of the article will show, are best understood as recurrent coordination problems.

"Game theory sheds new light on the notoriously obscure pages of the Investigations dealing with rule-following. Taking at face value Wittgenstein's indication that following a rule requires that a convention be in place, I have used David Lewis's game-theoretic account of convention to clarify how rule-following presupposes agreement and coordination in a community. In so doing, the role played by the community is made more perspicuous, and in particular we have seen that the strategic component is crucial to a full understanding of rule-following. Game theory and the Lewisian analysis of social conventions shed light also on two notions related to rule-following. The notion of Lebensform is illuminated if looked at next to the technical notion of common knowledge, and the notion of blind action is clarified in the evolutionary approach. As I have already stated above, I am not claiming that game theory can cover all subtle nuances in Wittgenstein's notion of language-game, and neither do I claim that hard interpretative issues (for instance that of the solipsistic vs. communitarian reading of rule-following) can be settled by game theory once and for all. However, I do believe that I have singled out a group of notions in the Investigations which find precise counterparts in game-theoretic ones. Finally, if my analysis does not of course purport to be historical in character, still it highlights that the later Wittgenstein already contains seeds of a philosophy of social sciences that has found voice first in David Lewis's seminal study and that, today, continues to grow at the intersection of philosophy and game theory."

I like Dr. Sillari’s claim that David Lewis’s philosophy is an extension of Ludwig Wittgenstein’s philosophy.

JamesMilligan08:57, 17 January 2012
 

The Rule of Belief captures propositions that the speaker never thought of and yet are possibilities (which include actualities, according to Lewis). It can be argued that evidence and arguments would back up any possibility P of that nature, whether or not speaker S believes it, but only if we can provide evidence and arguments for every fact of the world, i.e. some principle of sufficient reason. So any P which can be supported by evidence and arguments may not be properly ignored. But this would be of help only if not believing P includes ignoring P altogether.

I think the Rule of Attention might be interpreted to be claiming more than it actually is. Lewis is not claiming that if a possibility is not a feature of the conversation context, then it is properly ignored. He is saying only that a possibility may not be properly ignored if it is a feature of the conversational context. And so a possibility P, if supported by arguments and evidence, may not be properly ignored, even if no attention is paid to it.

But you say there may be a fact that, in principle, S could not draw his attention to, meaning it cannot be the case that S simply had not drawn his attention to P (else that would be circular). This would happen when certain evidence and arguments are unavailable to S. Maybe it's true in some sense that certain things are beyond S's understanding, so he can't grasp them and they would not count as proper justification. I don't buy it, though.

MclarenThomas15:02, 19 January 2012

I am troubled by Lewis' web of rules as well. I think the trouble comes from his method in trying to achieve his goal of reconciliation. His methods in many ways want to mirror the empirical ways people come to acquire knowledge. He largely shuns armchair philosophizing in favor of armchair psychology. This is troubling in two ways. First, he is assuming that people fundamentally have a good way of acquiring beliefs and knowledge, and that he is simply trying to come up with a coherent way of explaining an existing phenomenon, in much the same way science explains nature. So he is not proposing some grand theory which, if adopted by everyone, would magically improve the quality of knowledge in human society. Second, he is trying to cover too many moving and contradictory parts at the same time, and these forces are pulling his argument apart. This makes his argument arbitrary. He has all these rules which, while superficially rendering these disparate views compatible, do not do much to explain their overall coherence. The lack of an overarching thematic narrative makes the whole project lack rhyme and reason. The faux-empiricism and arbitrariness make Lewis's paper a confounded enigma.

Wittyretort23:56, 20 January 2012
 
 

This is not really in response to either of your questions, but I am uncomfortable with the rule of conservatism (and it has nothing to do with my political views). Lewis suggests that the rule of conservatism can be derived from the rule of reliability and vice versa; I am not sure I agree. I agree that the rule of reliability could be derived from the rule of conservatism, but I am not sure it works the other way around. This is likely because of my discomfort with the conservatism rule. I am willing to ignore many possibilities, in certain contexts, because doing so allows communication to occur, as in the examples regarding the use of words like "all", "never", "every" and "none". However, depending on the context, one could be required to consider truly absurd possibilities, or to ignore reality; the latter is dealt with by the rule of actuality. The former, though, that one could be required to consider the ridiculous, bothers me, and it is also a problem with the rule of attention. As a result of either rule, if I were to attend a Star Wars or Bigfoot convention I would be forced to consider the possibilities that Bigfoot exists and Star Wars may have actually happened a long time ago and in a galaxy far far away. And while I think Lewis may be able to do without the rule of conservatism, he seems committed to the rule of attention; for that reason contextualism is unappealing to my sceptic's heart.

RobGrenier20:13, 18 January 2012

I also feel unsure about Lewis' rule of conservatism. From what I understood, it sounded like this rule allows one to properly ignore certain possibilities if others commonly ignore them as well. It might be a misunderstanding of this rule on my part, but this seems to weaken, if not undermine, some of Lewis' other rules. If one allows common knowledge to guide their reasoning then it seems that it would just leave one with assumptions, rather than any true knowledge. Lewis mentioned how the rule of conservatism can be derived from other rules and vice versa, but I was unclear on whether this rule could stand on its own when properly ignoring a possibility.

Andreaobrien23:37, 19 January 2012
 

In response to question 2, "What do you make of the rule of attention? That's what really gives his contextualism. Is it well motivated, or just pulled out of a hat?"

Lewis' Rule of Attention maintains that any possibility that is ignored is properly ignored. He then says that a possibility not ignored (or relevant alternative) is not properly ignored. The problem I find with this is that any possibility can potentially be brought to attention in some conversational context. So it seems to me that any possibility that Lewis would call 'rightfully ignored' is really just a possibility that hasn't yet been brought to attention. But given enough time all possibilities could eventually be brought to attention and would therefore be considered relevant alternatives according to Lewis' own definition. Since any possibility brought to attention becomes a relevant alternative what could be left as properly ignored or an 'irrelevant alternative'?

The way he resolves this is by saying that if an unwanted possibility comes into conversation "we might quickly strike a tacit agreement to speak just as if we were ignoring it and after just a little of this, doubtless it would really be ignored." Maybe Lewis and I have different understandings of the word 'ignore', but if one is 'actively ignoring' a possibility, surely he has paid attention to it at one time, instantly transforming it into a relevant alternative, a process that I doubt can be undone based on Lewis' prior definition of a possibility 'not properly ignored' as a possibility 'not ignored'.

AlexanderBres04:00, 19 January 2012
 

I'm fine with the world being one big guess, a guess contextualized alongside the guesses of yourself and others, with, I guess, there also being a wonderfully complex, organically fallible system that those guesses are bound to. That does not mean that all claims are equal, it just means that none are foundational. Our best guesses are the ones we simply can't ignore because their 'truth' permeates our other guesses so thoroughly; they support them and may be the reason we conclude we 'know' other, more peripheral guesses to be true. I'd say that there are three things that are definitely true, but they unfortunately get you nowhere on their own, so though solid, they are no foundation. They are (1) there is existence, (2) there is thought and (3) there is perception; two and three might be the same thing, and if you can build anything as definite as those three out of those three, please do. Otherwise, may the best guess win.

KevinByrne02:25, 20 January 2012
 

He (Lewis) definitely is just trying to come up with some rules that give the answer he wants. Those rules, because they are incredibly difficult for me to make sense of, are arbitrary.

NicoleJinn00:13, 26 January 2012
 
  • What do you make of the rule of attention? That's what really gives his contextualism. Is it well motivated, or just pulled out of a hat?

I think what Lewis says about the rule of attention is well motivated and not just pulled out of a hat. He says at the end of p. 559, "To investigate ignoring of [possibilities is in fact] not to ignore them...Knowledge is elusive. Examine it, and straight away it vanishes". This is very well said, and we see this whenever we try to define what knowledge truly is. Is it justified true belief? Well, no; that was clearly proven insufficient by the Gettier problems. Is there one word or a sentence that can describe what knowledge is? Is there a definition? It seems practically impossible to define knowledge, and the closer we look at it the more certain we become that we have no knowledge in the first place. In the beginning he started to talk about all the things that we do know, and it seemed as though humans in general have a vast array of knowledge of all kinds of things, but as we tried to examine these things more closely we became uncertain of them all, and it seems that all we can truly know, as Descartes says, is that we ourselves exist. Nothing else is certain and can be proven without a doubt. So in order to surpass this problem of skepticism, of not knowing anything, we can apply Lewis' rule of attention and choose to ignore possibilities that are most likely not true (the evil demons deceiving us). And by doing so we can at least come up with some sort of explanation of what knowledge is.

ShivaAbhari22:05, 1 February 2012

Lewis's rules do not seem to capture a very solid reasoning behind contextualism. Yes, his modality and Rule of Actuality seem plausible, with uneliminated possibilities highlighting the importance of the spatial and temporal features of experience. But the Rule of Belief neglects to state how high a degree of belief a possibility must reach before it may not be properly ignored, beyond actuality. The Rule of Attention I find difficult to understand, attending as it does to the context-dependence of making an additional comment on one previously said. This seems to break any sense of closure, based on its context in the conversation. Where is the line drawn between what are purely thoughts and what is said in the Rule of Attention? At this point, is it properly ignoring a possibility if it is not said? Bound by these conversational suggestions, it appears not to account for mistakes, impulsivity or useless statements, in that they are now of equal relevance to the conversation. (But, perhaps, this is exactly what Lewis was trying to argue with his nearest possible worlds, in that they are in fact a possibility.)

DorothyNeufeld05:38, 28 February 2012
 
 

forum 3: week of 23 Jan - Lewis II

Look at what Lewis says about how experience rules out possibilities on p 553. This looks at first like good old empiricism. But it isn't. How is it different? He suggests advantages of his approach, but there are also disadvantages. State them?
We will look at his list of rules. What are they rules of? They are not rules of good method - how to think in order to get knowledge. So what are they?
Why are the rules of actuality and belief there? (They can't say that if you believe it you know it, or that everything true is known. So what do they say?)
The rule of resemblance is supposed to take care of Gettier cases. I'm rather suspicious of it.
The rule of attention, on page 559 (read 560 with it in mind) is crucial to the aim of the paper. Do you buy it? (If you're inclined to agree, think of some problems with it. If you're inclined to disagree, think of some cases where it seems plausible.)

AdamMorton17:59, 20 January 2012

I'm just putting in a fake reply to bring this thread to the top of the list. It's confusing if it isn't there, even though I've numbered them. AM

AdamMorton22:56, 21 January 2012

I'm actually unclear on Lewis's method of ruling out possibilities with experience; is he saying that even false experiences can rule out possibilities, so long as those possibilities conflict with the content of that experience? For example, say John bought a lottery ticket and watched the program, where all the numbers matched the ones on his ticket. So John goes in to cash in his ticket, and it turns out that he watched a rerun of the show from last week, so his ticket is not the winning one after all. The subject's experience at the moment of watching the rerun is that of winning the lottery; however, the possibility of his accidentally watching a rerun as opposed to the current show eliminates the fact of his victory. But the subject's experience DOES eliminate the possibility of not winning the lottery before he realizes that the show was a rerun. Can it be said then that John won the lottery, because his experience at the time eliminates the possibility that the show he's watching is an old one?

Olsy08:50, 23 January 2012
 

I have a problem with the Rule of Actuality as it applies to Lewis' attempt to resolve the issue of relevant alternatives. Lewis clearly states that "actuality is always a relevant alternative", and in class we discussed the pseudo-history of a flat Earth. So, if at the time no one brought up the possibility of an oblate-spheroid Earth, but that is in fact the actuality, no one could know the Earth was flat. This is fine, and for me perfectly clear when looking back. But doesn't that mean, then, that when deciding if a possibility is relevant, we must consider everything, because the actuality has yet to be determined? And to bend the rules of conversation to ignore some alternative that everyone feels is irrelevant is bad argumentation, because they cannot all be certain that it is not, in fact, the actuality.

I may not be making myself clear but I feel that there is a circularity or a regression (and I haven't quite worked out which, either).

AngeGordon23:20, 23 January 2012

I second that problem. As finite beings we can never truly "know" whether a belief we hold is identical to the actual state of things, and similarly we can never "know", definitively speaking, whether relevant alternatives we disregard for rational reasons may actually obtain. It's easy to say that we can't ignore the actual state of things (and it also makes sense to say so), but at the same time, unless we are allowed to make large leaps in judgement regarding actuality, it's difficult to see how practical the Rule of Actuality is.

In a certain capacity, I would argue that his list is somewhat concerned with good method. While Lewis doesn't put forward strict criteria for knowledge, he still goes to great lengths to separate relevant from non-relevant alternatives, in the form of general rules. This seems to me to be an attempt to provide guidelines for a first step of knowledge acquisition: what to, and what not to, consider.

ZacharyZdenek06:24, 24 January 2012

I agree with you that the rules put forth by Lewis are self-referential and do not provide much substantive support for his theories. In the limited context of his contextualism, the rules do seem to work and provide some guidance on the proper methods of acquiring knowledge. However, if one steps back from it all and examines the rules together as though one had never heard of Lewis or contextualism in the first place, the rules do not make sense no matter how hard or long one looks at them. I think G. E. Moore was brought up in class recently (I can't remember as a reference to what exactly), and I think his criticisms of philosophical musings apply especially well to Lewis's theories. It's not unlike the ontological argument for God. If you're eased into it one step at a time, the individual steps seem reasonable (as do Lewis's rules and premises of argument). But by the end of it all, you realize this can't be right: these rules can't even be guidelines for gaining good knowledge, and God definitely can't exist by way of logic. There's something just not right in the arguments themselves, without one necessarily being able to pinpoint exactly what is wrong with them.

Edward06:32, 31 January 2012
 

Passage 410 of the third edition of Dr. Ludwig Wittgenstein’s Philosophical Investigations reads “’I’ is not the name of a person, nor ‘here’ of a place, and ‘this’ is not a name. But they are connected with names. Names are explained by means of them. It is also true that it is characteristic of physics not to use these words.” Elizabeth Anscombe, a student of Wittgenstein, who translated Philosophical Investigations into English for publication in 1953, published her own paper titled The First Person [G.E.M. Anscombe (1975). Samuel Guttenplan, ed., Mind and Language (Oxford: Clarendon Press, 1975), pp. 45-65.] In her Post Scriptum to her paper she wrote: “My colleague Dr. J. Altham has pointed out to me a difficulty about the rule about ‘I’ on page 55. How is one to extract the predicate for purposes of this rule in ‘I think John loves me’? The rule needs supplementation: where ‘I’ or ‘me’ occurs within an oblique context, the predicate is to be specified by replacing ‘I’ or ‘me’ by the indirect reflexive pronoun.” Wikipedia, on Anscombe’s First Person paper, has the inclusion: “Her paper ‘The First Person’ follows up remarks by Wittgenstein, coming to the now-notorious conclusion that the first-person pronoun, ‘I’, does not refer to anything (not, e.g., to the speaker). Few people accept the conclusion—though the position was later adopted in a more radical form by David Lewis…” On Dualism, WIKI references David Lewis: [Lewis, David (1988) "What Experience Teaches", in Papers in Metaphysics and Epistemology, Cambridge: Cambridge University Press, 1999, pp. 262-290.]; with the inclusion “If Mary really learns something new, it must be knowledge of something non-physical, since she already knew everything there is to know about the physical aspects of colour. David Lewis' response to this argument, now known as the ability argument, is that what Mary really came to know was simply the ability to recognize and identify color sensations to which she had previously not been exposed.” In David Lewis’s paper Elusive Knowledge [Australasian Journal of Philosophy Vol. 74, No. 4; December 1996] his closing comments include the paragraph: “In trying to thread a course between the rock of fallibilism and the whirlpool of scepticism, it may well seem as if I have fallen victim to both at once. For do I not say that there are all those uneliminated possibilities of error? Yet do I not claim that we know a lot? Yet do I not claim that knowledge is, by definition, infallible knowledge?”

To me David Lewis contradicts his own indefinite propositions, and those of Wittgenstein, and Anscombe with concrete propositions. Excerpt examples: So, next, we need to say what it means for a possibility to be eliminated or not. (If you want to include other alleged forms of basic evidence, such as the evidence of our extrasensory faculties, or an innate disposition to believe in God, be my guest. If they exist, they should be included. If not, no harm done if we have included them conditionally.) It is the Rule of Resemblance that explains why you do not know that you will lose the lottery, no matter what the odds are against you and no matter how sure you should therefore be that you will lose. For every ticket, there is the possibility that it will win. In similar fashion, we have two permissive Rules of Method: Again, the general rule consists of a standing disposition to presuppose reliability in whatever particular case may come before us. The Rule of Attention: Do some epistemology. Let your fantasies rip. Find uneliminated possibilities of error everywhere. Now that you are attending to them, just as I told you to, you are no longer ignoring them, properly or otherwise. So you have landed in a context with an enormously rich domain of potential counter-examples to ascriptions of knowledge. In such an extraordinary context, with such a rich domain, it never can happen (well, hardly ever) that an ascription of knowledge is true. Not an ascription of knowledge to yourself (either to your present self or to your earlier self, untainted by epistemology); and not an ascription of knowledge to others. That is how epistemology destroys knowledge.

JamesMilligan07:27, 24 January 2012
 

In response to Ange's and Zack's objections, respectively: I don't see the rule of actuality making any normative 'ought' claim, because it is externalist.

As Lewis states, "the subject himself may not be able to tell what is properly ignored": the subject may ignore proposition p that obtains in the real world all he wants(provided his own normative epistemic conditions lead him to do that, and setting aside the other rules for the sake of brevity). But the fact that p is true means that it is not properly ignored. This rule captures the fact that in order to know that p, p has to be true in our world.

We are not deciding whether p is relevant, I don't think. To decide whether p is relevant is to make p a relevant alternative by including it into the conversation! That is what makes me think that this rule is for the most part descriptive.

Additionally, the rule is not saying that we can't (neither physically nor normatively) ignore the actual state of things. It is saying that if we ignore the actual state of things, we have gone wrong, because we cannot know what is untrue.

...If I've interpreted it correctly.

MclarenThomas23:21, 25 January 2012
 

I thought of this in class today, but comically enough it almost seems to me like Lewis is half preaching an "ignorance is bliss" sort of attitude. The fewer alternatives you ascribe to an agent, the more likely it will be that that agent is correct. I have a strong feeling that that is not what Lewis is trying to argue, but it seems to me that he is walking a very thin line of being misunderstood... Or maybe my philosophical grasp isn't as sophisticated as I presume.

WilliamMontgomery07:57, 25 January 2012

If you don't think that Lewis is trying to preach an "ignorance is bliss" sort of attitude, then what precisely do you think Lewis is trying to argue? In raising this question, I do not have an answer in mind - I have the most difficult time understanding what Lewis is trying to say, at least in the paper ("Elusive Knowledge") that we are discussing.

NicoleJinn00:06, 26 January 2012
 

I'm not sure Lewis is saying "ignorance is bliss." I take the rule more as "the less one considers alternative explanations for a phenomenon (i.e. the less epistemology one does) the more accurate one can be in ascribing knowledge to a subject."

AlexanderBres07:43, 30 January 2012
 

I still dislike the rule of attention for the reason I gave in the last discussion: that depending on context you are forced to entertain absurd possibilities. That being said, it may have some value, depending on just why something has garnered attention. It may be fair to say that in general, if someone is considering possibilities, they will unconsciously limit the range of things they consider to those that they think are plausible. Insofar as that is the case, the rule of attention seems acceptable, because the truly absurd possibilities are never (using Lewis' meaning) actually considered. Contrarily, there are some perverse individuals who take a certain delight in considering ridiculous possibilities that cannot be eliminated by the evidence available; this rule poses a problem for having conversations with them.

RobGrenier04:05, 26 January 2012

Isn't Lewis really cautioning against overthinking, at least in the context and course of everyday life? He talks about compartmentalizing, not confusing the world of epistemology with the world of the "bushwalk", as a way to avoid the multiplier effect of ever-proliferating alternative possibilities, which create a field so rich in what-ifs that a state of paralysis, the "destruction of knowledge" as he calls it, is achieved. As he points out in the bushwalk reference, we actually know quite a lot. His enjoinder to "do some epistemology", it seems, is really a call to second-guess error "temporarily", and in its proper compartment, meaning a sort of presuppositional vaccination against the threat of annihilation implied by skepticism.

Robmacdee06:53, 26 January 2012

I love that last sentence. I am not sure how much work I think compartmentalizing can do in the cases I am thinking of. In the bushwalk example, I feel as though the epistemologists are being tongue-in-cheek; I think that in some way they don't actually believe that they know nothing. The cases I was thinking of involve people who completely hold a belief and have no reservations about it.

So I was picturing people who, for example, truly believe with all their being that Star Wars is an accurate depiction of events that actually occurred in THIS world. Now, if I am having a conversation with these people (maybe three who all share the belief), I can attend to the possibility and compartmentalize that conversation from the rest of the things I think I know or think are possible. However, what if I didn't have any preexisting beliefs about the fictional nature of Star Wars? I would then not have any reason to compartmentalize off the possibility that Star Wars is real. This may be more a complaint about how context, lack of experience AND the rule of actuality lead to entertaining ridiculous beliefs.... But I still find the rule off-putting.

RobGrenier02:04, 27 January 2012
 
 

Perhaps I missed something, but I do not see how the rule of resemblance solves the Gettier cases. On page 557 he is stating how one CANNOT ignore certain possibilities in the case of the Ford truck. Does "solving" the Gettier cases just mean that he shows how we cannot have knowledge in the Ford case? In that case, the problem with the Gettier cases is that we think we do know. Therefore, you do not solve the problem just by saying "actually you don't know because you haven't considered possibility x". The point of the Gettier cases is that possibility x is so unlikely that you haven't thought of it, and according to the rule of attention, if you haven't thought of it then you can ignore it. Thus, the Gettier cases should become false knowledge anyway.

ThomasMasin19:20, 26 January 2012
 

One of the problems I find with the rule of attention is its relationship with Lewis' other rules, particularly the rule of conservatism. If two people are in conversation and one of them says something that seems to be against common knowledge, such as saying that the world is flat, which of these rules would outweigh the other? Does the rule of conservatism only hold if both people in the conversation share the same idea of common knowledge, even if one person seems to be obviously wrong? I would assume in this case that the rule of attention would outweigh the rule of conservatism. In this situation, the person claiming the earth to be flat would have to argue their case, and the other person would have to seriously consider their argument. The interaction between Lewis' many rules seems kind of vague to me; this may not be the best example, but in some cases it doesn't seem clear which rule takes priority over the other.

Andreaobrien01:13, 27 January 2012
 

When someone brings up an outlandish claim, it seems right that we must address it, no matter how outlandish. Like robbybobby said in class one time, some of the greatest advancements in knowledge came from someone going out on a limb with an otherworldly idea. But the difference between those brave few whom we remember and the unhelpful many whom we forget is that the brave few went on to explain reasons for thinking in a new way. We can always just ask, "why do you think that?", and unless we hear some good persuasive reasons, I don't see why we must still entertain a foolish claim. So yeah, the alien-filled sci-fi-con conspirator gets his moment of attention, but the problem he raises, if it is to be ignored, will likely dissolve under its own foolishness. Then the conversation will continue until some other rare and crazy person says something crazy, which they then explain and support with a convincing case, and then their success will turn crazy into genius.

KevinByrne09:27, 28 January 2012
 
  • We will look at his list of rules. What are they rules of? They are not rules of good method - how to think in order to get knowledge. So what are they?

I think Lewis' Rule of Resemblance, "one possibility saliently resembles another. Then if one of them may not be properly ignored, neither may the other" (556), is the one that makes the most sense to me. When he talks about the possibility of winning the lottery, he says that my chances of winning are similar to anyone else's, and my chances of losing are similar to everyone else's chances, which is why the possibility of my winning or losing is just as likely as it is for any other individual participating in the lottery. In regards to the Rule of Actuality, I don't understand whose actuality counts, because at first he says that the subject's and ascriber's actuality are the same, so there is only one actual world (top of page 555), but then later he says that there is a difference and it is the subject's and not the ascriber's actuality that matters (bottom of page 555)???

ShivaAbhari22:22, 1 February 2012