forum 1, week of Jan 8, Dretske
This paper is a model of a once-dominant style of pure philosophical analysis. It is plunging into the deep end for you, I know. Much of the later reading will be more digestible. You are likely to think "where's the epistemology here; where's the concern with human knowledge, and the standards we can set and fail to meet?" But it is in fact a good case where on reflection you may conclude that there are some rather deep thoughts about these things, carried by observations about language. In particular, the Zebra example has inspired many reactions. It seems to many to give a handle on skepticism that wasn't available before. So read that again. Then read the example about his brother on the bus. Are they making the same point? Then go back to the beginning and the claim that "knows that ..." is not a fully penetrating operator. (That's terminology that has not caught on, incidentally. What we say now is that "know" is not closed under logical consequence. Or we insist that it is.) How does that link to Zebra-type examples? Now go to the end of the paper. There he is trying to say why all this happens, and his explanation is in terms of "relevant alternatives". What does that amount to, really? Can you put it in your own terms?
Now you are ready to contribute to the forum. I'm going to ask some questions below. Write an answer to one of them. It doesn't have to be careful or one you are convinced of, just something to discuss. Or write a reaction to someone else's answer. Or continue a discussion started by other people's answers and reactions.
- does the fact that you haven't excluded the possibility that the zebras are painted mules show that you don't know that they are zebras?
- why is it so shocking (many philosophers do find it shocking, still) to claim that you can know A, know that if A is true B has to be true, but not know B?
- suppose knowing something is excluding *relevant* alternatives to it. What could *relevant* mean?
In reference to question 2 ("why is it so shocking…"), I allude to Ludwig Wittgenstein, Philosophical Investigations, Third Edition, Blackwell Publishers, 2001, §111, page 41: "The problems arising through a misrepresentation of our forms of language have the character of depth. They are deep disquietudes; their roots are as deep in us as the forms of our language and their significance is as great as the importance of our language.------Let us ask ourselves: why do we feel a grammatical joke to be deep? (And that is what the depth of philosophy is.)" Jan Willem Wennekes, in his Master's thesis, Wittgensteinian Arguments Against a Causal Theory of Representation (August 2006, University of Groningen, Faculty of Philosophy), states in Chapter 4, "A Critique of the Causal Theory of Representation," page 63: "Dennett and Dretske are convinced they have a causal, empirical problem at hand while Wittgenstein is convinced that the problem is conceptual: it is the result of misunderstanding the forms of our language." I agree with Wennekes, and with Wittgenstein.
If one can be reasonably confident that any statement of certain knowledge that one may, at present, claim to be true will be disproved and held to be false at some future time, given our track record so far, has one then admitted to an absolutely skeptical position? The question occurred to me when I read the reference to Wittgenstein, depth, and language ambiguities. I'm new to the language of philosophy, and to Wittgenstein for that matter, so I hope I will be forgiven if I illustrate my thoughts by way of a detour through literature with a few theological organ notes thrown in. James Joyce made a career out of playing with people's misunderstandings of language. He presents an image, through his writing, of The Fall (as in Original Sin) as being misunderstood, in that it is typically seen as an account of a one-time-only event which happened at the beginning of human experience and has resulted in our present 'fallen' state. According to Joyce, the Fall is better understood as an ongoing experience. (The concept of Original Sin, it will be remembered, is consequent on, and enjoined with, the quest for knowledge. Our Father, we are told, apparently had an issue with this.) That is to say, or so the story goes, we are in the midst of falling. In this allegory, involving gravity, we know that we move, always, toward knowledge, and this movement toward it is perhaps the only certainty, outside of immediate sensation, that we can have. We don't know if it's a bottomless fall, but it's certainly been deep, and it has a direction, more or less certain, which is to say it continues on into depth.
When we invest in a belief, and consequently act upon it, we can reasonably expect from our past experience that there will be surprising side effects. These unforeseen developments are corrective and have the ultimate effect of changing, somewhat paradoxically, our initiating belief. Even so, the initiating belief still stands as the foundation for its replacement. The question I have is whether we actually achieve progress in this pursuit (e.g., Columbus pursued an ever-receding horizon in the expectation that he would find India and found America instead; he pursued an intended and expected goal and instead achieved an unintended one, which in a very real sense changed the meaning of the experiment), or whether the pursuit is circular, as Joyce seemed to believe, influenced as he was by the ricorso theory of Giambattista Vico, who saw history and the pursuit of knowledge as a recurring cycle with progressive stages within an evolving circular transit.
While thinking about the third question, I was reminded of another issue where the definition of relevance is crucial: the frame problem in AI, where one needs to be able to represent the logical effects of an action without representing a multitude of irrelevant information along the way. One of the suggested attempts at solving this problem was Jerry Fodor's (I believe) appeal to the relevance of information, or the "context" in which the AI has to operate in any given situation. Following his logic, exclusion of relevant alternatives could then entail thinking of as many alternatives as are physically possible (or conceivable in a possible world, if we are to be Lewisian) in the given set of circumstances, and further, thinking of a reason why what we know is different from these alternatives and is therefore better suited to this context. However, this solution, both in the present situation and in the frame problem, runs the risk of an infinite regress of "relevance of relevant contexts": how exactly can we tell what the "given set of circumstances" actually consists of, and do we know enough about this context in order to make judgments on its relevance? I tend to agree that even the mention of relevance may be a slippery slope in epistemological questions, because the assumption that we can, in fact, make judgments on what's relevant could lead to a bias.
I cannot help but feel that "relevant" is pre-defined by Dretske's style of writing. His tendency toward slightly absurd situations (clever lighting, costumed mules) set in commonplace examples (paint, the zoo) is an obvious contrast between possible and likely. It's the likely part that I think is "relevant". Because we can say "this is a zebra" while excluding the obvious "this is a rhino" without having to change any other explanation or retrofit the premisses inherent in our beliefs as to why it is a zebra, it is a "relevant alternative" for the explanation. In order to object to a green wall, one must first presuppose that there is a likelihood that the wall is cleverly lit... yet in terms of argument or further analysis, there is no *relevant* reason to presuppose this. While not as analytical a consideration as is given above in Olsy's thoughts, I feel that this use (that in order to converse or think about an issue one must only consider the likely, i.e. relevant, possibilities) is implied by Dretske's common speech.
In reference to question 1. Along the vein of my comment in class on Tuesday, when we make a claim we overlook and take for granted our background knowledge and beliefs. When a person claims to know that is a zebra in the zoo, they overlook all of the beliefs that they have: perhaps they have good reason to trust zookeepers, maybe they read an article about this zebra a few months back, perhaps they also see themselves as good judges of zebras, etc. These subconscious conditions on the agent's justification can show how the agent comes to believe that they know without fully processing their claim. But is this rational? The agent has yet to thwart the skeptic's argument, but would the skeptic still find it necessary to resort to the "mischievous demon" argument if the agent did fully articulate his claim? I guess what I'm trying to say is: I understand how epistemic operators are semi-penetrating, but can a combination of operators become fully penetrating?
Also, can "know" just be a high level of belief? Is to know to be, say, 99% certain?
For question 1, the fact that one has not excluded the possibility that the zebras are painted mules shows that the claim "I know that these animals are zebras" is not carefully backed up. By not carefully backed up, I mean there is no track record of examining the animals more closely, checking with the zoo officials, or some kind of 'evidence' of performing tasks that would give more confidence and trust in why the claim should be regarded as true. On the other hand, whether not excluding the possibility that the zebras are painted mules is equivalent to claiming "you don't know that they are zebras" depends on how one defines what it is to 'know' something. I do not take 'know'ing something to be binary (i.e., 'yes' or 'no'); I would attach a degree (between 0 and 1) to how confident that person is that these animals are zebras. Hence, I would answer yes to the first question as follows: the fact that you have not excluded the possibility that the zebras are painted mules shows (with degree p) that you don't know that they are zebras, where 0 ≤ p ≤ 1. The reason for including a degree of 'confidence' (or something along those lines) in 'know'ing a claim is due to my background in statistics and probability theory, as well as the existence of a recent epistemological movement towards Bayesian methods. However, it should be noted that this epistemological movement comes with numerous philosophical problems and is nowhere near consensus; the following link gives a small glimpse of the lack of consensus on using Bayesian methods (just in case anyone is interested): http://errorstatistics.blogspot.com/2011/12/jim-berger-on-jim-berger.html#disqus_thread
Suppose there are a number of propositions bearing on knowing that the animals are zebras, each known to a degree between 0 and 1, such as the 36-number example used by Dr. Morton in lecture. If one goes from one proposition to the next, each with a probability less than 1, does the probability of the conclusion degrade with each step? In a list of 36 related propositions, for example, does one arrive at a residual quantified probability, between 0 and 1, for the conclusion drawn from the 36 propositions?
I am not sure if I understand the question you are asking (e.g., what you mean by a "residual quantified probability"). Nevertheless, here is my attempt at answering your question: When the propositions are not known with certainty, the probability of obtaining the conclusion is not necessarily a linear combination of the probabilities of the propositions. In other words, the propositions are not necessarily related linearly (or in some 'straightforward' fashion, if we want to use non-mathematical terms) to the conclusion when the probability of the propositions is less than 1 (i.e., when the propositions are NOT known with certainty). The reason for the nonexistence of a straightforward relation between the propositions and the conclusion in a probabilistic setting is that all kinds of alternative conclusions can come up, with varying probabilities attached to them, IF the propositions are not known with certainty. I hope this answered your question. If not, I will see if I can come up with an example to present in class tomorrow, just in case anyone else has a similar question.
The question relates to the overall impact of 36 propositions, each at less than probability 1. The example is 36 propositions. If the propositions are a linear combination, and the second proposition is dependent on or influenced by the first proposition, does the probability of the first proposition, at less than 1, become the basis on which to apply the probability calculation of the second proposition? The residual probability would then be the probability of the conclusion after 36 propositions, each successive calculation starting from the reduced probability of the preceding proposition. If there is no dependence or sequence in the 36 propositions, what methods may be used to select a probability from the 36 propositions to quantify the probability of the conclusion?
In defining residual probability, you presuppose that each successive proposition has a probability less than its precursor. However, I don't think this presupposition must hold - each proposition has a probability that is not necessarily related to the proposition before it, if the 36 propositions are in a sequence. In reference to your first question, I will not be able to explain what a linear combination is to the layperson - knowledge in mathematics, particularly linear algebra, is needed to understand this. Despite my background in probability theory, I am unable to answer your last question on which methods to use in selecting a probability of the conclusion. To point you in the right direction, the notion of statistical dependence or independence between the propositions themselves, or between any one of the propositions and the conclusion, pretty much governs which methods to use. Lastly, I will have to end this 'conversation' because 1) the content is not interesting enough to everyone else in the course (PHIL 440), and 2) this topic on the probability of the conclusion being related to the probability of the propositions is not something that this course will cover, beyond what has been written here. Hence, this 'conversation' is now closed.
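To make the arithmetic in this exchange concrete, here is a minimal sketch in Python. The per-proposition degree of 0.99 and the independence assumption are illustrative choices of mine, not claims about Dr. Morton's lecture example; only the count of 36 propositions comes from the thread.

```python
# Sketch: how confidence in a conjunction of 36 propositions behaves.
# Assumption (hypothetical): each proposition is held to degree 0.99.

n = 36      # number of propositions (from the thread's example)
p = 0.99    # assumed degree of belief in each individual proposition

# If the propositions are treated as statistically independent,
# the probability of the whole conjunction is the product of the parts.
prob_independent = p ** n

# Without an independence assumption, the Frechet bounds give the
# range the conjunction's probability can fall in:
#   max(0, sum of p_i - (n - 1)) <= P(conjunction) <= min(p_i)
frechet_lower = max(0.0, n * p - (n - 1))
frechet_upper = p  # the minimum of the (equal) individual probabilities

print(f"independent conjunction: {prob_independent:.3f}")   # roughly 0.70
print(f"range without independence: [{frechet_lower:.2f}, {frechet_upper:.2f}]")
```

So even at 0.99 per proposition, the "residual" probability of the conclusion under independence is only about 0.70, which illustrates the earlier point that the conclusion's probability is not a straightforward (e.g., linear) function of the individual degrees, and that the answer depends heavily on the dependence structure among the propositions.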
"Suppose knowing something is excluding *relevant* alternatives to it. What could *relevant* mean?" Dretske would probably say that a relevant alternative to something you know is information that would prove your knowledge false if it were true. For example: a relevant alternative that you are excluding in order to know that the zebras in the zoo are in fact zebras is that they are painted mules. I think that a relevant alternative to a situation should have some evidence for its possibility for it to be considered a real alternative. By this I mean one does not normally actively exclude the possibility that what one is looking at is just an imitation of what it looks like. Why shouldn't the zebras be zebras? This thought normally does not even cross our minds unless we have been reading too much philosophy. For Dretske's example to be a relevant alternative I would have to see something like black and white house paint in the zebra pen. That would at least give me a reason for considering the possibility that things might not be as they appear. There are usually relevant alternatives to many things we think we know: evidence that shows us we may be wrong, but that we decide to exclude because it is not strong enough to change our minds. In short, "relevant" just means "likely enough". There has to be a reason to believe in an alternative; the fact that something is merely "possible" should not make it a fair contender as an alternative to our knowledge.
I agree with this idea that relevant alternatives should be supported by some kind of empirical evidence. If one were to take seriously any possible alternative then there would be very little one could claim to actually know. Even a priori knowledge could be questioned if one were to believe that there was some kind of evil demon operating solely to trick them. If empirical evidence were not enough to give us true knowledge, then one would be forced to say 'I think that' or 'It is likely that' the zebras are not painted mules. I do not think that this particular example can be dealt with by saying it is a problem of semantics. I think one can say that they literally know that the zebras are zebras by applying these standards of empirical evidence to rule out alternatives, such as painted mules.
I agree that there would be very little one could claim to actually know, IF we have to consider all possible alternatives. However, one fundamental problem is that empirical evidence may or may not be enough to give us true knowledge, depending on the notion of evidence used. Yes, there is no consensus yet on what the heavily-used concept of evidence is! Besides, what are these standards of empirical evidence that you have in mind? Are they related at all to the definition of "evidence" given by Richard Royall and Steven Goodman? (e.g., see http://www.ncbi.nlm.nih.gov/pubmed/3189634 or http://www.botany.wisc.edu/courses/botany_940/06EvidEvol/powerpoints/Evidence.pdf)
I agree. Dretske's examples of relevance seem self-defeating at times. The Zebra case is especially troubling because it involves intentional deception in a setting where there is usually no intent to deceive. The Zebra-painter is almost as evil as Descartes' demon. The reliance on this sort of example is emblematic of a deeper problem in his theory: it is too exclusive and unnecessarily limits knowledge to a subset of what we usually take knowledge to include. Too much error-avoidance leads to unnecessary ignorance.
Question 1: The claim is that epistemic operators are not fully penetrating to all consequences of them being zebras. If we know that they are zebras we do not necessarily know that they are not painted mules. And yet to 'know' something may be said to be a very strong claim according to this picture. If I had not considered that they were painted mules, then as far as my epistemic state is concerned, they could be painted mules. And if it is possible in this epistemic sense that they are painted mules, then I can't claim to know, in that same sense, that they aren't, and therefore I can't claim to know that they are actually zebras.
Dretske views this to be a mistake. It does not follow from the fact that we had not considered them to be painted mules that we don't know that they are zebras. We can know that they are zebras and not know that they are not painted mules.
If it were the case that we were being duped, then we wouldn't have known that they were zebras, because the claim that they are zebras would be false. It is surely possible that they are animatronic displays, that they are painted mules, that I am having a dream, that a higher being is deceiving me, et cetera. But is the point of claiming that we know something to show that nothing of the sort is possible? Some alternatives can't even be assigned a probability, as in the case of there being a deceiver, and yet the answer to whether or not there is a deceiver is entirely relevant to the question of whether we know there are zebras (or anything at all, for that matter), and there is no way to exclude it. Would our not considering there being a deceiver imply that we didn't know that there were zebras instead of painted mules? I don't think so. But it's a very difficult question and I don't know how to answer it without getting into the nitty-gritty details of what counts as relevant information.
I do think that knowledge has weaker conditions than we like to let on. When we claim to know something, we tend to have a set of positive claims to back it up, even though some alternatives that are not even thought about are entirely relevant and, strictly speaking, don't allow us to know anything at all until they are ruled out. Hope that makes sense.
In response to question 3, 'suppose knowing something is excluding *relevant* alternatives to it. What could *relevant* mean?'
I take 'relevant', based on Dretske's arguments, in this case to mean something fairly intuitive in terms of everyday language. In other words, a relevant alternative would be a possibility that, if brought up in conversation, would not elicit some degree of surprise or confusion in whoever is being spoken to.
To clarify this I will offer an example similar to Dretske's example of Brenda ordering cake. Say we know that Joe purchased ice cream from the ice cream truck since we can exclude relevant alternatives. An example of a relevant alternative would be that 'Joe purchased a popsicle from the ice cream truck' since it is a plausible circumstance. It needs to be eliminated before we can know that he purchased ice cream. An example of an irrelevant alternative circumstance would be the possibility that 'Joe walked up to the ice cream truck and requested a haircut'. This possibility would certainly not be thought of as a common request for an ice cream salesman and if I told someone it was the case they would likely be moderately surprised or confused. Thus it would not fit into the category of relevant alternatives.
This example doesn't clarify the exact definition of 'relevant' with respect to Dretske's argument but it is an illustration of what I take him to mean.
For question 2, "why is it so shocking" suggests that the epistemic operators are not part of the presupposition. As they are tied within the statement "the roses are wilted", through Dretske's use of contrastive consequences, the statement appears to articulate the qualitative predicates "roses" and "shrubs" as instances of a broader concept of plants. It seems as if, yes, there is a small degree of knowledge acting as a kind of anchor. However, I do lean towards James' argument that language itself is the determinant of how we form these ascriptions of knowledge.
I said this in class. The formula "a; if a then b" seems to be something Dretske can't manipulate if he considers its parts not knowledge but truths, whether anyone has ever conceived of them or not. If there is a, there is b. But to speak about the formula like that is to take an impossible perspective, because gaining the perspective dissolves it, similar to the elusiveness of knowledge that Lewis talks about. Once you say that someone knows a, and that person also knows that if a then b, the formula becomes subjective, and like all things subjective, like all things ever said, it can be wrong. So sure, you can know a and know that if a then b and not know b, but you would either end up with a 'false' (the word loses a lot of meaning here) belief that you know b, or the reasoning that got you there would have fallen prey to the same problem.
Q: does the fact that you haven't excluded the possibility that the zebras are painted mules show that you don't know that they are zebras? A: I agree with NicoleJinn when she says that there may be degrees of knowledge. For example, to say you know something in ordinary conversation may be different from saying you know something in an academic scientific journal. Why? I think it can be understood in terms of the stakes involved. For example, to say "those are zebras" to your three-year-old child may not have that detrimental an effect if in fact they were just impostor zebras after all. But to say this in an academic journal, which people may read and formulate their own knowledge from, could perhaps have a bigger impact. Also, your reputation is at stake, so you want to double-check that what you are saying is in fact correct. But the question is whether or not you can ever be 100 percent certain, and I don't think that you can. All that you can do is whatever you can possibly think of to make sure your assertion is true, with the evidence that you can collect at the time. So knowledge, I think, is to know something with all the evidence given at a certain time. You can know something but later realize that you were wrong. The question is: did you ever know it at all? Were you always wrong? Or maybe you knew it and now you don't? This is a difficult question which I am still puzzled by.