Course talk:PHIL240/2011WT1-section2

From UBC Wiki

forum for week of 28 November: pragmatism

There are two opinions of Rorty's that we might air before the classes.
- "all truths are merely convenient fictions" ("science, like politics, is problem-solving")
- " 'firm moral principles' .. are ..ways of summing up the habits of the ancestors we most admire."
React to one of these. (And think how they may be related.)

AdamMorton19:31, 25 November 2011

Just bringing the thread to the top.

AdamMorton23:29, 26 November 2011
 

I really agree with the statement "all truths are merely convenient fictions". Just as in ethics and politics, where right and wrong are dictated by culture and situation, truth in epistemology can in fact be quite arbitrary. We don’t know that there are objective truths out there. And even if they did exist, we simply don’t have the capacity to know them.

As mentioned in class, we don’t actually know that the elliptical motion of the planets around the sun is in fact true; there could be some complicated motion at work which creates this seemingly elliptical motion instead. If that were the case, then perhaps we are only using the simplest, most efficient explanation for what we observe, rather than the right one. So I think whatever “truths” we think we have are deemed true because we make them true.

Mona Zhu07:05, 28 November 2011
 

I tend to disagree with the first, but agree with the second. While I feel strongly about the absolutism of truth, I tend to be more of a relativist about moral truths, a domain in which "truths" is not even really the applicable term. Morality seems really just to be an amalgamation of what seems to have worked for different societies in the past.

ZacharyZdenek22:05, 28 November 2011
 

I found Rorty's discussion of relativism quite interesting, especially the parts where he claims relativity in areas of knowledge that we traditionally assume to be the stronghold of objective truth. For example, in the essay Rorty claims that even scientific knowledge, truth as obvious and unquestionable as the heliocentric model of the universe, is relative.

It’s “incorrect” (I put it in quotation marks because Rorty would most definitely avoid the term) because our views on the solar system are not determined by the actual position of the sun and the planets. Rather, they are the result of contingent narratives and their usefulness to us - I think in the essay he mentions space travel as one of the uses of having a heliocentric model, and this is why we actually subscribe to the view. Nothing is defined by its inherent worth or inherent truth - everything is judged through the lens of pragmatism. Therefore, there are no truths - only relatively more or less useful contingent narratives.

While I accept relativism in some places, I found the claim of relativism in the hard sciences disturbing. I wonder what view Rorty would take with regard to mathematics, given that math is an even more apparently objective case of truth than science.

Indeed, space travel is a great benefit arising from our conception of the solar system, but that could hardly be the reason why we had our conception in the first place. Galileo didn’t have the faintest clue what his theory would entail (e.g. space travel). He just sat in a field somewhere and looked up to the heavens with his telescope. What dictated his theory were the observations he made and his unwavering dedication to them.

If he were practically minded, he might have been persuaded by threats from the church to cease with his heresy. But he didn’t. The objective knowledge arising from his observations convinced him that the solar system is organized with the sun at the center, and not any other way. That is not only a true fact relative to him, but a true fact for all. It transcends whatever language games he may play, whatever contingencies he may have.

Rorty might respond by defining pragmatism in another way. He might say Galileo still did what was pragmatic for him, because the heliocentric view pragmatically explained Galileo’s empirical observations. He found its explanatory power useful, and that’s why he persisted in the face of the inconveniences posed by the church (e.g. house arrest). But this redefinition (no pun intended) of pragmatism is rather confusing. How is it actually different from truth (besides the name, of course)? What is pragmatic in this sense is what is true. It was pragmatic for Galileo to describe the planets as revolving around the sun, and not the other way around, because that view better corresponds with reality. Therefore, if we define pragmatism this liberally, it becomes the same thing as truth.

So, in my view, Rorty faces a central dilemma. Either his theory cannot account for many human events and developments, or it loses its radical nature and becomes just another correspondence theory of truth.

Wittyretort12:12, 29 November 2011
 

I would have to agree with the first statement. I think the scientific process lends itself very well to the idea of "convenient truths". It would be crazy to say that all of science as we know it now is either all true or all false. There will always be new discoveries that morph our current understanding. Even seemingly trivial advances, like increasing microscope magnification, are sometimes enough to lead to many new theories in biology. If scientific truths can be seen as "convenient truths", I feel that social norms would be even easier to show to depend on something other than an absolute truth.

Lexx17:28, 29 November 2011
 

"All truths are merely convenient fictions" is a very interesting statement, and I tend to agree with it to an extent. As we have discussed in many lectures, while truths may indeed be true, it is very difficult to sustain the idea that a truth can come with certainty (for many reasons). With that, the idea that a truth is a fiction may be incorrect; however, there is no sure-fire way of knowing. This leads me to believe that, until we can prove with absolute certainty (which, again, may not even be possible) that something is indeed true, it can just as well be false. The problem I have with this sentence, however, is that the word "truth" is definitely a blanket term, as one "truth" cannot be categorized as the same type of truth as another (for example, absolute truths, moral truths, etc.). Adding to that, the second part of the sentence, about how truths may be convenient, is problematic: a truth may be convenient for some while being inconvenient for others.

Dwylde00:53, 30 November 2011
 

I somewhat agree with the second statement, " 'firm moral principles' .. are ..ways of summing up the habits of the ancestors we most admire", because in order to derive and set up moral principles in our society, we have to look back at what people did in the past and, considering what they did, determine what is right and wrong from the results of their actions or 'habits'. However, morality differs across societies, so "summing up the habits of .. ancestors" to derive moral principles would only work in a similar context between the people considering what is moral and the "ancestors". These 'firm moral principles' cannot possibly be universal.

IreneWong00:58, 30 November 2011
 

For the latter opinion, I'd have to say that I agree. After all, what we consider to be our "firm moral principles" are but the values derived from the lessons that our parents and maybe grandparents taught us. And who taught them, but their own parents and grandparents! These lessons are handed down from generation to generation, thus forming a conception of morality. We apply our own individual considerations and input as we age, keeping the beliefs and opinions that we agree with and altering or dropping the ones that we do not. These lessons themselves were essentially the habits of our ancestors, and given our ability to decide which values we keep and which we do not, our morality depends in some way on those ancestors we most admire.

The two stated opinions are related because, in a certain sense, morality is considered to be a "universal truth", even though, however much it may seem or feel so, this is not the case. Morality is really a collection of abstractions, or convenient fictions.

So although I am capable of forging a connection between the two opinions, I have to say that I disagree that either science or politics can be generalized into mere "problem-solving", given that they both encapsulate so many different things and have a variety of purposes.

Antaresrichardson04:09, 30 November 2011
 

"All truths are merely convenient fictions" seems like it could be somewhat of a metaphysical statement, where one could ask: where does truth exist? A: In a fictional world that we made up. (But then let me ask, is that true or false?) The claim itself leads us either to agree with it and say "yeah, that's true" or not and say "no, that's not true" (T/F). The absurd thing I see is that if someone were to agree with this statement, they would be claiming their own position to be fictional and not real, because what they hold as truth thus becomes something fictional. Why then should we consider this statement to be anything more than what it claims itself to be, namely fiction, and thus not a fact about reality? Are there different perspectives? Sure. Do people have different preferences? ABSOLUTELY. But to then turn around and assert "all truths are merely convenient fictions" seems to undermine itself by the very claim it makes.

AndreRoberge05:39, 30 November 2011
 

I hold a rather pragmatic perspective (like an engineer), so I'm inclined to say that what we know of this world are mostly relative truths. As such, I believe that humans are incapable of 'perceiving' absolute truth, but merely are convinced by the 'convenient fictions' that follow from the absolute truth.

However, there is one aspect of the universe that can be justifiably understood as absolute truth: mathematics. It's hard to argue that '2+2 = 4 (being a fact) is merely a convenient fiction', because such truth is universally persistent and fundamental to our understanding of the world (I raise universal persistence as a criterion for a truth being absolute). Math, to a large extent, is a priori knowledge, and as such does not rely on our perception to be understood, but solely on reasoning; so it is hard for it to come to fault where most other knowledge has, the majority of it being perceptual knowledge. And as such, anyone who puts 2 and 2 together will inevitably come up with 4 (unless someone proved that 2+2=5 somewhere beyond our universe, in which case we can argue they are beyond the domain of reality upon which my argument for math being absolute rests).

Ken Wong23:28, 30 November 2011
 

In an internet reference, Thomas Kuhn: A Philosophical History for Our Times by Steve Fuller, there is the phrase: Rorty chides Kuhn for inconsistency, since Kuhn still wants to explain "the success of science." I am on the side of Kuhn, in the appeal of his model describing the steps of a paradigm shift in science.

JamesMilligan08:57, 1 December 2011
 

I disagree with the claim that truths are merely convenient fictions. Though it may be true that we simplify truths into "common knowledge" for convenience, there is definitely truth in the sense of absolutism. The earth has a definite mass (let's not talk of quantum inaccuracies), and no amount of semantics will change that.

JamesWu19:17, 1 December 2011
 

Talk of absolute truth boils down to a discussion of whether humans can speak of things 'objectively.' Finding an absolute truth in, say, ethics, we could "attach" things to this truth, i.e. we could see how everything else we feel, see, perceive, know, etc., is related to this "one, absolute truth." Everything is given its position, its weight, its meaning; there is no shifting of the value of a thing or idea: compassion is good, and that is the end of the matter. Never can compassion have its value to a person undermined, subverted, or changed; never can compassion be a harmful thing to you or others.

This seems to be a tragic flaw. It is the hanging on to certainty that some people cherish that is at work here. Instead of leaving our ethical questions "up for discussion," we close off the discussion. Part of the reason is fear, I believe. Instead of admitting imperfection in life, in others, and in our way of relating to others, the hangers-on-to-certainty (absolute truthists, dogmatic people of any faith or belief system, etc.) would rather cover over this uncertainty and say "there is a moral T-truth to be discovered, there is a way things just are ethically." The hanging on to certainty seems romantic to me, not in the candles-and-fireplace kind of way; it is holding out for something "perfect" (a capital-T Truth in ethics, that is, in the way humans relate to one another), trying to discover the perfect way that we can get along.

But instead of aiming for perfection here, why not say that we will relate to one another imperfectly from the start? And why not ask ourselves "how can we do better?" The opposite question, "how do we find the way things are (in ethics)?", seems to imply that we can find this "way things are," and then, seemingly, do something as a result of finding this Truth. But what would we do? Do people actually expect this to work out for humans?

It is hard to engage Rorty head-on, because much disagreement with his ideas will boil down to first-order principles. So you have to ask yourself: do I want to find the way things are, or do I want to find a way to make things better? Does the first option really seem reasonable? Does it really seem desirable? Does it seem desirable, in the face of all the effort it would take, to find the way things are? Or should we admit our flaws and imperfections and see if we can spend our energies, whether physical, emotional, or psychological, on making things a bit better?

For my money, fear (of admitting imperfection, and of admitting that ethics is subjective) seems to be at work in the first option and its corollary. But so are hope and romanticism. I argue, and so would Rorty, that this hope is misguided.

ZlatanRamusovic20:36, 1 December 2011
 

In regard to "firm moral principles are ways of summing up the habits of the ancestors we most admire", I feel it doesn't ring fully true. We often change who we admire (due to a number of factors), and if there were a one-to-one correlation between the habits of the ancestors we admire and our firm moral principles, then our moral principles would change very often. It's just my personal belief that, by and large, our moral compass remains roughly the same.

StephenRazis23:13, 1 December 2011
 

My reaction to "all truths are merely convenient fictions" is that I believe it is part of contemporary society's embrace of doubt. To avoid all error, the safest course would be to subscribe to doubt, but Rorty makes an especially strong claim in saying "all" truths. A claim like this is unfalsifiable, and has no more function than to provide two minutes of fun pondering it. The only truths that matter are the ones that are pertinent, the ones that serve a purpose. Even if they turn out to be false, they have served their purpose.

VinceXi23:49, 3 December 2011
 

I agree with "all truths are merely convenient fictions" ("science, like politics, is problem-solving")


All truths are true because society has made them true and because we believe them to be true. A statement may be true in one culture but false in another, due to different perceptions, belief systems, environments, etc. This does not make the statement simply true or false - it becomes very subjective. Language is another factor that makes me agree with the argument. We explain truth using language; however, no two languages are exactly the same. A 100% exact explanation cannot be conveyed across two different languages (it is lost in translation). This again makes truth very subjective.

In such cases, truths are defined as true through explanations that make sense to our own senses. The ultimate end product of calling a statement true may be the same; however, the steps that were taken to reach that conclusion may have been different. Truth then loses its quality of being true. Therefore, truths are merely convenient fictions.

YukaZaiki10:43, 4 December 2011
 

Like many others above me, I agree that truths are convenient fictions. In the way we are brought up, we tend to perceive different words with slightly different meanings, and therefore truth is affected by perception. The only way to have truth in this sense, then, would be to define all the words in a language using all the other words in that language. That would create a sort of circularity and cover any loopholes. For example, with 'Horses have four legs', by using all the words available to us we can cover all of what it is not, and thereby arrive at what it is. That, of course, is pointless, so truths are never absolute and are mere convenient fictions.

ChaoRanYang00:12, 6 December 2011
 

I think the statement "all truths are merely convenient fictions" complicates the case even further, in that if the statement is true, then it is saying that the statement itself is a convenient fiction. So we can't really acknowledge this self-contradictory statement, because we don't know whether it's true or a convenient fiction. So we're back to square one: what is truth? In the long run, you would probably find that the answer to that question is "we don't know", and it is likely that we will never know.

When we don't know something, it seems to lead on to endless paradoxes, loopholes, and contradictions. For example: how do we know that we don't know what the truth is? Is saying "Nobody knows what the truth is" true? If it is, it's probably the only thing we're certain is true. But if nobody knows what truth is, then how do we know THAT is true? Maybe we all know what truth is; maybe all truths are absolute truths and we constantly create these illusory explanations as to why they may not be true, and so we're unaware that what we're saying is all true. I guess it's a tendency in all humans not to believe that. We simply can't know.

I think truth is like God: whether or not it exists, there are signs of it everywhere. Some people see those signs more than others, and some people believe in it more than others. But up to now, we just can't know for sure whether it exists.

YannickJamey23:41, 6 December 2011
 

forum for week of 14 Nov: when we don't want knowledge

Edited by another user.
Last edit: 23:38, 22 November 2011

Chapter six describes 'knowledge' as a label for top-grade belief. But on the other hand I have been warning you against thinking that if the grounds for a belief are not perfect then the belief cannot be known.  And according to fallibilism one can know something while thinking that further evidence could refute it.  So what is the relation between knowledge and the aims of inquiry?  Suggest situations in which one does the normal epistemic things - perceiving, thinking, conjecturing etc.  - but it is something less than knowledge that one is aiming at.

AdamMorton21:27, 11 November 2011

I would argue that the primary time a person would perceive, think, and conjecture without attempting to find something "known" (given the sort of strict definition of "known" we have seen both in class and in this week's reading) would be when someone consumes art. Consider (as a particularly obvious and Canadian example) viewing the work of Quebecois painter Jean Paul Riopelle. When you face it, you will do many things. First, you will perceive it, noting the colours and their interplay, the relative textures of different sections, and what appear to be the background and foreground tones of the work. After (or during) that, you would (hopefully) begin to form thoughts about the work, the painter at the time of painting it, and possible messages the work might offer. You would also be likely to form conjectures about the painting, and voice them to others, who would in turn voice theirs to you, which would further affect your thoughts about the painting. However, in doing all of this it would be ridiculous to think that you were searching for knowledge. Art, especially abstract expressionism, cannot ever be reduced to being an expression of a single thought or belief, or even a finite set of thoughts. It is the multitudinous interpretations, of which none can be said to be known, that make art so interesting. So the relation between knowledge and inquiry might be that knowledge happens when one is pressured to find concrete, singular thoughts, while inquiry can be used much more broadly, in areas in which no single answer might be definitively right. For anyone who thinks that we are only ever searching for knowledge when using our critical tools, I would suggest a heavy dose of MoMA & confusion.

NoahMcKimm16:44, 14 November 2011
 

Reading a novel. No actual knowledge, just the pleasure of some leisure time well spent. Sure, there are those who would argue that you gain a type of knowledge, but I don't subscribe to that.

KarynMethven04:28, 15 November 2011
 

"Information is not knowledge" -Albert Einstein

From this context, I would agree there are very few things that we would do which can be considered knowledge.

An example I can give would be video games. There is a large amount of perceiving, thinking and conjecturing involved - but is the aim to acquire knowledge?

Some might argue that to beat the game/win - one requires "knowledge".

I personally feel that in this instance it is not knowledge but "information".

KashirajDaud07:37, 15 November 2011
 

I would argue that while sitting a test you are not necessarily aiming for knowledge. Instead, you are aiming at the grader's belief about what the correct answer is. You may have to go through a series of perceiving, thinking, conjecturing, etc., but knowledge isn't what you're aiming for; you're aiming for someone else's belief, regardless of whether that belief is knowledge or not.

JosephPeace08:27, 15 November 2011
 

What about writing a story, a novel or a screenplay, for example? You would think of characters in your mind and perhaps even pretend you are them while you are writing your story. You would try to think as they do so as to enhance the quality of your story. The aim would be to create something fictional rather than to seek knowledge about the real world. However, I think in a way this could still be considered aiming at knowledge, because you are in a sense still creating and/or seeking knowledge on a subject, just not one that exists in the physical world.

KacperMotyka17:07, 15 November 2011
 

Any instance that requires a person to suspend disbelief would fit the criteria. This would include the previous art, gaming, and literature examples. Watching a movie would be another such example. I think that dreaming, in particular lucid dreaming, may arguably fit the criteria as well. Here you could have some set of beliefs that, though possibly true in the dream, would not be 'knowledge' in any sense of the word.

Lexx19:00, 15 November 2011
 

One perceives, thinks and uses conjecture both in the ‘real’ world and in fictional worlds. I would argue that when one uses these epistemic methods in the real world they are aiming at knowledge, and when they use them in a possible fictional world they are aiming at something different. Both art (i.e. paintings, novels) and games (i.e. video games), as already mentioned, offer examples of outlets to a sort of fictional dimension where there is something to be gained, but it is not knowledge (that is, real-world knowledge). Another example is that of theoretical physics. Physicists will use a variety of close possible worlds to help them eventually make observations about the real world, and as a result (hopefully) gain real-world knowledge. In the intermediate stage, physicists are using a fictional world very close to the real one and are therefore aiming at something other than real-world knowledge.

HannahOrdman20:02, 15 November 2011
 

When I ride my bicycle, I say I know how to ride my bicycle. When asked to prove that I have this knowledge, I simply demonstrate by riding around the questioner. However, when pressed for an explanation to show that I know how it is that I accomplish going from a to b without capsizing, I am faced with a problem. Although I can give some explanation, if pressed further, the depth and complexity of how I actually manage the trick makes me realize that I don’t know a damn thing about it with any final certainty – except perhaps one thing. Somewhat paradoxically, it is not thinking about it reflectively, but rather concentrating, or being present-minded in the task at hand, that keeps me upright and rolling forward – precisely not knowing: a kind of active forgetting. Only after the act, and upon reflection, can I examine, analyze and then explain how I did it. I then have instrumental, technical knowledge – how I ride. But is it complete, that is, real knowledge, without proven answers to why I can (physical laws, how the brain and body work, coordination), or to what it is (transportation, exercise, entertainment, fun, escape, madness, etc.)? This forgetting activity can be applied to much of what we do, e.g. swimming, singing songs, skiing.

Robmacdee01:17, 17 November 2011
 

Making an acquaintance with another person: while inquiring into another person's existence, we develop ideas and beliefs about them through many different tools. Some may be accurate, while in other cases we make misjudgments. We can ask them questions and believe or not believe their answers, which may be true or false; and we can interpret their actions, their body language, and how they interact with others. Here we are searching for knowledge about a person while really just creating our own idea of them, consisting of our own beliefs. Our beliefs may not match the idea the person has about themselves, or they may be in tune with how the person sees themselves - but even then, is it a true belief? You may be wrong in your interpretation of them, or they may be wrong in their interpretation of themselves. I would argue that we are not searching for knowledge but for ideas or feelings - trying to learn how to interpret, read, or interact with another person. Because you, others, and they themselves cannot be sure of your interpretations, these cannot be true beliefs or knowledge, even though we are attempting to collect information as accurate as we can. On the other hand, some straightforward information, like physical appearance, occupation, and nationality, we can mark as true beliefs, and this could thus be argued to be knowledge about the person if we had good reason to believe it (i.e. they tell us, we see them at work, etc.).

SaralynPurdie01:37, 17 November 2011
 

On knowledge for the week of November 14, I would like to include a reference to the book by Bertrand Russell titled The Problems of Philosophy, 1912. In Chapter V, Russell mentions knowledge of things and knowledge of truths. Russell refers to knowledge of things by acquaintance and by description. Beyond sense-data, he includes acquaintance by memory and acquaintance by introspection. In addition to acquaintance with particular existing things, Russell includes acquaintance with universals, such as whiteness. He writes that awareness of universals is called conceiving, and a universal of which we are aware is called a concept; and that among the objects with which we are acquainted are not included physical objects (as opposed to sense-data), nor other people's minds. These things are known to us by what he calls 'knowledge by description'. There is also a nice discussion of propositions. November 17.

JamesMilligan08:44, 17 November 2011
 

If a kid is crying and I'm trying to stop her from crying, I might try to figure out why she is upset. I would perceive, reason, etc. about why she is upset; however, it isn't necessarily knowledge of the cause of her crying that I'm aiming at, but rather any belief that would lead to a solution that stops her crying. So although she might be crying because she scraped her knee, I might decide to believe that she is crying because she wants some ice cream and her parents won't buy her any. And although the belief is wrong, I might be able to stop her from crying by buying her ice cream, and it doesn't really matter to me whether the belief is true or not; so my aim isn't knowledge but any belief that would be useful.

DennisPark21:30, 17 November 2011
 

I believe a lot of sensory data obtained by individuals lacks epistemological character, in that the individual’s primary purpose is not to obtain knowledge. I agree that empiricism and sensory data are most commonly used in the pursuit of knowledge; however, all individuals are subject to their own biases. People often project their biases in order to interpret the world in a way that will improve their lives. Although these individuals may realize deep down that they possess flaws or shortcomings, they may search for data which reaffirms their desired beliefs, while dismissing data which refutes them. An example of this could be a woman who believes she is the most beautiful woman in the world; if she sees someone who is prettier than she is, she quickly turns away and tells herself that she did not get a close enough look to determine whether this woman was more attractive. Although this is a somewhat unrealistic example, and the idea of beauty is quite subjective, I believe it illustrates how people are prone to gather sensory data which benefits them, while disregarding other data that negates their desired beliefs.

ChadMargolus01:53, 18 November 2011
 

A belief qualifies as knowledge if, in acquiring it, one has achieved the basic aim of the inquiry that led to it. For example, when someone tries to find a logical explanation for an unnatural phenomenon in which they were involved, they tend to perceive and think. However, due to the rush of shock that has overcome them, their search for an explanation is hurried, and they would probably settle for the first explanation that calmed their worries. The search here is not for knowledge; rather it’s for the slightest plausible answer that can put the world back to what they know it as. Also, a deep skeptic would argue that when people attempt to find conclusive reasons why their beliefs can be claimed as knowledge, they cannot, even though they are thinking, perceiving, and to some extent conjecturing. This is what leads deep skeptics to believe we have no, or very little, knowledge. Therefore the qualities of our present beliefs create scenarios in which we are using epistemic methods of thinking, but it’s something less than knowledge that we are aiming for, even if we don’t know it ourselves.

EbenzerOloidi08:10, 22 November 2011
 

I would say that when a judge is hearing the arguments of lawyers, they are going through such an instance: they are thinking, perceiving, and conjecturing while assuming the whole time that no true 'knowledge' is being brought forth to them. They have to think deeply, try to appreciate the 'beliefs' of each party, and pick and choose certain instances which to them constitute something more along the lines of 'knowledge'. The judge, meanwhile, fully knows that in delivering a verdict they are hardly delivering any 'knowledge', but only trying to piece together the evidence presented to them to create a version of knowledge, without aiming to deliver or create any true knowledge.

AnthonyMayfield01:33, 28 November 2011
 

As far as the "art"-related examples go, the mechanisms of thought often still aim at knowledge, even if that knowledge may not be attainable. We may seek to know what a work of art "means", or conjecture about the state of mind of the artist who created it. There is also knowledge of the work itself; for example, one may know many things about a piece of music, such as the details of its composition, the instruments used, and how it was put together both concretely and theoretically. This is more in line with acquaintance: someone may ask "do you know such-and-such song?" and one may answer in the affirmative if one is acquainted with that song.

ZacharyZdenek21:50, 28 November 2011
 

Similar to the idea of the consumption of art: when one is writing a novel, poem, screenplay, television script, or creating any other kind of art for public viewing, the writer or artist is creating a fictitious world for the artistic consumer, but is not necessarily searching for knowledge. The writer must perceive and inquire both to create a logical world in which the consumer's imagination can exist and to create something that the consumer will appreciate. In the case of the television screenwriter, he must conjure up a fictitious series of events that will both appeal to the viewer's imagination and make logical sense. The writer is not searching for knowledge, but is perceiving what will appeal to the audience in order to succeed among consumers.

CaitlinMcKewan22:37, 6 December 2011
 

forum for week of 21 Nov: the appeal of truth relativism

Edited by another user.
Last edit: 23:39, 22 November 2011

I am puzzled when students are suspicious at the mention of truth - as in "to be known, a claim must be true" or "some beliefs are true and justified, but not known".  They ask "true for whom?" or "true from whose perspective?"  And when I say "neither, just plain true" it is their turn to be puzzled: what could I mean by that?
I can think of several sources of the idea that when something is true it is true for someone:

- A person can think that something is true, so we might say it is true for them. (But they might be wrong mightn't they?)

- When we find out something is true, we have to know it first. So it is hard to find simple descriptions of things that are true that are not believed true by you.  (But still, you ought to be able to convince yourself that there are more complicated examples.)

I'll talk about both of these on Tuesday.  But do you think either is a diagnosis of the puzzling but common belief that "true" is always "true for someone"?

AdamMorton00:24, 21 November 2011

The reason I lean towards relativist ideas of truth is that there seems to be something intrinsically wrong with the notion of absolute truth. However, it is difficult to articulate precisely what is wrong. The main problem I have is the way in which absolutists deal with the future. The example used in class was the flipping of a coin. It is either true or false in the present that the coin will land heads in the future; if it’s true, then the coin will, in fact, land heads. But this argument seems to assert that the event is true before it even happens, without regard for other possibilities. I view this argument as extremely close to determinism, and I don’t wish to conceptualize truth in this way. I want to have the free will to influence which events will become truths in my life. Furthermore, I find relativism appealing because, as humans, we have defined the concept of truth. For instance, the universe would exist without us, but it would not be “true” in relation to us. There is an inability to get outside of our human experience, and therefore I think that truth is relative to humans.

StefanRaupach03:13, 23 November 2011
 

If I recall correctly, I think Dr. Stephen Hawking uses the expression we [humans] create history in his application of Dr. Richard Feynman's quantum theory to the universe. I think history in this sense equates with truth.

JamesMilligan08:27, 24 November 2011
 

The source for Dr. Stephen Hawking is his 2010 book titled The Grand Design.

JamesMilligan08:30, 24 November 2011
 

"It is either true or false in the present that the coin will land heads in the future, if it’s true then the coin will, in fact, land heads."

This isn't right, though, on any theory. There are always infinitely many possibilities for any action. Leaning towards the probable doesn't create truth.

"I view this argument to be extremely close to determinism, and I don’t wish to conceptualize truth in this way. I want to be able to have the free will to influence which events will become truths in my life."

I don't think you should shape your views on what you want to conceptualize as true, but on what is true. Besides, free will has many levels, but there is a fundamental flaw in the very idea of it, so it seems irrelevant to truth.

PerrySieben17:17, 24 November 2011
 

If it is true, then it should also be true for everyone; perspectives shouldn't really play a role.

DannyRen19:13, 24 November 2011
 

I think that different experiences can create different schemes of interpretation, and therefore different foundational beliefs. For instance, conceptions of gender vary widely between cultures and periods of time: notions of man and woman are obvious to us, but potentially not for another culture, so a person can say, "I am a man" and be a woman and be speaking the truth, because to them man means all human beings. Or perhaps I can say, "I am a man" and to another person that is not true, because they don't have the same conception of gender. Or you could have two boys in Roman times and they both say "I am a boy" and for one it is true and the other it isn't, because one is a slave and therefore not a person at all.

SpencerKeys20:50, 24 November 2011
 

I believe that there are fundamental truths. We make use of whatever "true" beliefs we have, and whenever more rational beliefs come along, we abandon our "true" beliefs and make use of the new ones. In this way, we constantly strive for the underlying truths. When someone believes something to be "true", that is purely arbitrary; it is only true to them. Knowledge is not arbitrary: it requires fundamental truths.

WanTaiTsang13:48, 26 November 2011
 

Truth, to me, seems to be a very relative term. Something can, and likely inevitably is, true for one person and not for another. When thinking of truths as "true for someone", though, it would seem to me that that line of thinking is more applicable to opinions. For example, if someone believes that their favorite food is pizza, then this would be "true for them," while obviously not true for everyone. Addressing the second example, it does seem to be a conundrum that one would be able to have something be true for them without first knowing that it is true. Keeping with the examples of food, say it is true for Fred that the most delectable taste to his palate is that of pizza. But Fred has never actually had pizza before. So, although it could be considered true for Fred that his favorite food is pizza, he still does not know or believe this. At the same time, it could just as easily be argued that, since he has never actually tried pizza, the truth is not, in fact, true for him until he tries it. This creates a rather confusing conundrum and, unfortunately, doesn't really seem to further the discussion but simply adds more debate. To be short, while both analyses may be used to diagnose the issue of something needing to be "true for someone" in order to be a "truth," it would seem that both have very recognizable flaws that can easily be debated.

Fmillay06:29, 28 November 2011
 

I think that, as well as the arguments that can be made for relativism in terms of truths being relative from person to person at one time, there are even simpler arguments when you think about time. To say that Stephen Harper is the Prime Minister of Canada is true for anyone right now; to say the same thing a hundred years ago would obviously not have been true for anyone. It seems there are lots of transitory truths that are relative to groups of people as well as to individuals.

JamesRobinson18:14, 28 November 2011
 

It is important here to make qualifications, though. Saying that pizza is my favourite food isn't only true for me, but objectively true, as the indexical "I" indicates the person speaking, which corresponds for everyone else to the sentence "your favourite food is pizza". Similarly for the Stephen Harper example: while it is an objective fact that he is PM at this moment, it was also an objective fact a hundred years ago that he was not. It depends completely on the moment of the utterance, but that input does nothing to alter whether Stephen Harper is PM or not. The claim made is really that Stephen Harper is PM at the time at which the claim is made, so the complete thought would be drawn out as "Stephen Harper is PM of Canada in November 2011", which will always be true, regardless of the location of the speaker in time. So, in short, yes, I believe truth to be objective.

ZacharyZdenek21:59, 28 November 2011
 

forum for week of Nov 7: kinds of apriori beliefs

There are two obvious sources of apriori beliefs: mathematics and the meaning of words. "2+3=5" and "cats are animals".
Can you think of any examples that don't fit into either category? If not, can you suggest a reason why there are not any?
(You don't have to commit yourself to thinking that there are apriori beliefs, but just that there are beliefs someone might plausibly take to be apriori.)

AdamMorton20:45, 5 November 2011

Kant posited that there are additional apriori beliefs, called synthetic apriori beliefs (such as the belief that effects have causes), that are necessary in advance of any evidence in order to collect and make sense of evidence. Kant’s explanation for the truth of these beliefs is somewhat related to idealism, in that these beliefs are necessarily true because we make them so, due to the structure of our mental processes. Although, as Quine argues, perhaps even these beliefs may change, we must nevertheless maintain some central apriori beliefs in order to live and form any aposteriori beliefs.

To add to Kant’s list of apriori beliefs, I wonder whether our basic biological instincts, such as the sensation of needing to eat or sleep, would also satisfy Kant’s criteria for synthetic apriori beliefs, since we seem to be aware of our need to do these things in advance of collecting evidence to verify it (though the evidence does follow shortly after fulfilling these needs, in the sensations of fullness or reinvigoration). Additionally, such instinctual reactions are necessary in order for us to be able to collect evidence to form further beliefs, since we would not be able to live without fulfilling these needs. Should instincts be classified as apriori beliefs, however, they would likely be, like Descartes’ “cogito ergo sum”, considered apriori but not necessary, since animals’ instincts to eat could have been false. Of course, in that case, we wouldn’t have evolved in the way we have and thus wouldn’t be here to speculate on this matter.

Apriori beliefs seem to be extremely limited in number, and this may be due to a false distinction between beliefs that can be obtained in the absence of evidence and those that cannot, since even apriori beliefs involve the presence of a world. There is, therefore, a limit to the beliefs we can reasonably say could be apriori, though I am unconvinced that even these beliefs are entirely so; instead I would call them "as apriori as possible". If all that existed was a floating brain, would it really be able to form beliefs about mathematics or cause and effect without any evidence of the existence of objects? I would be inclined to say no: evidence requires apriori beliefs, and apriori beliefs require evidence. Which came first would seem to be something of a “chicken or egg” question.

AlexandraKnott01:37, 7 November 2011
 

I wouldn't say biological instincts are apriori beliefs. I think that society learned from evidential experience how to satisfy its needs for hunger and thirst (what types of food to eat, e.g. how to avoid eating poisonous berries, etc.). The same notion applies to sleep: through evidential experience one learns that 8 hours of sleep is the "optimal amount", not a pregiven. Getting back to the question at hand, I would argue that there is in fact only one source of apriori knowledge, and that is the meaning of words itself. Mathematics, to an extent, is reliant upon a specific language which all must agree upon, and upon the terminology and the way this specific diction interacts. Terms like "equals", "addition", and "divisible" are reliant upon the language one uses to describe them, but I think those terms cannot be accepted as given; rather, they must have been created by someone (whoever was creating language) in order to aid the process of comparison and causation among numbers. Now, of course, one could argue that math is universally agreed upon and cannot be fundamentally challenged, but that doesn't change the fact that math is understood the way it is through the way one understands meaning in the words used to describe what is at hand. For this reason I believe there are no other apriori beliefs than those stemming from the interaction of words. (Deductive reasoning and analytical beliefs all come from the way words and their meanings interact.)

DanielKostovicLevi20:22, 7 November 2011
 

I had a long debate with a previous metaphysics prof about whether the rules of games could be considered a priori knowledge, e.g. that three strikes constitute an out in baseball. In the end this really just comes down to how wide the semantic content of the term "baseball" is. Does the term "baseball" necessarily denote "the game in which three strikes constitute an out", or is that instead non-essential data to the concept of baseball that must be obtained via experience?

ZacharyZdenek01:30, 8 November 2011
 

I would suggest that logic, or more specifically the deductively valid argument mentioned in the text, could be viewed as a combination of the two categories put forth in the question. Predicate logic, or word math as I think of it, would seem to be a manner of achieving apriori beliefs. Predicate logic is a language that can be learned, with rules for sentence structure and grammar, but which uses quantifiers, predicates, and variables in place of words. Once a person learns the language, it can be used to analyze propositions and arguments. An argument is valid if and only if it is impossible for its premises to be true and its conclusion to be false at the same time. Once you understand that definition and the language of predicate logic, it becomes possible to reason and arrive at new true beliefs without having to acquire new evidence.

Now granted, this relies on an understanding of the logical language in advance. But that is no different than any other analytic belief which all require a grasp of language. Predicate logic is just one form of logical reasoning that can lead to apriori beliefs, but it's an interesting one as it seems to combine two separate sources into one.

AmandaJohnson03:42, 8 November 2011
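An aside on the definition of validity quoted above ("valid if and only if it is impossible for its premises to be true and its conclusion to be false at the same time"): for propositional arguments this definition can be checked mechanically by brute-forcing every truth assignment. Here is a minimal sketch in Python; the helper names `valid` and `implies`, and the two example arguments, are my own illustrations, not anything from the course text.

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'a -> b' is false only when a is true and b is false."""
    return (not a) or b

def valid(premises, conclusion, n_vars):
    """Return True iff no truth assignment makes every premise true
    while the conclusion is false (the textbook definition of validity)."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counterexample assignment
    return True

# Modus ponens: P, P -> Q, therefore Q  (valid)
mp = valid([lambda p, q: p, lambda p, q: implies(p, q)],
           lambda p, q: q, 2)

# Affirming the consequent: Q, P -> Q, therefore P  (invalid)
ac = valid([lambda p, q: q, lambda p, q: implies(p, q)],
           lambda p, q: p, 2)

print(mp, ac)  # modus ponens passes, affirming the consequent fails
```

Modus ponens comes out valid and affirming the consequent does not, which matches the point that, once the definition and the language are understood, new true beliefs can be reached without gathering new evidence.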
 

I think that the previous poster hit on a very important point, regarding the placement of predicate logic within the framework of a priori beliefs. I would argue, however, that predicate logic is an example that falls under both categories of math and meanings (leaning perhaps more towards math, but that is a separate issue). I don't think that there are any a priori beliefs outside the realms of mathematics and definitions (which, incidentally, seem to overlap a lot), because anything we could justify independent of experience, or before evidence, would need to be thought of within a framework of logical rules, and as the previous poster has already posited, it seems as though predicate logic itself is a combination of mathematical rules and definitions. Even language, when really boiled down and stripped bare, seems to be a manifestation of some sort of syntactic structure or another, which really is no different from mathematics (for instance, consider the roles parsing trees play in both the study of language and mathematics). In short, it seems to me that a priori beliefs outside of mathematics and definitions are difficult (perhaps impossible) to really find because we are structured, in the absence of evidence, to think in terms of mathematics and definitions, and in combinations of the two, the primary combination being predicate logic.

AledLines08:15, 8 November 2011
 

I don't think there is such a thing as apriori knowledge, because any sort of reasoning requires previous evidence. For example, we know 2+3 is 5 because we have learned about the mathematical foundations that are needed in order to solve the problem. On the other hand, even if we haven't personally solved 123456 + 12314142141412414 before, or seen anyone else solve it, we get the answer from evidence we have learned previously. Hence I believe no such thing as apriori knowledge can exist, because even the tiniest events in our lives require reasoning and evidence, even though the process may be so subtle that we don't realize it.

YangSunnyLi14:04, 8 November 2011
 

If the offspring of two Asian parents has black hair, it is apriori to believe that the parents will also be of Asian ethnicity with black hair. In order to classify the individual's features, we need to assume that for every Asian-looking person at least one of their parents is of Asian ethnicity.

RunZheLi19:19, 8 November 2011
 

Mathematics in itself is a language, the only universal one to exist. The numbers have meanings in whatever language they are translated into when we discuss them. I would say all our beliefs are based on the meaning we give to them; they are all relative to the way we perceive the world. But they are also all interlinked, and they depend on other beliefs. Even a mathematical belief like 2+2=4 is based on certain beliefs we have about what numbers mean and what "+" and "=" mean. Without having those beliefs, we cannot understand 2+2=4.

YaradeJong20:32, 8 November 2011
 

A Priori Belief Example:

4.3 Quantum Mechanics. Other physical candidates for backward causation can be found in the physics literature. Richard Feynman once came up with the idea that the electron could go backwards in time, as a possible interpretation of the positron (Feynman, 1949). In fact he imagined the possibility that perhaps there was only one electron in the world, zigzagging back and forth in time. An electron moving backwards in time would carry negative energy, whereas with respect to our ordinary time sense it would have positive charge and positive energy. But few consider this a viable interpretation today (Earman, 1967, 1976).

More recently, the Bell-type experiments have been interpreted by some as if quantum events could be connected in such a way that the past light cone might be accessible under non-local interaction; not only in the sense of action at a distance but as backward causation. One of the most enticing experiments of this kind is the Delayed Choice Quantum Eraser designed by Yoon-Ho Kim et al. (2000). It is a rather complicated construction. It is set up to measure correlated pairs of photons, which are in an entangled state, so that one of the two photons is detected 8 nanoseconds before its partner. The results of the experiment are quite amazing. They seem to indicate that the behavior of the photons detected these 8 nanoseconds before their partners is determined by how the partners will be detected. Indeed it might be tempting to interpret these results as an example of the future causing the past. The result is, however, in accordance with the predictions of quantum mechanics.

Source: Stanford Encyclopedia of Philosophy, "Backward Causation".

JamesMilligan07:54, 10 November 2011
 

I believe the feeling of threat could be one example of an apriori belief that involves neither mathematics nor the meaning of words. When impending danger is near, most people are able to detect it without any foreknowledge. For example, you may have heard a person talk, after a tragedy or an unfortunate incident, about how they knew "something was just wrong" or felt that "something bad was about to happen". Sometimes people go out of their way to react to the feeling of threat, although there may be no substantial evidence that there are actually any threatening factors around them, only to have the threat confirmed afterwards. However, I understand that there are exceptions to this, such as mere paranoia, where everything may seem like a threat when it is not, and (bad) luck playing a role in whether what they expected ends up happening after all.

ShinHyeKang02:16, 12 November 2011
 

I believe that human emotion in general can be seen as an apriori belief. We are able to predict how our emotions would react to certain events. For example, we are able to predict that failing an exam would be followed by a negative emotion, and that acing an exam would result in a positive emotion.

JinKim08:28, 13 November 2011
 

perception and evidence: forum for week of 10 Oct

Edited by author.
Last edit: 19:31, 10 October 2011

Mo: Empiricists have a theory of how perception works, that it gives pure unprejudiced data.  But that's wrong.  Perception is just loaded with our prejudices and opinions. So there's no reason why we should worship it as evidence that settles all issues.  
Shmo: No no no.  You can be some sort of an empiricist without having that theory of perception.  You just have to think that slowly, by appeal to experimental evidence, we can root out the prejudices and see which opinions actually stand up.
Mo:  Why think that this will succeed?  If you appeal to experimental evidence that is shaped by some assumption, it just gets more and more convincing, even if it is completely wrong.
Shmo:  Nature has a way of giving evidence that cuts through the firmest convictions.  If a belief is wrong sooner or later one of its predictions is just obviously false, even to the most prejudiced perception.  
Mo: Those are your prejudices.  Just give me a reason.

Join in, on either side.
AdamMorton01:46, 9 October 2011

I agree with Shmo's side of the argument. Admittedly the empiricist's epistemic ideals are out of reach for human beings (due mostly to Mo's first statement), but that is no reason to abandon perception as evidence. Despite experiments showing that human beings' perceptual beliefs can be prejudiced, or that we can be tricked, if we are determined to bring one of our beliefs into question we do not usually fall into such traps: as in the scientific method, we repeat our observations more carefully and draw conclusions in which we can have more confidence.

JamesRobinson03:57, 10 October 2011
 

I also agree with Shmo. I think that even if you appeal to experimental evidence, it is not based on one result or observation. Rather, it is through multiple results that we begin to strengthen our assumption.

DavidTam02:28, 11 October 2011
 

Mo does have a point that some of our perception is loaded with prejudice and opinions. Social engineering plays a large part in what we do. If someone teaches a young child that black is an evil colour and white is a good colour, that child will grow up with that belief and may even block out all other beliefs, because the child was taught it at an early age and does not refute it. Then, during a trial where the child, now a grown man, is one of the jurors, he may propose a verdict of guilty against a black person based not on the evidence but on having been taught that black was an evil colour.

DannyRen03:00, 11 October 2011
 

Hello

PerrySieben16:26, 11 October 2011
 

Both Mo and Shmo make valid points. I think that how we perceive is related to our general ideas and experiences, which have caused us to learn about our environment over time. We take in the raw sense data and combine it with our background beliefs to interpret what we are experiencing and create a perception (often subconsciously). Because of this, Mo would say that perception is not an ideal source of evidence. However, as Shmo states, we can avoid this by heightened attention to our perceptions and how they relate to our prejudices. For example, if we are extremely careful with our perceptions, over time we can increase our ability to gain unbiased evidence. A scientist is generally better than the average person at observing things such as experimental data in an unbiased way, because he has learned to over time. However, it is extremely difficult to be attentive and careful enough to get proper perception all the time. Therefore perception still may not be the best source of evidence, because it is labour-intensive to do correctly. It may not be the best source, but I still think it can be used as a source of evidence.

StefanRaupach17:48, 11 October 2011
 

Shmo: Nature has a way of giving evidence that cuts through the firmest convictions. If a belief is wrong sooner or later one of its predictions is just obviously false, even to the most prejudiced perception.

The problem, I find, is the "later". If this "later" doesn't show up for hundreds of years, maybe thousands, the belief will have been rooted so deeply in the human mind that it will be too hard to uproot.

KaiPeng19:26, 11 October 2011
 

I agree that empiricism has flaws. Yet I don't think that we should bow the knee to the skeptics and say they have won. I believe this cycle of back and forth happens all the time throughout history, and at this juncture the skeptic seems to have the upper hand. Still, I would say, and believe, that somehow and in some way rational methods can still be used for gaining knowledge and truth, in order to quiet the critics' accusation that we can't KNOW anything. But that is something to be worked out.

AndreRoberge05:20, 12 October 2011
 

I think that both Mo and Shmo raise valid arguments. However, I believe that Shmo's position is a better one, as it provides a more reasonable, logical, and positive view for the future. I see the optimism in his ability to claim that we can weed out personal biases and beliefs in our perceptions. I too argue that we are capable of doing this. It would probably take a good bit of time, but it is definitely possible. Once we can perceive things from an unbiased viewpoint, we can experiment very accurately and precisely.

AamirQamruddin20:19, 12 October 2011
 

I agree with Mo because Shmo sounds like a schmuck, and I make it a habit not to believe what schmucks say.

StephenRazis22:55, 12 October 2011
 

On the side of Mo, I would like to include an effect of evolution on the perception of data, as it applies to Australian jewel beetles (Google: Discoverers of Beetle Sex with Beer Bottles). In a 1983 paper titled "Beetles on the Bottle: Male Buprestids Mistake Stubbies for Females", biology professor Darryll Gwynne of the University of Toronto and co-author David Rentz of Kuranda, Australia, identified male jewel beetles attempting to mate with beer bottles. The mating errors have to do with the brown colour of the beer bottle, and with the tubercles at the bottom of the bottle that help a person grip it. The tubercles reflect light in a similar way to the brown wing covers of the female jewel beetle. A September 29, 2011 Q&A internet article claims the male jewel beetles have evolved over millions of years to prefer the largest brown female they can find, because the bigger females have more eggs. In this case, evolution appears to have prejudiced the male jewel beetles' perception of the data.

JamesMilligan02:58, 13 October 2011
 

Mo schmo…

Some time ago, a man whom many people think of as one of the greatest thinkers of all time endeavored to restructure his entire knowledge/belief base by asking what he could prove, and still he arrived at the conclusion that God exists.

Descartes’ plan was to question everything, and in doing so he believed he would arrive at only true beliefs. Doubt everything was the idea he started with. Everything but the existence of God, that is, because when he broke down all his knowledge and questioned everything his perception told him (perception being merely the brain’s interpretation of external stimuli, about which we cannot know anything beyond what our brain constructs), he still found God – because he perceived that there was something more perfect than himself. His problem here is in his idea of perfect. He talks of being deceived but says that God would not deceive him, and that God is perfect because he is infinite. However, to be infinite God must have everything that exists within him, and deception, being a thing which was already discussed and therefore in existence, must also be a quality of God. Logic failed Descartes because the belief in God was so strong that it was rooted in his very perception of the world. Beliefs and biases are like this – if you believe strongly enough in something you will always find evidence for it. Furthermore, you will always deny anything which refutes your belief, or attempt to explain it in a way which can work within your belief structure.

Shmo, as admirable as his stance is, really is a schmuck. He said, “Nature has a way of giving evidence that cuts through the firmest convictions.” This is his best argument, and it is complete and utter tripe. I personally know people who live surrounded by people who do nothing but use and abuse each other, yet they continue to hold to the belief that humans are universally good at heart. Having this belief about humanity, they will find any explanation they can for the actions of those around them. I am not saying people are naturally evil, only pointing out that there are people who hold to beliefs despite being surrounded by evidence to the contrary, rendering Shmo the schmuck’s statement wrong. The “most prejudiced perception” will always find excuses for contradictory events or evidence, excuses which are either explained by their beliefs or biases or, at the very least, uphold them. Such as the invocation of “God’s plan” when asked why a supposedly good God allows bad things to occur.

Though I’d defend Mo’s side in this, I wouldn’t fully agree with Mo either. True, our perceptions can never be free of prejudice. And I am saying this as a person who has tried to shed himself of all biases and look at the world as logically as possible. Most of our beliefs are things which we take for granted and do not know are there. That’s the thing about cultural views and beliefs; they are so common we barely know we have them. No matter how hard we try, we will not be able to arrive at incontestable truth through our perceptions – not as individuals, anyway. But when many people have the same perception, then we can start to believe in something as a real description of the real world. Scientific evidence in support of a theory, for example, isn’t presented by one person based on their perceptions and then accepted as fact. Findings must first be verified by other scientists repeating the experiments and arriving at the same conclusions. When perceptions of a thing are in agreement among many people, then I think it is safe to take it as “evidence that settles all issues” concerning at least that thing. Therefore, knowledge is attained not by the personal search of an individual but by the collaborative effort of groups of individuals.

WilSteele06:36, 13 October 2011
 

Both Mo and Shmo provide valid points in their argument; however, I would have to side with Mo, if I were to maintain an epistemological standpoint. I agree with Shmo that empiricism is a very efficient way of attaining evidence for knowledge, but biases and prejudices will surely produce many beliefs that are not knowledge. Mo states that “if you appeal to experimental evidence that is shaped by some assumption, it just gets more and more convincing, even if it is completely wrong.” I believe this statement makes a very valid point by suggesting that each individual is subject to their own biases, and these biases will often prohibit them from attaining sound evidence. For instance, someone who believes they have seen a UFO, or that 9/11 was a conspiracy, will often validate their beliefs by searching for other people who share them. They will remain devoted to their beliefs because of their apparent evidence, and they will likely avoid critics who may debunk their theories. This is probably a fairly uncommon type of belief attained through empiricism; however, the fact that such beliefs can be produced by people's biases shows that nature, and society, will not always “cut through our firmest convictions” if the belief or bias is strong enough and the individual refuses to be persuaded otherwise.

ChadMargolus20:17, 13 October 2011
 

I find that Mo is the winner here. From the argument, I can tell that for every reason supporting empiricism Shmo can come up with, Mo can find an argument against it. Whatever Shmo says in defense of empiricism, Mo can argue that "those are merely opinions/prejudices". For example, Shmo says "Nature has a way of giving evidence that cuts through the firmest convictions." But the evidence provided by Nature is open to perception; if the perception that provides the data can be obscured by prejudice and opinions, why can't the evidence provided by Nature be as well?

I don't agree with Mo, but I guess you can't really completely justify empiricism. However, the results empiricism has provided have evidently benefited mankind.

WanTaiTsang06:16, 14 October 2011
 

I think both Mo's and Shmo's positions are valid. However, on pragmatic grounds I would agree with Shmo; but more needs to be said by Mo for us to understand his position better, so it is hard to take a side, really. It is true that Shmo could just add to his pool of wrong beliefs even while weeding out the beliefs that do not help us effectively predict something. But what would Mo have us do instead? I think a human aim should be to get true ideas about the world and ourselves.

But at what price? It seems that only privileged people would really hold a position like skepticism (which I take Mo to be holding, or at least a thin/local version of it). Humans are alive, and they will think and act in concert with each other, their environment, their respective pasts and aims for the future, their finite limitations, their bodies' limitations, and so on. Couldn't we focus more on getting along with each other, and mediating this goal with a search for truth, if there is such a way (and if there isn't, wouldn't you say the effort towards this mediation is still worth it)? Scepticism, in short, is philosophical wankery. Bertrand Russell said something along the lines of "skepticism is an entirely tenable philosophy, but it is impossible psychologically" (a paraphrase/rough quotation).

Here's Richard Rorty's take on related matters: http://www.youtube.com/watch?v=CzynRPP9XkY AND also http://www.youtube.com/watch?v=oQDYdfuuhAs&feature=related

Sometimes Rorty can be agitating. He once said in a radio interview that we shouldn't tell a child not to stick his hands in a fire because of the temperature of the fire (a scientific explanation), but, instead, we should tell a child that "the community you are a part of suggests you not stick your hands into that fire." Well, a bit too far for me. Science and empirical evidence are very beneficial to us as a species. But this kind of position is an exaggeration of why I would agree with Shmo. Mo's position is untenable for a living, breathing, human being. Shmo's position is certainly not perfect. Empiricism cannot exactly get closer and closer to the truth because prejudices, that is, interpretations via our senses must always be made (we cannot step out of our own bodies to observe the world, can we?). We cannot step outside of sensory perception: Shmo should admit his aims are too idealistic, actually untenable, while Mo should admit he is a human being who must interpret and perceive if he is to survive (if he cannot admit this because he "cannot be certain," well then we should leave Mo to play in the sandbox by himself).

ZlatanRamusovic22:39, 14 October 2011
 

I agree with Mo, because as soon as you perceive something, you apply your prejudices and prior knowledge and assign attributes to whatever it is that you're perceiving. Whether subconsciously or not, it happens, so there's no way that perception can be worshipped as evidence that settles all issues. Although, that is taking empiricism to the extreme, as Shmo says. However, I do not agree that the prejudices can be rooted out. Nature may give evidence, but our senses are flawed, so even if said evidence seems to cut through the firmest of convictions, there are tons of situations where those senses could deceive, rendering the evidence completely invalid, only half-verifiable, or slightly skewed. In any of those cases, the conclusion still wouldn't be accurate.

AntaresRichardson06:22, 17 October 2011
 

We all share the same fundamental beliefs (that we are human, that fire burns the skin, etc.), which have worked for us since the beginning of time. The ones that don't work, the erroneous beliefs, are discontinued. Even if our truths aren't true to aliens or other non-human beings, they are OUR truths. These fundamental beliefs, or truths, or perceptions give us the basis for ongoing empirical experiments, which in turn make for more truths. If we don't go this route, then what is the point of seeking any truths? We might as well just lie around, do nothing, and be in chaos. That being said, we have to be careful how far we use our perceptions as evidence and which perceptions can actually constitute evidence. How far can the deep skeptic go? Very far, but is it facilitating or debilitating to life?

RichelleOnyschtschuk17:42, 10 November 2011
 

forum for week of 31 October: error, ignorance, use

Suppose you had the choice between
- being a brilliant but very abstract scientist, who would discover many true but utterly useless facts about the universe
- being a very effective technologist, who would come up with many roughly and approximately correct discoveries that would lead to inventions that made life easier and more fun
- being a spiritual leader, who would say many things that others had no reason to believe or disbelieve, but which would give them calm and a sense of purpose

Which would you choose?  Or "it all depends": depends on what?

AdamMorton04:56, 29 October 2011

Oh this is a question that simply has my name on it.

I would seek to minimize both error and ignorance with absolutely no concern for maximizing use. I would strive to be the abstract spiritual scientific leader.

We can think of error-avoidance and ignorance-avoidance as being two ends of the epistemology spectrum. Rather than see the two ends as opposing sides that simply cannot co-exist, I think it better to see them as opposing sides that complement each other. The immediate analogy that comes to mind is science and art: these can be seen as opposing ends of a spectrum as well, yet both are essential to human civilization, to knowledge, and to just being human. Striving to maximize both art and science separately doesn't create a problem at all; in fact it creates progress. We don't try to find some common ground for the two within a common framework. That would be folly and cause all sorts of chaos. Disciplines where science and art merge, for instance, often have more examples of sloppy science and bland, empty artistic discourse (psychology is an example of this, I believe: http://www.arachnoid.com/psychology/index.html).

Similarly, for epistemology, you want to extend your boundaries in all possible directions of the spectrum, being as good a coherentist (ignorance-avoidance: the spiritual leader/art) and as good a foundationalist (error-avoidance: the abstract scientist/science) as possible. This means we're not tied to a single method, nor are we reducing the quality of our knowledge by sticking to the middle, where there's both a moderate amount of error and a moderate amount of ignorance. Extend both ways as far as possible to make progress.

As for times when the two clash: I'd constantly argue and compete with my other self to make the abstract scientist a better scientist and the spiritual leader a better leader. Competition should force further excellence in either side, with revised models meaning a closer approximation of knowledge. Yes, sometimes one side will win unanimously, and other times the other side will, but at least in this way it's clearer which side has holes in which arguments; likewise, you take only the best of both sides.

As for the technologist: I'm a thinker, not a doer. That's just me.

However, if the question had meant that I could only choose one: I'd choose the technologist solely because I cannot choose between the other two.

-Cornelis Dirk Haupt

Frikster06:09, 30 October 2011
 

I would have to choose the technologist. While theoretical knowledge is fascinating (or maybe just to me), from a utilitarian standpoint I'd rather devote my energies to something of value beyond mere mental stimulation.

ZacharyZdenek20:23, 30 October 2011
 

I am going to take a bit of a utilitarian approach to answering this. You need to weigh the implications. The intuitive first step is to ask which option makes the individual directly involved the happiest. This appears to be what is being asked of each student replying to this topic.

So to move away from what is already being asked of us, what should also perhaps be considered is what the world already has and what the world needs. If the world already has lots of useful "stuff" but no truths, then I would suggest that perhaps we should seek out some fundamental facts.

We should weigh this against how highly the individual values each option: for example, if person A would get very high utility and happiness from being a technologist while the planet is already at an average level for each other choice, then perhaps person A should be a technologist. Alternatively, if person A would get only a medium level of happiness from being a technologist but it would greatly benefit the planet in one way or another, then that should provide some direction.

Additionally, I think it may be useful to consider the implications of the nature of what it is exactly we are doing in each of the three forms. By that I mean the nature (good vs. bad) of what we are doing. If as a technologist you are creating nuclear weapons that will cause great misery and destruction to humankind, that decreases utility and should be avoided. If the spiritual leader is someone like Charles Manson and has a devastating effect, again my suggestion is avoidance. The converse is also true.

Interestingly, when trying to evaluate the "useless" truths of the scientist under the final consideration I outlined just above (the nature of truths), we find that truths are not inherently evil or good; they do not produce widespread devastation or joy by nature. Truths and facts, useless as they are, depend on how they are subjectively interpreted by humans. One could argue that they can be valued in and for themselves. To illustrate what I mean: Einstein developed a theory that was useless until it was interpreted by humans and led to the invention of the atomic bomb. It is not that the theory was inherently good or evil; people created something from it. Similarly, an understanding of something such as germ theory would prove ultimately useless until human interpretation steps in to apply it, say to understanding cancer, where we can now use such a theory to try to develop a cure. Again, such truths and facts depend on interpretation and do not inherently promote widespread good or evil. For this reason, I would choose to be the abstract scientist.

RachelHolmes03:32, 1 November 2011
 

I will go with the technologist. With the abstract scientist, the word "useless" in the statement implies "pointless" to me. However, such work may not really be useless: it helps us have a better understanding of the universe, and I find these understandings rather interesting. Still, being a person like that takes a lot of effort, and the facts are really abstract. Being a spiritual leader, who says things that people have no reason to believe or disbelieve, is also pointless to me. I think that many of us will not believe things without any reason behind them. Rather, even when the evidence for a certain belief is not justified, we can still believe things based on false reasoning rather than no reason at all. Therefore I think being such a leader will not result in many followers, even if the beliefs give people a sense of purpose and calm. I remember Clifford states that before one believes anything, without realizing the consequences of the belief, one will first ask whether the belief is true. In this way, this option is ruled out. The technologist, on the other hand, really helps people to have a better life. Thought experiments do amuse people, but we need more fundamental materials before we have this leisure, and these materials are produced by the technologist. In this sense, I think the technologist is not only more reasonable, but also more fundamental than the other two options.

HongkunGai09:45, 1 November 2011
 

I would pick the technologist; having the ability to invent things that make life easier is the most appealing choice. Also, of the three it seems the most useful. While being a genius-level scientist would seem to be helpful for society, if the discoveries are pointless then they would not be helpful for society or anyone. Also, the spiritual leader could very well be a scam artist or something of that nature, and while he/she could be helpful for society, he/she might be filling people with false hope.

JamesHaddad03:27, 3 November 2011
 

If exposure to true evidence does not automatically produce true beliefs, what does it take to produce true beliefs?

JamesMilligan06:53, 3 November 2011
 

I'm not going to answer the question Prof. Morton originally posted, but I wanted to write about something related to what we talked about today in class.

Prof. Morton, near the end of class, talked about how our explanatory concepts change our definitions in the way we order and classify knowledge (e.g. the Pluto example and our classifications of animals and diseases). This reminded me of an interesting project being worked on by Vancouver-born philosopher Ian Hacking called Kind Making.

For Hacking, there are two types of kinds: Indifferent and Different kinds.

An indifferent kind is something that is indifferent to how humans name and categorize it in their field of knowledge, i.e. how we order and relate different beliefs we hold to be knowledge (I'm coming at this from a coherence perspective). Examples of this would be electrons, camels, ostriches, etc.

Then there are different kinds: things that react to how they are named and placed in our field of knowledge. The prime example here would be humans. Hacking specifically deals with different human kinds in relation to mental diseases. For example, the Diagnostic and Statistical Manual of Mental Disorders (DSM) organizes our different definitions of mental disorders and diseases. One particular disorder he takes issue with is Paraphilic Coercive Disorder (link: http://www.dsm5.org/ProposedRevisions/Pages/proposedrevision.aspx?rid=416). The disorder is defined as deriving recurrent and intense pleasure from coercing people into sexual acts. His issue is with how such strange definitions come about. Many people might not consider someone who enjoys coercing people into sex to have an actual mental disorder, as if there were a chemical imbalance in their brain. So, how are these definitions created?

His explanation is as follows: if we label someone a sexual sadist when they are not actually a sexual sadist, then they will begin to exhibit the qualities of a sexual sadist, though probably not one-to-one with the official definition. If someone deviates far enough from the definition, then researchers might say, "Gosh! Look at this person who is diagnosed with sexual sadism but doesn't exhibit the exact qualities. They must have some undiscovered disorder!" So they create a new disorder that defines the qualities this person exhibits, even if they have no chemical imbalance at all! Perhaps they create something like Paraphilic Coercive Disorder. The issue is that we are not actually discovering knowledge, but creating a new field of knowledge that is specifically not discovered, but made by people.

This raises many epistemological, political, social and ethical issues, which I won't go further about here.

Hope people found this interesting.

MikeHare21:18, 3 November 2011
 

Actually none of those seem particularly appealing. Part of the problem is maybe the way "being" is used here: as if a person had to devote their life entirely to this or that with no competing interests...

If I were forced to choose, though, I suppose I'd go with being the technologist. Although the description gives the profession a sort of (in my view) superficial aim, making life "easier and more fun," it seems the best choice for combining the two desirables: knowledge and utility.

DevinEeg00:13, 5 November 2011
 

forum for week of 24 October: Inference to the Best Explanation

consider two types of problem case for inductive reasoning:
- patterns that suggest they do not continue unchanged: the earth's population, marathon times
- patterns that only a maniac would project: all times having been past, everyone I meet meeting me.
How would the Inference to the Best Explanation handle these cases? (How plausible would it be?)

AdamMorton23:48, 21 October 2011

These problem cases for inductive reasoning made me think about the assumptions that are embedded in each of these examples. The question becomes, what terms of discussion are we examining?

In the first case, are we defining the earth's population as the entirety of all living species or just the human population? If we assume that we are only discussing human population, then are we talking about the whole population, or the population broken down into subsets such as death and birth rates, geographical trends, gender or generational divisions, socio-economic inferences, or the distinction between the developed and emerging worlds? The Inference to the Best Explanation for the overall trend in global population is that it will continue to rise at an alarming, possibly exponential, rate. The pattern of increasing growth is, globally, incontrovertible in spite of war, disease, and disaster. There has been no contradictory evidence.

The Inference to the Best Explanation for the case where the lunatic believes that all times have been past can perhaps be summed up by the idea that while time is always moving forward, our sense of a moment in time immediately puts it in the past. As I type the word "now", the action itself becomes a past occurrence that can be explained after it has transpired (I typed the "now" five seconds ago). Therefore, all future events are past events waiting to happen.

My point here is that once you accept the hypothesis of an inductive argument, the IBE becomes difficult to challenge. -CLAIRE CHEVREAU

ClaireChevreau03:37, 25 October 2011
 

This is a tricky one...

It is reasonable to believe that the earth's population cannot continue increasing indefinitely, since the earth's resources cannot possibly sustain an ever-increasing number of people. In this case, the Inference to the Best Explanation would suggest that the pattern (a growing population) would eventually break and that the population would have to start decreasing. Personally, I think that this is a fairly plausible hypothesis, since it is unreasonable to expect that the population will continue increasing indefinitely, even if it has done so in the past. Of course, it is possible we will come up with an invention that will use resources in a way that can sustain everyone at all times, but it is far more likely that the pattern of ever-increasing population will break and the population will start decreasing.

For the second example, I feel that for patterns that only a maniac would insist on, the Inference to the Best Explanation would ultimately reject such patterns. The best explanation would be to conclude that it is unreasonable to rely on such patterns, since they are not justified or based on any good evidence. I hope that I am on the right track with these questions. -VERONIKA BONDARENKO

VeronikaBondarenko06:39, 25 October 2011
 

In the first case, if we look at a graph of the world's population, it has increased over time; there are, however, points in the world's history where the population has drastically decreased, for instance during the Black Plague in Europe and the ice age. Events in the world's history tell us that we are at the mercy of weather, disease, and the exhaustion of the world's resources. Simple induction tells us that the world's population will increase at a rate of X and hit a population of Y at some point. The infinite number of possibilities forces us to doubt that the earth's population will ever hit 30 billion people. It does make sense for us to conclude that, at the rate the earth's population is growing, it will hit a certain number of people, and until something drastic occurs we should believe it will happen. In the case of marathon times, we will plateau at some point; we know that human beings can only run so fast, and at some point it will be impossible to go any lower. But the future is so uncertain, how can we conclude such a thing? We cannot conclude that human beings will never evolve and run at a speed that we previously believed was impossible. Because of this we must suspend judgment and at least believe it to be possible for the pattern to continue in the direction it has been going for so long.

SeanCott07:10, 25 October 2011
 

In the case of the earth’s population, many possibilities can be mentioned, as it matters which patterns are being evaluated. Would it be the pattern of human population growth? Would it pertain to the extinction rate rather than the population growth of species in general on the earth? Over what degree/time period are we judging this change? If referring to the extinction rate, the IBE would suggest that the human population will grow to a certain point, reach a threshold, and begin a slow or drastic decline until the existence of human beings is no more, thus continuing the pattern of life and death of species on planet earth.

With regard to the maniac, the IBE may point to the social environment and its effects on that person’s mind (the person considered a “maniac”) as the explanation to place under the hypothesis. It is somewhat reasonable to assume that a maniac responds to certain things in a way that is deemed abnormal. The way he/she thinks can be traced to the environment he/she was brought up in. Typically, a maniac will exist in an environment that has a norm, which may cause the maniac to become more isolated, mentally and/or physically. As a result of this almost forced introversion, their reality may grow to become very different from the norm, thereby making that way of thinking quite plausible in terms of his/her own reality.

After writing this, I feel my understanding of IBE has become more convoluted and less concrete. I hope I am not too far off. -DEREK CHOW

DerekChow19:14, 25 October 2011
 

I wonder about the very premise of the earth's population growing indefinitely. Has this always been the case? Until agricultural inventions were introduced, it seems that the earth's population historically experienced stabilization followed by different rates of acceleration. The Inference to the Best Explanation would indeed say that the earth's population will continue to grow indefinitely based on the graph we saw in class, but what would it say based on a graph that actually pre-dated 1800? From that, the IBE could be that the earth's population simply experiences fluctuations. I too am finding the concept of IBE a bit tricky...

VeronicaDubak00:19, 26 October 2011
 

I'm a little confused about the difference between the best explanation and induction... Just to help me clarify: for the first, induction would conclude, for example, that the marathon times would continue, correct? There would still be a steady incline in performance; however, most of us would agree that performance would plateau at some level and couldn't continue improving infinitely as induction would suggest. This is a problem, and we now need to come up with a response from the abduction (IBE) perspective to address it. However, I don't see how that is possible given my understanding of abduction. I'm aware that my understanding of the terms may be misconstrued, which is what complicates my response, but is it not true that abduction is an inference to determine the cause of a result? Given that we don't know the end result of the examples given (population growth, marathon times), can we really apply abduction to them? I don't believe we can... Or it could be that abduction just requires us to look at the information we have, assuming now is the end result, in which case there is a relatively gradual increase in population with slight fluctuations, which would force us to conclude the same pattern in the future... although abduction is not about future projections (or at least not in my understanding). So I guess what I am saying is that abduction cannot address the problems posed by induction... And as for the second case, "...everyone I meet meeting me," I don't understand it, so I can't address it. Ha, I hope this counts as an adequate forum contribution. If anyone could clarify the problem for me, I would greatly appreciate it!

PorterBommes21:48, 26 October 2011
 

I was also wondering which of simple induction and IBE might be more fundamental. Is it quite possible that IBE is the more fundamental? When we use curves for information, Dr. Morton pointed out that expectations differ and that the patterns we project depend on what we expect the figure to be describing and meaning... Jessica Chen

JessicaChen14:50, 27 October 2011
 

These are interesting cases for sure.

I would say that IBE would handle the first case by projecting that the pattern would follow the same direction, just not the same trajectory. For population growth we may not follow the same pattern as the last few hundred years in terms of rate of growth, but the best explanation would include that population will continue to increase, though at a more extreme rate than before. For marathon times the opposite is true: IBE would say that marathon times are currently decreasing, but there must be a limit to that. So while its prediction will not diverge from the current pattern in terms of direction, it will probably suggest that the rate of decrease will become more modest.

IBE would handle the patterns only a maniac would project by creatively putting to use the aggregate of background beliefs society relies on to control the risk of taking on new beliefs. Since the ridiculous pattern presumably lacks the properties that we associate with relatively more accurate projections, IBE would align itself with an explanation that most closely matches the polar opposite of the maniac's projection.

DakotaCarter19:05, 27 October 2011
 

When taking a look at the first problem and how IBE would handle this case, it seems almost clear that IBE would look to the facts that we have gained over time to create a best hypothesis. That would mean that over time the population of the world has increased and therefore is continuously increasing without change, the same way marathon times are continuously decreasing and the sun continues to rise every day: patterns that are perceived as continuous and without change. So when making the best explanation for patterns that do not change, you would look to facts and inferences that have been gathered and that prove to be continuous and unchanging. And by using IBE one can justify one's inductive reasoning about one's beliefs.

CourtneyChristianson07:08, 28 October 2011
 

For the population one, it is kind of hard to create a hypothesis through IBE, since there are unpredictable events which may cause population to increase (a baby boom, improvements in medicine) or decrease (a world war) drastically. The marathon one is tricky, since one cannot determine who the winner is based on the order at a certain time. However, once someone crosses the line, it is evident that the participants behind that person will not finish in a higher position. As for the maniac one, I agree with what Claire says: all future events are determined by past events.

ChenDu19:42, 30 October 2011
 

This has been fairly well addressed so far, but I'll add my two cents.

For the population example, there would be limiting factors that would hinder the exponential growth of the population over time (i.e. space, resources, rates of reproduction). Similarly, there are limits on human physiology that would prevent extremely quick marathon times. Although advances in training, diet, and technology would allow times to still improve, there would be severely diminishing returns.

ZacharyZdenek20:20, 30 October 2011
 

forum for week of 12 September

How would you react to the following comment on chapter one of A guide ..  ? "There is an analogy with ethics running through the chapter. What you should believe, what you can be criticised for believing, and so on. But ethics is a matter of the standards you adopt in your culture. There is something personal and arbitrary about it. Is epistemology just an expression of values and biases, then? Morton's values are pretty clear: beneath a veneer of tolerance he prefers beliefs that are scientific. But those are just his preferences." Be sure to sign your contributions. (See the syllabus for why.)

AdamMorton17:15, 10 September 2011

How can one deduce knowledge by scientific reasoning without questioning what constitutes scientific reasoning in itself? Is there not a level of bias in what is defined as "proper" scientific method?

The so-called "placebo effect" is an interesting way of showing how, whether a belief is right or wrong, a belief that is simply strong can be enough to make it seem correct to the holder of the belief. Perhaps, however, this effect is itself only the result of other beliefs the person holds. The truth of a belief, then, may vary from person to person.

For example, if a person says they hear many different voices in their head, there is no scientific test that could completely prove this to be true. However, this does not mean that because it cannot be proven, it is simply untrue. It could be true to that person simply because they believe it to be so. VERONICA

VeronicaDubak00:08, 11 September 2011
 

The placebo effect is one aspect of scientific method that serves to accommodate the unknown factors involved in medical intervention. Within this context, you would be right to claim that if someone believed strongly enough in something, it may sometimes come true (otherwise known as miracle healing).

I agree with you that there is a level of bias in what is defined as the proper scientific method. The scientific method relies heavily on the existence and interaction of 'tangible' proofs in the universe. By doing so, and with the establishment of extensive procedures and experimentation criteria, it has advanced and accelerated our knowledge base over the last 200 years (at least). However, because of this success and reliance on 'tangible' proofs to provide us with concrete evidence to prove and disprove our beliefs, the perception of the scientific method has shifted into demanding these 'tangible' proofs exclusively.

With public perception of the scientific method swayed by its success and rigor in improving our lives, the resulting consequence is that any phenomena that cannot be explained by the scientific method are almost always either cast aside as pseudoscience or simply branded untrue, on the basis that these phenomena do not fall within the definition of 'tangible' evidence and hence cannot be tested (or do not conform with other scientific beliefs). The emphasis here is that there are other aspects of the universe that science has yet to even perceive, let alone study (such as energy-energy interactions like dark matter in space or particle physics).

So, to answer your skepticism about the 'proper' scientific method: although it may be biased, the scientific method has so far served us very well, at least over the last 200 years, and continues to do so. We should remember that the scientific method has its limitations and, although vastly popular and reliable, is not the only avenue for testing beliefs. In a way, it is not 'proper' to rely exclusively on the scientific method to verify the truth of a belief, as science is a knowledge base in progress and is not even close to exploring all the details of things in the universe.

But on a more realistic note, the public generally does rely on it exclusively, for the sake of practicality (or out of ignorance).

Ken Wong

KenWong07:01, 12 September 2011
 

Humanity uses reason to adjudicate what our conscience tells us is right or wrong. While we may agree almost unanimously that certain things are generally thought to be wrong (i.e. murder, theft, etc.), how can we be sure what is truly right or wrong? Ethics vary between cultures based on different perceptions of morality, which are subconsciously developed from birth. Nonetheless, under certain circumstances, we may justify even the most notorious “wrongs” if we adopt a utilitarian policy of ethics. For example, we would condemn an individual for killing someone else, but if one murdered a serial killer, we might justify the action (not necessarily as being “right” or “wrong”) if they were saving the lives of many others in the process. You can justify both sides of the argument, but there is no clear objective truth prevalent. Scientific reasoning relies on induction in the will to progress. This method of reasoning generalizes distinct observations into general axioms through a process of trial and error. Even if something is said to be right, and justly proven so, we cannot prove anything to be ALWAYS [objectively] true. We are tied to our perceptions, and even if something is tried and agreed upon by others, we can never conceive of objectivity itself, as our measurements of reason are both consciously and subconsciously subjective. Therefore, epistemology is merely an expression of value and bias based on our subjective perception points, rendering any Absolute or Objective Truths unattainable by us. TY

Tclark6608:24, 12 September 2011

Out of curiosity then, what would be the rationale behind progress through scientific reasoning?

It seems that the concept of objectivity only exists through its dialectic relationship with subjectivity. Then one could question whether or not it is rational for humans to strive for objectivity considering the inherent and inescapable subjective nature of human beings themselves. DEREK

ps. I hope you don't mind I quoted you in my comment below.

DChow14:57, 13 September 2011

I actually replied to this below :)

Frikster06:45, 20 September 2011
 

Although people carry with them many beliefs such as the ones in the example, when they are confronted by people who disbelieve such statements, this disbelief is usually founded only in ordinary incredulity rather than in philosophical skepticism. It is therefore fairly easy to rebut these disbeliefs, either by proving them false or by neutralizing the grounds for doubt. Take the example of Vancouver being in Canada. Suppose someone doubts this, arguing that Vancouver is, in fact, in Madagascar. This could be rebutted either by showing the person a map (or several, if the doubt remains) or else by finding out that, for example, the person who told them that Vancouver is located in Madagascar was trying to play a trick on them. However, it becomes much more difficult to argue with philosophical skeptics, since they question the very background of information that people generally rely on to verify facts like the examples in the question. It therefore becomes very difficult to falsify or neutralize any doubts, since the reasons you would usually give in response to ordinary incredulity are themselves being undermined. It becomes difficult to argue with a skeptic due to disagreements over what counts as evidence. People are also likely to be convinced by skepticism, at least temporarily, when initially confronted with it, since skeptics offer some very convincing reasons: how we are constantly disproving and re-formulating our scientific theories, for example, or how we have been wrong so often in the past. Upon further reflection, though, people will probably realize that many previous errors don't mean we will not be able to discover truths in the future.
Nevertheless, even if people are convinced by skepticism, they are likely to carry on with their lives as they were before: by assuming they know certain things about the world, or at least by working on the basis of what has been true for them in the past (though this may also be questioned in light of the dubious nature of our recollections), in order to function as humans. In this case, they would have to suspend their disbelief and suppose that we do live in a world in which there are facts; something that I would imagine even the most fervent skeptic does. In conclusion, I think that quite often an agreement with skepticism stems from a difficulty in knowing how to begin to rebut such an argument, as the usual methods of disproving beliefs have been undermined by the skeptic's claim.

AlexandraKnott06:16, 27 September 2011
 

Sorry, please ignore the above comment - I put it in the wrong discussion. It is actually in response to the skepticism question from September 26th.

AlexandraKnott06:27, 27 September 2011
 

It's hard to judge beliefs concerning what is or is not ethical because different cultures often have different opinions on the matter. Still, I feel that good beliefs should be backed up by good reasons. Example: if a certain action causes direct harm to somebody else, then that backs up the belief that that action is wrong. If a belief is not backed up by any reasons showing why that action's right or wrong, then that belief is simply arbitrary. If a belief is backed up by poor reasoning, then it is much more likely for that belief to be wrong.

At the same time, it might be hard to judge whether the reasons backing up one belief are better than the reasons backing up another belief. With some difficult ethical questions, it might never be possible to decide whether some actions are right or wrong. But by evaluating the reasoning behind certain beliefs, we can at least begin to separate the good and the bad beliefs. Veronika Bondarenko

VeronikaBondarenko05:49, 13 September 2011
 

What separates a good reason from a bad reason for holding a particular belief? For example, if we believe that stealing is wrong for the reason that individuals should not take something that does not belong to them, why do we believe this? What makes this a good reason for not stealing? Many of our reasons for certain beliefs are grounded in conditioning by the society that surrounds us. However, because every culture has different values, it is difficult to objectively determine which arguments behind each belief are "right" or "wrong". Another example: causing physical harm to another human being is believed to be unethical because that person suffers; however, one could argue that the person inflicting the pain gains pleasure from it, and that the belief that hurting others is a good thing is thus justified. Although certain ethics are universal, we must ask ourselves why they are, and by what means these standards are formed.

Diana07:16, 13 September 2011
 

For a human to hold an ‘objective perception’ seems to be impossible and paradoxical. If the human mind and consciousness are developed and influenced by the environment and our perception of it, the limitations of exposure greatly restrict a human’s ability to perceive. The concept of objectivity would entail knowing everything. This would imply that knowledge is definitive and absolute, which is unknown. So if humans "are tied to [their] perceptions, and even if tried and agreed upon by others, [they] can never conceive of objectivity itself" (Tclark66), then epistemology, along with anything that humans have defined, would be an expression of values and biases.

Does relative objectivity exist? Can one be more objective than another? DEREK

DChow14:54, 13 September 2011
 

I would like to apologize for not being able to contribute to this forum in time. Unfortunately, I have had some struggles getting the wiki to accept my email account. Needless to say, I would still like to share my thoughts on this week's readings.

After reading the first page of the text, I decided to look up a definition of "beliefs". I found one that I thought was in agreement with my own view of what constitutes a belief: "confidence in the truth or existence of something not immediately susceptible to rigorous proof." This led me to believe that there must be a purpose behind why many people choose to believe in explanations that perhaps don't have concrete evidence. Veronica D., you mentioned that the "placebo effect" is an example of someone having a belief that is so strong that it controls their perception and therefore what they experience. This is true of all people: what we know, what we feel, and what we experience serves as our reality and therefore the basis for our knowledge. I think that is why people choose to have beliefs: to explain something that they can't necessarily rationalize with scientific evidence. If we evaluate beliefs and modify them in order to obtain truth and create a universal epistemic ideal, are we creating conformity and agreement amongst all people? Would that disturb the unique characteristics across cultures? Morton reiterates this on page 2 while writing about the epistemic ideal, asking us "what would be the price for satisfying this ideal: in order to have beliefs like this would we have to lose something else of value?" I think the pursuit of higher understanding is noble, but I think that trying to obtain relative objectivity is really difficult, since people bring biases and stories based on their own experiences to the table. And that's what makes life so interesting: no two people's life narratives are alike, but does that make them untrue? -Claire Chevreau

ClaireChevreau17:13, 13 September 2011
 

I think beliefs/ethics are personal/societal matters rather than something that can be defined as right or wrong by outsiders. There has to be a reason for a person/society to believe that certain things are right or wrong, and that depends on the experiences that person/society has. However, not every person/society has the same experiences as others do. For example, while some people in former British colonies think the British are the meanest people in the world, people in Britain, in contrast, are proud of themselves, since Britain conquered so many places. Therefore we cannot judge others' beliefs or cultures based on our own standards. Chen

ChenDu18:08, 13 September 2011
 

When reading the first 9 pages of A Guide Through the Theory of Knowledge, I found it useful to learn the definitions of many terms I had used in context but never really understood the true meaning of. I could not help thinking of a mathematics textbook while I was reading it. The view that human beings are "rational" animals in a world governed by reason is an interesting thought, but I feel it is oversimplified. When a person is confronted with making a decision in life, they have many factors to evaluate in order to determine what is the "right" or "wrong" thing to do. The problem in life is that there are not many situations where we can see a clear black-and-white situation; the world is filled with shades of gray. When discussing decision making we must deal with emotions as well as reason. People make decisions based more on emotion than they do on reason. If people were truly rational creatures we would view life logically. The problem is, we are influenced by a great deal of thoughts and emotions that sometimes we do not even understand or even realize. As much as it is useful to look at people as logical agents in a world governed by reason, it is both idealistic and oversimplified. Just something to think about.

SeanCott18:27, 13 September 2011
 

I don't think that can be straightforwardly answered. There are different categories of belief. Our mental perception of the world is the way it is because we have built up a system of beliefs over time to create that certain perspective. And no one can say that that perspective is wrong; no one can really know what our perspective is. Our beliefs about ourselves and who we are are obviously biased, and whatever we believe to be true about ourselves is the reality we create about ourselves, and therefore the reality of the way we live our lives. There are, however, belief systems about the earth and its structure and the way it was formed, like the geographical features of our planet, or the laws of nature (like gravity or electromagnetism). So say someone claims the acceleration due to gravity is 9.5 m/s² while it is actually 9.8 m/s²; in an instance like that they are clearly wrong. So it really depends on the situation. I think unique situations have unique answers, and we can't try to generalize and categorize. I don't think there is ever one unifying answer; we have to look at a specific situation and then develop a specific, unique answer.

PorterBommes03:49, 14 September 2011
 

Morton prefers beliefs that are scientific. Physicist Freeman Dyson was awarded the Templeton Prize in the year 2000. The Templeton Prize is awarded annually for outstanding originality in advancing the world's understanding of God or spirituality. In his acceptance speech, Freeman Dyson says his personal theology is consistent with scientific evidence. He also states that he doesn't say that his personal theology is supported or proved by scientific evidence. I think the Templeton Foundation was progressive in their choice of awarding the Templeton Prize to Freeman Dyson. In his acceptance speech, Freeman Dyson says he thinks atoms and humans and God may have minds that differ in degree but not in kind. Freeman Dyson says he does make any distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. Freeman Dyson's scientific evidence is supported by quantum theory at the atomic level. Quantum physics is claimed to be the most tested theory; it has never failed a test. I think Freeman Dyson's personal theological beliefs qualify in the sense of Morton's preference for beliefs that are scientific. Am I accurate? James Milligan

JamesMilligan06:51, 15 September 2011
 

previous should be:

 Freeman Dyson says he does not make any distinction between mind and God.

James Milligan

JamesMilligan06:57, 15 September 2011
 

They may be arbitrary, but they are not without purpose. A culture's beliefs or values are rules for how to effectively live and succeed in that culture. This of course will differ by region and time period, but will always, at least at the time, reflect the endpoint in the evolution of that society. These values emulate how the fittest member of a certain society would think and act. Appealing to these morals, arbitrary as they may be, is not a meaningless goal.

DakotaCarter05:45, 16 September 2011
 

One person in class on Thurs had asked whether a rational belief can be unjustified... and then did Prof Morton use the easy-to-see example of a trial (DNA, hair...) to explain that the answer is Yes, a rational belief can be unjustified? Without that example, it seems hard to see that a rational belief can be unjustified... Out of interest and fun, has anyone thought of another easy to see example where a rational belief is unjustified? Jessica chen

JessicaChen18:28, 17 September 2011
 

In reading Morton I do find that he brings forth plenty of his arguments as if he is an objective authority on what constitutes knowledge, ignorance, rational, justification and so forth. However, he makes sound arguments that contribute to a larger discussion and I see no reason that he should stop and first acknowledge the inherent subjectivity in his sociohistorical context when writing about epistemology.

The ideal of epistemology is to discover the nature and scope of knowledge, or equivalently: epistemology is the study of the nature and scope of knowledge. Now, even if we change the definition of epistemology, there is still 'something' that can reasonably be seen as the study of the nature and scope of knowledge. For example, say we completely lose the concept of the infinite - gone from all science. There is still 'something' that can be seen as infinite. The infinite will still be 'infinite' - still hold all the properties we know as infinite - even if we decide to define something else as infinite. The first-mentioned 'something', assuming it will always be labeled epistemology, is not just an expression of values and biases, then, as that's simply not what it is.

Morton, however, does bring his own fallible values and biases to the ideal of epistemology (albeit, the degree of this, I find, is only because he is human). But this only means that his arguments, and, to take it further, every epistemologist's arguments, are a subjective expression to some degree or another. It does not mean that epistemology is merely an expression of values and biases. There is similarly something exhilarating and arbitrary about sport brought about by one's cultural heritage. This does not mean that sport is merely an escape from boredom defined by its sociohistorical context.

So... yes, we're all fallible but we can all bring something of value to the powerful discourses and ideals that we aspire to appropriate one day. Morton adds to his ideal and we can extract what we can from his competent writing and translate it into our context and subjective limitations as we continue to pursue that ideal of epistemology.

CORNELIS DIRK HAUPT

Frikster06:43, 20 September 2011
 

An interesting thought came to mind in the discussion we had in class with regards to belief in some ethical ideal (culturally, religiously etc) being a result of genetic disposition.

Take this and juxtapose it next to the placebo effect as discussed above.

Some people hold certain beliefs so strongly that, neurologically speaking, we can measure that there is indeed empirical evidence that their beliefs do form part of an experience that is tied to their belief. One can try to rationalize this away by simply stating that we have a scientific basis for why that belief could be epistemically false (i.e. we can do experiments and see that the experience is merely the placebo effect in action). However, just as one could argue that one's sexuality is a part of one's identity and genetic disposition, and just a result of being human, so too can the same argument be made by someone who holds certain strong convictions (i.e. religious beliefs).

The question posed, then, is whether Morton is being less human with his preference for scientific beliefs. Can one even be "less human", and are one's beliefs a basis for deciding this? Morton's essay suggests to me that having the right beliefs based on evidence is a crucial part of being a better human being.

But again, seeing as certain ethics and standards are indeed somewhat arbitrary, personal, and part of an adopted culture - and now, more crucially, these beliefs are also in part biologically ingrained - is it not fair to argue against Morton that holding those beliefs in high regard is merely a consequence of being human? And that if anyone were to make a continuous affront to, or conspicuous rejection of, those beliefs, then it would be similar in all respects to someone making a continuous affront to one's sexuality? One's sexuality is also personal, a bit arbitrary, and shaped by cultural and genetic dispositions. Hence, it's merely a product of being human.

CORNELIS DIRK HAUPT

Frikster18:46, 25 September 2011
 

I believe that Clifford's argument is well-founded, if not worded in quite the way he means to convey it. What I glean from Clifford's argument of not believing anything without sufficient evidence is simply that we shouldn't believe something if the evidence for it is not there or not properly convincing. James argues that if one goes by Clifford's argument, one will not be able to fully live one's life and will be in a constant state of paranoia. If you take Clifford's argument completely literally, then I think this would be something of a conundrum, but Clifford isn't saying that something has to be proven without an ounce of doubt in order to believe it; he's simply saying one should have the proper information before one believes something, a practice which I personally believe is integral if one is ever to make informed decisions. After reading both arguments, I've come to the conclusion that while Clifford is more likely to miss out on something because of a lack of trust in others and the information presented to him, this is far superior to the stance of James, which provides ample opportunity to be taken advantage of and made a fool of.

Fmillay21:04, 25 October 2011
 

anti-inductive situations: forum for week of 17 October

Simple induction is really just a matter of fitting a curve to a number of points. (Assign numbers to colours, with Black=1. A data set of black crows is like a graph where the value for each item is 1. So a straight line through all of them projects a prediction that all crows are black.) This week, instead of asking a question or giving a debate, let me set a problem. Describe a case where the intuitively simplest line through a set of points is not what you would predict for the continuation of the data. (Let me know if this isn't stated clearly and I'll say more. But it's good to keep these short.)
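The crow picture can be sketched numerically as a toy illustration; the use of a degree-0 polynomial fit here is my own choice of "simplest curve", not something specified in the problem:

```python
import numpy as np

# Simple induction as curve fitting: code each observed crow's colour
# as a number (black = 1) and fit the simplest curve through the data.
observations = np.array([1, 1, 1, 1, 1], dtype=float)  # five black crows
indices = np.arange(len(observations))

# A degree-0 fit (a horizontal line) is the simplest curve through these points.
coeffs = np.polyfit(indices, observations, deg=0)
prediction = np.polyval(coeffs, len(observations))  # "predict" the next crow

print(prediction)  # ~1.0: the straight line projects that crow #6 is black
```

The examples asked for below are exactly the cases where this horizontal-line extrapolation fails.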

AdamMorton18:41, 15 October 2011

I'm not sure if this is a good example. When you drop a ball down a spiral tube and let it slide out at the bottom end, it may not be intuitive (especially for a child who has not taken a course in physics) that the ball does not continue to spiral in a circular motion, but instead follows a straight path tangent to the point where it is released from the end of the tube.

I suppose another example could be that it is not intuitive for someone to think that when dropping a lead ball from the top of a very tall building that the ball would reach a terminal velocity in which it no longer accelerates. The person may think that the ball will continue to accelerate until it reaches the ground.
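The second case can be sketched as a toy model; the terminal velocity figure is invented for illustration, and real air drag sets in gradually rather than as a sharp cutoff:

```python
def fall_speed(t, g=9.8, terminal=50.0):
    """Toy model of a dropped ball: speed grows linearly at g (m/s^2)
    until air resistance caps it at a terminal velocity. The 50 m/s
    cap is an illustrative number, not a measured one."""
    return min(g * t, terminal)

# Extrapolating the early linear trend overshoots once drag takes over:
print(fall_speed(2), fall_speed(10))  # 19.6 50.0
```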

MonaZhu19:12, 16 October 2011
 

Stretching a rubber band and calculating its stretch/force ratio: intuitively, from the first couple of data points, we may be tempted to think that the rubber band will stretch indefinitely in the same proportion until it breaks. But contrary to this, there is a point called the elastic limit, beyond which the material exhibits a totally different pattern of behavior (such as needing twice as much force to stretch the same length of material). To make things more interesting, some materials have been found to fit our original intuition of a simple linear correlation.
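This can be caricatured as a piecewise-linear force law; the stiffness constants and the location of the limit below are made up purely for illustration:

```python
def restoring_force(stretch, k=2.0, limit=5.0, k_beyond=4.0):
    """Toy piecewise-linear model of a stretched band.

    Below `limit`, force grows at rate k (Hooke-like); beyond it,
    the behavior changes and the slope doubles. All constants are
    illustrative, not measured."""
    if stretch <= limit:
        return k * stretch
    return k * limit + k_beyond * (stretch - limit)

# Extrapolating the early linear trend past the limit under-predicts the force:
naive = 2.0 * 8.0              # simple induction from the first few points
actual = restoring_force(8.0)  # 2*5 + 4*3 = 22
print(naive, actual)           # 16.0 22.0
```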

Ken Wong05:33, 18 October 2011
 

I was supposed to have contributed in Group 4 the week of October 3rd. I apologize for the delay.

My case for the problem posed above is the graphing of a nerve impulse. You can find a basic graph here: ns02-actionpotential.jpg

Membrane voltage remains at resting potential until a stimulus excites it to the threshold voltage. Other stimuli may precede the ultimate stimulus of the nerve impulse; however, if they do not bring the membrane up to -55 mV (based on the graph above), then the nerve impulse will not occur. Once the threshold voltage is reached, the neuron 'fires': it depolarizes and repolarizes (this is called the action potential), and this is represented by the big hump on the graph. Immediately after this period the graph briefly dips below the resting potential (this stage is called hyperpolarization), and then finally returns to resting potential to await another stimulus that reaches the -55 mV threshold.

The simplest line through a set of data points is not what one would predict for the graphing of small incremental increases in voltage within a neuron. This is because of the function of the threshold voltage.
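The all-or-nothing character of the threshold can be sketched as a toy function; the resting, threshold, and spike values are textbook-typical numbers used only for illustration, not taken from the linked graph:

```python
def membrane_response(stimulus_mv, resting=-70.0, threshold=-55.0):
    """Toy all-or-nothing neuron: a stimulus that fails to push the
    membrane to threshold leaves it at resting potential; one that
    reaches threshold triggers a full spike to +40 mV, however far
    past threshold it goes. All values are illustrative."""
    if stimulus_mv >= threshold:
        return 40.0   # full action potential (all-or-nothing)
    return resting    # sub-threshold: no spike

# Small incremental increases do nothing until the threshold is crossed:
print([membrane_response(v) for v in (-65.0, -60.0, -56.0, -55.0)])
# [-70.0, -70.0, -70.0, 40.0]
```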

HannahOrdman18:05, 18 October 2011
 

The example case of the crows seems to be begging for someone to mention the black swan case. Assigning the colour white as 1, with each swan as an item, the graph would be a horizontal line if 16th-century Europeans were making it (black swan populations existed in Australia and New Zealand). 'Black swan' was at the time an expression for impossibility, as all historical records were of white swans. The discovery of new and unexpected empirical evidence demonstrates the fragility of induction and pattern-seeking.

VinceXi22:09, 19 October 2011
 

This example might be a little bit disturbing but it's the only one that I can come up with that's both simple and interesting. If I find a better and less disturbing example I will make another post/reply to this topic.

Most people would assume that all of the world's animals have one head and therefore one brain. So on a graph, the data showing animals with fewer than one brain, one brain, and more than one brain would show a straight line depicting that all animals have one brain. However, there have been discoveries of certain animals with two heads - for example, two-headed snakes and two-headed turtles. So these animals have two brains.

Well, I guess this example is perfect for Halloween coming up, and perhaps an inspiration for someone's Halloween costume?

IreneWong00:37, 20 October 2011
 

Someone already used the dropping an object one so.......

The application of General Relativity moving from a macro to a micro level.

ZacharyZdenek04:36, 20 October 2011
 

It's a shame that there is a lack of a question this week. I can easily provide an example, but rather than one from my own experience, I'd like to present a story which I learned from a past "Theory of Knowledge" teacher. In his childhood, he lived in a small town near Edmonton. It had a population of a few hundred at the most and, as expected, was relatively isolated. Furthermore, only Caucasians lived in this town. One day, he and his family went to a much larger town (on a trip, I believe), and there he noticed performers (dancers, I believe!) who had dark skin. He had never seen anyone with a darker skin colour before, and thus had trouble reconciling what he was seeing with his experiences. He then noticed that the dancers' palms were white, and much lighter than the rest of their skin. He came to the false (though, arguably, justified) conclusion that the skin must have been painted on, and that the sweat of the palms must have cleaned off some of the paint.

In this example, he was taking a small sample (his hometown) and extrapolating the data to a much larger community. Thus, in a town where there seemed to be no Africans, the rate of African occurrence was zero, which, extrapolated, would mean that no Africans exist. Of course, this is false, and just another example of how limited data and mindless extrapolation can easily lead to error in knowledge.


JamesWu 21:39, 19 October 2011 (PDT)

I originally had a question, but it's hard to formulate over text. Perhaps if I become more lucid in my text, I will try to repost the question.

JamesWu04:39, 20 October 2011
 

Cases Against Intuitive Line

In section 6, Justifying Induction, of chapter 4 of Dr. Morton's book A Guide Through the Theory of Knowledge, there is a reference to philosophers on inductive inference, as follows:

Some philosophers have tried to give reasons why it is reasonable to believe the conclusions of inductive inferences, and some have argued that it is only out of confusion or misunderstanding that one could think that any such reasons were possible or necessary.

Philosopher Karl Popper and physicist Freeman Dyson are offered as examples: Popper, as a philosopher, in support of the view that inductive inference is unjustified; Dyson, as a physicist, recounting a prediction in a specific application.

One of the most influential and controversial views on the problem of induction has been that of Karl Popper, announced and argued in (Popper LSD). Popper held that induction has no place in the logic of science. Science in his view is a deductive process in which scientists formulate hypotheses and theories that they test by deriving particular observable consequences.

Stanford Encyclopedia of Philosophy on Induction, 4.2.

Thirty-one years ago [1949], Dick Feynman told me about his "sum over histories" version of quantum mechanics. "The electron does anything it likes," he said. "It just goes in any direction at any speed, forward or backward in time, however it likes, and then you add up the amplitudes and it gives you the wave-function." I said to him, "You're crazy." But he wasn't.

Freeman J. Dyson, in a statement of 1980, as quoted in Quantum Reality : Beyond the New Physics (1987) by Nick Herbert

JamesMilligan07:35, 20 October 2011
 

An example for the question posed can be seen in temperature and the freezing point of water. It would seem that, as the temperature of the atmosphere decreases, so does the temperature of the water itself. On a chart, the decrease in air temperature could be directly correlated with the decrease in water temperature. One could assume that the water's temperature will continue to decrease as the air temperature decreases - however, this would be incorrect. There is a point at which the water will cease to decrease in temperature and simply freeze into solid form (instead of remaining in a liquid state). Thus, while believing in a direct correlation between the temperature drop of the weather and of the water seemed accurate, it would prove incorrect if simple induction were followed. It is important to note that since simple induction is formulated on the basis of observation, it is never one hundred percent accurate. While some theoretical laws are based upon empirical evidence that supports simple induction, this is not enough to provide one hundred percent accuracy. In other words, the idea of observation as infallible is incorrect when it is based simply on routine assumptions.
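The freezing case can be sketched as a toy cooling curve: temperature falls linearly until it reaches 0 °C, then plateaus while the water freezes. The cooling rate and plateau duration below are invented for illustration:

```python
def water_temperature(t, start=20.0, cool_rate=2.0, freeze_duration=15.0):
    """Toy cooling curve: linear cooling to 0 degrees C, then a
    freezing plateau of `freeze_duration` time units before cooling
    resumes. All numbers are illustrative, not measured."""
    t_reach_zero = start / cool_rate            # time to hit 0 C
    if t <= t_reach_zero:
        return start - cool_rate * t            # the 'simple line' regime
    if t <= t_reach_zero + freeze_duration:
        return 0.0                              # latent heat: temperature stalls
    return -cool_rate * (t - t_reach_zero - freeze_duration)

# Extrapolating the initial straight line predicts -10 C at t=15,
# but the plateau keeps the actual value at 0 C:
print(20.0 - 2.0 * 15, water_temperature(15))  # -10.0 0.0
```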

Dwylde09:20, 20 October 2011
 

This may be a long shot, but perhaps this can be thought about economically as well. My example relates to the four stages of the business cycle (expansion, peak, recession, and recovery). If this is looked at on a graph, it may be inferred that one stage could last longer than another. Also, similar to the post before mine, the pressure reading in a vacuum gauge or a manometer could be assumed to continue to rise.

AamirQamruddin19:48, 20 October 2011
 

So it seems that any and all cases that do not follow an original linear path indefinitely can be an answer to this question. I'll use the relationship between stress and strain of a solid material as an example. The applied force is proportional to the displacement up to a certain distance of displacement. After said distance, the material will deform plastically and basically be ruined. So it is an error to use simple induction when confronted with a material whose proportionality limit you do not know.

ChaoRanYang20:13, 20 October 2011
 

I found today's lecture on the shortfalls of induction to be particularly interesting. I am specifically interested in exploring the circumstances under which we are allowed to induce beliefs from incomplete knowledge. Suppose it were possible to produce everything we believe via induction, and that induction is simply a matter of convenience (a highly improbable position, in my opinion). This simplification is still obviously necessary - if nothing else, waiting for simple, deductive knowledge takes too much time. Case in point: people knew about the symptoms of diabetes long before they came to know its underlying physiological causes. Inductive beliefs in this case could be formed by doctors who knew just of the symptoms and cases, and not of the underlying framework. So, induction serves as a sort of bridge between ignorance and knowledge (sounding rather like the Daimon in the Symposium…).

So in what cases are we right in drawing such conclusions from such evidence? In class, we were talking about chartists and the markets. Sure, there seem to be certain patterns which dominate in certain assets – e.g. Fibonacci retracement. We hardly know why it should be the case that 61.8 or 38.2 should be support levels in stocks, but that seems to have been the case for quite some time. The pattern has some predictive power, though, of course, it's not very accurate by any measure. But the fact that, through all the transactions that take place in the marketplace every second of every day, there exists a (weak) pattern is still pretty extraordinary – it'd be very strange if it were just a quirk that disappeared tomorrow. Perhaps there are certain immutable laws about trading and exchanges that lead to these patterns, and it's simply the case that we haven't quite figured them out yet – that's one view. On the other hand, there's LTCM and the spectacular failure of algorithms and pattern-seeking. But where do we actually draw the deciding line?

Wittyretort07:54, 21 October 2011
 

My example is Benazir Bhutto. I chose her for two reasons: she was the first woman prime minister of Pakistan, and she was the first female leader of any Muslim country. Throughout history countries have been led by men, especially in the Muslim world. Intuitively, we would expect another male figure in such a key position; therefore, her accomplishments were unexpected, defying the worldwide trend of male dominance in that sector. This of course applies to other female leaders who have gained political power; however, Bhutto brings a greater sense of unexpectedness because of her accomplishments in such a conservative society.

YukaZaiki04:07, 23 October 2011
 

forum for week of 2 October

Argle: Philosophers and scientists are always saying that we get evidence for our beliefs from perception. But how do we tell when perception is reliable? For that matter, how do we know that it is reliable at all? The answers always rely on evidence from perception. But that's circular. It doesn't convince me at all.
Bargle: Well, you've got to start somewhere. Can't ask for magic. And perception has all the features we want in a starting point. So if we are to get anywhere - sort out the mess about what to believe and what to doubt - that's the obvious place to start.

Whose side are you on?

AdamMorton17:27, 30 September 2011

Chapter Two: Perception, of Dr. Morton's book A Guide Through the Theory of Knowledge, in Section 3, Empiricism, includes the following: To the question "What qualities could our beliefs have?" it answers "We could, potentially, have only beliefs that are based on perceptual evidence." Empiricism is thus both a very down to earth and a very idealistic philosophy. It is down to earth because it aims to base all our beliefs on what we can see, hear, and touch. And it is idealistic because it thinks that human beings are capable of reforming their beliefs so that they are all based on perception.

I would like to relate three excerpts on Dr. Richard Feynman.

1. The late, great physicist Richard Feynman wrote, "It's quite wonderful that we can 'see,' or figure it out so easily. Someone who's standing at my left can see somebody who's standing at my right - that is, the light can be going this way across, or that way across, or this way up, or that way down; it's a complete network. Some quantity is shaking about, in a combination of motions so elaborate and complicated the net result is to produce an influence which makes me see you, completely undisturbed by the fact that at the same time there are influences that represent the guy on my left side seeing the guy on my right side. The light's there anyway....it bounces off this, and it bounces off that - all this is going on, and yet we can sort it out with this instrument, our eye." Source: Google entry = physicist richard feynman on perception - number 8 posting titled Introduction to Perception.

2. In the book titled Quantum Man, 2011, by Lawrence M. Krauss, page 313 includes the entry attributed to Dr. Feynman that "...we seem to be hard wired to find that what happens to each of us naturally appears to take on a special significance and meaning, even if it is an accident."

3. On page 318 Krauss includes the observation of Dr. Feynman that "We have to guard against this [theory misperception], and the only way to do so is by adhering to the straight jacket of empirical reality."

On the basis of Dr. Feynman's contribution to the advancement of quantum physics, I think an emphasis on a philosophy based on experiment is well founded.

JamesMilligan07:32, 2 October 2011
 

I would argue that one of the best reasons to believe that our perceptions are reliable sources of evidence is in our everyday lives. We go through most days without hurting ourselves, or doing anything particularly dangerous or outrageous, all because of our perceptions. We generally have a belief about when it is safe to cross the street, when to reach for a door handle, and hundreds of thousands of other actions we do every single day, all because we can perceive the world around us, form a belief about how best to act, and react accordingly. We find an incredible degree of success in this. On the whole, our perceptions are fairly reliable. Of course, there are occasions where our senses fail us and we fail to perceive something obvious (this is discussed at length throughout "The Invisible Gorilla", Chabris & Simons) or our perceptions deceive us through hallucinations, but generally, we perceive everything we need to in order to get through our lives successfully. I would argue that, due to the amount of success we seem to find in our beliefs based on perception, they are a reliable source of evidence.

JosephPeace03:44, 3 October 2011
 

Argle may not be convinced, but chances are, unless he lives an extremely atypical life, he will accept perceptual evidence while he goes about his everyday work. The fact of the matter is, there really is no possible way to hold beliefs which allow you to interact with the world, even to the most minimal degree needed to continue living, without first accepting that perception will give reliable results. Interactions between basic beliefs produce actions, i.e. I believe that I am hungry AND I believe that there is an apple in front of me, therefore I eat the apple, thus soothing my hunger. In this most basic case, both assumptions came from perceptual evidence (you FELT your hunger, and you SAW the apple), and the awareness that the ensuing action had been completed also came from perception. One who wished to live under the assumption that perception was not reliable, however, would be bound to grant no grounds for either of the above basic beliefs, and therefore no grounds to pursue the ensuing action. So they'd be really hungry. I don't feel any anxiety in further stating that no action, including actions where you accept or argue some kind of epistemic belief, could ever be founded on any belief that stems solely from non-perceptual reasoning. You can very well attempt to distance yourself from pure perception by claiming sanctuary in mathematics or logic, but if you are challenged to prove that the math or logic is solid, you will inevitably be forced to prove the most basic elements inherent in your math or logic through perception (that you can see that 2 + 2 is the case in the perceptual world, or that K therefore B is so because of observation of that being the case in practice). So, I side with Bargle.

NoahMcKimm19:27, 3 October 2011
 

Much has been said so far about scientific method and reasoning, but very little representation has been made by the loyal opposition, namely the arts, and their role in both the formation and the understanding of beliefs. In that regard, I would like to try

Robmacdee20:34, 3 October 2011
 
To continue, I would like to try throwing a few terms into the discussion here that I haven't seen presented so far. The terms are: empathy, seduction, desire, aesthetics, taste (and distaste, or alternatively attraction and repulsion) and, last but not least, play (as in playfulness, or role playing).

Art, as method, reverses the scientific approach. In order to discover truth, it plays for effects rather than seeking causes or underlying reasons as in scientific inquiry. The artist develops an arsenal of lures designed to seduce. Those which work are retained; those which don't are discarded. In this way susceptibilities are revealed which tell us what people either believe or may be led to believe. More importantly, through the play-acting of art, unconscious beliefs and prejudices may be revealed which, if left unrevealed, would in all likelihood influence or distort conscious reasoning. Decisions made on such unrelieved reasoning, even though based on opinions which may otherwise be considered quite rational and therefore correct, may indeed be quite false.

Regarding taste: to hold a belief, it helps if one has a taste for it. Rather than accept a taste or appetite at face value, an effective artist may question, persuade, and, through art, lead an unsuspecting audience through the magic of imaginative re-framing, paradoxically, to the acceptance of previously quite unpalatable truths.

Empathy: this term is a work in progress for me, admittedly a mystery. All I have to offer is a note I made from our class discussion. I wrote: we collectively operate on the experienced truth, or in other words we operate on faith, the faith that this commitment will carry us through from moment to moment. It seems to me that keeping faith builds empathy, while breaking faith weakens empathy.

Is this properly philosophy? If not, should it be?
Robmacdee21:37, 3 October 2011
 


Well, looking at the human brain, I have to admit that it is fallible. It was designed to cope with situations that no longer apply to our modern lives. There is also the propensity to "see something that isn't there" when we read something incorrectly or think we've seen something that never was. Our brains trick us on a regular basis. I would think that the greater the number of people who can agree on what they perceive, the more credibility that perception has. Of course there is a possibility of a mass delusion, such as a group of people under the influence of an inhalant, but the probability of that being the case would seem to be slim, unless one is at a rave. As well, I think that to keep a modicum of sanity in our daily lives, we need to accept what we perceive as being so, in that it helps us cope with the rigors of modern life.

KarynMethven00:29, 4 October 2011
 

While reading about the Sense-Datum Theory, I found that it reminds me of Compatibilism, a philosophical view on free will. I find them to be similar in certain ways. Sense-Datum Theory, especially the idealist stance, does seem to be compatible with other philosophical views on perception. It seems to be arguing for something other than a solution to the problem of perception; in a way, it's not on the same page as everyone else. Based on the assigned readings, Sense-Datum Theory doesn't really provide an answer to the problem of perception. I feel as if it only introduces some new vocabulary and proposes a view that tries not to step on any other view's toes. Though Sense-Datum Theory provides an attractive and enticing argument, it feels incomplete and, in a way, irrelevant to providing an answer to the problem of perception.

KacperMotyka00:34, 4 October 2011
 

The problem of perception has essentially been created by the fact that humans often have errors in their perceptions. Our perception of sense data may not always lead us to true beliefs, which poses a big problem both epistemically and in our everyday lives. If there are perceptual illusions and hallucinations, then what justified reasons do we have for trusting our empirical evidence? We know we are often wrong about the things we perceive (a stick looks bent when it's half-submerged in water, white walls will look coloured if there are coloured lights, etc.); however, I believe that by combining the data we receive from all our senses, we can create an approximate enough depiction of the world. If the stick looks bent in the water, we can still use our sense of touch to verify. We also have a pretty good idea of when our sense perceptions may not be as accurate. When I don't wear my glasses or contacts, I don't trust any of my visual perceptions, because everything looks like a blur and I know what I am seeing is most likely inaccurate. For humans to have survived as long as they have, and for our lives to function as well as they do, our sense perceptions must function at least to some extent. If we use all of our senses to create a broader depiction of the world, we are more likely to be right about it. Empirical evidence has proven to work in our everyday lives, at least well enough for us to live comfortably. It is a good place to start when trying to form true beliefs about our world.

CaleighMcEachern01:16, 4 October 2011
 

René Magritte, a surrealist artist, painted a piece in the 1920s entitled "The Treachery of Images" -

"Ceci n'est pas une pipe"

A paradox: the painting is a clear depiction of a pipe, with the text underneath translating to 'this is not a pipe'. Magritte believed it was not a pipe but a representation of a pipe. If his text were to say 'this is a pipe', he would have been lying.

To relate this example to the argument of Argle and Bargle, I can clearly see Argle's point. Do we always come to a conclusion when there is perceptual proof of it, be it touch, smell, visual or auditory responses? How sure can we be in using our perception as a tool to decipher and formulate our beliefs?

I think it is easier to succumb and accept our perception than to try and dispute it. Many great discoveries and theories are all formulated from not agreeing with our general perception of things.

The conspiracy theory of "Did man really land on the moon?" is a fine example of whether we should trust our perception. We read about and watch videos of man landing on the moon, without physically experiencing it ourselves - is that sufficient empirical evidence for us to believe or disbelieve it?

Perception is also highly susceptible to the influence of others - other humans' opinions and perceptions. Our ability to make decisions and formulate beliefs is unconsciously influenced by various factors such as personal moral beliefs (things that we were repetitively told were "true") and/or consideration of others' opinions (the majority). How then can we trust our perception when it has already been conditioned?

Bargle did say, though, that perception is a starting point. "Sort out the mess about what to believe and what to doubt" is vital to comprehending the use and purpose of perception. It does not always point to our beliefs, but it is a foundation from which we create our beliefs.

KashirajDaud05:41, 4 October 2011
 

I think the fundamental difference between the two statements is that for Argle our beliefs must be 100% true and provable, whereas Bargle takes into account practicality. Bargle's reply is kind of disappointing (he seems to be just shrugging his shoulders and saying "what can you do about it?"), but at the same time I think it's the best reply possible, because there isn't much you can do about it without becoming an intellectual vegetable, only able to hold beliefs like "I am a thinking being."

DennisPark18:51, 4 October 2011
 

Although it is easy to question everything and believe nothing as Argle does, Bargle takes into account the impracticality of that outlook. "We have to start somewhere" simply but perfectly states how, if we doubt everything including our sensory perception, we are left with nothing in which to believe, making the study of this, and everything else, a moot point. If we can't believe anything, why pursue knowledge at all? It is all well and good to make the statement, as it is based on a reasonable idea, but if everyone decides to agree with Argle we agree to know nothing; a contention that would leave human life in a state of denial and would be catastrophic to the function of everyday life. We have to start somewhere, and contend that what we can see, taste, hear, smell and feel must indeed be as we perceive it, simply to avoid the redundancy of our own existence.

SaralynPurdie05:09, 5 October 2011
 

The first argument is very similar to Descartes' bold "I think, therefore I am" statement, which holds that beyond the knowledge of our own existence, we cannot prove that anything else we perceive in any other sense exists. This first approach of complete deniability is almost the simpler route to take, but in a sense we must trust our perceptions as sources of evidence for the existence of an external world and rely upon them in order to allow ourselves to accept externality. Although the second stance is fairly passive, its proponents realize that they need to invest a certain amount of confidence in the fact that, much more often than not, our perceptions do not deceive us and can be relied upon as a strong source of evidence about our external world.

CaitlinMcKewan07:22, 5 October 2011
 

Following up on a lot of the responses above pointing out how the way the brain works can sometimes keep us from obtaining the right 'reality': often when we perceive things, that moment of perception doesn't usually last long, and the beliefs formed by that perception then depend on our memory of the moment. Getting precise memories all the time is not a simple task, so we are never so certain about what it actually is that we see, hear or touch. Uncertainty doesn't normally give true beliefs, but rather misleading ones. With illusions and memory failure, why should pure perception be good evidence for forming beliefs when we ourselves have to question what we perceive? I am not suggesting that we should not believe what we experience, but outside everyday situations, I think pure perception would not give good reasoning and beliefs without being rationalized.

JodyNguyen08:32, 6 October 2011
 

How can we not rely on our perceptions? Everything we learn, we interpret through perception. I think we have to understand that our senses can give us false information at times, because that is just a part of how our human brains operate. The awareness of our shortcomings provides us with reasons to empirically research and test things to come to conclusions. The more we know about the mistakes our brains make, the better we can design tests to avoid misinterpretation. The YouTube video showing the McGurk effect was astounding! The first time watching it, I was sure that I must be hearing two separate voice-overs, and then they showed a split screen of the different ways the mouths were moving. At first, I KNEW that there must be two different voice-overs, as my brain told me I was hearing "bah" or "vah", but obviously my senses were lying to me. We can't avoid perceptual illusions, but we have no choice but to use our human brains, be aware of said illusions, and work within our limits to attain knowledge. Otherwise, it would be extremely pessimistic to think we have no chance of knowing anything simply because of how our brains and perception work (or don't work).

JamesMulholland19:00, 6 October 2011
 

I can understand the views of both authors, but I think the more appropriate and reasonable position is that held by Bargle, who also offers a bit more to the conversation than simply trying to write off personal perception as a source of knowledge. Below is my rant concerning my views of the topic:


Certainly our primary method of discerning beliefs from reality is our own personal experience. We are naturally and inescapably social creatures who rely heavily not only on our sense data but also on the intuition which works at a feverish pace to try and make sense of it. To illustrate this with an example: when we 'feel' or 'sense' that someone is angry with us, we often arrive at this idea before we go home and sit and meditate on it. Rarely would we only realize someone's displeasure with us long after the fact by means of careful analysis. Our view of reality is also extremely subjective - when you're hungry you can often have a pessimistic view of the world, and then rather quickly after a hearty meal find yourself completely turned around and once again in a good state of mind. Thus, we can slowly see that the mind is finely tuned to a certain type of consciousness - that which will ensure our survival and replication. Most of our senses help ensure this - we feel excited and anxious when we see a fight, or are jolted awake when we hear something we think may pose a danger.


Now, when we are aware of this, we can obviously see where there would be issues in trying to use this human, biological mind to ascertain certain facts and truths about the world. But to say that it is flawed is not to say that we must throw it out and write it off as a means of garnering information. Indeed, unless we create artificial intelligence that is more capable than ourselves, we will only have the means of human perception to view the world. Certainly a blind, deaf man with no sense of feeling, taste or smell will be less helpful in collecting evidence for beliefs than an average person.


Certainly human perception is fallible and unreliable to a certain degree, but if we can measure and be aware of our biases, and make a conscious, constant effort to account for them, we can use it, like Bargle said, as a helpful starting point.

AnthonyMayfield21:01, 6 October 2011
 

I believe Bargle’s empirical approach to perception better achieves the goals of epistemology, since Argle’s radically sceptic stance can never achieve knowledge. Argle does pose a valid point by suggesting that all our perceptions could be false, and humanity may be subject to a massive illusion. Although his stance attempts to refute the existence of knowledge, it may support epistemic values in the sense that it attempts to achieve truth. I believe Bargle’s stance is also valid in the context of epistemology, and far more comforting in the context of morality. Three main aspects of humanity which can justify Bargle's stance are evolution, intuition and co-operation. The presence of these phenomena suggests that human perception is a viable source of knowledge, since humanity would not have developed without their impacts, and our perceptual beliefs from generation to generation must be largely true. Therefore, we have strong reason to believe that Bargle's stance can justify human knowledge, and even though the possibility of a mass illusion does exist, I believe that stance is too speculative and counter-productive.

ChadMargolus04:34, 7 October 2011
 

There will never be a way to refute a skeptic like Argle. His argumentation is certainly reasonable, but it is one that puts humanity, or human knowledge, in utter despair. A way to confront such a skeptic would only be through a compromise. We can accept that all of our knowledge, even our perceptual experiences and beliefs, may be a mere illusion, to such a degree that our reality is artificially created by a mad scientist or a demon. No matter what the cause of our reality is, we do have to accept it. We can accept that our perceptual evidence is not to be trusted. But within this, our reality, the rules of the skeptic do not have to apply. We simply set aside any skeptical hypothesis, and we will find out, as humanity has throughout its history, that knowledge certainly can be attained, and that the most reliable way to do so is through perception. Within our reality, a reality we share with other human beings through experiences of shared perceptual evidence and the interactions that lead to such, we are fairly comfortable in trusting our sense data and our perception, although we are aware that it can misguide our beliefs sometimes. Through shared experiences we are able to avoid or limit such false beliefs by accepting a common ground for attaining knowledge, and this common ground is perception. Reality is what the majority of human beings perceive it to be. And therefore we are justified in trusting our perception (with the usual reserve).

PhilippeNussbaumer21:31, 7 October 2011
 

Argle, like me, is a sceptic, and his argument is that using perception for evidence is absurd if the motive of science is to arrive at the truth about how our world functions. If we ourselves aren’t able to master our own senses, which play many perceptual tricks on us, how can we then begin to use perception as a means of understanding the world around us? Bargle however makes a valid point in the sense of practicality. If we were to insist on only using evidence that we know comes from a source that is 100% reliable, mankind wouldn’t have the confidence to venture and create the many technological and social advances that we so take for granted. If Bargle’s argument is only for a starting point for evidence for a certain belief, then I believe perception is our best bet. However, once we have built our foundational beliefs on that particular subject, I argue with Argle that we shouldn’t continue to build on it with perceptual evidence. The empiricists’ view of obtaining beliefs is just negligent when dealing with advanced fields (especially in the sciences). Scientists can, for the glory of their names, base their theories on perceptual evidence, but what would benefit humanity more is if they used less intuitive methods to arrive at their grandiose theories, which in most cases would lead us closer to the truth.

EbenzerOloidi07:03, 17 October 2011
 

forum for week of 19 September

Clifford says " it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." Strong-sounding words, and they pushed William James into his very influential but slightly weird defence of believing beyond the evidence in matters of personal importance. But is that slogan of Clifford's *really* so strong? The weasel-word 'sufficient' might be a give-away. If it means 'enough to make it ok to believe', then the whole thing is in danger of turning into "it is not ok to believe when the evidence is so weak that it is not ok to believe". And this sounds a lot less drastic.

AdamMorton20:13, 16 September 2011

While doing the readings I kept thinking about a lot of things I learned in the History of Science. For a long while physicists believed in Newtonian physics as the primary physics that ran the universe. However, with the advent of Einstein's Theory of Relativity and, later, Quantum Mechanics, scientists soon realized that the new evidence showed things were not as cut and dried as they believed; the world view they held was actually not entirely correct, because ToR and QM complicated the Newtonian framework.

The issue is that we'll never know when we have enough evidence to support a belief. If we decided that we had enough evidence for some theory and that it was settled and done, case closed, then we'd be drawing a line arbitrarily in the sand as to what was enough evidence. Does this mean we shouldn't believe things ever? No, I think that's a step far into the opposite extreme. But, I think it's good to always have the caveat in our minds that even if we have a seemingly insurmountable amount of evidence for some belief, that we can never predict what new evidence might come up and how it might change our views. Essentially, we have to be ready to change our beliefs when new evidence becomes available because we can never know when enough is enough.

MikeHare01:07, 18 September 2011
 

I don't believe this slogan of Clifford's is really that strong; in fact I think it is pretty obvious. That you should not believe things that do not have enough evidence to be worthy of belief seems pretty basic. However, once you get into what makes evidence sufficient, it can get tricky. We can never truly know anything with 100% certainty, so my question for William James would be at what point evidence becomes sufficient. I know he discusses various types of beliefs and why or why not we should believe them, but he seems skeptical of most if not all of them. So should we just discard them? Does tradition, for example, have other uses besides the formation of beliefs?

JamesHaddad03:43, 20 September 2011
 

Ultimately, beliefs exist because they serve a purpose. Beliefs, especially those of moral nature, shape our values and determine how we see fit to conduct our behavior. Not only on the individual level but also in deciding how best to live among each other both within local societies and internationally. Shared beliefs are necessary when developing both domestic and foreign policy for organizing social infrastructure. Without such beliefs, we would live in a very anarchic world, which is quite simply, impractical.

We make a significant decision in choosing not to choose between beliefs, given how fragile the subjectivity of what constitutes "sufficient" evidence for establishing "knowledge" is. As James noted in his essay, what one may consider to be insufficient grounds for belief when it comes to Christianity, many would maintain is very strong and sufficient justification, given the biblical evidence that they have. Even in trying to decide upon a template for genuine knowledge, we must accept and believe certain assumptions. For example, one theory of knowledge is structured as: P must be true, S must believe that P, S must be justified in believing that P, and if it were not the case that P then S would not believe that P.

What I (believe) makes Clifford's statement seem so dramatic and perhaps even outlandish is his use of the word "always". It makes his point look radical, and as we discussed in class, the use of radical terms typically suggests irrational reasoning. In fact, I find it slightly ironic that he has such a strong belief about every single belief, a belief that itself cannot be objectively proven.

Indeed, humanity will have a challenging time finding true knowledge if it is as fearful as Clifford seems of being "duped". I think the more reasonable way of going about things would be to keep a critical and questioning mind without going too far toward the opposite end of the spectrum; there should be a healthy balance between accepting certain beliefs and being autonomous in deciding, first, your individual values, but also what should be accepted as fact. Be conscious of the information you are being fed; be critical; be evaluative.

RachelHolmes06:54, 20 September 2011
 

I think "belief" is a very subjective term. Most people will consider whether they should believe something before they do so. Does this suggest that once a person wants to believe something, he always has some reasoning in his mind? If so, then when people think back to why they believed something in the first place, there will always be some reasoning that they take to be sufficient evidence. Therefore, I think "sufficient evidence" is only a subjective standard that people already have in mind. Clifford's paper simply reminds people to think before they make decisions and to check whether those decisions are rational. However, people always think they are rational; without other people's value judgements, they will never know when they have gone wrong.

HongkunGai15:32, 20 September 2011
 

I am contributing to this week's forum as I joined the class late and Professor Morton instructed me simply to contribute to the next week's topic. The question of "sufficiency" always seems to raise unpleasant questions in the minds of those judging, as it suggests "however much we need for 'our truth' to become a reality". This detracts from the poignancy of the statement, since it conveys that we may have to dig around for evidence in order to make our beliefs valid, rather than develop clear, concrete, personal reasons based on our own experiences and perceptions. Clifford also makes no effort to narrow his scope (perhaps at the risk of being too specific and, in turn, ineffective): "always, everywhere, and for anyone" somehow seals Clifford's fate in his outlandish and seemingly desperate phrasing. Finally, Clifford's statement relies on the subject's treating apparently any evidence as cause to justify any belief, undermining the absolute necessity of evidence weighed by personal consideration in determining and solidifying our beliefs.

BenjaminCarney16:04, 20 September 2011
 


Taken by itself, Clifford's statement "it is wrong always..." etc. is clearly pretty unremarkable, given, as you've all rightly pointed out, that we can't really define in such an abstract way what is or isn't "sufficient" evidence.

What's more important for the purposes of his essay, I think, is the reasoning which leads up to this quote. Why is it that "it is wrong always..." etc.? According to Clifford every belief, from the most significant to most insignificant, comes together to form a sort of patchwork which helps to orient ourselves in our daily lives. Many of these beliefs are socially-derived, and hence can be influenced by anyone inhabiting a given society. The following quote I think helps to clarify Clifford's views: "Every rustic," he writes, "who delivers in the village alehouse his slow, infrequent sentences, may help to kill or keep alive the fatal superstitions which clog his race." Even the most insignificant beliefs of the most common person, according to Clifford, can have an effect on others, both through the belief itself and the maintenance of the "credulous character," and therefore, to perpetuate beliefs which "clog" the human race is in a sense to fail one's duty to humanity--the "universal duty of questioning all that we believe."

What exactly is this "duty"? Clifford isn't clear in giving an answer, but he writes as if he means progress--both in terms of knowledge and in terms of morality. "Progress" is obviously a tricky idea in itself, so I won't try to go any deeper into it. I hope the above helps to make the quote a little clearer in its proper context. If we want to be criticizing Clifford, I think it makes more sense to look at his assertions concerning the extent to which our beliefs are socially-conditioned, and the idea of us having a "duty" towards humanity in our thinking. If these are both true, then it seems to me quite rational to believe that "it is wrong always..." etc., the slipperiness of the word "sufficient" notwithstanding.

DevinEeg06:22, 22 September 2011
 

I think Professor Morton is accurate in his reference to Clifford's Principle: the weasel-word "sufficient" might be a giveaway that allows anyone to believe anything in terms of evidence. In the race to develop the atomic bomb, Brigadier General Leslie Groves had responsibility for the Manhattan Project. The plutonium bomb technology (using a synthetic fuel) was tested first. The first bomb actually dropped, Little Boy, was a uranium bomb, and its dropping on Hiroshima was, in effect, its test. That decision turned on Groves's beliefs. The result was that only two bombs brought about Japan's unconditional surrender, and a large number of military lives were saved, through action taken without evidence that the Little Boy bomb would actually work. James Milligan

JamesMilligan06:57, 22 September 2011
 

Based on this thread, it would seem the general consensus is that beliefs and rationality are closely tied. I would argue that beliefs often cloud the decision-making of those in power, as well as the daily choices of each person. Why do we stand firm in some of our beliefs while understanding that certain others are probably untrue? Dieting is an example of this: there are countless methods that different individuals swear by, or believe in. How can such a situation promote rationality and clarity?

Clifford suggests that if an individual's beliefs are founded illegitimately, that is, if there is a lack of good evidence, that person is subject to moral criticism. Put another way, the individual has failed in their duty as a rational human being by being fooled into false beliefs.

This raises a problem that dieting exemplifies: how are we to know who the "experts" are? Which method actually makes one thinner and healthier? Clifford is not suggesting that we distrust everyone and everything we are told. For him, the answer lies in social trust: if something is widely believed, it is more likely to be true. This, however, quite clearly does not yield correct beliefs every time.

Clifford leaves us with strong words for navigating our beliefs rationally: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." True to his doctrine, I must ask: where is his evidence for this statement? With the internet, there is now "sufficient evidence" and "proof" for almost every idea, belief, and opinion. The best each individual can do is question where the evidence comes from, and from whom. Beyond that, each of us holds some beliefs that are false, and the best we can do is be open to criticism and change.

CarlHermansen20:12, 22 September 2011
 
 

I would start a line of discussion here. For example: with all the things epistemology is supposed to cover, how can there be one topic without its becoming completely vacuous and crazy?

Then someone else would add a comment or an answer, signing their contribution. For example

Maybe it's because philosophers think they have the answers to these things, just because they're over-confident. MARIANNE

and someone else could answer this. For example.

Well, there are some very simple things to say, which are generally relevant. Like: guessing doesn't give good answers. Or: scientific method tends to pay off. HENRY

and someone else might react to this. and so on.

AdamMorton23:15, 6 September 2011

I've just remembered to do this posting, and after reading the prior entries I felt a sudden apprehension. I suppose I'm admitting that the other entries seem very informed and developed, and that, given my last-minute entry, my post will be insufficient by comparison. However, since Professor Morton has allowed us to post whatever we like, so long as we post, I am going to take advantage of that. It's better to post something than nothing at all.

In response to the question of whether or not epistemology is 'merely' an expression of values and biases, my answer is yes, for now. Of course there's more to it, although I'm not properly prepared to engage in discussion on it yet. My reason, though, is that knowledge can easily be biased, and those biases are typically based on our values. Being able to study knowledge and hold beliefs without bias (which can skew reason) is definitely an ideal to strive for (equal to objectivity?), but a difficult one, I imagine. The question then remains whether there even is an objective truth for us to study in the first place. KC

SeetCheeChan05:46, 13 September 2011
 

I found that reading the text not only helped define the key terms but also gave examples that make clearer to the reader what knowledge, truth, etc. are. In the lecture, Professor Morton talked about a rational belief that is unjustified. From my understanding, it is a belief that a person has arrived at through rational reasons and thinking, but somewhere an aspect of that belief is wrong and has been overlooked. Take the example given in class: a person doing their math homework believes every answer is correct. They reviewed the homework and checked all their answers. They got someone else to review and check their work, and everything looked correct. But when the homework is handed in, the math teacher finds a mistake. Therefore, the student's belief that they had answered every question correctly was a rational belief, but it was unjustified because there was a mistake, a flaw in the belief. I was wondering whether there are any other simple examples to help us remember what rational but unjustified beliefs are?

CourtneyChristianson19:27, 19 September 2011
 

" it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." I think I'll need a bit more evidence Mr. Clifford.

AndreRoberge03:38, 22 September 2011
 

So, this is my first forum contribution and I am, admittedly, not sure I'm doing this right... but here goes. I find empiricism an interesting way of gaining knowledge. It is predicated on the BELIEF that knowledge can only be gained through our sense data; this seems highly contradictory, because if we only base our beliefs on perceptions, then where is the evidence for the belief in empiricism in the first place? Perceptions feed into beliefs, and beliefs feed into our perceptions; it's a circular argument. Empiricism doesn't even seem ideal, because time and time again our senses deceive our consciousness into believing something that is, in fact, untrue. I suppose it comes down to what we think ideal knowledge should look like and how we go about achieving it. I'm fairly sure this has all been said before, but these are just some of my thoughts on the matter.

Katie Ryder20:59, 13 October 2011
 

forum for week of 26 September: skepticism

People are usually pretty sure that they know many things. They have no doubts that 2+3=5, that they have toenails, that Vancouver is in Canada, and so on. But when they meet philosophical scepticism (scepticism/skepticism: either way) they usually fold without resistance. Sure, they say, we don't really know anything. Why is this? Possible explanations:
- they can't be bothered to argue, so it is easier to give in
- they like grand weird-sounding philosophical conclusions
- they recognize a truth about knowledge and get confused, thinking it is skepticism. (I tend to this one.)
- they accept the truth that we know very little, but stick with their old opinions for the sake of harmony with others
- or...?

AdamMorton21:38, 23 September 2011

I suspect they see scepticism as possible but the lack of resistance is because it isn't a falsifiable theory. Karl Popper noted that the demarcation between science and dogma was whether or not there was a potential piece of evidence that could prove the theory wrong. Scepticism offers no such possible piece of evidence, making arguing with a sceptic both pointless and exhausting.

SpencerKeys16:51, 26 September 2011
 

Simply put, I believe that when people think about philosophical skepticism they do recognize a (possible) truth in it. However, I also believe most people don't care much about its importance, since it normally has little impact on our daily lives. Sure, you could accept this truth, realize you cannot know anything for certain, and thus stop believing in anything, but what would your life then look like? You would have to question everything at every given moment, which seems rather impossible. Instead, people would rather accept it as a possibility, yet one not significant enough to actually influence their way of life. It's impractical to let it lead your life.


I'd like to add something to the comment above about Karl Popper's falsificationism. Popper developed this theory in response to his problem of demarcation; he was looking for a criterion to distinguish "real" science from "fake" pseudoscience. His theory is a response to confirmationism, the view that we can justify scientific theories and beliefs through observations that confirm them. Popper argued that this is impossible: you cannot start making (useful) observations without having some expectations already in place. Instead, he argued, we believe in theories not because they are confirmed but because they are not yet falsified. We can never conclusively establish that a theory is true, but we can conclude that a theory is false. As long as a theory has not yet been refuted, we are justified in believing it.


This argument too can be challenged. Can we really ever conclusively disprove a theory?


To get back to the point, debating whether there is anything we can really know for sure is, for most people, not going to lead anywhere. Philosophers have been debating this issue for a long time, and the problem is that we can never really know whether any of their theories is true: they can never be confirmed, because any method of confirming such a theory is itself subject to skepticism. Many theories will sound plausible but offer little basis for making life-changing decisions.

YaradeJong02:41, 27 September 2011
 

Although people carry with them many beliefs such as the ones in the example, when they are confronted by people who disbelieve such statements, this disbelief is usually founded only in ordinary incredulity rather than in philosophical skepticism. It is therefore fairly easy to rebut these disbeliefs, either by proving them false or by neutralizing the grounds for doubt. Take the example of Vancouver being in Canada. Suppose someone doubts this, arguing that Vancouver is, in fact, in Madagascar. This could be rebutted either by showing the person a map (or several, if the doubt remains) or else by discovering that, for example, the person who told them Vancouver is in Madagascar was playing a trick on them.

However, it becomes much more difficult to argue with philosophical skeptics, since they question the very background of information that people generally rely on to verify facts like the examples in the question. It becomes very difficult to falsify or neutralize any doubts, since the reasons you would usually give against ordinary incredulity are themselves being undermined. It is hard to argue with a skeptic when you disagree about what counts as evidence.

People are also likely to be convinced by skepticism, at least temporarily, when first confronted with it, since skeptics offer some very convincing reasons: how we are constantly disproving and re-formulating our scientific theories, for example, or how often we have been wrong in the past. On further reflection, though, people will probably realize that many previous errors do not mean we will be unable to discover truths in the future.

Nevertheless, even if people are convinced by skepticism, they are likely to carry on with their lives as before: assuming they know certain things about the world, or at least working on the basis of what has been true for them in the past (though this too may be questioned in light of the dubious nature of our recollections), in order to function as humans. In this case, they would have to suspend their disbelief and suppose that we do live in a world in which there are facts; something that I imagine even the most fervent skeptic does.

In conclusion, I think that agreement with skepticism quite often stems from a difficulty in knowing how to begin rebutting such an argument, since the usual methods of disproving beliefs have been undermined by the skeptic's claim.

AlexandraKnott06:25, 27 September 2011
 

I have to agree in concept with the above post. There certainly is merit to the idea that even our most steadfastly held beliefs may not stand on solid epistemological ground, which can be startling to individuals not used to philosophical skepticism. In everyday life, however, even the most fastidious skeptic must set aside these doubts simply in order to live. As Hume said, you have to believe the world is essentially as it appears to be when you exit the classroom.

Does this mean that the average person is in fact an unknowing skeptic? Hard to say. Skepticism definitely holds innate appeal, but if you choose to judge individuals by their actions rather than their stated beliefs, you would be hard pressed to find any skeptics at all, even among those who write articles extolling its virtues.

ZacharyZdenek06:36, 27 September 2011
 

I think people tend to give in because philosophical skepticism, when argued properly, is pretty convincing.

People think they know a lot of things, without actually knowing why or how they know them. Few people live their lives considering the ramifications of the theory of relativity, or quantum mechanics. Those theories have a level of precision that is not useful to people during their day to day lives. The same goes for considering the source of knowledge, as the thought involved is beyond utility for most people. We can all get by pretty well without having to think very hard, most of the time.

That sort of shallow thinking can, and does, create an environment ripe for philosophical conversion. It's easy to change someone's mind when they hadn't really made it up in the first place.

AmandaJohnson06:51, 27 September 2011
 

I believe it's more about self-doubt concerning one's own beliefs and reasoning.

RunZheLi07:24, 27 September 2011
 

I personally have to disagree with the statement that most people fold in the face of skepticism. Discuss religion with someone devoutly religious sometime: regardless of the fact that skepticism discounts most of the devout believer's arguments, they will hold fast or simply denounce you as a servant of Satan. If discussing the existence of God, for example, they will most often quote the Bible, which is supposedly the word of God; yet within the context of the discussion God's existence has yet to be established, and therefore it is unproven whether He has any word at all. In their belief, however, their arguments are fully valid. Though we may not think of belief in the existence of God as knowledge, it is knowledge to the devout believer, and so it is a valid example for our discussion. The believer knows God exists just as the conspiracy theorist knows 9/11 was an inside job. Try using philosophical skepticism on one of those guys.

What is knowledge? The question of this course. We claim that something is not knowledge unless we have evidence, or better yet proof, but the devout believer, whatever it is they believe, sees proof of their belief everywhere. On the other hand, do I really care that 2 + 3 = 5? No. Is that knowledge to me? Not really. It is, rather, an assumed and commonly held fact. What makes it different from knowledge is that I have never really had to think much about it, and therefore have never really examined any evidence or proof for it. (I am sure I have; I just never noticed, because it is, to me, an inconsequential piece of trivia.) So it is not really knowledge to me, just something I assume to be correct because it is what I am told, and I do not care enough to look into it for myself. This is a topic on which I would admit the skeptic's point, but it would not really be folding or relenting, as I did not have a starting position of my own anyway.

Whether I have five toes or four on my left foot, or whether I have a physical body at all, can be called into doubt once I become convinced that it is quite possible that everything I perceive is either created by my brain, or exists in the form I perceive only as my mind's interpretation of a physical world I could not otherwise comprehend. But what forces my mind to concede that nothing I perceive might be real? That I might really be sitting in a classroom in Romania at this moment, listening to a lecture by some Adam Morton guy, and not lying on my bed writing this before going to sleep? In short, why do people relent in the face of skepticism about "knowledge" which is not self-centered (i.e., scientific knowledge)? We once believed the earth was flat, which proved to be wrong. We thought, some time before we were born, that the earth was the center of the universe, but is it? Up until some time in the 1800s, people believed the earth could not be more than 6000 years old. Einstein had us all convinced that it was impossible to travel faster than a known finite speed defined as the speed of light, but wait... . In truth, we as humans are aware of the fallibility of so-called knowledge. It is "so-called" because, unless we have personally found proof of a belief, it is not our own knowledge, just something we take on authority. In the shadow of the long history of disproven beliefs and theories, we admit somewhere inside that everything we think we know may be false, and we refuse to openly admit it only when it comes to those beliefs about the world, or about the way it should be, that make us happy with ourselves.


In short, people do not agree with skeptics unless the subject matter is something they don't really care about. You can easily convince me there is an animal called a snuffleupagus, but you can never convince a priest to doubt the existence of God, no matter what argument you make or how you undermine his arguments. In fact, in his mind you haven't even undermined his arguments. Doubt about one's own knowledge must be self-derived, because to the believer the proof has already been seen, and counter-proof must be seen before doubt can begin.

WilSteele08:25, 27 September 2011
 

I think the reason the general public is prone to philosophical skepticism is the ease of accessing information in the 21st century. Prior to the invention of the computer and the Internet, few people could access information about how certain things worked. For example, only a small group of elites had access to information such as how the solar system worked at the beginning of the 20th century, whereas now even a 5-year-old will probably know more about it than most people did in the past. Because more and more people can access this information, they start to doubt its authenticity. In the past, few people bothered to ask why something is the way it is, and instead just relied on the information available to them (mostly through the people around them). If one is exposed to the same information for a long time, one will firmly believe it, even without knowing why it is so. Nowadays, by contrast, the information and ideas people are exposed to are constantly changing because of the availability of information; people's "beliefs" about a subject change before they have been exposed to it long enough. Hence I believe the reason skepticism causes people to give in so easily is that, since past beliefs have been so easily overturned, there is no reason to stand firmly by one's own beliefs; it is better just to stay open to new things (even though the doubt could well be wrong).

YangSunnyLi08:28, 27 September 2011
 

I think that many people accept skepticism whilst making a conscious choice to ignore it, because accepting that there is very little truth makes living in the "reality" that we live in too difficult.

ChloeLawson15:12, 27 September 2011
 

I think people consider skepticism as another way to either prove or convince themselves of their beliefs.

RunZheLi17:49, 27 September 2011
 

I believe that people become skeptical of their own previously founded beliefs when faced with philosophical skepticism as a result of two principles. a) It's human nature to exhibit some naivety toward information when someone presents their argument or point of view. This is especially evident with beliefs of ours that are not properly grounded, or when those very beliefs are dissected, debunked, or unraveled by the source of the contradictory information in question. When one of our thoughts is challenged, we immediately start to challenge the other thoughts associated with it. b) This leads into my second point: conformity. Humans have a great tendency to agree with each other. We feel a sense of social insecurity and awkwardness when we think, reason, and (in this case) believe something different than someone else. Thus we feel more at ease when we doubt our own ideas and agree with the more "grounded" reasons of others.

DanielKostovicLevi18:29, 27 September 2011
 

When faced with someone who doubts the very nature of reality, most people find it very difficult to construct an argument, as neither point can be proven. Thus it becomes an argument solely for the sake of argument, which is not an overly exciting prospect. If we are actually just brains in vats and our reality is nothing more than a false perception, nothing in our day-to-day lives would change, other than a far greater rate of depression.

AlanLaking00:59, 28 September 2011
 

This entry refers to Dr. Morton's discussion of neutrinos travelling faster than the speed of light. The Stanford Encyclopedia of Philosophy article on scepticism, under the heading "2. Two Basic Forms of Philosophical Skepticism", includes: ..."'possible world scepticism' because the arguments for it typically involve imagining oneself to be in some possible world that is vastly different from the actual world and at the same time absolutely indistinguishable (at least by us) from the actual world." What underlies this form of skepticism is assent to the proposition that we cannot know EI-type [Epistemically Interesting] propositions because our evidence is inadequate. An Associated Press news release dated September 23, 2011 and titled "Physicists wary of junking light speed limit yet" reports a skeptic, Alvaro De Rujula, a theoretical physicist at CERN, the European Organization for Nuclear Research outside Geneva, from where the neutrino beam was fired, who said he blamed the readings on a so-far-undetected human error. What I find captivating is De Rujula's choice of proposition for quotation to The Associated Press: "The average person, said De Rujula, 'could, in principle, travel to the past and kill their mother before they were born.'" I speculate: how much is De Rujula the skeptical physicist, who might argue that the average person couldn't just say hello to their future mother, as a future sibling, instead of killing her, because something like quantum theory would interpret such an example as an interference altering the sibling's future events; and how much was De Rujula's choice of this example of travel to the past designed to tarnish the physicists who may have successfully discovered that neutrinos go faster than light; or is it neither of these speculations? This is of great interest. James Milligan

JamesMilligan06:58, 28 September 2011
 

Correction to the preceding post: "Physicists wary of junking light speed limit yet" reports a skeptic, Alvaro De Rujula, a theoretical physicist at CERN. James Milligan

JamesMilligan07:30, 28 September 2011
 

Expanding on Spencer's point about non-falsifiability in relation to skepticism: I feel automatically skeptical of, and averse to, any theory or conjecture that is non-falsifiable, which makes me skeptical of the ultimate philosophical skepticism that states we cannot know anything. I think a similar aversion to non-falsifiability is what has helped push our scientific progress to where it is today. These non-falsifiable (NF) ideas are rightly viewed as, while interesting to think and talk about, less than useful in a practical sense.

Compared to a healthier skepticism, which demands solid evidence to advance an argument, the 'we don't really know anything' skepticism only hinders, or completely ends, any inquisitive discussion. There is no response to "oh, there is no way we can really know that", or, in the same way, to any NF claim. This leads to my own first skeptical inquiry toward any new claim, idea, or theory, which is to ask for, or think of, something that would falsify it. It is easy to think of numerous obvious examples that would immediately falsify any genuinely scientific theory.

Back to the main question: we fold because there is no real response to a statement like this. People will either accept it because they can't see anything wrong with it, or reject/ignore it because the statement itself loses all its meaning the moment it is uttered.

"We can't really know anything" also contradicts itself, in that it makes a claim about knowledge that calls into question the certainty of everything, including the statement itself.

Johnlewis08:49, 28 September 2011
 

Why do we acknowledge that we don't REALLY know anything when confronted with philosophical skepticism? I see the relevance of accepting our ignorance and limitations, especially in how we perceive the world, in responding to this question. The first thing that popped into my mind while reading the discussion topic was a quote attributed to Socrates: "The more you know, the more you realize you know nothing." We tend to think that more knowledge means more intelligence. That may be true, but I believe gaining knowledge is also a humbling experience. When we learn new, amazing, and controversial things to be true, I am sure some of us realize through that "enlightenment" how ignorant we actually are (and what we learn could be proved false or fallible again in the future).

In The Invisible Gorilla, the first two chapters give several examples and case studies showing how human perception and memory can be misleading and fallible. We believe we saw things one way, but in reality they happened another way. We are surprised to see a gorilla walking across the basketball court because we didn't see it while we were counting the balls being thrown around. Something that seemed trivial to one individual was a traumatic experience to another, leading to two different accounts of the same situation. We rely on these memories and perceptions to understand the world around us, and much of the time we are caught believing things the way we want to. The more aware we become of the misleading aspects of the human faculties we once had so much faith in, the more room it leaves for questioning already established "truths". We have all experienced a situation where something we believed firmly to be true was proven or argued to be false, either converting or weakening our beliefs. We live in a world where "true" is shown to be "false" and vice versa, and where the irrational is explained to be rational, or the reverse.

Therefore, I believe that whether we fully comprehend it or not, we have an underlying understanding of our limitations, which leaves us searching for an absolute truth and questioning the already established "truths". And thus I think it is quite possible that a person will end up agreeing that we do not REALLY know anything when confronted with philosophical skepticism, for the very reasons mentioned above.

ShinHyeKang01:10, 29 September 2011
 

It seems to me that most people cave in when presented with philosophically "global" skeptical scenarios because we have an intuitive grasp of how Karl Popper argues we should treat unfalsifiable hypotheses. It is rather obvious that Descartes' demon, for instance, is unfalsifiable, because it undermines the entire process of attaining knowledge by suggesting that our thoughts themselves may be manipulations of some malignant architect. We tend to accept the potential validity of such fantasies because we recognize, firstly, that they are difficult if not impossible to argue against, and, secondly, that their unfalsifiable nature means they should not be given nearly the same weight as a hypothesis testable through the scientific process. Karl Popper claims (as a previous poster notes) that an unfalsifiable belief is essentially dogmatic. In short, people do not argue against global skepticism with the rigor afforded to other philosophical topics because it doesn't allow for much of an argument, and because accepting or rejecting such a broad, overarching, dogmatic belief has negligible consequences for our everyday lives. As an analogy, consider how little the idea of Bertrand Russell's teapot bothers most people. Few of us would argue that we could disprove the notion that there is a celestial teapot in orbit around the sun, but few of us think our everyday lives would change drastically if it turned out to be true.

Aled23:21, 29 September 2011
 

I believe that a lot of people intentionally choose not to argue against philosophical skeptics not only because the skeptics' theory isn't falsifiable, but also because the theory has very little impact on people's lives. The "fact" that 2+3=5 may not be true, but at least in this reality, 2+3 always seems to equal 5 for everyone. Because of this consistency, the world is able to function with no problem regardless of how true our knowledge (which may be a giant interconnection of wrongness) really is. In a way, this attitude is similar to an externalist's, since it amounts to arguing that what really matters is the end result, which in this case is the functioning of the world, rather than the process of getting to that result.

JinKim21:22, 1 October 2011
 

I believe people are less inclined to dispute the ideas suggested by sceptics because of a lack of attachment to the argument. Non-sceptics accept reality as it is, and whether a mass illusion exists is generally not a topic of interest. This debate is comparable to the one between theists and atheists; however, people defend their religious beliefs more rigorously because religion is a topic of great relevance and debate in our cultures, and has been for thousands of years. The idea that our reality could be a massive illusion has generated far less debate due to its lack of relevance in our society.

ChadMargolus22:33, 7 October 2011