forum 1, week of Jan 8, Dretske

Fragment of a discussion from Course talk:Phil440A

While thinking about the third question, I was reminded of another issue where the definition of relevance is crucial: the frame problem in AI, where one needs to represent the logical effects of an action without also representing a multitude of irrelevant information along the way. One of the suggested approaches to this problem was Jerry Fodor's (I believe) appeal to the relevance of information, or the "context" in which the AI has to operate in any given situation. Following his logic, the exclusion of relevant alternatives could then entail thinking of as many alternatives as are physically possible (or as conceivably exist in a possible world, if we are to be Lewisian) in the given set of circumstances, and, further, thinking of a reason why what we know differs from these alternatives and is therefore better suited to this context.

However, this solution, both in the present situation and in the frame problem, risks an infinite regress of "relevance of relevant contexts": how exactly can we tell what the "given set of circumstances" actually consists of, and do we know enough about this context to make judgments about its relevance? I tend to agree that even the mention of relevance may be a slippery slope in epistemological questions, because the assumption that we can, in fact, make judgments about what's relevant could introduce bias.
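For anyone unfamiliar with the frame problem, here is a minimal sketch of the standard STRIPS-style workaround it motivated: an action lists only the facts it adds or deletes, and everything unmentioned is assumed to persist. The fluent and action names below are purely illustrative, not from any particular system discussed in the course.

```python
# A state is a set of fluents; an action names only what it changes.
# Everything the action does not mention persists by default -- this
# sidesteps enumerating the countless "irrelevant" non-effects.

def apply(state, action):
    """Return the successor state: delete, then add; the rest carries over."""
    return (state - action["delete"]) | action["add"]

# Toy world: a robot holding a cup in the kitchen with the lights on.
state = {"robot_in_kitchen", "holding_cup", "lights_on"}

# 'Put down the cup' affects only two fluents; 'robot_in_kitchen'
# and 'lights_on' survive without ever being listed.
put_down_cup = {"add": {"cup_on_table"}, "delete": {"holding_cup"}}

new_state = apply(state, put_down_cup)
print(sorted(new_state))  # ['cup_on_table', 'lights_on', 'robot_in_kitchen']
```

The philosophical worry in the post survives the trick: someone still has to decide, in advance, which fluents belong in the representation at all, which is exactly the "relevance of relevant contexts" regress.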

06:46, 10 January 2012