Course:CPSC522/Ontology

From UBC Wiki

Please note: wording may still have to be refined.

TODOs

  • Smith's philosophical characterization of ontology can be put in a larger expositional framework by referring to the distinct goals of ontology discussed in Hofweber's article on the relationships between logic and ontology (Logic and Ontology in the Stanford Encyclopedia of Philosophy).
  • My current pseudo-philosophical way of relating logic to ontology, and ontology to facts in the world, in this entry should be refined in light of Hofweber's article.
  • The import of ontology mentioned in this article is too narrow in scope (and poorly presented); it can be considerably expanded by referring to works cited in the Ontology Summit 2017 Communique.
  • My explanation of what ontology is still lacks clarity; it can be revised with reference to the schema of ontology extraction given in section 2.1 of the Communique.

Title

Ontology concerns designs for how domains of interest are represented for machines that reason and interact.

Author: Shunsuke Ishige

Preliminary remarks: I am the sole author of this entry, and so will not use the first person plural in my exposition. The convention in scientific writing is to avoid the first person singular. However, my view is that for statements that do not require support by formal empirical evidence, such as statistical data or claims made by scientists on the grounds of such data, there seems to be no reason to follow the convention: simply suppressing the use of 'I' in such cases does not somehow add evidential support. This entry is not a report of experimental results or a formal survey of scientific literature. Such statements include those that appeal to common sense and intuition, which I use primarily for the expository purpose of introducing the main ideas, which are themselves based on reliable sources. As the author of the entry, I maintain my own narrative voice throughout; my goal is to present the material in a way that is not a mere summary of textbooks or other sources.

Abstract

This entry concerns ontology as understood in computer science, not in philosophy. For disambiguation: the latter may be roughly said to concern questions of being, which are not the topic of this entry. Ontology, in the pertinent sense, is a set of principles that underlie the representation of knowledge, specifying how a domain of interest is represented for machines -- what things are there, what they are, and how they are related to each other. As such, it fundamentally connects machines to the world, in that machines use the represented information in reasoning about domains. In particular, whether reasoning yields any meaningful conclusions about the world depends on the basic design of the representation. I begin this entry with a characterization of ontology and its implications, such as its effects on the interaction of different machines with possibly diverse representation schemes, and levels of abstraction in representation. Machines rely on ontology in reasoning about the world, and we want the derived conclusions to be true. In connection with logical reasoning in particular, I briefly consider how the representation of things in the world relates to the notion of truth. Subsequently, I roughly describe how ontology is realized in machines in the form of computer code in logic-based as well as probability-based intelligent systems.

Builds on

In terms of artificial intelligence, this entry assumes only a general idea of knowledge representation and reasoning. If the reader is not familiar with the notion, browsing the introductory chapter of a standard textbook should suffice; see, for example, Artificial Intelligence: Foundations of Computational Agents. Some basic notions from set theory, such as subsets, intersections, and relations, are also assumed. (This brief introduction to set theory should provide enough background knowledge to read this entry.) See also the wiki entries on first-order logic and probability. Please note that I use the term concept but do not attempt to define it precisely, which is not necessary for the purpose of this entry. My use of the term is not technical; I roughly mean a mental representation capturing characteristic features of things such as chairs or dogs.

Related Pages

To be added, if any.

Content

Introduction

Let me begin with the following remarks on the import of ontology, which illuminate in an intuitive manner why basic schemes of representation are fundamental. In reference to a view he ascribes to Gruber [1], Smith elucidates the notion of relations among concepts that underlie understanding in the diverse contexts where an agent might find itself, such as scientific research and storytelling: “[e]ach of these ways of behaving involves … a certain conceptualization… [, namely] a system of concepts in terms of which the corresponding universe of discourse is divided up into objects, processes, and relations in different sorts of ways” [2], where the "universe of discourse" means a totality of representable things [1]. Indeed, without reference to such conceptual or semantic systems, in what terms could the agent make sense of its experience and comprehend it at all? What objects do we speak of, what properties do they possess, and what relations do they bear to one another? In particular, implicit in the above remark are intra- and possibly inter-system relations of such building blocks; continuing his discussion of Gruber, Smith points to a “taxonomy” that may emerge from the study of conceptualization [2]. In this entry, I will first describe ontology briefly, then illustrate its realization in relation to systems of logic, and finally describe a connection of ontology to probabilistic models.

Characteristics of Ontology

In this section, I explicate the first approximation of ontology given above. A definition of ontology is given first, to provide the reader with a sense of direction, followed by a minimal description of ontology in general terms and of its implications.

Specification of Things by Terms and Definitions

Following Smith, ontology may be characterized as follows: ontology is a semantic scheme for representing things and their relations in a given domain for machines that use the knowledge in reasoning and in interacting with other machines [2]. It is semantic in that ontology is a system of terms and their definitions used to represent things. It is a scheme, however, in that a mere list of terms with definitions would be useless for representation; we need a structured system or, as Smith puts it, "a lexical or taxonomical framework" [2]. As is implicit in this remark, terms relate to each other, just as things do in the world. For example, dogs are mammals, and mammals, animals.

To flesh out this characterization, I follow the explanation by Uschold and Gruninger, but phrase their description of ontology in terms of the type-token distinction. For the reader who is not familiar with this distinction, here is an example: if there are roses in a vase, they are tokens of the type rose, each instance possessing the quintessential properties associated with the plant. Let us put this in semantic terms. We say that something is a rose because we know what the word 'rose' means and that the thing fulfills the definition. Perhaps we associate a certain petal shape and scent with roses; alternatively, in a different context, we might use some chemical or genetic makeup of the plant as the definition. Now, with this distinction in mind, we can turn to the description of ontology by Uschold and Gruninger: the units of organization in such a systematic, hierarchical representation of a domain are called classes, which correspond to things construed as types rather than as tokens. (When combined with tokens, the resulting system is called a knowledge base.) Classes capture attributes of the represented things as properties, and may be subject to certain organizational restrictions for a coherent system [3]. So we might have a class for dogs that captures their properties, and another for mammals. In particular, these classes might relate to each other, reflecting the way the represented things are.

Uschold and Gruninger summarize the essential characteristics of ontologies: (1) a set of terms for classes, specifying what we speak of; and (2) their definitions, in a wide range of forms including database and XML schemas as well as systems of logic such as description logic [3]. Regarding the second point, as Poole notes, definitions in formal languages enable machines to work with the meanings of terms, in contrast with definitions in natural language that might accompany a database schema [4]. In the domain of agriculture, for example, such a set might contain terms for different kinds of apples, with their respective definitions coded in a computer language so that machines can use the represented information in reasoning about things in the domain. There are two points to note in this general characterization. First, linguistic systems are generally social in nature; they should serve the purpose of successful communication, based on common terms with shared meanings. Second, the domains and levels of abstraction we might want to speak of, and at, help determine a system of relevant terms and their definitions.
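
To make the type-token distinction and the idea of machine-usable definitions concrete, here is a minimal Python sketch (my own illustration, not from the cited sources; the terms and features are hypothetical). A class is a term with a definition, here simply a set of required properties, and a token is an individual that satisfies that definition.

from dataclasses import dataclass

@dataclass(frozen=True)
class Individual:
    """A token: a concrete thing in the domain."""
    name: str
    features: frozenset

# Classes as types: each term comes with a machine-usable definition,
# here just the set of properties an instance must possess.
ontology = {
    "Rose":  {"has_petals", "has_rose_scent"},
    "Apple": {"is_fruit", "grows_on_apple_tree"},
}

def is_instance(token, term):
    # Does the token fulfill the definition associated with the term?
    return ontology[term] <= token.features

rose1 = Individual("rose1", frozenset({"has_petals", "has_rose_scent"}))
print(is_instance(rose1, "Rose"))   # True: rose1 is a token of the type Rose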

Implication 1: Interoperability

The first point concerns the role of ontology as a fundamental basis of the representation of domains. In particular, Smith describes the “Tower of Babel problem”: the development and adoption of diverse ontologies with no standardized vocabularies for data and knowledge bases cause confusion and hinder interaction [2]. What is required is, as Poole and Mackworth say, "semantic interoperability": knowledge bases can interact, with the terms of interest in a given domain being clearly defined in computer code [5]. Such a problem in interconnecting agents and data sources motivates Uschold and Gruninger's study of ontology: "[s]treams of data were successfully transmitted between systems, however there was no meaning associated with the data...[, which is] analogous to successful delivery of an encrypted message", the cause of the problem being "the [incompatible] semantics of the exchanged data" [3]. Not surprisingly, if two systems do not use the same terms with the same meanings, they cannot correctly recognize or process information from each other.
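
The following toy Python sketch (my own illustration; all names are hypothetical) shows the problem in miniature: records are delivered intact, but the receiver's vocabulary does not match the sender's, so the data carries no usable meaning until the vocabularies are aligned.

# The sender's record is syntactically valid and arrives intact.
sender_record = {"units": "moreThanTwo", "ownership": "rental"}

# The receiver uses different terms for the same concepts.
receiver_vocabulary = {"numberOfUnits", "tenure"}

# Delivery succeeds, yet every term is unrecognized: data without meaning.
print(set(sender_record) - receiver_vocabulary)   # {'units', 'ownership'}

# Semantic interoperability amounts to a shared ontology, or at least an
# explicit alignment between the two vocabularies.
alignment = {"units": "numberOfUnits", "ownership": "tenure"}
print({alignment[k]: v for k, v in sender_record.items()})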

Implication 2: Levels of Abstraction

Regarding the second point mentioned above, the kinds and levels of abstraction of terms, the discussion so far may have given the impression that all representation schemes operate at the same level of abstraction. For example, the particular domain of automobile manufacturing might have (possibly incompatible) sets of specific terms with their meanings; similarly for the domain of agriculture. However, as Russell and Norvig motivate an "upper ontology" or a "general framework of concepts", the use of abstract concepts overarching diverse specific domains is not precluded [6]. In particular, Smith cites concepts such as "time, space, inherence, instantiation, identity, measure, quantity, functional dependence, process, event, attribute, [and] boundary" as examples [2]. One example of how such inter-domain abstract concepts can bear on particular ontologies is provided by Poole, in which basic notions such as "thing" and "identity" (of objects) are assumed in constructing an ontology for geology [4].
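
As a rough sketch of the idea (my own illustration; the class names are made up), domain ontologies for different fields can hang off a shared upper level, so that abstract concepts such as "thing" are defined once and reused:

# Upper ontology: abstract concepts shared across domains (child -> parent).
upper = {"Thing": None, "PhysicalObject": "Thing", "Process": "Thing"}

# Domain ontologies attach their classes to the shared upper level.
geology     = {"Rock": "PhysicalObject", "Eruption": "Process"}
agriculture = {"Apple": "PhysicalObject", "Harvest": "Process"}

taxonomy = {**upper, **geology, **agriculture}

def ancestors(term):
    # Walk child -> parent links up to the root of the upper ontology.
    while taxonomy.get(term) is not None:
        term = taxonomy[term]
        yield term

print(list(ancestors("Rock")))     # ['PhysicalObject', 'Thing']
print(list(ancestors("Harvest")))  # ['Process', 'Thing']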

Realization of Ontology in Logical Systems

In this section, I illustrate how ontology is expressed in formal languages and thereby serves an agent making inferences. The purpose of this section is to give an idea of how the characteristics of ontology can eventually be grounded in computer code.

Conceptions of Truth: What to Represent

Logic is used by machines to make inferences about things in a given domain. Naturally, we want the conclusions drawn in such reasoning to be true. However, it may be said that logic itself takes the notion of truth for granted and simply uses it. When a sentence is deduced logically, we might describe it as true; but what does being true mean? What are the fundamental conditions whose fulfillment makes something true? If we have a definition of dogs, such as their physical and behavioral traits, and Fido possesses those features, we might say that 'Fido is a dog' is true. But if we do not have terms and definitions that, in a sense, match the things in the world we want to speak of, they are useless for us: whatever inferences we make and describe as true are no longer about the world.
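
The point can be put in computational terms with a small sketch (my own, deliberately simplistic): the truth of 'Fido is a dog' is evaluated as satisfaction of the definition of 'dog' by what Fido is like in the (here, simulated) world. If the definition matched nothing in the world, logic could still manipulate the sentence, but it would no longer be about anything.

# A simulated world: what the individual named 'Fido' is actually like.
world = {"Fido": {"barks", "has_fur", "four_legged"}}

def dog(features):
    # A definition of 'dog' in terms of required traits.
    return {"barks", "has_fur"} <= features

# 'Fido is a dog' is true: the thing named by 'Fido' fulfills the definition.
print(dog(world["Fido"]))   # True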

In this connection, let me briefly explain certain subtleties about conceptions of truth that I did not mention when citing Smith's remarks in the introduction of this entry. He actually states the characterization of conceptualization I cited in a critical tone. The reason is that such representation may simply reflect "theories or languages or systems of beliefs" people have, detached from what things really are in the world [2]. Languages are social in nature; as such, they might reflect the worldviews of the cultural groups that use them to communicate in day-to-day interactions in their respective ways of life. In particular, what Smith means here by theories, languages, or belief systems seems to be biased, subjective views of the world. Presumably, on his view, truth is to be accounted for by a certain correspondence to things in the world, as opposed to, say, by some coherence among our beliefs; the question for ontology should be "whether its conceptualizations are true of some independently existing reality" [2], his concern being that "[c]onceptualization ... may deal only with created (pseud-) domains" [2]. The remedy he suggests is to align the construction of ontology with the natural sciences: "we have to rely at any given stage on our best endeavors -- which means concentrating above all on the work of natural scientists" [2]. What he means by the best effort to represent the world is "striving for truth to independent reality ... [as] a paramount constraint" [2]. Presumably, to the extent that science is an objective endeavor, ontologies that draw on the scientific conception of the world, its precise concepts and their relations, can provide the needed scheme for machines. As a basis of knowledge representation, ontology is a nexus between the world and machines; we would like to represent a given domain as it really is in the world, so that machines can use logic (and other forms of reasoning) on the knowledge base, drawing conclusions true of the world.

One of the topics Smith raises is the correspondence theory of truth. Interested readers should see, for instance, David [7]; more generally, for theories of reference, i.e. how words fit the things in the world, Reimer and Michaelson [8]. I hope that this much is now clear: ontology is not a mere terminological issue. Contrary to the impression the use of the term definition may have given, it is not mere stipulation; ontology may be seen as an attempt to represent (at least some) aspects or features of the world so that machines can leverage the representation in reasoning.

Logical Underpinnings

The examples I draw on in the subsequent discussion, illustrating how ontology can actually be coded, are in a language called OWL (Web Ontology Language). So I first briefly present the underlying logic, description logic. For the purpose of this entry, the important thing to note is the use of sets, especially subset relations, to express a taxonomy of terms with their meanings, which represents a domain, i.e. an ontology. According to Russell and Norvig, description logic characteristically leverages the definitions of interrelated classes in inference, by telling the membership of a given object and by finding subset relations among the classes; the former is called classification, the latter subsumption [6]. To be sure, the choice of this particular kind of logic does not indicate that ontology somehow cannot be formalized in standard first-order logic. Russell and Norvig note that first-order logic can express the sentences of a description logic; it is rather a matter of efficacy [6]. In particular, description logic is suitable for ontology, according to the authors, because of the associated graphical representation, called semantic networks, used to capture class relationships and their objects [6]. To use an example similar to Russell and Norvig's [6] (here "nodes" and "edges" are as in a graph), a semantic network might have a node "Fido" connected to a node "Dog" by an edge labeled "MemberOf", and the latter connected to the node "Canidae" by the subset relation. Such a construction can in turn be expressed in description logic sentences.
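
To make classification and subsumption concrete, here is a minimal Python sketch (my own, not Russell and Norvig's; a real description logic reasoner works with intensional definitions, whereas this toy version checks extensions, i.e. explicit member sets):

# Classes as sets of members, mirroring the semantic network example.
classes = {
    "Dog":     {"Fido", "Rex"},
    "Cat":     {"Tom"},
    "Canidae": {"Fido", "Rex", "Wolfie"},
}

def classify(individual):
    # Classification: find every class the individual is a member of.
    return {c for c, members in classes.items() if individual in members}

def subsumes(general, specific):
    # Subsumption: is the specific class a subset of the general one?
    return classes[specific] <= classes[general]

print(classify("Fido"))            # {'Dog', 'Canidae'}
print(subsumes("Canidae", "Dog"))  # True: Dog is subsumed by Canidae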

Translation into Computer Code

Please note that without a very basic understanding of set theory, my explanation here will not make sense; for basic set-theoretic notions, please follow the link I provided in the "Builds on" section. In particular, I use the term relation in the set-theoretic sense here. What follows is a formalization of the qualitative description of ontology, its classes, and their relations given in the section "Characteristics of Ontology".

Now, to move on to a description of how ontology might be coded: although I follow the exposition by Poole and Mackworth [5], using their example in OWL, I present it in terms of sets in order to make the connection to the preceding paragraph. The authors motivate the import of formal languages by citing the need for "meanings that allow a computer to do some inference" [5]. Specifically, they explain that the ontology of a given domain is expressed in OWL using three constructs: individuals, classes, and properties. Individuals are the subjects of description, i.e. tokens to be predicated of with descriptions (where the term "description" is clarified below in connection with properties). Individuals that satisfy given set memberships constitute classes. Properties are descriptions in terms of relations; for example, an object property associates an individual with another individual, while a datatype property associates an individual with a data value, such as a street name of type string [5].

Assuming that a general class of buildings has already been defined in the ontology, with residential buildings as a subclass, Poole and Mackworth illustrate how these basic ideas can be used to code an ontology, with a hypothetical definition of apartment buildings as a special type of residential building characterized by several rental units [5]. The following code snippet is from the authors (prefixes delimited by a colon indicate built-in classes or predicates) [5]. As is apparent in the code, taking the intersection of the specific, pertinent sets defines apartment buildings in the ontology. Remember that in set theory, relations (and so functions) are sets of tuples. In particular, to rephrase the authors' explanation of the OWL syntax and semantics in terms of sets, the user-defined property numberOfUnits is a set of pairs whose first elements are individuals of the type ResidentialBuilding, i.e. its domain, and whose second elements are drawn from the range {one, two, moreThanTwo}; the particular set used in the definition of apartment buildings is the subset of ResidentialBuilding obtained by fixing the value of the range to moreThanTwo [5]. So, in set notation, ApartmentBuilding = {x ∈ ResidentialBuilding : (x, moreThanTwo) ∈ numberOfUnits} ∩ {x ∈ ResidentialBuilding : (x, rental) ∈ ownership}.

:ApartmentBuilding
    owl:EquivalentClasses
        owl:ObjectIntersectionOf(
            owl:ObjectHasValue(:numberOfUnits :moreThanTwo)
            owl:ObjectHasValue(:ownership :rental)
           :ResidentialBuilding).

:numberOfUnits rdf:type owl:FunctionalObjectProperty;
               rdfs:domain :ResidentialBuilding;
               rdfs:range owl:OneOf(:one :two :moreThanTwo).

I am not familiar with OWL, so I will not try to express it in actual code; but presumably other class relations, such as that between dogs and their family Canidae used as an example above in connection with semantic networks, can be expressed in a similar fashion.
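
To connect the OWL snippet back to the set notation above, here is a minimal Python sketch (my own, not from Poole and Mackworth; the individuals are hypothetical) of the same definition: apartment buildings as the intersection of subsets of residential buildings picked out by property values.

residential_building = {"b1", "b2", "b3", "b4"}   # hypothetical individuals

# Properties as relations, i.e. sets of (individual, value) pairs.
number_of_units = {("b1", "moreThanTwo"), ("b2", "two"), ("b3", "moreThanTwo")}
ownership       = {("b1", "rental"), ("b3", "ownerOccupied"), ("b4", "rental")}

def has_value(relation, value, universe):
    # The subset of the universe whose property takes the given value,
    # the set-theoretic counterpart of owl:ObjectHasValue.
    return {x for x in universe if (x, value) in relation}

apartment_building = (
    has_value(number_of_units, "moreThanTwo", residential_building)
    & has_value(ownership, "rental", residential_building)
)
print(apartment_building)   # {'b1'}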

Connection of Ontology to Probabilistic Models

In this section, I briefly describe one way that ontology can be related to probabilistic models, using works of Poole and colleagues that concern the use of ontology to augment scientific research, in particular in geology, by providing a systematic framework for data and theories [4, 9]. Please note that the same remark I made when explaining the formalization of ontology in logical systems applies here: you need some basic set theory to understand the exposition.

From Logic to Probability

To comment on the need for this transition: the preceding discussion of ontology is based on logic. At least insofar as the use of first-order logic is concerned, where a universally quantified sentence predicates a given property of everything in the domain of the quantifier, Russell and Norvig point out the problems that uncertainty and exceptions cause for universal statements [6]. So, if ontology were limited to use in logic-based intelligent systems, its relevance might also be limited. However, ontology can be utilized in probabilistic models as well.
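
The contrast can be shown in a few lines (my own toy illustration): a single exception falsifies a universal statement outright, whereas a probabilistic statement simply absorbs it as uncertainty.

# Does each bird fly? One penguin is enough to break the universal claim.
birds = {"tweety": True, "polly": True, "pingu": False}

print(all(birds.values()))                        # False: 'all birds fly' fails
print(round(sum(birds.values()) / len(birds), 2)) # 0.67: a degree of belief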

One Possible Approach

Poole connects ontology to probabilistic models through random variables, to "[build] on the advances in both [logic and probability based studies in artificial intelligence]" [4]. My presentation here is only for the purpose of giving a rough idea of how ontology might lend itself to probabilistic interpretation. As such, it neither describes his approach in its entirety (in particular, I skip the discussion of conditional probability) nor includes all details and subtleties. Interested readers should consult the original papers [4, 9], especially the former, for a more rigorous and precise discussion.

The ontology design that forms the basis of his approach is the "multidimensional design pattern", in which "the subclass relation is just derived from more primitive constructs" [4]. Actually, the definition of apartment buildings in OWL given in the preceding section is an example of this design pattern: as you may remember, it is the properties, or relations, that are used to pick out those individuals of the superset ResidentialBuilding with a certain number of units and a specific kind of ownership, and thereby to construct the subclass ApartmentBuilding. To use another example he gives, if the class Rock has the properties genesis, composition, and texture, then the subclass Granite might have the values "igneous", "felsic", and "coarse" for those properties, respectively [9] -- similar to the way apartment buildings are defined as residential buildings with specific property values. More generally, as Poole says, in this design pattern the properties associated with a superset are construed as dimensions, and subclasses of the set are those that assume different values of the properties of the shared parent set [4]. Now, the reason that the notion of dimension is built into this representation is that, in this approach, he explains, random variables may be defined for a given individual, one for each of its associated properties [4]. In particular, very roughly speaking, the possible worlds that form the basis of probability can be obtained by way of assignments of values to ⟨individual, property⟩ pairs, i.e. by way of ⟨individual, property, value⟩ triples [4]. For instance, continuing the granite example, for an individual rock r the prior probability of being granite is given by the probability distribution over the above-mentioned dimensions of the class Rock, by way of the random variables corresponding to those three properties and specific to that individual [9]. The values of the triples in the possible worlds then give the realizations of those random variables, distributed in a certain way. When this observation is combined with the interpretation of a subclass as the intersection of the pertinent sets obtained from properties or relations -- remember how I described the definition of apartment buildings in the last section -- such a prior probability would in general be of the form P(Granite(r)) = P({ω ∈ Ω : genesis(r) = igneous in ω} ∩ {ω ∈ Ω : composition(r) = felsic in ω} ∩ {ω ∈ Ω : texture(r) = coarse in ω}), where Ω denotes the set of possible worlds.
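
The following Python sketch (my own rough rendering of the idea described above, not Poole's actual formalism; the distributions are made up, and the dimensions are assumed independent since I skip conditional probability) enumerates possible worlds as value assignments to the three dimensions of an individual rock and sums the weight of the worlds lying in the intersection that defines granite.

import itertools

# Hypothetical per-dimension distributions for one individual rock r.
dims = {
    "genesis":     {"igneous": 0.4, "sedimentary": 0.6},
    "composition": {"felsic": 0.5, "mafic": 0.5},
    "texture":     {"coarse": 0.3, "fine": 0.7},
}
granite = {"genesis": "igneous", "composition": "felsic", "texture": "coarse"}

names = list(dims)
p_granite = 0.0
# Each possible world assigns one value to every dimension; under the
# independence assumption its probability is the product of value weights.
for values in itertools.product(*(dims[n] for n in names)):
    world = dict(zip(names, values))
    p_world = 1.0
    for n in names:
        p_world *= dims[n][world[n]]
    if all(world[n] == granite[n] for n in names):  # world is in the intersection
        p_granite += p_world

print(round(p_granite, 3))   # 0.06 = 0.4 * 0.5 * 0.3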

Annotated Bibliography

[1] Gruber T.R. Toward Principles for the Design of Ontologies Used for Knowledge Sharing. International Journal of Human-Computer Studies, Vol. 43, Issues 5-6 (special issue on the role of formal ontology in information technology), pp. 907-928, Nov/Dec 1995.

https://www.sciencedirect.com/science/article/pii/S1071581985710816

[2] Smith B. Ontology. In: The Blackwell Guide to the Philosophy of Computing and Information, Luciano Floridi (ed.), pp. 155-166. Blackwell, Oxford, 2003. https://philpapers.org/archive/SMIO-11.pdf

[3] Uschold M., Gruninger M. Ontologies and Semantics for Seamless Connectivity. SIGMOD Record, Vol. 33, No. 4, pp. 58-64, December 2004.

https://dl.acm.org/citation.cfm?id=1041420

[4] Poole D.L., Smyth C., Sharma R. Ontology Design for Scientific Theories That Make Probabilistic Predictions. IEEE Intelligent Systems, Special Issue on Semantic Scientific Knowledge Integration, pp. 27-36, Jan/Feb 2009.

https://www.cs.ubc.ca/~poole/papers/PooleSmythSharma2009.pdf

[5] Poole D.L., Mackworth A.K. Artificial Intelligence: Foundations of Computational Agents. Cambridge University Press, New York, 2011.

https://artint.info/aifca1e.html

[6] Russell S., Norvig P. Artificial Intelligence: A Modern Approach. Prentice Hall, New Jersey, 2010.

[7] David M. The Correspondence Theory of Truth. The Stanford Encyclopedia of Philosophy (Fall 2016 Edition), Edward N. Zalta (ed.).

https://plato.stanford.edu/archives/fall2016/entries/truth-correspondence/

[8] Reimer M., Michaelson E. Reference. The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.).

https://plato.stanford.edu/archives/win2018/entries/reference/

[9] Poole D.L., Smyth C., Sharma R. Semantic Science: Ontologies, Data and Probabilistic Theories. In: Uncertainty Reasoning for the Semantic Web I, Paulo C.G. da Costa, Claudia d'Amato, Nicola Fanizzi, Kathryn B. Laskey, Ken Laskey, Thomas Lukasiewicz, Matthias Nickles, and Mike Pool (eds.), pp. 26-40. Springer-Verlag, Berlin/Heidelberg, 2008.

https://www.cs.ubc.ca/~poole/papers/SemSciChapter2008.pdf

To Add

  • Russell and Norvig mention modal logic -- that is, logic for modal notions such as necessity and possibility -- in connection with their exposition of ontology in terms of first-order logic. To the extent that modal logic can be expressed in terms of set-theoretic devices, I suppose that computer code can be written to do modal reasoning. Poole also mentions possible-world semantics in [4], which sounds very much like possible worlds as in the related metaphysical discussion. I wonder if probability -- after all, it has a foundation in measure theory, which is a language of sets -- can somehow be related to possible worlds as in modal logic. It might not; probability and possibility are different notions.
  • I am not familiar with coding in OWL at this point; it would be nice if I could add more code examples.
  • The description of how ontology is actually realized in logical and probability-based systems lacks depth; marking aside, it is hopefully to be improved over time by bringing in a wider range of perspectives, as much as possible from primary sources as opposed to textbooks, and by analyzing and synthesizing them.