Cognitive Robotics
Primarily referencing the Cognitive Robotics chapter by Levesque and Lakemeyer[1].
Principal Author: Alistair Wick
Abstract
Cognitive robotics represents an attempt to define and tackle high-level problems of robot (or agent) control, in worlds of partially unknown and changing composition. It ultimately deals with the prescription of actions which a robot can attempt in pursuit of some goal(s), using a framework which proponents hope will generalize to all application domains. Broadly, Cognitive Robotics attempts to "develop an understanding of the relationship between the knowledge, the perception, and the action"[1] of intelligent agents. Here I will discuss the Situation Calculus (an extension of predicate logic) and its applications in Cognitive Robotics.
Builds on
Related Pages
Content
Introduction
Cognition is an internal process in a thinking agent in which the agent acquires knowledge and, more importantly, understanding of the world in which it finds itself[2]. This is inherently a learning process - knowledge is gathered and refined by the agent's observation of, and interaction with, the world. The nature of understanding is difficult to pin down (it is often the domain of cognitive philosophers[3]), but to create robots capable of working in a dynamic and changing world, it is desirable to have those robots understand the world, at least at some basic level. Cognitive Robotics is the field tasked with solving this problem: creating robots that can observe, learn, and reason about their environments.
Situation Calculus
This section draws primarily from the Situation Calculus chapter of the Handbook of Knowledge Representation[4].
Introduction
The situation calculus is a formalism of state - a logical language used to represent a system (like a robot's environment) through cumulative changes to the system. Three principal components make up the calculus: situations, actions, and fluents. A situation is a state of the system at a moment in time; actions are changes which can be instigated by agents in the system, moving the system from one situation to the next. Fluents are situation-dependent properties of the system: relations and functions whose values may change as actions are performed. Here I will discuss Reiter's interpretation [5]: that a situation is logically the same as the history of actions taken since an initial situation $S_0$.
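Under this reading, a situation is just a term that records its own history: if performing action $a$ in situation $s$ yields the new situation $do(a, s)$ (a function defined below), then the situation reached by performing $a_1$, then $a_2$, then $a_3$ from the initial situation is the nested term

$do(a_3, do(a_2, do(a_1, S_0)))$

so the entire sequence of actions taken can be read off the term itself.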
Fluents
Broadly, two types of fluent exist - relational and functional fluents:
Relational Fluents take a situation and return a true or false value. For example, in an aerial drone display, each drone may be landed on the ground, or in flight. We could describe this using the predicate $Landed(d, s)$, which is true if drone $d$ is safely landed in situation $s$. To allow us to quantify over relational fluents (to use expressions involving "exists" and "for all") and to investigate causal relationships between fluents, we can define a binary predicate $Holds(p, s)$, which returns the truth value of the predicate $p$ in situation $s$. In this formulation, $Landed(d, s)$ is simply shorthand for $Holds(Landed(d), s)$ — note that the inner predicate takes only a single argument, here the drone under consideration.
Functional Fluents act in much the same way as relational fluents, but return a non-boolean value such as an integer. This might include properties like the absolute position or battery level of a drone in our aerial display.
While the specifics of the calculus will depend on the domain, we can define some domain-independent predicates (a code sketch follows the list):
- $Holds(p, s)$ - whether relational fluent p holds in situation s.
- $do(a, s)$ - yields the situation resulting from action a being carried out in situation s.
- $Poss(a, s)$ - whether action a is possible in situation s.
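To make these pieces concrete, here is a minimal Python sketch, not taken from the chapter: the names Situation, do, holds and poss are illustrative stand-ins, a situation is Reiter's action history, a relational fluent is a function of that history, and the precondition used for poss is an arbitrary toy choice.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Action = Tuple[str, ...]              # an action term, e.g. ("pickup", "A")

@dataclass(frozen=True)
class Situation:
    """A Reiter-style situation: the history of actions applied since S0."""
    history: Tuple[Action, ...] = ()

S0 = Situation()                      # the initial (empty-history) situation

def do(a: Action, s: Situation) -> Situation:
    """do(a, s): the situation resulting from performing action a in s."""
    return Situation(s.history + (a,))

# A relational fluent is modelled as a function from a situation to a
# truth value, so Holds(p, s) is just function application.
Fluent = Callable[[Situation], bool]

def holds(p: Fluent, s: Situation) -> bool:
    return p(s)

# Poss(a, s) is domain-specific; as a toy precondition, forbid repeating
# the exact same action twice in one history.
def poss(a: Action, s: Situation) -> bool:
    return a not in s.history

s1 = do(("pickup", "A"), S0)
print(poss(("pickup", "A"), s1))      # False under this toy precondition
```

A real domain would replace the toy poss with precondition axioms like those in the block-stacking example below.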
Block-Stacking Example
The example given in the book chapter, which I will repeat here, is that of a block-stacking robot. In this system, blocks may be placed on a surface (the table) or stacked on top of one another. We can formalize this system so that the robot can know which blocks can be picked up, and where they can be placed, all by tracking changes from the initial state. The combination of actions and fluents in this (relatively) simple toy example is easy to follow: the robot has an action $pickup(x)$ to pick up a block $x$, which when performed with $do(pickup(x), s)$ will yield a new situation in which the relational fluent $Holding(x)$ is true—the robot has picked up the block, and is now holding it. Formalizing this requires a few additional fluents:
- $Holding(x, s)$ is true when x is held by the robot
- $HandEmpty(s)$ is true when no block is being held (note: false if $Holding(x, s)$ is true for any $x$)
- $OnTable(x, s)$ is true when x is on the table (note: mutually exclusive with $Holding(x, s)$)
- $Clear(x, s)$ is true when x is the top block in a stack, allowing it to be picked up
We can then describe the pickup action as follows, with first-order predicate logic:
- $Poss(pickup(x), s) \equiv OnTable(x, s) \wedge Clear(x, s) \wedge HandEmpty(s)$ — picking up a block x is possible if and only if it is on the table, clear, and the robot's hand is empty
- $Poss(pickup(x), s) \supset Holding(x, do(pickup(x), s))$ — robot is holding x (and no other blocks) if we pick it up
- $Poss(pickup(x), s) \supset \neg HandEmpty(do(pickup(x), s))$ — robot's hand is not empty once we pick up a block (in the full example we would define a corollary for putting blocks down)
- $Poss(pickup(x), s) \supset \neg Clear(x, do(pickup(x), s))$ — x is no longer clear after being picked up
- $Poss(pickup(x), s) \supset \neg OnTable(x, do(pickup(x), s))$ — x is no longer on the table after being picked up
Clearly this is only a subset of the sentences of the full example, but it serves to illustrate the fundamentals of the situation calculus — situations are molded through the application of actions, which are described by their effects on the fluents.
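The pickup axioms can be read almost directly as code. The following is a hypothetical Python rendering of them; the tuple-based fluent names and the decision to summarize a situation by the set of fluents that hold in it (rather than by its action history) are simplifications made for this sketch.

```python
def poss_pickup(x, fluents):
    """Poss(pickup(x), s): x is on the table, clear, and the hand is empty."""
    return (("OnTable", x) in fluents
            and ("Clear", x) in fluents
            and ("HandEmpty",) in fluents)

def do_pickup(x, fluents):
    """Fluents holding in do(pickup(x), s), per the effect axioms above."""
    if not poss_pickup(x, fluents):
        raise ValueError(f"pickup({x}) is not possible here")
    result = set(fluents)
    result.add(("Holding", x))       # Holding(x) becomes true
    result.discard(("HandEmpty",))   # the hand is no longer empty
    result.discard(("Clear", x))     # x is no longer clear
    result.discard(("OnTable", x))   # x is no longer on the table
    return result

# Initial situation: blocks A and B on the table, hand empty.
s0 = {("OnTable", "A"), ("Clear", "A"),
      ("OnTable", "B"), ("Clear", "B"),
      ("HandEmpty",)}

s1 = do_pickup("A", s0)
print(("Holding", "A") in s1)   # True
print(poss_pickup("B", s1))     # False: the hand is no longer empty
```

Running the sketch picks up block A from the initial situation and confirms that a second pickup is no longer possible, mirroring the precondition axiom.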
Situation Calculus in Cognitive Robotics
Introduction
The Cognitive Robotics chapter[1] by H. Levesque and G. Lakemeyer discusses Cognitive Robotics with a focus on the use and modification of the situation calculus for robotics tasks.
Events
The authors note that the simple situation calculus described above models actions that change the world discretely, instantaneously and deterministically. This may be suitable for toy domains, but falls flat in more realistic robotics settings. Various approaches can be applied and combined to bring the situation calculus up to a usable standard. We may, for instance, define actions not as instantaneous changes but as changes which occur over a non-negligible timespan, with discrete start and end events:

$PickingUp(x, t, s) \equiv \exists t_s, t_f \, [\, started(x, t_s, s) \wedge finished(x, t_f, s) \wedge t_s \le t \le t_f \,]$
That is, the robot is picking up block x at time t after $t_s$ (the time at which it started picking it up), and before $t_f$ (the time at which it finished picking it up), where it cannot finish picking up a block if it has not started picking it up (if the $PickingUp$ fluent does not hold). The sentence, in short, enforces time ordering of the start and end of the pickup action, though it does not specify how long the action takes from start to finish. Specifying these start and end times may be difficult, inconvenient or practically impossible, so an alternative is to define fluents as linear functions of time—a drone's location might be defined as $pos(t, t_0, v_0)$, a function taking the current and starting times, and the drone's initial velocity.
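As a sketch of this second option, a drone's position given as a linear function of time might look as follows; the function name, the extra initial-position parameter, the constant-velocity assumption and the units are all assumptions made for this example.

```python
def position(t: float, t_start: float, p_start: float, v0: float) -> float:
    """A drone's 1-D position as a linear function of time: motion began
    at t_start from p_start with constant velocity v0 (assumed units:
    seconds and metres)."""
    return p_start + v0 * (t - t_start)

# The drone started climbing at t = 2.0 s from altitude 0.0 m at 1.5 m/s:
print(position(t=6.0, t_start=2.0, p_start=0.0, v0=1.5))   # 6.0
```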
The authors also comment on the deterministic nature of the calculus, where the assumption is that the outcome of a series of actions can be known beforehand. Robots do not exist in deterministic worlds, so this assumption is a problematic one; solutions involve, for example, Reiter's stochastic variant of the calculus, where possible deterministic actions are considered to be randomly selected by some unknown probability distribution. The selection is left to nature (or, as may be the case, the relevant simulation environment), and the result observed from the world—all that is needed is to enumerate the possible outcomes of a given action.
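A toy sketch of this stochastic view, with hypothetical outcome names and made-up probabilities standing in for nature's unknown distribution:

```python
import random

# Possible deterministic outcomes of the stochastic action pickup(x),
# with illustrative probabilities standing in for nature's (unknown)
# distribution. The modeller only enumerates the outcomes; which one
# occurs is observed after the fact, not predicted.
PICKUP_OUTCOMES = [
    ("pickup_succeeds", 0.9),   # the block ends up held
    ("pickup_slips",    0.1),   # the block slips and stays on the table
]

def nature_selects(outcomes):
    """Nature (or the simulator) picks one deterministic outcome."""
    names, weights = zip(*outcomes)
    return random.choices(names, weights=weights, k=1)[0]

print("observed outcome:", nature_selects(PICKUP_OUTCOMES))
```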
Sensing
Sensing the environment is critical to a robot's ability to learn about that environment, and so it must be represented in some way in the situation calculus. The main approach the authors discuss here is to introduce another special fluent, $SF(a, s)$, along with axioms for the relevant actions $a$ and situations $s$ which tie the internal representation (fluents of the situation) of whatever was sensed to the truth value of $SF$. Their example considers a robot which can sense the color of an object, perhaps with cameras and a conventional image processing system. Sensing that an object is red can be used to update the internal representation of the object as follows:
- $SF(senseRed(x), s) \equiv Red(x, s)$ — the object x is red if we sense it to be red
The $SF$ predicates can then be used, somewhat confusingly, to "define what the robot learns" from a history of sensing: a vector of actions $\vec{a}$ performed from situation s, together with the binary vector of sensing results $\vec{x}$ they returned, determines a conjunction of $SF$ literals (asserted where the result was 1, negated where it was 0) expressing exactly what the sensing outcomes have told the robot.
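A minimal sketch of the sensing idea, where a hard-coded dictionary stands in for what the robot's camera would report and senseRed is the hypothetical sensing action from the example above:

```python
# The hidden real world, standing in for what the camera would report.
REAL_WORLD_IS_RED = {"box": True, "ball": False}

def SF_senseRed(x) -> bool:
    """Sensing result of senseRed(x): true exactly when x is in fact red."""
    return REAL_WORLD_IS_RED[x]

# The robot's internal representation, updated from sensing results via
# the sensed fluent axiom SF(senseRed(x), s) <-> Red(x, s).
believed_red = {}
for obj in ("box", "ball"):
    believed_red[obj] = SF_senseRed(obj)

print(believed_red)   # {'box': True, 'ball': False}
```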
Knowledge
The authors argue that knowledge of the environment, while implicitly modeled by the situation calculus, should ideally be made explicit. This allows the robot to account for areas in which it lacks knowledge, enabling rational decisions about when and where to employ sensing, and it allows the knowledge of other actors (such as humans the robot is interacting with) to be modeled in a multi-agent scenario. They propose modeling knowledge using a "possible world" approach, where situations are cognitively linked by introducing another special fluent $K(s', s)$, which states that "s' is epistemically accessible from s". We can then encode the retention of knowledge across situations, with $Knows(\phi, s)$ stating that $\phi$ is true in all situations accessible from s:

$Knows(\phi, s) \doteq \forall s' \, (K(s', s) \supset \phi[s'])$
They further specify a "successor state" axiom which can prune the tree of possible worlds, essentially removing those "accessible" situations which are invalidated by a sensory input, thus allowing the robot's knowledge to change over time.
This representation of knowledge has the effect of expanding the number of starting states: rather than one situation tree rooted at $S_0$, a "forest of trees" exists, each with a different initial situation, to be selected by the refinement of the agent's knowledge.
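The possible-worlds reading can be sketched as follows; the list of candidate worlds, the knows helper and the filtering step are illustrative stand-ins for the K fluent, the Knows abbreviation and the pruning successor-state axiom described above.

```python
# Candidate epistemically accessible worlds: the robot does not know
# whether block A is on the table.
candidates = [
    {"OnTable(A)": True},
    {"OnTable(A)": False},
]

def knows(phi, worlds) -> bool:
    """Knows(phi): phi holds in every accessible candidate world."""
    return all(phi(w) for w in worlds)

on_table_A = lambda w: w["OnTable(A)"]
print(knows(on_table_A, candidates))   # False: the robot is unsure

# A sensing action reports that A is on the table; prune the candidates
# that contradict the observed result, as the successor-state axiom does.
sensed_result = True
candidates = [w for w in candidates if w["OnTable(A)"] == sensed_result]
print(knows(on_table_A, candidates))   # True: the robot now knows
```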
Reasoning
Naturally, simply representing a robot's knowledge is not sufficient — decisions must be made with that knowledge for the robot to function. Temporal projection, which the authors refer to simply as "projection", is the task of determining whether some condition will hold after a given series of actions has been taken from an initial situation.
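Under the progression-style shortcut used in the earlier blocks-world sketch, projection amounts to applying the actions' effects in order and then evaluating the condition in the resulting state; the following is a hypothetical sketch rather than the chapter's formulation, which would typically answer projection queries by regression or progression over the axioms.

```python
def apply_pickup(x, fluents):
    """Effects of pickup(x) on the set of fluents that hold (as before)."""
    result = set(fluents)
    result.add(("Holding", x))
    result -= {("HandEmpty",), ("Clear", x), ("OnTable", x)}
    return result

def project(condition, actions, fluents):
    """Does `condition` hold after performing `actions` from `fluents`?"""
    for name, x in actions:
        if name != "pickup":
            raise ValueError(f"unknown action: {name}")
        fluents = apply_pickup(x, fluents)
    return condition(fluents)

s0 = {("OnTable", "A"), ("Clear", "A"), ("HandEmpty",)}
holding_A = lambda f: ("Holding", "A") in f
print(project(holding_A, [("pickup", "A")], s0))   # True
```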
Annotated Bibliography
- [1] Levesque, H. and Lakemeyer, G. Cognitive Robotics. Handbook of Knowledge Representation, Elsevier, 2008.
- [2] https://en.oxforddictionaries.com/definition/cognition
- [3] Clark, Andy, and Rick Grush. "Towards a cognitive robotics." Adaptive Behavior 7.1 (1999): 5-16.
- [4] Lin, F. Situation Calculus. Handbook of Knowledge Representation, Elsevier, 2008.
- [5] Reiter, R. Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. MIT Press, 2001.
To Add
Situation calculus - the frame, ramification and qualification problems
Explain the sensing construction - what does it mean