# Course:CPSC522/Cognitive Robotics

## Cognitive Robotics

Primarily referencing the Cognitive Robotics chapter of the Handbook of Knowledge Representation[1].

Principal Author: Alistair Wick

## Abstract

Cognitive robotics represents an attempt to define and tackle high-level problems of robot (or agent) control, in worlds of partially unknown and changing composition. It ultimately deals with the prescription of actions which a robot can attempt in pursuit of some goal(s), using a framework which proponents hope will generalize to all application domains. Broadly, Cognitive Robotics attempts to "develop an understanding of the relationship between the knowledge, the perception, and the action"[1] of intelligent agents. Here I will discuss the Situation Calculus (an extension of predicate logic) and its applications in Cognitive Robotics.

## Content

### Introduction

Cognition is an internal process in a thinking agent in which the agent acquires knowledge and, more importantly, understanding of the world in which it finds itself[2]. This is inherently a learning process: knowledge is gathered and refined by the agent's observation of, and interaction with, the world. The nature of understanding is difficult to pin down (it is often the domain of cognitive philosophers[3]), but to create robots capable of working in a dynamic and changing world, it is desirable to have those robots understand the world, at least at some basic level. Cognitive Robotics is the field tasked with solving this problem: creating robots that can observe, learn, and reason about their environments.

### Situation Calculus

This section draws primarily from the Situation Calculus chapter of the Handbook of Knowledge Representation[4].

#### Introduction

The situation calculus is a formalism of state - a logical language used to represent a system (like a robot's environment) through cumulative changes to the system. Three principal components make up the calculus: situations, actions, and fluents. A situation is a state of the system at a moment in time; actions are changes which can be instigated by agents in the system, moving the system from one situation to the next. Fluents are situation-dependent properties of the system: relations or functions whose values may change as actions are performed. Here I will discuss Reiter's interpretation [5]: that a situation is logically the same as the history of actions taken since an initial situation ${\displaystyle s_{0}}$.

#### Fluents

Broadly, two types of fluents exist: relational and functional fluents.

Relational Fluents take a situation ${\displaystyle s}$ and return a true or false value. For example, in an aerial drone display, each drone may be landed on the ground, or in flight. We could describe this using the predicate ${\displaystyle landed(x,s)}$, which is true if drone ${\displaystyle x}$ is safely landed in situation ${\displaystyle s}$. To allow us to quantify over relational fluents (to use expressions involving "exists" and "for all") and to investigate causal relationships between fluents, we can define a binary predicate ${\displaystyle Holds(p,s)}$, which returns the truth value of the predicate ${\displaystyle p}$ in situation ${\displaystyle s}$. In this formulation, ${\displaystyle landed(x,s)}$ is simply shorthand for ${\displaystyle Holds(landed(x),s)}$ — note that the inner predicate takes only a single argument, here the drone under consideration.

Functional Fluents act in much the same way as relational fluents, but return a non-boolean value such as an integer. This might include properties like the absolute position or battery level of a drone in our aerial display.

While the specifics of the calculus will depend on the domain, we can define some domain-independent predicates:

• ${\displaystyle Holds(p,s)}$ — whether relational fluent p holds in situation s.
• ${\displaystyle do(a,s)}$ — yields the situation resulting from action a being carried out in situation s.
• ${\displaystyle Poss(a,s)}$ — whether action a is possible in situation s.
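To make these definitions concrete, here is a minimal Python sketch (my own illustration, not from the chapter) of Reiter's view of situations as action histories; the `landed` drone fluent from above is encoded as a function over that history:

```python
# A situation is just the sequence of actions performed since s0.

S0 = ()  # the initial situation: an empty action history

def do(a, s):
    """Situation resulting from performing action a in situation s."""
    return s + (a,)

def Holds(p, s):
    """Evaluate relational fluent p in situation s."""
    return p(s)

# Example fluent: a drone is landed unless a 'takeoff' action appears
# in its history without a subsequent 'land' action.
def landed(x):
    def fluent(s):
        state = True
        for a in s:
            if a == ("takeoff", x):
                state = False
            elif a == ("land", x):
                state = True
        return state
    return fluent

s1 = do(("takeoff", "drone1"), S0)
s2 = do(("land", "drone1"), s1)
```

Note that evaluating a fluent by replaying the whole history is only one possible strategy; it simply mirrors the idea that a situation *is* its action history.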

#### Block-Stacking Example

The example given in the book chapter, which I will repeat here, is that of a block-stacking robot. In this system, blocks may be placed on a surface (the table) or stacked on top of one another. We can formalize this system so that the robot can know which blocks can be picked up, and where they can be placed, all by tracking changes from the initial state. The combination of actions and fluents in this (relatively) simple toy example is easy to follow: the robot has an action ${\displaystyle pickup(x)}$ to pick up a block ${\displaystyle x}$, which when performed with ${\displaystyle do(pickup(x),s_{0})}$ will yield a new situation ${\displaystyle s'}$ in which the relational fluent ${\displaystyle Holds(holding(x),s')}$ is true—the robot has picked up the block, and is now holding it. Formalizing this requires a few additional fluents:

• ${\displaystyle holding(x)}$ is true when x is held by the robot
• ${\displaystyle handempty}$ is true when no block is being held (note: false if ${\displaystyle holding(x)}$ is true for any ${\displaystyle x}$)
• ${\displaystyle ontable(x)}$ is true when x is on the table (note: mutually exclusive with ${\displaystyle holding(x)}$)
• ${\displaystyle clear(x)}$ is true when x is the top block in a stack, allowing it to be picked up

We can then describe the pickup action as follows, with first-order predicate logic:

• ${\displaystyle Poss(pickup(x),s)\equiv ontable(x,s)\land clear(x,s)\land handempty(s)}$ — picking up a block x is possible if and only if it is on the table, clear, and the robot's hand is empty
• ${\displaystyle holding(u,do(pickup(x),s))\equiv u=x}$ — robot is holding x (and no other blocks) if we pick it up
• ${\displaystyle \neg handempty(do(pickup(x),s))}$ — robot's hand is not empty once we pick up a block (in the full example we would define a corollary for putting blocks down)
• ${\displaystyle clear(u,do(pickup(x),s))\equiv clear(u,s)\land u\neq x}$ — x is no longer clear after being picked up
• ${\displaystyle ontable(u,do(pickup(x),s))\equiv ontable(u,s)\land u\neq x}$ — x is no longer on the table after being picked up

Clearly this is only a subset of the sentences of the full example, but it serves to illustrate the fundamentals of the situation calculus — situations are molded through the application of actions, which are described by their effects on the fluents.
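The pickup axioms above can be sketched in Python. This is a hedged illustration rather than the chapter's formalization: situations are action histories as before, the initial table configuration (blocks A, B, C) is an assumption of mine, and fluents are evaluated by replaying the history rather than by logical regression:

```python
S0 = ()  # initial situation: empty action history

def do(a, s):
    return s + (a,)

def pickup(x):
    return ("pickup", x)

INITIAL_TABLE = {"A", "B", "C"}  # assumed initial configuration

def state_of(s):
    """Replay the action history to recover (blocks on table, block held)."""
    ontable_set = set(INITIAL_TABLE)
    held = None
    for (name, x) in s:
        if name == "pickup":
            ontable_set.discard(x)
            held = x
    return ontable_set, held

def ontable(x, s):
    return x in state_of(s)[0]

def holding(x, s):
    return state_of(s)[1] == x

def handempty(s):
    return state_of(s)[1] is None

def clear(x, s):
    # every block starts directly on the table here, so on-table blocks are clear
    return ontable(x, s)

def Poss(a, s):
    """Precondition axiom: pickup(x) is possible iff x is on the table,
    clear, and the hand is empty."""
    name, x = a
    if name == "pickup":
        return ontable(x, s) and clear(x, s) and handempty(s)
    return False
```

For example, `Poss(pickup("A"), S0)` holds, but after `s1 = do(pickup("A"), S0)` the hand is no longer empty, so no further pickup is possible.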

### Situation Calculus in Cognitive Robotics

#### Introduction

The Cognitive Robotics chapter[1] by H. Levesque and G. Lakemeyer discusses Cognitive Robotics with a focus on the use and modification of the situation calculus for robotics tasks.

#### Events

The authors note that the simple situation calculus described above models actions that change the world discretely, instantaneously and deterministically. This may be suitable for some toy problems, but falls flat in more realistic robotics settings. Various approaches can be applied and combined to bring the situation calculus to a usable standard. We may, for instance, define actions not as instantaneous changes but as changes which occur over a non-negligible timespan, with discrete start and end events:

${\displaystyle pickingup(x,t,do(a,s))\equiv \exists t'(a=startPickup(x,t')\land t'\leq t)\lor (pickingup(x,t,s)\land \neg \exists t'(a=endPickup(x,t')\land t'\leq t))}$

That is, the robot is picking up block x at time t after (the time at which) it started picking it up, and before (the time at which) it finished picking it up, where it cannot finish picking up a block if it has not started picking it up (if the ${\displaystyle pickingup(x)}$ fluent does not hold). The sentence, in short, enforces time ordering of the start and end of the pickup action, though it does not specify how long the action takes from start to finish. Specifying these start and end times may be difficult, inconvenient or practically impossible, so an alternative is to define fluents as linear functions of time—a drone's location might be defined as ${\displaystyle location(t,t_{0},v)}$, a function taking the current and starting times, and the drone's initial velocity.
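A rough Python rendering of the timed successor-state axiom above (the names `startPickup` and `endPickup` follow the formula; evaluating the fluent by recursing over the action history is my own choice):

```python
S0 = ()  # initial situation: empty action history

def do(a, s):
    return s + (a,)

def startPickup(x, t):
    return ("startPickup", x, t)

def endPickup(x, t):
    return ("endPickup", x, t)

def pickingup(x, t, s):
    """True iff the robot is (still) picking up x at time t in situation s,
    following pickingup(x,t,do(a,s)) == started-by-t or (held-before and not
    ended-by-t)."""
    if not s:
        return False  # cannot be picking up anything in s0
    *rest, a = s
    name, y, t_prime = a
    started = name == "startPickup" and y == x and t_prime <= t
    ended = name == "endPickup" and y == x and t_prime <= t
    return started or (pickingup(x, t, tuple(rest)) and not ended)
```

As in the axiom, an `endPickup` event only terminates the fluent for times at or after the event's timestamp.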

The authors also comment on the deterministic nature of the calculus, where the assumption is that the outcome of a series of actions can be known beforehand. Robots do not exist in deterministic worlds, so this assumption is a problematic one; solutions involve, for example, Reiter's stochastic variant of the calculus, where possible deterministic actions are considered to be randomly selected by some unknown probability distribution. The selection is left to nature (or, as may be the case, the relevant simulation environment), and the result observed from the world—all that is needed is to enumerate the possible outcomes of a given action.

#### Sensing

Sensing the environment is critical to a robot's ability to learn about that environment, and so it must be represented in some way in the situation calculus. The main approach the authors discuss here is of introducing another special fluent, ${\displaystyle SF(a,s)}$, and axioms for the relevant ${\displaystyle a}$s and ${\displaystyle s}$s which tie the internal representation (fluents of the situation) of whatever was sensed to the truth value of ${\displaystyle SF}$. Their example considers a robot which can sense the color of an object, perhaps with cameras and a conventional image processing system. Sensing that an object is red can be used to update the internal representation of the object as follows:

${\displaystyle SF(senseRed(x),s)\equiv Color(x,red,s)}$ — the object x is red if we sense it to be red

The ${\displaystyle SF}$ predicates can then be used, somewhat confusingly, to "define what the robot learns" by taking a vector of actions ${\displaystyle {\vec {a}}=a_{1},a_{2},\cdots ,a_{n}}$ in situation s and receiving some binary vector of results ${\displaystyle {\vec {r}}=r_{1},\cdots ,r_{n}}$:

${\displaystyle Sensed(\langle \rangle ,\langle \rangle ,s):=True;}$
${\displaystyle Sensed({\vec {a}}\cdot A,{\vec {r}}\cdot 1,s):=SF(A,do({\vec {a}},s))\land Sensed({\vec {a}},{\vec {r}},s);}$
${\displaystyle Sensed({\vec {a}}\cdot A,{\vec {r}}\cdot 0,s):=\neg SF(A,do({\vec {a}},s))\land Sensed({\vec {a}},{\vec {r}},s).}$
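The `Sensed` recursion above can be sketched directly. In this hedged illustration `SF` is passed in as an ordinary function, and the toy axiom that block B1 senses as red is purely my own example:

```python
def do_all(actions, s):
    """Perform a sequence of actions from situation s (do extended to vectors)."""
    return s + tuple(actions)

def Sensed(actions, results, s, SF):
    """True iff performing `actions` from s yields the binary sensing
    results `results`, following the three defining clauses above."""
    if not actions:
        return True  # Sensed(<>, <>, s) := True
    *a_rest, A = actions
    *r_rest, r = results
    sf = SF(A, do_all(a_rest, s))
    # r = 1 requires SF to hold; r = 0 requires it not to hold
    return (sf if r == 1 else not sf) and Sensed(a_rest, r_rest, s, SF)

# Toy world in which block "B1" really is red:
def SF(a, s):
    return a == ("senseRed", "B1")
```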

#### Knowledge

The authors argue that knowledge of the environment, while implicitly modeled by the situation calculus, should ideally be made explicit. This allows accounting for areas in which a robot lacks knowledge, enabling rational decisions about when and where to employ sensing, and enables modeling the knowledge of other actors (such as humans the robot is interacting with) in a multi-agent scenario. They propose modeling knowledge using a "possible world" approach, where situations are cognitively linked by introducing another special fluent ${\displaystyle K(s',s)}$, which states that "s' is epistemically accessible from s". We can then encode the retention of knowledge across situations, with ${\displaystyle Knows(\Phi ,s)}$ stating that ${\displaystyle \Phi }$ is true in all situations accessible from s:

${\displaystyle Knows(\Phi ,s):=\forall s'.K(s',s)\supset \Phi [s'].}$

They further specify a "successor state" axiom which can prune the tree of possible worlds, essentially removing those "accessible" situations which are invalidated by a sensory input, thus allowing the robot's knowledge to change over time.

This representation of knowledge has the effect of expanding the number of starting states: rather than one situation tree rooted at ${\displaystyle s_{0}}$, a "forest of trees" exists, each with a different initial situation, to be selected by the refinement of the agent's knowledge.
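A minimal possible-worlds sketch of `Knows` (my own toy encoding, not the chapter's: worlds are plain dictionaries of fluent values, and `K` returns the set of worlds the agent considers accessible):

```python
def Knows(phi, s, K):
    """phi is known in s iff phi holds in every K-accessible situation."""
    return all(phi(s_prime) for s_prime in K(s))

# Example: the agent cannot distinguish two worlds that differ only in a
# block's colour, so it knows the block has *a* colour, but not which one.
worlds = [{"color": "red"}, {"color": "blue"}]

def K(s):
    return worlds  # both worlds accessible from s

s = worlds[0]  # the actual world happens to be the red one
knows_red = Knows(lambda w: w["color"] == "red", s, K)
knows_colored = Knows(lambda w: "color" in w, s, K)
```

Sensing the colour would correspond to pruning `worlds` down to those consistent with the sensor reading, after which `knows_red` would become true.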

#### Reasoning

Naturally, simply representing a robot's knowledge is not sufficient — decisions must be made with that knowledge for a robot to function. Temporal projection, which the authors refer to simply as "projection", is the task of determining whether some condition will hold after a given series of actions has been performed from an initial state.
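As a sketch, projection can be answered naively by forward simulation: replay the actions from the initial situation and evaluate the condition in the resulting one. The `holding` fluent below is a toy assumption of mine, and the chapter's regression-based techniques are far more practical than this replay:

```python
def do(a, s):
    return s + (a,)

def projection(condition, actions, s0):
    """Does `condition` hold after performing `actions` from s0?"""
    s = s0
    for a in actions:
        s = do(a, s)
    return condition(s)

# Toy fluent: holding(x) holds iff the most recent pickup was of x.
def holding(x):
    def fluent(s):
        picked = [b for (name, b) in s if name == "pickup"]
        return bool(picked) and picked[-1] == x
    return fluent

result = projection(holding("A"), [("pickup", "A")], ())
```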

## Annotated Bibliography

1. Levesque, H. and Lakemeyer, G. "Cognitive Robotics." Handbook of Knowledge Representation, Elsevier, 2008.
2. https://en.oxforddictionaries.com/definition/cognition
3. Clark, Andy, and Rick Grush. "Towards a cognitive robotics." Adaptive Behavior 7.1 (1999): 5-16.
4. Lin, F. "Situation Calculus." Handbook of Knowledge Representation, Elsevier, 2008.
5. Reiter, R. Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. MIT Press, 2001.