# Course:CPSC522/Control Theory

## Title

Control theory deals with how a system can intelligently perceive, reason, and behave in an environment. It considers the input to a system and the feedback from the environment, and determines how the input should be adjusted to achieve the desired goal.

Principal Author: Dandan Wang
Collaborators: Yu Yan, Yan Zhao

## Abstract

Control theory is the glue that ties the engineering fields together (e.g. communication engineering, mechanical engineering, and civil engineering). In the Artificial Intelligence field, a control system is the core of an intelligent agent and determines how intelligently it can behave. Control theory covers a very wide scope of knowledge, from the choice of model to the choice of control strategy. This page covers the basics of control theory, with practical applications and examples to aid understanding.

### Builds on

Control theory is an important mechanism related to fields such as Electronic Engineering, Mechanical Engineering, and Civil Engineering, and it is also a core concept in Artificial Intelligence. This page provides some basics of control theory and focuses on its practical application in Artificial Intelligence.

### More general than

Hierarchical Control is one of the major control techniques widely applied in the Artificial Intelligence field.

## Introduction

### Control System

A control system is a mechanism that alters the future state (i.e. expected output) of a system. Control theory is the body of strategies that deals with how outcomes are generated and how to find the appropriate input. All control systems have two basic parts: the plant (i.e. the system to be controlled) and the input into the plant. There are two types of control systems:

*Figure: open loop control*

• Open Loop Control - If a control system has only the two basic parts (i.e. plant and input) mentioned previously, then the control action is independent of the output; this is an open loop control system. For example, a dish washer is an open loop control system. The goal of the machine is to clean dishes, which is also its output. Regardless of how clean the dishes actually are, the dish washer works under the instruction of its timer: when the timer runs out, the machine stops, and it is a common experience that some dishes are still not entirely clean. In the other extreme case, even if all the dishes put into the machine are already clean, the dish washer will still run when the start button is pressed.
*Figure: closed loop control*
• Closed Loop Control (aka feedback control, active control) - Compared to open loop control, a closed loop control system measures the system output with a sensor and compares the output signal with the reference (i.e. desired signal). An error term is generated and sent through the controller, which converts the error into the input of the system. How does closed loop control work in practice? Let's take the dish washer as an example again. To design a closed-loop controlled dish washer, we need a sensor to measure the cleanness of the dishes in the machine. The desired cleanness of the plates can be set by the user through a menu; this desired signal (i.e. cleanness level) is compared with the measured cleanness level, and an error signal is generated and sent to the controller, which decides whether to continue running or shut off the dish washer.
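The measure, compare, act loop of the closed-loop dish washer can be sketched in a few lines of code. Everything here is hypothetical (the function names, the sensor model, the 10%-per-cycle cleaning rate) and serves only to illustrate feedback control:

```python
# Minimal sketch of the closed-loop dish washer described above.
# The sensor model and all numbers are hypothetical, chosen only to
# illustrate the measure -> compare -> act feedback loop.

def closed_loop_wash(desired_cleanness, measure, max_cycles=20):
    """Run wash cycles until the measured cleanness reaches the reference."""
    cycles = 0
    while cycles < max_cycles:
        actual = measure(cycles)             # sensor: measure cleanness
        error = desired_cleanness - actual   # compare with the reference
        if error <= 0:                       # goal reached: shut off
            break
        cycles += 1                          # otherwise: keep washing
    return cycles

# Hypothetical sensor model: each cycle removes 10% of the remaining dirt.
cleanness = lambda cycles: 1.0 - 0.9 ** cycles
print(closed_loop_wash(0.8, cleanness))   # stops after 16 cycles
```

Unlike the open-loop timer, the loop terminates as soon as the sensed cleanness reaches the reference, while the `max_cycles` cap plays the role of a safety timeout.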

### Definition of controlled dynamical systems

A controlled dynamical system tries to manipulate the inputs of the system to realize desired behavior at the output of the system. It is described by the following components:

• the next-state equation( the function ${\displaystyle f}$ specifies change in state) ${\displaystyle {\dot {x}}=f(x,u,t)}$
• the output equation ( the function ${\displaystyle g}$ specifies what is observed from state) ${\displaystyle y=g(x,u,t)}$
• ${\displaystyle x}$ specifies the internal state of the system (an element of the state space, the set of possible states)
• ${\displaystyle y}$ specifies the observation variable
• ${\displaystyle u}$ specifies the input to the control system - control policy (i.e. how the control system determines the next input to the system): ${\displaystyle u=\pi (x,\alpha ,t)}$
• ${\displaystyle \alpha }$ specifies the desired signal
• ${\displaystyle t}$ specifies time variable
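The definitions above can be tied together in a small simulation. The concrete plant, output map, and policy below are hypothetical, not from the text: a scalar plant ${\displaystyle {\dot {x}}=-x+u}$ with output ${\displaystyle y=x}$ and a proportional policy ${\displaystyle u=2(\alpha -x)}$, integrated with a simple Euler step:

```python
# Hypothetical scalar example of the controlled-dynamical-system definitions:
# next-state equation f, output equation g, and control policy pi.

def f(x, u, t):                 # dx/dt = f(x, u, t)
    return -x + u

def g(x, u, t):                 # y = g(x, u, t)
    return x

def pi(x, alpha, t):            # u = pi(x, alpha, t)
    return 2.0 * (alpha - x)    # push the state toward the reference alpha

def simulate(x0, alpha, dt=0.01, steps=1000):
    """Euler-integrate the next-state equation under the policy pi."""
    x, t, u = x0, 0.0, 0.0
    for _ in range(steps):
        u = pi(x, alpha, t)
        x = x + dt * f(x, u, t)
        t += dt
    return g(x, u, t)

print(round(simulate(0.0, 1.0), 3))   # settles near 2/3
```

With this purely proportional policy the state settles at 2/3 rather than at the reference ${\displaystyle \alpha =1}$; such steady-state offset is typical of simple proportional policies and is one reason more sophisticated control strategies exist.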

### Control factors

• Stability - the behavior of the system remains bounded
• Controllability - use system's inputs ${\displaystyle u}$ to manipulate internal states ${\displaystyle x}$
• Observability - internal states ${\displaystyle x}$ can be inferred from external outputs ${\displaystyle y}$
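Controllability, in particular, can be tested concretely for small linear systems. The sketch below assumes a hypothetical two-state linear system ${\displaystyle {\dot {x}}=Ax+Bu}$ (the matrices are illustrative, not from the text); such a system is controllable exactly when the controllability matrix ${\displaystyle [B,AB]}$ has full rank, which for two states means a nonzero determinant:

```python
def controllable_2d(A, B):
    """Controllability test for a 2-state linear system x' = Ax + Bu.

    A is a 2x2 matrix (list of rows), B a length-2 input column.
    The system is controllable iff [B, AB] has rank 2, i.e. a
    nonzero determinant.
    """
    AB = [A[0][0] * B[0] + A[0][1] * B[1],   # first entry of A @ B
          A[1][0] * B[0] + A[1][1] * B[1]]   # second entry of A @ B
    det = B[0] * AB[1] - B[1] * AB[0]        # det of the matrix [B | AB]
    return det != 0

# A double integrator driven through its second state: controllable.
print(controllable_2d([[0, 1], [0, 0]], [0, 1]))   # True
# Two decoupled states with input reaching only the first: not controllable.
print(controllable_2d([[1, 0], [0, 2]], [1, 0]))   # False
```

The analogous observability test uses the rank of ${\displaystyle [C;CA]}$ instead.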

### Representing Time

After reading the definition of a control system, one might wonder: how is the time variable ${\displaystyle t}$ measured? Continuously or discretely? Depending on the mode of time measurement, control systems can be classified as below:

*Figure: continuous and discrete systems*
• Continuous dynamical systems - time is measured continuously (i.e. the output is measured continuously, and a continuous signal serves as the input of the system). Let's take a car's cruise control system as an example. The goal of this system is to maintain the car's speed at a constant value set by the driver. The plant in this case is the car, and the controller is the cruise control system. The reference is the speed set by the driver, and the output is the car's actual speed. The control system drives the actuator that determines how much power the engine should deliver. As the car runs in different conditions, the speed varies: going uphill, the car needs more power to maintain the set speed, and going downhill, the car needs to decrease the power to keep the speed stable. So the sensor monitors the car continuously, compares the measurement with the reference speed, and sends a signal (i.e. adjusted input) to the controller to achieve the goal.
• Discrete dynamical systems - time is measured in discrete steps (i.e. the outputs are measured at specific intervals, creating discrete signals as the inputs of the system). Let's return to the dish washer. A closed-loop controlled dish washer has a sensor that monitors the cleanness of the dishes. Is it necessary for the sensor to work all the time, sending signals continuously to run or shut off the machine? Definitely not: that requires too much power and is unnecessary. Suppose instead there is an interval of, say, 2 or 5 minutes, such that the sensor measures the cleanness and compares it with the set cleanness level once per interval. If the desired level has not been reached, an error signal is sent to the controller, instructing the machine to continue; otherwise the machine shuts off.

Comparing the two types of control systems, we can see that discrete dynamical systems have several merits (which is not to say that they are always better or more useful than continuous dynamical systems):

1. The power requirement is lower than that of a continuous system, since it does not need to work all the time.
2. It has higher accuracy and performs complex computations more easily than a continuous system, since it has time between samples to process calculations.
3. Losses are lower than in a continuous system, since missed signals do not happen as often.
4. The reliability of a discrete (digital) system is higher than that of a continuous system, for the reasons mentioned above.

### Other Control System Classification


In order to obtain a better understanding of control theory, it is essential to have a general idea of the different types of control systems. Other than the two types (i.e. continuous vs discrete systems) mentioned previously, there are several other ways to classify control systems.

#### SISO VS MIMO

Control systems can be divided into different categories depending on the number of inputs and outputs.

• Single-input single-output (SISO) - It has a single input and a single output, which makes it much easier to control and analyze. For example, a water supply system is a SISO system. The goal of the system is to maintain a constant water pressure in the house, with a single input signal and a single output signal. When the sensor measures that the water pressure is not high enough (i.e. compared with the reference) to support the water being used, it sends an error signal to the controller and the water tank lets more water in.
*Figure: water tap system*
• Multiple-input multiple-output (MIMO) - It has multiple inputs and multiple outputs. As a quick example, a water tap system can be a MIMO system. It has two inputs, hot water and cold water, and the goal is to reach a specific temperature set by the user. It is more complicated than the water supply system because it has two inputs and two outputs. If the temperature is higher than the set value, the hot water is slowed down and more cold water is let in; when it is lower than the set temperature, the controller sends a signal to slow the cold water down and let more hot water in. The amount and temperature of the mixed water can be described as below:
${\displaystyle m(t)=u_{1}(t)+u_{2}(t)}$
${\displaystyle \vartheta (t)={\frac {\vartheta _{1}*u_{1}(t)+\vartheta _{2}*u_{2}(t)}{u_{1}(t)+u_{2}(t)}}}$
${\displaystyle t}$ specifies time variable.
Function ${\displaystyle m}$ specifies the total water flow, where ${\displaystyle u_{1}}$ and ${\displaystyle u_{2}}$ specify the flow rates of hot water and cold water.
Function ${\displaystyle \vartheta }$ specifies the temperature of the mixed water, where ${\displaystyle \vartheta _{1}}$ and ${\displaystyle \vartheta _{2}}$ are the temperatures of the hot and cold supplies.
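A quick numeric check of the two equations above; the supply temperatures and flow rates are made-up values:

```python
# Evaluate the water-tap mixing equations for hypothetical supply
# temperatures (Celsius) and flow rates (litres/min).
THETA_HOT, THETA_COLD = 60.0, 10.0

def mix(u1, u2):
    """Return total flow m(t) and mixed temperature theta(t)."""
    m = u1 + u2                                      # m(t) = u1 + u2
    theta = (THETA_HOT * u1 + THETA_COLD * u2) / m   # flow-weighted average
    return m, theta

print(mix(3.0, 2.0))   # (5.0, 40.0): more hot than cold gives a warm mix
```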

#### Linear VS nonlinear control system

• Linear control system - this requires the devices to obey the superposition principle (i.e. the output is proportional to the input). Specifically, it follows the principles of homogeneity and additivity (e.g. the water supply system mentioned previously).
1. Homogeneity: if we multiply the input by a constant, the output is multiplied by the same value. Suppose an input of ${\displaystyle a}$ gives the output ${\displaystyle b}$. Then an input of ${\displaystyle a*c}$ should give the output ${\displaystyle b*c}$.
2. Additivity: suppose an input of ${\displaystyle a_{1}}$ gives the output ${\displaystyle b_{1}}$, and an input of ${\displaystyle a_{2}}$ gives the output ${\displaystyle b_{2}}$. Additivity means that an input of ${\displaystyle a_{1}+a_{2}}$ gives the sum of the two previous outputs (i.e. ${\displaystyle b_{1}+b_{2}}$).
• Nonlinear control system - this applies to most real-world systems---almost all control systems are nonlinear (e.g. a car's cruise control system). These systems are often governed by nonlinear differential equations (i.e. they do not follow the principles of homogeneity and additivity).
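The homogeneity and additivity tests can be spot-checked numerically. The two systems below are hypothetical: ${\displaystyle y=3x}$ is linear and ${\displaystyle y=x^{2}}$ is not. Note that passing a spot check at particular values does not prove linearity, while failing one does prove nonlinearity:

```python
linear = lambda x: 3 * x        # obeys superposition
nonlinear = lambda x: x ** 2    # violates it

def superposition_check(system, a1=2.0, a2=5.0, c=4.0):
    """Spot-check homogeneity and additivity at particular inputs."""
    homogeneity = system(c * a1) == c * system(a1)
    additivity = system(a1 + a2) == system(a1) + system(a2)
    return homogeneity and additivity

print(superposition_check(linear))      # True
print(superposition_check(nonlinear))   # False: (2+5)^2 != 2^2 + 5^2
```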

#### Centralised VS decentralised systems

*Figure: centralised vs decentralised systems*
• Centralized system - it has a central controller that controls the components of the system directly, or hierarchically (i.e. Hierarchical Control, where a higher level component instructs a lower level component).
• Decentralized system - it has multiple controllers that control the components of the system. Each component in such a system contributes equally to the global, complex behavior. Decentralization is useful in many ways; for example, it is applied in operations over long distances.

### Controllers and control theory

*Figure: forward control theory*
• Forward model - it maps the control signals to the predicted system behavior (i.e. it predicts the outputs of the system):
${\displaystyle y[n+1]=h(x[n],u[n])}$
• Inverse model - it maps the desired system behavior to the control signals that will produce that behavior (i.e. it finds a proper input to get the desired output of the system):
${\displaystyle u[n]=h^{-1}(x[n],y[n+1])}$
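For a concrete (hypothetical) scalar linear plant ${\displaystyle x[n+1]=ax[n]+bu[n]}$ observed directly (${\displaystyle y=x}$), the forward and inverse models can be written down explicitly; the parameter values below are assumptions for illustration:

```python
A, B = 0.9, 0.5   # assumed plant parameters, for illustration only

def forward(x, u):
    """Forward model: y[n+1] = h(x[n], u[n]) predicts the next output."""
    return A * x + B * u

def inverse(x, y_next):
    """Inverse model: u[n] = h^-1(x[n], y[n+1]) finds the control signal
    that produces the desired next output."""
    return (y_next - A * x) / B

x, target = 1.0, 2.0
u = inverse(x, target)           # pick the control for the target output
print(round(forward(x, u), 6))   # the forward model recovers the target
```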

### Other Related Concepts

A control system must first guarantee the stability of the closed-loop behavior. For linear systems, this can be achieved directly by pole placement. Nonlinear control systems require specific stability theories rather than relying directly on the system's internal dynamics.
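For a small linear system, closed-loop stability can be read off from the poles. The sketch below assumes a hypothetical two-state system ${\displaystyle {\dot {x}}=Ax}$; for a 2x2 matrix both poles have negative real part exactly when trace(A) < 0 and det(A) > 0 (the Routh-Hurwitz conditions), so stability can be tested without computing eigenvalues explicitly:

```python
def stable_2d(A):
    """Stability of x' = Ax for a 2x2 A: trace < 0 and det > 0 holds iff
    both poles (eigenvalues) lie in the left half-plane."""
    trace = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return trace < 0 and det > 0

print(stable_2d([[-1, 0], [0, -2]]))   # True: poles at -1 and -2
print(stable_2d([[0, 1], [0, 0]]))     # False: a double integrator drifts
```

Pole placement then amounts to choosing feedback gains that move the closed-loop matrix into this stable region.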

*Figure: controller structure with adjustable controller gains*
• Adaptive Control - it changes itself so that the behavior of the system remains suitable as the situation changes. Adaptive control uses on-line identification of parameters, or modification of controller gains, to obtain strong robustness properties. Take autopilots for high-performance aircraft as an example. Aircraft operate over a wide range of speeds and altitudes, and their dynamics are nonlinear and conceptually time varying. Specifically, for an operating point ${\displaystyle i}$:
${\displaystyle {\dot {x}}=A_{i}x+B_{i}u}$
${\displaystyle y=C_{i}x+D_{i}u}$
where ${\displaystyle A_{i}}$, ${\displaystyle B_{i}}$, ${\displaystyle C_{i}}$, and ${\displaystyle D_{i}}$ are functions of the operating point ${\displaystyle i}$.
As the aircraft moves through different flight conditions (i.e. altitude, speed, weather), the operating point changes, causing different values of ${\displaystyle A_{i}}$, ${\displaystyle B_{i}}$, ${\displaystyle C_{i}}$, and ${\displaystyle D_{i}}$.
At the same time, the feedback controller---which consists of a feedback loop and a controller with adjustable gains---should adapt to these changes; the way the gains are adjusted in response to changes in the plant and disturbance dynamics distinguishes one adaptive scheme from another.
• Hierarchical Control - a type of control system in which multiple devices and governing software are arranged in the levels of a hierarchical tree.
• Intelligent Control - it can be divided into several major sub-domains: neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation, and genetic algorithms.
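The aircraft example above is often handled with gain scheduling, one simple form of adaptive control: the controller stores plant parameters per operating point and recomputes its feedback gain when the operating point changes. The operating points, parameter values, and function names below are hypothetical, and the plant at each point is reduced to a scalar ${\displaystyle {\dot {x}}=ax+bu}$ for brevity:

```python
# Hypothetical operating points: name -> scalar plant parameters (a, b).
OPERATING_POINTS = {
    "low_altitude":  (-1.0, 2.0),
    "high_altitude": (-0.5, 1.0),
}

def scheduled_gain(point, desired_pole=-3.0):
    """Choose feedback gain k so the closed loop x' = (a - b*k) x has the
    desired pole at the current operating point."""
    a, b = OPERATING_POINTS[point]
    return (a - desired_pole) / b

# The gain is adjusted as the aircraft moves between operating points,
# keeping the closed-loop pole fixed at -3.
print(scheduled_gain("low_altitude"))    # 1.0
print(scheduled_gain("high_altitude"))   # 2.5
```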

## How is control theory related to Artificial Intelligence

### Components of an intelligent control system

*Figure: example of a robot control system*

An intelligent system is all about how to perceive, reason, and behave in the environment, and the controller is the core that makes an agent act intelligently. Specifically, a controller has four functions: measure, compare, compute, and correct. The output of a controller follows a reference (i.e. desired control signal); the controller monitors the output and compares it with the reference. The error signal (i.e. the difference between actual and desired output) is applied as feedback input to the system, to bring the actual output closer to the desired control signal. Essentially, intelligent systems need their controllers to be designed in a closed-loop manner.

*Figure: structure of a cruise control system in closed-loop control*

Take a robot as an example. A robot has a "body", which senses the environment through sensors, receives instructions, and follows them using actuators to carry out movement. It also needs a "brain"---a controller---to decide what to do, when to do it, and how to do it by referring to past experience, and to improve its behavior by comparing environment feedback with its goal.

#### Example of car's cruise control

As mentioned previously, the goal of this system is to maintain the car's speed at a constant value set by the driver. The agent in this case is the car, and the controller is the cruise control. The reference of the system is the speed set by the driver, the output is the car's actual speed, and the system drives the actuator that determines how much power the engine should deliver.

There are two possible solutions.

• Design an open-loop control system - There is no feedback to the system. No matter what the road condition is (e.g. flat, uphill, downhill, or curved), the controller gets no feedback from the car (i.e. no measurement of the speed). Linear control theory is suitable for this case, but the limitation is that the car goes faster downhill and slower uphill.
• Design a closed-loop control system - This solution is more reasonable. The sensor monitors the speed of the car and passes it to the controller. The controller compares it to the desired output and makes an adjustment. In this way, when the car goes uphill, the sensor continuously produces an error signal (i.e. the difference between the desired speed and the real speed), and the controller increases the power; when the car goes downhill, the controller decreases the power.
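The closed-loop option can be sketched as a proportional cruise controller. The car model, gains, and units below are hypothetical; the point is only that feedback automatically compensates for the slope disturbance:

```python
def cruise(set_speed, slope, kp=2.0, drag=0.1, dt=0.1, steps=2000):
    """Simulate a proportional cruise controller on a toy car model."""
    speed = 0.0
    for _ in range(steps):
        error = set_speed - speed              # sensor vs. reference
        power = kp * error                     # controller: proportional gain
        accel = power - drag * speed - slope   # toy dynamics with slope load
        speed += dt * accel                    # integrate one time step
    return round(speed, 2)

print(cruise(100.0, slope=0.0))   # 95.24 on a flat road
print(cruise(100.0, slope=5.0))   # 92.86 uphill: feedback limits the drop
```

A purely proportional controller still leaves a steady-state error (the speed settles below 100); real cruise controllers add integral action to remove it.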

Obviously, the closed-loop design is smarter, and it is how cruise control works in real life.

### Acting with Reasoning

Another major issue is how the knowledge that the system refers to (i.e. the experience in a specific domain accumulated by an intelligent agent) is formed. It can be formed initially at design time, offline afterwards, or online in real time.

*Figure: real-time bidding system*
• Online - While an intelligent agent is acting, the knowledge base is used to decide how to behave. The agent observes the environment, compares goals and actions, and updates its knowledge base in real time. This suits systems that do not require complex computation and whose response time must be very short.
• Offline - When an intelligent agent requires more complex computation, it usually builds the knowledge base before acting, to make the online computation more efficient or effective. This knowledge base is built from prior knowledge and past experience.