# Course:CPSC522/Monte Carlo Localization

## Monte Carlo Pedestrian Localization for Indoor Environments

This page gives an overview of the Monte Carlo Localization (MCL) method[1] and demonstrates an extension of the technique that enables pedestrian tracking within a building environment.[2]

Principal Author: Tommaso D'Amico

Collaborators: -

Papers Discussed:

- Fox, Dieter, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun. “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots.” In Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 1999.

- Woodman, Oliver, and Robert Harle. “Pedestrian Localisation for Indoor Environments.” In Proceedings of the 10th International Conference on Ubiquitous Computing - UbiComp ’08, 114. Seoul, Korea: ACM Press, 2008. https://doi.org/10.1145/1409635.1409651.

## Abstract

This page gives an overview of the Monte Carlo Localization (MCL) method[1] and demonstrates an extension of the technique that enables pedestrian tracking within a building environment[2]. MCL is analyzed as a technique for pedestrian localization because it allows for global localization with multimodal probability distributions over continuous space and is shown to be robust to highly noisy sensors. The pedestrian localization paper demonstrates a technique that extends MCL to a 2.5D map and illustrates its benefits for reducing uncertainty.

### Builds on

The Monte Carlo Localization method is an algorithm that utilizes particle filtering as a means of localization. It builds on the idea of using Bayesian probability with Markov Chains to develop a probability distribution of the likely states of a robot to aid it in its localization.

For the adaptation of MCL to pedestrians in indoor environments, the Kullback-Leibler divergence (KLD) sampling algorithm is used to dynamically adapt the number of particles sampled.

### Related Pages

A related combination of the Monte Carlo method with Markov chains is the Markov chain Monte Carlo (MCMC) algorithm.

## Content

### Monte Carlo Localization (MCL)

The goal of Monte Carlo Localization is to allow a robot to localize itself in an environment given only an odometry system, a sensor to perceive its surroundings, and a 2D map of its navigable environment[1]. This page also explores the use of MCL without the aid of a sensor to perceive the surroundings[2], and how it is still capable of providing accurate estimates using different aids and assumptions.

#### Global Localization and Position Tracking

Localization for robotics is often broken down into two categories: global localization and position tracking[1]. In position tracking the robot knows its initial position and only needs to compensate for small errors or drift in the odometry system as it moves. Global localization is more complex, as it aims to identify the robot's position within its global environment given no initial state[1]; a closely related variant is known as the kidnapped robot problem. Since the robot needs to localize itself from scratch, this tends to be a very computationally heavy process and is often omitted by most localization algorithms.[1]

#### MCL Algorithm

*Figure: Animation showing the operation of Monte Carlo Localization for a robot using laser range finders.*

Monte Carlo Localization is generically known as a particle filter. The key idea of the method is to represent the posterior belief ${\displaystyle Bel(l)}$ by a set of ${\displaystyle N}$ weighted, random samples, or particles, ${\displaystyle S=\{s_{i}\mid i=1...N\}}$[1]. A sample set constitutes a discrete approximation of a probability distribution[1]. In MCL the samples are of the form ${\displaystyle \langle \langle x,y,\theta \rangle ,p\rangle }$, where ${\displaystyle x,y,\theta }$ represent the coordinates on the 2D map and the robot's orientation, while ${\displaystyle p\geq 0}$ represents a discrete probability, the likelihood of the particle[1].

MCL operates in two steps, the robot motion step and the sensor reading step.

When the robot moves, MCL generates ${\displaystyle N}$ new samples that approximate the robot's position after the motion command. Each sample is drawn at random from the previously computed sample set, with likelihood determined by its p-value[1]. Let ${\displaystyle l'}$ denote the position of this sample. The new sample's position ${\displaystyle l}$ is then obtained by drawing a single random sample from ${\displaystyle P(l|l',a)}$, using the observed action ${\displaystyle a}$. The p-value of the new sample is ${\displaystyle N^{-1}}$.[1]
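The motion step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the particle representation, the Gaussian noise model, and the `noise` parameter are assumptions made for the example.

```python
import math
import random

def motion_update(particles, action, noise=0.05):
    """Prediction step of MCL (illustrative sketch).

    particles: list of ((x, y, theta), p) tuples.
    action: (distance, rotation) odometry reading.

    Draws N samples from the old set with probability given by their
    p-values, then moves each through a noisy motion model that stands
    in for P(l | l', a). Each new sample gets weight 1/N.
    """
    n = len(particles)
    states = [s for s, _ in particles]
    weights = [p for _, p in particles]
    drawn = random.choices(states, weights=weights, k=n)  # resample by p-value

    dist, rot = action
    new_particles = []
    for (x, y, theta) in drawn:
        # Sample from P(l | l', a): perturb the commanded motion with noise.
        theta_new = theta + rot + random.gauss(0.0, noise)
        d = dist + random.gauss(0.0, noise)
        x_new = x + d * math.cos(theta_new)
        y_new = y + d * math.sin(theta_new)
        new_particles.append(((x_new, y_new, theta_new), 1.0 / n))
    return new_particles
```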

Sensor readings are incorporated by re-weighting the sample set, in a way that implements Bayes' rule in Markov localization. More specifically, let ${\displaystyle \langle l,p\rangle }$ be a sample[1]. Then ${\displaystyle p\longleftarrow \alpha P(s|l)}$, where ${\displaystyle s}$ is the sensor measurement and ${\displaystyle \alpha }$ is a normalization constant that forces all p-values to sum to 1[1]. The incorporation of sensor readings is typically performed in two phases: one in which ${\displaystyle p}$ is multiplied by ${\displaystyle P(s|l)}$, and one in which the p-values are normalized.[1]
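The two phases of the sensor step can be sketched as below. The sensor model `likelihood(s, l)` returning ${\displaystyle P(s|l)}$ is assumed to be supplied by the application; its name and signature are illustrative.

```python
def sensor_update(particles, measurement, likelihood):
    """Correction step of MCL (illustrative sketch).

    particles: list of (l, p) pairs.
    likelihood(s, l): assumed user-supplied sensor model returning P(s | l).
    """
    # Phase 1: multiply each p-value by P(s | l).
    reweighted = [(l, p * likelihood(measurement, l)) for l, p in particles]
    # Phase 2: normalize so the p-values sum to 1.
    total = sum(p for _, p in reweighted)
    return [(l, p / total) for l, p in reweighted]
```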

In practice, introducing a small number of uniformly distributed random samples at each step is recommended, to enable re-localization in the rare event that the robot loses track of its position.[1]

#### Properties of MCL

The main property that makes MCL such a powerful algorithm is that it can approximate arbitrary probability distributions. The approximation error of the importance sampler converges to zero at a rate of ${\displaystyle 1/{\sqrt {N}}}$ (under MCL conditions), creating a clear trade-off between accuracy and computational load[1]. However, the true advantage lies in the way MCL concentrates computational resources on regions with high likelihood, where they matter most[1], a property related to adaptive sampling. More about this is shown in the second paper description below.

Since there is no discretization of the space or the data, MCL can estimate the state of the robot to any desired numerical accuracy, given a sufficient sampling size.[1]

MCL is an online algorithm, meaning that it lends itself nicely to an any-time implementation: the algorithm is able to provide an answer at any time, but the quality of the solution improves with time.[1]

### Pedestrian Localization for Indoor Environments

The pedestrian localization problem is the subject of the second paper of this page, which builds upon the MCL algorithm and adapts it to the localization of pedestrians in an indoor environment. In particular, this paper looks at how a foot-mounted inertial unit, a detailed building model, and a particle filter can be combined to provide absolute positioning, despite the presence of drift in the inertial unit and without knowledge of the user's initial location[2].

There are several differences between robot and pedestrian localisation as presented in the paper. Firstly, the robots used in existing literature have been unable to climb stairs. As a result, a 2-dimensional map of the environment has been sufficient. Secondly, the movement of a robot can be actively controlled. This is clearly not possible in a pedestrian localisation system. Thirdly, robot localisation is typically solved knowing both the relative movement of the robot and measurements obtained from a laser range finder. In contrast, only relative movement information is used (in the form of step events) to solve the pedestrian localisation problem.

#### 2.5D Mapping

*Figure: Illustration of a 2.5D map, showing the way stairs and walls are mapped in the environment.*

There are many obstacles that limit the possible movement of a pedestrian within a building. In particular walls are impassable obstacles. In order to enforce such constraints it is necessary to have a computer-readable plan of the building. Since it is reasonable to assume that a pedestrian’s foot is constrained to lie on the floor during the stance phase of the gait cycle, a 2.5-dimensional description of the building (in which each object has a vertical position but no depth) is sufficient[2]. Hence a map is defined to be a collection of planar floor polygons. Each floor polygon corresponds to a surface in the building on which a pedestrian’s foot may be grounded[2]. Each edge of a floor polygon is either an impassable wall or a connection to the edge of another polygon. Connected edges must coexist in the (x,y) plane, however they may be separated in the vertical direction to allow the representation of stairs[2].
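The map structure described above might be represented as follows. This is a sketch under the stated assumptions; the class and field names (`Edge`, `FloorPolygon`, `connects_to`, `height`) are illustrative, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    a: tuple                                    # (x, y) endpoint
    b: tuple                                    # (x, y) endpoint
    connects_to: "FloorPolygon | None" = None   # None means impassable wall

@dataclass
class FloorPolygon:
    height: float                               # vertical position of this planar surface
    edges: list = field(default_factory=list)

# Two landings joined by a shared edge: the edges coincide in the (x, y)
# plane but the polygons differ in height, representing a stair.
lower = FloorPolygon(height=0.0)
upper = FloorPolygon(height=0.2)
lower.edges.append(Edge((0, 0), (0, 1), connects_to=upper))
upper.edges.append(Edge((0, 0), (0, 1), connects_to=lower))
wall = Edge((0, 1), (1, 1))                     # no connection: impassable wall
```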

#### Tracking using a foot mounted IMU

Although we are dealing with tracking people, MCL is usually applied to robots, where mobile platforms typically use inertial sensors, laser range-finders, and computer vision to provide accurate localisation without the requirement of fixed infrastructure[2]. Applying the same systems to people is, however, fraught with difficulties: laser range-finders and cameras are impractical to wear, we lose the ability to control the subject to maximize our chances of precise localisation, and the techniques are usually developed with a single floor in mind[2].

One type of sensor which does seem applicable to people tracking is inertial measurement units (IMUs). An IMU contains three orthogonal rate-gyroscopes and accelerometers, which report angular velocity and acceleration respectively. In principle, it is possible to track the orientation of the IMU by integrating the angular velocity signals. This can then be used to resolve the acceleration samples into the global frame of reference, from which acceleration due to gravity is subtracted. The remaining acceleration can then be integrated twice to track the position of the IMU relative to a known starting point and heading[3].

For foot-mounted IMUs the cubic-in-time drift problem can be reduced by detecting when the foot is in the stationary stance phase (i.e. in contact with the ground) during each gait cycle. Zero velocity updates (ZVUs) can be applied during this phase, in which the known direction of acceleration due to gravity is used to correct tilt errors which have accumulated during the previous step. The application of such constraints replaces the cubic-in-time error growth with an error accumulation that is linear in the number of steps.[2]
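A crude sketch of the zero-velocity update idea follows. Real stance detection also examines gyroscope energy and uses windows of samples, and the velocity is a 3D vector corrected within a full strapdown navigation filter; the threshold `tol` and both function names here are assumptions for illustration.

```python
def detect_stance(accel_norms, gravity=9.81, tol=0.3):
    """Flag samples whose acceleration magnitude is close to gravity,
    a crude indicator that the foot is in the stationary stance phase."""
    return [abs(a - gravity) < tol for a in accel_norms]

def apply_zvu(velocity, stance):
    """Zero-velocity update: reset the integrated velocity whenever the
    foot is flagged as stationary, so drift accumulates per step rather
    than cubically in time."""
    return 0.0 if stance else velocity
```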

#### Particle Propagation for pedestrian localization

The particle propagation for pedestrian localization closely resembles the update given by the robot motion in MCL, except for determining the floor polygon to which the propagated particle is constrained. Initially it is assumed that the particle resides in the floor polygon it came from. In order for it to exit the floor polygon, the step vector must intersect one of its edges in the (x,y) plane. Three cases are outlined by the authors:

1. No intersection point is found. The particle must still be constrained to the same polygon.
2. The first intersection C is with a wall. In this case the particle’s weight should be set to equal 0 in the correction step, enforcing the constraint that walls are impassable.
3. The first intersection C is with an edge connecting to another polygon. In this case we update the current polygon to the new connected polygon.

Since a single step may span multiple connections, the intersection test is repeated between the remainder of the step vector and the edges of the updated polygon. This process continues recursively until one of the first two cases applies.
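The three cases and the recursion can be sketched as below. The geometric helper `intersect` is assumed to exist (computing segment-edge intersections is omitted), and `Edge` is a minimal stand-in whose `connects_to` is `None` for walls.

```python
from collections import namedtuple

# Minimal stand-in: an edge either connects to another polygon or is a
# wall (connects_to=None).
Edge = namedtuple("Edge", "connects_to")

def propagate(poly, step, intersect):
    """Sketch of the three-case floor-polygon update for one step.

    intersect(poly, step): assumed geometric helper returning
    (edge, remainder) -- the first edge of `poly` crossed by the step
    vector in the (x, y) plane (or None), and the portion of the step
    beyond the crossing point.

    Returns (final_polygon, weight_multiplier); 0.0 means a wall was hit.
    """
    while True:
        edge, remainder = intersect(poly, step)
        if edge is None:                 # case 1: particle stays in poly
            return poly, 1.0
        if edge.connects_to is None:     # case 2: wall hit -> weight 0
            return poly, 0.0
        poly = edge.connects_to          # case 3: cross into connected polygon
        step = remainder                 # repeat with the remaining step
```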

#### Particle Correction for pedestrian localization

The correction step sets the weight ${\displaystyle w_{t}}$ of a propagated particle. This step is used to enforce wall constraints. If a wall is intersected during the propagation step used to generate the state of the particle, then it is assigned a weight ${\displaystyle w_{t}=0}$[2]. If a wall is not intersected, the particle is assigned a weight based on the difference between the height change ${\displaystyle \delta z}$ of the current step and the difference in height between the start and end floor polygons. The height change according to the map is given by ${\displaystyle \delta z_{poly}=Height(poly_{t})-Height(poly_{t-1})}$[2].

Here we are using the change in height reported in the current step as a measurement in the Bayesian framework. Particles whose change in height over the step closely matches the change in height reported in the step event are assigned stronger weights[2]. This allows localisation to occur quickly when the user climbs or descends stairs.
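A sketch of this weighting follows. The Gaussian scoring of the height mismatch and the value of `sigma` are assumptions made for the example, not the paper's exact model; the wall case is handled as described above.

```python
import math

def correction_weight(dz_step, h_prev, h_curr, sigma=0.05, hit_wall=False):
    """Correction-step weight for one particle (illustrative sketch).

    dz_step: height change reported by the current step event.
    h_prev, h_curr: heights of the start and end floor polygons.
    """
    if hit_wall:
        return 0.0                       # walls are impassable: w_t = 0
    dz_poly = h_curr - h_prev            # height change according to the map
    err = dz_step - dz_poly
    # Particles whose height change matches the step event score higher.
    return math.exp(-err * err / (2 * sigma * sigma))
```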

*Figure: Algorithm containing the particle propagation and correction steps for pedestrian localization. Source: Woodman and Harle, “Pedestrian Localisation for Indoor Environments.”[2]*

#### Re-sampling for pedestrian localization

The number of particles needed to represent ${\displaystyle Bel(s)}$ to a given level of accuracy depends on the complexity of the distribution, which can vary drastically over time. As a result it can be highly inefficient to use a fixed number of particles. This is particularly true for localisation problems, where the number of particles required to track an object after convergence is typically only a small fraction of the number required to adequately describe the distribution in the early stages of localisation[2].

Kullback-Leibler divergence (KLD) sampling is used in this framework since likelihood-based adaptation is not well suited for problems where ${\displaystyle Bel(s)}$ can be a multi-modal distribution, as is often the case during indoor localisation due to symmetry in the environment[2]. The idea of KLD-sampling is to generate a number of particles at each step such that the approximation error introduced by using a sample-based representation of ${\displaystyle Bel(s)}$ remains below a specified threshold ${\displaystyle \epsilon }$.

Since the propagation step in this framework uses control information (in the form of step events), the propagated belief is a reasonable estimate of the posterior.
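Fox's KLD-sampling bound on the required number of particles can be sketched as follows, with the sampled distribution discretized into a histogram of which ${\displaystyle k}$ bins are non-empty. The default values of `epsilon` and the quantile `z` are illustrative choices, not taken from the paper.

```python
def kld_sample_size(k, epsilon=0.05, z=2.33):
    """Number of particles such that the KL divergence between the
    sample-based estimate and the true posterior stays below `epsilon`
    with probability set by the normal upper quantile `z` (2.33 ~ 99%).

    k: number of histogram bins holding at least one particle.
    Uses the Wilson-Hilferty approximation to the chi-squared quantile.
    """
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(((k - 1) / (2.0 * epsilon)) * (1.0 - a + (a ** 0.5) * z) ** 3)
```

Note how the bound grows with `k`: a spread-out, multi-modal belief (many occupied bins) demands many particles, while a converged belief needs far fewer, matching the motivation given above.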

### Contributions and Comparisons

Taking MCL and adapting it to a pedestrian tracker requires several modifications. MCL was designed for robot localization, and robots can carry multiple sensors that provide information about the environment. Strapping a laser sensor or camera to a person, however, is a difficult demand. For this reason the authors resorted to using only an IMU for localization, which makes localization much more difficult. Given that humans move by walking in steps, the researchers exploited the IMU data further by making assumptions about the readings: they included ZVUs to detect when a step occurred and assumed all footsteps would be in contact with the floor. This enabled the sole use of the IMU for tracking, given the aforementioned assumptions and the expansion of the algorithm to a three-dimensional environment via the 2.5D mapping trick shown above.

From the pedestrian adaptation of MCL the main contributions to be carried forward are the concept of expanding the map to higher dimensions and the use of assumptions as constraints to reduce the number of sensors or readings needed to make the algorithm work.

The second paper introduces the use of KLD-sampling as a method for adaptive sampling, which may be relevant to general applications of MCL.

It is challenging to compare the localization accuracy of the two approaches, as the environments and data used are very different, and the algorithms are stochastic and can be varied in many ways. That said, reported accuracies for robotics experiments with MCL fall between 5–10 cm with large enough sampling (~1,000–10,000 samples)[1], while for the pedestrian application the authors report an accuracy of 0.5 m 75% of the time and 0.73 m 95% of the time[2]. The removal of environment-observing sensors is thus quite damaging to accuracy even when employing assumptions about the data being generated, although given the limited sensors and large map environment, the results can still be useful for the task at hand (robot localization applications tend to require more accurate localization, as their navigation often depends on it, while humans can navigate by themselves).

## Annotated Bibliography

[1] Fox, Dieter, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun. “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots.” In Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 1999.

[2] Woodman, Oliver, and Robert Harle. “Pedestrian Localisation for Indoor Environments.” In Proceedings of the 10th International Conference on Ubiquitous Computing - UbiComp ’08, 114. Seoul, Korea: ACM Press, 2008. https://doi.org/10.1145/1409635.1409651.

[3] Titterton, D. H., and J. L. Weston. Strapdown Inertial Navigation Technology. 2nd ed. IEE Radar, Sonar, Navigation, and Avionics Series 17. Stevenage: Institution of Electrical Engineers, 2004.