Course:CPSC522/Dynamic Bayesian Networks


Dynamic Bayesian Networks

A Dynamic Bayesian Network is a Bayesian network extended over a sequence of time slices, with arcs connecting variables in adjacent time slices

Principal Author: Bronson Bouchard
Collaborators:

Abstract

This page describes the properties of Dynamic Bayesian Networks, how they differ from Hidden Markov Models, and briefly describes inference and learning.

Builds on

A Dynamic Bayesian Network is a Bayesian Network extended to include a temporal probability model.

Related Pages

Hidden Markov Models are a special case of Dynamic Bayesian Networks in which the hidden state is represented by a single random variable.

Content

Overview

A Dynamic Bayesian Network (DBN) is a belief network that models a set of variables and their dependencies at a given time, but also includes dependencies between time steps. It can be specified by a pair of Bayesian Networks: an initial Bayesian Network that gives the prior probability distribution over the first time slice, and a second network that models the transition from one time step to the next, usually called a "Two-Timeslice Bayesian Network (2TBN)".[1] In a first-order model the parents of any node in a time slice must lie either within that slice or in the immediately preceding one; higher-order models allow a parent to lie further back in time, but this complicates the model.[2] The benefit of the temporal component is that the belief in a variable can be updated based on the observations made in previous time steps.

The "dynamic" part of a DBN refers only to the beliefs, which change as temporal observations are made; the network structure itself must remain static. While stepping through time, no variables can be added or removed and the conditional probability distributions cannot change. Additionally, there cannot be a dependency from a future time step to a previous time step.[3]

The main difference between a DBN and a Hidden Markov Model is that a DBN represents the hidden state as a set of random variables, whereas a Hidden Markov Model uses only a single variable.[4] DBNs therefore provide more freedom in the structure of the model and of the conditional probability distributions (CPDs).
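To make this concrete, the following worked comparison (the choice of n binary variables is purely illustrative) shows why a factored state matters: a DBN whose hidden state consists of n binary variables corresponds to a Hidden Markov Model whose single state variable needs one value per joint assignment; with n = 20, for example, that is roughly a million states, while the DBN keeps 20 small CPDs.

X_t = (X_t^1, \dots, X_t^n), \qquad X_t^i \in \{0, 1\}
\quad\Longrightarrow\quad
|\mathrm{dom}(S_t)| = 2^n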

One application of DBNs is detecting sensor failure. If several sensors are modeled as having some conditional dependence on the states in previous time steps, then when a sensor detects something abnormal, the observations made in previous time steps can be used to infer whether it is a true reading or a faulty one.

Definition

With the assumption of a first-order transition model, a DBN can be defined formally as follows.
Let Z_t = (Z_t^1, \dots, Z_t^N) be the collection of random variables at time slice t, each either hidden or observable, with the prior distribution P(Z_1) given by the initial Bayesian Network.
The 2TBN defines the transition model

P(Z_t \mid Z_{t-1}) = \prod_{i=1}^{N} P(Z_t^i \mid \mathrm{Pa}(Z_t^i)),

where \mathrm{Pa}(Z_t^i) denotes the parents of Z_t^i.
Each variable Z_t^i has a conditional probability distribution that defines P(Z_t^i \mid \mathrm{Pa}(Z_t^i)), where each parent can be in either the current time slice or the previous one.
Unrolling the DBN for T time slices results in the joint distribution

P(Z_{1:T}) = \prod_{t=1}^{T} \prod_{i=1}^{N} P(Z_t^i \mid \mathrm{Pa}(Z_t^i)).[2]
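As a minimal sketch of this factorization (not taken from the cited sources; the two-variable model and every CPT value below are invented for illustration), the following Python fragment defines a toy first-order DBN with one hidden binary variable X and one observed binary variable Y per slice, and evaluates the unrolled joint as a product of one CPD factor per node per slice.

# Toy first-order DBN: hidden X, observed Y; all numbers illustrative only.
prior_X = {0: 0.7, 1: 0.3}                    # P(X_1), from the initial network
trans_X = {0: {0: 0.9, 1: 0.1},               # P(X_t | X_{t-1}), from the 2TBN
           1: {0: 0.2, 1: 0.8}}
emit_Y = {0: {0: 0.8, 1: 0.2},                # P(Y_t | X_t), intra-slice arc
          1: {0: 0.3, 1: 0.7}}

def joint_probability(xs, ys):
    """P(X_{1:T}, Y_{1:T}): one CPD factor per variable per time slice."""
    p = prior_X[xs[0]] * emit_Y[xs[0]][ys[0]]
    for t in range(1, len(xs)):
        p *= trans_X[xs[t - 1]][xs[t]]        # parent in the previous slice
        p *= emit_Y[xs[t]][ys[t]]             # parent in the same slice
    return p

print(joint_probability(xs=[0, 0, 1], ys=[0, 0, 1]))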

Inference

Performing inference on a Dynamic Bayesian Network is done in the same way as on an ordinary Bayesian Network. The limitation is that, unless the number of time steps t since the start is small, exact inference becomes computationally infeasible, because the network at time step t has to be unrolled all the way back to the initial state, which may span a very large number of time slices.[5] Inference can instead be done approximately, for example by Particle Filtering[6], which reduces the number of samples required for the inference.

There are also a few tasks specific to the temporal domain, mainly Filtering, Smoothing, Prediction, and Decoding. Filtering is the task of updating the current belief state by recursive estimation from the previous belief state and the latest observation. Smoothing, or hindsight, is the task of estimating a past state using all the observations made up to the current time step. Prediction uses the belief in the current state, after t time steps, to estimate the state at some number of time steps in the future. Decoding, or finding the most probable explanation, is the task of computing the most likely assignment to the hidden variables, over all time steps, given all the observations made so far.
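Below is a minimal particle-filtering sketch (sequential importance resampling) on a toy one-hidden-variable DBN; the transition and observation tables and all numbers are invented for illustration and are not from the cited sources.

import random

random.seed(0)
prior = [0.7, 0.3]                     # P(X_0): belief before any observations
trans = [[0.9, 0.1], [0.2, 0.8]]       # P(X_t | X_{t-1})
emit = [[0.8, 0.2], [0.3, 0.7]]        # P(Y_t | X_t)

def particle_filter(observations, n_particles=1000):
    # Sample the initial particle set from the prior.
    particles = random.choices([0, 1], weights=prior, k=n_particles)
    for y in observations:
        # Propagate every particle through the transition model.
        particles = [random.choices([0, 1], weights=trans[x], k=1)[0]
                     for x in particles]
        # Weight by the observation likelihood, then resample.
        weights = [emit[x][y] for x in particles]
        particles = random.choices(particles, weights=weights, k=n_particles)
    # Approximate filtered belief P(X_t = 1 | Y_{1:t}).
    return sum(particles) / n_particles

print(particle_filter(observations=[0, 0, 1]))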

Learning

Learning in DBNs is done in a similar way to learning in traditional Bayesian Networks. When the structure is fully known and only the parameter values need to be learned, the EM Algorithm can be applied as in ordinary Bayesian Networks, with the additional constraint that the parameters are shared across time steps and must not change for a given variable. When the structure is unknown, it can be learned in the same fashion as for a Bayesian Network because the structure does not change between time steps; the only difference is that the parents of a variable can be in either the current time slice or the previous one.
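As a minimal sketch of the fully observed case (the sequences and the variable below are invented for illustration), the following fragment estimates a transition CPT by maximum likelihood, pooling counts over every adjacent pair of time slices in every sequence, which is exactly the parameter tying described above. With hidden variables, EM would replace these hard counts with expected counts but keep the same tying.

from collections import Counter

# Toy, fully observed sequences of a single binary variable X over time.
sequences = [
    [0, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 0, 0],
]

# Pool transition counts across all time steps, since the DBN reuses the
# same CPT P(X_t | X_{t-1}) at every slice.
counts = Counter()
for seq in sequences:
    for prev, curr in zip(seq, seq[1:]):
        counts[(prev, curr)] += 1

# Normalize per parent value to obtain the tied transition CPT.
for prev in (0, 1):
    total = counts[(prev, 0)] + counts[(prev, 1)]
    for curr in (0, 1):
        print(f"P(X_t={curr} | X_t-1={prev}) = {counts[(prev, curr)] / total:.2f}")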

Example

In this example, a very simple model of a car engine is used to demonstrate some of the concepts of a DBN.

Figure 1. The example initial Bayesian Network
Figure 2. The example Two-Timeslice Bayesian Network
Figure 3. The example DBN unrolled for three time steps

The set of variables (shown in Figures 1-3) includes CoolantLevel and Overheating, which are observable, and hidden variables such as CoolantLeak, which correspond to problems that can occur.
We can define our initial Bayesian Network as in Figure 1; the only initial dependency is that the initial coolant level depends on the coolant leak. We can also define a 2TBN as in Figure 2, in which every variable is persistent across time slices, i.e. each variable at one time step depends on its own value at the previous time step. Unrolling the 2TBN in Figure 2 with the initial network from Figure 1 for three time steps results in the Bayesian Network in Figure 3. Assuming the parameters are known, we can perform inference tasks on the model. If we are at time step 3 and are unsure of one of our observations, we can perform filtering with the previous two time steps and update our current belief about the model. Additionally, if we observe that the engine is overheating at time step 3 and want an explanation, we can go back through the previous two time steps, find the most probable values of the hidden variables along that trip through time, and make a guess as to what caused the overheating. We can also predict whether the engine will overheat in the future, given our current belief in the state of the model from the observations made so far.
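The following sketch works one of these tasks, exact filtering, on a cut-down version of the engine model with a single hidden variable (CoolantLeak) and a single observed variable (Overheating); the page does not give the real parameters, so every probability below is invented for illustration.

# Hypothetical CPTs for a simplified engine model.
p_leak_initial = 0.05                          # P(CoolantLeak = true) before any observations
p_leak_persist = {True: 0.99, False: 0.02}     # P(Leak_t = true | Leak_{t-1})
p_overheat = {True: 0.70, False: 0.05}         # P(Overheat_t = true | Leak_t)

def filter_leak(overheat_observations):
    """Recursively update the belief P(CoolantLeak_t | observations so far)."""
    belief = p_leak_initial
    for obs in overheat_observations:
        # Predict: push the current belief through the transition model.
        predicted = belief * p_leak_persist[True] + (1 - belief) * p_leak_persist[False]
        # Update: weight by the observation likelihood and renormalize.
        like_true = p_overheat[True] if obs else 1 - p_overheat[True]
        like_false = p_overheat[False] if obs else 1 - p_overheat[False]
        belief = (predicted * like_true /
                  (predicted * like_true + (1 - predicted) * like_false))
    return belief

# The engine overheats at time steps 2 and 3: belief in a coolant leak rises.
print(filter_leak([False, True, True]))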

Applications

DBNs can be used in a variety of different applications that have temporal data and can benefit from the more complex models. DBNs are used in robotics for such things as understanding human interaction[7], collision avoidance tasks[8], and action choice[9]. DBNs are also applied in speech recognition[10], computer vision[11], and bioinformatics[12][13].


Annotated Bibliography

  1. Murphy, K. P., & Russell, S. (2002). Dynamic bayesian networks: representation, inference and learning. 14
  2. 2.0 2.1 Murphy, K. P., & Russell, S. (2002). Dynamic bayesian networks: representation, inference and learning. 15
  3. Poole, D., & Mackworth, A. K. Artificial Intelligence: Foundations of Computational Agents (2nd ed.), Section 8.5.4.
  4. Murphy, K. P., & Russell, S. (2002). Dynamic bayesian networks: representation, inference and learning. 15
  5. Russell, S. J., Norvig, P., & Davis, E. (2010). Artificial intelligence: A modern approach. Upper Saddle River, NJ: Prentice Hall. 596
  6. Russell, S. J., Norvig, P., & Davis, E. (2010). Artificial intelligence: A modern approach. Upper Saddle River, NJ: Prentice Hall. 597
  7. Tahboub, K. A. (2006). Journal of Intelligent and Robotic Systems, 45, 31. https://doi.org/10.1007/s10846-005-9018-0
  8. Hirai H., Takano S., Suzuki E. (2011) Simulating Swarm Robots for Collision Avoidance Problem Based on a Dynamic Bayesian Network. In: Kampis G., Karsai I., Szathmáry E. (eds) Advances in Artificial Life. Darwin Meets von Neumann. ECAL 2009. Lecture Notes in Computer Science, vol 5778. Springer, Berlin, Heidelberg
  9. Baraglia, J., Cakmak, M., Nagai, Y., Rao, R., & Asada, M. (2017). Efficient human-robot collaboration: When should a robot take initiative? The International Journal of Robotics Research, 36(5-7), 563-579.
  10. Zweig, G., & Russell, S. (1998). Speech recognition with dynamic Bayesian networks.
  11. Kafai, M., & Bhanu, B. (2012). Dynamic Bayesian networks for vehicle classification in video. IEEE Transactions on Industrial Informatics, 8(1), 100-109.
  12. Lo, L., Wong, M., Lee, K., & Leung, K. (2015). High-order dynamic Bayesian Network learning with hidden common causes for causal gene regulatory network. BMC Bioinformatics
  13. Burge, J., Lane, T., Link, H., Qiu, S. and Clark, V. P. (2009), Discrete dynamic Bayesian network analysis of fMRI data. Hum. Brain Mapp., 30: 122–137.


