Course:CPSC522/WeightedModelCounting
Weighted Model Counting
An efficient approach to probabilistic inference is to reduce the problem to weighted model counting. The approach encodes the probabilistic model, usually a Bayesian network, as a propositional knowledge base in conjunctive normal form (CNF), with a weight assigned to each model (assignment of the CNF variables) according to the network parameters. Note that the term "model" refers to two different concepts here: the probabilistic model being encoded, and a propositional model, i.e. a satisfying assignment of the CNF. Given this CNF, computing the probability of some evidence is equivalent to calculating the sum of the weights of all assignments to the CNF variables compatible with the evidence.
Principal Author: Hooman Hasehmi
Secondary Author:
Abstract
This page provides a summary of weighted model counting and different approaches to it. The first section provides background, one variation of the problem, and a simple example task. The next sections describe different ways to encode the probabilistic model (here a Bayesian network), ways to count the models, and finally which local structures can be exploited (that is, which properties of the parameter values of a specific probabilistic model can be exploited). The article ends with another variation of the problem and a conclusion. This article is structured based on a survey [1] and tries to convey similar information.
Builds on
Weighted model counting converts the model to conjunctive normal form (CNF). The probabilistic model here is a Bayesian network, which is a type of graphical model.
Related Pages
One of the counting methods relies on knowledge compilation into d-DNNF.
Content
Introduction and definition
Definitions
A Bayesian network represents a joint distribution as a pair $(G, \Phi)$, where $G$ is a directed acyclic graph representing the dependencies and independencies among the variables and $\Phi$ is a set of factors. For each variable $X$ with parents $\mathbf{U}$ there is a factor $\phi_X$, a function such that for each assignment $x, \mathbf{u}$ of these variables $\phi_X(x, \mathbf{u}) = \Pr(x \mid \mathbf{u})$. By the chain rule and using the independencies we have
$\Pr(x_1, \dots, x_n) = \prod_i \Pr(x_i \mid \mathbf{u}_i).$
In many cases a factor is described by a conditional probability table (CPT).
Example
A simple example Bayesian network is provided in Figure 1. The joint probability and the conditional probability tables are provided below [1].
A | B | C | Pr |
---|---|---|---|
a1 | b1 | c1 | 0.001 |
a1 | b1 | c2 | 0.002 |
a1 | b1 | c3 | 0.007 |
a1 | b2 | c1 | 0.009 |
a1 | b2 | c2 | 0.018 |
a1 | b2 | c3 | 0.063 |
a2 | b1 | c1 | 0.0018 |
a2 | b1 | c2 | 0.0162 |
a2 | b1 | c3 | 0.162 |
a2 | b2 | c1 | 0.0072 |
a2 | b2 | c2 | 0.0648 |
a2 | b2 | c3 | 0.648 |
A | Pr |
---|---|
a1 | 0.1 |
a2 | 0.9 |
A | B | Pr |
---|---|---|
a1 | b1 | 0.1 |
a1 | b2 | 0.9 |
a2 | b1 | 0.2 |
a2 | b2 | 0.8 |
A | C | Pr |
---|---|---|
a1 | c1 | 0.1 |
a1 | c2 | 0.2 |
a1 | c3 | 0.7 |
a2 | c1 | 0.01 |
a2 | c2 | 0.09 |
a2 | c3 | 0.9 |
Using this model, the probability of any evidence can be calculated by summing the rows of the joint table that are consistent with it.
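This summation over consistent rows can be reproduced with a small script. The following sketch (in Python, with the value names a1, a2, b1, b2, c1, c2, c3 taken from the tables above; the query Pr(B = b1) is just an illustrative choice) builds the joint table via the chain rule and sums the rows consistent with a piece of evidence.

```python
# CPTs of the example network (A is the parent of both B and C).
pr_A = {"a1": 0.1, "a2": 0.9}
pr_B_given_A = {("a1", "b1"): 0.1, ("a1", "b2"): 0.9,
                ("a2", "b1"): 0.2, ("a2", "b2"): 0.8}
pr_C_given_A = {("a1", "c1"): 0.1, ("a1", "c2"): 0.2, ("a1", "c3"): 0.7,
                ("a2", "c1"): 0.01, ("a2", "c2"): 0.09, ("a2", "c3"): 0.9}

# Joint distribution via the chain rule: Pr(a, b, c) = Pr(a) Pr(b|a) Pr(c|a).
joint = {(a, b, c): pr_A[a] * pr_B_given_A[(a, b)] * pr_C_given_A[(a, c)]
         for a in pr_A for b in ("b1", "b2") for c in ("c1", "c2", "c3")}

def probability(evidence):
    """Sum the joint-table rows that are consistent with the evidence."""
    index = {"A": 0, "B": 1, "C": 2}
    return sum(p for row, p in joint.items()
               if all(row[index[var]] == val for var, val in evidence.items()))

print(probability({"B": "b1"}))   # approximately 0.19 for this illustrative query
```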
The following is an example encoding of this probabilistic model as a CNF (the general construction, ENC1, is described below). For variable $A$ the indicator clauses are
$\lambda_{a_1} \lor \lambda_{a_2}$ and $\neg\lambda_{a_1} \lor \neg\lambda_{a_2}$,
and for each network parameter, for example $\theta_{b_1|a_1}$, the parameter clauses state
$\lambda_{a_1} \land \lambda_{b_1} \Leftrightarrow \theta_{b_1|a_1}$,
or equivalently, in clausal form,
$\neg\lambda_{a_1} \lor \neg\lambda_{b_1} \lor \theta_{b_1|a_1}$, $\neg\theta_{b_1|a_1} \lor \lambda_{a_1}$, $\neg\theta_{b_1|a_1} \lor \lambda_{b_1}$,
with similar clauses for the remaining variables and parameters.
Here $\lambda_x$ is an indicator variable representing the assignment of value $x$ to a network variable, and $\theta_{x|\mathbf{u}}$ is a parameter variable representing which conditional probability parameter is applied. The weight of the positive literal $\theta_{x|\mathbf{u}}$ is $\Pr(x \mid \mathbf{u})$; all other literals have a weight of $1$.
After assigning weights to each variable in the CNF obtained from the Bayesian network, the weight of a model (an assignment to these variables) is calculated by multiplying the individual literal weights. For example, the weight of the model that sets $\lambda_{a_1}, \lambda_{b_1}, \lambda_{c_1}$ and the corresponding parameter variables to true is $\Pr(a_1)\Pr(b_1 \mid a_1)\Pr(c_1 \mid a_1) = 0.1 \times 0.1 \times 0.1 = 0.001$, which is equal to the joint probability $\Pr(a_1, b_1, c_1)$. [1]
Without any evidence, the total weight of all models sums to one.
To compute the probability of some evidence, one computes the sum of the weights of all models that are compatible with the evidence and satisfy the CNF obtained for the probabilistic model.
The advantages of this method are discussed in the next sections.
Formulation
We denote the weight assigned to a literal $\ell$ by $W(\ell)$. As mentioned, for the positive parameter literals $W(\theta_{x|\mathbf{u}}) = \Pr(x \mid \mathbf{u})$, and for all other literals the weight is one, that is $W(\lambda_x) = W(\neg\lambda_x) = W(\neg\theta_{x|\mathbf{u}}) = 1$. The literal weights define a weight for each model $\omega$ as follows:
$W(\omega) = \prod_{\ell \in \omega} W(\ell),$
where the product ranges over the literals that $\omega$ sets to true. For a logical theory $\Delta$, $\mathrm{WMC}(\Delta)$ is the sum of the weights of all models of $\Delta$:
$\mathrm{WMC}(\Delta) = \sum_{\omega \models \Delta} W(\omega).$
Evidence $e$ is incorporated either by zeroing out the weights of the indicator literals that are inconsistent with $e$, or by computing $\mathrm{WMC}(\Delta \land e)$, where $e$ is encoded as a conjunction of indicator variables. [1]
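To make the formulation concrete, here is a minimal brute-force sketch (a hypothetical helper for small examples, not the algorithm used in [1]) that sums the weights of all models of a CNF by enumeration; evidence can be incorporated by conjoining unit clauses, as described above.

```python
from itertools import product

def wmc_brute_force(clauses, weights, variables):
    """Sum the weights of all models of a CNF (for small examples only).
    clauses: list of clauses, each a list of literals (v or -v for variable v).
    weights: dict mapping each literal (v and -v) to its weight W(l).
    variables: list of the CNF variable ids."""
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        sat = lambda lit: assignment[abs(lit)] == (lit > 0)
        if all(any(sat(lit) for lit in clause) for clause in clauses):
            weight = 1.0                        # weight of a model is the
            for v in variables:                 # product of its literal weights
                weight *= weights[v] if assignment[v] else weights[-v]
            total += weight
    return total

# Evidence e, given as indicator literals, is conjoined as unit clauses:
# wmc_brute_force(clauses + [[lit] for lit in evidence], weights, variables)
```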
Encoding
There are different ways to encode the model as a CNF; some take advantage of local structure and others reduce the number of variables of the simple encoding.
ENC1
This is a simple way to encode a Bayesian network (it matches the example encoding given earlier). First we describe how to produce the clauses.
1. Indicator clauses. For each network variable $X$ with domain $\{x_1, \dots, x_k\}$ we have clauses of the form
$\lambda_{x_1} \lor \dots \lor \lambda_{x_k}$, and $\neg\lambda_{x_i} \lor \neg\lambda_{x_j}$ for all $i < j$.
These clauses ensure that exactly one indicator variable for $X$ is set to true.
2. Parameter clauses. For each parameter $\theta_{x|u_1,\dots,u_m}$, where $u_i$ is an assignment to the $i$-th parent of $X$, we generate the following clauses:
$\lambda_{u_1} \land \dots \land \lambda_{u_m} \land \lambda_x \Leftrightarrow \theta_{x|u_1,\dots,u_m},$
written in clausal form as $\neg\lambda_{u_1} \lor \dots \lor \neg\lambda_{u_m} \lor \neg\lambda_x \lor \theta_{x|u_1,\dots,u_m}$ together with $\neg\theta_{x|u_1,\dots,u_m} \lor \lambda_{u_i}$ for each $i$ and $\neg\theta_{x|u_1,\dots,u_m} \lor \lambda_x$.
These clauses ensure that the parameter variable is set to true if and only if the corresponding indicator variables are true.
This encoding does not take advantage of the parameter values (local structure), but it can be modified slightly to exploit determinism, as follows.
Suppose the weight of one of the parameter variables is $0$, say $\theta_{x|\mathbf{u}}$. By excluding models that set this parameter variable to true, the corresponding parameter clauses simplify to $\neg\lambda_{u_1} \lor \dots \lor \neg\lambda_{u_m} \lor \neg\lambda_x$, and the now redundant variable $\theta_{x|\mathbf{u}}$ can be eliminated from the encoding.
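As a concrete illustration, the sketch below generates ENC1-style clauses for a network, including the determinism simplification for zero parameters. The data layout (dictionaries keyed by value names, signed-integer literals) is my own assumption, not the representation used in [1].

```python
from itertools import combinations, count

def enc1(domains, cpts):
    """Sketch of ENC1 clause generation.
    domains: dict variable -> list of values, e.g. {"A": ["a1", "a2"], ...}.
    cpts: dict child -> (parent_tuple, table), where table maps
          (parent_values..., child_value) -> probability.
    Returns CNF clauses (lists of signed ints) and literal weights."""
    ids = count(1)
    lam = {(X, x): next(ids) for X, vals in domains.items() for x in vals}
    clauses, weights = [], {}
    for v in lam.values():                          # indicator literals weigh 1
        weights[v], weights[-v] = 1.0, 1.0

    # 1. Indicator clauses: exactly one value per network variable.
    for X, vals in domains.items():
        clauses.append([lam[(X, x)] for x in vals])
        for x, y in combinations(vals, 2):
            clauses.append([-lam[(X, x)], -lam[(X, y)]])

    # 2. Parameter clauses: indicators of a CPT row <=> its parameter variable.
    for X, (parents, table) in cpts.items():
        for row, prob in table.items():
            inds = [lam[(P, u)] for P, u in zip(parents, row[:-1])]
            inds.append(lam[(X, row[-1])])
            if prob == 0.0:
                clauses.append([-i for i in inds])  # determinism: forbid the row
                continue
            theta = next(ids)
            weights[theta], weights[-theta] = prob, 1.0
            clauses.append([-i for i in inds] + [theta])   # indicators -> theta
            clauses.extend([[-theta, i] for i in inds])    # theta -> indicators
    return clauses, weights
```

Combined with the brute-force counter shown earlier, this reproduces the joint probabilities of the example network.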
ENC2
This encoding simplifies the previous one. For each network variable $X$ with values $x_1, \dots, x_k$, it requires an ordering over the values of the variable, specified by $x_i < x_j$ if $x_i$ comes before $x_j$. Assuming $\theta_{x_1|\mathbf{u}}, \dots, \theta_{x_{k-1}|\mathbf{u}}$ are the new parameter variables for parent instantiation $\mathbf{u}$, the weight associated with such a variable when positive is the probability of $x_i$ given that $\mathbf{u}$ holds and that none of the earlier values $x_1, \dots, x_{i-1}$ holds, that is,
$W(\theta_{x_i|\mathbf{u}}) = \Pr(x_i \mid \mathbf{u}, \neg x_1, \dots, \neg x_{i-1}),$
and $W(\neg\theta_{x_i|\mathbf{u}}) = 1 - W(\theta_{x_i|\mathbf{u}})$ when negative. Given this we have
$\Pr(x_i \mid \mathbf{u}) = W(\theta_{x_i|\mathbf{u}}) \prod_{j<i} W(\neg\theta_{x_j|\mathbf{u}}).$
The encoding takes advantage of this identity and of the ability to render parameter variables as don't-care variables when $\mathbf{u}$ does not hold or an earlier value has already been selected, so that their weights do not affect the model count (the positive and negative weights of such a variable sum to one). The indicator clauses are the same as in ENC1; for a given parameter $\theta_{x_i|\mathbf{u}}$ the parameter clauses are replaced by the single clause
$\lambda_{u_1} \land \dots \land \lambda_{u_m} \land \neg\theta_{x_1|\mathbf{u}} \land \dots \land \neg\theta_{x_{i-1}|\mathbf{u}} \land \theta_{x_i|\mathbf{u}} \Rightarrow \lambda_{x_i}.$
Notice that, because of determinism, the parameter variable for the last value $x_k$ can be eliminated (its positive weight would be $1$), so the corresponding clause becomes
$\lambda_{u_1} \land \dots \land \lambda_{u_m} \land \neg\theta_{x_1|\mathbf{u}} \land \dots \land \neg\theta_{x_{k-1}|\mathbf{u}} \Rightarrow \lambda_{x_k}.$
Furthermore, since there is no if-and-only-if relationship, more than one of the parameter variables can be true in a consistent model; this is what makes the mentioned variables don't-care variables.
One of the differences is that ENC2 produces smaller CNFs with fewer parameter variables and clauses. Another difference is that for each instantiation $\mathbf{x}$ of the network variables, the set of consistent models can have more than one element, where the sum of their weights is equal to the joint probability of $\mathbf{x}$.
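Under the reconstruction above (positive weight equal to the conditional probability given that earlier values are excluded, negative weight its complement; see [1] for the exact definition), the chain-style weights can be computed and checked with a few lines:

```python
def enc2_chain_weights(probs):
    """Chain-style weights for one CPT column Pr(x_1|u), ..., Pr(x_k|u).
    The i-th positive weight is Pr(x_i | u, not x_1, ..., not x_{i-1});
    the last value needs no parameter variable (determinism)."""
    weights, remaining = [], 1.0
    for p in probs[:-1]:
        weights.append(p / remaining)
        remaining -= p
    return weights

probs = [0.1, 0.2, 0.7]              # e.g. the column Pr(C | a1) from the example
ws = enc2_chain_weights(probs)       # [0.1, 0.2222...]
for i, p in enumerate(probs[:-1]):   # check: Pr(x_i|u) = w_i * prod_{j<i}(1 - w_j)
    acc = ws[i]
    for w in ws[:i]:
        acc *= (1.0 - w)
    assert abs(acc - p) < 1e-9
```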
Similarities and differences
ENC1 is essentially the encoding used in the earlier example; the differences introduced by ENC2 are mentioned at the end of the previous subsection. The main difference in the encoding is that the parameter variables are ordered and only the first parameter variable that is set to true matters.
ENC3 and ENC4
ENC3 takes advantage of parameters with equal values to reduce the number of CNF variables, and ENC4 additionally tries to remove irrelevant variables to make finding decompositions easier. These encodings are described in the local structure section.
Model counting
There are different ways to compute the weighted sum. Here model counting using search and knowledge compilation is discussed.
Search
The search is based on repeatedly decomposing the CNF and splitting it into smaller subproblems.
- Decomposing. When the CNF can be decomposed into two sets of clauses that do not share variables, one can solve each subproblem separately, because any two consistent models of the subproblems combine into a consistent model of the original problem, and multiplying the weighted counts of the subproblems yields the weighted sum of the original problem.
- Splitting on a CNF variable $X$. When there is no way to decompose the CNF, one can count the models in which an arbitrary variable $X$ is true or false separately. After assuming a value for $X$ and simplifying the clauses, two new problems with fewer variables are solved. The weighted sum for the original problem is the sum of the weighted sums of the two subproblems, $\mathrm{WMC}(\Delta) = \mathrm{WMC}(\Delta \land X) + \mathrm{WMC}(\Delta \land \neg X)$ (the weight of the assumed literal is accounted for in each subproblem). [1]
An example of the search algorithm is provided in Figure 2.
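The following is a minimal sketch of such a search-based weighted model counter (clauses represented as sets of signed integer literals; this simple recursion, without the caching or clause learning of practical counters, is my own illustration rather than the algorithm of [1]). It decomposes the CNF into components that share no variables and otherwise splits on a variable.

```python
def condition(clauses, lit):
    """Simplify the clauses under the assumption that literal `lit` is true."""
    out = []
    for c in clauses:
        if lit in c:
            continue                 # clause satisfied, drop it
        out.append(c - {-lit})       # falsified literal removed
    return out

def components(clauses):
    """Group clauses into sets of clauses that share no variables."""
    comps = []                       # list of (variable set, clause list) pairs
    for c in clauses:
        cvars, merged, rest = {abs(l) for l in c}, [c], []
        for vs, cls in comps:
            if vs & cvars:
                cvars |= vs
                merged += cls
            else:
                rest.append((vs, cls))
        comps = rest + [(cvars, merged)]
    return comps

def wmc(clauses, weights, variables):
    """Weighted model count over `variables` of a CNF given as a list of
    frozensets of literals, with `weights` mapping each literal to W(l)."""
    if any(len(c) == 0 for c in clauses):
        return 0.0                   # an empty clause cannot be satisfied
    mentioned = {abs(l) for c in clauses for l in c}
    factor = 1.0                     # variables not mentioned are "don't care":
    for v in variables - mentioned:  # each contributes W(v) + W(-v)
        factor *= weights[v] + weights[-v]
    if not clauses:
        return factor
    comps = components(clauses)
    if len(comps) > 1:               # decomposition: independent parts multiply
        result = 1.0
        for vs, cls in comps:
            result *= wmc(cls, weights, vs)
        return factor * result
    v = next(iter(mentioned))        # splitting on an arbitrary variable
    rest = mentioned - {v}
    return factor * (weights[v] * wmc(condition(clauses, v), weights, rest)
                     + weights[-v] * wmc(condition(clauses, -v), weights, rest))
```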
Knowledge Compilation
The method based on knowledge compilation compiles the CNF into a smooth d-DNNF. The traces generated by search algorithms like the one in the previous section can be interpreted as members of the d-DNNF language. The properties of d-DNNFs are briefly explained here. [1]
Smooth d-DNNF
A d-DNNF (deterministic Decomposable Negation Normal Form) is a rooted DAG whose internal nodes are conjunctions and disjunctions and whose leaves are literals. The following are the properties of a smooth d-DNNF.
- Decomposability. The conjuncts of a conjunction node cannot share variables.
- Determinism. The disjuncts of a disjunction node must be logically disjoint (no two can be true at the same time).
- Smoothness. The disjuncts of a disjunction node mention the same set of variables. The d-DNNFs used here are also smooth.
Model counting and arithmetic circuit
The approach further converts the d-DNNF into an arithmetic circuit (AC), leading to a WMC circuit as in Figure 3. The circuit explicitly expresses the weighted model count as summations and multiplications of subproblems, similar to the search algorithm. [1]
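A smooth d-DNNF can be evaluated as such an arithmetic circuit by a simple bottom-up pass: leaves map to literal weights, AND nodes to products, OR nodes to sums. The tuple representation of nodes below is an assumption made for this sketch.

```python
def evaluate_ac(node, weights):
    """Evaluate the WMC arithmetic circuit obtained from a smooth d-DNNF.
    Nodes are tuples: ("lit", l), ("and", [children]) or ("or", [children]);
    `weights` maps each literal l to W(l)."""
    kind, arg = node
    if kind == "lit":
        return weights[arg]
    values = [evaluate_ac(child, weights) for child in arg]
    if kind == "and":
        result = 1.0
        for v in values:            # decomposable conjunction: multiply
            result *= v
        return result
    if kind == "or":
        return sum(values)          # deterministic, smooth disjunction: add
    raise ValueError("unknown node kind: %r" % kind)
```

Evidence can be incorporated by setting the weights of the indicator literals inconsistent with it to zero before evaluating, which is what makes the compiled circuit reusable across queries.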
Local structures
In the previous sections, encoding determinism was discussed. In this section we discuss other ways to take advantage of local structure, such as equal parameters, decomposability, and evidence.
Equal parameters
The first method for taking advantage of structure is to merge parameter variables that have the same value within the same conditional probability table (the same factor), that is, to use a single Boolean variable for them in order to reduce the number of variables (e.g. $\theta_{x|\mathbf{u}}$ and $\theta_{x'|\mathbf{u}'}$ if both share the same value). However, this does not work directly with ENC1: for example, two if-and-only-if statements sharing the same parameter variable but involving two different assignments to the same network variable would imply that the shared parameter variable can never be true. The ENC3 encoding uses the following clauses to solve this problem.
Parameter clauses: for each CPT row using the shared parameter variable $\theta$, only the implication from the indicators to the parameter is kept,
$\lambda_{u_1} \land \dots \land \lambda_{u_m} \land \lambda_x \Rightarrow \theta.$
This ensures that at least one of the parameter variables is always true, but it does not imply a unique assignment to the parameter variables. This causes more models to be included than necessary, and the weighted count to be larger than its actual value. This problem is solved by a minimization operation, that is, excluding any model that sets parameter variables to true unnecessarily, since only one is enough to satisfy the implications. [1]
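A sketch of how rows with equal parameter values could share one variable under this scheme is given below; note that it deliberately keeps only the one-directional implications and therefore still needs the minimization step described above to yield correct counts. The data layout and helper names are assumptions for illustration, not the encoding of [1].

```python
def enc3_parameter_clauses(rows, lam, fresh):
    """Sketch of ENC3-style parameter clauses for one CPT.
    rows: list of (assignment, probability) pairs, where assignment is a tuple
          of (variable, value) pairs covering the child and its parents.
    lam: dict mapping (variable, value) -> indicator CNF variable.
    fresh: callable returning a new CNF variable id."""
    shared, clauses, weights = {}, [], {}
    for assignment, prob in rows:
        indicators = [lam[pair] for pair in assignment]
        if prob == 0.0:                          # determinism: forbid the row
            clauses.append([-i for i in indicators])
            continue
        if prob not in shared:                   # equal parameters share a variable
            theta = shared[prob] = fresh()
            weights[theta], weights[-theta] = prob, 1.0
        clauses.append([-i for i in indicators] + [shared[prob]])  # row -> theta
    return clauses, weights
```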
Decomposability
This encoding tries to remove irrelevant variables and make clauses more decomposable. Irrelevant variables are variables whose values do not affect the probability (that is, the factor value in the CPT is the same for every value of that variable). Similar to conditional independence, the independence exposed by removing these variables allows decomposition.
To explain the modifications needed for ENC4, we first define some new concepts and an algorithm that simplifies the encoding. Since we are working with a set of multi-valued network variables, we need a logic over multi-valued variables. The generalization is straightforward: an atom assigns a value to a variable, a world consists of one atom for each variable, and a world satisfies an atom if and only if it assigns the common variable the same value. A term over a set of variables $\mathbf{Y}$ is a conjunction of atoms, one for each variable of $\mathbf{Y}$. Let $f$ be a disjunction of terms over $\mathbf{Y}$. An implicant of $f$ is a term (over a subset of $\mathbf{Y}$) that implies $f$. A prime implicant is an implicant that is minimal, that is, removing any atom results in a term that is no longer an implicant.
The algorithm to simplify the encoding works as follows. First we divide the CNF clauses into encoding groups that share the same parameter value (or are false). Then we encode each group separately: for each group we first find the prime implicants (using a prime implicant algorithm, see [1]) and then for each prime implicant we add a clause of the form $I \Rightarrow \theta$, where $I$ is the conjunction of the indicators mentioned by the implicant and $\theta$ is the consequent of the encoding group. The removal of indicator literals that do not appear in the prime implicants allows more decomposition.
ENC4 improves both the time and the size (number of edges) of the resulting compilation. [1]
Evidence
Evidence is another form of local structure that WMC can take advantage of; the reader is encouraged to refer to the original article [1] for the methods used to exploit evidence efficiently.
Other problem variations
Continuous domain
For continuous domains it is possible to encode factors that have only one continuous variable, either as a parent or as a child. The CNF is created using atoms like "x < value", and inference tasks similar to exact inference are performed by approximating the integration over the real domain as a set of polynomial integrations. [2]
Annotated Bibliography
- ↑ M. Chavira and A. Darwiche, On probabilistic inference by weighted model counting. Artificial Intelligence 172 (2008) 772–799.
- ↑ V. Belle, A. Passerini and G. Van den Broeck, Probabilistic Inference in Hybrid Domains by Weighted Model Integration. IJCAI, 2015.
To Add
Put links and content here to be added. This does not need to be organized, and will not be graded as part of the page. If you find something that might be useful for a page, feel free to put it here.
Local structures (cont.)
Evidence (important)
Exploiting evidence can make inference in a Bayesian network more tractable. Two of the most common techniques are removing leaf nodes and removing edges outgoing from observed nodes. We call this preprocessing classical pruning; it can decrease the connectivity of the network. The work in [2] exploits evidence in a way that provides much greater benefits.
The second method conjoins unit clauses encoding the evidence prior to compiling, thereby eliminating the corresponding variables from the theory. This makes compilation more tractable and the result much smaller. The disadvantage is that only queries whose evidence is a superset of the current evidence can be answered.
Among the practical situations where this algorithm can be applied: first, the evidence may fix only a subset of the variables (as in MAP algorithms). Second, one may be interested in the values of the network parameters that maximize the probability of the given evidence (this happens, for example, in genetic linkage analysis). In this scenario one may want to use iterative algorithms like EM or gradient descent, which issue many network queries with the same evidence but different network parameter values. A similar application appears in sensitivity analysis, where the goal is to search for network parameters that satisfy a given constraint. The method is simple: for each variable value $x$ in the evidence, assert the unit clause $\lambda_x$ into the CNF. However, the effect of this seemingly innocent action belies its true power. Several detailed examples are provided in [2], which show how it can reduce the work of the compilation algorithm.
The example is from genetic linkage analysis, and is a common occurrence in that domain. It involves four variables: child C with parents A, B, and S.
The variable C is the genotype in a child which is inherited from one of the parent’s genes, A/B, based on the value of selector S. We assume that all four variables are binary and that the portion of the CPT with S = s1 is as follows.
S A B C Pr(C|S,A,B)
s1 a1 b1 c1 1.0
s1 a1 b2 c1 1.0
s1 a2 b1 c2 1.0
s1 a2 b2 c2 1.0
As described in Section 4.1, the algorithm on which the compilation is based works by repeatedly conditioning to decompose the CNF. Let us consider the case where we are given evidence {c1}, and during compilation, we condition on S = s1. Assuming a proper encoding of the network into CNF, combining the evidence with the value for S allows us to infer a1, which unit resolution can use to achieve further gains. Conditioning on S = s2 yields a similar conclusion for b1. In this case, the full power of conditioning on S is realized only when combined with evidence on C. This example reveals how evidence can combine with the operations of the compilation algorithm to simplify the task.
Recall that classical pruning severs edges leaving evidence nodes and deletes certain leaf nodes. Injecting unit clauses is analogous to this severing of edges but is strictly more powerful for several reasons. First, this technique not only exploits the fact that a variable has been instantiated, but also exploits the specific value to which it has been instantiated. Second, rather than simply affecting the CPTs of children of evidence nodes, injecting unit clauses can affect many more parts of the network since unit clauses will often allow the WMC algorithm to infer additional unit clauses, and the effects can propagate to many ancestors and many descendants of evidence nodes. Third, rather than only realizing a limited number of gains during initialization, injecting unit clauses can continue to realize gains throughout the WMC algorithm.
Several results are given in [2] demonstrating large gains when compiling with evidence. Algorithms that exploit only topological structure could not perform inference on many of the data sets, even after performing classical pruning, because of high treewidth. Furthermore, in the majority of cases, applying classical pruning and compiling the CNF without the introduction of the unit clauses based on the evidence also failed. However, with the introduction of the unit clauses, compilation became possible in many cases. Moreover, the paper showed that the performance of this general technique subsumed the performance of the specialized quickscore algorithm [13], which capitalizes on evidence in certain types of diagnostic networks. Finally, the paper showed that when combined with some aggressive preprocessing and applied to several difficult problems from genetic linkage analysis, the technique outperformed SUPERLINK 1.4, a state-of-the-art system for the task, on a number of challenging problems.
Table 16 lists a few of the results from the paper in the field of genetic linkage analysis and compares the performance to that of SUPERLINK. There are several observations. First, general-purpose algorithms that exploit only topological structure, such as jointree, could solve only one of the listed networks, because of high treewidth, even after applying classical pruning techniques. Second, only one of these networks could be compiled without the introduction of unit clauses to capture evidence. However, once the unit clauses were injected, all of the networks yielded to compilation in minutes. Finally, WMC compilation times are in most cases more efficient than SUPERLINK online times, and WMC online times are much more efficient still. Given that compilation must occur once, and online inference must be repeated many times, the effect of this improvement multiplies.
Table 16
Net | Max clust. | Comp. time (s) | Comp. size | Online time (s) | SUPERLINK time (s) |
---|---|---|---|---|---|
ee33 | 20.2 | 25.33 | 2,070,707 | 0.59 | 1046.72 |
ee37 | 29.6 | 61.29 | 1,855,410 | 0.39 | 1381.61 |
ee30 | 35.9 | 376.78 | 27,997,686 | 8.37 | 815.33 |
ee23 | 38.0 | 89.47 | 3,986,816 | 1.08 | 502.02 |
ee18 | 41.5 | 283.96 | 23,632,200 | 6.63 | 248.11 |
Table 17. ACE vs. jointree when there is no local structure. Online time is averaged over sixteen evidence sets, where for each evidence set we compute the probability of evidence and a posterior marginal for every network variable.
Network | ACE offline time (s) | Jointree avg. online time (s) | ACE avg. online time (s) | Improv. |
---|---|---|---|---|
alarm | 1 | 0.007 | 0.005 | 1.41 |
bm-5-3 | 721 | 3.328 | 3.965 | 0.84 |
diabetes | 1345 | 1.202 | 1.268 | 0.95 |
hailfinder | 3 | 0.018 | 0.007 | 2.66 |
mm-3-8-3 | 195 | 1.117 | 1.336 | 0.84 |
munin2 | 284 | 0.764 | 0.596 | 1.28 |
munin3 | 254 | 0.495 | 0.534 | 0.93 |
munin4 | 1248 | 1.770 | 1.872 | 0.95 |
pathfinder | 37 | 0.062 | 0.036 | 1.72 |
pigs | 41 | 0.115 | 0.123 | 0.93 |
students-3-2 | 241 | 0.961 | 1.806 | 0.53 |
tcc4f.obfuscated | 3 | 0.022 | 0.007 | 3.17 |
water | 340 | 0.659 | 0.591 | 1.12 |
Other problem variations and conclusion
Continuous domain (maybe useful)
For continuous domains it is possible to encode factors that have only one continuous variable (either as a parent or a child); the CNF is created using atoms like "x < value" and exact inference is approximated by modeling the integration over the real domain as a set of polynomial integrations.
Probabilistic inference in hybrid domains by weighted model integration. V Belle, A Passerini, G Van den Broeck. Proceedings IJCAI 2015
http://web.cs.ucla.edu/~guyvdb/papers/BelleIJCAI15.pdf
Conclusion
We conclude by noting that ENC2, ENC3, and ENC4 can effectively take advantage of evidence in the way described in this section, since they utilize indicator variables in the same way as ENC1. Furthermore, performing WMC by search can utilize evidence by examining the weights of variables and asserting a negative unit clause any time a weight is equal to 0. In the case of compilation, the disadvantage of incorporating evidence was that compilation would need to be performed again for some queries, which removes one of the chief advantages of compiling (although we have seen that in many practical cases, this is not necessary). However, in the case of search, the algorithm is rerun for each new evidence anyway, so there is really no disadvantage to incorporating evidence in this case. Encoding evidence in the context of search algorithms was indeed applied effectively in [1].
References
- [1] M. Chavira, A. Darwiche, Compiling Bayesian networks with local structure, in: Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), 2005, pp. 1306–1312.
- [2] M. Chavira, D. Allen, A. Darwiche, Exploiting evidence in probabilistic inference, in: Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), 2005, pp. 112–119.
- [3] M. Chavira, A. Darwiche, Encoding cnfs to empower component analysis, in: Proceedings of the 9th International Conference on Theory and Applications of Satisfiability Testing (SAT), in: Lecture Notes in Computer Science, vol. 4121, Springer, Berlin, Heidelberg, 2006, pp. 61–74.
- [4] F.V. Jensen, S. Lauritzen, K. Olesen, Bayesian updating in recursive graphical models by local computation, Computational Statistics Quarterly 4 (1990) 269–282.
- [5] S.L. Lauritzen, D.J. Spiegelhalter, Local computations with probabilities on graphical structures and their application to expert systems, Journal of the Royal Statistical Society, Series B 50 (2) (1988) 157–224.
- [6] N.L. Zhang, D. Poole, Exploiting causal independence in Bayesian network inference, Journal of Artificial Intelligence Research 5 (1996) 301–328.
- [7] R. Dechter, Bucket elimination: A unifying framework for probabilistic inference, in: Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence (UAI), 1996, pp. 211–219.
- [8] A. Darwiche, Recursive conditioning, Artificial Intelligence 126 (1–2) (2001) 5–41.
- [9] F. Jensen, S.K. Andersen, Approximations in Bayesian belief universes for knowledge based systems, in: Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI), Cambridge, MA, 1990, pp. 162–169.
- [10] C. Boutilier, N. Friedman, M. Goldszmidt, D. Koller, Context-specific independence in Bayesian networks, in: Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence (UAI), 1996, pp. 115–123.
- [11] D. Larkin, R. Dechter, Bayesian inference in the presence of determinism, in: C.M. Bishop, B.J. Frey (Eds.), Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Jan 3–6, 2003, Key West, FL.
- [12] F. Bacchus, S. Dalmao, T. Pitassi, Value elimination: Bayesian inference via backtracking search, in: Proceedings of the 19th Annual Conference on Uncertainty in Artificial Intelligence (UAI-03), Morgan Kaufmann Publishers, San Francisco, CA, 2003, pp. 20–28.
- [13] D. Heckerman, A tractable inference algorithm for diagnosing multiple diseases, in: Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence, 1989, pp. 174–181.
- [14] D. Poole, N. Zhang, Exploiting contextual independence in probabilistic inference, Journal of Artificial Intelligence 18 (2003) 263–313.
- [15] A. Darwiche, A logical approach to factoring belief networks, in: Proceedings of KR, 2002, pp. 409–420.
- [16] T. Sang, P. Beame, H. Kautz, Solving Bayesian networks by weighted model counting, in: Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), vol. 1, AAAI Press, 2005, pp. 475–482.
- [17] M. Chavira, A. Darwiche, Compiling Bayesian networks using variable elimination, in: Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), 2007, pp. 2443–2449.
- [18] T. Sang, F. Bacchus, P. Beame, H.A. Kautz, T. Pitassi, Combining component caching and clause learning for effective model counting, in: SAT, 2004.
- [19] T. Sang, P. Beame, H.A. Kautz, Heuristics for fast exact model counting, in: SAT, 2005, pp. 226–240.
- [20] A. Darwiche, On the tractability of counting theory models and its application to belief revision and truth maintenance, Journal of Applied Non-Classical Logics 11 (1–2) (2001) 11–34.
- [21] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann Publishers, Inc., San Mateo, CA, 1988.
- [22] M. Chavira, A. Darwiche, M. Jaeger, Compiling relational Bayesian networks for exact inference, International Journal of Approximate Reasoning 42 (1–2) (May 2006) 4–20.
- [23] D. Roth, On the hardness of approximate reasoning, Artificial Intelligence 82 (1–2) (1996) 273–302.
- [24] F. Bacchus, S. Dalmao, T. Pitassi, Algorithms and complexity results for #sat and Bayesian inference, in: FOCS, 2003, pp. 340–351.
- [25] M. Davis, G. Logemann, D. Loveland, A machine program for theorem proving, CACM 5 (1962) 394–397.
- [26] A. Darwiche, New advances in compiling CNF to decomposable negational normal form, in: Proceedings of European Conference on Artificial Intelligence, 2004, pp. 328–332.
- [27] R. Dechter, R. Mateescu, AND/OR search spaces for graphical models, Artificial Intelligence 171 (2–3) (2007) 73–106.
- [28] W. Wei, B. Selman, A new approach to model counting, in: SAT, 2005, pp. 324–339.
- [29] W. Wei, J. Erenrich, B. Selman, Towards efficient sampling: Exploiting random walk strategies, in: AAAI, 2004, pp. 670–676.
- [30] R. Bayardo, J. Pehoushek, Counting models using connected components, in: AAAI, 2000, pp. 157–162.
- [31] A. Darwiche, A compiler for deterministic, decomposable negation normal form, in: Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI), AAAI Press, Menlo Park, CA, 2002, pp. 627–634.
- [32] F. Bacchus, S. Dalmao, T. Pitassi, Dpll with caching: A new algorithm for #SAT and Bayesian inference, Electronic Colloquium on Computational Complexity (ECCC) 10 (003).
- [33] A. Darwiche, P. Marquis, A knowledge compilation map, Journal of Artificial Intelligence Research 17 (2002) 229–264.
- [34] J. Huang, A. Darwiche, Dpll with a trace: From sat to knowledge compilation, in: Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), 2005, pp. 156–162.
- [35] M. Jaeger, Relational Bayesian networks, in: D. Geiger, P.P. Shenoy (Eds.), Proceedings of the 13th Conference of Uncertainty in Artificial Intelligence (UAI-13), Morgan Kaufmann, Providence, USA, 1997, pp. 266–273.
- [36] A. Darwiche, A differential approach to inference in Bayesian networks, Journal of the ACM 50 (3) (2003) 280–305.
- [37] J.P. Hayes, Introduction to Digital Logic Design, Addison Wesley, 1993.
- [38] M.M. Mirsalehi, T.K. Gaylord, Logical minimization of multilevel coded functions, Applied Optics 25 (1986) 3078–3088.
- [39] R.D. Shachter, Evaluating influence diagrams, Operations Research 34 (6) (1986) 871–882.
- [40] S. Ross, Evidence absorption and propagation through evidence reversals, in: Proceedings of the 5th Annual Conference on Uncertainty in Artificial Intelligence (UAI-90), Elsevier Science Publishing Company, Inc., New York, 1990.
- [41] J. Park, A. Darwiche, Approximating MAP using stochastic local search, in: Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence (UAI), Morgan Kaufmann Publishers, Inc., San Francisco, CA, 2001, pp. 403–410.
- [42] J. Park, A. Darwiche, Solving map exactly using systematic search, in: Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence (UAI), 2003, pp. 459–468.
- [43] C. Yuan, T.-C. Lu, M. Druzdzel, Annealed MAP, in: Proceedings of the 20th Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2004, pp. 628–635.
- [44] M. Fishelson, D. Geiger, Exact genetic linkage computations for general pedigrees, Bioinformatics 18 (1) (2002) 189–198.
- [45] M. Fishelson, N. Dovgolevsky, D. Geiger, Maximum likelihood haplotyping for general pedigrees, Tech. Rep. CS-2004-13, Technion, Haifa, Israel, 2004.
- [46] A. Dempster, N. Laird, D. Rubin, Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society 39 (1) (1977) 1–38.
- [47] H. Chan, A. Darwiche, Sensitivity analysis in Bayesian networks: From single to multiple parameters, in: Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI), AUAI Press, Arlington, Virginia, 2004, pp. 67–75.
- [48] J. Park, A. Darwiche, A differential semantics for jointree algorithms, Artificial Intelligence 156 (2004) 197–216.
Links:
Cachet - https://www.cs.rochester.edu/u/kautz/papers/modelcount-sat04.pdf