Critiques
A solid draft! Here are my scores, with general and section-specific comments below that. Let me know if any clarification or discussion is needed.
Scale of 1 to 5, where 1 = strongly disagree and 5 = strongly agree:
- (5) The topic is relevant for the course.
- (3.5) The writing is clear and the English is good.
- (5) The page is written at an appropriate level for CPSC 522 students (where the students have diverse backgrounds).
- (4) The formalism (definitions, mathematics) was well chosen to make the page easier to understand.
- (4) The abstract is a concise and clear summary.
- (4) There were appropriate (original) examples that helped make the topic clear.
- (3) There was appropriate use of (pseudo-) code.
- (4) It had a good coverage of representations, semantics, inference and learning (as appropriate for the topic).
- (4.5) It is correct.
- (4) It was neither too short nor too long for the topic.
- (4) It was an appropriate unit for a page (it shouldn't be split into different topics or merged with another page).
- (2) It links to appropriate other pages in the wiki.
- (2.5) The references and links to external pages are well chosen.
- (4) I would recommend this page to someone who wanted to find out about the topic.
- (3.5) This page should be highlighted as an exemplary page for others to emulate.
Comments:
- A proofreading pass for grammar issues is recommended.
- It might be useful to highlight significant terms using bold or italics.
- The examples and figures were useful, but not always original. I note that Figure 5 is credited to Murphy, which is good. If you made the other figures from scratch, then well done on professional-looking figure generation; but it might be useful to make the figures more consistent with each other (don't worry, it's NOT a high priority).
Abstract:
- The abstract is a bit too concise. I'd extend it to 2-3 sentences and include things like the names of the two main representations.
- The page needs a "More general than" section that links to the Bayesian network, Markov network, and HMM pages.
Representation of Graphical Models:
- It might be useful to have some links to the Bayesian network and Markov network pages here as well, though that's not strictly necessary since you'd have them in the "More general than" section.
Directed Graphical Models:
- There are pages for both Bayesian networks and HMMs. It seems appropriate that you have a short summary of the significant aspects of each, so I don't think there's much content duplication here; but there should definitely be links to those pages for the reader's convenience.
Undirected Graphical Models:
- There's a bit more detail here than for the DGM section, and so there might be some concerns about content overlap.
- I don't see a definition of the g_i in the equation for P(X); since they aren't conditional probabilities, it would be worth mentioning what they are. Not much detail is required, since that's covered in the Markov networks page; simply saying that the g_i are factors over cliques and linking to the relevant part of the Markov networks page should be sufficient (see the sketch after this list).
- Markov blankets apply to Bayesian networks as well as Markov networks, so that part might be better placed in your section on conditional independence. Also, I believe Markov blankets are already minimal.
- The Markov blankets as listed in Figure 4 are a bit confusing: the Markov blanket of A is C', the Markov blanket of B and C is {C',C"}, and the Markov blanket of C" is {B, C', C}. Murphy discusses Markov blankets in the Chapter 19 PDF in your bibliography, which may help clarify things.
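To make the two suggestions above concrete (the notation here is mine, not necessarily the page's): the undirected factorization is something like
$$P(X) = \frac{1}{Z} \prod_i g_i(C_i), \qquad Z = \sum_{X} \prod_i g_i(C_i),$$
where each $g_i$ is a non-negative factor (potential) over a clique $C_i$ of the graph and $Z$ is the normalizing constant. Likewise, the Markov blanket of a node $v$ is the smallest set $\mathrm{MB}(v)$ such that
$$v \perp V \setminus (\{v\} \cup \mathrm{MB}(v)) \mid \mathrm{MB}(v);$$
for an undirected graph this is just the neighbours of $v$, and for a directed graph it is $v$'s parents, children, and co-parents (the children's other parents).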
Inference in graphical models:
- Parts of this section seem to refer specifically to Bayesian networks, so some of this content might be better off on the Bayesian networks page. For example, the material related to Figure 5 might be better suited there.
- A definition of the Markov blanket might be useful in the conditional independence section, as mentioned previously.
- The formulas related to P(R|W) use a mix of T's and 1's as truth values. I'd recommend sticking with T's for consistency with the figure.
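As an illustration of the T/F point (I'm assuming Figure 5 is the usual Cloudy/Sprinkler/Rain/WetGrass network from Murphy; adjust if it isn't), the query can be written entirely with T/F values:
$$P(R{=}T \mid W{=}T) = \frac{\sum_{c,s} P(C{=}c)\, P(S{=}s \mid C{=}c)\, P(R{=}T \mid C{=}c)\, P(W{=}T \mid S{=}s, R{=}T)}{\sum_{c,s,r} P(C{=}c)\, P(S{=}s \mid C{=}c)\, P(R{=}r \mid C{=}c)\, P(W{=}T \mid S{=}s, R{=}r)},$$
where c, s, r range over {T, F}.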
Factor graph and propagation algorithm:
- Since it can be applied to both DGMs and UGMs, this is a useful section to have on this page.
- "A factor graph is a bipartite graph representing the factorization of a function." Any function? What kind of function?
- When you define x_s_j, you have i in is_j; do you mean i in S_j?
- What is the junction tree algorithm? What about message passing algorithms? External links would be useful if you don't want to write them up here.
- I'm finding the pseudocode difficult to read, though the comments help a bit. It would help to be more clear about what c, k, C, Mc, Dc, and xc represent.
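On the factorization question: a factor graph can represent any function that factors as $f(x_1, \dots, x_n) = \prod_j f_j(x_{S_j})$, where $S_j \subseteq \{1, \dots, n\}$ indexes the variables appearing in the $j$-th factor and $x_{S_j} = \{x_i : i \in S_j\}$; in this context $f$ is usually an (unnormalized) joint distribution. On the pseudocode: below is a minimal sum-product (message passing) sketch on a tree-structured factor graph, just to show the level of naming and commenting I'd find readable. All the names here (factors, card, msg_var_to_factor, ...) are my own illustration and are not meant to match your c, k, C, Mc, Dc, xc notation.

```python
# Minimal sum-product message passing on a tree-structured factor graph.
# Illustrative sketch only; the toy model is p(A,B,C) proportional to f1(A) f2(A,B) f3(B,C).
import numpy as np
from itertools import product

# Each factor: (tuple of variable names in its scope, table over those variables).
factors = {
    "f1": (("A",), np.array([0.6, 0.4])),
    "f2": (("A", "B"), np.array([[0.9, 0.1],
                                 [0.2, 0.8]])),
    "f3": (("B", "C"), np.array([[0.7, 0.3],
                                 [0.4, 0.6]])),
}
card = {"A": 2, "B": 2, "C": 2}  # cardinality of each (binary) variable

def msg_var_to_factor(v, f):
    """Message from variable v to factor f: product of messages into v from its other factors."""
    m = np.ones(card[v])
    for g, (scope, _) in factors.items():
        if g != f and v in scope:
            m *= msg_factor_to_var(g, v)
    return m

def msg_factor_to_var(f, v):
    """Message from factor f to variable v: sum out every other variable in f's scope."""
    scope, table = factors[f]
    m = np.zeros(card[v])
    for assignment in product(*(range(card[u]) for u in scope)):
        weight = table[assignment]
        for u, val in zip(scope, assignment):
            if u != v:
                weight = weight * msg_var_to_factor(u, f)[val]
        m[assignment[scope.index(v)]] += weight
    return m

def marginal(v):
    """Normalized marginal of v: product of all incoming factor-to-variable messages."""
    m = np.ones(card[v])
    for f, (scope, _) in factors.items():
        if v in scope:
            m *= msg_factor_to_var(f, v)
    return m / m.sum()

print(marginal("B"))  # marginal of B in the toy chain above
```

The junction tree algorithm extends this idea to graphs with cycles by first grouping variables into clusters arranged in a tree and then passing messages between clusters; links to Murphy's exact-inference chapter or to the Wikipedia pages on belief propagation and junction trees would cover both questions.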
Learning in Graphical Models:
- Part of your first paragraph contains information that might be better placed in your introduction to the nature of graphical models.
- The definitions of D and L need more detailed descriptions of their components and notation.
- External links would be useful here for things like MLEs and the EM algorithm.
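To be concrete about what I mean (again, my notation; adapt it to whatever the page uses): if $D = \{x^{(1)}, \dots, x^{(N)}\}$ is a set of $N$ complete assignments to the variables and the model is a directed graphical model with parameters $\theta$, then
$$L(\theta ; D) = \prod_{n=1}^{N} P\big(x^{(n)} \mid \theta\big) = \prod_{n=1}^{N} \prod_{i} P\big(x_i^{(n)} \mid \mathrm{pa}(x_i)^{(n)}, \theta_i\big),$$
so the log-likelihood decomposes into a sum over families and the MLE of each conditional probability table is obtained by counting; when some variables are unobserved this decomposition no longer applies directly, which is where the EM algorithm comes in. Spelling out something along these lines would make the D and L definitions self-contained.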
Structure Learning:
- This section seems less detailed and more rushed than the rest of the document; terms are given mathematical symbols that are not really used; and a number of terms (d-separation, Likelihood Score, etc.) would benefit greatly from external links.
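For the Likelihood Score in particular, even a one-line definition would help, e.g.
$$\mathrm{score}_L(G ; D) = \max_{\theta} \log P(D \mid \theta, G) = \log P\big(D \mid \hat{\theta}_G, G\big),$$
i.e. the log-likelihood of the data under structure G with its maximum-likelihood parameters; it's also worth noting that this score never decreases when edges are added, which is what motivates penalized scores such as BIC.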
References look fine.
Hi JordonJohnson,
Thanks for your valuable feedback. The lack of highlighting and the grammar issues are important points, and I will fix them right away. After reading your wiki page on Markov networks, I found the overlapping content; I will make my Undirected Graphical Models section more general and try to add material that is more specific to graphical models as a whole. I will also add content to the Inference in graphical models section and explanations for the pseudocode, as you suggest. The structure learning part is a little difficult for me and I am still learning it, which is why it seems less detailed; I will add more to that part later. Thanks again for your feedback; it helps me a lot. I will let you know when I am done.
YuYan