Course talk:CPSC522/Variational Auto-Encoders

From UBC Wiki

Contents

Thread title | Replies | Last modified
Critique 2 | 0 | 14:12, 16 March 2020
Critique | 0 | 06:16, 16 March 2020
Feedback | 0 | 17:37, 13 March 2020

Critique 2

This is a difficult topic, and I think you made plenty of effort to make it digestible. The figure really helped make the topic clearer. Nonetheless, I still had to read some parts multiple times to fully understand them. I would recommend repeating key ideas ("recall ...") to help keep the reader on course.

(5) The topic is relevant for the course.

(5) The writing is clear and the English is good.

(4) The page is written at an appropriate level for CPSC 522 students (where the students have diverse backgrounds).

(4) The formalism (definitions, mathematics) was well chosen to make the page easier to understand.

(5) The abstract is a concise and clear summary.

(5) There were appropriate (original) examples that helped make the topic clear.

(5) There was appropriate use of (pseudo-) code.

(5) It had a good coverage of representations, semantics, inference and learning (as appropriate for the topic).

(5) It is correct.

(5) It was neither too short nor too long for the topic.

(5) It was an appropriate unit for a page (it shouldn't be split into different topics or merged with another page).

(5) It links to appropriate other pages in the wiki.

(5) The references and links to external pages are well chosen.

(5) I would recommend this page to someone who wanted to find out about the topic.

(5) This page should be highlighted as an exemplary page for others to emulate.

If I was grading it out of 20, I would give it: 19/20

ObadaAlhumsi (talk) 14:12, 16 March 2020

Critique

For someone without the required background knowledge, the technical aspects of the page can be elusive and difficult to follow. In this regard, providing elementary overviews before the elaboration would definitely aid comprehension. Moreover, several types of losses are mentioned without any explanation of what they measure (e.g. reconstruction loss, classification loss, cross-entropy loss); a brief description of their purpose would be very useful. Finally, it would help to state the contributions of the second paper more explicitly. Overall, it was a very good page, and the use of examples was helpful.
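
To illustrate what I mean (my own toy sketch, not code from the page), the two kinds of loss could each be described in a couple of lines:

```python
import math

def reconstruction_loss(x, x_hat):
    """Mean squared error between an input and its reconstruction:
    measures how well the decoder reproduces the original input."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def cross_entropy_loss(probs, label):
    """Negative log-probability assigned to the true class:
    the usual classification loss over predicted class probabilities."""
    return -math.log(probs[label])

# A near-perfect reconstruction gives a small loss ...
print(reconstruction_loss([0.0, 1.0, 0.5], [0.1, 0.9, 0.5]))
# ... and a confident correct prediction gives a small cross-entropy.
print(cross_entropy_loss([0.1, 0.7, 0.2], 1))
```

Even a short aside like this on the page would tell the reader what each loss is rewarding.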

P.S. This sentence seems to lack a verb, so you might want to look into it. “we also a corresponding discrete random variable ...”


(5) The topic is relevant for the course.

(5) The writing is clear and the English is good.

(4) The page is written at an appropriate level for CPSC 522 students (where the students have diverse backgrounds).

(4) The formalism (definitions, mathematics) was well chosen to make the page easier to understand.

(5) The abstract is a concise and clear summary.

(5) There were appropriate (original) examples that helped make the topic clear.

(5) There was appropriate use of (pseudo-) code.

(5) It had a good coverage of representations, semantics, inference and learning (as appropriate for the topic).

(5) It is correct.

(5) It was neither too short nor too long for the topic.

(5) It was an appropriate unit for a page (it shouldn't be split into different topics or merged with another page).

(4) It links to appropriate other pages in the wiki.

(4) The references and links to external pages are well chosen.

(5) I would recommend this page to someone who wanted to find out about the topic.

(4) This page should be highlighted as an exemplary page for others to emulate.

If I was grading it out of 20, I would give it: 18.5/20

AlirezaIranpour (talk) 06:07, 16 March 2020

Feedback

(I am late because this page did not exist when I gave my other feedback.)

You need to give an explicit reference to the source of all figures that are not yours on the main page. Finding the source of the figures by clicking on them is not enough.

It would be good to untangle "auto-encoder" from "variational auto-encoder"; much of what you describe is just an auto-encoder (e.g., Figure 1). You should first say what an auto-encoder is, and then what is special about a variational auto-encoder. Perhaps also say what a probabilistic auto-encoder is (the model to which the variational auto-encoder is a variational approximation).
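
To make the contrast concrete (a toy sketch of my own, not the page's model): a plain auto-encoder maps an input to a deterministic code, whereas a variational auto-encoder's encoder outputs the parameters of a distribution over the code and then samples from it:

```python
import random

def ae_encode(x):
    # Plain auto-encoder: a deterministic bottleneck code
    # (here just a toy 1-number summary of the input).
    return sum(x) / len(x)

def vae_encode(x):
    # Variational auto-encoder: the encoder outputs the parameters
    # (mu, sigma) of q(z|x), and the latent code z is *sampled*.
    mu = sum(x) / len(x)   # toy "network" output for the mean
    sigma = 0.1            # fixed here purely for illustration
    eps = random.gauss(0, 1)
    return mu + sigma * eps   # reparameterization: z = mu + sigma * eps

def decode(z):
    # Toy decoder shared by both: expand the code back to 3 values.
    return [z, z, z]

x = [0.2, 0.4, 0.6]
print(decode(ae_encode(x)))   # same output every time
print(decode(vae_encode(x)))  # different output on each call
```

Leading with that distinction would make Figure 1 and the later sections easier to place.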

Can you please provide an intuitive, high-level overview before you do the math? For example, in the SGVB estimator section: the previous section was good, but this one was impenetrable (for me at least).
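
For what it's worth, here is the kind of high-level sketch I have in mind (my own toy 1-D example, not the page's notation): the SGVB estimator just averages log p(x, z) - log q(z|x) over samples z = mu + sigma * eps drawn via the reparameterization trick:

```python
import math, random

def elbo_estimate(mu, log_var, log_p_joint, n_samples=1000):
    # SGVB-style Monte-Carlo estimate of the ELBO for a 1-D Gaussian q(z|x):
    #   ELBO ~ (1/L) * sum_l [ log p(x, z_l) - log q(z_l | x) ],
    # where z_l = mu + sigma * eps_l with eps_l ~ N(0, 1).
    # The reparameterization trick separates the noise eps from the
    # parameters, so gradients can flow through mu and log_var.
    sigma = math.exp(0.5 * log_var)
    total = 0.0
    for _ in range(n_samples):
        eps = random.gauss(0, 1)
        z = mu + sigma * eps
        log_q = -0.5 * (math.log(2 * math.pi) + log_var
                        + (z - mu) ** 2 / sigma ** 2)
        total += log_p_joint(z) - log_q
    return total / n_samples

# Sanity check: if q(z|x) already equals the target (a standard normal
# here), each sample contributes log p - log q = 0, so the estimate is 0.
std_normal = lambda z: -0.5 * (math.log(2 * math.pi) + z ** 2)
print(elbo_estimate(0.0, 0.0, std_normal, n_samples=100))
```

A plain-words paragraph along these lines, before the derivation, would make that section much more approachable.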

I found it difficult to work out what paper 2 was doing. What was its contribution? What problem does it solve? Is the difference that z is decomposed into multiple latent variables? (Are these assumed to be independent?) When you say "direct[ed] graphical model" which one are you talking about? (Was the x in the first paper also a set of random variables?) I'm not sure how the semi-supervised part works.

I think that you need more of a tutorial introduction. Someone who reads it should be enticed to read the papers.

There are still a few typos (e.g., "escope").

DavidPoole (talk) 17:33, 13 March 2020