Course talk:CPSC522/Combining Collaborative Filtering with Personal Agents for Better Recommendations

From UBC Wiki

Contents

Thread title | Replies | Last modified
Feedback | 0 | 18:46, 14 March 2020
peer feedback | 0 | 04:02, 12 March 2020
Feedback | 0 | 00:47, 7 March 2020

So the hypotheses are essentially pairs of a main hypothesis and its null hypothesis; making this distinction explicit would make them a bit clearer. That said, this is a very well structured page, and it's easy to navigate. References to which personal agents use which filtering method could also be made clearer with proper notation, but overall an excellent page. I'd rate it 18.5/20!

(5) The topic is relevant for the course.

(5) The writing is clear and the English is good.

(5) The page is written at an appropriate level for CPSC 522 students (where the students have diverse backgrounds).

(5) The formalism (definitions, mathematics) was well chosen to make the page easier to understand.

(5) The abstract is a concise and clear summary.

(5) There were appropriate (original) examples that helped make the topic clear.

(5) There was appropriate use of (pseudo-) code.

(5) It had a good coverage of representations, semantics, inference and learning (as appropriate for the topic).

(5) It is correct.

(4) It was neither too short nor too long for the topic.

(5) It was an appropriate unit for a page (it shouldn't be split into different topics or merged with another page).

(4) It links to appropriate other pages in the wiki.

(5) The references and links to external pages are well chosen.

(5) I would recommend this page to someone who wanted to find out about the topic.

(5) This page should be highlighted as an exemplary page for others to emulate.

PeymanBateni (talk) 18:46, 14 March 2020

peer feedback

Reading the page from the top in order, it is not clear to me whether the personal agents use content-based filtering. Overall this was a really interesting page to read. I would like to hear more background on the actual bots: how they were thought up, and why these specific ones were chosen. The usage of personal agent = content-based filtering = information filtering was also a little inconsistent. I would also suggest including more details about the first paper and its results, to be more specific about what exactly the newer paper added and how these agents differ. It would also be great to hear about other possible extensions to these models, and maybe even your own opinion of the works. 18/20

(5) The topic is relevant for the course.

(5) The writing is clear and the English is good.

(5) The page is written at an appropriate level for CPSC 522 students (where the students have diverse backgrounds).

(5) The formalism (definitions, mathematics) was well chosen to make the page easier to understand.

(4) The abstract is a concise and clear summary.

(5) There were appropriate (original) examples that helped make the topic clear.

(5) There was appropriate use of (pseudo-) code.

(5) It had a good coverage of representations, semantics, inference and learning (as appropriate for the topic).

(5) It is correct.

(4) It was neither too short nor too long for the topic.

(5) It was an appropriate unit for a page (it shouldn't be split into different topics or merged with another page).

(5) It links to appropriate other pages in the wiki.

(5) The references and links to external pages are well chosen.

(5) I would recommend this page to someone who wanted to find out about the topic.

(4) This page should be highlighted as an exemplary page for others to emulate.

SvetlanaSodol (talk) 04:02, 12 March 2020

I don't get the sparsity problem. Every user has seen only a small proportion of the items, and each item has been seen by only a small proportion of the users. That *is* the problem; why is it presented as a limitation of collaborative filtering systems? What you might mean is the dual of the early-rater problem: when a new user arrives, they have very few ratings.
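To make the sparsity point concrete, here is a toy sketch (the users, movies, and ratings are made up for illustration): even in a tiny ratings matrix, most user-item cells are empty, and two users can easily share no rated item at all, so their similarity for collaborative filtering is undefined.

```python
# Toy user-item ratings: only 5 of the 12 possible cells are filled.
ratings = {
    ("alice", "Alien"): 5,
    ("alice", "Brazil"): 3,
    ("bob", "Casablanca"): 4,
    ("carol", "Brazil"): 2,
    ("carol", "Dune"): 5,
}
users = {u for u, _ in ratings}
items = {i for _, i in ratings}

# Fraction of the user-item matrix that is actually observed.
fill_rate = len(ratings) / (len(users) * len(items))  # 5/12

# alice and bob share no rated item, so a rating-overlap similarity
# between them cannot even be computed.
alice_items = {i for (u, i) in ratings if u == "alice"}
bob_items = {i for (u, i) in ratings if u == "bob"}
```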

It would be good to refer the solutions back to the problems. Which solutions are designed to solve which problems?

For the DoppelgangerBots, please separate the definition of TF-IDF (equations 1-3, I think) from what they are doing with TF-IDF. (I know what TF-IDF is, but I can't work out what this is doing.)

In the definition of RipperBot, how does it decide whether an instance "classified the movie as high"? Presumably it has a learning algorithm that gives 0/1 predictions; you should tell us what this is.

GenreBot makes no sense to me. (Or are the GenreBots only used as features for the linear regression, i.e. the Mega-GenreBot?)

I am finding it difficult to parse the results (I am not sure what 4 means). "Rejected" is meant in a very technical sense; it does not mean that we should believe the negation. It is really a statement about the sample size more than about the truth of the hypothesis.
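The sample-size point can be illustrated with a small sketch (a two-sample z-test with a known standard deviation of 1; all numbers here are hypothetical, not from the page): the very same small difference in means is "not significant" at n = 10 but "significant" at n = 10000, so rejection tracks sample size as much as effect size.

```python
import math

def z_test_p(mean_diff, sigma, n):
    """Two-sided p-value for a two-sample z-test with equal n and known sigma."""
    z = mean_diff / (sigma * math.sqrt(2.0 / n))
    # Standard normal CDF via the error function.
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)

# Identical effect (mean difference 0.05), different sample sizes.
p_small = z_test_p(0.05, 1.0, 10)      # small n: p ~ 0.9, not rejected
p_large = z_test_p(0.05, 1.0, 10000)   # huge n: p < 0.001, "rejected"
```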

Overall, the page needs more intuitive explanations. What is the intuition behind each method, what problem is it trying to solve and why would we expect it to solve that problem?

DavidPoole (talk) 00:47, 7 March 2020