Course talk:CPSC522/List Recommendation
Contents
| Thread title | Replies | Last modified |
|---|---|---|
| Feedback | 1 | 16:02, 21 April 2016 |
| Critique | 1 | 15:56, 21 April 2016 |
| Discussion with your page | 1 | 15:54, 21 April 2016 |
| Critique | 1 | 15:48, 21 April 2016 |
Hi Yuyan,
Interesting and informative topic. I think I can guess how Amazon recommends books to you: they have the list of books a specific person has purchased, plus the list of books searched in the same session before the purchase decision. Once they know what kind of books you are searching for, it is easy for them to recommend books. Their approach might be a little different from yours.
I have some questions and suggestions for you:
1. You mentioned implicit feedback in your introduction. It would be much more interesting if you involved features like clicking, viewing, favoriting, and commenting in your algorithm, and I believe those would also improve the accuracy of the model.
2. For the optimization you applied when learning MF-list and MF-item, could you add more information about it?
3. I have one question about your test: since you are using the same data set, is there any difference between the 5 test results?
4. Do you have a data set that provides the ground truth? How did you know what should be recommended or not? For example, by comparing with implicit feedback, you could tell whether a specific person is interested in your list.
5. For your AUC graph, I think it is better to introduce ROC first and then expand on AUC.
Hi Dandan,
Thanks for your valuable suggestions.
For your questions:
1. Actually, the content-based algorithm I used is based on implicit feedback; all the list/item features are obtained from clicking or viewing.
2. I will add more content to the algorithm part.
3. I use the same dataset; repeating the experiment 5 times makes the result more precise.
4. In my page's reference section, I list a paper that introduces BPR and shows how to apply AUC to implicit-feedback recommendation.
5. I will add ROC to the page; see the sketch after this list for how AUC relates to ranking.
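As a quick illustration of the connection, here is a minimal sketch (hypothetical Python, not code from the page or from the BPR paper) of per-user AUC under implicit feedback: the fraction of (interacted, non-interacted) item pairs that the model ranks correctly.

```python
import numpy as np

def auc_implicit(scores, positives):
    """Per-user AUC under implicit feedback: the fraction of
    (positive, negative) item pairs where the positive item gets
    the higher score. `scores` holds predicted preferences for all
    items; `positives` is the set of item indices the user actually
    interacted with (e.g. clicked or viewed)."""
    pos = np.array(sorted(positives))
    neg = np.array([i for i in range(len(scores)) if i not in positives])
    # Compare every positive score with every negative score.
    correct = (scores[pos][:, None] > scores[neg][None, :]).sum()
    return correct / (len(pos) * len(neg))

# Toy example: 6 items, the user interacted with items 1 and 4.
scores = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.05])
print(auc_implicit(scores, {1, 4}))  # 1.0: both positives outrank every negative
```

The ROC curve is the more general picture: sweeping a score threshold traces the true-positive rate against the false-positive rate, and AUC is the area under that curve.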
Thanks again.
Best,
Yu Yan
Hi Yu,
Overall, it's a nice page, but it would be good if you included pseudocode and a brief explanation of it.
Hi Yu,
Nice to see your page and your contribution combining regression and matrix factorization to compare the accuracy of list-based and item-based recommendation. Actually, I think the winner must be the item-based approach, because it has much finer granularity than lists. Anyway, I have some inquiries regarding your page:
- In your paper, you use the average as the aggregate function to link items to lists. Do you use a weighted average, the plain mean, or something else?
- Can you give a snapshot of what the Goodreads dataset looks like and how you cleanse your data?
- Can you share your source code or pseudocode for the experiment?
- How do you determine the factor parameters for your regression model in the two experiments?
- Why do you run the experiment 5 times?
Regards,
Arthur
Hi Arthur,
Thanks for your valuable suggestions.
For your questions:
1. Currently I only use the plain mean; I will try a weighted average in the future.
2. I will add more description of the dataset.
3. I will add more content to the algorithm part.
4. For the regression model, the optimization criterion I use is RMSE, and the learning method is gradient descent; see the sketch after this list.
5. Running the experiment 5 times makes the result more precise.
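To illustrate point 4, here is a minimal sketch of matrix factorization trained by stochastic gradient descent on squared error, so that RMSE is what the updates minimize. This is hypothetical Python, not the code behind the page; `k`, `lr`, `reg`, and `epochs` are placeholder hyperparameters.

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=10, lr=0.01, reg=0.02, epochs=20):
    """Matrix factorization trained by SGD on squared error, i.e.
    minimizing RMSE. `ratings` is a list of (user, item, value)
    triples; k, lr, reg, and epochs are placeholder hyperparameters."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]          # prediction error for this triple
            pu = P[u].copy()               # keep old user factors for Q's update
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

def rmse(ratings, P, Q):
    return np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))

# Toy run: 3 users, 4 items, a handful of observed preferences.
data = [(0, 1, 1.0), (0, 3, 0.0), (1, 0, 1.0), (2, 2, 1.0)]
P, Q = train_mf(data, n_users=3, n_items=4)
print(rmse(data, P, Q))
```

Repeating such a run several times with different random seeds and averaging the resulting RMSE is one common way to make the reported number less sensitive to initialization, which may be what the 5 repetitions are for.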
Thanks again for your advice.
Best,
Yu Yan
Hi Yu,
Very nice page; this is an area where I work, so it's a super exciting read for me. My first and foremost suggestion is to read this paper: http://dl.acm.org/citation.cfm?id=2645750. And if possible, have a chat with Yidan if you are interested in taking this forward; her entire thesis is on list recommendation.
Now, coming to your work, a few suggestions:
1. One major assumption you are probably making implicitly is that the lists are given as "lists". Sorry for the cryptic term :-), but what I mean is that deciding on the length of a list etc., what you call list features, is fairly challenging, and while recommending these are often not known up front. How does this pose a challenge to your algorithm? Do you see the results varying depending on how big a list is? They should. A study of how adoption actually gets affected would also help, e.g., do people really care about what you recommend below a certain rank in a list?
2. Your algorithm is described well, but it would be much easier to understand if you gave pseudocode or, since you implemented it, actual code snippets.
3. It would be great to add pointers to other relevant pages. For example, Arthur made a page on general recommender systems in the last assignment; referring to that would help interested readers follow up.
4. I would suggest a round of proofreading, as there are a few quibbles. All said, it's a great step towards a very active research topic.
Hi Pirthu,
First, thanks for your valuable advice. Actually, I got the dataset from Yidan, and she also gave me some suggestions when I ran into problems in my work.
For your Critique:
1. Currently I only consider the length issue. My item-based algorithm aggregates the preferences of all items in a list and uses the average preference value to represent the preference for the list (see the sketch after this list). I take the average because longer lists will probably contain more items the user may be interested in, so a plain sum would be biased toward long lists. I have not considered item order yet, but I will follow your advice and elaborate on this in future work.
2. I will add more content to the algorithm part.
3. I will add more external links.
4. I will do a round of proofreading before the final draft.
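As a concrete illustration of the aggregation in point 1, here is a minimal sketch (hypothetical Python with made-up names, assuming item preferences come from latent-factor dot products; not the actual code behind the page):

```python
import numpy as np

def list_score(user_vec, item_vecs):
    """Item-based list preference: predict the user's preference for
    each item in the list (dot product of latent factors), then take
    the mean so that longer lists are not favored merely for having
    more items. `user_vec` is one user's factor vector; `item_vecs`
    is a (list_length x k) matrix of the list's item factors."""
    item_prefs = item_vecs @ user_vec   # one predicted preference per item
    return item_prefs.mean()            # plain mean; a weighted average
                                        # would be the extension mentioned above

# Toy example: k = 3 latent factors, a 4-item list.
user_vec = np.array([0.5, -0.2, 0.8])
item_vecs = np.random.default_rng(0).normal(size=(4, 3))
print(list_score(user_vec, item_vecs))
```

Ranking lists by this score reduces list recommendation to item recommendation plus aggregation, which matches the description above.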
Thanks again for your suggestions.
Best,
Yu Yan