Feedback

Hi Yuyan,

This is an interesting and informative topic. I think I can guess how Amazon recommends books to you: they have the list of books a specific person has purchased, plus the list of books searched in the same session before the purchase decision was made. Once they know what kind of books you are searching for, it is easy for them to recommend books to you. Their approach might be a little different from yours.

I have some questions and suggestions for you:

1. You mentioned implicit feedback in your introduction. It would be much more interesting if you incorporated features such as clicking, viewing, favoriting, and commenting into your algorithm; I also believe those would improve the accuracy of the model.

2. Could you add more information about the optimization you applied for learning in MF-list and MF-item?

3. I have one question about your test: since you are using the same data set, is there any difference between the 5 test results?

4. Do you have a data set that provides the ground truth? How do you know which items should be recommended and which should not? For example, could you compare against the implicit feedback to tell whether a specific person is interested in your list?

5. For your AUC graph, I think it would be better to introduce ROC first and then expand on AUC.

DandanWang (talk) 06:57, 21 April 2016

Hi Dandan,

Thanks for your valuable suggestions.

For your questions:

1. Actually, the content-based algorithm I used is based on implicit feedback. All the list/item features are derived from clicking or viewing.

2. I will add more content to the algorithm part.

3. I use the same dataset; repeating the experiment 5 times makes the results more reliable.

4. In my page's reference section, I list a paper that introduces BPR, and that paper explains how to apply AUC to implicit-feedback recommendation (see the rough sketch after this list).

5. I will add ROC to the page.
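To make point 4 a bit more concrete, here is a minimal sketch of the per-user AUC for implicit feedback as defined in the BPR paper: for each user, count the fraction of (held-out positive, negative) item pairs that the model ranks in the right order, then average over users. This is an illustration only, not the exact evaluation code I used; the function names and the scores/positives inputs are placeholders.

def user_auc(scores, positives):
    # scores: dict mapping item -> predicted score for one user (all candidate items)
    # positives: set of held-out items this user actually interacted with (clicked/viewed)
    pos = [i for i in positives if i in scores]
    neg = [i for i in scores if i not in positives]
    pairs = correct = 0.0
    for p in pos:
        for n in neg:
            pairs += 1
            if scores[p] > scores[n]:
                correct += 1          # positive ranked above negative: correct pair
            elif scores[p] == scores[n]:
                correct += 0.5        # count ties as half a correct pair
    return correct / pairs if pairs else float('nan')

def mean_auc(scores_by_user, positives_by_user):
    # Average the per-user AUC over users with at least one held-out positive item.
    aucs = [user_auc(scores_by_user[u], positives_by_user[u])
            for u in scores_by_user if positives_by_user.get(u)]
    return sum(aucs) / len(aucs) if aucs else float('nan')

# Example: one user, three candidate items, where item 'a' was actually clicked.
print(user_auc({'a': 0.9, 'b': 0.4, 'c': 0.7}, {'a'}))  # -> 1.0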

Thanks again.

Best,

Yu Yan

YuYan1 (talk) 16:02, 21 April 2016