
Combining Collaborative Filtering with Personal Agents for Better Recommendations

This page focuses on content-based and collaborative filtering and how personal agents can be used to combine both approaches for improved recommendations.

Principal Author: Alireza Iranpour

Abstract

Content-based filtering and collaborative filtering both attempt to overcome information overload by suggesting relevant items to the user. Content-based filtering focuses on the analysis of semantic content and selects items based on their similarity to user preferences. Collaborative filtering, on the other hand, selects items based on the correlation between users with similar preferences. Each approach has its own benefits and limitations that suggest opportunities for hybrid systems.

Builds on

Recommender systems, collaborative filtering, content-based filtering.

Related Pages

This page is closely related to recommender systems, collaborative filtering, and content-based filtering.

Content

Introduction [1]

Information filtering or content-based filtering systems make recommendations by comparing the contents of items with user profiles. These profiles describe user preferences and are created by analyzing the contents of the user's preferred items. Information filtering systems can quite effectively identify items that are relevant to particular topics of interest. However, they lack the ability to distinguish between items on the same topic. As a result, the most effective systems must incorporate human judgement.

Collaborative filtering recommenders create a database of user opinions on the available items and use that to identify users with similar preferences. These systems do not take into account the contents of items and make recommendations based merely upon the opinions of a group of like-minded users known as neighbors. Despite their considerable success, collaborative filtering systems do have limitations of their own:

The early rater problem. Collaborative filtering systems cannot provide recommendations for items with no available ratings. This means that these systems provide little or no value to users who are the first to rate new items. They depend on the willingness of a group of altruistic users to rate many items without receiving many recommendations in return.

The sparsity problem. The number of available items usually exceeds the amount a user is able or willing to explore and rate. This makes it challenging to find items with enough ratings on which to base predictions. It also becomes harder to find neighbors.

In order to address the rating sparsity and the early rater problems, the first paper incorporates semi-intelligent information filtering agents called filterbots into a collaborative filtering system. These filterbots, which are automated rating robots, evaluate and rate new items as soon as they are made available. In fact, the collaborative filtering system views filterbots as ordinary users who generously provide ratings for new items without expecting recommendations. This paper showed that users who agreed with the filterbots benefited from the contributed ratings.
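To make the idea concrete, here is a minimal sketch (not from the paper) of how a ratings matrix can be augmented with a filterbot's ratings so that a collaborative filtering engine sees the bot as just another user. The matrix values are illustrative:

    import numpy as np

    # Hypothetical 4x5 user-item rating matrix (rows: users, columns: movies);
    # np.nan marks movies a user has not rated.
    ratings = np.array([
        [5.0, np.nan, 3.0, np.nan, 1.0],
        [4.0, 2.0, np.nan, np.nan, np.nan],
        [np.nan, np.nan, 4.0, 5.0, np.nan],
        [1.0, np.nan, np.nan, 4.0, 2.0],
    ])

    def add_filterbot(matrix, bot_ratings):
        """Append a filterbot's ratings as one more 'user' row.

        The CF engine treats the bot like any other user, so new items
        gain ratings the moment the bot scores them.
        """
        return np.vstack([matrix, np.asarray(bot_ratings, dtype=float)])

    # A bot that rates every movie densifies the matrix for everyone.
    augmented = add_filterbot(ratings, [3.0, 4.0, 3.0, 5.0, 2.0])
    print(augmented.shape)  # (5, 5): four humans plus one filterbot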

Incremental contributions [2]

The second paper extends the concept of filterbots proposed by the first paper in three important ways:

  1. Use a more intelligent set of filterbots and personalized learning agents.
  2. Apply this work to small communities to serve single human users.
  3. Evaluate the simultaneous use of multiple filterbots.

They also demonstrate the potential of the collaborative filtering framework both for integrating agents and for combining them with human opinions.

Hypotheses [2]

The paper explores four different models:

  • Pure collaborative filtering (using only the opinions of a community of users)
  • A single personalized agent (machine learning filter)
  • A combination of many agents
  • A combination of multiple agents and user opinions

From these models, it derives the following four hypotheses:

  1. User opinions alone provide better recommendations than a single personalized agent
  2. A personalized combination of multiple agents provides better recommendations than a single personalized agent
  3. User opinions provide better recommendations than a personalized combination of multiple agents
  4. A personalized combination of multiple agents and user opinions provides better recommendations than either of them alone

These hypotheses were tested in the context of a small, anonymous community of movie fans. The small size of the community, together with the non-textual content of the items (movies), disadvantaged both collaborative filtering and information filtering, thus providing a middle ground between the contexts in which each approach usually operates.

Data set [2]

Fifty users with more than 120 ratings each were randomly selected from the MovieLens system. For each user, three disjoint sets of ratings were selected at random without replacement:

  • Training set: for training the personalized agents (50 ratings)
  • Correlation set: for combining users, agents, or both together (50 ratings)
  • Test set: for assessment of performance (20 ratings)

Evaluation metrics [2]

Recommendation accuracy was measured using Mean Absolute Error (MAE). Decision-support accuracy was also measured using ROC sensitivity to evaluate the effectiveness of each model in finding high-quality items. To operationalize ROC, the paper considered ratings of 4 and 5 as good and anything lower as bad.
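As a concrete sketch of both metrics (not code from the paper), MAE and ROC sensitivity, the latter computed here as the area under the ROC curve with ratings of 4 and 5 treated as good, can be implemented as follows:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def mae(actual, predicted):
        """Mean Absolute Error: average magnitude of prediction error."""
        actual, predicted = np.asarray(actual), np.asarray(predicted)
        return np.mean(np.abs(actual - predicted))

    def roc_sensitivity(actual, predicted, good_threshold=4):
        """Decision-support accuracy: area under the ROC curve, with
        actual ratings >= good_threshold treated as 'good' items."""
        relevant = np.asarray(actual) >= good_threshold
        return roc_auc_score(relevant, predicted)

    # Toy test set: five actual ratings versus one model's predictions.
    actual = [5, 4, 2, 1, 4]
    predicted = [4.5, 3.8, 2.9, 1.5, 3.2]
    print(f"MAE = {mae(actual, predicted):.2f}")
    print(f"ROC sensitivity = {roc_sensitivity(actual, predicted):.2f}")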

User opinions only [2]

For collaborative filtering the paper used the DBLens collaborative filtering research engine. DBLens has several parameters that control performance, coverage and accuracy. For this experiment, the engine was set to prefer maximum coverage (the number of items for which the system could provide recommendations).

For each user, the associated correlation set and test set were fed into the engine. The engine then made predictions for each movie in the test set, compared them against the actual ratings, and reported the error and ROC statistics.
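DBLens itself is a research engine whose internals this page does not describe; as a stand-in, here is a minimal sketch of the standard user-user collaborative filtering prediction it approximates, weighting each neighbor's mean-centered rating by their Pearson correlation with the target user:

    import numpy as np

    def predict_rating(ratings, target, item):
        """Predict ratings[target, item] from correlated neighbors.

        ratings is a user-item matrix with np.nan for missing entries.
        Each neighbor's mean-centered rating of the item is weighted by
        that neighbor's Pearson correlation with the target user.
        """
        means = np.nanmean(ratings, axis=1)
        num = den = 0.0
        for u in range(ratings.shape[0]):
            if u == target or np.isnan(ratings[u, item]):
                continue
            both = ~np.isnan(ratings[target]) & ~np.isnan(ratings[u])
            if both.sum() < 2:
                continue  # too few co-rated movies to correlate on
            r = np.corrcoef(ratings[target, both], ratings[u, both])[0, 1]
            if np.isnan(r):
                continue  # a constant rating vector has no correlation
            num += r * (ratings[u, item] - means[u])
            den += abs(r)
        return means[target] if den == 0 else means[target] + num / den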

Individual agents [2]

In this work, three types of information filtering agents (filterbots) were created. These filterbots contribute ratings for the available items on behalf of human users, using current user ratings and the content of the rated items to predict the missing ratings. For new items with no prior ratings, the agents rely on the only information available (content) and on each user's content preferences (learned from previously rated items) to generate ratings as soon as the items arrive. Likewise, for new users who have not rated enough items to find neighbors, the bots can exploit the content of the few rated items to predict the user's ratings for many unrated ones.

DoppelgangerBots (DGBots) [2]

These bots are personalized agents that build profiles of user preferences (in terms of content) and generate predictions based on the content features of each movie. Three DGBots were created: one that filtered cast data, one that filtered descriptions, and one that filtered both. For each movie, the associated information was obtained from IMDb.

The general idea behind TF-IDF is to treat as more important those terms (keywords) that appear frequently in a given document but infrequently across all other documents. For instance, the word "movie" might appear often in a given description, but it also appears in most other descriptions, so it is not an important term to consider.

Personal recommendations were produced using TF-IDF according to the following five steps (a code sketch follows the list):

  1. Form an IDF (inverse document frequency) vector to represent the relative scarcity of each keyword as IDF(k) = log(N / DF(k)), where N is the total number of movies and DF(k) (document frequency) is the number of movies in which keyword k appeared. Prevalent words such as "and" that appear everywhere have an IDF of almost 0 (no scarcity across documents).

  2. Form a TF (term frequency) vector for each movie to indicate which keywords appeared by putting 1 for those that occur and 0 for others.
  3. Create a profile vector for each user to indicate the weights associated with each keyword by:
    • Building a keyword preference vector for each of the 50 movies in the user’s associated training set as:

    Keyword preference vector = normalized user rating × TF vector × IDF vector

    • And then using the mean of the 50 resulting keyword preference vectors as the user's profile vector:

    User profile vector = mean of the 50 keyword preference vectors

  4. Score each movie based on the user weights:
    Movie score = user profile vector · TF vector (dot product)
  5. Rate movies based on their scores (top 21% = 5, next 34% = 4, next 28% = 3, next 12% = 2, last 5% = 1)
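The following minimal sketch (illustrative, not from the paper) implements steps 1 through 4 for one user. The toy TF matrix and the mean/standard-deviation rating normalization are assumptions, since the paper does not spell out its normalization:

    import numpy as np

    def build_profile(tf, idf, ratings):
        """Steps 1-3: average the per-movie keyword preference vectors.

        tf:      (n_movies, n_keywords) 0/1 term-occurrence matrix
        idf:     (n_keywords,) inverse document frequencies
        ratings: (n_movies,) the user's training-set ratings
        """
        # Assumed normalization: liked movies pull keyword weights up,
        # disliked movies pull them down.
        norm = (ratings - ratings.mean()) / (ratings.std() or 1.0)
        prefs = norm[:, None] * tf * idf   # one preference vector per movie
        return prefs.mean(axis=0)          # the user's profile vector

    def score_movies(profile, tf):
        """Step 4: dot the profile with each movie's TF vector."""
        return tf @ profile

    # Toy example: three training movies, four keywords.
    tf = np.array([[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 1]], dtype=float)
    n, df = tf.shape[0], tf.sum(axis=0)
    idf = np.log(n / df)                                  # step 1
    profile = build_profile(tf, idf, np.array([5.0, 1.0, 4.0]))
    print(score_movies(profile, tf))  # step 5 would bin these scores into 1-5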

RipperBot [2]

RipperBot was created using Ripper, an inductive logic program created by William Cohen (1995).

For each user, four ripper instances were trained on the user’s associated training set to distinguish between the following user ratings and perform binary classification:

  • Instance 1: {5} = high, {4, 3, 2, 1} = low
  • Instance 2: {5, 4} = high, {3, 2, 1} = low
  • Instance 3: {5, 4, 3} = high, {2, 1} = low
  • Instance 4: {5, 4, 3, 2} = high, {1} = low

The first instance, for example, is trained by considering only a rating of 5 as high. The last one, which is the most lenient, is trained by considering all ratings above 1 as high. Each instance classifies the entire movie set, and the recommendation value (the bot's rating) is then calculated as:

Recommendation value = 1 + number of instances that classified the movie as high

For instance, if all four instances classify a movie as high, that movie gets a rating of 5 (1 + 4); if no instance classifies it as high, it receives a rating of 1 (1 + 0).
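A minimal sketch of this four-classifier scheme is below. Ripper itself is not readily available, so a scikit-learn decision tree stands in as the binary rule learner; the feature matrices are assumed placeholders for the movies' content features:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def ripperbot_ratings(X_train, y_train, X_new):
        """Train four binary classifiers at thresholds 5, 4, 3, 2 and rate
        each new movie as 1 + (number of classifiers voting 'high')."""
        votes = np.zeros(len(X_new), dtype=int)
        for threshold in (5, 4, 3, 2):                # instances 1..4
            high = (np.asarray(y_train) >= threshold).astype(int)
            clf = DecisionTreeClassifier(random_state=0).fit(X_train, high)
            votes += clf.predict(X_new)
        return 1 + votes                              # ratings in 1..5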

GenreBots [2]

The GenreBots consisted of 19 simple bots, one for each genre, each rating a movie 5 if it matched the bot's genre and 3 otherwise.

{ActionBot, AdventureBot, AnimationBot, ChildrensBot, ComedyBot, CrimeBot, DocumentaryBot, DramaBot, FamilyBot, Film-NoirBot, HorrorBot, MusicalBot, MysteryBot, RomanceBot, Sci-FiBot, ThrillerBot, UnknownBot, WarBot, WesternBot}

For instance, the movie Joker would receive a rating of 5 from the CrimeBot, the DramaBot, and the ThrillerBot, and 3 from every other bot.

We can think of each GenreBot as a user who has a favorite genre (as though we have 19 users with different preferred genres). These bots help identify movies with similar genres.
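As a sketch, each GenreBot reduces to a one-line rule (the genre list follows MovieLens; the example movie's genres are illustrative):

    GENRES = ["Action", "Adventure", "Animation", "Children's", "Comedy",
              "Crime", "Documentary", "Drama", "Family", "Film-Noir",
              "Horror", "Musical", "Mystery", "Romance", "Sci-Fi",
              "Thriller", "Unknown", "War", "Western"]

    def genrebot_rating(bot_genre, movie_genres):
        """Each GenreBot rates 5 if the movie matches its genre, else 3."""
        return 5 if bot_genre in movie_genres else 3

    # A crime drama thriller gets 5 from three bots and 3 from the rest.
    ratings = {g: genrebot_rating(g, {"Crime", "Drama", "Thriller"})
               for g in GENRES}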

Mega-GenreBot [2]

In order to learn user preferences in terms of genre, a Mega-GenreBot was created for each user by training a linear regression model on the user's associated training set, with the user's actual ratings as the dependent variable and the outputs of the 19 individual GenreBots as predictors. The regression coefficients could then be used to predict ratings for new movies based on their genres.
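A minimal sketch of the Mega-GenreBot fit, using ordinary least squares over the 19 GenreBot outputs (function names are illustrative):

    import numpy as np

    def train_mega_genrebot(genrebot_matrix, user_ratings):
        """Fit the user's ratings as a linear function of the 19 GenreBot
        outputs via ordinary least squares; returns (coefficients, bias)."""
        X = np.column_stack([genrebot_matrix, np.ones(len(user_ratings))])
        w, *_ = np.linalg.lstsq(X, user_ratings, rcond=None)
        return w[:-1], w[-1]

    def mega_genrebot_predict(coef, bias, genrebot_row):
        """Predict a new movie's rating from its 19 GenreBot ratings."""
        return float(genrebot_row @ coef + bias)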

Combination of IF agents [2]

Combination strategies [2]

  • Selecting the best agent for each user by testing each bot on the user’s associated correlation set and picking the one with the lowest MAE (highest accuracy)
  • Averaging the agents together by taking the arithmetic mean of the agents' recommendations
  • Using linear regression to find the best fit combination for each user
  • Using the DBLens collaborative filtering engine to create personal combinations

For the last three strategies, which combine multiple agents, the following agent groups were proposed (a code sketch of the combination strategies follows the list):

  • {19 GenreBots + 3 DGBots + RipperBot} (23-agent)
  • {Mega-GenreBot + 3 DGBots + RipperBot} (5-agent)
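Here is a hedged sketch of the best-agent, averaging, and regression strategies for a single user. The fourth, CF-based combination is omitted because it requires a full engine such as DBLens:

    import numpy as np

    def combine_agents(agent_preds, user_actual, strategy="mean"):
        """Combine agent predictions against the user's correlation set.

        agent_preds: (n_agents, n_movies) ratings from each agent
        user_actual: (n_movies,) the user's actual ratings
        """
        if strategy == "best":        # single agent with the lowest MAE
            maes = np.mean(np.abs(agent_preds - user_actual), axis=1)
            return agent_preds[np.argmin(maes)]
        if strategy == "mean":        # arithmetic mean of all agents
            return agent_preds.mean(axis=0)
        if strategy == "regression":  # least-squares best-fit combination
            X = np.column_stack([agent_preds.T,
                                 np.ones(agent_preds.shape[1])])
            w, *_ = np.linalg.lstsq(X, user_actual, rcond=None)
            return X @ w              # fitted combination on this set
        raise ValueError(strategy)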

Combination of users and IF agents [2]

Since collaborative filtering (the fourth strategy) with the 23-agent group (the first combination) yielded the best performance, it was selected to be combined with the opinions of the 50 users using the same strategy (collaborative filtering):

CF({19 GenreBots + 3 DGBots + RipperBot} + {opinions of 50 users})

Results [2]

The second and the fourth hypotheses were accepted, while the first and the third were rejected:

  1. User opinions alone provide better recommendations than a single personalized agent (Rejected)
  2. A personalized combination of multiple agents provides better recommendations than a single personalized agent (Accepted)
  3. User opinions provide better recommendations than a personalized combination of multiple agents (Rejected)
  4. A personalized combination of multiple agents and user opinions provides better recommendations than either of them alone (Accepted)

Discussion [2]

The results of the paper suggest that an effective way to provide good recommendations is to use multiple sources of information and let the collaborative filtering framework decide which ones to use for each user. In fact, having a collection of useful bots (agents) matters more than inventing a single brilliant one.

Annotated Bibliography

[1] Sarwar, B. M., Konstan, J. A., Borchers, A., Herlocker, J., Miller, B., & Riedl, J. (1998). Using Filtering Agents to Improve Prediction Quality in the GroupLens Research Collaborative Filtering System. Proceedings of CSCW 1998.

[2] Good, N., Schafer, J. B., Konstan, J. A., Borchers, A., Sarwar, B., Herlocker, J., & Riedl, J. (1999). Combining Collaborative Filtering with Personal Agents for Better Recommendations. Proceedings of AAAI 1999.

To Add

Context awareness

Diversity

Trust

Dynamic rating
