Jul 30, 2018

Research & Recommendation Engines

written by wlcr

A long time ago, in a city medium-far away, I worked as an Adjunct Professor at the University of Minnesota. It was here that I first learned about and studied recommendation engines. The team I worked with is called GroupLens, and it is responsible for a piece of software called MovieLens. MovieLens was one of the very first recommendation engines to be used in the wild, and Amazon.com built its early recommendation engine for consumer purchases on technology from this group (source: Wikipedia). As described by GroupLens:

MovieLens is a website that helps people find movies to watch. It has hundreds of thousands of registered users. We conduct online field experiments in MovieLens in the areas of automated content recommendation, recommendation interfaces, tagging-based recommenders and interfaces, member-maintained databases, and intelligent user interface design.

At the time of my employment, we were researching automated content recommendations based on user-generated ratings, comparing users' behaviors as a means to predict the success of a recommendation. We employed students to watch films and then fill out a satisfaction survey (a Likert scale) describing their experience with each film. We then matched users who "liked" (remember, this is WAY before Facebook) the same films and recommended films that other users with the same preferences had seen and enjoyed, but the participant had not yet seen. We compared this collaborative approach against a baseline that used the overall average rating a film received; in that baseline case, we were simply recommending popular movies, with no consideration of past behavior or crowdsourced insight.
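To make the contrast concrete, here is a minimal sketch of the two approaches described above: a popularity baseline that ranks unseen films by their average rating, and a simple user-based collaborative filter that weights other users' ratings by taste similarity. This is not the MovieLens implementation; the ratings, user names, film titles, and the choice of cosine similarity are all illustrative assumptions.

```python
# Minimal sketch: popularity baseline vs. user-based collaborative filtering.
# All data below is made up for illustration; it is not MovieLens data.
from math import sqrt

# Hypothetical 1-5 ratings: user -> {film: rating}
ratings = {
    "ann":  {"Alien": 5, "Brazil": 4, "Casablanca": 2},
    "bob":  {"Alien": 5, "Brazil": 5, "Dune": 4},
    "cara": {"Casablanca": 5, "Brazil": 2, "Dune": 2},
}

def popularity_recommendation(target, ratings):
    """Baseline: rank films the target hasn't seen by average rating across all users."""
    seen = ratings[target]
    totals, counts = {}, {}
    for movies in ratings.values():
        for movie, r in movies.items():
            if movie in seen:
                continue
            totals[movie] = totals.get(movie, 0) + r
            counts[movie] = counts.get(movie, 0) + 1
    averages = {m: totals[m] / counts[m] for m in totals}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

def similarity(a, b):
    """Cosine similarity over the films two users have both rated (an assumed metric)."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[m] * b[m] for m in common)
    den = sqrt(sum(a[m] ** 2 for m in common)) * sqrt(sum(b[m] ** 2 for m in common))
    return num / den if den else 0.0

def cf_recommendation(target, ratings):
    """Collaborative filtering: weight other users' ratings by how similar
    their tastes are to the target's, then rank the target's unseen films."""
    seen = ratings[target]
    weighted, weights = {}, {}
    for user, movies in ratings.items():
        if user == target:
            continue
        sim = similarity(seen, movies)
        if sim <= 0:
            continue
        for movie, r in movies.items():
            if movie in seen:
                continue
            weighted[movie] = weighted.get(movie, 0) + sim * r
            weights[movie] = weights.get(movie, 0) + sim
    scores = {m: weighted[m] / weights[m] for m in weighted}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(popularity_recommendation("ann", ratings))  # ranked purely by average rating
print(cf_recommendation("ann", ratings))          # ranked by taste-weighted ratings
```

The key difference is that the baseline gives every user the same ranked list, while the collaborative filter produces a ranking tailored to the users whose past "likes" most resemble the target's.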

Here is the abstract from my paper, submitted to and published by SIGCHI (Special Interest Group on Computer-Human Interaction):

ABSTRACT: The purpose of this experiment was to determine whether recommendations based on collaborative filtering (CF) are perceived as superior to recommendations based on user population averages. The test vehicle was a movie recommender. 29 subjects were divided into 2 groups, each group using one of these systems. The recommender systems suggested movies which subjects later viewed. Each subject filled out pre- and post-questionnaires about their experience. Subjects using the CF algorithm rated more movies. Subjects placed slightly more confidence in the recommendations of the population averages algorithm. Both algorithms were over-confident compared to subjects' ratings. Subjects found both recommender systems to be an effective source of finding entertainment. User responses did not reveal a noticeable difference between the two algorithms. Keywords: Collaborative Filtering, Recommender System

If you’d like to read the whole article, you can download it here: User Response to Two Algorithms as a Test of Collaborative Filtering