Recommendation engines such as those at Netflix and Amazon often generate "explanations" alongside the recommendation, e.g. "We think you'll like this movie because you watched that one" or "You should buy this because your friends bought similar things". Much of this falls under the general rubric of collaborative filtering, but collaborative filtering is mostly about how to generate the recommendations themselves.
As Netflix has recently pointed out, generating good explanations is almost as important as the recommendations themselves.
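To make the connection between collaborative filtering and explanations concrete, here is a minimal sketch of item-based collaborative filtering that produces a "because you watched X" justification as a by-product: the recommendation score for an unseen item is a sum of its similarities to the user's seen items, so the seen item contributing most to the score is a natural explanation. The data and item names are made up for illustration.

```python
import numpy as np

# Toy user-item interaction matrix (1 = watched); rows are users, columns
# are items. Data is hypothetical, purely for illustration.
R = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

def item_similarity(R):
    """Cosine similarity between item columns of the interaction matrix."""
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0            # guard against all-zero columns
    X = R / norms
    return X.T @ X

def recommend_with_explanation(R, user, k=1):
    """Return (item, because_item) pairs: the top-k unseen items and, for
    each, the already-seen item that contributed most to its score."""
    S = item_similarity(R)
    np.fill_diagonal(S, 0.0)           # an item should not recommend itself
    seen = R[user] > 0
    scores = S @ R[user]               # similarity summed over seen items
    scores[seen] = -np.inf             # never recommend already-seen items
    top = np.argsort(scores)[::-1][:k]
    out = []
    for item in top:
        contrib = S[item] * R[user]    # per-seen-item share of the score
        out.append((int(item), int(np.argmax(contrib))))
    return out
```

For user 1, who has watched items 0 and 1, this recommends item 2 and explains it by item 1, the seen item most similar to the recommendation.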
Evaluating the effectiveness of explanations for recommender systems is a fairly recent survey on the subject. You can also find some pointers in this paper and in section 5.4 of this one.
Another attempt at explaining recommendations was made in Transparent User Models for Personalization. The idea there is to fit a probabilistic model that learns "badges" for each user. Each badge captures how a recommendation engine might perceive that user, and the set of badges can serve as an interpretable alternative to the latent representation learned by matrix-factorisation-style algorithms. Note that actually plugging this into a recommender system is left as follow-on work.
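To illustrate the contrast the paper is drawing (not its actual model, which is probabilistic): a matrix-factorisation user vector is a dense, unnamed array of floats with no direct reading, whereas a badge is a named, human-readable attribute. The crude thresholding below, with invented badge names and data, is only a stand-in for how such an interpretable user representation might look.

```python
import numpy as np

# Hypothetical watch counts per user across genres. The badge names and the
# simple thresholding rule are illustrative stand-ins, not the paper's model.
genres = ["sci-fi", "romance", "documentary"]
counts = np.array([
    [9, 1, 0],   # user 0
    [2, 2, 6],   # user 1
])
BADGES = {"sci-fi buff": 0, "documentary devotee": 2}  # badge -> genre index

def badges_for(user_counts, threshold=0.5):
    """Award a badge when one genre accounts for more than `threshold` of a
    user's watches. Unlike a latent factor vector, the result is a list of
    named attributes a user could actually be shown."""
    fractions = user_counts / user_counts.sum()
    return [name for name, g in BADGES.items() if fractions[g] > threshold]
```

Here user 0 would carry the "sci-fi buff" badge and user 1 "documentary devotee", and either could be surfaced directly in an explanation.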