(This post describes my proposed final research project for MAS.S62 Interactive Machine Learning at the MIT Media Lab.)
Many interactive machine learning systems ask users to make sequences of judgments while training. For example, recommender systems often prompt the user to rate a series of items in a single session. Most systems assume that such judgments are stable over time and across varying conditions. However, there is extensive evidence from psychology that such judgments are subject to anchoring effects. First demonstrated by Tversky and Kahneman, “anchoring” describes how subjects form judgments by adjusting (often insufficiently) away from an initial piece of information rather than by applying a consistent internal scale.
I propose to explore the intuition that users of interactive machine learning systems are subject to anchoring bias, and that accounting for this bias can improve these systems’ results.
Specifically, I propose to look for statistical evidence of anchoring bias in existing sequentially labeled data sets such as the Netflix Prize set of movie ratings. Secondarily, I propose to explore the design of a novel interactive machine learning system that takes anchoring bias as its starting point.
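To make the first step concrete, here is a minimal sketch of one way such evidence might be sought. Everything here is an assumption of mine rather than an established method: I simulate per-user rating sessions (the Netflix Prize data would first have to be grouped into sessions, e.g. by user and date), and I test for anchoring by regressing each rating’s deviation from its item’s mean on the previous rating’s deviation within the same session.

```python
import numpy as np
from collections import defaultdict

def anchoring_coefficient(sessions):
    """Regress each rating's deviation from its item's mean rating on the
    previous rating's deviation within the same session. A slope near zero
    suggests independent judgments; a positive slope is consistent with
    anchoring on the immediately preceding rating."""
    # Per-item mean ratings.
    sums, counts = defaultdict(float), defaultdict(int)
    for session in sessions:
        for item, rating in session:
            sums[item] += rating
            counts[item] += 1
    item_mean = {i: sums[i] / counts[i] for i in sums}

    # Pair each rating's deviation with the previous rating's deviation.
    prev_dev, cur_dev = [], []
    for session in sessions:
        for (pi, pr), (ci, cr) in zip(session, session[1:]):
            prev_dev.append(pr - item_mean[pi])
            cur_dev.append(cr - item_mean[ci])
    x, y = np.array(prev_dev), np.array(cur_dev)
    return float(x @ y / (x @ x))  # OLS slope through the origin

# Synthetic check: simulate sessions with a known anchoring strength alpha.
rng = np.random.default_rng(0)
quality = rng.normal(3.5, 1.0, size=50)  # latent per-item quality
alpha = 0.3                              # simulated anchoring strength
sessions = []
for _ in range(2000):
    items = rng.choice(50, size=10, replace=False)
    session, dev = [], 0.0
    for i in items:
        dev = alpha * dev + rng.normal(0.0, 0.5)
        session.append((int(i), quality[i] + dev))
    sessions.append(session)
print(round(anchoring_coefficient(sessions), 2))  # recovers a value near alpha
```

On real data the analysis would need controls this toy version omits (per-user rating scales, item popularity, temporal trends), but it illustrates the basic statistic: whether consecutive judgments co-vary beyond what item quality explains.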
Understanding the basis of user decision-making is essential to the design of effective interactive machine learning systems. Over the last 40 years, the psychology of judgment and decision-making has cataloged many cognitive biases that affect the kinds of evaluations machine learning systems ask of their users. This body of research has yet to significantly influence the machine learning research community. Most machine learning systems treat the preferences and judgments expressed through user labels as consistent across time and varying conditions. If, instead, these labels are swayed by users’ cognitive biases, as the psychological literature suggests, then measuring the effect of those biases and accounting for them in the design of our algorithms and interactive systems could significantly improve the quality of machine learning applications’ results.
Tversky, Amos, and Daniel Kahneman. “Judgment under uncertainty: heuristics and biases.” Science 185, no. 4157 (1974): 1124–1131.
Adomavicius, Gediminas, Jesse Bockstedt, Shawn Curley, and Jingjing Zhang. “Recommender systems, consumer preferences, and anchoring effects.” In RecSys 2011 Workshop on Human Decision Making in Recommender Systems, pp. 35–42. 2011.
Koren, Yehuda. “The BellKor solution to the Netflix Grand Prize.” Netflix Prize documentation (2009).
Cardie, Claire. “A cognitive bias approach to feature selection and weighting for case-based learners.” Machine Learning 41, no. 1 (2000): 85–116.
Cardie, Claire. “Integrating case-based learning and cognitive biases for machine learning of natural language.” J. Exp. Theor. Artif. Intell. 11, no. 3 (1999): 297–337.
Slovic, Paul. “The construction of preference.” American Psychologist 50, no. 5 (1995): 364.
Carenini, Giuseppe, and David Poole. “Constructed preferences and value-focused thinking: implications for AI research on preference elicitation.” In AAAI–02 Workshop on Preferences in AI and CP: symbolic approaches, pp. 1–10. 2002.
Herbrich, Ralf, Thore Graepel, Peter Bollmann-Sdorra, and Klaus Obermayer. “Learning preference relations for information retrieval.” In ICML–98 Workshop: text categorization and machine learning, pp. 80–84. 1998.
Carterette, Ben, and Desislava Petkova. “Learning a ranking from pairwise preferences.” In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 629–630. ACM, 2006.