Behavioral Economics: Lab
Behavioral economics applies psychological principles and methods to analyze economic decisions. Usually, this means incorporating people’s natural limits on computational ability, willpower, and selfishness.
Much of our research uses experiments in a physical laboratory or online. We also run interventions in ongoing field settings and study how people who make decisions in their everyday lives or markets answer experimental questions ("lab-in-the-field" studies).
For example, we recently studied bargaining between two players, where only one player knows the amount that is available to share (Camerer et al., 2019, Management Science). We used machine learning to identify which fine-grained details of the bargaining process predicted how often players reached agreement. This type of research, expanded to include other biological factors, could be used to design "neuroeconomic mediation" to reduce disagreements (and possibly legal bills and heartache as well).
Behavioral Economics: Field
Our group has done many analyses of natural field data. Usually, our motivation is to test whether theories that have been carefully tested and honed in laboratory experiments are also 'portable' to real-life, consequential decisions.
For example, we have created algorithmic models of how people use low or high levels of strategic thinking in experimental games. Our "cognitive hierarchy" approach (closely related to level-k analyses) can explain many strategic choices better than more familiar theories, such as "equilibrium analysis," which assumes that game players have somehow figured out what everyone else is likely to do.
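As a rough illustration (this is a sketch, not our estimation code), a Poisson cognitive-hierarchy model can be written in a few lines for the classic "p-beauty contest," where players pick numbers in [0, 100] and the winner is whoever is closest to p times the average. The parameter tau (the average number of thinking steps) and the level-0 behavior below are illustrative assumptions:

```python
import math

def poisson_weights(tau, k_max):
    # Unnormalized Poisson(tau) frequencies for thinking levels 0..k_max.
    return [math.exp(-tau) * tau**k / math.factorial(k) for k in range(k_max + 1)]

def ch_beauty_contest(p=2/3, tau=1.5, k_max=6):
    """Cognitive-hierarchy guesses in a p-beauty contest on [0, 100].

    Level-0 players guess 50 on average (an assumption); a level-k player
    best-responds to the Poisson-weighted mixture of levels 0..k-1 by
    guessing p times that mixture's average guess.
    """
    f = poisson_weights(tau, k_max)
    guesses = [50.0]  # level-0 average guess
    for k in range(1, k_max + 1):
        w = f[:k]  # perceived frequencies of lower levels
        avg_lower = sum(wi * gi for wi, gi in zip(w, guesses)) / sum(w)
        guesses.append(p * avg_lower)
    return guesses
```

Unlike equilibrium analysis, which predicts everyone converges to guessing 0, the hierarchy stops after a finite number of steps: each extra level of thinking pushes the guess lower, but guesses never reach zero, which matches the clustered intermediate guesses seen in experiments.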
Can this type of cognitive hierarchy theory, born and raised in a lab, also find limits on strategic thinking in everyday settings? The answer is yes. We found that when moviegoers were deciding what movies to see (in 2000-09), some of them ignored the fact that low-quality movies were often not shown to critics. The lack of critic reviews is a signal of low quality (Brown et al., 2012, AEJ: Micro).
Our group is now working with large-scale data sets on fascinating types of behavior. The key is that these so-called "big data" have three important "V" qualities: high volume of data; a variety of measures (e.g., choices, social media, sensors, IoT); and high velocity of new data being collected as you read this. Our group believes that these types of data, along with innovations in scientific methods and valuable 'trade' between scientific disciplines, put us in a Golden Age of Social Science.
Neuroeconomics
Neuroeconomics uses ideas and methods from neuroscience to understand the detailed mechanisms of economic decision making. Our group believes in the ambitious idea that the best theories of human behavior should be understood at three mutually constraining levels: Why does the behavior exist (what is it for, adaptively)? How does the behavior work, expressed as an algorithm? And what are the neural mechanisms that implement it?
For example, habits form to make regular choices in order to save time at a low cost (the Why); they are a consequence of autopilot decision making when a habitual choice has high reward reliability (the How); and habits are implemented by distinct neural circuitry, including reliability signals that tell the brain when to interrupt habits and think harder (the What).
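The algorithmic level can be caricatured in code. The sketch below is purely illustrative, not a model from our papers: it tracks a running "reward reliability" for a habitual action and hands control to slower deliberation when reliability drops below a threshold (the threshold and window size are invented parameters):

```python
def habit_arbiter(rewards, threshold=0.8, window=10):
    """Toy controller: act on habit while the recent reward rate stays high.

    `rewards` is a 0/1 history of whether the habitual choice paid off.
    Returns, per trial, whether control stays with the habit or is
    interrupted for deliberate thinking. Illustrative only.
    """
    recent, modes = [], []
    for r in rewards:
        recent.append(r)
        if len(recent) > window:
            recent.pop(0)  # keep only the last `window` outcomes
        reliability = sum(recent) / len(recent)
        modes.append("habit" if reliability >= threshold else "deliberate")
    return modes
```

On a reward history that suddenly turns bad (say, ten successes followed by a run of failures), the arbiter stays on autopilot at first and only interrupts once the failures accumulate, mirroring the idea that reliability signals decide when a habit is worth rethinking.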
The Reproducibility Upgrade
The goal of science is to create theories that distill accumulated empirical regularities, to make predictions, and to build machines and systems that solve problems. Evidence cannot accumulate if scientific methods do not produce replicable results. Beginning around 2005, scientists across many disciplines found that it was often not easy to obtain the actual data needed to reproduce statistical results. Even when data were available, basic results often could not be reproduced, and when experiments were repeated, the results were not always the same.
Good science self-corrects, but how and how fast? Open Science is quickly self-correcting social science. The result is a “reproducibility upgrade.” The upgrade is a series of changes to how planned collection and analyses of data are shared (“pre-registration”); how journals select and review studies for publication; and how well different kinds of methods and results replicate.
Our group has been active in several areas of the Reproducibility Upgrade. With large international teams, we have studied how well social science laboratory experiments replicate, using studies from economics journals (Camerer, 2016, Science) and from Science and Nature (Camerer, 2018, NHB). We have also studied how accurately prediction markets (bets for money on whether a scientific finding will replicate) forecast actual replication; they generally forecast well, although not always. By combining all of these data, we have also used machine learning methods to figure out which features of a particular social science experiment make it likely to replicate or not.
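As a schematic of that last step (none of the features, data, or code here come from the actual studies), predicting replication from study features is essentially a supervised classification problem. A from-scratch logistic regression on made-up features, such as the original p-value and sample size, looks like this:

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Minimal gradient-descent logistic regression (no external libraries)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            z = max(-30.0, min(30.0, z))           # guard against overflow
            err = 1.0 / (1.0 + math.exp(-z)) - yi  # predicted prob minus label
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical training data: a study "replicates" (label 1) when its
# original p-value was small and its sample was large. Invented rule.
random.seed(1)
X, y = [], []
for _ in range(200):
    p_value = random.random() * 0.05      # original p-value
    log_n = random.uniform(3, 7)          # log of sample size
    X.append([100 * p_value, log_n])      # crude feature scaling
    y.append(1 if (p_value < 0.01 and log_n > 4) else 0)

w, b = train_logistic(X, y)
```

On data like this, the fitted weights point in the intuitive direction: a smaller original p-value and a larger sample both raise the predicted probability of replication. The real analyses use richer feature sets and more flexible models, but the logic is the same.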
We also did a unique large-scale analysis of economic decisions on an educational platform (https://moblab.com/), which keeps the experimental method identical across thousands of experimental sessions used to teach economics. We found that trading in buyer-seller market experiments is almost the same around the world, but that bargaining in take-it-or-leave-it ultimatum games varies in how much different groups of people share.