Economic Implications of Incorrect Mental Models

Paper Session

Friday, Jan. 6, 2023 8:00 AM - 10:00 AM (CST)

Hilton Riverside, Quarterdeck A-B
Hosted By: American Economic Association
  • Chair: Renee Bowen, University of California-San Diego

Evolution of Contract Systems

Giampaolo Bonomi, University of California-San Diego
Renee Bowen, University of California-San Diego and NBER
Joel Watson, University of California-San Diego

Abstract

This paper studies the formation and evolution of systems of contracts. Players interact in repeated simultaneous-move games and can formally or informally agree on rules regulating their interactions. Individual agreements can be partial (i.e., leave discretion over a subset of situations) and in conflict with each other, giving rise to system inconsistencies. We assess the conditions under which simultaneous contracts emerge and how conflicts are resolved when there are inconsistencies between contract terms or gaps in the system of contracts.

Mental Models and Learning

Ignacio Esponda, University of California-Santa Barbara
Emanuel Vespa, University of California-San Diego
Sevgi Yuksel, University of California-Santa Barbara

Abstract

We conduct laboratory experiments to explore the implications of incorrect mental models for the persistence of biases. We document that the mental gaps that commonly arise with certain behavioral biases persist even in the presence of transparent and substantial feedback. The reason is that subjects who suffer from these biases are typically confident that they know the correct answer and are unaware of the gap in their model of the world. This confidence results in both too little updating in response to information and reduced attention to it.

The Behavioral Foundations of Model Misspecification: A Decomposition

Aislinn Bohren, University of Pennsylvania
Daniel Hauser, Aalto University

Abstract

In this paper, we link two common approaches to modeling how agents process information and update beliefs: (i) defining an updating rule that specifies a mapping from prior beliefs and the signal to the agent's subjective posterior, and (ii) modeling an agent as a Bayesian learner with a misspecified model. Our main result shows that any misspecified model can be decomposed into an updating rule and another object, a forecast, which captures how the agent anticipates future information. Moreover, we derive necessary and sufficient conditions for a forecast and updating rule to be represented by a misspecified model, and we establish that this representation is unique. These conditions characterize all of the implications for belief formation implicit in the misspecified-model approach. Finally, we consider two natural ways to select forecasts, introspection-proofness and naive consistency, and demonstrate their impact on belief formation.
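
As a rough illustration of the two approaches contrasted in the abstract (the notation below is an assumption for exposition, not the paper's own), a misspecified Bayesian learner applies Bayes' rule with a subjective likelihood, whereas an updating rule is any mapping from the prior and the signal to a subjective posterior:

% Illustrative sketch only; the symbols ($\hat{\pi}$, $f$) are hypothetical and not taken from the paper.
% (ii) Misspecified Bayesian learner: Bayes' rule applied with a subjective
% likelihood $\hat{\pi}(s \mid \theta)$ that may differ from the true signal distribution.
\[
  \mu(\theta \mid s) \;=\;
  \frac{\hat{\pi}(s \mid \theta)\,\mu_0(\theta)}{\sum_{\theta'} \hat{\pi}(s \mid \theta')\,\mu_0(\theta')}
\]
% (i) Updating rule: an arbitrary mapping from the prior $\mu_0$ and the observed
% signal $s$ to the agent's subjective posterior, with no Bayesian structure imposed.
\[
  \mu(\cdot \mid s) \;=\; f(\mu_0, s)
\]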

Discussant(s)
Daniel Hauser, Aalto University
Kathleen Ngangoué, University of California-Los Angeles
Renee Bowen, University of California-San Diego

JEL Classifications
  • C7 - Game Theory and Bargaining Theory