Neurofinance, Cognition

Paper Session

Friday, Jan. 3, 2025 2:30 PM - 4:30 PM (PST)

San Francisco Marriott Marquis, Yerba Buena Salon 3 & 4
Hosted By: American Finance Association
  • Andrea Eisfeldt, University of California-Los Angeles

On the Source and Instability of Probability Weighting

Cary Frydman, University of Southern California
Lawrence Jin, Cornell University

Abstract

We propose and experimentally test a new theory of probability distortions in risky choice. The theory is based on a core principle from neuroscience called efficient coding, which states that information is encoded more accurately for those stimuli that the agent expects to encounter more frequently. As the agent's prior beliefs vary, the model predicts that probability distortions change systematically. We provide novel experimental evidence consistent with this prediction: lottery valuations are more sensitive to probabilities that occur more frequently under the subject's prior beliefs. Our theory generates additional novel predictions regarding heterogeneity and time variation in probability distortions.

A Cognitive Foundation for Perceiving Uncertainty

Aislinn Bohren, University of Pennsylvania
Josh Hascher, University of Chicago
Alex Imas, University of Chicago
Michael Ungeheuer, Aalto University
Martin Weber, University of Mannheim

Abstract

We propose a framework in which perceptions of uncertainty are driven by the interaction between cognitive constraints and the way people learn about uncertainty—whether information is presented sequentially or simultaneously. People can learn about uncertainty by observing the distribution of outcomes all at once (e.g., seeing a stock return distribution) or by sampling outcomes from the relevant distribution sequentially (e.g., experiencing a series of stock returns). Limited attention leads to the overweighting of unlikely but salient events—the dominant force when learning from simultaneous information—whereas imperfect recall leads to the underweighting of such events—the dominant force when learning sequentially. A series of studies shows that, when learning from simultaneous information, people are overoptimistic about, and attracted to, assets that mostly underperform but sporadically exhibit large outperformance. However, they overwhelmingly select more consistently outperforming assets when learning the same information sequentially, and this is reflected in their beliefs. The entire 40-percentage-point preference reversal appears to be driven by limited attention and memory; manipulating these factors completely eliminates the effect of the learning environment on choices and beliefs, and can even reverse it.

Theory Is All You Need: AI, Human Cognition, and Decision Making

Teppo Felin, University of Oxford
Matthias Holweg, University of Oxford

Abstract

Artificial intelligence (AI) now matches or outperforms human intelligence in an astonishing array of games, tests, and other cognitive tasks that involve high-level reasoning and thinking. Many scholars argue that—due to human bias and bounded rationality—humans should (or will soon) be replaced by AI in situations involving high-level cognition and strategic decision making. We disagree. In this paper we first trace the historical origins of the idea of artificial intelligence and human cognition as a form of computation and information processing. We highlight problems with the analogy between computers and human minds as input-output devices, using large language models as an example. Human cognition—in important instances—is better conceptualized as a form of theorizing rather than data processing, prediction, or even Bayesian updating. Our argument, when it comes to cognition, is that AI's data-based prediction is different from human theory-based causal logic. We introduce the idea of belief-data (a)symmetries to highlight the difference between AI and human cognition, and use "heavier-than-air flight" as an example of our arguments. Theories provide a mechanism for identifying new data and evidence, a way of "intervening" in the world, experimenting, and problem solving. We conclude with a discussion of the implications of our arguments for strategic decision making, including the role that human-AI hybrids might play in this process.

Cognitive Inequality and Big Data

Laura Veldkamp, Columbia University
Indira Puri, New York University

Abstract

We combine insights from medical and big data literature to propose a novel model, which suggests that the expansion of big data exacerbates cognitive inequality. While individuals with high cognitive abilities may benefit from the targeting and customization facilitated by big data, those with lower cognitive abilities—and even children—may suffer adverse effects. Data from political discourse supports our predictions. The findings introduce a new consideration to the debate on big data regulation and emphasize the necessity of addressing cognitive inequality.

Discussant(s)
Lars Lochstoer, University of California-Los Angeles
Ryan Oprea, University of California-Santa Barbara
Barney Hartman-Glaser, University of California-Los Angeles
Michael Sockin, University of Texas-Austin
JEL Classifications
  • G0 - General