UBC Theses and Dissertations
Artificial minds and real beliefs : perceiving mental states in AI
Jacobs, Oliver
Abstract
Mental attribution refers to ascribing or perceiving mental states in others—be they people, animals, or even non-sentient targets like artificial intelligence (AI), Large Language Models (LLMs), and social robots. Under the mind perception framework, these attributions can be categorized as agentic (the ability to do) or experiential (the ability to feel). Using this framework, over the course of eleven experiments (2020-2024), I systematically investigate mental attributions toward AI and LLMs, how such attributions affect how people perceive human minds, and how these effects vary across individuals. After an Introduction in Chapter 1, Chapter 2 provides a taxonomic structure for categorizing ongoing psychological work with LLMs, situating the work reported in the subsequent chapters and offering a roadmap for organizing future psychological research with LLMs. In Chapter 3, I find that people ascribe both agentic and experiential attributions to a wide range of robotic and AI agents, including LLMs. In Chapter 4, I investigate how loneliness influences mental attribution toward LLMs, finding that loneliness, moderated by prior exposure, predicts greater experiential attributions but not agentic attributions. In Chapter 5, I demonstrate that the mind perception framework can be used to investigate how people view their own minds. I then examine whether exposure to LLMs can influence how people view their own minds and which attributes they consider uniquely human. I find that, after being exposed to LLMs, people increase their self-evaluations of agency and experience while reducing their belief that these features of mind are uniquely human. In Chapter 6, I find that a forced-choice design, in contrast to an absolute numerical scale, yields greater preferences for human-generated art over AI-generated art.
Collectively, across the eleven experiments, I demonstrate that individuals frequently attribute both agency and experience to AI and that these attributions, in turn, affect how people perceive human minds—in themselves and in others.
Item Metadata
Title | Artificial minds and real beliefs : perceiving mental states in AI
Creator | Jacobs, Oliver
Supervisor |
Publisher | University of British Columbia
Date Issued | 2025
Genre |
Type |
Language | eng
Date Available | 2025-04-04
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0448298
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2025-05
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace