UBC Theses and Dissertations
Personalizing explanations of AI hints based on user characteristics in an intelligent tutoring system
Bahel, Vedant Rajesh
Abstract
Explainable AI (XAI) refers to having artificial intelligence-based systems provide understandable explanations or justifications for their decisions, predictions, or actions. In essence, XAI seeks to demystify the opaque nature of AI models, allowing humans to comprehend the reasoning behind AI-driven outcomes. While such explanations are beneficial in general, their need and effectiveness may depend on application-dependent factors and user differences. In this thesis, we investigate personalizing the explanations that an Intelligent Tutoring System (ITS) generates to justify the hints it provides to students to foster their learning. The personalization targets students with low levels of two traits, Need for Cognition and Conscientiousness, and aims to enhance these students' interaction with the explanations, based on prior findings that such students do not naturally engage with the explanations but benefit from them when they do. To evaluate the effectiveness of the personalization, we conducted a user study and found that our proposed personalization significantly increased our target users' interaction with the hint explanations, as well as their learning. Hence, this work provides insights into personalizing AI-driven explanations for learning, which is arguably a cognitively demanding task. Furthermore, to gain initial insights into whether and how hint explanations should be personalized based on the user's state of confusion about the hints, we conducted an exploratory study using the same ITS. The goal of this study was to see whether and how users rely on explanations when they are confused about the AI hints. However, no user in the study reported confusion about the ACSP hints, indicating the need for further investigation of the potential role of short-term user states in explanation effectiveness.
Item Metadata

Title | Personalizing explanations of AI hints based on user characteristics in an intelligent tutoring system
Creator | Bahel, Vedant Rajesh
Publisher | University of British Columbia
Date Issued | 2024
Language | eng
Date Available | 2024-05-30
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0443819
Degree Grantor | University of British Columbia
Graduation Date | 2024-11
Scholarly Level | Graduate
Aggregated Source Repository | DSpace