Ethics and healthcare artificial intelligence : some problems and solutions for advanced diagnostic systems
Wadden, Jordan Joseph
Abstract
The use of artificial intelligence for diagnostic purposes is a significant topic of discussion in the present-day healthcare field. Developers working on this technology intend it to outperform human clinicians and thereby remove some of the burden from clinicians so that they can spend more time developing relationships of care with their patients. I focus on three ethical questions I believe should be answered prior to the widespread onboarding of these systems in healthcare.

I first investigate what kind of artificial intelligence we ought to want for diagnostic systems in healthcare. I analyse three high-level categories of artificial intelligence: opaque black box systems, robustly transparent explainable systems, and semi-transparent grey box systems. I begin by outlining the characteristics of these systems, culminating in a novel definition of the black box problem. I then use this information to analyse each kind of high-level system. I defend the position that grey boxes are the best kind of system for healthcare applications because of their customizability and semi-transparent nature.

Second, I examine what obligations clinicians ought to have to their patients whenever they employ an artificial intelligence for diagnostics. I separate this chapter into three sets of obligations, one for each of the categories from chapter one, since a clinician may not always have the option to work exclusively with grey box systems. By providing three lists, I ensure that clinicians have a minimum starting point regardless of what kind of system they employ.

Finally, I address the implicitly articulated concerns surrounding whether the general push for more advanced AI in healthcare jeopardizes a patient's ability to provide informed consent to a procedure or treatment plan. These concerns are raised nearly ubiquitously in the literature, yet to date no comprehensive analysis of the issue exists. I argue that the concerns over consent are actually due to a false dilemma in how the kinds of artificial intelligence are discussed in the literature. I then demonstrate how this false dilemma can be defeated through appeals to grey box systems and the use of human clinicians as secondary readers.
Item Metadata
Title | Ethics and healthcare artificial intelligence : some problems and solutions for advanced diagnostic systems
Creator | Wadden, Jordan Joseph
Supervisor |
Publisher | University of British Columbia
Date Issued | 2023
Genre |
Type |
Language | eng
Date Available | 2023-02-23
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0427266
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2023-05
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace