"Stop Generating" : Generative AI in the context of Indigenous Studies Gaertner, David, 1979-
Item Metadata
Title | "Stop Generating" : Generative AI in the context of Indigenous Studies
Creator | Gaertner, David, 1979-
Date Issued | 2024-10-24
Description |
Generative AI has forced universities to contend with complex ethical and social questions, not least because writing is so deeply entrenched as an institutional gatekeeping mechanism. For many students, particularly those from marginalized backgrounds or for whom English is not a first language, the pressure to translate ideas into “proper” English contributes to attrition rates and exacerbates feelings of inadequacy, alienation, and exclusion from many academic communities.
From an equity and inclusion perspective, AI has the potential to disrupt institutional barriers by offering accessible tools that level the grammatical playing field. By functioning as virtual tutors or co-writers, AI systems can assist students in producing more polished and coherent prose, thus challenging the traditional hierarchies that privilege certain grammatical and stylistic norms. Instead of attempting to ban these tools (which is, to say the least, impractical), I side with a growing number of technology scholars who argue that we should focus on teaching students how to use generative AI responsibly and effectively. However, I do so with the caveat that teaching responsible AI use means critically engaging the complex and often messy processes that make AI what it is.
In this presentation, I draw from Indigenous theorists and authors to situate generative AI and large language models (LLMs) within a long colonial history of extraction. Just as colonial states declare Indigenous lands terra nullius, allowing settlers to exploit resources through mining, clear-cutting, and other forms of extraction, generative AI similarly depends on the unchecked extraction of data, including Indigenous knowledge and cultural resources, often without consent. The late Gregory Younging refers to this process as gnaritas nullius, the colonial rendering of Indigenous knowledge into public property. The unchecked extraction of writing, including, but not limited to, Indigenous knowledge, represents a new frontier for colonial capitalism, where cultural and intellectual property are commodified by those with the most access and power. As Nando de Freitas notes, the future of AI development depends on scale: those who control the largest datasets will have the greatest advantage and profit the most from AI. The numerous high-profile copyright cases against companies like OpenAI and Meta show that how this data is collected is treated as a secondary issue. This unbridled, dehumanizing race for data mirrors the extractive practices that have driven capitalist-colonial expansion for centuries. Building on these ideas, I mobilize the insights of Indigenous authors like Younging, Scott Lyons, and Cherie Dimaline to highlight strategies for resisting colonial extraction and challenging capitalist systems through rhetorical sovereignty and the concept of incommensurability. The goal is not to discourage the use of generative AI but, in the Faustian sense, to reveal the costs of embracing it, especially when it is employed to subvert oppressive institutional structures. The speaker was introduced by Erin Fields, Open Education and Scholarly Communications Librarian, UBC.
Type |
Language | eng
Series |
Date Available | 2024-11-06
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0447199
URI |
Affiliation |
Peer Review Status | Unreviewed
Scholarly Level | Faculty
Rights URI |
Aggregated Source Repository | DSpace