Harmful Hallucinations: Generative AI and Elections

McKay, Spencer; Tenove, Chris; Gupta, Nishtha; Ibañez, Jenina; Mathews, Netheena; Tworek, Heidi
Abstract
It was predicted that 2024 would be the “Year of Deepfake Elections.” Generative artificial intelligence (GenAI) technologies have indeed been used at unprecedented levels in elections around the world. However, the consequences of GenAI for elections and the appropriate policy responses remain unclear. This report argues that while GenAI technologies currently pose greater risks than benefits to democratic elections, those risks remain manageable. In general, GenAI lowers the costs of some existing threats to democracy rather than creating entirely new ones. We recommend actions for governments, technology companies, journalists, and other actors to reduce the additional risks posed by GenAI in elections. Most of these are extensions of existing efforts to protect democratic elections, though some – such as regulating the creation and transparency of models – are more GenAI-specific. All these responses need to be part of broader efforts to strengthen and build trust in democratic institutions.

The report begins by clarifying the capabilities of GenAI to enhance or undermine key democratic goods, including free and fair elections, reliable and accurate information, mutual respect and inclusion, a balance of trust and critical judgment, and effective representation. GenAI could contribute to democratic goods by informing citizens, supporting deliberation, and improving representation. To date, however, there is little evidence that GenAI has effectively delivered these benefits. GenAI may undermine democratic goods by empowering deception, polluting the information environment, and enhancing harassment. These harmful uses have been seen in recent elections around the world. Political figures have had their voices or likenesses imitated to mislead voters. Some politicians and political candidates have been harassed through deepfakes showing non-consensual intimate content. The information ecosystem has also seen the proliferation of chatbots that confidently provide incorrect information, alongside the mass production of low-quality AI-generated content. This report includes five brief case studies that examine the recent misuse of GenAI in France, India, Slovakia, the United Kingdom, and the United States. These studies suggest that almost all near-term uses of GenAI are extensions of existing techniques to misinform, manipulate, or misdirect citizens and officials during elections.

There is no technological silver bullet to address the risks that GenAI poses to democracy. Changes to AI model design and efforts to develop effective watermarking technologies need to be paired with policy changes and robust interventions by GenAI service providers, social media platforms, journalists, trust and safety professionals, academic researchers, electoral management bodies (EMBs), and political parties. Many current responses by GenAI service providers, social media platforms, and political parties are based on voluntary commitments that lack teeth. The capacity for independent scrutiny by journalists, academics, and trust and safety professionals is constrained by a lack of resources – including access to data – and insufficient independence from GenAI services and platforms. As a result, state-led regulation, including vigorous oversight by EMBs, is necessary.

The report concludes by making sector-specific recommendations to reduce the risks posed by GenAI to democratic elections.
Key recommendations include:

● Governmental and non-governmental actors should not overstate the dangers that GenAI poses to democracy, since this messaging can itself undermine election legitimacy and participation. Vigilance is needed, not alarmism or despair.

● Policy responses should focus on targeting specific forms of harmful content rather than on assessing whether it is generated by AI. Accountability measures to address these harmful uses need to be clarified and strengthened, both to deter future violations and to reassure the public that GenAI need not lead to a breakdown of election laws and norms. In the short term, particular attention should be given to deepfake content used to harass or intimidate politicians and election candidates, and to synthetic content spreading false information about election participation.

● Governments should enact regulatory frameworks for AI and social media services to ensure transparency, safeguard fundamental rights, and promote risk-based mechanisms to address the most harmful uses of GenAI in election contexts. Technical solutions, such as watermarking, should continue to be explored, but they will be challenging to standardize and enforce across jurisdictions, and will be vulnerable to evasion by more sophisticated actors.

● Governments should take steps to ensure a robust information ecosystem that is capable of scrutinizing and responding to harmful content, AI-generated or otherwise. This may include enacting and enforcing transparency requirements for AI service providers, incentivizing the development of a trust and safety sector, funding relevant research, and investing in citizen literacy.

● Deceptive uses of GenAI can be partly addressed by high-quality, high-visibility corrective messaging. Several sectors can play critical roles in creating and amplifying high-quality information, including the news media, electoral management bodies, political parties, and social media platforms. These organizations need to develop their own capacities and best practices, and governments should provide direct and indirect support to enhance their capacity to do so.

● Several contexts or contributing factors can make the misuse of GenAI more likely to harm democratic elections, such as deepfakes of supposedly private remarks by candidates or election officials immediately prior to voting day. Government agencies and non-governmental actors should conduct red-teaming exercises and develop strategies to address such high-risk scenarios.

● The potential for GenAI to produce democratic goods requires further research and experimentation. This will likely require investment and cooperation between AI service providers, electoral management bodies, journalists, and academic researchers, and it should be done transparently and cautiously to maintain public trust.

While we do not think GenAI presents a doomsday scenario for democratic elections, policymakers should not be complacent. Ultimately, policymaking should support a resilient information system with institutions capable of producing and disseminating accurate, trusted information, and building citizens’ confidence and capacities to participate in these systems. Such efforts will build stronger democracies, regardless of the communication technologies we use.
Item Metadata
Title: Harmful Hallucinations: Generative AI and Elections
Creator: McKay, Spencer; Tenove, Chris; Gupta, Nishtha; Ibañez, Jenina; Mathews, Netheena; Tworek, Heidi
Date Issued: 2024-08-21
Language: eng
Date Available: 2024-08-20
Provider: Vancouver : University of British Columbia Library
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
DOI: 10.14288/1.0445035
Citation: McKay, Spencer, Chris Tenove, Nishtha Gupta, Jenina Ibañez, Netheena Mathews, and Heidi Tworek. (2024). Harmful Hallucinations: Generative AI and Elections. Vancouver: Centre for the Study of Democratic Institutions, University of British Columbia.
Peer Review Status: Unreviewed
Scholarly Level: Faculty; Postdoctoral; Graduate
Aggregated Source Repository: DSpace