AI is boosting disinformation attacks on voters, especially in communities of color
Op-Ed
Bill Wong and Mindy Romero | March 22, 2024
As the general election campaign begins in earnest, we can expect disinformation attacks to target voters, especially in communities of color. This has happened before: In 2016, for example, Russia’s disinformation programs targeted Black Americans, creating Instagram and Twitter accounts masquerading as Black voices and producing fake news websites such as blacktivist.info, blacktolive.org, and blacksoul.us.
Technological advances will make these efforts more difficult to recognize. Imagine those same fake accounts and websites with hyper-realistic videos and images designed to sow racial division and mislead people about their right to vote. The advent of generative artificial intelligence makes this possible at little to no cost, fueling the kind of misinformation that has always targeted communities of color.
It will be a problem for candidates, election offices and voter groups in the coming months. But voters will ultimately have to decide for themselves what is real and what is fake, what is authentic and what is AI-generated.
For immigrants and communities of color who often face language barriers, distrust democratic systems, and lack access to technology, the challenge is likely to be greater. Across the country, and especially in states like California, with large communities of immigrants and people with limited English proficiency, governments must help these groups identify and avoid misinformation.
Asian Americans and Latinos are especially vulnerable. Immigrants make up about two-thirds of Asian Americans and Pacific Islanders, and a Pew Research Center report shows that 86% of Asian immigrants ages 5 and older say they speak a language other than English at home. The same dynamic applies to Latinos: only 38% of the foreign-born Latino population in the US reports proficiency in English.
Targeting non-English-speaking communities offers several advantages to those seeking to spread disinformation. These groups are often cut off from the mainstream news sources that have the greatest resources to debunk deepfakes and other disinformation, preferring online engagement in their native languages, where moderation and fact-checking are less common. Forty-six percent of Latinos in the US use WhatsApp, while many Asian Americans prefer WeChat; Wired magazine reported that WeChat is used by millions of Chinese Americans and people with friends, family or business in China, including as a tool for political organizing.
Disinformation targeting immigrant communities is poorly understood and difficult to detect and counter, yet it is becoming ever easier to create. In the past, producing fake content in non-English languages required intensive human labor and was often of low quality. Now AI tools can generate hard-to-trace disinformation in virtually any language at lightning speed, free of the quality and scale constraints of human effort. Despite this, much of the research on misinformation and disinformation focuses on English-language content.
Efforts to target communities of color and non-English speakers with disinformation are aided by the fact that many immigrants rely heavily on their cell phones for internet access. Mobile user interfaces are particularly vulnerable to misinformation because many desktop design and branding cues are minimized in favor of content on smaller screens. With 13% of Latinos and 12% of African Americans relying on mobile devices for broadband access, compared with 4% of white smartphone owners, these groups are more likely to receive and share false information.
Social media companies’ efforts to combat voter misinformation have fallen short. Meta’s announcement in February that it would flag AI-generated images on Facebook, Instagram, and Threads is a positive but small step toward countering AI-generated misinformation, especially for ethnic and immigrant communities that may know little about the technology’s effects. It is clear that a stronger government response is needed.
The California Initiative for Technology and Democracy, or CITED, on whose board of directors we serve, will soon unveil a legislative package that would require broader transparency for generative AI content, letting social media users know which video, audio, and images were created by AI tools. The bills would also require labeling of AI-enabled political disinformation on social media, ban the use of the technology in campaign ads before elections, and restrict anonymous trolls and bots.
In addition, CITED plans to host a series of community forums in California with partner organizations rooted in their regions. The groups will speak directly to leaders in communities of color, labor leaders, local elected officials and other trusted messengers about the dangers of false AI-generated information likely to be circulating this election season.
The hope is that this information will be relayed at the community level, making voters in the state more aware and skeptical of false or misleading content and building trust in the election process, election results and our democracy.
Bill Wong is a campaign strategist and author of Better to Win: Hardball Lessons in Leadership, Influence, & the Craft of Politics. Mindy Romero is a political sociologist and director of the Center for Inclusive Democracy at the USC Price School of Public Policy.