Fake images created to show Trump with black supporters highlight concerns around AI and elections

(Evan Vucci/Associated Press)

Elections 2024, Artificial Intelligence

MATT BROWN and DAVID KLEPPER

March 8, 2024

At first glance, the images circulating online showing former President Trump surrounded by groups of Black people smiling and laughing may seem like nothing out of the ordinary. But a closer look reveals telltale signs.

Strange lighting and too-perfect details reveal that they were all generated using artificial intelligence. The photos, which have not been linked to the Trump campaign, emerged as Trump seeks to win over Black voters, who polls show remain loyal to President Biden.

The fabricated images, highlighted in a recent BBC investigation, lend further weight to warnings that the use of AI-generated images will only increase as the November general election approaches. Experts said they illustrate the danger that any group, whether Latinos, women or older male voters, could be targeted with lifelike images designed to deceive and confuse, and they demonstrate the need for regulation around the technology.

In a report published this week, researchers from the nonprofit Center for Countering Digital Hate used several popular AI programs to show how easy it is to create realistic deepfakes that can fool voters. The researchers were able to generate fake images of a meeting between Trump and Russian agents, Biden stuffing a ballot box and armed militia members at polling places, even though many of these AI programs say they have rules to ban this type of content.

The center analyzed some of the recent deepfakes of Trump and Black voters and found that at least one was originally created as satire but was now being shared by Trump supporters as evidence of his support among Black voters.

Social media platforms and AI companies must do more to protect users from the harmful effects of AI, said Imran Ahmed, CEO and founder of the center.

If a picture is worth a thousand words, then these dangerously susceptible image generators, combined with mainstream social media’s dismal efforts to moderate content, present as powerful a tool for bad actors to mislead voters as we’ve ever seen, said Ahmed. This is a wake-up call for AI companies, social media platforms and lawmakers to act so that American democracy is not endangered.

The images sparked alarm on both the right and left that they could mislead people about the former president’s support among black people. Some close to Trump have expressed frustration over the spread of the fake images, believing the fabricated scenes undermine Republican outreach to black voters.

If you see a photo of Trump with Black people and don’t see it on an official campaign or surrogate page, that didn’t happen, said Diante Johnson, president of the Black Conservative Federation. It is nonsensical to think that the Trump campaign would need to use AI to show its Black support.

Experts expect additional efforts to use AI-generated deepfakes to target specific voter blocs in key swing states, such as Latinos, women, Asian Americans and older conservatives, or any other demographic group a campaign hopes to attract, mislead or frighten. With dozens of countries holding elections this year, the challenges posed by deepfakes are a global problem.

In January, voters in New Hampshire received a robocall that mimicked Biden’s voice and falsely told them that if they voted in that state’s primary, they would be ineligible to vote in the general election. A political consultant later acknowledged creating the robocall, which may be the first known attempt to use AI to disrupt U.S. elections.

Such content can have a corrosive effect even if it is not believed, according to a February study by researchers at Stanford University that examined the potential impact of AI on Black communities. When people realize that they cannot trust the images they see online, they may disregard legitimate sources of information.

As AI-generated content becomes more common and more difficult to distinguish from human-generated content, individuals may become more skeptical and distrustful of the information they receive, the researchers wrote.

Even if it fails to fool a large number of voters, AI-generated content about voting, candidates and elections can make it harder for anyone to distinguish fact from fiction, causing people to disregard legitimate sources of information and fueling a loss of trust that undermines faith in democracy while deepening political polarization.

While false claims about candidates and elections are nothing new, AI makes it faster, cheaper, and easier than ever to create lifelike images, video, and audio. When AI deepfakes are released on social media platforms like TikTok, Facebook or X, they can reach millions before tech companies, government officials or legitimate news outlets are even aware of their existence.

AI has simply accelerated disinformation and pushed it forward rapidly, says Joe Paul, a businessman and advocate who has worked to increase digital access among communities of color. Paul noted that Black communities often have a history of distrust of major institutions, including in politics and the media, a distrust that makes them more skeptical of public narratives about them, as well as of fact-checking efforts designed to inform the community.

Digital literacy and critical thinking skills are a key defense against AI-generated disinformation, says Paul. The goal is to enable people to critically evaluate the information they encounter online.

Matt Brown and David Klepper write for the Associated Press.
