Thousands of pro-Trump bots attack DeSantis and Haley on Twitter

In the past 11 months, someone has created thousands of fake automated Twitter accounts — perhaps hundreds of thousands — to praise former President Trump.

The fake accounts not only published words of admiration for Trump, but also mocked his critics in both parties and attacked Nikki Haley, the former South Carolina governor and United Nations ambassador who is challenging her former boss for the 2024 Republican presidential nomination.

When it came to Florida Governor Ron DeSantis, the bots aggressively suggested that he couldn’t beat Trump but would make a good running mate.

As Republican voters weigh their candidates for 2024, whoever created the bot network is trying to put a finger on the scale, using online manipulation techniques pioneered by the Kremlin to sway Twitter conversations about the candidates while exploiting the platform's algorithms to maximize their reach.

The massive bot network was discovered by researchers at Cyabra, an Israeli technology company, who shared their findings with the Associated Press. While the identities of the people behind the network of fake accounts are unknown, Cyabra analysts have determined it was likely created in the United States.

To identify a bot, researchers look for patterns in an account’s profile, list of followers, and the content it posts. Human users usually post on different topics with a mix of original and reposted material, but bots often post repetitive content on the same topics.
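That repetitive-posting pattern lends itself to a simple heuristic. The sketch below is purely illustrative — it is not Cyabra's method, and the `repetition_score` function and sample posts are invented for the example — but it shows how recycled content can separate bot-like accounts from human ones:

```python
from collections import Counter

def repetition_score(posts):
    """Fraction of an account's posts that duplicate another of its posts.

    Human accounts, which mix original and reposted material on varied
    topics, tend to score low; bots recycling identical talking points
    score high.
    """
    if not posts:
        return 0.0
    counts = Counter(posts)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(posts)

# A bot-like account repeating the same lines scores near 1.0:
bot_posts = ["Trump was the best"] * 8 + ["Jan. 6 was a lie"] * 2
human_posts = ["Game night!", "Trying a new recipe", "Traffic is awful today"]
print(repetition_score(bot_posts))    # 1.0
print(repetition_score(human_posts))  # 0.0
```

Real classifiers combine many such signals (profile completeness, follower graphs, posting cadence), but the intuition is the same.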

This was true for many of the bots identified by Cyabra.

“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘Jan. 6 was a lie and Trump was innocent,’” said Jules Gross, the Cyabra engineer who first discovered the network. “These voices are not people. For the sake of democracy, I want people to know this is happening.”

Bots, as they are commonly known, are fake automated accounts that became notorious after Russia used them to interfere in the 2016 presidential election. While major tech companies have improved their detection of fake accounts, the network identified by Cyabra shows that they remain a potent force in shaping online political discourse.

The new pro-Trump network is actually three different networks of Twitter accounts, all created in bulk in April, October and November 2022. Overall, researchers believe hundreds of thousands of accounts could be involved.

The accounts all contain personal photos of the alleged account owner along with a name. Some accounts have posted their own content, often in response to real users, while others have reposted content from real users to further amplify it.

“McConnell…Traitor!” read one of the posts, published in response to an article in a conservative publication about Senate Republican leader Mitch McConnell (R-Ky.), one of many Republican critics of Trump targeted by the network.

One way to gauge the impact of bots is to calculate the percentage of posts on a given topic generated by accounts that appear fake. The percentage for typical online debates is often in the low single digits. Twitter itself has said that less than 5% of its daily active users are fake or spam accounts.
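As a hypothetical sketch of that metric (the function name and toy data are invented here, and the fake/real labels would come from an upstream bot classifier), the calculation is a simple share:

```python
def inauthentic_share(posts):
    """Percentage of posts in a topic sample that come from accounts
    flagged as likely fake. `posts` is a list of (account_id, is_fake)
    pairs produced by some upstream bot-detection step."""
    if not posts:
        return 0.0
    fake = sum(1 for _, is_fake in posts if is_fake)
    return 100.0 * fake / len(posts)

# Toy sample: 3 of 4 negative posts flagged as fake -> 75%,
# far above the low-single-digit share of organic conversations.
sample = [("a1", True), ("a2", True), ("a3", True), ("a4", False)]
print(inauthentic_share(sample))  # 75.0
```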

However, when Cyabra researchers examined negative posts about specific Trump critics, they found much higher rates of inauthenticity. Nearly three-quarters of the negative posts about Haley, for example, were traced to fake accounts.

The network also amplified calls for DeSantis to join Trump as his vice presidential running mate — an outcome that would serve Trump well and allow him to avoid a potentially bitter matchup should DeSantis enter the race.

The same network of accounts shared overwhelmingly positive content about Trump and contributed to an overall misrepresentation of his support online, researchers found.

“Our understanding of prevailing Republican sentiment for 2024 is being manipulated by the proliferation of bots on the Internet,” the Cyabra researchers conclude.

The triple network was discovered after Gross analyzed tweets about various national political figures and found that many of the accounts posting the content were created on the same day. Most accounts remain active despite a relatively modest number of followers.
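The same-day creation pattern Gross noticed can itself be turned into a bulk-registration check. The following is an illustrative sketch, not Cyabra's actual tooling — the function, threshold, and sample accounts are assumptions for the example:

```python
from collections import Counter
from datetime import date

def creation_date_clusters(accounts, threshold=3):
    """Group accounts by creation date and flag any date with a
    suspiciously large number of sign-ups -- a hint that accounts
    were registered in bulk, as with the April/October/November 2022
    waves described above.

    `accounts` is a list of (account_id, creation_date) pairs.
    """
    counts = Counter(created for _, created in accounts)
    return {d: n for d, n in counts.items() if n >= threshold}

accounts = [
    ("u1", date(2022, 4, 12)), ("u2", date(2022, 4, 12)),
    ("u3", date(2022, 4, 12)), ("u4", date(2021, 7, 3)),
]
# Flags 2022-04-12, where three accounts were created at once.
print(creation_date_clusters(accounts))
```

In practice the threshold would be far higher and combined with other signals, since legitimate sign-up spikes (a product launch, a news event) also cluster by date.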

A message left with a spokesperson for Trump's campaign was not immediately returned.

According to Samuel Woolley, a University of Texas professor and disinformation researcher whose latest book focuses on automated propaganda, most bots are not designed to persuade people, but rather to amplify specific content so more people see it.

When a human user sees a bot’s hashtag or content and reposts it, they’re doing the work of the network for it and also sending a signal to Twitter’s algorithms to further encourage the spread of the content.

Bots can also succeed in convincing people that a candidate or idea is more or less popular than it actually is, Woolley said. For example, more pro-Trump bots may lead people to overestimate his overall popularity.

“Bots definitely affect the flow of information,” Woolley said. “They are built to give the illusion of popularity. Repetition is the nuclear weapon of propaganda, and bots are very good at it. They are very good at conveying information to people.”

Until recently, most bots were easy to spot due to their clumsy spelling or account names containing nonsensical words or long strings of random numbers. As social media platforms have gotten better at tracking these accounts, the bots have gotten more sophisticated.

So-called cyborg accounts are one example: bots that are periodically taken over by a human user, who can post original content and interact with other users in human-like ways, making the accounts much harder to detect.

Thanks to advances in artificial intelligence, bots may soon become much harder to spot. New AI programs can create lifelike profile pictures and posts that sound far more authentic. Bots that sound like real people and use deepfake video technology could challenge platforms and users in new ways, said Katie Harbath, a fellow at the Bipartisan Policy Center and former director of public policy at Facebook.

“Platforms have gotten so much better at fighting bots since 2016,” Harbath said. “But the actors we’re seeing now can use AI to create fake people and fake videos.”

These advancements in technology are likely to ensure that bots have a long future in US politics – as digital foot soldiers in online election campaigns and as potential troublemakers for voters and candidates trying to defend against anonymous online attacks.

“There has never been more noise online,” said Tyler Brown, a policy adviser and former digital director of the Republican National Committee. “How much of it is maliciously or even unintentionally false? It’s easy to believe that people can manipulate it.”

Author: DAVID CLIPPER

Source: LA Times
