This photo of Donald Trump is 100-per-cent real, but many of the ones you'll see on social-media platforms are not. Artificial intelligence has made it easier than ever to create realistic-looking hoaxes. Rebecca Cook/Reuters

Some of the images are obvious fakes: Donald Trump, thick tattooed biceps bulging as he brandishes an assault rifle in front of the U.S. Capitol. Mr. Trump and Kamala Harris sitting next to each other in claw-foot tubs, fully clothed but holding hands. A smiling Ms. Harris being cradled on a couch by Elon Musk, his hand on her pregnant belly.

But others might pass as real, their visual verisimilitude overshadowing their outlandish content. One shows Ms. Harris kneeling on the ground behind an American flag that is being consumed by flames. Another – posted to social media by Mr. Trump himself – shows Taylor Swift in an Uncle Sam outfit, pointing at the viewer with the caption: “Taylor Swift wants you to vote for Donald Trump.”

Ms. Swift, who in 2020 said she had voted for Joe Biden and supported Ms. Harris, has not given any indication that she has switched political allegiances.

But the possibility that some Trump followers were duped was the latest confirmation that a new era has arrived: the age of artificial intelligence in politics. It’s a moment that doesn’t merely promise fresh chaos, a more artful digital insertion of fiction into fact and the dissemination of disinformation at supercomputer speed.

It’s already happening.

On Wednesday, the Federal Communications Commission said Lingo Telecom would pay a US$1-million fine for allowing spoofed messages of Mr. Biden to be transmitted on its network. A political consultant used AI to clone the President’s voice in messages to voters during the New Hampshire Democratic primary, telling them not to cast a ballot.

At the same time, the machinery of American politics is racing to lubricate its own gears with AI tools, with startups and political strategists hoping that the newest generation of computer smarts can help them squeeze more dollars from donors, find new ways to propel voters to the ballot box and deliver messages tailored to more individual interests.

“If you’re not using it, you’re going to lose, because the competitor’s using it,” said Victoria Gaffney, CEO of Tiger Byte Consulting. It is a recently formed small company offering data analytics and AI services that has found itself unexpectedly popular inside the Democratic National Convention in Chicago this week. “As soon as people hear that you’re in AI, they’re like, ‘Please, help our campaign,’” Ms. Gaffney said.

Tiger Byte has this month worked with the Italian American Democratic Leadership Council to locate voters with Italian backgrounds who might have been overlooked in prior elections.

“They missed two per cent in Alaska, apparently,” said Ms. Gaffney.

Democratic leaders involved with outreach to different minority groups are already seeking ways to deliver more personalized messages, such as expanding the number of ethnic categories from eight to over 100. AI tools can help, using surname identification to sort people – but also by sifting through social media, online membership lists, school alumni records and publicly available home addresses. It’s a question of assessing individual backgrounds, their willingness to donate and even the neighbourhoods best suited to door-knocking.

For political campaigns, AI tools can take a stump speech and transform it into a social media post, or allow someone unfamiliar with communications to draft a press release.

“People ask me, ‘How is AI going to revolutionize campaigns?’ It’s going to revolutionize them in the most boring way possible,” said Mike Nellis, a Democratic strategist who co-founded Quiller, which generates AI content for fundraising.

Quiller has worked with Patti Minter, a former Kentucky state legislator now running for mayor in Bowling Green, cutting in half the time it takes to draft fundraising emails.

“They’re not raising more money. But they’re raising more money per minute spent, which means they’re off doing other things,” said Mr. Nellis. For those who know how to use them, AI tools can amount to “an army of interns at your disposal,” he said.

For now, however, AI tools remain largely an area of experimentation for campaigns.

Their pernicious electoral influence is less hypothetical.

Polling has shown nearly eight in 10 U.S. voters worry about falsified impersonations of candidates or faked content. Experts say there is reason for concern that the malicious use of AI will have an outsized effect on women, who are already the target of the overwhelming majority of sexualized deepfake images.

It’s a serious enough risk that it could alter the gender balance of who is willing to seek office, said Seth Reznik, who leads Microsoft’s Campaign Success team, which provides AI tools to political parties.

Even well-intentioned efforts can easily go awry, Mr. Nellis said. One effort to produce a likeness of an Irish man, for example, yielded images of an angry drunkard.

“I worry a lot about AI getting it wrong,” Mr. Nellis said.

An effective response will require regulation, he added.

Some technology companies have come together to develop ways to digitally identify AI-generated images, verify genuine content and prevent their own services from generating audio or visuals of candidates or election staff. Microsoft, which offers its own AI products, has also built tools for politicians and their staff to report counterfeit content.

For now, however, such efforts are voluntary – and have gaping exceptions. Grok-2, the AI tool built into X, will happily create realistic pictures of Mr. Trump kneeling inside a mosque, or a sleepy Mr. Biden in a wheelchair or Ms. Harris speaking to a Chicago crowd bristling with hammer-and-sickle Communist flags.

Such images may not, on their own, reshape elections.

“Here in the U.S., we’re navigating an information environment where public trust is at an all-time low, and entrenchment is at an all-time high – so there are real limits to AI’s ability to meaningfully change hearts and minds over the course of a campaign,” said Ami Fields-Meyer, a senior fellow at the Harvard Kennedy School who was a senior policy adviser to Ms. Harris on technology and civil rights issues.

But, he warned, AI poses other risks that may be even more serious. The faked voice messages from Mr. Biden early this year have already demonstrated a new way of meddling with the fundamentals of the democratic process.

“Maybe it’s a phone call in the voice of your local pastor or elected official saying the election is moved. Maybe it’s a fake image showing immigration agents at a polling place,” he said.

“The use of AI for voter suppression is an acute and immediate threat.”
