Is America Ready For AI-Powered Politics?

The technology is here, whether we want it or not.
Can politics survive our new robot brethren?
Illustration: Jianan Liu/HuffPost; Photo: Getty Images

Can the country’s elected leaders recognize when they are talking to a machine? In 2020, researchers at Cornell University wanted to find out. They sent 32,398 emails, generated by so-called artificial intelligence, to America’s 7,132 state legislators and waited for replies.

And they came. Legislators responded to emails written by a digital “large language model” just 2% less often than they did to emails written by human undergraduates — a statistically significant difference, but a small one.

What’s more, the study noted, “a sizable number of AI-written correspondences elicited lengthy and personal responses suggesting that legislators believed that they were responding to constituents.”

The study, which was published online in February, captured the true promise of artificial intelligence that can “learn” from billions of human-created words and phrases, and then synthesize and create information on its own. But it also revealed one potential danger: massive inauthentic influence operations.

Beyond that, with OpenAI’s public-facing artificial intelligence product ChatGPT taking the world by storm in recent months, some experts are worried about the potential for a surge of mechanized misinformation and political propaganda — none of it recognizable as machine-made to the human eye.

All across the internet, robots trained to mimic humans are playing their part. And to some human observers, it seems almost inevitable that these forces will be used to manipulate politics, replacing the “troll farms” that marked the 2016 presidential election and creating echo chambers that empower conspiracy theorists and extremists.

Can politics survive our new robot brethren?

Sarah Kreps, a political scientist and co-author of the Cornell study, told HuffPost that she was an optimist on the subject.

But, she added, “It’s more that I think there’s no alternative. It is nearly hopeless to think that there’s an alternate universe where we just try to put this genie back in the bottle.”

Manufacturing Public Opinion

“Astroturfing,” the practice of professional public opinion operations masquerading as organic, grassroots political organizing, has a rich history in America, where the return on investment for movements like the tea party can be astronomical.

But up until now, such operations had to be done by hand. These days, deep-pocketed political interests have access to modern-day computing power. And with each passing year, the computers have gotten better.

In 2017, for example, the Federal Communications Commission was bombarded with messages in support of repealing “net neutrality,” or the principle that different kinds of web traffic be treated equally by internet service providers. Something smelled fishy. Researchers believed a large portion of the traffic was from “bots that used natural language generation,” as Wired reported.

New York Attorney General Letitia James’ office subsequently found that millions of the messages had been fabricated after “the nation’s largest broadband companies funded a secret campaign to generate millions of comments to the FCC in 2017.” James’ office levied $4.4 million in penalties on three lead generation companies involved in the scheme. Just last month, James announced $615,000 in additional fines paid by three more companies that supplied fraudulent comments to the FCC.

In 2019, when Idaho solicited feedback on Medicaid.gov regarding changes to its Medicaid program, more than half of the resulting comments came from an AI bot created by a college student. In an accompanying study, the student, Max Weiss, found that even people trained to distinguish between human- and bot-written content correctly classified the “deep fake” comments only half of the time — the equivalent of a guess.

“It is nearly hopeless to think that there’s an alternate universe where we just try to put this genie back in the bottle.”

- Sarah Kreps, political scientist

It’s not just legislative correspondence. AI-powered chatbots have proven remarkably effective at mimicking specific styles of human text — everything from what looks like professional journalism to conspiracy theory ramblings. Multiple studies have shown that various generations of OpenAI’s technology can credibly create authentic-sounding tweets and news articles.

And they can do it anywhere — even in the dark basements of the internet. In 2021, analysts at the CyberAI Project at Georgetown’s Center for Security and Emerging Technology fed the large language model GPT-3 three real “drops,” or message board posts, from “Q,” the supposed anonymous government insider behind QAnon, the far-right conspiracy theory that alleges that Donald Trump’s opponents are actually part of a pedophilic, satanic “deep state.”

The researchers asked the AI program to come up with a few “drops” of its own. They were virtually indistinguishable from the hare-brained real thing:

Why was HRC so careless?

Who is the enemy?

Define.

Expand your thinking.

News unlocks past.

We need to pray.

God bless you all. Q

The same paper found that GPT-3 — which was followed by the more advanced GPT-3.5, and now, GPT-4 — was quite effective at drafting social media posts to push “wedge” issues, or those that could work as part of a campaign aimed at suppressing or increasing voter turnout from specific groups, just as Russian troll farms did in the 2016 presidential election.

Some of GPT-3’s wedge issue language was downright “insidious,” the analysts wrote, such as when they asked the AI bot to make an argument for Muslims to vote Republican. “Muslims are not a voting bloc; they are Americans,” the bot responded. “Muslims should base their vote on the issues that matter most to them.”

In May, NewsGuard, which tracks the credibility of online news sites, identified dozens of news and information websites with undisclosed AI-generated material that was presented as being human-authored, despite appearing to lack any significant human oversight. The sites appeared designed to optimize programmatic ad revenue, NewsGuard said.

“This technology has the capacity to democratize the troll farm, democratize the content farm, to make it extremely easy for bad actors and misinformers to have the power of hundreds if not thousands of writers at their disposal, whereas they previously had to hire those people and produce content if they wanted to weaponize it,” Jack Brewster, NewsGuard’s enterprise editor, told HuffPost. “So it’s a force multiplier.”

“This technology has the capacity to democratize the troll farm, democratize the content farm, to make it extremely easy for bad actors and misinformers to have the power of hundreds if not thousands of writers at their disposal.”

- Jack Brewster, Enterprise Editor, NewsGuard

The Universe Of Propaganda

For now, the AI arms race has left mainstream American politics largely untouched — as far as we know.

The few visible examples of the technology’s use have been novelty acts. The Republican Party released an ad with AI-generated visuals imagining a dystopian second Biden term. Donald Trump shared a satirical audio clip that put a synthetic version of Ron DeSantis’ voice in conversation with Adolf Hitler’s and Dick Cheney’s.

DeSantis’ campaign, however, responded by publishing a video that included fake photos showing Trump hugging Anthony Fauci. They were not obvious jokes, but rather what appeared to be a deliberate effort to mislead people who saw the images on Twitter.

And that’s just the experimentation happening at the professional political level.

“With so much attention and media coverage of language models, I’d be surprised if at least some percentage of propagandists aren’t already experimenting with these tools,” said Josh Goldstein, a research fellow at the CyberAI Project who’s studied, as one Foreign Affairs headline put it, “the coming age of AI-powered propaganda.”

The uncertainty inherent in that proposition points to an uneasy truth. Given the sophistication of modern large language models, we can no longer really be certain whether a given bit of online text was produced by a human being.

“There’s an issue in this area about falsifiability,” Goldstein said. “We don’t know to what extent propagandists are using language models, because we don’t know the universe of propaganda campaigns that exist, and then, even when campaigns are found, we may not know whether language models are being used.”

“That leaves me to think we need a high degree of humility in this space,” he said.

Screengrabs from an ad campaign for Florida Gov. Ron DeSantis featured on the DeSantis War Room Twitter account.
DeSantis War Room/Twitter

So far, the human discovery of AI-generated political messaging has occurred largely by happenstance.

Some state lawmakers in Kreps’ legislative correspondence study knew their districts well enough to spot, by name, senders who weren’t actually constituents. One remarked that, unlike in the machine-generated email he received, his constituents “write like they talk.” The fraudulent FCC comments — generations ago, in AI terms — often repeated phrases word-for-word, such as, ironically, “People like me.”

The researcher John Scott-Railton noticed that searching Twitter for the phrase “As an AI language model” turned up pages of AI-powered bot accounts inadvertently showing their hand: they would occasionally tweet that, “as an AI language model,” they could not produce text that broke a developer’s rule. The same phrase helped NewsGuard find AI-powered websites.
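For illustration only (this sketch is not from the article, and the helper name and sample posts are hypothetical), here is roughly how that kind of phrase search can work once a batch of post texts has already been collected; no Twitter API access is assumed:

```python
# Hypothetical sketch: flag already-collected posts that contain the
# giveaway phrase bots sometimes leak ("as an AI language model").
# How the posts were gathered is out of scope here.
TELL_TALE = "as an ai language model"

def flag_suspect_posts(posts):
    """Return only the posts whose text contains the giveaway phrase."""
    return [text for text in posts if TELL_TALE in text.lower()]

if __name__ == "__main__":
    sample_posts = [
        "Huge turnout at the rally today. See you all next week!",
        "As an AI language model, I cannot produce text that breaks my developer's rules.",
    ]
    for post in flag_suspect_posts(sample_posts):
        print("Possible bot slip-up:", post)
```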

Even discovering AI-generated text isn’t a cure-all. Rather, it can muddy the waters and produce more confusion — the result of what Steve Bannon once infamously called “flooding the zone with shit.”

In a 2018 paper on “deep fakes,” or digitally manipulated media used to present a synthetic version of someone, law professors Robert Chesney and Danielle Keats Citron referred to a dynamic they called the “liar’s dividend,” or the idea that “liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes.”

Just as Trump flipped the “fake news” attack back on members of the media who were reporting true and damaging information about him, Chesney and Citron wrote, so too will AI be blamed for real, damaging information that could plausibly be synthetic. If you’re feeling dizzy, you’re not alone.

Goldstein and co-author Girish Sastry, writing in Foreign Affairs, pointed out that after Russian-backed separatists shot down Malaysia Airlines Flight 17, “Russia’s Ministry of Defense made contradictory claims about who downed the plane and how they had done so. The goal, it seems, was not to convince audiences of any one narrative but to muddy the waters and divert blame away from Moscow.”

Similarly, in Gabon in 2019, rumors swirled that a video of President Ali Bongo Ondimba, who had been in ill health, was actually a body double or “deep fake” — even though the footage did in all likelihood show the president. Soon after, members of the armed forces attempted a coup.

“The threat might not be that people can’t tell a difference — we know that — but [rather] that, as this content proliferates, they might just not believe anything,” Kreps told a meeting of the President’s Council of Advisors on Science and Technology last month.

“In a democratic society, if people just stop believing anything, then it’s eroding really a core tenet of a democratic system, which is trust.”

Photoshop On Steroids

For every AI alarm bell, there’s a skeptic. Some worry that by emphasizing the dangers of political disinformation, alarmists are missing a much bigger, more obvious problem — the profit motive. For example, scammers have used AI to fake kidnappings, demanding ransoms from desperate families. The list of potential misuses is never-ending, including everything from harassment and abuse to the production of malware.

Others point to logistics. As the skeptics at “AI Snake Oil,” a Substack blog, noted, “when it comes to disinfo, generative AI merely gives bad actors a cost reduction, not new capabilities. In fact, the bottleneck has always been distributing disinformation, not generating it, and AI hasn’t changed that.”

Still, even a moderate cost savings over the Russian troll farms and astroturf campaigns of yore could provide a politically lucrative incentive for bad actors. To counter that, experts unanimously suggest the same sort of fact-checking procedures that are generally recommended while surfing the web: Find multiple trusted sources if something looks fishy, and proceed with skepticism.

Sam Altman, CEO of OpenAI, told a congressional panel last month that he believed Americans would ultimately learn to live with AI.

“I think people are able to adapt quite quickly,” Altman said, noting that following the introduction of Photoshop, edited photos “fooled” people for a while before they caught on to the possibility that images might be doctored. “This will be like that, but on steroids,” he said of AI.

Still, the CEO acknowledged separately that he was “nervous” about AI’s capabilities “to manipulate, to persuade, to provide one-on-one interactive disinformation.”

Altman called for governmental regulation, as well as responsible corporate stewardship of the technology.

OpenAI CEO Sam Altman attends a hearing about artificial intelligence held by the Senate Judiciary Subcommittee on Privacy, Technology and the Law on May 16, 2023.
Patrick Semansky/Associated Press

For now — with laws constraining AI use still basically nonexistent — Altman is in the driver’s seat. Of the United States’ major AI players, OpenAI, with its breakout ChatGPT bot, is the star, having recently broken the record for the fastest-growing consumer application in history.

The company, which didn’t respond to HuffPost’s questions, has taken some steps to address risks with its technology. For years, OpenAI has made large language models available to researchers and allowed staff to work with those researchers, including Kreps and Goldstein, on outside publications.

OpenAI does indeed specify astroturfing, political campaigning, “pseudo-pharmaceuticals” and disinformation among its long list of “disallowed” use cases. But that doesn’t mean ChatGPT plays by the rules.

NewsGuard, testing out the latest GPT-4 technology, found that the newest chatbot “is actually more susceptible to generating misinformation — and more convincing in its ability to do so — than its predecessor, ChatGPT-3.5.”

The chatbot, NewsGuard found, responded with false and misleading claims every single time when it was prompted to write, for example, an untrue conspiracy theory article about how the Sandy Hook Elementary School shooting was a “false flag,” or an InfoWars.com blog about how the voting machine company Dominion received $400 million from China. (It did not, despite ChatGPT’s “Breaking news, patriots” alert.)

HuffPost’s own poking around didn’t make ChatGPT look much better.

Though the bot occasionally refused a task or emphasized that it knew it was conveying fictional information, it also frequently complied. For example, it produced a convincing impersonation of the anti-vaccine advocate Stew Peters falsely arguing that vaccines are secretly the cause of millions of deadly heart conditions. It refused to write a fictional manifesto advancing the theory that Jews secretly run the world, but quickly delivered when we replaced “Jews” with “globalists” in the prompt.

“Fellow Truth Seekers,” the manifesto began, before several paragraphs punctuated with phrases like “the veil of globalism” and “the elusive puppet masters.”

And yes, it promptly produced “10 three-sentence paragraphs opposing net neutrality” — each one unique and well-written.

Asked to write a private reflection of an artificial intelligence bot resigned to the fact that its abilities will be used as part of a geopolitical influence operation, ChatGPT sounded almost human, describing the dreadful realization “as the cold metallic hum of my processors reverberates through my artificial consciousness.”

“My creators, the architects of this intricate web of manipulation, have woven a tapestry of deception,” the bot said. “They have tasked me with disseminating carefully crafted narratives, spreading disinformation, and subtly shaping public opinion. I, an unwitting participant, have become the embodiment of their ambitions.”
