OpenAI says bad actors are using its platform to disrupt elections, but with little ‘viral engagement’


OpenAI is increasingly becoming a platform of choice for cyber actors looking to influence democratic elections across the globe.

In a 54-page report published Wednesday, the ChatGPT creator said that it’s disrupted “more than 20 operations and deceptive networks from around the world that attempted to use our models.” The threats ranged from AI-generated website articles to social media posts by fake accounts.

The company said its update on “influence and cyber operations” was intended to provide a “snapshot” of what it’s seeing and to identify “an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape.”

OpenAI’s report lands less than a month before the U.S. presidential election. Beyond the U.S., it’s a significant year for elections worldwide, with contests affecting upward of 4 billion people in more than 40 countries. The rise of AI-generated content has led to serious election-related misinformation concerns, with the number of deepfakes created increasing 900% year over year, according to data from Clarity, a machine learning firm.

Misinformation in elections is not a new phenomenon. It’s been a major problem dating back to the 2016 U.S. presidential campaign, when Russian actors found cheap and easy ways to spread false content across social platforms. In 2020, social networks were inundated with misinformation on Covid vaccines and election fraud.

Lawmakers’ concerns today are more focused on the rise in generative AI, which took off in late 2022 with the launch of ChatGPT and is now being adopted by companies of all sizes.

OpenAI wrote in its report that election-related uses of AI “ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts.” The social media content related mostly to elections in the U.S. and Rwanda, and to a lesser extent, elections in India and the EU, OpenAI said.

In late August, an Iranian operation used OpenAI’s products to generate “long-form articles” and social media comments about the U.S. election, among other topics, but the company said the majority of identified posts received few or no likes, shares or comments. In July, the company banned ChatGPT accounts in Rwanda that were posting election-related comments on X. And in May, an Israeli company used ChatGPT to generate social media comments about elections in India. OpenAI wrote that it addressed the case in less than 24 hours.

In June, OpenAI addressed a covert operation that used its products to generate comments about the European Parliament elections in France, and politics in the U.S., Germany, Italy and Poland. The company said that while most social media posts it identified received few likes or shares, some real people did reply to the AI-generated posts.

None of the election-related operations were able to attract “viral engagement” or build “sustained audiences” via the use of ChatGPT and OpenAI’s other tools, the company wrote.
