SAN FRANCISCO — OpenAI, the creator of ChatGPT, revealed on Thursday that entities from Russia, China, Iran, and Israel exploited its technology to attempt to sway global political discourse, underscoring fears that generative artificial intelligence is easing the path for state actors to execute clandestine propaganda initiatives as the 2024 presidential election approaches.
OpenAI deactivated accounts linked to notorious propaganda entities in Russia, China, and Iran; an Israeli political campaign entity; and an unidentified group originating from Russia that the company’s researchers have named “Bad Grammar.” These factions leveraged OpenAI’s technology to craft posts, translate them into multiple languages, and develop software facilitating automated social media postings.
These entities garnered minimal engagement; the associated social media accounts reached a limited audience, with only a few followers, according to Ben Nimmo, the principal investigator on OpenAI’s intelligence and investigations team. Nonetheless, OpenAI’s findings indicate that long-active propagandists on social media are now employing AI technology to amplify their efforts.
“We’ve observed them producing text in greater volumes and with fewer mistakes than these operations have traditionally managed,” Nimmo, a former investigator at Meta specializing in influence operations, stated during a briefing with journalists. He acknowledged the potential for other groups to be utilizing OpenAI’s tools unbeknownst to the company. “This is not a moment for complacency. Historical patterns show that influence campaigns, after years of ineffectiveness, can suddenly succeed if left unmonitored,” he warned.
Governments, political entities, and activist groups have long used social media to attempt to sway political outcomes. In response to concerns over Russian meddling in the 2016 presidential election, social media platforms began scrutinizing their sites for efforts to manipulate voters. These platforms generally ban governments and political groups from disguising coordinated efforts to influence users, and political advertisements must disclose their sponsors.
As AI tools capable of generating realistic text, images, and even video become widely accessible, disinformation researchers have raised alarms about the increasing difficulty in identifying and countering false information or covert influence operations online. With hundreds of millions voting globally this year, the proliferation of generative AI deepfakes is a growing concern.
OpenAI, along with other AI companies like Google, is working on technology to detect deepfakes created with their tools, but this technology remains unproven. Some AI experts are skeptical that deepfake detectors will ever be fully reliable.
Earlier this year, a group affiliated with the Chinese Communist Party posted AI-generated audio that appeared to show Foxconn founder Terry Gou, a candidate in the Taiwanese elections, endorsing another politician. Gou had made no such endorsement.
This article was originally published on techpedia4.