OpenAI says it stopped multiple covert influence operations that abused its AI models


OpenAI says it has disrupted five covert influence operations that used its AI models for deceptive activity on the internet. The operations, which OpenAI shut down between 2023 and 2024, originated in Russia, China, Iran and Israel and attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said Thursday. “As of May 2024, these campaigns have not significantly increased audience engagement or reach as a result of our services,” OpenAI said in a report on the operations, adding that it worked with others across the tech industry, civil society and governments to stop these bad actors.

OpenAI’s report comes amid concerns about the impact of generative AI on the numerous elections scheduled around the world this year, including in the United States. In its findings, OpenAI described how influence networks used generative AI to produce higher volumes of text and images than ever before, and how they used AI to create fake comments on social media posts.

“Over the past year and a half, many questions have arisen about what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, said in a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the gaps.”

OpenAI said Russia’s Doppelganger operation used the company’s models to generate headlines, turn news articles into Facebook posts and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI’s models to debug the code of a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the United States and the Baltic states. Spamouflage, a Chinese network known for its influence efforts across Facebook and Instagram, used OpenAI’s models to analyze social media activity and generate text-based content in multiple languages across various platforms. An Iranian operation, the “International Union of Virtual Media,” also used AI to generate content in multiple languages.

OpenAI’s announcement resembles the periodic disclosures made by other tech companies. On Wednesday, for example, Meta released its latest report on coordinated inauthentic behavior, detailing how an Israeli marketing firm used fake Facebook accounts to run an influence campaign on its platform targeting people in the United States and Canada.


