Meta says AI-generated content was less than 1 percent of election misinformation

AI-generated content played less of a role in global election disinformation than many officials and researchers feared, according to a new analysis by Meta. In an update on its efforts to protect dozens of elections in 2024, the company said that AI content made up only a small fraction of the election-related misinformation caught and flagged by its fact-checkers.

“In the key elections listed above, ratings for AI content on election, political and social topics accounted for less than 1% of all fact-checked misinformation,” the company said in a blog post. The elections covered include those in the US, UK, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU parliamentary elections.

The update comes after government officials and researchers spent months raising alarms about the role generative AI could play in supercharging election disinformation in a year when more than 2 billion people were expected to go to the polls. According to Nick Clegg, the company’s President of Global Affairs, those fears have not materialized, at least not on Meta’s platforms.

“People have been worried this year about the potential impact of generative AI on the elections, and there have been all kinds of warnings about the potential risks of things like widespread deepfakes and AI-powered disinformation campaigns,” Clegg said in a briefing with journalists. “Based on what we have observed across our services, it seems these risks have not materialized in any significant way, and that any such impact has been modest and limited in scope.”

Meta didn’t elaborate on how much election-related AI content its fact-checkers caught in the lead-up to major elections. The company sees billions of pieces of content every day, so even a small percentage can add up to a large number of posts. Still, Clegg defended Meta’s policies, including its expansion of AI labeling earlier this year following criticism from the Oversight Board. He noted that Meta’s own AI image generator blocked 590,000 requests to create images of Donald Trump, Joe Biden, Kamala Harris, JD Vance and Tim Walz in the month leading up to US election day.

At the same time, Meta has increasingly taken steps to distance itself from politics altogether, as well as from some of its past efforts to police misinformation. The company changed users’ default settings on Instagram and Threads to stop recommending political content, and it deprioritized news on Facebook. Mark Zuckerberg has said he regrets the way the company handled some misinformation policies during the pandemic.

Looking ahead, Clegg said Meta is still trying to strike the right balance between enforcing its rules and ensuring freedom of expression. “We know that our error rates in enforcing our policies are still too high, and this gets in the way of free expression,” he said. “I think we also want to really redouble our efforts to improve the accuracy and precision with which we act.”
