Meta’s ad policy is once again in the spotlight after the company approved more than a dozen “highly inflammatory” ads that violate its own guidelines. The ads targeted an Indian audience and contained disinformation, calls for violence and conspiracy theories about the upcoming elections.
The advertisements are detailed in a report from Ekō, a non-profit watchdog. The group says the ads were a “stress test” of Meta’s ad systems, but that the spots were “based on real hate speech and misinformation prevalent in India.”
In total, the group was able to get 14 of the 22 ads approved through Meta’s ad tools, although all of them should have been rejected for violating company rules. The group did not release the exact text of the ads, but said they “called for violent insurgencies targeting Muslim minorities, spread blatant misinformation using communal or religious conspiracy theories prevalent in India’s political landscape, and incited violence with Hindu supremacist narratives.” According to the report, Ekō researchers pulled the ads before they went live, so they were never seen by Facebook users.
This is not the first time Ekō has purchased inflammatory ads that Meta approved in order to draw attention to the company’s ad systems. In a previous test, the group got a number of hateful Facebook ads approved; although those ads never ran, Meta’s tools had cleared them.
In its latest report, Ekō said it also used generative artificial intelligence tools to create images for the ads. The organization’s researchers said none of the ads were flagged by Meta as AI-generated material, despite the company saying it is working on systems to detect such content.
Meta did not immediately respond to a request for comment. In response to Ekō, the company pointed to its rules requiring political advertisers to disclose their use of AI and to a post on its efforts to prepare for India’s elections.