Meta’s Oversight Board is once again taking on the social network’s rules regarding AI-generated content. The board has accepted two cases involving AI-generated nude images of public figures.
While Meta’s rules already prohibit nudity on Facebook and Instagram, the board said in a statement that it wants to address “whether Meta’s policies and its enforcement practices are effective in addressing explicit imagery generated by artificial intelligence.” AI-generated images of female celebrities, politicians, and other public figures, sometimes called “deepfake porn,” have become an increasingly common form of online harassment and have drawn a wave of scrutiny. In both cases, the Oversight Board could push Meta to adopt new rules to address such harassment on its platform.
The Oversight Board said that while it described the circumstances surrounding each post, it would not name the two public figures at the center of the cases in order to avoid furthering the harassment.
One case involves an Instagram post showing an AI-generated image of a nude Indian woman, posted by an account that “only shares AI-generated images of Indian women.” The post was reported to Meta, but the report was automatically closed after 48 hours because it was never reviewed. The same user appealed that decision, but the appeal was also closed without review. After the user appealed to the Oversight Board and the board agreed to take the case, Meta removed the post.
The second case involves a Facebook post in a group dedicated to AI art. The post in question shows “an AI-generated image of a nude woman with a man groping her breast.” The woman was made to resemble an “American public figure,” whose name appeared in the caption. The post was removed automatically because it had been reported previously and Meta’s internal systems matched it to the earlier post. The user appealed the removal, but the appeal was “automatically closed.” The user then appealed to the Oversight Board, which agreed to hear the case.
Oversight Board co-chair Helle Thorning-Schmidt said in a statement that the board took on two cases from different countries to assess potential disparities in how Meta’s policies are enforced. “We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” Thorning-Schmidt said. “By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”
The Oversight Board is seeking public comment over the next two weeks and will publish its decision, along with policy recommendations for Meta, in the coming weeks. A similar process involving a misleadingly edited video recently led Meta to agree to label more AI-generated content on its platform.