Meta needs updated rules for sexually explicit deepfakes, Oversight Board says


Meta’s Oversight Board is calling on the company to update its rules on sexually explicit deepfakes. As part of its decision, the board made recommendations in two cases involving AI-generated images of public figures.

The cases stem from two user complaints about AI-generated images of public figures, though the board declined to name the individuals. One post, shared on Instagram, depicted a nude Indian woman. The post was reported to Meta, but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after it came to the attention of the Oversight Board, which nonetheless overturned Meta’s original decision to leave the image up.

The second post, shared in a Facebook group dedicated to AI art, showed “an AI-generated image of a nude woman with a man groping her breast.” Meta removed the post automatically because it had already been added to an internal system that can identify images previously reported to the company. The Oversight Board determined that Meta’s removal was correct.

In both cases, the Oversight Board said the AI-generated deepfakes violated the company’s rules, which prohibit “derogatory sexualized photoshop” images. But in its recommendations to Meta, the board said the current language of those rules is outdated and could make it harder for users to report explicit AI-generated images.

Instead, the board says Meta should update its policies to make clear that it prohibits non-consensual explicit images created or manipulated with AI. “Much of the non-consensual sexual imagery circulating online today is created by generative AI models that either automatically edit existing images or create entirely new ones,” the board wrote, adding that the rules should cover these editing methods clearly for both users and the company’s moderators.

The board also called out Meta’s practice of automatically closing user appeals, saying it could have “significant human rights implications” for users. However, the board said it did not “have sufficient information” about the practice to make a recommendation.

The spread of explicit AI-generated imagery has become an increasingly prominent issue as “deepfake porn” has grown into a more widespread form of online harassment in recent years. The board’s decision comes a day after the US Senate passed a bill aimed at combating explicit deepfakes. If enacted, the measure would allow victims to sue the creators of such images for up to $250,000.

These cases are not the first time the Oversight Board has pushed Meta to update its rules for AI-generated content. In another high-profile case, the panel weighed in on a manipulated video of President Joe Biden. That case ultimately led Meta to update its policies on how AI-generated content is labeled.
