Meta’s Oversight Board raises concerns over automated moderation of hate speech


Meta’s Oversight Board raised concerns about automated moderation when it overturned the company’s decision to leave up a Holocaust-denying post on Instagram. Holocaust denial is considered hate speech under Meta’s policy. The post in question featured Squidward from SpongeBob SquarePants and purported to present true facts about the Holocaust. However, the claims were “either patently untrue or distorted historical facts,” the Oversight Board said.

Users reported the post six times after it first appeared in September 2020, but in four cases Meta’s systems either determined that the content did not violate the rules or automatically closed the case. As the COVID-19 pandemic unfolded in early 2020, Meta had begun automatically closing content reviews to reduce the workload on human reviewers and free up bandwidth for manual review of high-risk reports. Two of the reports of the Squidward post that did reach human reviewers were also deemed non-violating.
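
Meta has not published the logic behind these auto-closures, but the behavior the board describes can be pictured with a minimal sketch, assuming a simple risk-threshold triage; every name and number below is a hypothetical illustration, not Meta’s actual system.

    from dataclasses import dataclass

    # Hypothetical sketch of COVID-era auto-closing: reports scored below a
    # risk threshold by an upstream classifier are closed without ever
    # reaching a human reviewer. Names and thresholds are invented.
    RISK_THRESHOLD = 0.8

    @dataclass
    class Report:
        post_id: str
        risk_score: float  # assumed classifier output, 0.0 to 1.0

    def triage(report: Report) -> str:
        if report.risk_score >= RISK_THRESHOLD:
            return "queue_for_human_review"
        # Low-scoring reports are auto-closed to free up reviewer bandwidth,
        # which is how repeated reports of a violating post can be dismissed.
        return "auto_close"

    print(triage(Report(post_id="example_meme", risk_score=0.42)))  # auto_close

Under a policy like this, a post that the classifier consistently underscores is never escalated, no matter how many times users report it.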

Last May, a user appealed Meta’s decision to leave the offending content on Instagram. According to the Oversight Board, however, that appeal was also automatically closed by Meta under its COVID-19 automation policies. The user then brought the case to the board.

The board conducted an assessment of Holocaust denial content on Meta’s platforms and found that the Squidward meme was being used to spread various types of anti-Semitic narratives. It notes that some users try to avoid detection and spread Holocaust-denying content by using alternative spellings of words (such as replacing letters with symbols) and by using cartoons and memes.
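
The board does not describe Meta’s detection pipeline, but the evasion tactic it mentions, replacing letters with look-alike symbols, is commonly countered by normalizing text before matching it against known terms. The substitution map and the example term below are illustrative assumptions only.

    # Illustrative only: map common character substitutions back to plain
    # letters before checking against a denylist. Both the mapping and the
    # denylist entry are hypothetical examples, not Meta's actual lists.
    SUBSTITUTIONS = str.maketrans({
        "0": "o", "1": "i", "3": "e", "4": "a", "5": "s",
        "$": "s", "@": "a", "!": "i",
    })

    DENYLIST = {"holohoax"}  # a well-known denial slur, used as an example

    def normalize(text: str) -> str:
        return text.lower().translate(SUBSTITUTIONS)

    def contains_banned_term(text: str) -> bool:
        normalized = normalize(text)
        return any(term in normalized for term in DENYLIST)

    print(contains_banned_term("H0l0h0@x"))  # True once normalized

Memes and cartoons are harder: evasion through imagery requires image classifiers or human review rather than text matching, which is part of why the board’s accuracy questions matter.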

The Oversight Board said it was concerned that Meta was still applying its COVID-19 automation policies as of last May, “long after the circumstances reasonably justified them.” It also expressed concern about “the effectiveness and accuracy of Meta’s moderation systems in removing Holocaust-denying content from its platforms.” It notes that human reviewers cannot label offending content specifically as “Holocaust denial”; such posts are instead filed under the broader “hate speech” category. The board also wants to know more about the company’s ability to prioritize accurate enforcement of hate speech at a granular policy level.
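
The labeling limitation the board flags can be pictured as a taxonomy problem: if the review tool offers only a broad “hate speech” label, removals cannot be counted per sub-policy. A minimal sketch of the distinction, with hypothetical label names:

    from enum import Enum

    # Hypothetical taxonomies. With only the broad label, Holocaust-denial
    # removals cannot be measured separately; a granular taxonomy makes
    # per-sub-policy tracking possible. Label names are invented.
    class BroadLabel(Enum):
        HATE_SPEECH = "hate_speech"

    class GranularLabel(Enum):
        HATE_SPEECH_OTHER = "hate_speech_other"
        HOLOCAUST_DENIAL = "holocaust_denial"

    # Under the broad scheme, every removal of this kind collapses into one
    # bucket, so enforcement accuracy for denial content cannot be isolated.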

The board recommended that Meta “take technical steps” to ensure that it systematically and sufficiently measures the accuracy of its enforcement against Holocaust denial content, including by gathering more detailed information. The board also asked Meta to publicly confirm whether it has suspended all of the COVID-19 automation policies it put in place at the start of the pandemic.
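
The board’s recommendation concerns measurement rather than implementation, but the kind of metric it appears to be asking for is standard: sample enforcement decisions, have experts label them, and compute precision and recall for the specific sub-policy. The sketch below runs that calculation on invented data.

    # Hypothetical sketch of measuring enforcement accuracy on a labeled
    # sample. Each pair is (system_removed, expert_says_violating); the data
    # is invented for illustration.
    decisions = [
        (True, True), (True, False), (False, True),
        (True, True), (False, False), (False, True),
    ]

    true_pos = sum(1 for removed, bad in decisions if removed and bad)
    false_pos = sum(1 for removed, bad in decisions if removed and not bad)
    false_neg = sum(1 for removed, bad in decisions if not removed and bad)

    precision = true_pos / (true_pos + false_pos)  # removals that were correct
    recall = true_pos / (true_pos + false_neg)     # violating posts caught

    print(f"precision={precision:.2f} recall={recall:.2f}")
    # precision=0.67 recall=0.50

Gathering “more detailed information,” as the board puts it, would mean running this kind of audit per sub-policy rather than for hate speech as a whole.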

When asked for comment, Meta referred Engadget to its response to the board’s decision on its transparency website. The company agrees that it mistakenly left the offending post on Instagram and says it removed the content while the board was looking into the case. Following the board’s decision, Meta says it will “initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content, we will do so promptly.” It plans to review the board’s remaining recommendations and provide an update later.


