The world’s leading AI companies pledge to protect the safety of children online


Leading AI companies including OpenAI, Microsoft, Google, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children or create child sexual abuse material (CSAM). Child safety group Thorn and All Tech Is Human, a non-profit organization focused on responsible technology, led the initiative.

Thorn said the companies' pledges "set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as generative artificial intelligence unfolds." The aim of the initiative is to prevent the creation of sexually explicit material involving children and to remove it from social media platforms and search engines. Thorn says that in 2023 alone, more than 104 million files of suspected child sexual abuse material were reported in the United States. In the absence of collective action, generative AI is poised to worsen this problem and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

On Tuesday, Thorn and All Tech Is Human published a new paper, "Safety by Design for Generative AI: Preventing Child Sexual Abuse," which lays out strategies and recommendations for makers of AI tools, search engines, social media platforms, hosting companies, and developers to take steps to prevent generative artificial intelligence from being used to harm children.

One recommendation, for example, asks companies to carefully select the datasets used to train AI models and to avoid those containing not only instances of CSAM but also adult sexual content altogether, because generative AI tends to conflate the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people "nudify" images of children, thereby creating new AI-generated child sexual abuse material online. According to the paper, a deluge of AI-generated CSAM will make it harder to identify genuine victims of child sexual abuse by adding to the "haystack problem," a reference to the volume of content law enforcement agencies already have to sift through.

"This project was intended to make it abundantly clear that you don't need to throw up your hands," Rebecca Portnoff, Thorn's vice president of data science, told The Wall Street Journal. "We want to be able to change the course of this technology so that the existing harms of this technology get cut off at the knees."

According to Portnoff, some companies have already agreed to separate images, videos and audio involving children from datasets containing adult content to prevent their models from conflating the two. Others are also adding watermarks to identify AI-generated content, but this method is not foolproof, since watermarks and metadata can be easily removed.
