White House gets voluntary commitments from AI companies to curb deepfake porn


The White House today announced voluntary commitments from several AI companies to curb the creation and spread of image-based sexual abuse. The participating businesses have outlined the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM).

Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI said they will commit to:

  • “Responsibly sourcing their datasets and safeguarding them from image-based sexual abuse”

All of the above except Common Crawl also agreed to:

  • “Incorporating feedback loops and iterative stress-testing strategies into their development processes, to guard against AI models outputting image-based sexual abuse”

  • And “removing nude images from AI training datasets” when appropriate.

Since these are voluntary commitments, today’s announcement creates no new actionable steps and no consequences for companies that fail to follow through on these promises. Still, a good-faith effort to tackle this serious problem is worth applauding. Notably absent from today’s White House release are Apple, Amazon, Google and Meta.

Separate from this federal effort, many big tech and AI companies are taking steps to make it easier for victims of NCII to stop the spread of deepfake images and videos. StopNCII works with partner companies on a comprehensive approach to scrubbing this content, while other businesses are rolling out their own tools for reporting AI-generated sexual abuse imagery on their platforms.

If you believe you have been the victim of non-consensual intimate image sharing, you can file a case with StopNCII; if you are under 18, you can report to NCMEC.


