
The White House secured AI industry voluntary commitments against image-based sexual abuse

The Biden-Harris Administration has secured voluntary commitments from major AI companies to combat AI-generated image-based sexual abuse through responsible data sourcing, safeguards in development processes, and dataset modifications.

by Ellie Ramirez-Camara
Photo by René DeAnda / Unsplash

The Biden-Harris administration is celebrating the 30th anniversary of the Violence Against Women Act by endorsing a more proactive stance against image-based sexual abuse, both in the form of non-consensual intimate imagery (NCII) of adults, and child sexual abuse material (CSAM). Recently, the White House announced voluntary commitments from several AI model developers and data providers, which build on the White House Call to Action to Combat Image-Based Sexual Abuse. These commitments mark a crucial step in safeguarding individuals, particularly women, children, and LGBTQI+ people, from the misuse of AI technology.

As part of these voluntary commitments, Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI have pledged to source their datasets responsibly and safeguard them from image-based sexual abuse. Similarly, Adobe, Anthropic, Cohere, Microsoft, and OpenAI have vowed to implement measures, such as feedback loops and iterative stress-testing strategies, to prevent AI models from outputting image-based sexual abuse material. These five companies have also pledged to remove nude images from AI training datasets when appropriate, depending on each model's intended purpose.

These commitments by AI model developers and data providers to tackle image-based sexual abuse build on last year's commitments by leading AI companies to address the risks posed by AI. They also join voluntary commitments by other tech companies, including Cash App, Square, Google, GitHub, Microsoft, and Meta, which have outlined specific actions to prevent the creation, distribution, and monetization of abusive imagery.

