
Tech giants commit to removing nude images from AI training data

This report was created with support from enhanced software.


Several big tech companies have committed to removing nude images from AI training datasets, a move that comes in response to the Biden administration’s call for action.


Companies including Adobe, Anthropic, Cohere, Microsoft, and OpenAI have pledged to implement what they call responsible sourcing practices and incorporate feedback loops to guard against the output of image-based sexual abuse.

Tech companies have pledged to remove nude images from AI training datasets “when appropriate and depending on the purpose of the model.”


The White House reports an alarming increase in image-based sexual abuse. Platforms have become breeding grounds for the creation, dissemination, and monetization of deepfakes, alongside a concerning rise in sextortion cases.

The 2023-24 school year witnessed a global epidemic of deepfake incidents, with teenage girls being the primary targets of explicit content created and shared by their peers.

Commitments from these AI companies specifically target the prevention of nonconsensual intimate images of adults and child sexual abuse material.

However, critics argue that voluntary efforts are insufficient to tackle the complex challenges posed by AI-generated sexual content. Mary Anne Franks, president of the Cyber Civil Rights Initiative, emphasized that responsible and accountable practices by tech companies could have prevented the current crisis.

Another group of tech giants, including Bumble, Discord, Match Group, Meta, and TikTok, also announced a set of voluntary principles aimed at preventing image-based sexual abuse.


Lauren Taylor
