Meta sues developer of AI ‘nudify’ app for evading ad rules


Summary

The lawsuit

Meta has filed a lawsuit against the developer of the CrushAI “nudify” app over accusations that the company worked to circumvent safeguards against explicit content in advertisements.

‘Nudify’ apps

The legal challenge comes amid the spread of so-called “nudify” apps, which allow users to produce nonconsensual nude or sexually explicit photos of people.

Safeguards

Meta said it has implemented enhanced advertisement screening in response to a push by online safety advocates and lawmakers over concerns with “nudify” apps.


Full story

Meta is suing a Hong Kong-based company for running ads on Facebook and Instagram to promote an app that creates nonconsensual nude images using artificial intelligence. The lawsuit, filed on Thursday, June 12, targets Joy Timeline HK Limited, the developer of CrushAI, a so-called “nudify” app.

What is CrushAI?

CrushAI allows users to upload a photo of someone and create nude or intimate images using AI technology. 

Meta filed the suit in Hong Kong in an effort to block the company from advertising on its platforms.

According to Meta, Joy Timeline repeatedly violated its ad rules, circumventing the platform’s review process even after previous ads were removed.

“This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it,” Meta said in a blog post. “We’ll continue to take the necessary steps — which could include legal action — against those who abuse our platforms like this.”

Prior warnings and political pressure

Researchers and lawmakers have long warned about the rise of “nudify” apps, which are readily available online, in app stores, and on social media advertising platforms.

In February, Sen. Dick Durbin, D-Ill., wrote a letter to Meta CEO Mark Zuckerberg requesting his company crack down on CrushAI, citing research that showed more than 8,000 CrushAI-linked ads appeared on Meta platforms in just two weeks.

Meta’s new enforcement tools 

Meta also announced stronger “enforcement methods,” including AI-based detection tools to flag inappropriate ads, even those without explicit content. It will also use pattern-matching tech to catch “copycat” ads and tactics borrowed from counter-disinformation efforts to dismantle ad networks promoting these apps.

Meta said it’s coordinating with outside and in-house “specialist teams” to monitor nudify apps as they “evolve their tactics to avoid detection.”

The company also plans to share data with other tech firms to help shut down these services across the digital ecosystem.

The broader ‘nudify’ problem

Meta’s action comes amid a surge in the use of “nudify” apps.

A 2024 report by Bellingcat’s Kolina Koltai linked the app ClothOff to over 9.4 million users in one quarter. It was also associated with high-profile cases of AI-generated child sexual abuse, including one incident at a U.S. school.

Research shows ClothOff was accessed by over 235,000 users through social media in just three months. Some of its promotional accounts on X, formerly Twitter, had hundreds of thousands of followers. One premium account had minimal engagement — a sign it was likely used primarily for advertising.

A Tech Transparency Project report found X failed to remove any reported deepfakes flagged as nonconsensual sexual content. The project also revealed that ClothOff recently moved to a new domain, as tracked by researchers on Bluesky.

A double standard?

Just one day before Meta’s lawsuit, AI Forensics released a report alleging Meta applies different standards when reviewing ads than when reviewing organic content. Researchers say they successfully re-uploaded screenshots of ads that had previously run on Meta, yet found the same content was immediately removed when posted organically.

The report accuses Meta of maintaining a “systemic double standard” and misleading EU regulators about its ad review systems in filings required by the Digital Services Act. 

Jason Morrell (Morning Managing Editor) and Lea Mercado (Digital Production Manager) contributed to this report.

Why this story matters

Meta's legal action against the developer of an AI-powered 'nudify' app highlights growing concerns over nonconsensual image generation, online platform responsibility and the evolving use of artificial intelligence for abuse.

Non-consensual AI imagery

The story addresses the proliferation of AI tools used to create nude images without consent, raising ethical, legal and privacy issues.

Platform accountability

Meta's lawsuit and new enforcement methods highlight efforts by social media companies to regulate harmful content and address gaps in ad review systems.

Regulatory and societal response

Lawmakers, researchers and advocacy groups are increasing pressure on technology companies and regulators to prevent misuse of AI and protect vulnerable individuals online.

Get the big picture

Behind the numbers

Multiple sources report that Joy Timeline HK Limited, the company behind the CrushAI app, ran thousands of ads promoting AI-generated nonconsensual nude images. According to CNN and TechCrunch, more than 8,000 such ads appeared on Facebook and Instagram in the first two weeks of 2025. Meta claims it spent $289,000 on investigation, regulatory responses and policy enforcement.

Community reaction

Community and advocacy groups, such as the UK's NSPCC, have expressed deep concern over the emotional harm caused by such apps, particularly to children. There have been calls from child safety advocates and lawmakers, including Sen. Dick Durbin, urging Meta to take decisive action and for governments to introduce stricter regulations banning or controlling these technologies.

Context corner

The surge in deepfake and AI-generated explicit imagery follows years of increasing accessibility to generative AI. Previously, deepfakes were mainly associated with political or celebrity impersonation, but now tools like CrushAI allow anyone to create explicit images, broadening the scope of privacy and safety risks. Legislative efforts, like the U.S. Take It Down Act, have started to address these challenges.

Bias comparison

  • Media outlets on the left frame Meta’s lawsuit and enforcement as a long-overdue, morally urgent crackdown on exploitative AI “nudify” apps that generate nonconsensual sexualized images, emphasizing corporate responsibility and prior enforcement failures with emotionally charged terms like “finally” and “circumspect condemnations.”
  • Media outlets in the center adopt a more procedural tone, focusing on Meta’s multi-layered enforcement tactics, legislative context and ongoing challenges, without the left’s harsh critique or the right’s cultural emphasis.
  • Media outlets on the right employ firm language such as "cracks down," highlighting decisive action consistent with conservative values of upholding social norms, yet de-emphasize broader regulatory or feminist implications.

Media landscape

65 total sources

Key points from the Left

  • Meta has filed a lawsuit against Joy Timeline HK Limited in Hong Kong to stop the advertising of the CrushAI app on its platforms, as the app uses AI to simulate nude images of clothed individuals.
  • The lawsuit follows multiple attempts by Joy Timeline to bypass Meta's ad review process after ads for nudify apps appeared on Facebook and Instagram, violating Meta's advertising policies.
  • Meta has banned non-consensual intimate imagery on its platforms and has developed new technology to detect ads that appear benign but violate its policies.
  • The company plans to share information about violations with other tech firms to improve child safety on their platforms through the Tech Coalition's Lantern Program, having provided information on over 3,800 violating sites since March.

Key points from the Center

  • On June 12, 2025, Meta sued Joy Timeline HK Limited in Hong Kong for running ads on Meta's platforms promoting CrushAI, an app that creates nonconsensual sexualized images using AI.
  • This legal action followed repeated violations where Joy Timeline circumvented Meta's ad review process despite multiple removals of CrushAI ads violating Meta's standards on nudity and harassment.
  • Since early 2025, Meta's expert teams investigated and disrupted four separate account networks promoting AI nudify services and developed new technology to detect such ads even without visible nudity.
  • In the first two weeks of 2025, over 8,010 ads related to CrushAI were displayed on Facebook and Instagram; Meta emphasized its strong commitment to addressing this misuse and is actively pursuing measures, including legal steps, to protect its platforms.
  • Meta continues collaborating with external experts and sharing data with other tech firms to prevent similar ads and protect users, signaling ongoing efforts against AI nudify app abuses on social platforms.

Key points from the Right

No summary available because of a lack of coverage.
