Meta targets spam, removes 10 million Facebook impersonator profiles



Summary

Spam removals

Meta removed around 10 million Facebook profiles in early 2025 for impersonating well-known content creators. The action is part of a broader initiative to limit spam and prioritize original content in the feed.

Policy enforcement

Accounts that repeatedly post unoriginal content without permission or meaningful edits now face reduced distribution and restricted monetization. Meta says it’s also testing tools that link duplicated posts back to the original creators.

AI infrastructure

Meta plans to activate its first AI supercluster, Prometheus, in 2026. CEO Mark Zuckerberg says the company is investing “hundreds of billions of dollars” in AI compute infrastructure, including a scalable system called Hyperion.


Full story

In the first half of 2025, Meta removed about 10 million Facebook profiles impersonating large content creators. Separately, the company introduced new penalties — including reduced reach and restricted monetization — for accounts that repeatedly repost unoriginal content without permission or meaningful edits.

The company also penalized 500,000 accounts for spam-like behavior, demoting their comments, reducing post distribution and restricting monetization access.

What does Meta consider unoriginal content?

Meta defines unoriginal content as videos, images or posts that users copy from others without proper attribution or meaningful transformation. Meta encourages remixing, reaction videos and trend participation, but warns that minor changes — like adding a watermark or stitching clips — do not qualify as “meaningful enhancements.”

To counter this, Meta is deploying detection systems to identify duplicate videos and reduce their reach in the feed. The company is also testing ways to link duplicate content back to the original creator.
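
Systems that flag duplicate videos often build on perceptual hashing: compact fingerprints of frames that stay stable under re-encoding and small edits, so near-copies land close together. The sketch below is purely illustrative and not Meta's actual pipeline; it assumes 8x8 grayscale frames, and the function names and threshold are hypothetical.

```python
# Illustrative sketch of perceptual-hash duplicate detection.
# Not Meta's system: frames, names and the threshold are hypothetical.

def average_hash(pixels):
    """Hash an 8x8 grayscale frame: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count the bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def likely_duplicate(frame_a, frame_b, threshold=5):
    """Frames whose hashes differ in only a few bits are probable copies."""
    return hamming_distance(average_hash(frame_a), average_hash(frame_b)) <= threshold

# Example: a synthetic frame and a slightly brightened re-upload of it.
original = [[(x * y) % 256 for x in range(8)] for y in range(8)]
reposted = [[min(255, p + 3) for p in row] for row in original]

print(likely_duplicate(original, reposted))  # → True
```

Because a uniform brightness shift moves every pixel and the mean by the same amount, the hash bits are unchanged, which is the property that makes this kind of fingerprint robust to trivial edits like the watermarking and re-encoding the policy describes.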

How is Meta enforcing the new rules?

Meta may penalize repeat offenders by cutting off monetization and limiting the reach of all their posts. The company said it designed these measures to protect legitimate creators and improve content quality for users.

Creators can track how their content is performing and whether they are at risk of penalties through the “Professional Dashboard.” New post-level insights help explain why some content may not be getting traction.

How is Meta’s AI infrastructure evolving to support moderation?

To power its growing AI tools for content moderation, Meta is building massive new computing systems. CEO Mark Zuckerberg announced plans to bring the company’s first AI supercluster, called Prometheus, online in 2026. These superclusters are designed to train advanced models and handle the heavy workloads needed to detect spam, impersonation and unoriginal content at scale.

Zuckerberg said Meta will invest “hundreds of billions of dollars” in AI compute infrastructure. One planned system, Hyperion, could eventually scale up to five gigawatts. The effort is part of a broader overhaul of Meta’s AI strategy, led by the new Meta Superintelligence Labs.

The company is also hiring top AI researchers and engineers to improve how its tools flag and filter content. Zuckerberg said he wants Meta to lead the industry in compute power per researcher, one way the company hopes to stay competitive with rivals like OpenAI and Google.

How does this fit into broader AI concerns?

The changes come as platforms address growing concerns over “AI slop,” or mass-produced, low-quality content generated using artificial intelligence. YouTube recently updated its own monetization policies to block repetitive, spammy videos from earning revenue. Like Meta, YouTube said AI-assisted content can still qualify for monetization if it adds genuine creative value.

Emma Stoltzfus (Video Editor) and Mathew Grisham (Digital Producer) contributed to this report.

SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

Meta's removal of millions of impersonator accounts, together with stricter penalties for unoriginal and spammy content, reflects an effort to promote genuine content, protect creators and address the broader challenges AI-generated media poses for social platforms.

Content authenticity

Strengthening measures against impersonation and unoriginal content aims to protect legitimate creators and foster a more trustworthy environment for users.

Platform policy evolution

Adapting policies to address spam, low-quality material and AI-generated content aligns Meta with broader industry trends and shapes how monetization and user engagement are managed across digital platforms.


