
Meta apologizes after Instagram users subjected to graphic content


  • Many Instagram users were shown unwanted violent or graphic content on their Reels page. Some users realized others had the same experience after comparing notes on other social media platforms.
  • Meta apologized for the mistake.
  • The company says it has more than 15,000 reviewers and advanced technology to help detect, prioritize and remove disturbing content.

Full Story

Instagram users recently got more than they bargained for when logging into the app. Meta, which operates Instagram, admitted that an error subjected users to violent and graphic content on their Reels page.


How has Meta responded to the complaints?

Meta said it has fixed an error "that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake."


Instagram users compared notes across social media platforms, reporting an uptick in violent content and recommendations dubbed "not safe for work."

The explicit Reels appeared even though some users had Instagram's sensitive content control set to its strictest level.

What is Meta’s policy about graphic content?

According to Meta’s own policy, it works to protect users from disturbing images and removes especially graphic content. This includes videos that show dismemberment, innards and charred bodies, as well as sadistic remarks about images depicting human or animal suffering.

What did Instagram users specifically see?

CNBC reported that the images Instagram users unexpectedly received included dead people, graphic injuries and violent attacks.

Meta said it has a team of more than 15,000 reviewers and advanced technology to help detect, prioritize and remove disturbing content before it offends users.

The latest error comes less than two months after the company announced plans to update its policies to better promote free expression. Meta, which also operates Facebook, said it would change the way it enforces content rules to reduce mistakes that had led to the censorship of some users.

