Big Tech faces global push as social content moderators unionize


Summary

Mental toll

Content moderators report trauma, PTSD, and depression from constant exposure to graphic content.

Global union movement

Moderators from Kenya, Turkey, and more form a global union to push for better conditions and support.

US to join

U.S. moderators were not part of the union launch, but they are expected to support GTUACM.




Full story

Content moderators are demanding better working conditions from Big Tech. They argue their work reviewing violent and harmful online material takes a severe mental toll.

When users report violent or offensive posts on platforms like Facebook, TikTok, or Instagram, those posts don’t just disappear. A hidden team of mostly contracted moderators reviews the flagged content and decides if it violates community guidelines.

In Kenya, the Global Trade Union Alliance of Content Moderators (GTUACM) announced its launch at the end of April. The group says it will “hold Big Tech responsible” for poor working conditions and mental health risks.

Workers share mental health struggles

Sociologist Milagros Miceli said, “There are surely some content moderators that haven’t suffered mental health problems connected to the job, but I haven’t met them.” She compared the work to coal mining, calling it a hazardous profession.

Former Meta moderator Michał Szmagaj told The Verge: “We had to look at horrific videos – beheadings, abuse, torture. It damages you. But it doesn’t stop there. We’re also stuck with short-term contracts, constant pressure to meet targets, and being constantly watched.”

Moderators are now calling for better mental health services, more stable contracts, and fair treatment across the industry.

A global push for accountability

The new union includes members from Kenya, Turkey, and the Philippines. While the United States was not part of the initial launch, union leaders expect U.S. moderators to be involved in some capacity.

Artificial intelligence tools now assist in moderation, but they still rely on human workers to train and guide the systems.

Companies respond cautiously

Some tech companies are currently facing legal action brought by content moderators.

In February, The Guardian highlighted a content moderator involved in a class action lawsuit in Kenya, where 185 moderators have sued Meta, claiming trauma, low pay, and rushed work conditions.

Meta declined to comment on the lawsuit but said it is working on solutions such as blurring graphic content to reduce workers’ exposure.

Unionizing could slow moderation process

Critics warn that unionizing content moderators could slow down how efficiently platforms flag and remove harmful content. While the major tech companies have not issued public statements directly opposing the new union, they emphasize the importance of maintaining moderation speed and flexibility.

The growing movement marks a pivotal moment for an invisible workforce many users don’t even realize exists – one that shapes what we see (and don’t see) every time we open our apps.

Bast Bramhall (Video Editor) contributed to this report.

Why this story matters

The organization of content moderators worldwide highlights growing concerns about their working conditions and mental health risks, as well as the pivotal role they play in shaping online experiences.

Working conditions

Content moderators protect platforms from harmful content but face severe mental strain, often without proper support.

Union impact

If union demands reshape how moderation operates, it could affect how quickly and effectively platforms respond to harmful posts.

AI moderation

AI moderation tools rely on human moderators, whose decisions serve as the training data that guides the systems.