
Social media could use facial recognition to root out underage users in UK


Social media platforms in the U.K. could be tasked with using facial recognition technology to “drive out” underage users. Ofcom, Britain’s communications regulator, will officially lay out plans next month to keep kids off social media, as research shows a growing number of young children are using the sites.


“The sort of thing that we might look to in that space is some of this facial age estimation technology that we see companies bringing in now, which we think is really pretty good at determining who is a child and who is an adult,” Ofcom’s Online Safety Policy Director Jon Higham told The Telegraph.


“So we’re going to be looking to drive out the use of that sort of content, so platforms can determine who’s a child and who isn’t, and then put in place extra protections for kids to stop them seeing toxic content,” he added.

Ofcom researched the number of children using social media platforms across the U.K. It estimates 60% of 8- to 11-year-olds have social media accounts. That would mean platforms could lose as many as 1.6 million users once the rules are implemented.

Meanwhile, Higham says more than 20% of underage children who have social media accounts claim they are adults.

“It doesn’t take a genius to work out that children are going to lie about their age. So we think there’s a big issue there,” he told The Telegraph.

Scrutiny of social media’s effects on young children has ramped up in recent years. In response, platforms have rolled out more ways to verify a user’s age, including scanning IDs, facial age estimation and even having an adult confirm the child’s age.

Next month, Ofcom will officially lay out exactly what it expects social media sites to do to make sure their users aren’t underage.

But Higham says the regulator will expect “the technology to be highly accurate and effective. We’re not going to let people use poor or substandard mechanisms to verify kids’ age.”

Failure to comply with these rules could result in a big bill for tech giants. Under the U.K.’s Online Safety Act, Ofcom can fine tech companies that fail to protect children up to 10% of global revenue. That could run around $12 billion for a company like Facebook. For persistent failures to protect kids, executives could be jailed for up to two years.


[Karah Rucker]

Social media sites could use facial recognition to root out underage users in the U.K., under plans set to be unveiled in January.

The tech could “drive out” millions of underage children from platforms, according to the head of online safety policy for Ofcom, the U.K.’s communications regulator.

In an interview with The Telegraph, Ofcom’s Jon Higham said, “The sort of thing that we might look to in that space is some of this facial age estimation technology that we see companies bringing in now, which we think is really pretty good at determining who is a child and who is an adult.”

“So we’re going to be looking to drive out the use of that sort of content, so platforms can determine who’s a child and who isn’t, and then put in place extra protections for kids to stop them seeing toxic content,” he added.

Ofcom estimates 60% of 8- to 11-year-olds have social media accounts, meaning more than 1.5 million kids in the U.K. could be affected. Most major social media platforms have an age floor of 13.

Higham says more than 20% of underage children who do have social media accounts claim to be adults.

“It doesn’t take a genius to work out that children are going to lie about their age. So we think there’s a big issue there,” he told The Telegraph.

Amid global scrutiny of the effect of social media on children, platforms have rolled out more ways to verify age, including scanning IDs, facial age estimation and even having an adult confirm a child’s age.

But Ofcom research shows that age verification isn’t triggered that often. Only 18% of Instagram users triggered a check, while 19% of TikTok users had to provide proof and just 14% of Snapchat users were ever asked.

Next month, Ofcom will officially lay out exactly what it expects social media sites to do to make sure their users aren’t underage.

But Higham says the regulator will expect “the technology to be highly accurate and effective. We’re not going to let people use poor or substandard mechanisms to verify kids’ age.”

Failure to comply with these rules could result in a big bill for tech giants. Under the U.K.’s Online Safety Act, Ofcom can fine tech companies that fail to protect children up to 10% of global revenue. That could run around $12 billion for a company like Facebook. For persistent failures to protect kids, executives could be jailed for up to two years.