Meta responds to claims it’s ‘struggling’ to keep child predators off Facebook and Instagram
Meta, the parent company of Facebook and Instagram, has responded to a Wall Street Journal story published Friday claiming the social media giant is struggling to remove child predators and child exploitation content from its platforms.
The Journal reported that Meta set up a child-safety task force in June after the paper found that “Instagram’s algorithms connected a web of accounts devoted to the creation, purchasing and trading of underage-sex content.”
“Five months later,” per the article, “tests conducted by the Journal as well as by the Canadian Centre for Child Protection show that Meta’s recommendation systems still promote such content. The company has taken down hashtags related to pedophilia, but its systems sometimes recommend new ones with minor variations. Even when Meta is alerted to problem accounts and user groups, it has been spotty in removing them.”
Meta has been quick to reject accusations that it is liable for the sharing of child exploitation material, pointing to the multiple steps it has taken to reduce and remove such content from its social media sites. In a company blog post, Meta listed some of those actions.
“[W]e created a task force to review existing policies; examine technology and enforcement systems we have in place; and make changes that strengthen our protections for young people, ban predators, and remove the networks they use to connect with one another,” Meta wrote. “The task force took immediate steps to strengthen our protections, and our child safety teams continue to work on additional measures.”
A Meta spokesperson also released a separate statement on the issue of child exploitation.
“Child exploitation is a horrific crime and online predators are determined criminals,” a Meta spokesperson said in a statement. “They use multiple apps and websites, test each platform’s defenses, and adapt quickly. We work hard to stay ahead.”
The spokesperson continued: “That’s why we hire specialists dedicated to online child safety, develop new technology that roots out predators, and we share what we learn with other companies and law enforcement. We are actively continuing to implement changes identified by the task force we set up earlier this year.”
The Journal detailed its efforts to expose “disturbing” sexualized content involving children in various forums.
“During the past five months, for Journal test accounts that viewed public Facebook groups containing disturbing discussions about children, Facebook’s algorithms recommended other groups with names such as ‘Little Girls,’ ‘Beautiful Boys’ and ‘Young Teens Only.’ Users in those groups discuss children in a sexual context, post links to content purported to be about abuse and organize private chats, often via Meta’s own Messenger and WhatsApp platforms.”
Researchers at the Stanford Internet Observatory, the Journal reported, have discovered that “when Meta takes down an Instagram or Facebook hashtag it believes is related to pedophilia, its system often fails to detect, and sometimes even suggests, new ones with minor variations. After Meta disabled #Pxdobait, its search recommendations suggested to anyone who typed the phrase to try simply adding a specific emoji at the end.”