
YouTube and companies that advertise with it are coming under fire after one of the video-sharing site’s users showed how the platform can apparently be used to bolster child exploitation.

User MattsWhatItIs posted the video late Sunday night. It quickly went viral, garnering nearly 600,000 views within 14 hours.

In the video, MattsWhatItIs claims to have discovered a “wormhole” leading to “a softcore pedophile ring.”

He says that after starting a new YouTube account unconnected to any of his previous browsing history, it takes him only a few moments of searching and surfing to get from the YouTube homepage to one of many seemingly innocuous videos that have nonetheless racked up unusually high view totals and comment counts.

The videos typically feature young girls engaged in day-to-day activities such as yoga or gymnastics. Comments in many languages along the lines of “beautiful goddess” and “beautiful video Barbie” are common, as are suggestions for what the girls could do in future videos. The vast majority of the comments stop short of being sexually explicit.

The comments often include timestamps linking users’ messages to a specific moment in the video. “These guys aren’t timestamping this stuff because the little girl made a funny joke,” MattsWhatItIs notes in his video.

More concerning to MattsWhatItIs is that YouTube’s recommendation engine makes it even easier for users interested in these sorts of videos to find more of them. While he says it took “about five clicks” for his new account to find its first such video, he was soon bombarded with suggestions of similar content, all one click away.

“YouTube’s algorithm, through some kind of glitch or error in its programming, is actually facilitating their ability to do this,” he says.

The video has attracted thousands of comments and social media posts largely supporting its claims and slamming YouTube for allowing the questionable behaviour to fester. Some users have also contacted prominent companies that advertise on YouTube, alerting them that their commercials could be associated with the videos and comments in question.

What is YouTube doing?

Some of the videos flagged by MattsWhatItIs contain advertisements, meaning the people who uploaded them – or who, in some cases, appear to have taken them from their creators’ pages and reuploaded them – are making money off the inappropriate interest the videos have generated.

A YouTube spokesperson said the platform is “invest[ing] heavily” in efforts to combat child exploitation, including by forming partnerships with non-profit groups.

“Any content – including comments – that endangers minors is abhorrent and we have clear policies prohibiting this on YouTube,” the spokesperson said in a statement to CTVNews.ca.

“We enforce these policies aggressively, reporting it to the relevant authorities, removing it from our platform and terminating accounts.”

YouTube was criticized in 2017 for monetizing videos which appeared to be harmless cartoons but later morphed into sexual or otherwise potentially disturbing content.

The platform announced at the time that it was strengthening its Community Guidelines to crack down on “content on YouTube that attempts to pass as family-friendly, but is clearly not.” YouTube also said it was taking an “even more aggressive” stance on “inappropriate sexual or predatory comments,” including turning off all comments on videos of children where those sorts of remarks are noticed.

A few of the videos MattsWhatItIs showcased had comments turned off. Most did not, possibly suggesting that they had never been reported by YouTube’s users or flagged by its monitoring system.

The videos themselves would not be taken down by YouTube, as the guidelines cover content that is intended to be in some way sexual – not innocuous videos which prompt sexual interest in a portion of their audience.

YouTube revealed last December that it had removed 7.8 million videos, nearly 1.7 million accounts and more than 224 million comments over three months for violating the platform’s guidelines. The vast majority of those videos contained either spam or adult content, with about 10 per cent being flagged for child safety reasons. About 60 per cent of the videos had not been viewed even once.

How to protect your children

YouTube requires its users to sign up for Google accounts, which ask them to provide their birthdates. Canadians and Americans under the age of 13 are not allowed to create accounts, meaning they cannot publish videos to YouTube, although there is nothing stopping them from providing fake birthdates.

The platform also provides a page of safety tips for teenage users, suggesting that they make use of YouTube’s privacy controls and stay away from filming sexually suggestive content. YouTube’s recommendations for parents include watching how their children use the service and flagging anything that appears to violate the platform’s guidelines.

Accounts that repeatedly violate the guidelines are subject to increasing punishments, with YouTube deleting any accounts that break the rules three times within three months. Accounts can also be terminated immediately if YouTube finds that they engaged in “predatory behaviour” or otherwise endangered children.

Technology expert Claudiu Popa would like to see YouTube and other social media platforms allocate more resources to filtering out inappropriate content and teaching their users about the consequences of using their services.

“There needs to be more investment in filtering; there needs to be more investment in prevention and in education,” he told CTVNews.ca Monday.

Popa is the founder of the KnowledgeFlow Cybersafety Foundation, which works with teachers and schools to promote cybersafety. He said parents should not use social media “as a babysitter” and always be aware of what their children are doing online, without watching every keystroke.

“You don’t necessarily want to be watching everything that they’re constantly typing … because they’re just going to burrow deeper,” he said.

The Canadian Centre for Child Protection maintains the Protect Kids Online website, which offers advice for parents seeking more information about dangers children may face by using the internet and social media.
