My daughter Kaylie was four years old. She was sitting on her favorite kitchen stool, headphones on, watching cartoons on our family iPad. After the video ended, she walked over to me and asked the meaning of a word that’s not fit to print. My jaw hit the floor. I asked her where she heard a word like that, and she told me it was in the video she just watched. I unplugged her headphones, and sure enough, there was Dora the Explorer swearing like a sailor.
This is a true story about YouTube content gone wrong, and sadly, one that a lot of parents are familiar with. Another infamous example that made headlines featured a man giving instructions for suicide spliced into a clip of a popular children’s video game. This kind of rogue, inexplicably disturbing content is a problem of YouTube’s own making, one that’s inherent to a platform with a hands-off moderation policy where anyone can publish anything.
Should YouTube vet and approve videos before they go live? That was the question facing the platform earlier this year. YouTube had agreed to make changes to improve child privacy following an investigation and a $170 million fine from the FTC, and it was apparently considering moderating all content across the platform. But it ultimately decided against the idea, which isn’t all that surprising given that around 500 hours of content are uploaded to YouTube every minute. Full moderation would also have changed the nature of the platform itself: YouTube would no longer be a “neutral” space where anyone could upload anything.
If it had decided to curate, YouTube would have taken a giant leap toward becoming a programmer in the broadcast sense, which would have exposed it to increased regulation, liability, and risk. So instead, YouTube now requires content creators to designate whether or not their videos are made for kids. This essentially puts the onus on creators and holds them more directly responsible for the content they publish.
In theory, this change, along with the other updates YouTube made recently, should help protect children’s privacy, but the platform stopped short of the one change that could make the content itself safer for kids: moderation. It’s hard to know what compels someone to make Dora say unholy things, or (even worse) to splice instructions for self-harm into a kids’ video, but as long as the platform relies on content creators to police their own videos, it’s likely to keep happening.
So what can parents do to keep their kids safe on YouTube? We learned the hard way that looks can be deceiving, so after the Dora incident we made a few changes to the way our family uses YouTube. Kaylie only watched videos without headphones until she got a little older; that way, we could intervene if Peppa Pig started running her mouth. We also stuck to channels we knew and trusted, and we adopted a hard rule: no clicking through recommended videos.
It’s of course “best practice” to watch content together with your kids, but that’s not always possible, especially when YouTube is giving you a much-needed parental sanity break. So if you’re setting your kids up with a video and feeling uneasy, skip ahead to a few spots to make sure there’s nothing untoward. It’s also not a bad idea to talk with your children about what to do if they see something upsetting, and older kids might even be ready to learn how to report videos on their own.
Short of the platform moderating every video before it goes live, it’s going to be tricky to stop bad people from publishing bad things. Hopefully the increased liability on the part of content creators makes them think twice before targeting children with disturbing videos, but it’s a good idea to stay vigilant when your kids are involved.