YouTube will rely on spotty AI to comply with FTC settlement
(Bloomberg) -- YouTube will stop selling personalized ads on videos aimed at children as part of a regulatory settlement announced Wednesday. But the company’s plan relies on technology that has struggled to make nuanced decisions in the past.
The Google unit will use artificial intelligence to identify which videos are aimed at kids, then cut those clips off from targeted ads.
It’s a plan politicians and consumers have heard before. YouTube has used AI for years to find and take down unwanted content including pornography, terrorist propaganda and extreme violence. Other tech companies, such as Twitter Inc. and Facebook Inc., have said AI is the answer to their problems too, from online harassment to election meddling by foreign states.
Google is one of the most accomplished AI companies, but with so much online content, the technology sometimes falls short, as it did when thousands of videos of the March terrorist attack on a New Zealand mosque were uploaded to YouTube.
AI isn’t the first line of defense. YouTube is also asking video creators to self-report whether their content is aimed at kids. But creators rely heavily on ad revenue, so they may have little incentive to tell YouTube when their clips are for kids. Indeed, some are already describing their productions as “family-based play” or “co-play,” rather than videos specifically for children. That suggests AI will play a major role in policing the new rules and flagging videos that fall into a gray zone between children’s content and everything else.
“In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we’ll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games,” YouTube Chief Executive Officer Susan Wojcicki wrote in a blog.
“If creators intentionally fail to properly classify their content, we will take appropriate action,” a spokeswoman for Google said.
YouTube uses machine learning, a type of AI software that improves as it processes more data, requiring less input from human programmers. Google is a leader in the field, but it’s unclear how well the technology will work when applied to the reams of kids’ content on YouTube.
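The kind of classifier Wojcicki describes could, in a drastically simplified form, look something like the sketch below: a toy naive Bayes model that scores video titles by how strongly their words are associated with a hand-labeled “kids” category. The titles, labels, and word-level features here are invented for illustration; YouTube’s actual system is not public and almost certainly uses far richer signals than title text.

```python
import math
from collections import Counter

# Toy training data: (title, label). All examples are invented for illustration.
TRAIN = [
    ("fun toy unboxing for kids", "kids"),
    ("learn colors with cartoon animals", "kids"),
    ("nursery rhymes and songs for toddlers", "kids"),
    ("quarterly earnings call highlights", "other"),
    ("advanced woodworking shop tour", "other"),
    ("late night political commentary", "other"),
]

def train(examples):
    """Count word frequencies per label for a naive Bayes model."""
    word_counts = {"kids": Counter(), "other": Counter()}
    label_counts = Counter()
    for title, label in examples:
        label_counts[label] += 1
        word_counts[label].update(title.lower().split())
    return word_counts, label_counts

def classify(title, word_counts, label_counts):
    """Return the label with the higher log posterior, using add-one smoothing."""
    vocab = set()
    for counter in word_counts.values():
        vocab.update(counter)
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in title.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("cartoon songs for kids", word_counts, label_counts))
```

The hard part, as the article notes, is not the model itself but the gray zone: a title like “family-based play” gives a classifier far weaker signal than “nursery rhymes for toddlers.”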
When YouTube launched a kids-specific app in 2015, it used software to pick the right videos from billions of clips on the main YouTube site. Less than three months after launch, child and consumer advocacy groups found inappropriate content on the app, including explicit sexual language and jokes about pedophilia.
“The AI has been improving its ability to identify content and, although I’m sure there will be mistakes here and there, I believe it will adequately identify content intended for children,” said Melissa Hunter, founder of the Family Video Network, which has a channel on YouTube. “And as scammers develop new methods to trick the AI, which I’m sure they are already working on, YouTube engineers will update the classifiers to overcome those tricks. Nothing is foolproof, but I think it is up to the task.”
Brenda Bisner, an executive at Kidoodle.TV, a rival streaming service, is less convinced. Creators of kids’ videos have become dependent on YouTube ad revenue, she said, so it’s only a matter of time before children see something they shouldn’t on the service again.
The U.S. government should have forced YouTube to eliminate all kids’ videos from its website, and banned YouTube from schools, Bisner said. “Anyone who makes kids’ content shouldn’t be on YouTube,” she added. “It’s been proven time and again that it’s not safe.”
Earlier this year, YouTube software mistook a live video of the Notre Dame cathedral fire for a clip of the 9/11 terrorist attacks. “Our systems sometimes make the wrong call,” a YouTube spokesman said at the time. YouTube’s algorithms can also be tricked by making slight tweaks to a video, such as changing the color of some pixels or flipping it on its side, especially if the content is new to the service. That’s how many of the New Zealand mosque shooting videos got through YouTube’s digital defenses.
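Trivial edits defeat detection when the system relies on exact-match fingerprinting, which compares a signature of the raw pixel data byte for byte. The sketch below illustrates the idea with a tiny stand-in “frame”: mirroring the image or nudging a single pixel produces a completely different hash, so an exact-match blocklist misses the altered copy. This is an illustrative simplification, not YouTube’s actual matching system, which is not public.

```python
import hashlib

def fingerprint(frame):
    """Exact-match fingerprint: a SHA-256 hash of the raw pixel bytes."""
    return hashlib.sha256(bytes(frame)).hexdigest()

# A tiny stand-in for one video frame: a 4x4 grid of grayscale pixel values.
original = [
    10, 20, 30, 40,
    50, 60, 70, 80,
    90, 100, 110, 120,
    130, 140, 150, 160,
]

# Two trivial edits of the kind the article describes.
flipped = []
for row in range(4):
    flipped.extend(reversed(original[row * 4:(row + 1) * 4]))  # mirror each row
tinted = list(original)
tinted[0] += 1  # change a single pixel value

blocklist = {fingerprint(original)}  # known-bad content, stored by exact hash

# The untouched upload is caught; the trivially edited copies slip through.
print(fingerprint(original) in blocklist)  # True
print(fingerprint(flipped) in blocklist)   # False
print(fingerprint(tinted) in blocklist)    # False
```

Perceptual hashing and learned similarity models are meant to close this gap by scoring how alike two videos look rather than whether their bytes match, but as the mosque shooting uploads showed, they too can lag when content is new to the service.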
YouTube is taking other steps to try to protect children from such blunders in the future. It plans to promote the YouTube Kids app more aggressively and recently limited which channels can be part of this children’s video service.
Targeted ads are more lucrative for YouTube’s owner, Google, than untargeted ones. But YouTube’s solution is far less expensive than other potential remedies, such as doing away with all types of ads on children’s videos.
Loup Ventures, a research firm, estimates YouTube’s total revenue will be $10 billion to $15 billion this year, with children’s media contributing between $500 million and $750 million of that. Eliminating targeted ads on kids clips will dent total revenue by 1% at most, according to Doug Clinton, a Loup Ventures analyst.
“Bottom line: YouTube will still serve ads alongside kids content but with less data, and the data probably doesn’t add as much of a premium to the inventory as one might think,” he said.
The change may take a larger bite out of the cottage industry of creators who have grown thriving businesses by making videos for kids and posting them to YouTube, taking a cut of the ad money. YouTube said it expects a “significant business impact” for these kinds of channels, and is setting aside $100 million to help fund “thoughtful, original” kids content.
“The fund is a big deal,” said Chris Williams, chief executive officer of kids’ media company Pocket.Watch. “It clearly shows that YouTube is going to try and soften the blow.”