
YouTube's recommendations still push harmful videos, crowdsourced study finds

An investigation from the Mozilla Foundation asked more than 37,000 YouTube users to act as watchdogs and report harmful content.
Image: Two brothers watch a Brazilian YouTube gaming channel at their house in Rio de Janeiro on Jan. 3, 2019. Mauro Pimentel / AFP via Getty Images file

YouTube’s recommendation algorithm suggests videos with misinformation, violence, hate speech and other content that violates its own policies, researchers say. 

A new crowdsourced investigation from the Mozilla Foundation, the nonprofit behind the Firefox web browser, asked more than 37,000 YouTube users to act as watchdogs and report harmful content through a browser extension; those reports were then analyzed by research assistants at the University of Exeter in England. The user-reported content included Covid-19 misinformation, political conspiracy theories, and violent and graphic material, including sexual content in what appeared to be cartoons for children, the analysis found.

A majority of all the reports, 71 percent, came from videos recommended by YouTube’s algorithm, and recommended videos were 40 percent more likely to be reported than videos users intentionally searched for, according to the report. Reported videos also outperformed other videos watched by volunteers, acquiring 70 percent more views per day.

“When it's actively suggesting that people watch content that violates YouTube’s policies, the algorithm seems to be working at odds with the platform's stated aims, their own community guidelines, and the goal of making the platform a safe place for people,” said Brandi Geurkink, Mozilla’s senior manager of advocacy.

The report provides a fresh look into YouTube's recommendation system, which has come under fire in recent years for its propensity to push some viewers to more extreme content. YouTube has made some changes to the system, but it remains a source of concern among internet activists and extremism researchers.

The crowdsourced research is both the largest of its kind and just “the tip of the iceberg,” according to Mozilla’s report. The research, based on 3,362 videos that volunteers in 91 countries flagged as “regrettable” between July 2020 and May 2021, offers a limited but important window into YouTube’s opaque recommendation machine.

YouTube said videos promoted by its recommendation system generate more than 200 million views a day from the homepage, and that the system draws on more than 80 billion pieces of information.

"We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content," the company said in a statement. "Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1 percent.”

YouTube is the second-most popular website in the world, after its parent company, Google. Yet little is known about its recommendation algorithm, which drives 70 percent of what users watch.

The report highlights the ways in which videos in languages other than English are more likely to be amplified and to escape moderation. The rate of “regrettable” videos was 60 percent higher in non-English-speaking countries (most notably Brazil, Germany and France), and harmful videos related to the pandemic were more common in non-English languages.

Nearly 200 videos recommended to volunteers were eventually removed by YouTube, according to the report. These videos had a collective 160 million views before their removal.

In 2019, Mozilla first published individual stories from users who reported negative experiences with YouTube as part of a campaign it called “YouTube Regrets.” Those stories of radicalization and falling down rabbit holes mirror findings from investigations into the dangers of social media algorithms published by news organizations and academic researchers. Mozilla’s newest research was an attempt to work around YouTube’s reluctance to provide researchers with data and to validate the anecdotal evidence from its original project, Geurkink said.

“If YouTube wouldn’t release any data, we could just ask people to send it to us instead,” Geurkink said. “I was struck by how quickly the data confirmed some of the things that we had already been hearing from people.”

The report recommended several solutions, including that users update their data settings on YouTube and Google, that platforms publish transparency reports including information about recommendation algorithms, and that policymakers enact regulations that mandate such transparency. YouTube’s current quarterly reports include limited data on video removals and views. 

“Ultimately what we need is more robust oversight, genuine transparency and the ability to scrutinize these systems,” Geurkink said. “We can't just continue to have this paradigm where researchers raise issues, companies say ‘OK, it’s solved,’ and we go on with our lives.”