Study suggests automated video recommendations for young users don’t always filter out violence or other content that’s not age appropriate
When a child browses YouTube, the content recommended to them isn’t always age appropriate, a study suggests.
Researchers mimicked search behaviors of children using popular search terms, such as memes, Minecraft and Fortnite, and captured video thumbnails recommended at the end of each video.
Among the 2,880 thumbnails analyzed, many contained problematic clickbait, such as violence or frightening images, according to the Michigan Medicine-led research published in JAMA Network Open.
“Children spend a significant amount of time on free video sharing platforms that include user-generated content,” said lead author Jenny Radesky, M.D., developmental behavioral pediatrician at University of Michigan Health C.S. Mott Children’s Hospital.
“It’s important to understand that platforms with billions of hours of content can’t perform human review of everything suggested to children and use algorithms that are imperfect. Parents and children need to be aware of the risks of exposure to inappropriate content and develop strategies to avoid it.”
Some research suggests children ages eight and younger spend about 65% of their online time on video-sharing sites, many averaging an hour a day, Radesky says.
With hundreds of videos uploaded every minute, most platforms rely on automated moderation systems to flag videos that violate policies or depict violent or dangerous content.
In response, some platforms like YouTube have created made-for-kids labels to identify content appropriate for younger viewers.
But recent research suggests that many young children seek out videos that don’t fall into the “child-friendly” categories, searching instead for influencers, video games or funny videos.
Among the thumbnails the searches yielded, more than half included “shocking, dramatic or outrageous” messaging, the study suggests.
A little less than a third included violence, peril and pranks, while 29% included “creepy, bizarre and disturbing” imagery.
Researchers also flagged suggested content for “visual loudness,” or attention-capturing design, as well as for manufactured drama and intrigue and depictions of far-fetched luxury, such as cars, jewelry and houses.
A smaller percentage of automated suggestions included gender stereotypes.
“These findings contribute to growing research on how digital designs aim to capture and keep users’ attention,” Radesky said.
“We need more research on children’s interactions with these platforms to guide better policies that protect them from negative media experiences.”
Additional authors include Enrica Bridgewater, B.S.; Shira Black; August O’Neil; Yilin Sun; Alexandria Schaller, B.A.; Heidi Weeks, Ph.D.; and Scott Campbell, Ph.D., all of U-M.
Study cited: “Algorithmic Content Recommendations on a Video-Sharing Platform Used by Children,” JAMA Network Open. DOI: 10.1001/jamanetworkopen.2024.13855