How AI-generated videos are distorting your child’s YouTube feed
Written by Arijeta Lajka
Four seconds into one version of “Old MacDonald Had a Farm,” an animated horse with two arms and four legs hatches from an egg.
In another video, a pink elephant, an orange flamingo and other animals appear next to letters of the alphabet, performing complicated gymnastic maneuvers on tightropes.
And in another video, animals form from paint being squirted into a glass of water and inexplicably grow mermaid tails.
The New York Times reviewed these clips, along with more than 1,000 other videos recommended to young children on YouTube, and found that the algorithm pushes bizarre, often nonsensical, artificial intelligence-generated videos from channels claiming to teach “toddlers” and “preschoolers” about the alphabet and animals.
In some videos, animals and people have warped faces or extra body parts. Often, the videos contain garbled text. Most clips have incoherent narratives, some riddled with misinformation. And none are longer than about 30 seconds, allowing little time to develop ideas, plots or any sense of repetition that is often necessary for learning.
Now produced with the help of readily available AI tools and online tutorials, many of these videos have millions of views and counting, with channels churning out videos at a rapid rate, sometimes even multiple times a day.
Many of the YouTube accounts producing AI-generated videos reviewed by the Times specifically target the youngest of viewers and their parents, marketing their channels as “educational” as opposed to entertainment. Creators are profiting off this content with little oversight from YouTube.
“To me, the meaninglessness of these videos is a huge problem because they’re just attention capture,” said Dr. Jenny Radesky, a developmental behavioral pediatrician and associate professor of pediatrics at the University of Michigan Medical School. “And then the worst case is that it’s so fantastical and full of attention capture that it is going to be cognitively overloading to the child.”
Radesky and others raised concerns about hyperrealistic AI content, especially for children who are too young to distinguish fantasy from reality.
McCall Booth, a developmental psychologist and researcher at Georgetown University, said children “may have a harder time in the future identifying fake content because their mental schema had already adapted to include improbable but aesthetically realistic character actions.”
Even on YouTube Kids, which is intended to provide a more controlled digital environment for children, these kinds of AI videos are easy to find. Last summer, videos of AI-generated animals diving into pools even became a TikTok trend.
Rachel Barr, a developmental psychologist and director of the Georgetown University Early Learning Project, pointed out that the pool-diving videos in particular contain a lot of conflicting information for young children who may have a hard time deciphering what is real.
“The animal could be real. The pool could be real, but again, it’s a mismatch between what should happen in the real world between those two things. So that is going to place a lot of this cognitive load on the child to try and map those things together,” Barr said.
“It may seem like it’s innocuous,” she added. “But that is not going to help them learn either about swimming or giraffes or ‘G.’”
Radesky explained that well-crafted media serves as a mirror, reflecting the world that children already know back to them. Shows like “Mister Rogers’ Neighborhood” or “Sesame Street,” for example, intentionally try to help make sense of the world — not only through letters and numbers, but also through emotions and learning about interpersonal relationships.
The American Academy of Pediatrics issued a guide for parents on how to select media content for their young children, telling parents to avoid content that is either AI-generated or highly sensationalized. The guidance also cautioned against consuming short-form videos.
While there aren’t many studies yet on how short-form media affects young children, Barr said that for children under the age of 5 whose attention systems are still developing, the videos move too rapidly, and usually aren’t long enough to include any meaningful context or story plot.
The Times focused primarily on YouTube Shorts when conducting its analysis of AI videos, as most AI tools default to short-form video and offer vertical formatting options.
Over the course of several weeks, the Times watched videos from popular children’s channels on YouTube like CoComelon, “Bluey” or Ms. Rachel from a private browser at different times throughout the day. Then we scrolled through the platform’s recommended YouTube Shorts in 15-minute intervals to better understand how the algorithm floods the feed with this content.
In one 15-minute session, after watching CoComelon’s “Wheels on the Bus” video, more than 40% of the videos watched appeared to contain AI-generated visuals. The Times manually reviewed each of the videos, some of which clearly featured YouTube’s label for “altered or synthetic content,” while others displayed visual errors or other distortions in the background.
The AI-generated content wasn’t always obviously flawed, and some videos were sufficiently seamless to evade casual detection by the human eye. To further vet the videos, the Times used an AI detector to determine with high probability that the videos, and in some cases the music and voices, were AI-generated.
The Times also found that the same AI videos or channels tended to pop up repeatedly in multiple sessions.
Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina at Chapel Hill, further questioned the addictive nature of these videos.
“These do strike me as something that are made to really get in your head,” Prinstein said. “It may even be harmful, but we need more data.”
Prinstein explained that due to the dramatic proliferation of AI content in just the last year alone, it’s hard to keep up with the research findings.
The jury is still out on definitive long-term health effects, and low-quality videos aimed at children existed on the platform long before the rise of AI. Even so, experts fear that the sheer volume of these videos may cause displacement, in which children lose out on opportunities to engage with more beneficial media or with activities like reading and interacting with others.
The vast quantity of AI content is already upending the feeds of all kinds of social media users. Elsewhere on YouTube, older children can easily find disturbing videos depicting abusive and violent scenes featuring popular children’s characters. Facebook pages are uploading altered images that misrepresent historical events. AI avatars in the form of “doctors” on Instagram are pushing bogus wellness advice and products. In November, TikTok said it had labeled over 1.3 billion videos as AI-generated.
Some platforms have begun to tighten their rules around the use of these tools. Pinterest has features that allow users to select how much of this kind of content they want to see. TikTok also said it was testing ways that would enable people to reduce the amount of AI content in their feeds. Last month, YouTube announced new controls that allow parents to set time limits on YouTube Shorts.
The Times requested comment from YouTube on its policy around AI videos for children, and shared five channels as examples. In response, YouTube suspended all five accounts from the YouTube Partner Program, meaning they are ineligible to earn ad revenue on YouTube and are blocked from appearing on YouTube Kids. The Times also sent three examples of hyperrealistic AI videos on YouTube Kids, which YouTube then removed from the app.
YouTube also stated that it removed one video the Times shared for violating child safety policies. The AI video showed animals being chased and turning different colors once inoculated with a syringe. However, similar videos can still be found on the channel.
“We require creators to disclose when they’ve used AI to create realistic content, meaning things a viewer could easily mistake for a real person, place, or event,” Boot Bullwinkle, a YouTube spokesperson, said in an email to the Times.
But the Times’ review found that creators are not consistently disclosing if videos contain synthetic visuals to make more realistic-looking content. And when it comes to animated AI videos for children, YouTube does not require these to be labeled at all.
This means that much of the burden of identifying AI content is falling to parents — a task that is daunting even for experts as the tools that make this content are rapidly improving.
Some parents have turned to Reddit looking for tips to filter out AI videos on YouTube. Some commenters there advise fellow parents to create their own playlists of vetted content, while others argue for boycotting the platform altogether.
Allison Sims, 34, has two children and lives in Texas. She often turns on her own YouTube account to keep her 2-year-old occupied while she’s making dinner. Her daughter watches Ms. Rachel, The Wiggles and other channels that play nursery rhymes. But it wasn’t long before her daughter figured out how to scroll through YouTube Shorts.
After coming across several shorts that she found disturbing in her daughter’s watch history, Sims said she removed the app from the iPad. She shared some of the videos her daughter watched with the Times, which included AI-generated videos.
“Because AI is so new and as a parent, I wouldn’t know what to look out for except for when they’re very obvious that I stop and look at it,” Sims said. “But I feel like it’s something that as parents we should kind of know and be aware of.”
Sims also questioned the motive of the creators behind the videos. “Is it that they’re actually wanting to help or is it they’re trying to grab your kids’ attention?”
Many of the YouTube accounts uploading AI content for children are anonymous, with no contact information or identifiable details about who is behind them.
But one creator, Syeda Jaria Hassan, spoke to the Times and explained how she taught herself how to make AI videos using tools like Google’s Whisk and Runway. She said that creating AI content for children has become her full-time job.
Hassan, who lives in the city of Sargodha in Punjab, Pakistan, said she decided to focus on making content for children after teaching at a Montessori school for children between 4 and 8. Her account, Suno Kids TV, which is described as a channel to educate and entertain children, features animated AI videos of animals and sing-along songs.
The videos with the most views on her channel are specifically about Halloween. With more than 370 million views, one of her YouTube Shorts features spooky animals covered in bloody wounds with haunting green eyes.
Hassan, 29, declined to say how much revenue this particular video generated or how much the channel makes overall, but noted that if videos “get nice views, it will give you a nice living.”
She even showed some of the videos she created to her former students.
“They loved it,” Hassan said. “They picked up very fast from the videos. They learned the sounds. They learned the spellings. They learned the letters.”
When asked about how children can be distracted by these kinds of effects, Hassan responded that TV channels and other YouTube channels for children also rely heavily on visual effects and that she’s just following a model of children’s programming that has been around for years.
However, when it comes to learning, experts say children benefit most from watching media that has a clear narrative with a beginning, middle and end, along with characters that children can attach to and scenes that relate to their real lives. Barr noted that storybooks and other well-structured content align with a familiar format: following a character through a journey. Media that illustrates relatable scenes, like going to the park, ultimately helps children understand and connect back to their own world.
Simple language and short phrases are also helpful when it comes to cognitive development. Programming that teaches children concepts like problem-solving, or that features intentional repetition, can help with memory recall.
One example is PBS Kids’ “Daniel Tiger’s Neighborhood,” a modern spinoff of “Mister Rogers’ Neighborhood,” which follows a young animated tiger who teaches life skills and social strategies. The show works with child development experts when crafting stories.
Ellen Doherty, chief creative officer at Fred Rogers Productions, explained that they developed a structural pattern for the show: two separate short stories in every episode, with songs that strategically reinforce the episode’s themes and that parents and children can sing and remember together. The music also helps move the story along, but at a controlled speed.
“Everything happens in a pace that a young child who does not have cinematic language yet can follow and can actually literally process what’s happening,” Doherty said.
In one story, Daniel Tiger teaches children about brushing their teeth through song, making sure to interact with young viewers and taking long pauses.
“That spark of human connection is everything,” Doherty said.
But does the fact that a video contains AI elements mean it can’t foster human connection?
Some researchers like Ying Xu, an assistant professor of education at the Harvard Graduate School of Education, say that well-designed AI can actually serve to support children’s learning by satisfying children’s curiosity and helping answer their questions.
Xu focuses her research on designing AI that supports language and literacy development, and collaborates closely with producers of the animated PBS Kids’ shows “Elinor Wonders Why” and “Lyla in the Loop.”
For Xu’s research, an interactive version of Elinor was developed that poses questions directly to children and offers feedback based on their responses. Xu found that the conversational videos helped children better understand science, technology, engineering and math concepts.
“I don’t agree that adults should actually use AI to monetize, to mass produce low-quality videos, but I do think that it actually offers a tool for children to express themselves,” Xu said, adding the caveat that navigating certain AI tools to help children engage in storytelling by creating their own multimedia content should always be guided by teachers and parents.
This article originally appeared in The New York Times.