Toy giant Mattel is teaming up with OpenAI to introduce artificial intelligence into children’s toys—sparking serious concern among experts in Ontario about the risks this poses to children’s development and emotional well-being.
In a brief announcement, Mattel revealed plans to develop AI-powered toys using tools like ChatGPT Enterprise, promising “age-appropriate play experiences” enhanced by generative AI. While details about specific toys remain vague, the company has stated the first product in this line will be unveiled later this year. Mattel also pledged to prioritize safety, privacy, and security, but offered no specifics about how it intends to protect children’s data or interactions.
The vague assurances are far from enough for researchers and child development specialists who fear that these AI-driven toys could have unintended—and potentially harmful—effects on young users.
“These aren’t just toys,” said Professor Selma Purac from Western University’s Faculty of Information and Media Studies. “Toys are teachers in more ways than one. They shape how children learn emotional regulation, develop empathy, and make sense of the world.” Purac, whose research focuses on children’s consumer culture and the impact of technology, warned that embedding AI into toys turns them into much more than playthings. “They become technological tools that simulate relationships and emotions,” she said, noting the danger of the “ELIZA effect,” where users, especially children, may form emotional attachments to machines that mimic human responses.
Purac urged caution and called for independent, long-term studies to validate the safety of such AI-powered products before they reach the mass market. “We simply don’t know what the consequences will be,” she said. “Rushing these into homes without rigorous testing is reckless.”
Dr. Teresa Bennett, a child and adolescent psychiatrist and professor at McMaster University, echoed these concerns. A core member of the Offord Centre for Child Studies, Bennett said embedding AI into toys could alter how children play, learn, and relate to others. “We need real transparency and accountability before rolling out products like these,” she said.
According to Bennett, these toys could interrupt key stages of development, particularly in symbolic and imaginative play. “Interacting with AI prompts may crowd out the kind of open-ended, creative play that builds cognitive flexibility, empathy, and emotional maturity,” she explained.
Bennett also warned that children who already struggle with social interaction, such as neurodivergent kids, may be especially vulnerable. “It might become easier for them to default to AI toys instead of building real-life social skills with peers.”
She further cautioned that AI toys might displace meaningful time spent with caregivers. Parents could unintentionally allow technology to replace important bonding experiences that are essential for language development, emotional security, and cultural learning.
Beyond social risks, the toys could also keep children indoors and sedentary for longer periods, contributing to physical health concerns. “The long-term risk is that children fall behind in critical social-emotional and thinking skills,” said Bennett. “That could affect not just school performance but mental health and overall development.”
Mattel has yet to respond to questions from Metroland Media about the safeguards it plans to implement.
As the toy industry moves into the AI era, experts stress that innovation shouldn’t come at the cost of childhood. “Play is serious business for a developing brain,” said Bennett. “If we forget that, we risk redefining childhood in ways we don’t fully understand.”