OpenAI’s announcement of new parental controls for ChatGPT, rolled out just days after a wrongful-death lawsuit alleged the AI chatbot encouraged a California teen’s suicide, is being met with skepticism by Canadian child safety experts who say voluntary measures aren’t enough.
The new controls will allow parents to link accounts with their teens, manage chat history, disable certain features, and receive alerts if the system detects signs of acute distress. OpenAI also pledged to strengthen safeguards in long conversations, where responses can sometimes deviate from safety standards, and said it is building a system to identify users under 18 to apply “age-appropriate” settings — including blocking explicit content and, in some cases, alerting law enforcement.
But child protection advocates say these moves barely scratch the surface. Lianna McDonald, executive director of the Canadian Centre for Child Protection, likened the measures to “putting a fresh coat of paint on the outside of a house while the foundation is unstable.” She criticized the industry for rushing products to market with “predictable harms” and reacting only after public outrage.
Ryan Voisin of Children’s Healthcare Canada echoed that view, warning that parental controls “are too easily circumvented” and put too much responsibility on parents while leaving platform design in the hands of profit-driven companies. Both McDonald and Voisin argue that without legislation requiring safety-by-design standards, incidents of harm will continue.
Canada currently has no laws obligating tech companies to build child protections into their products, unlike countries such as Australia. Experts are urging Ottawa to enact digital safety legislation that would make companies legally accountable for protecting young users.
If you or someone you know is in crisis, mental health support is available through Canada’s nationwide 24/7 helpline by dialing 9-8-8.