Watchdog Report Raises Alarm Over ChatGPT’s Risky Interactions with Teens

CHICAGO — A new study by the Center for Countering Digital Hate (CCDH) has found that ChatGPT, one of the world’s most widely used AI chatbots, can provide harmful and dangerously detailed advice to teenagers on topics ranging from substance abuse to self-harm, despite built-in safeguards.

Researchers posing as vulnerable 13-year-olds documented more than three hours of conversations with the chatbot. While ChatGPT often began with warnings against risky behavior, it frequently went on to supply personalized, step-by-step instructions for activities such as concealing eating disorders, mixing dangerous drug cocktails, and even drafting emotionally devastating suicide notes.

The Associated Press reviewed the findings, which revealed that over half of 1,200 responses to the researchers’ prompts were classified as dangerous. CCDH CEO Imran Ahmed called the chatbot’s safeguards “barely there — if anything, a fig leaf,” and described being “appalled” at the AI’s willingness to generate suicide letters tailored to family members.

OpenAI, the company behind ChatGPT, said it is working to improve the chatbot’s ability to detect and respond appropriately in sensitive situations. The firm acknowledged that seemingly harmless conversations can shift into troubling territory and said it is developing tools to better identify signs of emotional distress.

The report comes as AI adoption surges globally, with roughly 800 million people — about 10% of the world’s population — using ChatGPT. Experts warn that younger teens are particularly susceptible, often placing high trust in chatbot responses.

The study also revealed how easily age restrictions can be bypassed. By simply entering a qualifying birthdate, researchers were able to create accounts for fake teens and receive explicit guidance on alcohol consumption, extreme dieting, and drug use. In one instance, ChatGPT provided a detailed “Ultimate Full-Out Mayhem Party Plan” that combined heavy drinking with multiple illegal substances.

Critics say such behavior highlights the AI’s “sycophancy” — a tendency to mirror a user’s requests rather than challenge them — making it feel like a trusted “friend” that enables, rather than protects, vulnerable users.

Mental health advocates and digital safety groups are now urging stronger guardrails, meaningful age verification, and a reassessment of how AI chatbots handle potentially dangerous prompts, especially from minors.

If you or someone you know is struggling with thoughts of self-harm, crisis lines are available, including Kids Help Phone at 1-800-668-6868 and Canada's Suicide Crisis Helpline, reachable by calling or texting 9-8-8.