In 2025, about 5.4 million young people reported turning to generative AI tools for mental health or emotional support when they felt sad, angry, or nervous. For many, these conversations feel comforting and supportive. But they also carry real risks. Emotional dependency, misleading or inaccurate advice, and the inability of current AI systems to respond appropriately in crisis situations, such as self-harm or suicidal thoughts, have already led to devastating consequences. Several major AI companies now face lawsuits following teen suicides linked to harmful chatbot interactions.
At the same time, policymakers and researchers are struggling to keep pace with technological change, and technology companies are asking for clearer guidance on how to better protect young users. Yet young people themselves are often missing from the conversation.







