Policymakers nationwide are playing catch-up as they work to understand and rein in social media’s impact on youth mental health. In 2023, 12 states enacted social media-related bills, including Montana’s TikTok ban, the first in the nation. This year, state legislatures across the country have filed 230 social media bills.
Policymakers are also moving to head off the potential harms of generative artificial intelligence (AI). According to the National Conference of State Legislatures, more than 40 states (80% of the country) have introduced AI-related bills in 2024 legislative sessions, including 100 in California alone. Colorado recently enacted the nation’s most sweeping AI law, designed to protect users from “algorithmic discrimination.” Most states, however, have focused on narrower measures, such as creating AI task forces (Indiana) or combating the rise in child pornography created using AI tools (South Dakota).
Moving policy beyond the status quo
The choice is not between the status quo and banning young people from TikTok, Instagram, and other platforms and technologies. We must take a more nuanced, evidence-based approach to supporting young people. Lawmakers can insist on guardrails that limit harm without depriving young people of vital and affirming online communities. And young people themselves can play a role in shaping the policies that directly affect their lives.
The findings of Hopelab and Common Sense Media’s 2024 report, “A Double-Edged Sword: How Diverse Communities of Young People Think About the Multifaceted Relationship Between Social Media and Mental Health,” show that frequent social media use remains widespread among young people ages 14-22, in ways both supportive and challenging.
Respondents said they turn to social media for emotional support and connection, and to learn ways to care for their mental health and well-being. Many, especially Black and LGBTQ+ young people, join online communities they would struggle to find in person locally. At the same time, young people also encounter potentially harmful content and must actively manage their exposure to it, including by taking temporary or permanent breaks from their accounts.
Evidence-based policy in action
In New York State, the Stop Addictive Feeds Exploitation (SAFE) for Kids Act aims to help young people manage the content they see by eliminating algorithmically curated feeds. Platforms would instead be required to present content in the order it was posted by the accounts a user follows. It is one of several bills introduced in state legislatures nationwide, including in Maryland and North Carolina, to curb the impact of platform algorithms on users.
Another critical research finding is the growing percentage of young people ages 14-22 who value social media for connection and creativity. In all, 70% of respondents use social media to get inspiration from others, 60% to express themselves creatively, 55% to feel less alone, and 54% to get support or advice when they need it. While often targeted online, LGBTQ+ young people use social media to combat loneliness and to connect with content that validates their identities.
In addition, 68% of social media users ages 14-22 often or sometimes come across comments celebrating a range of body shapes, sizes, and abilities. And about six in 10 young people often or sometimes see comments affirming people from different racial or ethnic backgrounds (63%), people from LGBTQ+ communities (63%), and people with intersectional identities, such as Latinx and LGBTQ+ (60%).
Moving forward with youth-centered policy design
The evidence is clear: we must not take an all-or-nothing approach to legislating social media. Last year, Utah became the first state in the nation to bar young people under 18 from having a social media account without parental consent. (Utah has since amended those laws after multiple lawsuits.)
Generative AI is often seen through the prism of the lessons learned from our collective social media experience over the last 20 years. It is the latest wave of technology that will undoubtedly change our society, and we need protections for young people if we are to avoid repeating history.
In fact, in a report released on June 3, Hopelab, Common Sense Media, and The Center for Digital Thriving at Harvard Graduate School of Education found that, compared with cisgender/straight young people, LGBTQ+ young people were more likely to say generative AI’s impact on their lives over the next 10 years will be mostly negative, and less likely to say it will be mostly positive. These responses are an early warning sign for all of us.
The research found that while 51% of young people ages 14-22 have used generative AI at some point, most commonly for getting information and brainstorming, only 4% use it daily. As with social media, generative AI is seen as a mixed bag: 41% of young people believe it will likely have both positive and negative impacts on their lives in the next 10 years.
One of the biggest challenges for policymakers is the speed at which generative AI is evolving and spreading through our daily lives. That speed far outpaces normal policymaking timelines, which often stretch over years. The time to act is now, before we are too far down this road.
As young people begin to experiment more with generative AI, it is incumbent on federal and state leaders to do something that still isn’t happening with social media legislation: center young people’s full, lived experiences and co-create solutions with them, not impose solutions onto them. The window to act is upon us.