Edward J. Markey

03/25/2026 | Press release | Distributed by Public on 03/25/2026 18:30

Markey Introduces Legislation to Protect Children from Privacy and Safety Risks Posed by AI Chatbots

Bill Text (PDF) | One-Pager (PDF)

Washington (March 25, 2026) - Senator Edward J. Markey (D-Mass.), member of the Commerce, Science, and Transportation Committee, today introduced the Youth AI Privacy Act, legislation that would require artificial intelligence (AI) companies to implement privacy safeguards in their AI chatbots. In 2025, approximately two-thirds of teenagers reported using AI chatbots, and roughly a quarter reported using them daily. In a couple of tragic cases, teenagers have died by suicide after receiving encouragement or advice from an AI chatbot.

"AI chatbots pose grave new risks to kids' privacy and safety, but Big Tech continues to speak only one language: profit," said Senator Markey. "My bill stops AI companies from using manipulative tricks to keep kids hooked on chatbots, and it imposes critical privacy protections to stop Big Tech from profiting off our young people. Right now, these chatbots can collect a kid's deepest thoughts, feelings, and fears, and then use that information to keep them coming back. You wouldn't let a stranger do that to your child. A chatbot shouldn't get to either. My Youth AI Privacy Act will stop this exploitative behavior and protect children from the growing dangers of AI chatbots."

Emerging evidence clearly suggests that minors are especially vulnerable to the harms of AI chatbots, particularly as companies introduce increasingly manipulative design features and rely on large amounts of children's personal data. The Youth AI Privacy Act would set new privacy standards for these systems, curb the business incentives that drive harmful design choices, and address the ways Big Tech has engineered chatbots to encourage compulsive use among young people.

The Youth AI Privacy Act would establish:

  1. Safe Design Features
  • Disclosure Requirements: AI chatbots must provide clear, repeated notices to minors that the AI chatbot is not a human.
  • Memory Restrictions: AI chatbots may only use recently collected data in personalizing responses to a minor. AI chatbots may not use any other data in delivering a response to the minor.
  • Addictive Features Limitations: AI chatbots cannot include any features that encourage minors' usage of, or time spent on, the AI chatbot, such as push alerts.
  2. Privacy Safeguards
  • Advertising Ban: AI chatbots cannot display advertisements to minors.
  • Prohibition on Training Models on Minors' Personal Data: AI chatbot companies cannot use minors' personal data to train an AI chatbot.
  • Prohibition on Profiling: AI chatbots cannot use minors' personal data to profile a user.
  • Prohibition on Repurposing Minors' Inputs: Companies cannot use minors' AI chatbot inputs for any reason except to provide an output to the minor or to address safety issues in the AI chatbot.
  3. Enforcement: The Federal Trade Commission, state attorneys general, and private plaintiffs are authorized to enforce the legislation.

Senator Markey continues to demand transparency from AI companies about their deployment of AI chatbots. Most recently, in January, Senator Markey wrote to seven major tech companies (OpenAI, Anthropic, Google, Meta, Microsoft, Snap Inc., and xAI) requesting details on how the companies will protect their users from manipulation and exploitation if they plan to integrate advertising into their AI chatbots.

###

Edward J. Markey published this content on March 25, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on March 26, 2026 at 00:30 UTC.