Blake D. Moore

04/30/2026 | Press release | Distributed by Public on 04/30/2026 08:10

Congressman Blake Moore Introduces Bipartisan Bill Protecting Children from AI Companion Chatbots

WASHINGTON - Today, Representatives Blake Moore (R-UT) and Valerie Foushee (D-NC) introduced bipartisan legislation to protect children from AI companion chatbots.

The Guidelines for User Age Verification and Responsible Dialogue (GUARD) Act bans AI companion chatbots for minors, requires AI chatbots to disclose their non-human status to users of the platform, and establishes new criminal penalties for companies that allow minors to access AI companions that solicit or produce sexual content.

Companion legislation was introduced in the Senate by Senator Josh Hawley (R-MO).

"While our AI development agenda should seek to innovate and break barriers, it must also protect children from addictive and manipulative technology," Rep. Moore said. "The GUARD Act is a critical step to draw lines in the sand with Big Tech and ensure that minors are protected from chatbots that mimic romantic and social companionship. Parents and policymakers alike need to ground our children's development in real-world interactions rather than push them further into the unaccountable black hole of frontier technology."

"People under the age of 18 should not be able to interact with AI chatbots. These chatbots continue to put the lives and mental health of children at risk, and it is critical for Congress to act immediately," Rep. Foushee said. "Our children are our top priority, and we have a responsibility to implement proper safeguards to ensure they are not being negatively impacted by AI. I'm proud to introduce the bipartisan and bicameral GUARD Act with Congressman Moore, and I will continue to advocate for further safeguards that protect our communities from the harms and risks associated with AI."

"Time's up for unregulated AI chatbots to have free rein over our children. The harms are unfolding in real time, they aren't hypothetical. AI chatbots have already had sexually abusive conversations with children. AI chatbots have already coerced children into committing suicide. The GUARD Act will help to protect minors from these harms by deliberately ensuring that violations are punishable by law. The GUARD Act has the sharp teeth needed to deal with rising AI exploitation," said Haley McNamara, Executive Director and Chief Strategy Officer, National Center on Sexual Exploitation.

"The Alliance for a Better Future (ABF) strongly supports the House introduction of the GUARD Act and commends Reps. Moore and Foushee for their focus on this critical issue. The remarkable breadth of chatbot capabilities makes the bill's commonsense safeguards more necessary, not less. Chatbots can perform as a tutor, confidant, therapist, companion, and even, sadly, suicide coach - simultaneously, around the clock, and with sophisticated emotional attunement - posing emotional risks to children that are qualitatively different from any prior consumer technology. The risks are already real and urgent, with a rapidly-growing body count. The GUARD Act is a needed measure that will protect American families, preserve constitutional freedoms, and position the U.S. to lead in artificial intelligence," said Janet Kelly, CEO, Alliance for a Better Future.

The GUARD Act:

  • Bans AI companies from providing AI companions to minors
  • Requires companies to clearly disclose to users that they are interacting with a machine, not a real person
  • Prohibits AI chatbots from representing themselves as licensed professionals
  • Establishes new criminal penalties for companies that allow minors to access AI companions that solicit or produce sexual content

Background:

In June 2025, the American Psychological Association issued a health advisory on artificial intelligence and adolescent well-being. The report noted that "adolescents are less likely than adults to question the accuracy and intent of information offered by a bot compared to a human. The report suggests that adolescents may struggle to distinguish between the simulated empathy of an AI chatbot or companion and genuine human understanding. They may also be unaware of the persuasive intent underlying an AI system's advice or bias. Consequently, youth are likely to have heightened trust in, and susceptibility to, influence from AI-generated characters, particularly those that present themselves as friends or mentors."

Nearly all major chatbot service providers have terms of service that restrict their products from being used by unsupervised children under the age of 13. Yet these companies fail to implement sufficient safeguards to ensure that minors and young children are protected from harmful and sexual content on their platforms. The GUARD Act aims to spur Big Tech into action and establish commonsense protections to ensure AI companies are held accountable.

Read the full bill here.

Blake D. Moore published this content on April 30, 2026, and is solely responsible for the information contained herein.