SIIA - Software & Information Industry Association

05/12/2026 | Press release | Distributed by Public on 05/12/2026 11:22

Balancing AI Safety and Privacy: Why the GUARD Act Misses the Mark

The following post can be attributed to Danny Bounds, SIIA Counsel for Education Policy

Despite deep divisions across Capitol Hill, protecting children online has emerged as one of the few areas of genuine bipartisan consensus. We share this goal and support efforts to ensure the safe and responsible deployment of innovative technologies.

This is a complex issue - one that we have grappled with for years. And it is not an issue that can be solved through a simple policy. As our Child and Teen Privacy and Safety Principles and CHAT SAFE Chatbot Principles reflect, it's critical to look at youth online safety and privacy holistically. Policymaking has an important role but is not magic. Educators, parents, and platforms each have a role to play.

Well-intended measures, however, do not always translate into effective policy. The Guidelines for User Age-verification and Responsible Dialogue Act - the "GUARD Act" - is a prime example of a measure that attempts to solve a complex issue through rigid, technically flawed mandates that ultimately put all users at risk.

The GUARD Act requires covered entities to verify all users' identities in order to prohibit any person under 18 years of age from using an AI chatbot. It introduces sweeping technological bans that would fundamentally compromise the cybersecurity and privacy of Americans' data. The bill explicitly rejects self-attestation, which would result in platforms being forced to require users to hand over highly sensitive personal data - including government-issued IDs, credit cards, or biometric facial scans - just to access digital tools.

Mandating the collection of consumers' personal identity information forces companies to build massive new data repositories. With cybercrime on the rise in the U.S., we should assume these databases will become prime targets for cyberattacks and identity theft. In effect, this bill may undermine the very users it aims to protect, leaving sensitive data exposed to malicious actors.

The GUARD Act also infringes on Americans' First Amendment rights. By attempting to wall minors off from AI interactions deemed too human-like, the GUARD Act engages in unconstitutional, content-based regulation. The Supreme Court has consistently affirmed that minors possess robust First Amendment rights. Burdening these tools with excessive age-verification regulations also unconstitutionally narrows the pathways through which individuals of all ages can safely and anonymously encounter protected ideas.

The GUARD Act classifies almost any generative AI system capable of open-ended prompts as an AI chatbot. The bill's outright ban on minors using AI companions - vaguely defined as chatbots providing adaptive, human-like responses and simulating interpersonal interaction - is deeply concerning. Under this framework, something as helpful as an AI educational tutor using an encouraging, conversational tone could be outlawed. Regulating software based on conversational style, rather than objective technical benchmarks, sets an unworkable precedent.

Protecting youth online is a vital goal, but the GUARD Act is the wrong vehicle to achieve it. Effective AI policy must balance safety with the preservation of constitutional rights, user privacy, and a thriving, competitive marketplace. By imposing overbroad definitions, violating the First Amendment, and mandating the creation of massive cybersecurity vulnerabilities, the GUARD Act does more harm than good. Policymakers should reject this flawed legislation and instead work with industry leaders to develop nuanced, privacy-protective solutions that keep kids safe without stifling America's position as the global leader in software and AI innovation.

SIIA - Software & Information Industry Association published this content on May 12, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on May 12, 2026 at 17:22 UTC. If you believe the information included in the content is inaccurate or outdated and requires editing or removal, please contact us at [email protected]