09/29/2025 | Press release
WASHINGTON - U.S. Senate Democratic Whip Dick Durbin (D-IL), Ranking Member of the Senate Judiciary Committee, and U.S. Senator Josh Hawley (R-MO) introduced a new bipartisan bill today to hold artificial intelligence (AI) companies accountable for harm caused by their systems while allowing companies to continue innovating and developing beneficial AI systems.
"Democrats and Republicans don't agree on much these days, but we've struck a remarkable bipartisan note in protecting children online. Big Tech's time to police itself is over. Kids and adults across the country are turning to AI chatbots for advice and information, but greedy tech companies have designed these products to protect their own bottom line-not users' safety. By opening the courtroom and allowing victims to sue, our bill will force AI companies to develop their products with safety in mind. Our message to AI companies is clear: keep innovating, but do it responsibly. I thank Senator Hawley for joining me in introducing this bipartisan bill, and I look forward to passing it into law," said Durbin.
"When a defective toy car breaks and injures a child, parents can sue the maker. Why should AI be treated any differently? This bipartisan legislation would apply products liability law to Big Tech's AI, so parents-and any consumer-can sue when AI products harm them or their children," said Hawley.
The Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act classifies AI systems as products and creates a federal cause of action allowing products liability claims to be brought when an AI system causes harm. In doing so, the AI LEAD Act ensures that AI companies are incentivized to design their systems with safety as a priority, rather than as a secondary concern behind bringing the product to market as quickly as possible.
Durbin previewed the bill's introduction at a Judiciary subcommittee hearing with testimony from three parents whose children were harmed by AI chatbots.
The bill is endorsed by American Association for Justice, Bria AI, Encode AI, Fairplay for Kids, Issue One, National Center on Sexual Exploitation, Parents RISE!, ParentsSOS, Social Media Victims Law Center, Tech Justice Law Project, The Human Line Project, and Transparency Coalition.
"A strong product liability law incentivizes companies to consider safety throughout the design and development process of AI products; not only when products fail and things go wrong. The LEAD AI act would help protect consumers, promote responsible development and innovation, and build the public trust essential for AI to thrive safely and ethically," said Meetali Jain, Founder and Executive Director of the Tech Justice Law Project.
"At Bria, we believe AI will only earn the public's trust if families and businesses know there are real rules of the road. The AI LEAD Act begins to set those rules, but its greatest value lies in moving beyond punishment after the fact. By creating clear, transparent standards for responsible development, the Act can incentivize every player in the ecosystem to do the right thing. That's how we protect the public while enabling innovation to thrive," said Vered Horesh, Chief AI Strategy Officer at Bria AI.
"We made a critical mistake at the outset of social media by shielding companies from all accountability under Section 230. The same massive companies that took advantage of that shield are now building advanced AI systems. The AI LEAD Act ensures they cannot repeat the mistakes of social media by bringing AI into line with existing product safety law, extending the same standards that already apply to cars, toys, or pharmaceuticals to AI systems," said Adam Billen, Vice President of Public Policy at Encode AI.
"If social media and Section 230 has taught us anything, it is that liability is an essential tool for ensuring that Big Tech builds products that are safe for kids, our national security, and our democracy. Issue One is proud to endorse the bipartisan AI LEAD Act, which sets clear, tailored liability standards for the AI industry that will protect Americans, restore our public trust, and incentivize responsible innovation," said Alix Fraser, Vice President of Advocacy at Issue One.
"Product liability and consumer protection laws have over 120 years of history in the United States, covering everything from car brakes to aspirin at the drug store. Of course product liability should apply to AI systems as the AI LEAD Act does," said Transparency Coalition.
"Families whose children have been harmed on social media and gaming platforms know all too well the consequences of allowing new technology to develop unchecked. We must not repeat the same mistakes with AI. Fairplay applauds the introduction of the AI LEAD Act, which would empower law enforcers and families to hold AI companies accountable if they fail to protect children using their products. Thank you, Sens. Durbin and Hawley, for standing up for kids and families across the country," said Hailey Hinkle, Policy Counsel at Fairplay for Kids.
"As the use of artificial intelligence (AI) continues to grow, keeping Americans safe should be paramount. When people are hurt by AI systems, they must have an opportunity to hold those responsible to account. The AI LEAD Act confirms that victims of dangerous AI systems can seek justice in front of a judge and jury, just as they can with any other product. I thank Senators Durbin and Hawley for championing this bill and look forward to seeing it cross the finish line," said Linda Lipsen, Chief Executive Officer at American Association for Justice.
"Social Media Victims Law Center applauds Senators Durbin and Hawley for their leadership in protecting vulnerable kids from dangerous and deadly AI platforms. Unregulated AI poses a clear and present danger to American kids and the AI LEAD Act provides common sense solutions to protect kids while maintaining American competitiveness in this new technological field," said Social Media Victims Law Center.
For bill text, click here.
For a one-page summary of the bill, click here.
For a section-by-section analysis of the bill, click here.
Durbin has used his role on the Senate Judiciary Committee to prioritize child safety online through hearings, legislation, and oversight efforts. On January 31, 2024, while Durbin was serving as Chair, the Committee held a hearing featuring testimony from the CEOs of social media companies Discord, Meta, Snap, TikTok, and X (formerly known as Twitter). This hearing highlighted the ongoing risk to children and the immediate need for Congress to act on the bipartisan bills reported by the Committee.
Durbin and Hawley also joined forces to re-introduce the bipartisan STOP CSAM Act, which would combat online child sexual abuse material. The bill passed the Judiciary Committee unanimously and awaits action on the Senate floor.
In addition, Durbin's bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act) passed the Senate in July 2024 and was reintroduced in the Senate this year. The legislation would hold accountable those responsible for the proliferation of nonconsensual, sexually explicit "deepfake" images and videos. The volume of "deepfake" content available online is increasing exponentially as the technology used to create it has become more accessible to the public. The overwhelming majority of this material is sexually explicit and produced without the consent of the person depicted.
Earlier this year, the Judiciary Committee held a hearing entitled "Children's Safety in the Digital Era: Strengthening Protections and Addressing Legal Gaps." Durbin's opening statement from that hearing is available here, and his questions for the witnesses are available here.
-30-