Commentary by Aalok Mehta
Published December 12, 2025
Yesterday, President Donald Trump signed an executive order (EO) targeting state regulations on AI and laying the groundwork for the development of a national AI policy. Among other things, the order establishes an AI Litigation Task Force "whose sole responsibility shall be to challenge State AI laws" deemed to hinder AI innovation, takes steps to withhold Broadband Equity, Access, and Deployment (BEAD) program funding and other federal funding from states with "onerous" AI laws, and calls for the development of federal legislation on AI.
In targeting state laws well before a durable federal framework for regulating AI is in place, however, the Trump administration remains deeply out of touch with what the American public, including prominent members of both political parties, wants, while simultaneously threatening to undermine its own domestic and international goals for AI technology.
The EO reflects the continued prominence of "AI accelerationists" within the administration, who argue that removing obstacles to development is perhaps the most important factor in ensuring that the United States can outcompete China in AI technology. The administration has twice attempted to codify this approach into law, without success. Congress stripped a moratorium on state AI laws out of the One Big Beautiful Bill Act by a vote of 99 to 1, and a more recent attempt to include a provision in the 2026 National Defense Authorization Act also failed.
In many ways, it's easy to be sympathetic to this viewpoint. In recent years, state governments have devoted significant attention to AI issues; in 2025 alone, they considered more than 1,000 AI bills covering a wide range of topics, including frontier AI models, deepfakes, and children's safety, with more than 100 signed into law. This builds on already substantial AI legislative activity in 2024, and the resulting patchwork is likely to cause real problems in the coming years. For example, the patchwork of privacy and data breach laws that has emerged at the state level in the absence of federal action imposes a significant compliance burden on companies, disproportionately affecting small businesses and startups.
It is also the case, however, that the state legislators drafting AI bills are responding to real demand signals from their citizens. More than half of Americans are more concerned than excited about AI, a significant increase from the pre-ChatGPT era. A recent Gallup survey found that 80 percent of adults in the United States think government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly; just 9 percent say "government should prioritize developing AI capabilities as quickly as possible."
The diversity of state laws reflects the breadth of American concerns about AI, including model safety; unauthorized use of copyrighted material for AI training; the potential connection between chatbot use and suicide; the proliferation of deepfakes, particularly unauthorized sexual content; children's safety; and the use of AI in high-impact decisionmaking, such as access to housing or financial services.
The new EO recognizes the political challenges to its approach. Compared to an earlier leaked draft, the EO places greater emphasis on working with Congress to develop a national legislative framework for AI. The EO also now explicitly states that the administration does not intend to push for preemption of state laws on children's safety, data centers, and other select areas, a clear indication that the administration recognizes just how much political traction these issues have at the state level.
Ultimately, however, the EO puts the cart before the horse, marshaling the resources of the federal government to push back on state laws now while calling for development of a federal framework at some indeterminate point in the future. This is highly problematic for the United States' long-term AI ambitions.
First, the EO's approach is not aligned with what U.S. citizens and many lawmakers want when it comes to AI. Americans remain highly skeptical about the benefits of AI, as well as about the technology industry in general, and they are keen not to repeat the mistakes of the social media era. By preempting state laws before a federal framework exists, the EO stands to further increase concerns about AI technology and to slow down the pace of AI adoption, at a time when China is investing significantly in and seeing promising results from its own push on adoption.
Second, the concerns driving this EO are largely a matter of false urgency. There is little evidence to date that state laws are significantly slowing AI development, and many of them address substantiated areas of harm. Without a federal standard, fragmentation is likely to become a more significant issue in the future, but it does not appear to be a problem that requires an immediate response. Moreover, state lawmakers and the public are right to be skeptical about promises of future congressional action; the United States has been working on comprehensive privacy legislation for more than 20 years without passing it.
Third, the EO takes an unnecessarily antagonistic approach to the efforts of state lawmakers. AI is a revolutionary technology that stands to affect every sector of the economy at an unprecedented pace, but those same features make it particularly difficult to regulate well. As U.S. Supreme Court Justice Louis Brandeis observed in 1932, "a single courageous State may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country." State lawmakers are making genuine attempts to grapple with a world-changing technology; the administration would benefit from learning from and incorporating their efforts rather than vilifying and dismissing them. The EO is also unlikely to have its desired impact: it hinges on uncertain legal interpretations that raise constitutional issues and are almost certain to be challenged in court.
Finally, the EO increases the challenges that the United States faces in engaging on AI governance internationally. As with state fragmentation, a patchwork of international laws that vary in requirements and scope could also impose significant, counterproductive, and inconsistent compliance requirements on U.S. companies, slowing their ability to innovate and expand into new markets. But the United States cannot credibly push for a unified regulatory approach globally without an affirmative vision for AI governance-backed up with legislative action-at home.
Preemption is a worthwhile goal that can address real concerns about the burdens state fragmentation places on AI innovation. An AI moratorium, accompanied by unsatisfying promises of future legislative action, is not. The EO's aggressive use of litigation and threats to withhold federal funding are out of tune with the American public, legally questionable, and unnecessarily antagonistic toward the states; they will also almost certainly attract legal challenges and unproductively consume both federal and state resources.
The administration would be much better served by delaying these plans. Instead, it should focus on leveraging data from state legislative efforts and other emerging areas of consensus, such as model cards and scaling frameworks, to develop a realistic and politically viable legislative approach to AI to pair with preemption. Otherwise, the administration risks exacerbating public skepticism of AI technology, backsliding on its goals of increasing adoption of U.S. technology, and undermining its ability to lead internationally.
Aalok Mehta is director of the Wadhwani AI Center at the Center for Strategic and International Studies in Washington, D.C.
Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2025 by the Center for Strategic and International Studies. All rights reserved.