September 23, 2025
At the Axios AI Summit in DC, leaders from government, media, and tech debated one of the core questions shaping our future: should we allow AI development to move as fast as possible, or should we prioritize guardrails and lean into regulation? The answer is nuanced, demanding a delicate balance between innovation and safety.
The dialogue felt like flipping through a history book while writing the next chapter. Discussions of the "AI race" with China harkened back to the Space Race of the 1950s–70s. In a conversation with journalist Ashley Gold, Senator Ted Cruz took the audience back to the early days of the internet, praising Bill Clinton's "light touch" approach to regulation. These moments may seem like relics, but they are timely reminders of the inflection point AI has reached in our workplaces, schools, healthcare system, government, and society as a whole.
Throughout the event, tech giants, White House officials, Democrats and Republicans, and journalists discussed competing priorities when it comes to AI innovation. Senior White House AI Policy Advisor Sriram Krishnan spoke about the importance of American dominance in AI, emphasizing the need to "beat China" and assert leadership in chip manufacturing.
Cruz also called for AI acceleration, yet he acknowledged safety concerns, sharing the story of a 14-year-old constituent who was victimized by nonconsensual intimate deepfake images online. In response, Cruz advocated for the "Take It Down Act," a law that requires tech companies to remove such images and makes posting this content a felony. This bipartisan bill demonstrates the kind of collaboration and safeguards needed to protect against the real harms made possible by AI.
In a conversation with journalist Ina Fried, Anthropic co-founder and CEO Dario Amodei spoke of the need for transparency from AI companies, discussing practices used by Anthropic to scan the motivations of AI models "like an MRI." Understanding the why and how behind a model's outputs and conclusions is critical as tech developers audit AI and direct it on a responsible path.
Credo founder and CEO Navrina Singh spoke with Fried about equity and bias, highlighting that conversations around AI bias have fallen by the wayside as the pressure to develop, adopt, and implement has intensified. Fried and Singh urged a refocus on this topic, indicating the need for context-specific evaluations of AI, disclosure reporting, and investment in trust and safety.
Senator Mike Kelly also spoke of AI bias, pointing out the unfortunate truth that existing societal biases are already woven into models and may never be fully undone. He compared this trajectory to mass media, where people choose outlets that confirm their own views.
Kelly also shared concerns about the workforce, discussing the potential for massive job disruptions. Congressman Ro Khanna shared those concerns and stressed the need for AI academies, trade schools, and apprenticeships to prepare and upskill young Americans for AI-ready, resilient careers. Khanna called for government intervention and job support, while White House Economic Council Advisor Jacob Helberg disagreed. Helberg shared his view that increased government intervention and job-support programs would undersell the resilience of American workers, and he pointed to the many jobs that could be created as a result of AI.
The Axios AI Summit showed there is space for collaboration between government, tech developers, and society to shape a responsible path forward. These conversations underscore that AI cannot be guided by acceleration or regulation alone. We need both urgency in innovation and care in oversight. Global competition, online safety, bias reduction, and secure job opportunities are all part of the equation.
As I waited for my Lyft to head home from the event, the man standing next to me said it best: "We're living through history. AI is changing life as we know it." And the question remains: how do we set priorities that unlock AI's potential while protecting people from harm?
POSTED BY: Sarah Harper