The recent killing of Charlie Kirk, regardless of one's political alignment, has intensified national reflection on the state of our political discourse. Violence against anyone for their beliefs is an assault on democratic values. This moment has sparked rare bipartisan calls to reject incendiary rhetoric and recommit to civil engagement.
Radicalization in politics is not new. But today it is amplified, monetized, and normalized through the very platforms where our public discourse now lives. It feels more commonplace now than at any point in American history. That's not necessarily because it happens more often, but because it's more visible, more immediate, and more inescapable in an era of social media and live-streamed video and audio. We're seeing and hearing things we might never have been exposed to in the past.
This is no accident. Social media algorithms are explicitly designed to maximize user engagement, not accuracy, civility, or truth. The most inflammatory content is rewarded with amplification, regardless of whether it's true, defamatory, or dangerous. This creates a system where extremism is not just tolerated but incentivized. The result is an environment that's not just toxic; it's legally unaccountable.
Section 230 of the Communications Decency Act was originally intended to protect online platforms from liability for user posts. But today it provides near-total immunity to the largest tech companies, even when their own algorithms actively promote harmful, illegal, or even deadly content.
For example, in Gonzalez v. Google, the family of a U.S. citizen killed in the 2015 Paris terrorist attacks argued that YouTube's algorithm actively recommended ISIS content. And yet, courts shielded Google from liability under Section 230. When a multibillion-dollar company can engineer its systems in a way that results in the promotion of extremist propaganda and then disclaim all responsibility, we must ask: What is the purpose of a liability shield that protects this behavior?
Section 230 protections have shielded platforms from accountability even in tragic and preventable cases.
These are not edge cases. They reveal a systemic failure: social media companies face no consequences for design choices that would be unacceptable for other types of companies. Section 230 has become a legal firewall for product decisions that would not pass muster in any other industry.
Congress has an urgent responsibility to reform this law and, at a minimum, to impose enforceable obligations on these companies.
Tech companies argue that any effort to place guardrails on social media amounts to censorship. In reality, they are protecting their bottom line. Reform would threaten the low-cost, high-profit business model that relies on unfettered data extraction and behavioral manipulation.
In every other industry, companies are held accountable for the products they design, especially when harm to children is involved.
News organizations are held to account for what they publish, often in court. Whether it's a private citizen or a powerful public figure, individuals have legal recourse when they believe they've been wronged by the press. Consider the high-profile case of Hulk Hogan, who sued Gawker Media for invasion of privacy and won $140 million in damages, a verdict that ultimately forced the company into bankruptcy. That case underscores a fundamental principle: when media companies cause harm, they can be held liable.
In other industries, many major companies have been held liable for selling defective or unsafe products that led to the deaths of children, resulting in multimillion-dollar verdicts and settlements. IKEA settled for $46 million over dressers that tipped over; Fisher-Price paid a $13 million penalty and undisclosed settlement amounts and was forced to recall Rock 'n Play sleepers tied to over 100 infant deaths; and Evenflo is currently facing multiple lawsuits and investigations for marketing "safe" booster seats despite allegedly having internal data showing a high risk of injury or death.
It is outrageous that parents who have lost a child to suicide because of social media algorithms don't have the same opportunity for justice.
Considering the corrosive impact of online extremism, we must expect more from platforms, from policymakers, and from ourselves. The best way to restore a healthier political discourse, a safer digital environment, and a safer world is to make social media companies legally responsible for the products they design and the harm those products cause. This is not a debate about censorship. It's about accountability. If your business model profits from harvesting Americans' most personal data, you must also bear responsibility when that model causes real-world harm.