Ben Ray Luján

12/16/2025 | Press release | Distributed by Public on 12/16/2025 14:24

Luján, Heinrich, Rosen Press Meta on Dramatic Rise in Antisemitism on Their Social Media Platforms and Artificial Intelligence Models

Washington, D.C. - Today, U.S. Senators Ben Ray Luján (D-N.M.), Martin Heinrich (D-N.M.), and Jacky Rosen (D-Nev.) pressed Meta CEO Mark Zuckerberg on Meta's failure to address antisemitic content on its platforms and on the potential for its artificial intelligence models to promote antisemitism. Specifically, the senators highlighted how the rise in antisemitic content can be linked to the changes in community standards and content moderation practices that Meta announced in January. In the letter, the senators also press Meta to disclose its policies and procedures for addressing antisemitism on its platforms and combating hate speech.

"In recent months, there has been a nearly fivefold increase in antisemitic behavior on Meta's platforms. The rise in antisemitic content has been linked to the recent changes in community standards and content moderation practices, which were announced by Meta in January," wrote the senators.

"We are deeply concerned about the increase in hate speech and antisemitic content on Meta's platforms. Platforming antisemitic speech normalizes it and breeds further hateful speech. Additionally, online hateful conduct can, and often does, lead to real-world violence," continued the senators.

"Meta has an outsized ability to combat antisemitism, and we encourage Meta to consider how its policies and practices can be a force to unite communities and foster understanding rather than promote hate speech," concluded the senators.

The full text of the letter is available here and below.

Dear Mr. Zuckerberg,

We write regarding the drastic rise of antisemitic content on Meta's platforms and in its artificial intelligence models. Before Congress, representatives of Meta, including yourself, have asserted Meta's responsibility for ensuring its platforms are not used to "hurt others." During a joint session of the Senate Commerce and Judiciary committees, you stated, "It's not enough to just give people a voice, we have to make sure people aren't using it to hurt people." When Neil Potts, Meta's current Vice President of Content Policy, testified to Congress in 2022 on antisemitism, he said: "We also recognize that bad actors may seek to use our platform in unacceptable ways, and we take our responsibility to stop them seriously as we give people a voice. We want to ensure that they are not using that voice to hurt others."

In recent months, there has been a nearly fivefold increase in antisemitic behavior on Meta's platforms. The rise in antisemitic content has been linked to the recent changes in community standards and content moderation practices, which were announced by Meta in January. This change in policies stands in conflict with the assurances you and others representing Meta made to Congress: that it was Meta's responsibility to ensure its platforms are not weaponized.

We are deeply concerned about the increase in hate speech and antisemitic content on Meta's platforms. Platforming antisemitic speech normalizes it and breeds further hateful speech. Additionally, online hateful conduct can, and often does, lead to real-world violence. Finally, allowing antisemitic content to remain on Meta's platforms while using that same content to train Meta's AI models could lead to those models reproducing antisemitic hate speech and promoting antisemitic conspiracy theories.

With these issues in mind, we request written responses to the following questions by January 7, 2026:

  1. As part of the policy changes announced in January, Meta is no longer automatically removing "less severe" violative content. According to a study of Meta's transparency reports, last year there were 277 million posts correctly taken down that, under Meta's new policies, would be left up.
    1. How does Meta plan to handle the influx of reports of violative content, assuming this number stays static?
    2. Has Meta hired additional reviewers to respond to the expected increases in reports?
    3. Are any additional mechanisms being enacted to respond to reports of violative content?
    4. What percent of posts that would have been removed under Meta's previous policies and are now being left up specifically target the Jewish community?
    5. What actions has Meta taken or does Meta plan to take to improve the timely takedown of reported violative content?
  2. Please describe Meta's current review process, expected response time, and actions Meta takes once content has been reviewed.
  3. In a letter to Senators Luján, Shaheen, Warren, Wyden, and Merkley in November 2024, Meta stated, "our automated systems flag and remove content that violates our policies. AI has improved to the point that it can detect violations across a wide variety of areas, often with greater accuracy than reports from users." Our staff has found examples of clear slurs on Meta's platforms that could be easily detected by keyword search and would not require an AI reasoning model in order to be identified and removed.
    1. Please detail the reasons why Meta has stopped auto-removing clearly violative antisemitic content, despite Meta's assertion that AI can now detect violations with greater accuracy than reports from users.
  4. Staff have found many examples of blatant antisemitism on Meta's platforms (see Appendix) that were reported and reviewed twice. In each case, the review concluded that the content "doesn't go against [Meta's] community standards."
    1. Please confirm that the changes to Meta's content moderation policies announced in January did not include any change to its policies toward antisemitic content or to what content violates its community standards.
    2. Does using antisemitic slurs violate Meta's community standards?
    3. Does using antisemitic slurs only violate Meta's community standards when it includes language to incite violence?
    4. Do antisemitic slurs only violate Meta's community standards when that content is visible to enough people? How many users does an antisemitic slur have to be visible to before Meta determines it violates community standards?
  5. One study found that one week after reporting anti-Jewish hate to Group admins, 76% of content and 90% of accounts promoting anti-Jewish hate had not been removed or hidden.
    1. Does Meta take any steps to take down content, remove Admins, or limit posting in groups where violative antisemitic content is repeatedly posted?
    2. Does Meta take any steps in its content moderation practices to address the unique danger of the proliferation of violent and harmful content, especially anti-Jewish content, within Facebook Groups?
    3. With Meta's policy change to prioritize reviewing and taking down the most visible and most harmful content on its platforms, is Meta taking any steps to address the potential for harmful, violative content to proliferate on less visible parts of Meta's platforms, like private groups?
  6. Is Meta taking any action to prohibit participation in Meta's creator funds, or to remove or suspend the profiles or pages of individuals who repeatedly post antisemitic content that violates Meta's community guidelines?
  7. Does Meta train its AI models on content posted on its platforms?
    1. Does that include private content?
    2. Does that include content posted in private groups?
    3. Does that include content posted in public groups?
    4. What steps will Meta take to ensure antisemitic content that violates community standards is not included in training sets for its AI models?
  8. A recent study of antisemitic content in AI models found that Meta's Llama model is the most anti-Jewish of the major models tested. Given the rapid rise of antisemitic content on Meta platforms, we are concerned Meta's models will become drastically more antisemitic.
    1. Does Meta remove violative content from, or otherwise clean, data sets of posted content before using them to train its AI models?
    2. What specific steps is Meta taking to reduce antisemitism in its AI models?
    3. What metrics does Meta use internally to track antisemitic content in its AI models? Please share these metrics with us.
    4. What percent improvement in these metrics will you commit to over the next 6 months?

We remain deeply concerned by Meta's failure to address antisemitic content on its platforms and by the potential for its AI models to promote antisemitism. Meta has an outsized ability to combat antisemitism, and we encourage Meta to consider how its policies and practices can be a force to unite communities and foster understanding rather than promote hate speech.

Sincerely,

###
