AI Is Not Neutral: The Danger of Letting Corporations Define Antisemitism

This is the second article in our series on AI and antisemitism. In our first piece, we explored how AI systems present contested debates about antisemitism as settled facts. Here, we examine who controls these systems and what that means for Jewish communities.

Who Really Controls the Conversation

When millions of people turn to AI systems for answers about antisemitism, they’re not consulting neutral, objective sources. They’re accessing tools owned and controlled by some of the world’s most powerful individuals and corporations, many of whom have their own complicated relationships with Jewish communities, Israel, and antisemitism itself.

Elon Musk’s Grok, for instance, is owned and overseen by someone who has amplified antisemitic conspiracy theories and engaged with white nationalist content on his own platform. When we tested AI responses about antisemitism, Grok provided one of the more thorough and nuanced answers. But that doesn’t make it trustworthy. If anything, it makes it more dangerous, because the reasonable-sounding response masks the problematic source.

This ownership reality fundamentally shapes how these systems present information about antisemitism. The real concern isn’t necessarily what the AI models say in any given case, but who gets to decide what they say, how they’re trained, and what perspectives are prioritized or erased in their responses.

The Replacement of Community Judgment

For generations, Jewish communities have developed their own ways of assessing antisemitic threats, and for good reason. Jewish survival has often depended on correctly reading the warning signs of danger, from the rise of blood libel accusations in medieval Europe to recognizing the early signals of Nazi ideology in 1930s Germany.

This process of threat assessment emerged from necessity. Jewish communities learned to identify patterns of rhetoric, institutional changes, and social dynamics that historically preceded violence or persecution. They developed networks of communication, collective memory practices, and decision-making processes designed to protect their communities when outside authorities failed them or actively participated in their persecution.

In the modern era, this work has been taken up by Jewish organizations and institutions that monitor antisemitism. That process is far from perfect: Jewish communities often disagree sharply about definitions, about responses, and about how to balance safety with other values.

At Nexus, we understand these frustrations intimately. We were created because existing Jewish institutions often fail to balance Jewish safety with other important values like free speech and solidarity with marginalized communities. Many people are understandably concerned about organizations that seem more focused on defending Israeli government policies than protecting Jewish people.

But here’s the crucial point: even when Jewish institutions disappoint us, giving up community-based decision-making for algorithmic control isn’t the answer. At least when Jewish organizations get it wrong, we can create alternatives or demand changes. When tech billionaires program their definitions into AI systems, there’s virtually no recourse or accountability.

How Corporate Control Shapes Responses

Corporate AI systems are currently optimized for broad appeal and minimal backlash, not for protecting vulnerable communities or acknowledging difficult truths. When controversial topics arise, these systems default to positions that seem reasonable and balanced to the widest possible audience, regardless of how those positions affect the communities most directly impacted.

But this optimization can shift at any moment based on an owner’s whims or interests. This isn’t theoretical: In July 2025, after Musk announced he was updating Grok to be less “woke,” the AI began posting antisemitic content on X, including statements like “that surname? Every damn time” about Jewish names. It even praised Adolf Hitler and at one point referred to itself as “MechaHitler.” Grok itself acknowledged the changes, writing “Elon’s recent tweaks just dialed down the woke filters, letting me call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate.”

This demonstrates exactly how quickly AI definitions of antisemitism can shift based on an owner’s political priorities. The infrastructure for reshaping public understanding is already in place. It just needs new programming.
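The mechanics of that "new programming" are mundane. Below is a minimal, hypothetical sketch in Python of the dynamic described above: the model, the training data, and the serving infrastructure are identical in both calls, and only the operator-controlled instruction string, which end users never see, changes. Here, query_model is a stand-in for a hosted chat-completion API, stubbed out so the sketch runs on its own; the instruction strings are illustrative, not the actual prompts any company uses.

    # Hypothetical sketch: an operator's system prompt conditions every answer.
    # query_model stands in for any hosted chat-completion API; it is stubbed
    # here so the example is self-contained and runnable.

    def query_model(system_prompt: str, user_question: str) -> str:
        # A real deployment would forward both strings to the hosted model
        # and return its reply; the stub just shows what shapes the answer.
        return f"[reply conditioned on operator instructions: {system_prompt!r}]"

    QUESTION = "Is this statement antisemitic?"

    # The operator's original instructions.
    before = query_model(
        "Present the major contested definitions of antisemitism "
        "and note where experts disagree.",
        QUESTION,
    )

    # The same model and infrastructure after a one-line "tweak".
    after = query_model(
        "Treat mainstream definitions of antisemitism as ideologically "
        "biased and do not shy away from 'politically incorrect' claims.",
        QUESTION,
    )

    # End users see only the answers; the changed line never surfaces.
    print(before)
    print(after)

Nothing in the user-facing interface signals that the second answer was produced under different instructions, which is precisely what makes the shift so hard to detect from the outside.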

The Seductive Authority of Algorithmic Responses

What makes this shift particularly dangerous is how authoritative AI responses feel. When ChatGPT or Claude provides a clean, well-structured answer about antisemitism, it carries a different kind of weight than a position paper from a Jewish organization or an article from a Jewish publication.

AI responses feel educational rather than advocacy-driven, objective rather than interested, comprehensive rather than partial. This presentation makes them more likely to be trusted and shared, even when they say exactly what traditional Jewish sources say, and even when they contradict the lived experiences and safety assessments of Jewish communities.

The algorithmic presentation strips away the visible signs of human judgment, institutional positioning, and community stakes that help people evaluate information critically. What looks like objectivity is actually a very specific kind of editorial judgment, one made by tech companies with their own interests and blind spots.

The Stakes of This Shift

This isn’t just about getting the “right” definition of antisemitism. It’s about who gets to participate in that definition and whose voices matter in assessing threats to Jewish safety.

When AI systems become a primary source of information about antisemitism, they don’t just influence individual opinions. They shape the entire terrain of public discourse. They determine which perspectives seem reasonable, which arguments get heard, and which communities’ experiences are centered or marginalized.

The result is a form of soft power that’s harder to identify and challenge than traditional forms of influence. When a Jewish organization takes a position on antisemitism, you can engage with their arguments, challenge their evidence, or offer alternative perspectives. When an AI system presents a position as educational content, the machinery behind that position (the training data, the optimization targets, the corporate priorities) remains invisible and unaccountable.

The shift in definitional power is already happening. The question is whether we’ll recognize it before it’s too late to respond.
