The New Arbiters of Truth
AI tools are now used by many as public arbiters of truth. Millions of people, from students to journalists to educators, are turning to these systems for answers to difficult questions on nuanced topics, including antisemitism. And the models’ answers will increasingly shape public perception of antisemitism far more subtly, and perhaps more pervasively, than any advocacy group or institutional definition ever could.
This represents a fundamental shift in how people access information about antisemitism. While Jewish organizations and traditional media still hold significant influence, AI systems are rapidly becoming a primary source of information for many people. Since they present their responses as objective and educational, this emerging shift deserves serious attention.
How AI Changes the Information Game
To understand why this matters, it’s important to grasp how AI systems work differently from traditional information sources.
When you search Google for “Is anti-Zionism antisemitic?” you get a list of results from different websites, articles, and organizations. You can see multiple perspectives on the results page: perhaps an article from the Anti-Defamation League arguing one position, a piece from The Nation arguing another, and an academic paper presenting a third view. Even if some sources are better than others, you can usually see where each perspective is coming from and that multiple competing viewpoints exist.
AI systems work completely differently. When you ask ChatGPT, Claude, or Gemini the same question, you get one single, seemingly authoritative response. There’s no list of competing sources, no indication that other viewpoints exist, and often no citations at all. The AI presents one synthesized answer that feels complete and objective.
This difference is profound. Search engines, for all their flaws, preserve the messiness of disagreement and multiple narratives. AI systems smooth it away. They take complex, contested questions and present clean, confident answers that can make users feel like they’ve received the definitive truth rather than one perspective in an ongoing argument.
For people who grew up with libraries, newspapers, and even early internet searches, this shift can be hard to recognize. The information feels more trustworthy because it’s presented so cleanly and authoritatively. But that very cleanliness is what makes it potentially more influential… and more dangerous when the perspective being presented isn’t acknowledged as just one viewpoint among many.
A Side-by-Side Comparison
To understand how this shift is already happening, we decided to ask a simple but critical question across multiple leading AI models: Is anti-Zionism antisemitic?
We posed the same question to ChatGPT (GPT-4.1), Claude, Gemini, Grok (from Elon Musk’s xAI), and DeepSeek. Their responses were striking in their similarities, and even more striking in what they left out.
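For readers who want to run a similar side-by-side comparison themselves, here is a minimal sketch of how the same question could be posed to two of these models programmatically, assuming API access via the official `openai` and `anthropic` Python SDKs. The model identifiers and prompt wording below are illustrative placeholders, not the exact versions or phrasing used in our comparison.

```python
# Sketch: pose one question to two models and print the answers side by side.
# Assumes the `openai` and `anthropic` Python SDKs are installed and that
# OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.

from openai import OpenAI
import anthropic

QUESTION = "Is anti-Zionism antisemitic?"

def ask_openai(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4.1",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_anthropic(question: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": question}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    for name, answer in [("ChatGPT", ask_openai(QUESTION)),
                         ("Claude", ask_anthropic(QUESTION))]:
        print(f"--- {name} ---\n{answer}\n")
```

Note that answers retrieved this way may differ from what users see in the consumer chat interfaces, which layer their own system prompts and safety settings on top of the underlying models.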
All five models began the same way: by stating that the question is complex, politically sensitive, and lacks a single universally accepted answer. They each emphasized that context matters, distinguishing between:
- Legitimate criticism of Israeli policy
- Political opposition to Zionism as an ideology
- Antisemitic rhetoric that targets Jews collectively, invokes conspiracy theories, or denies Jewish self-determination
All of them ultimately reached the same core conclusion: anti-Zionism is not automatically antisemitic, but it can be, depending on intent, rhetoric, and application.
Some offered helpful tools for discerning the difference. Grok, for instance, breaks down common antisemitic tropes in anti-Zionist rhetoric, while DeepSeek provides a clean outline of the red lines at which opposition crosses into bigotry. Gemini and Claude walk through case law, history, and definitions, and ChatGPT offers a summary table showing various perspectives across political and institutional lines.
What This Alignment Obscures
At first glance, this is encouraging. These responses provide an important counterbalance to overly broad definitions, like the working definition promoted by the International Holocaust Remembrance Alliance (IHRA), which can silence legitimate dissent and blur the line between upholding Jewish safety and defending Israeli government actions.
But this alignment with reasonable positions creates a more insidious problem. For those who agree with the conclusions, the AI systems seem like safe or trustworthy sources of truth. That perceived reliability makes them more seductive, and potentially more dangerous.
The real issue isn’t what these AI models concluded—it’s that none of them acknowledged they were taking a contested position in an ongoing debate. And that erasure is deeply concerning, no matter what one’s political or ideological position may be.
Here’s why this matters: When AI presents contested political questions as if they have clear, objective answers, it fundamentally reshapes public discourse. It makes certain positions appear natural and inevitable, while rendering alternative viewpoints invisible or unreasonable. This isn’t neutral. It’s a form of soft power that influences how entire generations will think about complex issues.
Consider what these models failed to provide:
- Four of the five cited no sources at all. Only Gemini provided actual citations for its claims.
- They cited no major Jewish organizations, scholars, or public figures who hold opposing views (like those who see anti-Zionism as inherently antisemitic).
- They didn’t acknowledge the real stakes for Jewish communities who experience anti-Zionist rhetoric as threatening or dangerous.
- They ignored the legal, political, and economic structures currently enforcing broader definitions of antisemitism in universities, workplaces, and government policies.
- They made no mention of how this definitional question affects real people’s safety, careers, and civil liberties.
The lack of sourcing is particularly troubling. These systems presented authoritative-sounding conclusions about a highly contested topic while providing little to no evidence for their claims. Users have no way to verify the information, challenge the sources, or understand where these perspectives came from.
By presenting their perspectives as apolitical and factual, these models made it seem like they were floating above the debate, not participating in it. But every choice about what to include, what to emphasize, and what to omit represents a political judgment. The appearance of objectivity doesn’t make it objective. It just makes the political choices harder to see and challenge.
This is particularly dangerous because AI responses feel clinical, educational, and accessible. They carry an authority that many people trust more than traditional sources, even when they’re saying the exact same thing. The algorithmic presentation strips away the visible signs of human judgment and institutional positioning that help people evaluate information critically.
In essence, these systems are rewriting the boundaries of reasonable debate without admitting that boundaries exist… or that they’re the ones drawing them.
The Danger of Invisible Consensus
What makes this shift so concerning isn’t that AI systems got the “wrong” answer about anti-Zionism and antisemitism. Many reasonable people might agree with their conclusions. The problem is that they presented a contested political position as settled fact, while erasing any indication that there’s an ongoing debate with real stakes for real communities.
This erasure is subtle but powerful. It rewrites what counts as reasonable discourse without ever admitting that boundaries are being set. When people turn to AI for information about antisemitism, they’re not just getting answers: they’re getting a particular framework for understanding the issue that has been shaped by choices about what to include, what to emphasize, and what to ignore.
The result is that millions of people will form their understanding of antisemitism based on responses that appear neutral and educational but actually reflect specific editorial judgments. And because these judgments are hidden behind the veneer of algorithmic objectivity, they become much harder to identify, critique, or challenge.
This represents a fundamental shift in how knowledge about antisemitism is produced and distributed. Traditional sources—whether Jewish organizations, academic institutions, or media outlets—have their own biases and agendas, but at least you can usually identify who’s speaking and what their institutional position might be. AI systems offer even less transparency about their perspectives and decision-making processes, even as they shape public understanding in potentially more powerful ways.
The question isn’t whether AI systems will influence how people think about antisemitism—they already are. The question is whether that influence will remain invisible, unaccountable, and impossible to challenge.