Catholic Scholars Join Anthropic's Fight Against Military AI Demands

A bold alliance of faith and tech takes on the Pentagon. Could this legal clash redefine the boundaries of AI in warfare?

[Image: a cartoon showing a group of people in a room, some holding objects, with birds and a building in the background.]

A group of Catholic theologians and ethicists has backed AI company Anthropic in its legal battle against the U.S. Department of War. The scholars filed a legal brief supporting the firm's refusal to hand over its Claude AI model for military use. Their stance centres on ethical concerns over autonomous weapons and mass surveillance.

The dispute began after the Pentagon demanded full access to Anthropic's technology, threatening severe penalties when the company resisted.

On 24 February 2026, Trump administration Defence Secretary Pete Hegseth gave Anthropic a three-day ultimatum. He ordered the company to grant unrestricted military access to its Claude AI system or face consequences under the Defense Production Act. When Anthropic refused to remove safeguards against autonomous weapons and mass surveillance, the Pentagon took swift action.

It terminated all contracts with the firm and labelled it a 'supply chain risk', barring defence contractors from using its technology. A presidential order also required federal agencies to phase out Anthropic's systems within six months.

The theologians' legal brief argued that AI-driven weapons remove human judgment from warfare, violating Catholic teachings on just war. They stressed that even flawless autonomous systems would lack moral legitimacy because machines cannot exercise true ethical reasoning. The scholars also opposed mass surveillance, citing the Catholic principle of subsidiarity, which rejects excessive centralisation of power as a threat to individual freedom.

Their submission aligned with Anthropic's position but went further, rejecting autonomous weapons outright—even if proven reliable. The brief warned that such technology accelerates military decisions, obscures accountability, and risks enabling totalitarian control.

The theologians' intervention adds moral weight to Anthropic's legal challenge. Their arguments tie AI ethics to long-standing Catholic principles on human dignity, privacy, and the limits of state power. The case now hinges on whether courts will uphold these ethical boundaries against military demands.
