You can’t just kill people
AI and the abstraction of war
Welcome to AI and Our Faith! This is a monthly newsletter in which I offer my best insights and reflections on the ways in which theological thinking can inform the ethical (dis)use of artificial intelligence (AI). Look out for new releases on the 15th of each month!
Earlier this month, on March 4, 2026, The Washington Post published an article highlighting the ways in which Claude, Anthropic’s generative AI model, has been used by the U.S. military to wage war against Iran. A brief excerpt from the article:
As planning for a potential strike in Iran was underway, Maven [a system built by Palantir], powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people. The pairing of Maven and Claude has created a tool that is speeding the pace of the campaign, reducing Iran’s ability to counterstrike and turning weeks-long battle planning into real-time operations, said one of the people. The AI tools also evaluate a strike after it is initiated, the person said.
Evidence suggests that one of the targets struck by the United States military on the first day of the war was the Shajarah Tayyebeh elementary school, which was located next to an Iranian naval base. It is reported that the strike killed at least 175 people.
A few days before the war against Iran began, Pete Hegseth, U.S. Secretary of Defense, demanded “full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic,” or else he would punish Anthropic by designating it as a supply-chain risk, preventing Anthropic’s models from being used by the military’s suppliers. What kind of “full, unrestricted access” was Hegseth so interested in obtaining? According to a statement from Anthropic, they refused to include two use cases in their contracts with the Department of Defense: mass domestic surveillance and fully autonomous weapons. Since then, Anthropic has seen a surge in app downloads, while Hegseth has carried out his threat, prompting lawsuits.
I am writing about these events in my newsletter, not because I think the Internet needs another piece of punditry about a brutal act of violence against children, but because as a theologian of technology, I see these events as a dire warning that our current technological trajectory will lead us towards complete moral bankruptcy.
On some level, I think it’s commendable that Anthropic stuck by its hard lines. It would certainly be abominable if the Department of Defense were granted free rein to militarize Anthropic’s models and jam them into fully autonomous weapons. But a close reading of Anthropic’s statement suggests that the gap between what Pete Hegseth wants and what Anthropic is willing to provide is not that large:
Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.
It is not the case that Anthropic has any philosophical objection to fully autonomous weapons. Rather, their objection is that they cannot currently make weapons that are good enough! From this point of view, the futures that Anthropic and the Department of Defense envision are not so different. Both want to increase the abstraction of war, putting more and more layers between the act of violence and the responsible human.
In Narrative and Technology Ethics, Wessel Reijers and Mark Coeckelbergh observe that sociotechnical systems shape our worldview because they abstract us away from the world of action, where actual, specific people do actual, specific things. A sociotechnical system becomes a “quasi-entity,” something to which we assign agency in our narratives, even though agency ultimately belongs to particular human beings.1
Looking back at the quote from The Washington Post with which I opened this article, we see this kind of abstraction in full force in the very first sentence. “Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance.” Does it really make sense to say that “Maven, powered by Claude” suggested targets? This kind of anthropomorphic language only makes sense if you think of Maven or Claude as concrete entities with actual agency, which they decidedly are not.
It would be as though I flipped a coin to decide whether or not I should kill someone, and after killing them, declared, “The coin decided to kill them!” Of course these AI systems are much more sophisticated than a coin flip, but the principle is the same. Claude doesn’t suggest targets; my coin doesn’t decide to kill people; it is humans who do those things. And from this point of view, the entire idea of a “fully autonomous weapons system” is simply a misnomer. There is no such thing, because some number of actual human beings set the weapons system into motion, intending violence.
What I find most troubling in this pattern of increasing abstraction in war is that it will fundamentally close us off from our basic moral capacities as human beings. In the Gospel of Mark, Jesus opens his ministry by declaring, “The time is fulfilled, and the kingdom of God has come near; repent, and believe in the good news.”2 The Greek word translated here as “repent,” metanoeite, means something like “change your minds.” How are people going to change their minds when increasing layers of technical abstraction allow them to hide their own actions from themselves?

Even the image of Pilate washing his hands3 doesn’t quite capture the pattern I am concerned about. In the Gospel narrative, Pilate knew full well what he was doing, but denied responsibility for his actions. What I am contemplating is a world in which people set up “fully autonomous” weapons systems while thinking to themselves, in all sincerity, that the weapons are “deciding” whom to kill, when at base it is the humans who are responsible. If this does not seem entirely believable to you, consider this: AI tools are already being used to abstract away the act of denying people healthcare. Is using AI to abstract away the act of killing people really that big of a next step?
One of the slogans that the cultural apologists of Silicon Valley love throwing around these days is the idea that “you can just do things.” Case in point:
It is, indeed, much easier to do things if you don’t know what you are really doing! The same is true of killing people. You can’t just kill people. You can’t just do things!
Wessel Reijers and Mark Coeckelbergh, Narrative and Technology Ethics (Springer, 2020), 102, https://doi.org/10.1007/978-3-030-60272-7.
Mark 1:15.
Matthew 27:24.