The destruction of a girls’ school in Minab, Iran, in a U.S. airstrike that killed more than 160 children has ignited a global controversy over the growing role of artificial intelligence in modern warfare. As investigators probe the incident—now widely believed by officials to have been carried out by U.S. forces—attention is turning to whether algorithmic targeting systems linked to the AI company Anthropic were part of the intelligence pipeline used during the opening phase of the war.
The February 28 strike on the Shajareh Tayyebeh girls’ school occurred during the first wave of U.S. and Israeli attacks on Iranian military infrastructure. Students between the ages of seven and twelve were reportedly in classrooms when the building was struck.
Iranian authorities say more than 160 people were killed, most of them children.
The attack has become the deadliest single civilian casualty event of the war so far and has triggered calls from international organizations for a full investigation.
U.S. officials initially denied deliberately targeting a school.
“We would not deliberately target a school,” U.S. Secretary of State Marco Rubio said after the incident.
However, officials familiar with the investigation have since indicated that U.S. military investigators believe it is likely American forces carried out the strike, though the review has not yet reached a final conclusion.
Anthropic and the Pentagon’s AI Targeting Infrastructure
The controversy has intensified because the Pentagon has dramatically expanded the use of artificial intelligence to accelerate targeting in modern conflicts.
Central to that effort is Project Maven, a U.S. military platform designed to analyze massive streams of surveillance imagery, signals intelligence, and battlefield data.
Through defense contractor partnerships—particularly with Palantir Technologies—the system has reportedly integrated models developed by Anthropic, including versions of its Claude large language model.
The integration allows analysts to rapidly summarize intelligence, correlate surveillance data, and generate potential targeting packages.
Anthropic has acknowledged working with U.S. national-security agencies in classified environments.
At the same time, the company has attempted to draw ethical boundaries around the military use of its technology.
Anthropic CEO Dario Amodei has stated publicly that the company opposes the use of its models in fully autonomous weapons.
“We do not want our models used to autonomously select and engage targets,” Amodei said in a public statement addressing military partnerships.
Despite those assurances, the U.S. military has emphasized that AI systems are becoming essential to modern warfare.
Pentagon officials argue that the enormous volume of surveillance data generated by satellites, drones, and sensors requires automated analysis tools.
A School Located Near a Military Facility
Preliminary reports suggest the Minab school was located several hundred meters from a naval facility belonging to Iran’s Islamic Revolutionary Guard Corps.
Investigators are examining whether the intended target may have been the nearby military complex rather than the school itself.
However, under international humanitarian law, the presence of a military site near a civilian facility does not eliminate the obligation to avoid disproportionate harm to civilians.
The destruction of a functioning school building during class hours has therefore raised serious legal questions.
The Gaza Precedent: Algorithmic Targeting and Mass Civilian Death
The debate over AI-assisted warfare did not begin with the Iran conflict.
It first came into global focus during Israel’s war in Gaza, where investigative reporting revealed that Israeli intelligence used algorithmic systems to generate large numbers of bombing targets.
Among the systems reportedly used were Lavender and Habsora.
These platforms were designed to process massive intelligence datasets and identify suspected militants, rapidly generating targets for Israeli airstrikes.
Investigations by journalists and human-rights organizations found that the systems enabled the Israeli military to produce targets at unprecedented speed.
Critics say the result was one of the most devastating bombardment campaigns of the 21st century.
The Gaza war led to the deaths of tens of thousands of Palestinians, including approximately 20,000 children, according to humanitarian organizations and United Nations agencies.
Human rights groups argued that the speed of algorithmic targeting contributed to the scale of the destruction.
Former Israeli intelligence officials told investigators that analysts sometimes approved AI-generated targets within seconds.
The revelations sparked global concern that artificial intelligence could transform warfare into an industrialized process of rapid target generation and mass bombardment.
The Speed of the AI Kill Chain
Military planners describe the targeting cycle as a “kill chain”—a sequence in which intelligence is collected and analyzed, targets are selected, and strikes are authorized.
AI systems promise to shorten this process dramatically.
By automatically analyzing surveillance feeds and intelligence reports, AI can identify patterns and potential targets far faster than human analysts.
But critics warn that compressing the decision cycle increases the risk of catastrophic errors.
When thousands of potential targets are generated rapidly, analysts may rely heavily on algorithmic recommendations.
If those recommendations are flawed, the consequences can be deadly.
Unanswered Questions About the Minab Strike
For investigators examining the Minab school massacre, a central question remains unresolved: did artificial intelligence play any role in identifying the target that ultimately destroyed the school?
Public reporting confirms that Pentagon intelligence workflows supporting military operations incorporated AI systems built on Anthropic’s models.
However, no investigation has yet determined whether those systems were involved in the specific strike that killed the children in Minab.
If such a connection were established, it would mark one of the first known cases in which a mass civilian casualty event was linked to an AI-assisted targeting architecture.
The case could therefore become a defining test of accountability in the emerging age of algorithmic warfare.