Democracy Now
March 18, 2026

Speeding Up the “Kill Chain”: Pentagon Bombs Thousands of Targets in Iran Using Palantir AI

Quick Read

The US and Israel are leveraging AI systems like Palantir's Project Maven, incorporating Anthropic's Claude, to accelerate military targeting, raising critical ethical and legal questions about civilian casualties and the erosion of oversight.
Palantir's AI (Project Maven with Anthropic's Claude) is central to US/Israeli military targeting, drastically speeding up the 'kill chain'.
The use of AI is linked to a controversial strike on an Iranian girls' school, raising concerns about target misidentification and civilian casualties.
Anthropic's ethical restrictions on its AI's military use led to a Pentagon ban, highlighting a growing conflict between tech ethics and defense strategy.

Summary

The US military, in conjunction with Israeli forces, is employing advanced AI systems, specifically Palantir's Project Maven and Anthropic's Claude model, to dramatically speed up the 'kill chain'—the process from target identification to execution. This technology, used in recent operations in Iran, Gaza, and against Venezuela, can sift through vast amounts of intelligence data in seconds, identifying 'patterns of life' and nominating targets. A major controversy surrounds a US strike on an Iranian girls' school, which killed over 170 people, with investigations ongoing into AI's role in misidentifying the target. Anthropic has since attempted to restrict its technology's use for mass surveillance of Americans and fully autonomous weapons, leading to a significant rift with the Pentagon and a lawsuit against the Trump administration. Critics, including Professor Craig Jones, highlight the ethical and legal challenges, the sidelining of military lawyers who traditionally perform proportionality calculations, and the disconnect between Silicon Valley developers and the real-world consequences of these technologies.
The integration of AI into military targeting fundamentally alters the speed and scale of warfare, potentially reducing human oversight and increasing the risk of civilian casualties. This shift challenges existing legal and ethical frameworks for conflict, creates a 'move fast and break things' mentality in military operations, and exposes a growing divide between tech companies' ethical guidelines and government defense priorities. Understanding these developments is critical for assessing the future of warfare, accountability for civilian harm, and the role of technology in global conflicts.

Takeaways

  • US and Israeli militaries use Palantir's Project Maven, powered by Anthropic's Claude AI, to identify and prioritize targets, reducing processing time from hours to seconds.
  • The Pentagon is investigating AI's role in a strike on an Iranian girls' school that killed over 170 people, suggesting the AI system failed to identify it as a civilian target.
  • Anthropic's attempt to restrict its AI for mass surveillance of Americans and autonomous weapons led the Trump administration to declare it a 'supply chain risk', cutting off government contracts.
  • The 'kill chain' is a bureaucratic process from target designation to execution, now significantly accelerated by AI, raising legal, ethical, and political questions.
  • Military lawyers, historically responsible for proportionality calculations and civilian protection advice, have been sidelined or replaced by 'yes-men' in the current administration, further eroding accountability.
  • Silicon Valley firms like Microsoft, Google AI, and OpenAI are deeply involved in military contracts, profiting from technologies with significant real-world consequences.

Insights

1. AI's Role in Accelerating the 'Kill Chain'

The US and Israeli militaries utilize AI systems like Palantir's Project Maven (incorporating Anthropic's Claude) to process vast amounts of intelligence data—signals intelligence, mobile phone tracking, internet traffic—in seconds. This identifies 'patterns of life' and nominates targets, drastically reducing the time it takes to move from intelligence gathering to strike execution. Admiral Brad Cooper states these tools turn 'processes that used to take hours and sometimes even days into seconds'.

Admiral Brad Cooper: 'These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react. ...advanced AI tools can turn processes that used to take hours and sometimes even days into seconds.'

2. Controversy Over AI's Role in Civilian Casualties

A US strike on an Iranian girls' school, which killed over 170 people, is under investigation for potential AI involvement. The AI system, if used, failed to identify the school as a civilian target, despite a wall separating it from a nearby military compound and clear signs of civilian activity. Professor Craig Jones highlights this as an 'intelligence failure' where an area marked as a military compound was not updated, leading to a 'truly tragic' outcome.

Professor Craig Jones: 'It looks like it's just an intelligence failure that an area marked on a map...has been marked as a military compound.' and 'It looks like a combination of AI and human intelligence failure to produce something truly tragic.'

3. Anthropic's Rift with the Pentagon Over Ethical AI Use

Anthropic attempted to restrict the use of its Claude AI model for mass surveillance of Americans and fully autonomous weapons. This led to a significant conflict with the Trump administration, which ordered federal agencies to stop using Anthropic products and declared the firm a 'supply chain risk'. This marks the first time the Pentagon designated a US company as such, prompting Anthropic to sue.

Transcript: 'A major rift has emerged between Anthropic and the Pentagon after Anthropic moved to restrict the use of its technology for mass surveillance of Americans and for fully autonomous weapons.' and 'Defense Secretary Pete Hegseth declared the firm a supply chain risk, effectively cutting it off from government contracts.'

4. Sidelining of Military Lawyers and Civilian Protection Initiatives

The Trump administration has actively sidelined military lawyers, who traditionally provide legal advice on operations and conduct proportionality calculations to minimize civilian harm. Heads of legal units in the Navy, Army, and Air Force were fired and replaced with 'yes-men'. Concurrently, civilian casualty protection initiatives, such as the 'center of excellence', were eliminated, signaling a reduced interest in avoiding civilian harm.

Professor Craig Jones: 'Trump...after he's sworn in in his second term is to fire the heads of those legal units...and then further down the ranks he fired and replaced them with yes-men.' and 'Trump...presses control-delete on day one and gets rid of the civilian center because they're not interested in avoiding civilian casualties.'

Bottom Line

The 'first AI war' designation is debated, with historical uses of AI in warfare, including in Gaza, preceding the current Iran conflict. The key innovation now is AI's role in intelligence analysis and target nomination at unprecedented speeds.

So What?

This reframes the narrative from a sudden emergence of AI warfare to an acceleration and expanded application of existing capabilities. It implies a longer, less visible history of AI integration into military systems, making the current ethical dilemmas a culmination rather than a novel problem.

Impact

Further investigation into the historical evolution of AI in military applications could provide a more comprehensive understanding of its impact and inform future policy discussions.

Anthropic's ethical objection to military use is primarily technical and geographically limited, not a blanket moral stance against lethality. The company objects only to mass surveillance of US citizens, and argues the algorithms are 'not quite good enough' given current error rates, rather than opposing killing per se.

So What?

This reveals a nuanced and potentially self-serving ethical framework within some tech companies, where objections are based on technical readiness and national scope rather than universal moral principles. It suggests that once AI is 'good enough' and applied to non-US citizens, these companies might not object.

Impact

Advocacy groups and policymakers should push for universal ethical standards for AI in warfare that transcend national borders and technical readiness, focusing on fundamental human rights and international law.

The Pentagon's 'AI warfighter strategy' explicitly advocates 'maximum lethality' and operating outside traditional rules of engagement, reflecting a 'move fast and break things' mentality in military policy.

So What?

This indicates a deliberate policy shift away from traditional checks and balances, potentially leading to more aggressive and less accountable military actions. It suggests a disregard for international law and established protocols for minimizing civilian harm.

Impact

International legal bodies and human rights organizations must urgently address the implications of such policies and work to establish robust international regulations and accountability mechanisms for AI in warfare.

Key Concepts

Kill Chain

A bureaucratic mechanism militaries use to go from designating and identifying enemy targets to the process of actually engaging and neutralizing them. AI significantly speeds up this process by automating intelligence analysis and target nomination.

Lessons

  • Scrutinize the ethical frameworks of AI developers, particularly their stated limitations on military use, to understand if they are based on universal moral principles or technical/geographic constraints.
  • Advocate for stronger human oversight and accountability mechanisms in AI-driven military targeting, especially given the demonstrated potential for intelligence failures leading to civilian casualties.
  • Support initiatives that protect and empower military lawyers and civilian protection programs, which are critical for upholding international humanitarian law in an era of accelerated AI warfare.

Quotes


"Our war fighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react. Humans will always make final decisions on what to shoot and what not to shoot and when to shoot. But advanced AI tools can turn processes that used to take hours and sometimes even days into seconds."

Admiral Brad Cooper

"The only justification you could possibly have would be that if we don't do it, our adversaries will do it and we will be subject to their rule of law. So if you decouple this from the support of the military, you're going to have an enormous problem explaining to the American people why it is that we're absorbing the risk of disrupting the very fabric of our society, including the most powerful parts of our society, if it's not because it's about maintaining our ability to be American in the near term and long term."

Alex Karp (Palantir CEO)

"Unlike so many of our traditional allies who wring their hands and clutch their pearls, hemming and hawing about the use of force, America, regardless of what international institutions say, is unleashing the most lethal and precise air power campaign in history... All on our terms with maximum authorities. No stupid rules of engagement. No nation-building quagmire. No democracy-building exercise, no politically correct wars."

Pete Hegseth (Defense Secretary)

Related Episodes

Palestinian Evangelical Analyst REACTS To U.S-Israeli War In Iran!
The Young Turks · Mar 3, 2026

"The Young Turks dissect the US-Israeli war in Iran, alleging it's driven by Israeli expansionist goals, fueled by US political and media subservience, and resulting in devastating civilian casualties and economic fallout, while a Palestinian Christian analyst details the brutal realities of Israeli occupation and humiliation."

US Foreign Policy · Israel-Iran Conflict · Media Bias +2
BREAKING: Israel BOMBS Major Iran Gas Site; Top Mullah ELIMINATED; Iran Vows VENGEACE | TBN Israel
TBN Israel Podcast · Mar 18, 2026

"Israel and the United States have escalated their 'Roaring Lion War' against Iran, striking its largest gas facilities, eliminating key intelligence and military figures, and disrupting missile production, while Iran threatens a broader energy war in the Gulf."

Israel-Iran Conflict · Geopolitics · Military Strategy +2
Trump And Hegseth BUSTED For Iran War LIES!! Tucker Carlson & Joe Kent SLAM Israel’s Aggression
The Young Turks · Apr 10, 2026

"The Young Turks expose alleged lies from the Trump administration and Pete Hegseth about the Iran war, criticize Israel's role in escalating conflicts, and highlight widespread political corruption, while Melania Trump addresses Epstein ties and Trump attacks his conservative critics."

US Foreign Policy · Middle East Conflict · Israel-Palestine Conflict +2
Col. Jacques Baud: What a US Ground Invasion of Iran Would REALLY Look Like
Interviews 02 · Mar 30, 2026

"Colonel Jacques Baud dissects the strategic futility of a US ground invasion of Iran, arguing that current troop levels are insufficient and such an action would backfire, exposing US allies and potentially leading to Iran's nuclearization."

Geopolitics · Military Strategy · US Foreign Policy +2