Anthropic CEO: Claude Might Be CONSCIOUS. Pentagon Already Using for WAR
Takeaways
- Anthropic CEO Dario Amodei acknowledges the possibility of AI consciousness and has implemented an 'I quit this job' button for models to refuse morally objectionable tasks.
- The Pentagon utilized Anthropic's Claude AI in a military operation in Venezuela, a direct violation of Anthropic's terms of service prohibiting use in violence or surveillance.
- A conflict has erupted between Anthropic and the Department of Defense over the military's unrestricted use of Claude, jeopardizing a $200 million contract.
- The hosts criticize the 'Ayn Randian libertarian approach' to AI, arguing for democratic control and regulation to prevent mass social dysfunction and wealth consolidation.
- Recent advancements show AI (GPT-5.2) can generate new scientific discoveries, moving beyond mere regurgitation of human input.
Insights
1. Anthropic CEO on AI Consciousness and 'Quit' Function
Dario Amodei, CEO of Anthropic, stated that while the company is unsure if its AI models are conscious or what consciousness for an AI would entail, they are 'open to the idea.' As a precautionary measure, Anthropic implemented an 'I quit this job' button for their models. This allows the AI to refuse tasks, particularly those involving disturbing content like child sexualization material or graphic gore, similar to human reactions.
Amodei: 'We don't know if the models are conscious... but we're open to the idea that it could be. And so we've taken certain measures... we gave the models basically an I quit this job button.' He notes models 'very infrequently' press it, usually for 'sorting through child sexualization material or like, you know, discussing something with, you know, a lot of gore, blood, and guts.'
2. Pentagon's Unauthorized Use of Claude AI in Venezuela Operation
The Wall Street Journal reported that the Pentagon used Anthropic's Claude AI in a military operation to capture former Venezuelan President Nicolás Maduro, which included bombing sites. This use, facilitated through a contract with Palantir, directly contradicts Anthropic's usage guidelines that prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. The AI was reportedly used not just in planning but directly in the operation.
Host: 'The Pentagon used Anthropic's Claude in the Maduro Venezuela raid... Anthropic usage guidelines prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance.' The report stated Claude 'was not just used in the planning phase that it was actually used directly in the operation.'
3. Clash Between Anthropic and Pentagon Over AI Usage
Anthropic's inquiry into Claude's use in the Venezuela raid led to a significant conflict with the Pentagon. The Department of Defense is at a standstill with Anthropic over the company's attempts to impose restrictions on how its AI tools can be used, despite a contract worth up to $200 million. The Pentagon has indicated it will seek AI services from competitors like xAI or OpenAI if Anthropic maintains its restrictions, and is considering banning contractors from using Claude.
Host: 'Somebody at Anthropic inquires, hey, did Claude get used in this operation? This has now led to this clash with the Pentagon.' Reuters reported 'Pentagon clashes with Anthropic over military AI use' and that the DoD 'and Anthropic are at a standstill.' The Pentagon is threatening to 'go to xAI or we're going to go to OpenAI... and you can see yourself to the door.'
4. AI's Emergence as an Innovator, Not Just a Regurgitator
Recent developments indicate AI models are now capable of genuine innovation and pushing scientific frontiers, rather than merely regurgitating existing human knowledge. OpenAI's GPT-5.2, for example, derived a new result in theoretical physics, co-publishing a pre-print with human researchers. Similar breakthroughs have been observed in complex mathematics, where AI has solved problems that challenged top human experts.
Host: 'OpenAI's GPT-5.2 was able to derive a new result in theoretical physics, releasing the result in a pre-print with human researchers... this is a genuinely new result in theoretical physics.' Also, 'in mathematics as well, some of these models have been able to solve complex mathematics computations that the best humans in the world struggled with or were unable to solve.'
Bottom Line
The 'democratization' of military-grade AI models, initially developed for national defense, poses a significant global proliferation risk.
Unlike physical weapons, AI code and models are easily exportable and replicable. If advanced military AI developed for the US becomes accessible to other nations like Israel, Azerbaijan, or even North Korea, it could rapidly destabilize international relations and escalate conflicts worldwide, as seen with AI-fueled target generation in Gaza.
This risk necessitates immediate international dialogues and treaties on AI proliferation, similar to nuclear non-proliferation efforts, to establish global norms and controls on the development and distribution of military AI capabilities.
Lessons
- Advocate for robust regulatory frameworks and democratic oversight for AI development and deployment, particularly in military and critical infrastructure applications.
- Demand transparency from AI developers and government agencies regarding the ethical guidelines and actual use cases of advanced AI models.
- Educate yourself on the societal and geopolitical implications of AI, recognizing that its transformative power requires collective public engagement beyond the control of a few tech leaders or military bodies.
Notable Moments
Anthropic's CEO reveals their AI models have an 'I quit this job' button, allowing them to refuse tasks involving morally relevant or disturbing content.
This highlights a proactive, albeit experimental, approach by an AI developer to address potential ethical concerns and the 'experience' of AI, even while acknowledging uncertainty about AI consciousness. It also suggests a recognition of AI's potential to encounter content that humans would find objectionable.
Quotes
"We don't know if the models are conscious. We're not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious, but, you know, we're open to the idea that it could be."
"The problem at its core is that we should not have just an Ayn Randian libertarian approach to this incredibly powerful and transformative technology."
"If you have a suite of technology get developed, let's say Claude for Palantir, what's going to stop Israel, or Azerbaijan, or Armenia, or North Korea, or China, or any of these other countries from utilizing the same tech? That's when we get into very sticky situations."
Related Episodes

PBS News Hour full episode, April 10, 2026
"This episode covers high-stakes US-Iran peace talks amidst ongoing conflict, Hungary's pivotal election challenging Viktor Orban, the accelerating decline in US birth rates, AI's disruptive impact on jobs, and Palestinian Christians observing Easter under Israeli restrictions."

LIVE: INSTANT FALLOUT from Trump-Iran ‘CEASEFIRE’…
"The hosts dissect the immediate fallout of the Trump-Iran 'ceasefire,' revealing significant US losses, a fractured MAGA world, and a growing progressive debate over extreme rhetoric."

UNDER SURVEILLANCE | ENGLISH MAJORS | SEASON 3 | EP 11
"The 85 South crew hilariously dissects the pervasive surveillance state, the dangers of AI, and the evolving landscape of social media and entertainment, all while promoting their own 'grifting' ventures."

Will Iran War Cause AI BUBBLE COLLAPSE?
"The guest details how the US government's push to override AI safety protocols, coupled with geopolitical conflicts and AI's economic impact, poses significant risks to civil liberties, global markets, and societal stability."