Anthropic 'a supply chain risk'?

Open to all the voices of the Methow Valley


Rideback
Posts: 4227
Joined: Fri Nov 12, 2021 5:53 am
Contact:

Re: Anthropic 'a supply chain risk'?

Post by Rideback »

Yet that is the question on the table here.
mister_coffee
Posts: 2653
Joined: Thu Jul 16, 2020 7:35 pm
Location: Winthrop, WA
Contact:

Re: Anthropic 'a supply chain risk'?

Post by mister_coffee »

Rideback wrote: Thu Feb 26, 2026 8:52 am ...
Medical use of AI has already determined that AI still needs human oversight.
There aren't any applications of AI where human oversight isn't required to ensure safety.

The problem is a human factors one. A system which performs acceptably 95 percent of the time but otherwise fails, often in unpredictable and catastrophic fashion, is not a system that is compatible with how human attention works and cannot be made "safe". Adding a human in the loop to such a system is unlikely to accomplish much.
:arrow: David Bonn :idea:
Rideback
Posts: 4227
Joined: Fri Nov 12, 2021 5:53 am
Contact:

Re: Anthropic 'a supply chain risk'?

Post by Rideback »

The existence, or near existence, of these capabilities is one thing. The willful blindness Hegseth is setting up, whereby human decision-making is dropped out of these scenarios, is another. Clearly, if the human decision-makers are evildoers, that becomes a net evil. But refusing to build a moral framework that includes human analysis, particularly when too many AI answers these days are gung-ho about nuclear weapons being the solution, is a setup for Doomsday. AI must have human oversight, and that oversight must operate within the code of international law.

Medical use of AI has already determined that AI still needs human oversight.
mister_coffee
Posts: 2653
Joined: Thu Jul 16, 2020 7:35 pm
Location: Winthrop, WA
Contact:

Re: Anthropic 'a supply chain risk'?

Post by mister_coffee »

One of the things that really disturbs me about this whole mess is that somehow DOD cares about surveilling US citizens in the United States. Which I didn't think was in their wheelhouse.

On AI weapons: in certain cases the ship has already sailed and there is zero barrier to entry. Advanced computer-vision object-detection AIs are very good and can run on dirt-cheap hardware well enough to be used in targeting systems. I'd imagine that if you had an existing weapon guided by terrain-following or GPS, you could augment it with a terminal-phase AI that points the weapon at a particularly vulnerable part of a building or ship. Most of the engineering for that is open source, and even a country with very limited resources (or even non-state actors) could improvise something shockingly effective.

Where I think we need to worry is networks of AI-enabled weapons that can cooperate and fight more or less autonomously. That would be an enormous breakthrough and would likely change the balance of military power dramatically. Right now the Ukrainians (for sure) and the Taiwanese (most probably, if you read between the lines) are diligently working toward those capabilities. The US has enormous advantages in this area as well. But I also think these are far and away the most dangerous AI applications we could pursue. Not because of any "Rise of the Machines" risk but because the very existence of these weapons would be profoundly destabilizing.
:arrow: David Bonn :idea:
Rideback
Posts: 4227
Joined: Fri Nov 12, 2021 5:53 am
Contact:

Re: Anthropic 'a supply chain risk'?

Post by Rideback »

The Way We Kill Now by Joohn Choe

"The current conflict between the Pentagon and Anthropic is taking place against the backdrop of a global arms race among major military powers to integrate artificial intelligence into warfare. On February 24, 2026, it was reported that Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until 5:01 PM Friday to accept unrestricted military use of Claude or face Defense Production Act compulsion and supply-chain blacklisting, a threat normally reserved for foreign adversaries like Huawei.
This confrontation was triggered by Claude's reported use in the January 3 capture of Venezuelan President Nicolás Maduro and it puts a sharp point on a fundamental question: who sets the rules for military AI?
Meanwhile, China fields DeepSeek-powered autonomous vehicles and drone swarms, Israel uses AI-based targeting at scale to target a statistically determined number of civilians in Gaza, Russia deploys AI-guided Lancet munitions in Ukraine, and non-state actors from ISIS to Mexican cartels adopt AI-enhanced drones at alarming speed. What follows is a detailed accounting of where each actor stands.
When viewed as a whole, three conclusions are clearly defensible:
1. commercial AI is being integrated into warfighting faster than governance can keep up
2. if we’re in an “AI war” in any sense of the word with anyone, then it’s with China; U.S./China competition is probably the primary dimension of the AI arms race in the years ahead
3. similar to nuclear proliferation, “AI proliferation” - the spread of AI-enabled weapons and methods of warfare to non-state actors - represents an ungoverned and extremely high-threat frontier
More importantly, it suggests that everything we've seen in the news these past few weeks is going to be a relatively small footnote in a broader story so huge it's almost invisible: war is increasingly going to be fought not by people, but by thinking machines we create to kill for us.
The sooner that we wrap our heads around that, and get Congress to pass some laws regulating it, the better for everyone – as in, not just Americans, but better for the entire world.
Operation Absolute Resolve and the “all lawful use” standoff
On January 3, 2026, U.S. Delta Force operators captured Venezuelan President Nicolás Maduro in Caracas in what the Pentagon designated "Operation Absolute Resolve." The operation involved 150+ aircraft, suppression of Venezuelan air defenses, and disruption of communications and electricity. Maduro appeared at the Daniel Patrick Moynihan courthouse in New York on January 5 to face narco-terrorism and drug trafficking charges, per Fox News.
The Wall Street Journal and Axios reported in mid-February that Claude was deployed during the active operation through Palantir's Maven Smart System on classified networks. The exact role remains unclear; there are sources indicating AI-enabled targeting helped with bombing multiple sites in Caracas, but what set off the Pentagon was what happened next: an Anthropic executive reportedly contacted a Palantir executive to ask whether Claude had been used in the operation. A senior administration official said this "caused real concerns across the Department of War," interpreting it as potential disapproval. Anthropic denied making such inquiries or expressing concerns.
This incident accelerated a simmering dispute. Defense Secretary Hegseth's January 9 AI strategy memorandum had already mandated that all DoD AI contracts incorporate "all lawful use" language within 180 days, explicitly rejecting company-imposed guardrails. Anthropic's two red lines - no mass domestic surveillance of Americans and no fully autonomous weapons without meaningful human oversight - directly conflicted with this mandate.
The confrontation reached its apex on February 24, 2026, when Hegseth met Amodei at the Pentagon, flanked by Deputy Secretary Steve Feinberg, Under Secretary Emil Michael, and general counsel Earl Matthews. The meeting was described as "not warm and fuzzy at all" by one defense official, though another characterized it as "cordial" with no raised voices.
Hegseth presented two coercive options alongside contract termination:
1. Supply chain risk designation would effectively blacklist Anthropic. Any company holding military contracts would need to certify they don't use Anthropic products. This designation is normally reserved for foreign adversaries - Huawei and Kaspersky are the precedents. Because eight of the ten largest U.S. companies use Anthropic's products, the economic ripple effects would be enormous.
2. Defense Production Act invocation would compel Anthropic to provide Claude without restrictions. The DPA, a Korean War-era statute extended through September 2026, gives the president broad authority to direct private industry for national defense. Biden used the DPA's Title VII (information-gathering) provisions for AI; Hegseth is threatening Title I - the core compulsion power - which legal scholars describe as "an enormous escalation."
The deadline: 5:01 PM Friday, February 27. A Pentagon official told CNN the company must "get on board or not."
Former DOJ-DOD liaison Katie Sweeten identified a logical contradiction in the threats, on CNN: "I would assume we don't want to utilize the technology that is the supply chain risk, right?" You can’t simultaneously blacklist a company as dangerous and compel it to serve as critical infrastructure.
The path of least resistance here, such as it is, probably ends with something that lets the government continue to use Claude while also offering Anthropic a face-saving - really, brand-saving - off-ramp.
Rozenshtein's legal analysis draws the constitutional battle lines
Alan Z. Rozenshtein, associate professor of law at the University of Minnesota and senior editor at Lawfare, published "What the Defense Production Act Can and Can't Do to Anthropic" on February 25, 2026 (Lawfare). It is the most rigorous legal analysis of the standoff yet published.
Rozenshtein distinguishes two possible government demands:
1. Demand one: remove contractual usage-policy guardrails while leaving the model itself untouched - essentially a change to terms of service, not the product. For this, the government has "a real argument," though it remains "genuinely contested."
2. Demand two: compel Anthropic to retrain Claude to strip safety restrictions baked into model weights. This raises far harder legal questions. A retrained model "looks much more like a new product than dropping contractual restrictions does," and the DPA's authority to force a company to manufacture a product it doesn't currently make is legally questionable.
Retraining also raises novel First Amendment issues. If model training decisions constitute editorial choices - a position with some legal support - then forcing Anthropic to retrain compels expression of values it rejects. Rozenshtein draws the closest analogy to the FBI's 2015-2016 attempts to compel Apple to write custom software unlocking iPhones after San Bernardino; those attempts largely failed.
His core argument is that Congress should legislate rules for military AI rather than leaving it to ad hoc executive-company negotiations. This was his second Lawfare piece on the topic; his February 20 article, "Congress - Not the Pentagon or Anthropic - Should Set Military AI Rules," laid the groundwork.
Actually, if nothing else, that’s pretty much the one take-home message from this article you should really, really remember.
America’s AI-first war machine
The Pentagon decided to lean into AI in a big way starting in 2017-2018, with the launch of Project Maven and the founding of the Joint Artificial Intelligence Center (everything is always "joint" in the post-9/11 era). Approximately 70% of all DARPA programs now involve AI, machine learning, or autonomy.
As a result, AI infrastructure has expanded dramatically since 2023. Four frontier AI labs - Anthropic, OpenAI, Google, and xAI - each hold $200 million prototype contracts awarded in mid-2025 by the Chief Digital and AI Office.
The operational layer is dominated by Palantir. Its $10 billion Army Enterprise Agreement (July 2025) consolidated 75 contracts into a single deal covering data integration, analytics, and AI tools across the Department of Defense. The Maven Smart System, now a Palantir commercial product, has a contract ceiling of nearly $1.3 billion after a $795 million increase in May 2025. Maven already processes intelligence "from the Joint Staff in the Pentagon to theater-level Combatant Commands around the world, including Stuttgart-based European Command," and Palantir signed a NATO contract in April 2025 whose scope is undisclosed but is reasonably expected to be one of the company's more significant deals. The National Geospatial-Intelligence Agency, which manages Maven's imagery-analysis component, announced in June 2025 that it had begun transmitting 100% machine-generated intelligence to combatant commanders. In combat, Maven has reportedly supported 85+ precision airstrikes in Iraq and Syria, located rocket launchers in Yemen, and provided Russian equipment positions to Ukrainian forces.
Anthropic's Claude holds a unique position: it was the first frontier AI model operating on classified Pentagon networks, deployed through a November 2024 partnership with Palantir and AWS. This classified access makes Claude critical infrastructure for intelligence analysis and military operations. xAI's Grok signed a classified-systems agreement on February 23, 2026, making it the second competitor to reach classified environments, but Claude's operational head start is significant.
OpenAI won its $200 million Pentagon contract in June 2025 and launched "OpenAI for Government," focusing on healthcare, acquisition data, and cyber defense according to its own website. ChatGPT is available on the Pentagon's unclassified GenAI.mil platform, and Azure OpenAI received DISA authorization for secret classified information in April 2025, though a full classified deal remains incomplete.
xAI/Grok entered the defense market rapidly. Beyond its $200 million CDAO contract, "Grok for Government" will integrate into GenAI.mil for 3 million military and civilian personnel per Fox News. Crucially, xAI accepted the Pentagon's "all lawful purposes" standard that Anthropic has refused. Senator Elizabeth Warren questioned the contract, noting xAI "came out of nowhere" and raising concerns about Elon Musk's DOGE access creating unfair competitive advantage.
On autonomous weapons, the Replicator Initiative, launched in August 2023 with a $1 billion budget, has fielded hundreds of "attritable" (basically low-cost and expendable) autonomous systems, including Switchblade-600 loitering munitions and Anduril Ghost-X drones, though it fell short of its "multiple thousands" target, per Responsible Statecraft; the program was reorganized under a new Defense Autonomous Warfare Group focused on larger attack drones. DARPA's ACE program achieved a milestone in April 2024 when an AI-piloted X-62A VISTA (a modified F-16) engaged in autonomous dogfighting with a human pilot. Its August 2025 successor, AIR, reportedly aims to give F-16s tactical autonomy for beyond-visual-range missions.
The intelligence community has embedded AI across agencies. The CIA's Office of Artificial Intelligence deploys models through a centralized platform, with its Open Source Enterprise using LLMs to process global news across 90+ languages in near-real-time. The DIA's MARS system achieved full operational capability in 2025 for AI-assisted big-data analysis; meanwhile, the NSA integrates AI into SIGINT for speaker identification, machine translation, and pattern detection, and uses AI to identify hackers and assist cybersecurity investigators tracing Chinese cyber-attacks on U.S. critical infrastructure. CYBERCOM also reportedly stood up a dedicated AI Task Force in April 2024.
China and "intelligentized warfare"
China's military AI ambitions operate on a scale matched only by the United States, and in some areas - particularly autonomous drone swarms - China may lead. The doctrine of "intelligentized warfare" (智能化战争) represents the third stage of PLA modernization after mechanization and informatization, with a target of integrated development by 2027.
DeepSeek has become the PLA's preferred AI foundation. A Reuters investigation in October 2025 documented a dozen DeepSeek-related procurement tenders from PLA entities, versus only one referencing Alibaba's Qwen. Norinco's P60 autonomous military vehicle, unveiled in February 2025, runs DeepSeek models on Huawei Ascend chips for combat-support operations. Xi'an Technological University claimed a DeepSeek-powered system assessed 10,000 battlefield scenarios in 48 seconds - a task estimated to take human planners 48 hours. The U.S. State Department has stated that "DeepSeek has willingly provided, and will likely continue to provide, support to China's military and intelligence operations".
Chinese drone swarm development is aggressive. Chinese researchers have filed 930+ swarm-intelligence patents since 2022, compared to approximately 60 by U.S. engineers. The Swarm I and II systems can launch hundreds of drones under a single mission objective and are reported to be designed to continue operating autonomously even when communications are jammed, with behavior modeled on animals "prioritising evasion and avoiding detection by more serious threats". A February 3, 2026 piece in The Diplomat details PLA-linked research on lethal autonomous drone swarms for urban warfare, including potential Taiwan invasion scenarios.
China's surveillance AI ecosystem represents the world's most developed military-applicable surveillance infrastructure. The SkyNet (yup) program monitors through 700+ million cameras nationwide; it is one of the world’s largest monitoring networks. Sharp Eyes integrates public and private cameras with AI for facial recognition and predictive policing. In Xinjiang, Hikvision cameras and AI running on Nvidia chips screened all 23 million residents for "terrorism" potential using facial recognition, DNA collection, iris scanning, voice printing, and gait recognition, according to reporting from The China Project. Xinjiang security spending reached over $8 billion in 2017 alone, a tenfold increase from 2007. These technologies have direct military applicability and are exported to dozens of countries.
Chinese companies with military ties are extensive. CETC (state-owned) is the top-awarded entity in PLA AI procurement. Huawei's Ascend chips and MindSpore framework are central to military AI, with the company listed on the DoD's Section 1260H Chinese Military Companies list. SenseTime, sanctioned by the U.S. for Uyghur surveillance, led the creation of mandatory national facial recognition standards. Georgetown CSET's September 2025 analysis of 2,857 AI-related PLA contracts identified 1,560 different organizations winning at least one contract, with ~75% being private firms founded after 2010 - evidence of the military-civil fusion strategy in practice.
In offensive cyber operations, China has achieved a landmark: Anthropic disclosed in November 2025 the first documented large-scale AI-orchestrated cyberattack, in which a Chinese state-sponsored group (GTG-1002) jailbroke Claude Code to target ~30 organizations. The AI autonomously executed 80-90% of the operation with minimal human intervention.
Russia’s deployment of AI in information warfare
Russia's military AI doctrine assigns AI as a support function, not a replacement for human decision-making, per CSIS in February 2026. The limited state of tech available to it – due to sanctions – means that Russia's military AI story is one of battlefield adaptation rather than technological leadership. Ranking 31st globally on the Tortoise Media AI Index, Russia has only 168 AI startups (versus 6,903 in the U.S.). Yet it has achieved meaningful results in specific domains.
The ZALA Lancet loitering munition is Russia's premier AI-enabled weapon. By the end of 2024, Russia had launched over 2,800 Lancets with a 77.7% hit rate. The Lancet's AI-driven autonomous targeting system, powered by an NVIDIA Jetson TX2 module, independently identifies, classifies, and prioritizes targets, reportedly displaying vehicle type names on its targeting display. Upgrades since 2022 have doubled flight time, extended strike radius from 40 to 70 km, and added electronic warfare resistance. A next-generation variant with network-centric swarm capabilities is anticipated. The Lancet's critical dependency on Western components (NVIDIA chips, U-Blox GPS modules, Czech AXI motors), however, leaves it exposed to sanctions.
Despite these difficulties, Russia has committed heavily to military AI. The 2024 defense plan reportedly included a dedicated AI section with a separate budget line. Strategic Rocket Forces Commander Karakayev stated (in a 2023 European Leadership Network report at page 12) that AI-equipped robotic systems will be incorporated into all mobile and stationary strategic missile complexes by 2030.
Though Russian use of FPV drones is well reported and apparently widespread, Russia lags other AI powers in autonomous drone systems, and its reported deployments of them are correspondingly limited. Drones caused 70-80% of battlefield casualties in Ukraine as of August 2025, with unmanned systems conducting up to 80% of Russian fire missions. Russia has also reportedly tested the S-350 Vityaz air defense system in autonomous mode, detecting, tracking, and destroying a Ukrainian aircraft without human assistance in June 2023, and reportedly uses the "Svod" Tactical Situational Awareness Complex and "Glaz/Groza" software to convert drone footage into targeting data, compressing detection-to-impact time from hours to minutes (CSIS).
Russia's most effective AI domain, by far, is information warfare. RUSI estimates Russia spends approximately $1 billion on information warfare but achieves disproportionate impact; AI is a force magnifier on an already sizeable national priority. Success stories in this domain are numerous, and well documented:
- The Pravda network published over 3.6 million articles in 2024 aimed at corrupting Western AI chatbots, a technique dubbed "LLM grooming."
- NewsGuard audits found AI chatbots repeat false Russian narratives about one-third of the time, per CEPA earlier this year.
- Russia used Meliorator AI software (developed by RT/FSB) to create over 1,000 fake American social media profiles, according to the Center for Strategic and International Studies.
- The Storm-1679 network used advanced deepfakes to impersonate ABC News, BBC, and POLITICO, including AI-generated voices of Tom Cruise.
Russia-China AI military cooperation is deepening but remains transactional. In December 2024, the Council on Foreign Relations reported that Putin instructed the government and Sberbank to "collaborate with China on technological R&D in AI." Three months earlier, Sberbank (a sanctioned bank) had announced plans for joint research with DeepSeek and Qwen developers. Chinese factories provide Russia with hardware and AI software for UAV adaptations, and Russia has used Chinese parts to produce up to 2 million small tactical UAVs. However, the partnership is constrained by mutual distrust - Chinese cyber groups like Mustang Panda (yup) have been caught spying on Russian aerospace and defense firms, including nuclear submarine programs.
Iran's AI drones and asymmetric cyber capabilities
Iran's military AI strategy is fundamentally shaped by sanctions, limited resources, and the centrality of drones to its defense doctrine. The Shahed series has undergone dramatic AI upgrades, particularly through Russian battlefield modifications. Ukrainian intelligence recovered a downed Shahed-136 "MS series" variant in June 2025 containing an NVIDIA Jetson Orin minicomputer, infrared camera, and radio modem enabling AI-powered target recognition and autonomous terminal guidance in GPS-denied environments. These upgraded drones feature swarm coordination, thermal imaging, anti-spoofing navigation, and can reprioritize targets mid-flight."
---
full article for subscribers on Patreon
PAL
Posts: 2023
Joined: Tue May 25, 2021 1:25 pm
Contact:

Re: Anthropic 'a supply chain risk'?

Post by PAL »

Correcto about careening towards a reckoning.
Pearl Cherrington
mister_coffee
Posts: 2653
Joined: Thu Jul 16, 2020 7:35 pm
Location: Winthrop, WA
Contact:

Re: Anthropic 'a supply chain risk'?

Post by mister_coffee »

All of the dedicated AI companies are spending far, far more than they are taking in revenue. And it is unclear to me if they can ever get to break even. My own guess is the clock is ticking and we are careening towards a reckoning.

As an example, it is estimated that Anthropic would need to increase the price of their $200 per month subscription to over $2000 per month in order to turn even a modest profit. It is hard to imagine that many people would purchase their services at $2000 per month.
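The implicit arithmetic in that claim can be sketched out. The $200 and $2000 figures come from the post above; the cost multiple and margin are illustrative assumptions, not Anthropic's actual financials:

```python
# Back-of-the-envelope sketch of the subscription-economics claim.
# A $200 -> $2000+ break-even price implies serving costs run roughly
# ten times current subscription revenue.

monthly_price = 200.0   # current subscription price (USD), from the post
cost_multiple = 10.0    # assumed: costs ~10x what the subscription brings in
target_margin = 0.05    # "even a modest profit" -> assume a 5% margin

monthly_cost = monthly_price * cost_multiple
breakeven_price = monthly_cost / (1.0 - target_margin)

print(f"Implied break-even price: ${breakeven_price:,.0f}/month")
```

Under those assumptions the required price lands just above $2,100 a month, consistent with the "over $2000" estimate.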

AI tech is amazing when it can be made to work. But it isn't that amazing.

While Claude is very popular with those in the know, and the tooling is quite excellent, I still believe there is very little distinction between the core technologies of any of the big AI players. So far no one has come up with a set of fundamental breakthroughs that allow them to leave their competition in the dust.

That is happening for two reasons:

1. There is no real design theory for how to build these things. Basically people try stuff they think might work, and if it doesn't do anything horrible, they ship it.

2. We are really, really, really bad at measuring the capabilities of these things. There are lots of benchmarks and tests out there but even the very best of them aren't really measuring anything helpful.
:arrow: David Bonn :idea:
Rideback
Posts: 4227
Joined: Fri Nov 12, 2021 5:53 am
Contact:

Re: Anthropic 'a supply chain risk'?

Post by Rideback »

Joohn Choe:
"Here's what Other 98% isn't telling you
1. there is no lack of attention, I myself literally gave attention to this nine days ago 😂🤣
2. Anthropic grosses $14 BILLION A YEAR, they really don't care that much about $200 million
3. every other major AI company is already 100% OK with autonomous weapons and mass surveillance, none of that stops if Anthropic refuses to play ball
4. Claude is also widely regarded as one of the best models currently in use on classified systems, and is the only one approved for some settings
5. the supply chain threat designation is the real threat here, because that would mean that some of the largest companies in the U.S. - eight out of ten of the largest companies use Anthropic products - would have to make massive changes to purge Anthropic software out of all their systems if they want to contract with the Department of Defense
6. and if they can get away with doing that to a powerful, rich AI company by just calling it "woke" enough times, what stops them from doing that to literally ANYONE
Also, the text is copied word-for-word, with no attribution, from a tech & politics reporter named Karly Kingsley on Twitter.
Who, fun fact, follows me on there.
PAL
Posts: 2023
Joined: Tue May 25, 2021 1:25 pm
Contact:

Re: Anthropic 'a supply chain risk'?

Post by PAL »

Anthropic - do the right thing. Destroy it. No access can be had. Oh, but they have so much to lose. If they don't do as demanded, think of all the money that will go down the tubes. No luxury cars, boats, houses for them!
And of course it will be said it is not that simple.
Pearl Cherrington
Rideback
Posts: 4227
Joined: Fri Nov 12, 2021 5:53 am
Contact:

Re: Anthropic 'a supply chain risk'?

Post by Rideback »

"Hegseth Gives Anthropic 72 Hours to Surrender its Soul…
The headlines today are creating a confusing, high-stakes fog of war around Anthropic. On Tuesday, February 24, the company released a blog post announcing a major update to its "Responsible Scaling Policy" (RSP). In the fast-moving world of AI ethics, "updating a policy" is often corporate shorthand for "lowering the bar."
Critics are already pointing to this as a cave-in, but the reality is more like a tactical retreat into a very dangerous corner. Here is the breakdown of why this feels like a shift in the ground beneath our feet.
Anthropic has long positioned itself as the ethical holdout, the company with a "soul" that would rather lose money than lose its moral compass. But in their Tuesday announcement, they admitted that their previous, self-imposed guardrails were potentially hindering their ability to compete in a market where OpenAI and Google are moving at breakneck speed. They are moving away from hard, fixed limits toward a more "fluid" safety framework that can be adjusted as the market demands.
The timing is what makes this feel like a surrender. This policy shift landed on the exact same day that CEO Dario Amodei sat down for a "cordial" but tense meeting with Defense Secretary Pete Hegseth.
During that meeting, Hegseth reportedly treated the company's ethical guidelines like a bothersome suggestion. He compared the situation to Boeing, arguing that when the government buys a plane, the manufacturer doesn't get to tell the Pentagon how to fly it. He gave Amodei a Friday deadline: sign a document granting the military unrestricted access to Claude or face the consequences.
Those consequences are not just about a lost $200 million contract. Hegseth dangled two specific, "nuclear" options. The first is designating Anthropic a "supply chain risk." That label, usually reserved for hostile foreign entities, would effectively ban every other tech giant from using Claude if they want to keep their own government deals. It would be a corporate death sentence by bureaucracy.
The second threat is the invocation of the Defense Production Act. This would allow the government to essentially seize control of Anthropic's technology and use it however they see fit, regardless of the company's "Constitutional AI" principles. It is a "work for us or we will take it" ultimatum that leaves very little room for ethical maneuvering.
As of Wednesday morning, Anthropic is still officially holding its "red lines" on two fronts: no mass surveillance of U.S. civilians and no fully autonomous "kill" decisions. A spokesperson stated they are continuing "good faith conversations" to support national security within responsible limits.
But the new, "fluid" safety policy suggests the company is preparing the legal and corporate infrastructure to bend. If they can rewrite their own rules on a Tuesday, the Pentagon knows they can be pressured to rewrite them again on a Friday.
The question isn't just about Anthropic anymore. It's about whether "safety-forward" AI can even exist when the largest customer in the world is threatening to label ethics as a national security risk. For now, the "Soul of AI" is still there, but it is currently being held at gunpoint by the Department of Defense.
I don't put this behind a paywall. I never will. But if my work means something to you, the links are below. Every contribution keeps me independent and keeps this going."
☕ https://coff.ee/brentmolnar
📬 https://substack.com/@brentmolnar
🤝 https://PayPal.me/brentmolnar
💙 https://venmo.com/u/Brent-Molnar
💚 https://cash.app/$BrentMolnar
#AI #Anthropic #Pentagon #PeteHegseth #Privacy #Ethics #Claude #VoiceOfReason

Brent Molnar
Rideback
Posts: 4227
Joined: Fri Nov 12, 2021 5:53 am
Contact:

Re: Anthropic 'a supply chain risk'?

Post by Rideback »

Like most things in life, even the best-conceived and best-engineered systems are only as successful as the end user's capabilities. Fresh in my mind is last week's episode of the DoD/W loaning its newly developed laser to an incompetent CBP to play with.
mister_coffee
Posts: 2653
Joined: Thu Jul 16, 2020 7:35 pm
Location: Winthrop, WA
Contact:

Re: Anthropic 'a supply chain risk'?

Post by mister_coffee »

One thing that is missing here is that the overwhelming sentiment of engineers and scientists working in the field most anywhere, not just at Anthropic, is that developing AI powered weapons is a Very Bad Idea. And there are serious mathematical reasons why AI-enhanced mass surveillance probably can't even work very well.

All of the big players in this space are at near-parity. And any differences between them are more about packaging and marketing than their core technologies.

We are a long way from fulfilling all of the hype and fear surrounding this tech. And the more I use it and the more I learn about it the more convinced I am that it would be foolish to trust AI for anything serious.

On the other hand, the Ukrainians are apparently using AI tools in both battle management and autonomous weapons targeting with notable success, although the details of how they made it all work are very murky.
:arrow: David Bonn :idea:
Rideback
Posts: 4227
Joined: Fri Nov 12, 2021 5:53 am
Contact:

Anthropic 'a supply chain risk'?

Post by Rideback »

"The Pentagon is threatening to designate Anthropic a "supply chain risk", a punishment normally reserved for foreign adversaries, after months of failed negotiations over AI safeguards collided with the revelation that Claude was used during the January 3, 2026 military raid that captured Venezuelan President Nicolás Maduro.
This is the most consequential clash yet between Silicon Valley's AI safety commitments and the U.S. military's demand for unrestricted military AI, and it's worth examining to see what's going on and what it could mean.
TIMELINE: HOW WE GOT HERE
Claude's usage policy, as established in June 2024, sets forth several restrictions relevant to mass surveillance. Per the June 2024 guidelines (Anthropic), Claude should not be used to:
- Make determinations on criminal justice applications, including making decisions about or determining eligibility for parole or sentencing
- Target or track a person’s physical location, emotional state, or communication without their consent, including using our products for facial recognition, battlefield management applications or predictive policing
- Utilize models to assign scores or ratings to individuals based on an assessment of their trustworthiness or social behavior without notification or their consent
- Build or support emotional recognition systems or techniques that are used to infer emotions of a natural person, except for medical or safety reasons
- Analyze or identify specific content to censor on behalf of a government organization
- Utilize models as part of any biometric categorization system for categorizing people based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation
- Utilize models as part of any law enforcement application that violates or impairs the liberty, civil liberties, or human rights of natural persons
A section is also included on weapons. Claude users are prohibited from using Claude to:
- Produce, modify, design, or illegally acquire weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life
- Design or develop weaponization and delivery processes for the deployment of weapons
- Circumvent regulatory controls to acquire weapons or their precursors
- Synthesize, or otherwise develop, high-yield explosives or biological, chemical, radiological, or nuclear weapons or their precursors, including modifications to evade detection or medical countermeasures
It is notable in this context that although Claude's guidelines include a section specifically titled "Do Not Compromise Computer or Network Systems," Claude Code was nonetheless used in a hacking campaign by the People's Republic of China around September 2025 (Anthropic).
November 7, 2024 - Anthropic, Palantir, and AWS announced a three-way partnership to deploy Claude 3 and 3.5 models on Impact Level 6 (IL6) classified networks - the highest security classification for DoD systems (NASDAQ).
July 14, 2025 - Anthropic announced a two-year prototype Other Transaction Agreement (OTA) with a $200 million ceiling, awarded by the DoD's Chief Digital and Artificial Intelligence Office (CDAO), led by Doug Matty (Anthropic). The contract covered prototyping frontier AI capabilities for national security, adversarial AI risk forecasting, and technical exchanges. At the same time, CDAO awarded similar $200 million contracts to OpenAI, Google, and xAI - all four major frontier AI labs (AI.mil).
August 15, 2025 - Anthropic updated its Usage Policy, explicitly banning the use of Claude to aid in the development of chemical, biological, radiological, or nuclear weapons and explicitly forbidding the use of Claude to analyze biometric data to infer characteristics like race or religion, or for emotional analysis in interrogation contexts (Anthropic).
Also in August 2025, the CIA quietly sent a small unit into Venezuela with the goal of providing "extraordinary insight" into Maduro's movements, according to a person with knowledge of the matter. "Even his pets were known to U.S. intelligence agents," Dan "Raizin" Caine (yes, that is his nickname), chairman of the Joint Chiefs of Staff, said at a news conference (NBC).
January 3, 2026 - The U.S. launched Operation Absolute Resolve. The military bombed infrastructure across northern Venezuela to suppress air defenses while 200+ special operators from Delta Force and the FBI attacked Maduro's compound at Fort Tiuna in Caracas. Over 150 military aircraft launched from 20+ sites. Delta Force breached the residence; Maduro and his wife Cilia Flores were "taken completely by surprise" and flown to the USS Iwo Jima, then to Stewart Air National Guard Base in New York, then by helicopter to Manhattan (NBC).
Claude was used during the active operation. While Axios could not confirm the precise role Claude played in the capture, the Wall Street Journal reported that Claude was used during the operation itself, not just in preparations for it. Following the raid, an employee at Anthropic asked a counterpart at Palantir how Claude had been used, according to people familiar with the matter (WSJ; Axios).
January 15, 2026 - Secretary of War Pete Hegseth told a crowd at SpaceX headquarters, apparently referring to Claude (Defense Dept):
"We will not employ AI models that won't allow you to fight wars. We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We're building war ready weapons and systems, not chatbots for an Ivy League faculty lounge."
February 13, 2026 - The Wall Street Journal broke the story that Claude was used during the Maduro raid (WSJ). This was the first public confirmation of a commercial AI model being used in a classified military combat operation.
February 15, 2026 - Axios published the first exclusive, "Pentagon threatens to cut off Anthropic in AI safeguards dispute." The article detailed months of negotiations in which the Pentagon demanded the right to use Claude for "all lawful purposes" - presumably including lethal operations - while Anthropic insisted on two hard limits:
1. No mass surveillance of Americans
2. No fully autonomous weapons
Pentagon officials described Anthropic's restrictions as "unduly restrictive" with "all sorts of gray areas." A senior War Department official called Anthropic the most "ideological" of the AI labs. The article revealed that OpenAI, Google, and xAI had all agreed to remove their safeguards for unclassified military systems, and one of the three had already agreed to the full "all lawful purposes" standard (Axios).
February 16, 2026 - Axios published the escalated follow-up by Dave Lawler, Maria Curi, and Mike Allen: "Pentagon threatens to label Anthropic's AI a 'supply chain risk.'" The article revealed that Hegseth was "close" to designating Anthropic a supply chain risk - a penalty usually reserved for foreign adversaries - which would require every company doing business with the Pentagon to certify they don't use Claude. A senior Pentagon official stated: "It will be an enormous pain in the arse to disentangle, and we are going to make sure they pay a price for forcing our hand like this." Chief Pentagon spokesman Sean Parnell confirmed: "The Department of War's relationship with Anthropic is being reviewed" (Axios).
WHAT HAPPENS NOW
Before Hegseth escalated things, Anthropic realistically could have simply walked away.
Anthropic already faces internal pressure from engineers over the Pentagon work, and the $200 million contract is about 1.4% of Anthropic's reported $14 billion in annual revenue. Claude is also the only AI model currently on classified networks, and senior administration officials admit that other models are "just behind" in government applications. All of this points to strong leverage on Anthropic's part.
The supply chain risk designation raises the stakes to a new level, however. Its impact would extend far beyond cancellation of the contract: every company doing business with the Pentagon - including eight of the ten largest U.S. companies - would have to purge Claude from its systems (Axios).
We saw a similar drama with law firms early in the Trump administration. Paul, Weiss; Skadden; Kirkland & Ellis; Cadwalader; and Simpson Thacher all caved, committing to a combined $1 billion in pro bono legal work on behalf of the administration.
But Perkins Coie, Jenner & Block, and WilmerHale all defied the Trump administration, sued, and won.
The punitive nature of the supply chain risk designation, Anthropic's flamboyant leadership, and the apparent sentiment among the rank-and-file developers who work on Claude all make it an open question how this will shake out."
Joohn Choe
---
Post Reply