AI systems work best when you can rigorously specify what questions they are supposed to answer and what the right answers are. They fail spectacularly when you cannot.
We can't rigorously specify what these systems would need to do to replace human workers on a massive scale. Everyone treats that problem as a detail, but it is arguably the core problem that needs to be solved.
Then there are fundamental efficiency limitations that make it unlikely we can make these systems trustworthy and reliable at scale without fundamental technology improvements, probably all the way down to the basic math we use to run them.
AI
- mister_coffee
- Posts: 2708
- Joined: Thu Jul 16, 2020 7:35 pm
- Location: Winthrop, WA
- Contact:
Re: AI
"The Orwellian Blacklist, Why the Pentagon's War on Anthropic Just Collapsed in Court…
The federal judiciary just slammed the brakes on one of the most blatant authoritarian overreaches of the current administration. In a massive and stinging rebuke, a federal judge has officially blocked the Pentagon from enforcing its outrageous attempt to label the American AI company Anthropic a "national security risk." This was not just a standard government contract dispute. It was a coordinated, state-sponsored attempt to destroy a private company simply because it refused to abandon its ethical guardrails.
To understand the sheer depravity of what the Department of Defense attempted to do, you have to look at the boundaries Anthropic was trying to protect. The company established two fundamental red lines for the military use of its Claude AI system. First, it completely prohibited the technology from being used to conduct mass domestic surveillance on American citizens. Second, it refused to allow its models to power fully autonomous lethal weapons without a human operator in the loop.
Those are not radical demands. They are basic constitutional safeguards and fundamental principles of international human rights. But to an executive branch that views any limitation on its power as a direct threat, those ethical boundaries were treated as an act of treason. When Anthropic refused to cave to the pressure, the administration did not just walk away from the negotiating table. They decided to weaponize the full weight of the federal government to make a public example out of the company.
The Secretary of Defense formally designated Anthropic as a "supply chain risk." This is an extreme classification historically reserved for hostile foreign adversaries and state-sponsored saboteurs. By applying that label to a leading American tech firm, the Pentagon essentially blacklisted the company, cutting off its access to federal contracts and forcing private defense partners to instantly sever ties. It was a targeted financial assassination designed to cost the company billions of dollars in revenue.
Fortunately, U.S. District Court Judge Rita F. Lin saw exactly what this was and refused to let it stand. In granting Anthropic's request for an injunction, she delivered a blistering reality check to the government. She correctly identified the blacklisting as illegal First Amendment retaliation, stating plainly during the proceedings that the administration's actions looked like a deliberate attempt to financially cripple the company just because it spoke publicly about the risks of its own technology.
The judge's written ruling cut straight to the core of the democratic crisis we are currently facing. She wrote that there is absolutely nothing in the law that supports the "Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." That single sentence perfectly encapsulates the managed collapse of our institutions. The state is actively trying to redefine ethical dissent as a national security threat.
The staggering hypocrisy of the entire operation only makes it worse. While the administration was publicly crucifying Anthropic and claiming their internal safety standards made them a risk to the nation, the military was quietly relying on the exact same AI model. Multiple reports have indicated that the Department of Defense has continued to extensively use Claude in its ongoing, unauthorized military operations in Iran. They do not actually fear the technology. They only fear the accountability.
This manufactured crisis was also designed to send a chilling message to the rest of Silicon Valley: comply with the war machine and the domestic surveillance state, or the government will systematically destroy your business. We are already seeing the fallout. While Anthropic is fighting for its corporate life in federal court, other major AI competitors have eagerly stepped in to sign contracts, signaling a terrifying willingness to abandon ethical frameworks for a piece of the massive Pentagon budget.
This ruling is a massive victory, but we must recognize exactly what we are celebrating. We are celebrating the fact that a single federal judge managed to temporarily stop the executive branch from obliterating a private company for refusing to build tools to spy on the American public. The legal guardrails held this time, but the administration has shown us exactly what their blueprint looks like. They view ethical boundaries as sabotage and constitutional rights as inconveniences.
We cannot allow this behavior to be normalized. The government cannot be permitted to use its massive procurement power as a weapon to punish private entities that refuse to facilitate authoritarianism. The court has given Anthropic a temporary shield, but the broader war over who controls the future of artificial intelligence and civil liberties is just beginning. We must stay awake, because the moment we look away, the state will undoubtedly try this again."

Re: AI
https://hub.jhu.edu/2026/02/23/will-ai- ... -obsolete/
This discussion suddenly reminded me of the response from coal miners when they first faced the growth of alternative energy and the likelihood of coal dropping into irrelevance. The miners, despite generations of poor pay, poor health, and poor living standards, balked at discussions of retraining that would have allowed them not only to adapt but to raise their standard of living. Change is too often attacked first and described in terms that generate fear.
AI is a game changer, but it still makes mistakes, again and again, and shouldn't be revered as omnipotent or all-knowing. The fact that it is currently being built by people who are using it for power is the perfect demonstration of why governments must balance that use with strict, enforced laws. That balance isn't available in our current government, which is an example of how the imbalance could wipe out more jobs.
- mister_coffee
- Posts: 2708
- Joined: Thu Jul 16, 2020 7:35 pm
- Location: Winthrop, WA
- Contact:
Re: AI
Good points, but there is zero evidence that the current track AI is on is going to lead to massive job losses.
If you are optimistic and generous, the current generation of AI tools can increase productivity by about twenty percent. In practice, in the real world, it is more like ten percent. That isn't the order-of-magnitude increase you'd need to see to produce large-scale displacement of the workforce.
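To put rough numbers on that, here is a back-of-envelope sketch. It assumes output stays fixed and that productivity gains translate one-for-one into headcount, which is obviously a simplification, but it shows the scale of the gap:

def headcount_reduction(productivity_gain: float) -> float:
    # Fraction of staff theoretically redundant if output is held constant:
    # the same work now takes 1 / (1 + gain) of the people.
    return 1.0 - 1.0 / (1.0 + productivity_gain)

for gain in (0.10, 0.20, 10.0):  # 10%, 20%, and a genuine order-of-magnitude gain
    print(f"{gain:.0%} gain -> roughly {headcount_reduction(gain):.0%} fewer workers needed")

A ten or twenty percent gain works out to something like nine to seventeen percent of headcount on the most aggressive reading. You only get into mass-displacement territory with the kind of tenfold gain nobody is actually measuring.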
Yes, many companies are reducing staff "because of AI". What's really going on is a bit different:
1. A lot of companies overhired after COVID dissipated, and they are covering up that mistake by saying they are using AI to reduce staff.
2. Some companies are reducing staff and using the savings to increase investment in AI.
3. Some companies are counting their chickens before they hatch, assuming their AI initiatives will all succeed so they can start layoffs now.
The current generation of AI tools seems to have inherent limitations: they require human intervention often enough that we still need humans in the loop, yet they run unattended well enough, often enough, that they fit poorly with how human attention works. And they fail catastrophically and dangerously often enough that we will have a Chernobyl moment around AI in our near future.