Last week, a conflict unfolded in the United States that should concern anyone working in technology, and frankly anyone living in a democracy. Not because it was about military contracts, but because it exposed what happens when ethical principles around artificial intelligence have no legal foundation to stand on.
Anthropic, the company behind the AI model Claude and one of the most advanced players in the field, refused to abandon two red lines in its contract with the US Department of Defense: a prohibition on using Claude for mass domestic surveillance of American citizens, and a prohibition on deploying it in fully autonomous weapon systems without human oversight.
The Pentagon demanded unrestricted access — "for all lawful purposes" — and refused to enshrine those limitations in the contract. Not because it plans to use AI for those purposes today, it claims, but because it believes a private company should not dictate how the government uses technology.
The result was dramatic. President Trump ordered every federal agency to immediately cease using Anthropic's technology. Defence Secretary Pete Hegseth designated the company a "supply chain risk to national security" — a classification normally reserved for entities linked to hostile nations such as China or Russia. And on Truth Social, Trump dismissed Anthropic as "Leftwing nut jobs" trying to overrule the military.
The irony that says everything
What elevates this from a matter of principle to political theatre is what happened mere hours later. OpenAI, Anthropic's direct competitor, announced it had struck a deal with the Pentagon — with precisely the same restrictions Anthropic had requested. No mass surveillance. Human oversight for the use of force. The Pentagon accepted those terms without objection.
Let that sink in. The very same red lines the Pentagon spent days denouncing as "woke" and "philosophical" when proposed by Anthropic were accepted without protest from OpenAI. It is difficult to avoid the conclusion that this conflict was at least partly politically motivated.
Why this is admirable
As someone who works with AI models daily and builds a platform around them, I find Anthropic's stance genuinely admirable. Not because it is easy — quite the opposite. This position places Anthropic in a precarious situation. The company risks not only the loss of a large contract, but potentially the loss of customers who do business with the Pentagon and may now be forced to choose.
Yet Anthropic held firm. CEO Dario Amodei wrote: "Threats do not change our position: we cannot in good conscience accede to their request." The company is challenging the supply chain risk designation in court and has pointed out that the Pentagon's two threats are inherently contradictory: one labels Anthropic a security risk; the other treats Claude as essential to national security.
This demonstrates that Anthropic's principles are not a marketing exercise but fundamental values the company will not abandon, even under extreme pressure from the most powerful government on earth.
The real problem: a legal vacuum
But here lies the deeper lesson. Amodei has said it himself repeatedly: he is "deeply uncomfortable" with the fact that these decisions are being made by a handful of companies and a handful of people. The CEO of an AI company should not be the last line of defence against the use of AI for mass surveillance or autonomous weapons. That belongs in legislation.
And that is precisely where the United States is failing. There is currently no federal law that restricts how AI may be deployed or that mandates safety standards for the technology. While Anthropic draws ethical lines voluntarily, there is no legal framework compelling other companies to do the same. The commercial pressure to abandon those lines grows stronger by the day.
Amodei has repeatedly urged the US Congress to develop legislation — transparency obligations, testing standards, clear boundaries. But Congress moves too slowly. AI evolves in months; legislation in years.
Europe has a lesson to offer
And this is where Europe enters the picture. For all its imperfections — and there are many — the AI Act provides at least a legal framework. The EU has drawn lines around biometric surveillance, high-risk AI applications, and transparency obligations. Those lines are not perfect, and they will need to evolve alongside the breathtaking pace of developments in the field. But they exist.
The difference is fundamental. When the French military signs a deal with Mistral AI, Europe's leading player, both parties operate within a legal framework that already defines what is and is not permissible. Mistral does not need to fight its own government single-handedly to defend ethical principles — the law already does that.
In the United States, Anthropic has found itself in the absurd position of having to defend, as a private company, on its own strength and at its own risk, the fundamental rights of American citizens against their own government. That is not a sustainable situation. Not for Anthropic, not for the AI sector, and not for democracy.
But Europe should not be complacent
Here is where the story takes a darker turn, and one that receives far too little attention in public debate.
Anthropic's red line specifically protects American citizens from mass surveillance. But that protection does not extend to European citizens under US law. The Fourth Amendment's protection against unreasonable searches covers US persons; it offers nothing to foreigners outside the United States. Non-US persons can be subjected to extensive surveillance in the interest of national security — and the legal architecture to do so is already firmly in place.
The US CLOUD Act of 2018 requires American companies to hand over data stored anywhere in the world upon receiving a valid US government demand — regardless of where that data resides or what data privacy agreements say. This creates a direct and irreconcilable conflict with the GDPR. Every private contract between a European customer and a US cloud provider is ultimately subordinate to US federal law. Worse still, CLOUD Act demands frequently carry non-disclosure orders: the provider may be legally prohibited from informing the European customer whose data is being accessed.
Alongside the CLOUD Act sits FISA Section 702, the legal basis for the PRISM programme revealed by Edward Snowden. Section 702 authorises mass surveillance of non-US persons located outside the United States, and its 2024 renewal both expanded its scope and extended these powers until 2026. Under it, companies such as Microsoft and Google can be compelled to hand over data on Europeans without a warrant.
The implications are staggering. Any European data held by AWS, Azure, Google Cloud, or any other American provider is legally accessible to the US government. The GDPR and AI Act cannot block this as long as the data sits with an American provider. And with AI capabilities, that data can now be processed at a scale and speed that was previously unthinkable.
The supposed safeguards are fragile at best. The EU-US Data Privacy Framework, the current mechanism governing transatlantic data transfers, is built on a Biden-era executive order. In early 2025, the Trump administration removed three of five members of the Privacy and Civil Liberties Oversight Board — the body overseeing DPF commitments — leaving it without a quorum. The two predecessors of the DPF, Safe Harbour and Privacy Shield, were both invalidated by the European Court of Justice. Microsoft's own chief legal officer in France admitted under oath before the French Senate that the company cannot guarantee EU data is safe from US access requests.
This is not a theoretical risk. It is a structural reality.
Harari warned us — and he was right
In his 2024 book Nexus: A Brief History of Information Networks from the Stone Age to AI, Yuval Noah Harari articulates precisely why this moment is so dangerous. His central argument is one that deserves far more attention than it receives: throughout history, totalitarian regimes have always desired total surveillance, but lacked the capacity to achieve it.
Harari uses the example of Communist Romania, where the dreaded Securitate and its network of informants kept the population under constant watch. Any neighbour, friend, or even relative could be an informant. Yet even such a police state could never monitor every waking moment of every citizen; the bureaucratic machinery simply was not up to the task. The same was true of the Stasi in East Germany and the KGB in the Soviet Union — the ambition was total surveillance, but human limitations made it impossible.
AI changes that equation entirely. As Harari argues, an AI monitoring system that never sleeps, linked to every device in every home, could achieve what the Securitate never could. The technology now exists to process vast amounts of data, making it feasible for governments to monitor citizens, detect dissent, and maintain control at a scale that was previously the stuff of dystopian fiction.
Harari's analysis goes further. He concludes that advances in AI are likely to disproportionately favour totalitarian systems by enabling unprecedented levels of surveillance, dissident detection, and suppression. He illustrates this with contemporary examples, such as Iran's use of AI-enabled facial recognition to enforce hijab requirements: ubiquitous cameras that flag a violation and send a warning to the woman's smartphone within seconds, with punishment to follow for non-compliance.
This is not science fiction. It is happening now. And the legal framework that would allow something similar to be done with European citizens' data, held by American companies, already exists.
The convergence
Step back and consider the full picture. The US government demands unrestricted access to the most powerful AI models. It has the legal authority, through the CLOUD Act and FISA Section 702, to access European data held by American companies without consent, without notification, and without judicial review in Europe. The technological capability to process that data at scale now exists. And the oversight mechanisms meant to prevent abuse are being systematically dismantled.
Harari offers four principles to counter these dangers: ensure data collection serves people rather than manipulates them; never allow information to be concentrated in one place; if surveillance of individuals increases, surveillance of those in power must increase in equal measure; and always leave room for change and correction.
The AI Act codifies some of these principles into law. Anthropic defends them as company values. But the Anthropic-Pentagon conflict reveals what happens when neither protection holds — when there is no law and a company's principles are overruled by political force.
What is at stake
This conflict is about far more than a multi-million-dollar contract. It is about who decides how the most transformative technology since the splitting of the atom is deployed. And the answer to that question must not depend on the courage of a single CEO or the principles of a single company.
America needs what Europe already has: a legal framework that sets clear boundaries, that can evolve with the technology, and that ensures governments and companies can collaborate in good faith. Not because private companies cannot be trusted — Anthropic proves the opposite — but because the protection of fundamental rights must not depend on the goodwill of a market player.
At the same time, Europe must recognise that the AI Act alone is not enough. As long as 90 percent of Europe's digital infrastructure is controlled by American companies subject to US law, the legal protections the EU has built are structurally undermined. True data sovereignty requires not just regulation but infrastructure — European cloud providers, European AI platforms, and the political will to invest in both.
The AI Act is not a panacea. But it is a start. And right now, America — and Europe's own data — would benefit enormously from having even that much.