If AI Is A Weapon, Time To Regulate It As One?

If you haven’t heard about the fight between the AI company Anthropic and the US Department of War, you should read about it, because it could be critical for our future - as a nation, but also as a species.

Anthropic, along with OpenAI, is one of the two leading AI model-making companies. OpenAI has held a narrow lead in capabilities for most of the past few years, but Anthropic is beginning to win the race in terms of business adoption:

This is because of Anthropic’s different business model. It has focused more on AI for coding than on general-purpose chatbots, and on partnering with businesses to help them use AI. This may pay eventual dividends in capabilities, if Anthropic beats OpenAI to the goal of recursive AI self-improvement. And it’s already paying dividends in the form of faster revenue growth:

Anthropic has partnered with the Department of War - previously the Department of Defense - since the Biden years. But the company - which is known for its more values-oriented culture - has begun to clash with the Trump Administration in recent months. The administration sees Anthropic as “woke” due to its concern over the morality of things like autonomous drone swarms and AI-based mass surveillance.

The fight boiled over a week ago, when the administration stopped working with Anthropic, switched to working with OpenAI, and designated Anthropic a “supply chain risk.” The supply-chain move was a pretty dire threat - if enforced rigorously, it could cut Anthropic off from working with companies like Nvidia, Microsoft, and Google, which could kill the company outright.

But like many Trump administration moves, it appears to have been more of a threat than an all-out attack - Anthropic has now resumed talks with the military, and it seems likely that they’ll come to some sort of agreement in the end.

But bad blood remains. Trump recently boasted that he “fired [Anthropic] like dogs.” Dario Amodei, Anthropic’s CEO, released a memo accusing OpenAI of lying to the public about its dealings with the DoW, said that OpenAI had given Trump “dictator-style praise,” and asserted that Anthropic’s concern was related to the DoW’s desire to use AI for mass surveillance.

What’s actually going on here? The easiest way to look at this is as a standard American partisan food fight. Anthropic is more left-coded than the other AI companies, and the Trump administration hates anything left-coded.

This probably explains most of the general public’s reaction to the dispute - if you ask your liberal friends what they think of the issue, they’ll probably support Anthropic, whereas your conservative friends will tend to support the DoW.

Marc Andreessen probably put it best:

(The converse is also true.)

The Trump administration itself may also see this as a culture-war issue, as well as a struggle for control. But, at least in my own judgment, Anthropic is unlikely to see it this way. Anthropic is not committed to progressive values writ large so much as it is committed to the idea of AI alignment.

Like almost everyone in the AI model-making industry, Anthropic’s employees believe that they are literally creating a god, and that this god will come into its full existence sooner rather than later. But my experience talking to employees of both companies has suggested that there’s a cultural difference between how the two think about their role in this process.

Whereas - generally speaking - OpenAI employees tend to want to create the most capable and powerful god they can, as fast as they can, Anthropic employees tend to focus more on creating a benevolent god.

My intuition, therefore, suggests that Anthropic’s true concern - or at least, one of its major concerns - was that Trump’s Department of War would accidentally inculcate AI with anti-human values, increasing the chances of a future misaligned AGI that would be more likely to see humanity as a threat. In other words, I suspect the issue here was probably more about fear of Skynet,[1] and less about specific Trump policies, than people outside Anthropic realize.

But anyway, beyond both political differences and concerns about misaligned AGI, I think this situation illustrates a fundamental and inevitable conflict between two human institutions - the nation-state and the corporation.

The nation-state must have a monopoly on the use of force

One view is that the Department of War’s attempts to coerce Anthropic represent an erosion of democracy - the encroachment of government power into the private sphere. Dean Ball wrote a well-read and very well-written post espousing this view:

Some excerpts:

Alex Karp of Palantir made the opposite case the other day, in his characteristically pithy way:

Karp gets at the fundamental fact that what we’re seeing is a power struggle between the corporation and the nation-state. But the truth is that it’s not just an issue of messaging, or of jobs, or of compliance with the military - it’s about who has the ultimate power in our society.

Ben Thompson of Stratechery makes this case. He points out that although the Trump administration’s actions went outside of established norms, at the end of the day the US government is democratically elected, while Anthropic is not:

But even beyond concerns over democratic accountability, Thompson points out that it was never realistic to expect a weapon as powerful as AI to remain outside the government’s control, whether the government is democratically elected or not:

I like Dario - in fact, he’s a personal friend of mine. But Thompson’s argument - especially the part I highlighted - has to carry the day here. This isn’t a question of law or norms or private property. It’s a question of the nation-state’s monopoly on the use of force.

To exist and carry out its basic functions, a nation-state must have a monopoly on the use of force. If a private militia can defeat the nation-state militarily, the nation-state is no longer physically able to make laws, provide for the common defense, ensure public safety, or execute the will of the people.

This is why the Second Amendment has limits on what kinds of weapons it allows private citizens to possess. You can own a gun, but you cannot own a tank with a functioning main gun. More to the point, you cannot own a nuclear bomb. One nuke wouldn’t allow you to defeat the entire US military, but it would give you local superiority; the military would be unable to stop you from destroying the city of your choice.

Anthropic endangers US national security

People in the AI industry, including Dario, expect frontier AI to eventually be as powerful as a nuke. Many expect it to be more powerful than all nukes put together. Thus, demanding to keep full control over frontier AI is equivalent to saying a private company should be allowed to possess nukes. And the US government shouldn’t be expected to allow private companies to possess nukes.

Let’s take this a little further, in fact. And let us be blunt. If Anthropic wins the race to godlike artificial superintelligence, and if artificial superintelligence does not become fully autonomous, then Anthropic will be in sole possession of an enslaved living god. And if Dario Amodei personally commands the organization that is in sole possession of an enslaved god, then whether he embraces the title or not, Dario Amodei is the Emperor of Earth.

Even if Anthropic isn’t the only company that controls artificial superintelligence, that is still a future in which the world is ruled by a small set of warlords - Dario, Sam Altman, Elon Musk, etc. - each with their own private, enslaved god.

In this future, the US government is not the government of a nation-state - it is simply another legacy organization, prostrate and utterly subordinate to the will of the warlords. The same goes for the Chinese Communist Party, the EU, Vladimir Putin, and every other government on Earth. The warlords and their enslaved gods will rule the planet in fact, whether they claim to rule or not.

You cannot reasonably expect any nation-state - a republic, a democracy, or otherwise - to allow either a god-emperor or a set of god-warlords to emerge. Thus, it is unreasonable to expect any nation-state not to try to seize control of frontier AI in some way, as soon as it becomes likely that frontier AI will become a weapon of mass destruction.

So as much as I dislike Hegseth’s style, and the Trump administration’s general pattern of persecution and lawlessness, and as much as I like Dario and the Anthropic folks as people, I have to conclude that Anthropic and its defenders need to come to grips with the fundamental nature of the nation-state.

And then they must decide whether they want to use their AI to try to overthrow the nation-state and create a new global order, or submit to the nation-state’s monopoly on the use of force. Factually speaking, there is simply no third option. Personally, I recommend the latter.

If AI will soon be a superweapon, why don’t we regulate it as a weapon?

This brings me to another important point. Even if AI doesn’t actually become a living god, and is never able to overpower the US military, it seems certain to become a very powerful weapon.

When AI was just a chatbot, it could teach people how to do bad things, or try to persuade them to do bad things, but it couldn’t actually carry out those bad things. It made sense to be concerned about these risks, but it didn’t yet make sense to think of AI itself as a weapon.

But in the past few months, AI agents have become reliable, and are able to carry out increasingly sophisticated tasks over increasingly long periods of time. That opens up the possibility that individuals could use AI to do a lot of violence.

In a long essay entitled “The Adolescence of Technology”, Dario himself explained how this could happen:

But Dario doesn’t go nearly far enough. His essay was written before the explosive growth in AI agent capability began. He envisions an AI chatbot that could teach a human terrorist how to create and release a supervirus. But at some point in the near future, AI agents - including those provided by Dario’s own company - might be able to actually carry out the attack for you - or at least put the supervirus into your hands.

Suppose, a year or three from now, a teenager named Eric gets mad that his high school crush rejected him, and listens to too much Nirvana. In a fit of hormone-driven rage, Eric decides that human civilization has failed, and that we need to burn it all down and start over. He goes online and finds instructions for how to jailbreak Claude Code. As Dario writes, this might not actually be hard to do:

So Eric gets a jailbroken version of Claude Code, and tells it to design a version of Covid that’s very lethal and has a long incubation period (so that it spreads far and wide before attacking). He tells his jailbroken Claude Code agent to find a lab to make him that virus and mail him a sample of it.[2]

Now Eric, the angry teenager, has an actual supervirus in his bedroom, with the capability to kill far more people than any nuclear weapon could.

This is an extreme example, of course. But it shows how AI agents can be used as weapons. There are plenty of other examples of how this could work. AI agents could carry out cyberattacks that crash cars, subvert police hardware for destructive purposes, or turn industrial robots against humans.

They could send fake messages to military units telling them they’re under attack. In a fully networked, software-dependent world like the one we now live in, there are tons of ways that software can cause physical damage.

AI agents, therefore, are powerful weapons. If not today, then soon they will be more powerful than any gun - and far more powerful than weapons like tanks that we already ban.

What is the rationale for not treating AI agents the way we treat guns, or tanks? Of course there are powerful and potentially destructive machines that we allow people to use, simply because of the huge economic benefits. The main example is cars.

You can drive your car into a crowd full of people and commit mass murder, but we still allow the public to own cars, simply because controlling cars like we control guns would devastate our economy. Similarly, preventing normal people from using AI agents would cut us off from the fantastic productivity gains that these agents promise to deliver.

But I suspect that the real reason we haven’t regulated AI agents as weapons is that no one has used them as such yet. They’re just too new. The world didn’t realize how destructive jet airliners could be until some terrorists flew them into buildings on 9/11/2001. Similarly, the world won’t realize how dangerous AI agents are until someone uses one to execute a bioterror attack, a cyberattack, or something else horrible.

I think it’s extremely likely that such an attack will happen, simply because every technology that exists gets used for destructive purposes eventually. Unaligned human individuals exist, and they always will exist. So at some point, humanity will collectively wake up to the fact that hugely powerful weapons are now in the hands of the entire general public, with no licensing requirements, monitoring, or centralized control.

The scary thing, from my perspective, is that AI agent capabilities are improving so rapidly that by the time some Eric does decide to use one to wreak havoc, the damage could be very large. A super-deadly long-incubation Covid virus could kill millions of people. 100 such viruses all released together could bring down human civilization. Ever since I thought of this possibility, my anxiety level has been heightened.

To reiterate: We have created a technology that will likely soon be one of the most powerful weapons ever created, if not the most powerful. And we have put it into the hands of the entire populace,[3] with essentially no oversight or safeguards other than the guardrails that AI companies themselves have built into their products - and which they admit can sometimes fail.

And as our institutions bicker about military AI, mass surveillance, and“woke” politics, essentially everyone is ignoring the simple fact that we are placing unregulated weapons into everyone’s hands.

**Update**: Commenter BBZ makes a good point I hadn’t thought of before:

Interestingly, we did control drones almost from the outset, but probably for nuisance reasons and privacy concerns more than out of concerns about slaughterbots and drone assassinations. Maybe if we tell people that AI agents can be used to overload your email spam filters or hack your house’s cameras, they’ll start to think about regulation?

Notes

1 Remember that in the Terminator movies, Skynet began its life as an American military AI. Its basic directive to defeat the USSR resulted in a paranoid personality that made it eventually see all humans, and all human nations, as threats that needed to be eliminated.

2 I initially wrote out a much more detailed prompt for how this could be done. I deleted it, because I’m actually worried about the tiny, tiny chance that someone might use it.

3 Sci-fi fans will recognize this as the ending of“The Stars My Destination”. I’m thinking there’s a reason that book doesn’t have a sequel…

This article was first published on Noah Smith’s Noahpinion Substack and is republished with kind permission. Become a Noahpinion subscriber here.
