How to Stop AI

Anthony Galli
6 min read · Dec 19, 2023

I believe one of the great battle lines of the future will be between artificialists who’d support improving AI and sapienists who won’t.

The two will largely divide along our existing ideological lines. Leftist technocrats, leveraging the poor, will start with a competitive advantage over the American middle class. But as AI improves, it could lift more Americans into the middle class, thereby strengthening our ability and will to stop it, or it could be used to exacerbate inequality so that artificialists can continue to hold the door open to oblivion.

Artificialists will want to keep the door open to a self-generating monster even as the risks of AI increasingly outweigh the rewards for a variety of reasons: curiosity, hedonism, learned helplessness, misanthropism, thanatophobia, AI worship, etc.

But I think before AI could destroy us, a foreign power would dominate us.

You see, during the Cold War, the US and the USSR both had enough nukes to destroy the other, but neither struck because the other would've had enough time to launch a retaliatory attack, i.e., mutually assured destruction.

With AI, however, WW3 could be won with the flip of a switch.

AI advisor: "Hi Mr. President, if we hack China, we have a 99% chance of taking it over, whereas if China hacks first, they have a 96% chance of taking us over. Would you like to preemptively hack, or put your faith in the Chinese Communist Party not to do so either?"

AI will tip the offense-defense balance in favor of offense. Or at least, because of AI's black-box, fast-improving, potentially all-powerful nature, no nation could be sure the balance hasn't already been tipped, so every nation will be forced to act as if it has.

I hope, believe, and will fight for America to win the AI arms race so that we can ultimately stop AI on our terms.

And then rather than taking over the world with it, I recommend we “encourage” our peers to sign a non-development treaty similar to what we did with nuclear weapons, except here, we’d be much more effective at enforcing compliance because this…