Commentary: Draw a red line against AI in nuclear war
On Sept. 22, a group of more than 200 prominent individuals, including 10 Nobel Prize winners, published an open letter calling for urgent action to enact binding international safeguards against dangerous uses of artificial intelligence, or AI.
“AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers,” the letter says. “Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world.” The signatories warn that, without safeguards, AI could even be used in decisions regarding nuclear war.
Modern AI is unlike traditional computer software. While traditional software is written by humans, modern AI is more “grown,” produced by crunching massive piles of data in huge supercomputers. Truly understanding how this process works and developing methods for controlling its outputs are still wide open problems, even for the inventors of this technology, many of whom have signed the open letter. This opens the bleak possibility that increasingly powerful systems that we understand less and less will be put in ever-greater control of our lives, our economy and our warfighting capabilities, including our nuclear arsenals.
In response, the letter calls for policymakers to agree to set “red lines” against the use of AI for purposes including nuclear war by the end of 2026.
We live in precarious times: AI companies are recklessly rushing to build “superintelligence” — smarter-than-human AI systems that could increase the risk of future nuclear warfare and risk the disempowerment of humanity as a whole.
What can be done to safeguard against the misuse of AI when so many billion-dollar companies are pushing ahead, exploiting the geopolitical tensions between global superpowers as they develop technology with the capacity to deliver horrifying results?
To answer this question, we can look back to the state of nuclear weapons circa 1960. At the time, 13 nations were considering, pursuing or in possession of nuclear weapons, and the number promised to swell. But today, there are only nine nations worldwide with nuclear weapons, and only two others considering or pursuing them.
This encouraging result comes from multiple efforts during the Cold War, perhaps most significantly the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), opened for signature in 1968 and entering into force in 1970, which the signers of the open letter point to as evidence that “cooperation is possible despite mutual distrust and hostility.”
Currently, the NPT is signed by all but five nations: India, Israel and Pakistan, which declined the opportunity to participate in the initial signing; South Sudan, which did not join after declaring its independence in 2011; and North Korea, which initially joined in 1985 but withdrew in 2003. The central bargain of the NPT is that nuclear-weapon states will help non-nuclear-weapon states to develop civil nuclear energy, but that this help will be heavily monitored and tracked to ensure that it is not diverted towards nuclear weapons. In so doing, the NPT aims to encourage the beneficial uses of nuclear technology while curbing the high-risk uses of nuclear weapons as much as possible.
Such an impressive international agreement is a great inspiration for what must happen around superintelligence. Just like nuclear energy, AI can be used in many empowering and productive ways; the problem lies with the extreme risks of superintelligence, which put everyone on the planet in danger.
That is why we need the nations of the world to come together once again and agree to impose sensible restraints on the potentially devastating power of AI. We need to develop ways to monitor AI development, and the means to rein it in.
The situation is dire: The potential means of human extinction are receiving staggering investment and being developed at an alarming speed. But the example of nuclear governance shows us that we can pass international agreements to address extinction-level risks. And, as with so much else these days, it seems as though time is running out.
_____
Connor Leahy is the CEO of Conjecture, an AI safety research company in London, and an advisor to ControlAI, a nonprofit campaigning organization pushing for meaningful regulation of powerful AI systems. This column was produced for Progressive Perspectives, a project of The Progressive magazine, and distributed by Tribune News Service.
_____
©2025 Tribune Content Agency, LLC.