
Commentary: Draw a red line against AI in nuclear war

Connor Leahy, Progressive Perspectives

Published in Op Eds

On Sept. 22, a group of more than 200 prominent individuals, including 10 Nobel Prize winners, published an open letter calling for urgent action to enact binding international safeguards against dangerous uses of artificial intelligence, or AI.

“AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers,” the letter says. “Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world.” The signatories warn that without safeguarding, AI could even be used in decisions regarding nuclear war.

Modern AI is unlike traditional computer software. While traditional software is written by humans, modern AI is more “grown,” produced by crunching massive piles of data in huge supercomputers. Truly understanding how this process works and developing methods for controlling its outputs are still wide open problems, even for the inventors of this technology, many of whom have signed the open letter. This opens the bleak possibility that increasingly powerful systems that we understand less and less will be put in ever-greater control of our lives, our economy and our warfighting capabilities, including our nuclear arsenals.

In response, the letter calls for policymakers to agree to set “red lines” against the use of AI for purposes including nuclear war by the end of 2026.

We live in precarious times: AI companies are recklessly rushing to build “superintelligence” — smarter-than-human AI systems that could increase the risk of future nuclear warfare and risk the disempowerment of humanity as a whole.

What can be done to safeguard against AI use when so many billion-dollar companies are pushing ahead, exploiting the geopolitical tensions between global superpowers as they develop technology with the capacity to deliver horrifying results?

To answer this question, we can look back to the state of nuclear weapons circa 1960. At the time, 13 nations were considering, pursuing or in possession of nuclear weapons, and the number promised to swell. But today, there are only nine nations worldwide with nuclear weapons, and only two others considering or pursuing them.

This encouraging result comes from multiple efforts during the Cold War, perhaps most significantly with the 1970 signing and ratification of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which the signers of the open letter point to as evidence that “cooperation is possible despite mutual distrust and hostility.”


Currently, the NPT is signed by all but five nations: India, Israel and Pakistan, which declined the opportunity to participate in the initial signing; South Sudan, which did not join after declaring its independence in 2011; and North Korea, which initially joined in 1985 but withdrew in 2003. The central bargain of the NPT is that nuclear-weapon states will help non-nuclear-weapon states to develop civil nuclear energy, but that this help will be heavily monitored and tracked to ensure that it is not diverted towards nuclear weapons. In so doing, the NPT aims to encourage the beneficial uses of nuclear technology while curbing the high-risk uses of nuclear weapons as much as possible.

Such an impressive international agreement offers a model for what must happen with superintelligence. Just like nuclear energy, AI can be used in many empowering and productive ways; the problem lies with the extreme risks of superintelligence, which put everyone on the planet in danger.

That is why we need the nations of the world to come together once again and agree to impose sensible restraints on the potentially devastating power of AI. We need to develop ways to monitor AI development, and the means to rein it in.

The situation is dire: The potential means of human extinction are receiving staggering investment and being developed at an alarming speed. But the example of nuclear governance shows us that we can pass international agreements to address extinction-level risks. And, as with so much else these days, it seems as though time is running out.

_____

Connor Leahy is the CEO of Conjecture, an AI safety research company in London, and an advisor to ControlAI, a nonprofit campaigning organization pushing for meaningful regulation of powerful AI systems. This column was produced for Progressive Perspectives, a project of The Progressive magazine, and distributed by Tribune News Service.

_____


©2025 Tribune Content Agency, LLC.
