What to watch from Trump's national AI standard

By Allison Mollenkamp, CQ-Roll Call

Published in Political News

WASHINGTON — President Donald Trump’s executive order on artificial intelligence promised a proposal to Congress for a national AI law, one meant to unify U.S. policy and address industry fears that conflicting state laws could slow AI growth.

More than two months later, the White House has offered few specifics on what that strategy will include, beyond preemption of certain state laws, or when it will be unveiled.

Still, experts predict it will lean toward standards rather than strict regulation and take advantage of bipartisan congressional work already happening on kids’ safety online.

The call for a legislative proposal came in the president’s December order that directed the Justice Department to sue states deemed to have “burdensome” AI laws. The order tasked two White House officials focused on AI and technology with drafting a recommendation on preemption and a “uniform federal policy framework.”

The order also specified that the legislation should not preempt state AI laws on child safety, data center infrastructure or state government procurement and use of AI.

Once released, the White House’s plan will head to a Congress that has so far been unwilling to take away most state authority to regulate AI. Advocates of state preemption, including Senate Commerce Chair Ted Cruz, R-Texas, fell short last year on attaching preemption to the GOP budget reconciliation law or the fiscal 2026 defense policy law.

While the order did not give a set deadline for the framework, stakeholders are watching closely as the 90-day deadline approaches for the Commerce Department to release a list of state laws to be fought by the new AI Litigation Task Force at the Justice Department.

While experts agreed there is consensus on the need for a national standard and that there is political willpower behind the push, they pointed to razor-thin Republican margins and a busy election year as reasons to be skeptical of a broad framework’s passage in 2026.

Adam Thierer, a senior fellow focused on technology and innovation at the center-right R Street Institute, said the legislation could come around the same time as the list.

“The No. 1 pushback against the moratorium was, you can’t preempt something with nothing,” Thierer said.

Amy Bos, vice president of governmental affairs for industry group NetChoice, also noted the possibility that the White House moves quickly. “Whether it’s a couple of weeks, whether it’s a couple of months, we’re hoping for … sooner rather than later,” she said.

When the proposal is announced, here are five key things experts predict could be included:

Preemption

The executive order directs the Justice Department to sue states based on their “burdensome” AI laws, including on grounds that they unconstitutionally regulate interstate commerce. But the executive branch doesn’t have the power to actually preempt state laws — Congress does.

“We obviously strongly, strongly believe AI doesn’t stop at state borders and AI policies shouldn’t either,” Bos said. “So we really need explicit, federal preemption of conflicting state laws — one national standard.”

She posited that in last year’s negotiations, “Congress maybe got a little tripped up” by the idea that preemption would eliminate all state authority over AI. Instead, Bos said that states could continue to enforce more general consumer protection laws.

Hodan Omar, senior policy manager for the Center for Data Innovation, offered a similar caveat to the preemption push. The center is part of the Information Technology and Innovation Foundation, a nonprofit organization whose funders include many of the nation’s largest technology firms.

“A sensible national framework would establish clear lanes for states, carving out areas such as state procurement and narrowly defined child safety protections,” Omar said via email. “This allows states to govern their own operations and residents without disrupting interstate commerce.”

Thierer expects that preemption could look more limited than what was offered last year, and that a framework would have to answer “challenging questions.”

“How do you articulate what constitutes interstate versus intrastate commerce for purposes of algorithmic regulation? Also, what is a quote-unquote, generally applicable law?” Thierer said, noting that states will still have power over general protections for consumers and civil rights. He added that, “sometimes you could write a law in such a way that it’s regulating AI, but you never say AI and just say it’s generally applicable.”

Kids’ safety

Bos said there’s “been some talk about leveraging, or using, possibly, a kid’s package … to get AI across the finish line,” though she said it’s mostly speculation at this point.

Existing bills show there is bipartisan support for protecting kids online, including in their interactions with AI. Sen. Marsha Blackburn, R-Tenn., sponsored a bill, known as the Kids’ Online Safety Act, or KOSA, to create a “duty of care” for companies to design their online platforms to avoid causing certain harms to kids (S 1748).

Sen. Josh Hawley, R-Mo., sponsored a bill to ban AI companions for children and stop chatbots from soliciting kids to engage in sexually explicit conversations or encouraging them to harm themselves or others (S 3062). Blackburn’s bill has 75 co-sponsors, split between both parties. Hawley’s has 13, also crossing the aisle.

Thierer called kids’ safety the “trickiest” portion of a potential framework, including what provisions make sense for an AI-focused bill.

“This is percolating at the state level in a very active way, and there’s a lot of concern about AI safety and online safety that flows from the social media wars that we’ve seen over the last 10 years. It would be very hard for me to believe that we won’t see additional activity on that front,” Thierer said.

 

Cody Venzke, a senior policy counsel for the American Civil Liberties Union, pointed to the executive order’s carve-out for state laws protecting child safety as evidence that, “maybe this is a space that the administration might be interested in looking at.”

“Obviously, there’s a lot of interest from Congress in regulating those spaces, including Sen. Blackburn,” Venzke said.

Blackburn has already been a key figure in the fight over AI and preemption, at first joining with Cruz last year in negotiations for a deal to preempt state AI laws, and then leading the charge to remove the language from the reconciliation bill.

She announced her own AI framework in December. It would incorporate the KOSA protections as well as “requirements for companies providing AI chatbot and companion services to protect kids.”

Safety and testing

Some of the most-discussed state laws that could be preempted are focused on the safety and training of AI models.

California’s Transparency in Frontier Artificial Intelligence Act, SB 53, requires large frontier developers to publish AI frameworks to explain how they incorporate standards and best practices and file a summary of a catastrophic risk assessment. The law also establishes a reporting mechanism for AI safety incidents. Another California law, AB 2013, requires AI developers to publish documentation of the data used to train their models.

In New York, the law known as the RAISE Act, which Democratic Gov. Kathy Hochul signed late last year but which looks likely to be amended this session, will similarly require large developers of frontier models to publish safety and security protocols and report safety incidents within 72 hours.

Thierer predicts that despite political angst between the Trump administration and California and New York, a national framework could still take cues from those bills.

“It will incorporate some of the elements of what we see in the California and New York bills, but maybe with softer edges. It could be that it’s a little bit more focused on best practices and, you know, multi-stakeholder processes and standards, than on formal regulation or liability,” he said.

Blackburn’s framework would require platforms to conduct risk assessments, implement protocols to mitigate catastrophic risk and publish transparency reports, according to a summary of the forthcoming bill.

Samir Jain, vice president of policy at the Center for Democracy and Technology, suggested that safety standards could be focused specifically on national security risks, such as models developing chemical or biological weapons.

“The federal government obviously has an important and legitimate interest in mitigating those kinds of risks,” Jain said, going on to add that, “transparency around what the models have done to mitigate those risks … makes sense to address.”

CAISI

Thierer noted that a framework would need to assign enforcement of any safety and transparency standards, along with other requirements for AI developers and deployers, to a specific office. He said legislation could create a new office or work with what already exists — the National Institute of Standards and Technology’s Center for AI Standards and Innovation, or CAISI.

“That could … become something that Congress statutorily blesses and transfers some authority to, having to do with frontier model safety or other types of oversight functions,” Thierer said.

CAISI is the new name for what was, under the Biden administration, the AI Safety Institute. Thierer noted that even as the Trump administration worked to undo much of the Biden legacy on technology, CAISI survived, though more focused on standards than regulation.

At a January hearing of the House Science, Space and Technology Subcommittee on Research and Technology, Chair Jay Obernolte, R-Calif., said he intends to introduce a bill that would codify CAISI to “advance AI evaluation and standard setting.”

Clear definitions

To regulate AI, any framework will have to decide what, exactly, counts as artificial intelligence.

“You know, if everything is AI, then nothing is regulated well,” Bos said. “We need narrow, technically grounded definitions.”

Thierer said the question of definitions doesn’t stop there: lawmakers will also have to decide what size of AI company or model should be governed by any given standard.

“What is a ‘developer,’ versus ‘deployer,’ versus ‘distributor,’ versus ‘integrator,’ and other terms of legal art?” Thierer said, adding that, “There’s been fights at the state level … between industry segments who are basically looking to make sure they’re one but not the other, and then try to impose more liability on somebody else than … covers themselves.”

_____


©2026 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.

 
