A contentious provision tucked into the Republican-led budget bill could dramatically reshape the future of artificial intelligence (AI) regulation in the United States. The measure, which has drawn both support and opposition, would impose a sweeping 10-year pause on state-level AI regulation.
Critics argue that the proposed pause is far too broad and would prevent states from enacting laws to protect consumers from harmful AI applications, such as biased hiring systems, deepfakes, and addictive chatbot technologies. Stripped of the power to regulate, states could do nothing, leaving consumers to rely entirely on federal laws that do not currently exist. Unless Congress passes such measures, people harmed by AI systems would have little recourse beyond suing the companies responsible.
Supporters of the provision, including the U.S. Chamber of Commerce, argue that the pause would help preserve America's lead in AI by clearing away a confusing patchwork of state regulations. They say a unified national approach would ease compliance burdens, particularly for smaller companies, and spur innovation.
Critics counter that the proposal is unprecedented in both scope and duration, and that it amounts to a major victory for the tech industry. Many of the companies poised to benefit from the moratorium have donated to President Donald Trump's campaigns. This week, a coalition of 77 advocacy organizations, including Common Sense Media, Fairplay, and the Center for Humane Technology, urged Congress to strip the provision from the budget bill.
“By erasing all current and future state AI legislation without putting new federal protections in place, AI firms would get exactly what they want: no rules, no accountability, and total control,” the coalition wrote in an open letter.
Several states have already enacted or proposed AI-related laws. Tennessee, for instance, passed the ELVIS Act, which bans unauthorized AI replication of a musician's voice. Sen. Marsha Blackburn, a Republican from Tennessee, praised the law and stressed that a federal moratorium should not take effect until national rules are in place.
California is also taking action. Lawmakers there have introduced two bills aimed at protecting young people from potentially harmful AI companion platforms. One bill would ban high-risk uses of AI, such as anthropomorphic chatbots that can emotionally manipulate children.
Camille Carlton, policy director at the Center for Humane Technology, acknowledges that smaller AI companies may worry about navigating a patchwork of state laws. But she notes that most state proposals are narrowly tailored, focusing on safety issues such as fraud and privacy. Many also scale obligations to a company's size rather than imposing sweeping restrictions that could stifle innovation.
Carlton also points to a broader pattern: tech companies often oppose state regulations while calling for federal legislation, only to lobby against those same federal efforts later. She argues this dynamic has contributed to Congress's failure to enact meaningful consumer protections in the digital age, and warns that AI could follow the same path.
Some experts are particularly worried about the consequences of a hands-off approach. They draw comparisons to the early days of social media, when minimal oversight produced a host of harms, including damage to young people's mental health.
Gaia Bernstein, a professor at Seton Hall University School of Law and a tech policy expert, says states have played a crucial role in regulating social media and tech companies, particularly on data privacy and child safety. She believes they should play a similar role in overseeing AI.
“Most protections are coming from the states. Congress has largely been unable to act,” Bernstein says. “If you're saying states cannot take action, that's extremely concerning, because there would be nowhere for protections to come from.”
As the debate continues, the future of AI regulation in the U.S. remains uncertain, caught between the promise of innovation and the demand for accountability.