As the Trump administration loosens AI rules, states look to regulate the technology
The Trump administration's move to relax regulations on artificial intelligence has prompted states to consider implementing their own rules to oversee the technology. Concerns about AI's potential effects on privacy, algorithmic bias, and job displacement have led some states to act. With AI becoming increasingly integrated into various industries, the debate over regulation is expected to continue. States are exploring ways to strike a balance between fostering innovation and ensuring the ethical use of AI.

A Serve Robotics autonomous delivery robot, which utilizes AI and is emissions-free, operates on a sidewalk on March 19, 2024 in West Hollywood, California. Several companies are operating self-driving sidewalk vehicles which deliver food and other smaller items in parts of the Los Angeles area. (Photo by Mario Tama/Getty Images)
Trump Administration and State Policies on Artificial Intelligence
Even as the Trump administration lowers some artificial intelligence guardrails in hopes of boosting innovation, states continue to establish policies for the safe use of AI. During his first week in office, President Donald Trump signed an executive order revoking some Biden-era programs promoting the safe use of artificial intelligence.
A Biden administration order had directed more than 50 federal entities to implement guidance on AI safety and security. Some agencies, including the U.S. Department of Justice, were tasked with studying the effects of AI bias and how the technology could affect civil rights.
Besides rescinding that policy, Trump’s order also calls for the development of an AI Action Plan, which will outline policies to “enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements.” But states are still pursuing legislation that aims to keep residents safe.
The measures range from requiring companies to implement consumer protections to outlawing fake photos and videos to regulating the use of AI in health care decisions. States will need to take a bigger role in regulating artificial intelligence, said Serena Oduro, a senior policy analyst at Data & Society. The nonprofit research institute studies the social implications of data-centric technologies, including AI.
“If we continue with the road that Trump is on, I think states will have to step up because they’re going to need to protect their constituents,” Oduro said. “What’s unfortunate is people are already scared.”
State Legislation and Regulations
In 2024, 31 states adopted resolutions or enacted legislation regarding artificial intelligence, according to a database from the National Conference of State Legislatures, a nonpartisan public officials’ association. This year, nearly every state has introduced AI legislation. Colorado last year became the first state to implement sweeping AI regulations.
Virginia this year became the second state to pass comprehensive AI anti-discrimination legislation, which would make companies responsible for protecting consumers from bias in areas such as hiring, housing, and health care. If signed by Republican Gov. Glenn Youngkin, the new law would go into effect in July 2026. The legislation also would require companies developing and using “high-risk” AI systems, such as those used for employment decisions or financial services, to conduct risk assessments and document their intended uses.
Many states are hoping to curb the rise of deepfakes — digitally altered photos and videos — on the internet. Lawmakers in some states, including Montana and South Dakota, are aiming to deter people from using political deepfakes during elections. Other bills, such as those in Hawaii and New Mexico, would establish civil and criminal penalties for sharing sexually explicit deepfake images without the subject's consent.
Lawmakers in a number of states, including Arkansas, California, and Maryland, also introduced legislation that would regulate the use of artificial intelligence in health care and insurance decisions. The Utah legislature, for instance, passed a bill last week that would provide protections for mental health patients interacting with chatbots that use AI. The measure is currently awaiting action from Republican Gov. Spencer Cox.
Efforts at AI Regulation
California Assemblymember Rebecca Bauer-Kahan, a Democrat, is helping lead the state’s efforts to create a framework for AI regulation. Following her successful legislation from last year that now defines “artificial intelligence” within the state’s code, Bauer-Kahan is currently working on six bills related to AI. They would require generative AI developers to publicly document materials used to train their systems, crack down on deepfake pornography services, regulate the deployment of automated decision systems, and more.
In Washington, Republican state Rep. Michael Keaton said he filed legislation to help small businesses that want to invest in AI innovation. After retiring from active duty in the Air Force, Keaton later began working for the service as a contractor. Collaborating with engineers, Keaton said he learned that it’s important to strike a balance between tasks for humans and tasks that can be automated — and how this balance can be used in the public’s interest.
Earlier this month, the Washington state House approved Keaton’s bill, which would create a grant program for small businesses that use artificial intelligence for projects that have statewide impact, such as for wildfire tracking, cybersecurity, or health care advancements. The bill now sits in a Senate committee.
With the emerging patchwork of AI legislation across the states, it could be challenging for AI developers to keep up, said Paul Lekas, the senior vice president and head of global public policy and government affairs at the Software & Information Industry Association, a trade association representing the digital content industry. Not only are states creating their own definitions of artificial intelligence, but they're also outlining different rules for different actors, such as AI developers, distributors, and consumers.
“I think the industry is struggling to figure out how to comply with all of these laws were they to pass,” Lekas said.
Stateline reporter Madyson Fitzgerald can be reached at mfitzgerald@stateline.org.