Nearly Every State Now Considering AI Legislation
As of April 2026, 47 out of 50 U.S. states have introduced active legislation related to artificial intelligence, according to a new analysis by the National Conference of State Legislatures (NCSL). The three holdout states — Wyoming, South Dakota, and Alaska — have pre-filed AI-related bills expected to be introduced in their next legislative sessions.
The explosive growth in state-level AI legislation represents a 340% increase from 2024, when only 14 states had active AI bills. The surge reflects growing public concern about AI's impact on jobs, privacy, elections, and safety, as well as frustration with the slow pace of federal action.
Key Legislative Themes
Analysis of the 340+ active bills reveals several dominant themes:
- AI transparency requirements (38 states): Mandating disclosure when consumers interact with AI systems or when AI is used in consequential decisions
- Deepfake regulation (35 states): Criminalizing non-consensual AI-generated images and requiring labeling of AI-generated media
- Employment and hiring (32 states): Regulating the use of AI in hiring, firing, and performance evaluation decisions
- AI in education (28 states): Setting guidelines for AI use in schools and restricting AI surveillance of students
- Government AI use (25 states): Establishing oversight frameworks for AI deployment in government services
“The states are filling a vacuum left by Congress. Businesses and consumers need rules of the road for AI, and they cannot wait indefinitely for Washington to act.” — Tim Storey, CEO, National Conference of State Legislatures
Most Consequential Bills
Several state bills have attracted national attention for their potential industry impact:
California SB-1047 (AI Safety Act): Would require developers of AI models above certain compute thresholds to conduct pre-deployment safety testing and maintain kill switches. The bill has been called the “most ambitious AI safety legislation in the U.S.”
Illinois AI Fairness Act: Would create a private right of action for individuals harmed by biased AI decision-making, potentially opening the door to class-action lawsuits against AI companies.
Texas AI Innovation Act: Takes a pro-business approach, establishing regulatory sandboxes for AI development and preempting local AI regulations.
Industry Response
The tech industry has sounded the alarm about the patchwork nature of state-level regulation. A coalition of major tech companies, including Google, Microsoft, OpenAI, Meta, and Amazon, sent a joint letter to Congress in March urging federal preemption of state AI laws.
The letter argued that “a fragmented regulatory landscape with potentially 50 different compliance frameworks would stifle innovation, increase costs, and ultimately harm American competitiveness in the global AI race.”
Compliance Challenges
For businesses, the proliferating state laws create significant compliance complexity:
- Multi-state companies may need to implement different AI practices for different states
- Startups and smaller firms face disproportionate compliance burdens compared to large tech companies
- Online services accessible nationwide must determine which state laws apply to which users
- Conflicting requirements between states could create situations where compliance with one law requires violating another
Federal Action Stalled
Despite bipartisan acknowledgment of the need for federal AI regulation, Congress has made limited progress. The Senate AI Working Group, led by Majority Leader Chuck Schumer, released a framework in 2024, but no comprehensive legislation has advanced beyond the committee stage.
The White House Executive Order on AI Safety, issued in October 2023, established some federal guidelines but lacks the force of law and does not preempt state action. Industry observers increasingly believe that meaningful federal AI legislation is unlikely before 2027, ensuring that state-level activity will continue to accelerate.