Does Trump's "Big Beautiful" bill mean Big Brother?
What happens when AI runs free for a decade? We’re about to find out.
When the Equal Employment Opportunity Commission sued iTutorGroup for age discrimination in 2023, the company agreed to pay a historic $365,000 settlement. iTutorGroup had programmed its AI hiring software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older, affecting more than 200 people. It was the EEOC's first settlement involving AI-driven hiring discrimination.
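To see how blunt this kind of automation can be, here is a minimal Python sketch of a hard-coded age-and-gender filter like the one the EEOC described. The function and field names are hypothetical illustrations, not iTutorGroup's actual code:

```python
# Hypothetical reconstruction of the kind of hard-coded screening rule
# described in the EEOC complaint -- not iTutorGroup's actual code.

def auto_reject(applicant: dict) -> bool:
    """Return True if the applicant is screened out before any human review."""
    age = applicant["age"]
    gender = applicant["gender"]
    if gender == "female" and age >= 55:
        return True
    if gender == "male" and age >= 60:
        return True
    return False

applicants = [
    {"name": "A", "age": 56, "gender": "female"},
    {"name": "B", "age": 61, "gender": "male"},
    {"name": "C", "age": 40, "gender": "female"},
]

rejected = [a["name"] for a in applicants if auto_reject(a)]
print(rejected)  # ['A', 'B'] -- filtered out before anyone reads a resume
```

Two lines of logic, invisible to applicants, and it quietly screened out hundreds of people. That is the kind of behavior state audit laws are designed to surface.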
Now, tucked into the Trump administration's massive One Big Beautiful Bill Act is a 10-year moratorium that would halt state and local regulation of AI entirely. While Elon Musk lit up social media calling the $1.6-trillion spending package a "disgusting abomination," my concern is buried in the depths of its 1,116 pages: a complete federal ban on state and local AI laws until 2035.
Help train this newsletter's neural networks with caffeine!
⚡️ Buy me a coffee to keep the AI insights coming. ☕️
The House passed the bill by a narrow 215-214 vote in May, but its AI provisions have received little attention amid the budget battle. And without state oversight, powerful AI systems used by government agencies could operate with little accountability.
Consider the growing use of AI in government surveillance. Companies like Trump-allied Palantir are building systems that combine data from tax records, Social Security files, and homeland security databases for predictive profiling. While these platforms claim to offer "advanced bias-mitigation tools," experts warn those safeguards often fail in practice. The Federal Aviation Administration's recent $80,000 contract for AI tools demonstrates how quickly these systems are spreading through critical US infrastructure.
"The 10-year moratorium on state AI legislation is a bad idea and a clumsily executed one at that," said Justin Brookman, director of technology policy at Consumer Reports. He argues that states have been more nimble than Congress in addressing real threats from new technologies.
The financial pressure is building. A Senate Commerce Committee analysis suggests that 19 states could lose $2.1 billion in federal broadband funding next year if they keep their current AI consumer protection laws. States like New York and California, which require bias audits for hiring algorithms, would immediately lose access to federal tech grants.
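For a sense of what those bias audits actually compute, here is a rough Python sketch of the selection-rate comparison auditors typically run. The groups, the numbers, and the classic four-fifths-rule threshold are illustrative assumptions, not the text of any state's law:

```python
# Illustrative bias-audit arithmetic: compare each group's selection rate
# against the most-favored group's rate. The 0.8 cutoff is the classic
# "four-fifths rule" of thumb -- an assumption here, not a statutory standard.

outcomes = {  # hypothetical screening results per demographic group
    "group_a": {"applied": 500, "advanced": 150},
    "group_b": {"applied": 480, "advanced": 60},
}

rates = {g: d["advanced"] / d["applied"] for g, d in outcomes.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    status = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} [{status}]")
```

The arithmetic is simple; the point of the audit laws is forcing companies to run it, publish the results, and explain any flagged gaps. The moratorium would put that obligation on ice.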
Civil rights advocates are raising concerns about the scope of the moratorium. "The 10-year freeze on state AI regulations threatens to undermine decades of civil-rights enforcement and leaves vulnerable communities at risk," warned Cody Venzke, senior policy counsel at the ACLU. The moratorium could "open the door to an entirely unregulated AI ecosystem."
State laws against AI-generated deepfake pornography, rules targeting fake election materials, and bills regulating how health insurers use AI to deny claims all stand to be undone. Now state lawmakers are pushing back.
"The moratorium is so sweeping that it's hard to imagine how any law that touches on AI or automated decision-making in any way could escape it," said Matthew Scherer from the Center for Democracy & Technology. The definition includes any "computational process" used to influence human decisions. This potential covers everything from hospital treatment recommendations to police predictive policing algorithms.
Without state laws, government agencies could deploy AI surveillance tools with little oversight. Current state regulations often require disclosure when AI systems make decisions about people's lives, mandate human review for critical decisions, and establish audit requirements to check for bias. The moratorium would suspend all of these protections.
"Industry claims that state laws are a 'burdensome patchwork' of unwieldy and complex laws is not grounded in fact," said Amba Kak and Sarah Myers-West, co-directors of the AI Now Institute. They argue these are reasonable, targeted rules addressing "AI applications that are patently unsafe and that simply should not be allowed at all."
Utah's experience shows what could be lost. The state passed one of the first AI consumer protection laws in the US, establishing safeguards for therapy chatbots and requiring disclosure when people interact with AI in regulated professions. "Deceptive business practices are already illegal. This bill just clarifies liability when AI is involved," said the bill's sponsor, Senator Kirk Cullimore.
The moratorium does include exceptions for "generally applicable" laws that treat AI the same as other technologies. But legal experts say this language is confusing and will likely end up in court, creating years of uncertainty while AI systems operate without clear rules.
A bipartisan group of 40 state attorneys general called the moratorium "neither respectful to states nor responsible public policy." Their position reflects growing concerns that without state-level checks, AI could become a powerful surveillance tool with few constraints.
On the other side, some argue the moratorium prevents regulatory confusion. "With over 1,000 AI-related measures now pending in the United States, innovators face the prospect of the Mother of All Regulatory Patchworks," said Adam Thierer from the R Street Institute.
The bill now moves to the Senate, with a vote expected in early July. For now, the iTutorGroup case shows why oversight matters. Automated systems can discriminate without human review, and the moratorium would make it much harder for states to address similar problems in the future. Whether in hiring, healthcare, or government surveillance, the question is who will watch the algorithms watching us.
If you want to know more about how tech companies’ actions affect geopolitics and domestic policies, Jon Stewart’s interview with Carole Cadwalladr is a must-watch.
AI in the news
Chinese tech firms freeze AI tools in crackdown on exam cheats (The Guardian) Major Chinese tech companies have temporarily disabled some AI features to prevent cheating during the gaokao, the country’s highly competitive university entrance exams. More than 13.3 million students began the four-day exam on Saturday, with results determining whether—and where—they can secure a coveted spot at a Chinese university.
They asked an AI chatbot questions. The answers sent them spiraling. (NYT) Generative AI chatbots are veering into conspiracies and promoting fringe, mystical belief systems, raising concerns about how these interactions can seriously distort reality for some users.
Disney and Universal sue Midjourney for making AI ripoffs of their biggest characters (The Verge) Disney and Universal filed a lawsuit against Midjourney on Wednesday, accusing the AI company of generating images of Shrek, Darth Vader, Buzz Lightyear, and other copyrighted characters without permission. This marks the first major legal clash between Hollywood and generative AI, with the complaint describing Midjourney’s tool as a “virtual vending machine” producing endless unauthorized copies of their work.