How the US, the EU, and Canada are regulating AI
From deregulation to policy limbo, this is how three global powers are writing the future of AI.
As AI shapes our futures, three major economies are taking divergent paths on its oversight. The choices we’re making now will determine whether you can challenge an AI decision that affects your livelihood, understand how algorithms are scoring your work, or know if AI is making major decisions about your life. So let’s get into this week’s newsletter and take a look at how the US, the EU, and Canada are regulating AI.
What’s happening with AI policy this week (and why it matters)
In the past week:
The Trump administration released “America’s AI Action Plan,” a sweeping AI strategy centred around deregulation
The EU is preparing to enforce its landmark AI Act, with more of its provisions taking effect within the next week
Canada has committed billions to AI infrastructure, but there’s still no regulation in sight
These laws (or their absence) will become blueprints for how societies balance human judgment, algorithmic power, and workers’ rights.
🇺🇸 The US
After the Big Beautiful Bill backfires, Trump introduces another kind of deregulation
In June, I wrote about President Trump’s “Big Beautiful Bill”—a $1.6 trillion omnibus spending package that tucked a sweeping AI policy into its 1,116 pages. The bill included a 10-year moratorium blocking states from passing their own AI laws, effectively banning local oversight until 2035. Civil rights groups, technologists, and even some Republicans called it a power grab. I wrote that it was a dangerous bet.
After backlash, the Senate stripped the AI moratorium from the legislation in a 99-1 vote on July 1, hours before passing the rest of the package.
On July 23, the White House released America’s AI Action Plan, a 90-point strategy that prioritizes deployment speed and global dominance over federal oversight. Three executive orders followed immediately:
Preventing “Woke” AI in the Federal Government
Accelerating Federal Permitting of Data Center Infrastructure
Promoting the Export of the American AI Technology Stack
“The American people do not want woke Marxist lunacy in AI models,” Trump said last Wednesday.
What this means in practice:
Federal agencies are now directed to remove diversity, equity, and climate considerations from AI risk evaluations
Companies in the US can roll out AI tools for hiring, surveillance, and performance monitoring without informing anyone or giving them a way to contest decisions
There’s no federal requirement to disclose AI usage, explain AI decisions, or test for bias
If you work for a US company:
AI tools may roll out quickly with limited testing
You might not be told when AI is involved
Appeals depend on company policy, not law
Speed takes priority over safety
The underlying logic is that faster deployment makes the US more competitive, especially against China. But critics warn this strategy leaves people with no protection from automated harms.
Even OpenAI CEO Sam Altman, whose company stands to benefit from lax rules, called for federal guardrails in an interview with Theo Von last week:
“There have to be some rules here. There has to be some sort of regulation at some point… I think like one countrywide approach would be much easier for us to be able to innovate and still have some guardrails.” — Sam Altman
For American companies and consumers, the result is a policy vacuum. Now there’s a growing list of decisions made by systems you can’t see, can’t question, and can’t hold accountable.
My prediction: in the absence of federal rules, lawsuits over AI decisions will pile up, and the precedents those cases set will effectively become the law. Regulating by courtroom is a slow and expensive way to do things.
🇪🇺 The EU
Risk rules, red tape, and AI accountability
The EU AI Act entered into force about a year ago, on August 1, 2024, with staggered implementation across four risk tiers (sketched in code after this list):
Unacceptable risk: AI systems like real-time biometric surveillance and social scoring have been banned since February 2, 2025
High-risk: Systems used in hiring, credit, and monitoring face strict rules beginning August 2, 2026
Limited risk: Tools like chatbots must identify themselves as AI
Minimal risk: Tools like spam filters and grammar correctors face no new obligations
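To make the tiers concrete, here’s a minimal sketch of how a compliance team might encode this triage logic in Python. It’s an illustration based on my simplified reading of the Act: the tier keywords, obligations, and the `triage` helper are invented for this example, not taken from the Act’s legal text.

```python
# Hypothetical triage helper: map an AI use case to a simplified EU AI Act
# risk tier. Tier summaries are illustrative, not the Act's legal criteria.

RISK_TIERS = {
    "unacceptable": {
        "keywords": ["biometric surveillance", "social scoring"],
        "obligation": "banned since February 2, 2025",
    },
    "high": {
        "keywords": ["hiring", "credit scoring", "worker monitoring"],
        "obligation": "bias testing, documentation, human oversight (from August 2, 2026)",
    },
    "limited": {
        "keywords": ["chatbot", "deepfake"],
        "obligation": "must disclose that users are interacting with AI",
    },
    "minimal": {
        "keywords": ["spam filter", "grammar corrector"],
        "obligation": "no new obligations",
    },
}

def triage(use_case: str) -> str:
    """Return the first tier whose keywords appear in the use case description."""
    for tier, info in RISK_TIERS.items():  # checks strictest tier first
        if any(keyword in use_case.lower() for keyword in info["keywords"]):
            return f"{tier}: {info['obligation']}"
    return "unclassified: needs legal review"

print(triage("AI-assisted hiring and CV screening"))
# high: bias testing, documentation, human oversight (from August 2, 2026)
```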
Real-world impact:
Job applicants in the EU will gain rights to transparency, explanation, and appeal starting in August 2026
Companies that fail to comply could face fines of up to €35 million or 7% of global annual revenue, whichever is higher (a quick calculation follows below)
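For a sense of scale, here’s that penalty arithmetic as a toy Python snippet. The revenue figure is invented for illustration; the €35 million / 7% ceiling applies to the most serious violations.

```python
# Toy illustration of the AI Act's top penalty tier: the greater of
# EUR 35 million or 7% of worldwide annual turnover.

def max_fine(global_annual_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A company with EUR 1 billion in annual revenue faces up to EUR 70 million,
# since 7% of turnover exceeds the EUR 35 million floor.
print(f"EUR {max_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```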
If you work for a European company (or serve EU customers):
You’ll be entitled to know when AI is used and why (starting in 2026)
AI systems must be tested for bias and must be explainable
Formal appeal processes are built in
The rollout may be slower, but protections are stronger
The law applies extraterritorially: any company that does business in the EU or serves EU customers must comply, wherever it’s based.
🇨🇦 Canada
Strategic spending, no law in sight
Canada’s proposed Artificial Intelligence and Data Act (AIDA) died with the prorogation of Parliament in January, after nearly three years of debate.
What’s happening instead:
A $2.4 billion federal AI investment package was announced in April 2024, including:
$2 billion for compute infrastructure
$50 million for a new AI Safety Institute
$5.1 million for an AI and Data Commissioner to enforce AIDA, a law that no longer exists
The regulatory reality:
Laws vary by province (for example, Ontario will require employers to disclose AI use in job postings starting in January 2026)
There’s no federal AI law governing workplace or consumer protections
National standards, like the 2023 voluntary code of conduct for generative AI, are opt-in only
If you work for a Canadian company:
Rights vary significantly by province
No comprehensive federal AI law exists
Companies may choose to follow international standards, or not
Infrastructure investments are strong, but rules are soft
Canada’s current approach emphasizes investment over enforcement.
The stakes beyond compliance
The AI systems we build today will shape the world we live in. Depending on your geography, AI may help your career or harm it, and you may or may not have any legal say in the matter.
Having trouble reading the paywalled links in this story? Check out your local library website. Where I live, in Toronto, our public libraries provide free access to many international news sites.
AI in the news
AI platform designs molecular missiles to attack cancer cells (EurekAlert!) Researchers have developed an AI platform that can rapidly design protein “minibinders” that train a patient’s immune cells to recognize and kill cancer cells, cutting the treatment development timeline from years to just 4 to 6 weeks. The platform, tested successfully on multiple cancer targets, includes built-in safety screening to avoid harming healthy tissue and could pave the way for personalized, AI-driven cancer immunotherapy within five years.
AI summaries cause ‘devastating’ drop in audiences, online news media told (Guardian) A new study warns that Google’s AI Overviews (which summarize search results with AI-generated text) can cause up to an 80% drop in traffic to news sites by pushing links below the fold and giving users little reason to click through. Media organizations say this shift threatens the survival of independent journalism, as Google monetizes their content without driving the referral traffic that sustains their work.
Google AI Mode will generate fake clothes to help you buy real ones (The Verge) Google is launching new generative AI shopping tools, including “AI Mode,” which creates images of clothing and decor based on user descriptions to help shoppers find visually similar real products. Alongside it, a virtual try-on feature will let users upload a photo to see how clothes might look on them, offering a more personalized shopping experience across Search, Shopping, and Google Images.


