The AI future I want to live in
A hopeful look at how Canada could lead the world in ethical AI with real policies, equity-first funding, and smarter, safer tech for all.
What if Canada gets AI right?
It's 2030, and Canada is a global leader in responsible AI adoption. Blending technological advancement with equity, sustainability, and economic resilience, we are killing it. We've broken away from Silicon Valley's playbook and ushered in our own full-blown transformation, one where AI makes life better for everyone.
Canada has cracked the code on responsible AI, turning what could have been a tech horror story into a best-case scenario. And it’s not driven by profit, but by reimagining how technology can serve people. That said, generative AI now contributes more than $180 billion annually to our GDP.
Clean energy is thriving. Hospital wait times are down. Wildfires are caught and put out early. Workers are flourishing, not scrambling. The game-changers? Innovations like worker-owned AI co-ops, portable benefits, and data governance that stops AI from going off the rails. Algorithmic audits catch and prevent bias early, creating a high level of public trust in AI.
From climate-resilient agriculture to education tailored to your brain, the job market is a whole new world.
Canada goes from “nice country” to “most innovative country,” proving that technology can be the rising tide that lifts all boats. The future isn't something that happens to us. In Canada, we're actively designing it.
By 2030, Canada harnesses AI to build a society that’s smarter, fairer, and more resilient than ever.
My last newsletter warned that AI was coming for our jobs, but this week I’m taking a break from doomscrolling to imagine a future where AI builds something better. And to ask: what if Canada leads the way? So I put together a best-case scenario rooted in real policy and vision.
One of my favourite futurists, Amy Webb, just released the Future Today Strategy Group’s latest report and it’s 1000 pages of tech forecasting goodness. In it, she writes:
“The decisions we make in the next five years will determine the long-term fate of human civilization. This isn’t hyperbole—it’s the sobering conclusion drawn from our best available data.”
Take a moment to fully digest that statement. Consider that we now have the privilege, the opportunity, and the duty to make a mark on history. So let’s get to work!
The step-by-step plan
If we treat AI as a public good, like healthcare, we can start to imagine AI creating jobs, helping society, and closing gaps. It’s ambitious, but it’s also completely possible.
STEP 1: Make rules
First up: rules. Canada is currently updating its Artificial Intelligence and Data Act (AIDA). Rather than giving us a “task force” that releases a report, this overhaul needs to include mandatory risk assessments for AI systems, an ombudsperson to investigate suspicious practices, and fines with real teeth for bad actors.
STEP 2: Train people, don’t replace them
We’re not going to stop AI from changing work. But we can put a plan in place to retrain a million Canadians—especially in retail, admin, and manufacturing—with practical, job-ready AI skills.
By 2027, we should aim to retrain 5% of the workforce annually (with a labour force of roughly 21 million, that works out to about a million workers a year). This would give workers the skills to stay relevant, flexible, and future-proof. It will be a challenge because, as I wrote last week, the people losing their jobs are probably working outside of tech. But it can be done if the training is designed well.
STEP 3: Build equity into the code
If your tech only works for the privileged, it doesn’t work. We’d make sure BIPOC- and women-led firms get their fair share of federal AI grants. Every public AI system would go through bias testing. People pushed out of their industries would get employment insurance (EI) while they reskill.
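What would “bias testing” actually look like in practice? Here’s a minimal sketch of one check an auditor could run: comparing a system’s approval rates across demographic groups (a demographic parity check). The data, group labels, and 5-point threshold below are made up for illustration; a real audit would use several fairness metrics, with thresholds set by regulators.

```python
# A toy "bias test": compare approval rates across groups and flag big gaps.
# Everything here (groups, sample data, the 5-point threshold) is hypothetical,
# for illustration only, not any actual federal audit standard.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs; approved is True/False."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions from a public benefits-screening model.
sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 62 + [("group_b", False)] * 38)

gap, rates = demographic_parity_gap(sample)
print(f"approval rates: {rates}")
print(f"gap: {gap:.2f}")
if gap > 0.05:  # hypothetical audit threshold: flag gaps over 5 points
    print("Flag for human review: approval rates differ by more than 5 points.")
```

A real audit would go further, looking at error-rate gaps, calibration, and the data pipeline itself, but even a simple pre-deployment check like this catches the most glaring disparities before they reach the public.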
With bold policy, corporate accountability, and civic engagement, Canada can turn its AI advantage into a blueprint for progress.
Help train this newsletter's neural networks with caffeine! ⚡️ Buy me a coffee ☕️
AI in the news
Tracing the thoughts of a large language model (Anthropic) In this YouTube video, Anthropic explains how its new interpretability methods allow us to trace a model’s (often complex and surprising) thinking. With two new papers, Anthropic’s researchers have taken significant steps toward understanding the circuits that underlie an AI model’s thoughts.
The tech behind Signalgate + Dwarkesh Patel’s “Scaling Era” + Is AI making our listeners dumb? (Hard Fork) I enjoyed the Hard Fork interview with Patel. He talks about what he thinks the future of AI will bring us, and surprise (!), it’s not all doom and gloom.
OpenAI unveils new image generator for ChatGPT (New York Times) On Tuesday, OpenAI beefed up its ChatGPT chatbot with new technology designed to generate images from detailed, complex, and unusual instructions.