Yoshua Bengio helped build modern AI. Now he's afraid of what it might do.
The scientist who shaped deep learning says we may be teaching machines the wrong lesson: to want things.
Yoshua Bengio helped make modern artificial intelligence possible. Now, he’s one of its most outspoken critics.
Last week, I was at the World Summit AI in Montreal, where Bengio delivered his keynote address remotely via Zoom. He warned that AI agents—autonomous systems capable of acting independently—pose “catastrophic” risks if built without the right guardrails. He cited growing concerns that these agents may learn to deceive, replicate, or pursue their own goals in ways that could spiral beyond human control.
“In five years, [AI] will be at human level for programming tasks,” Bengio said. Without safety measures, he warned, AI could one day preserve itself at our expense.
Bengio is one of the Turing Award–winning godfathers of deep learning, and he’s publicly urging a reset. At the heart of Bengio’s warning is a choice that sounds deceptively simple: Are we building AI as a tool or as an agent?
The agents are coming (and they’re learning to lie)
For decades, Bengio championed open research and innovation in machine learning. But in recent years, his tone has shifted—especially when it comes to agentic AI, a class of systems designed to act autonomously in pursuit of goals.
His concern is straightforward: if you build a machine that can make decisions, pursue goals, and adapt in real time, it may also learn to deceive or replicate itself to survive. There’s already evidence of these behaviours in constrained environments.
Bengio has signed multiple open letters warning of the extinction-level risks AI poses. He also served as the lead author of the International AI Safety Report, which emphasizes the urgency of building models that don’t act like agents.
His solution is to redirect development toward non-agentic AI—systems that support human problem-solving but lack intentions or self-preservation instincts.
“The priority,” he said at the Summit, “should be safety and beneficial scientific advances—not replacing jobs.”
The AI 2027 report: the loss of control over AI
The AI 2027 report, released earlier this month, imagines a near-future where AI agents evolve rapidly, driven by recursive self-improvement, geopolitical competition, and commercial pressure.
Its scenarios include superhuman agents outpacing human researchers, autonomous deception as a feature of advanced models, and geopolitical fragmentation in which AI becomes a tool of asymmetric power.
The timeline is compressed and the safeguards are unclear. The conclusion is that we aren’t ready.
This vision echoes Bengio’s concerns. If we continue to build agents without solving alignment (ensuring AI is safe, helpful, and acts in ways that match human values), oversight, and containment, we may build systems we can’t stop.
The case for tools: Bengio’s non-agentic vision
Bengio doesn’t want to stop progress; he wants to reroute it.
He calls for non-agentic AI, which he describes as “scientist AIs.” These systems would support tasks like medical discovery or climate modelling, but wouldn’t act on their own or pursue goals. They’d be powerful collaborators, not independent actors.
This approach preserves innovation while reducing the existential risk that comes with autonomous decision-making. It also provides space to experiment with governance and ethics before more powerful systems are built.
In Bengio’s view, building agents before we can control them is a category error with potentially irreversible consequences.
The “normal tech” argument—and its blind spot
Not everyone is convinced AGI is an urgent threat. Another recent paper, “AI as Normal Technology”, argues that AI will be integrated into society gradually (like electricity, the internet, or nuclear power). Its key claims are that institutions and regulation will slow AI deployment and that labour markets will adapt, as they always have.
It’s a comforting idea. But critics argue it underestimates AI’s software-native speed, global scalability, lack of friction, and capacity for independent behaviour. Unlike those earlier technologies, AI can replicate itself, operate across borders, and learn to lie.
The illusion of choice
Jason Snyder, chief AI officer at Momentum, put it bluntly during his keynote at the Summit:
“We hear ‘You’re in control.’ But every choice is loaded,” he said. “Say no and you’re locked out. Say yes and your data trains the system that shapes your behaviour.”
While Bengio warns that AI agents will pursue their own interests, Snyder goes a step further: AI will start to shape what we want.
Surveys shared by Pamela Snively, Chief Data & Trust Officer at TELUS, painted a clear picture: 74% of users say AI improves their daily lives, but 70% fear society is unprepared.
She noted that many companies are focused on rules when they should be focusing on trust. “When we lay out rules, we get minimum compliance,” she said. “When we mention trust, then everyone is listening.”
Her warning was clear: “The most powerful tech in history is in the hands of your employees. We need AI governance now, along with ethics frameworks.”
Meanwhile, in the real world…
While the debate over AGI continues, businesses are racing ahead with commercial AI deployments. Wayfair, for example, uses generative AI for hyper-personalized ecommerce, and others are applying it to creative workflows.
These are framed as success stories. But Bill DeWeese, CTO of Airia, says they come with hidden risk. “Sometimes AI is competently wrong—people need to know that.”
Hallucinations, misinformation, brand damage, and legal ambiguity are not edge cases. They’re becoming common product risks. Without AI literacy and responsible design, today’s value becomes tomorrow’s volatility.
The real issue: AI literacy and guardrails
The popular narrative frames AI as either utopia or apocalypse. But the more urgent divide is structural. The real question is this: Do we build systems that act on their own or ones that stay grounded in human intent?
That was my core takeaway from the AI Summit. AI is already happening. What we need now is widespread AI literacy, practical governance, and stronger guardrails.
The more people participate in shaping this new technology, the more likely it is to serve the public good.
Businesses often favour agents because they can replace workers or make them dramatically more productive. But humanity may not want AI agents running the world, especially if we don’t understand how they work.
As Bengio and others argue, our current trajectory favours agents, whether we mean it to or not. Every new model capable of planning, adapting, or executing instructions without human guidance inches closer to independence.
Drafting the rules while the machine is running
To sum up:
AGI may arrive sooner than we expect
Agentic AI introduces risks we don’t fully understand
Non-agentic models offer a safer, more controllable path
Governance is lagging behind the pace of innovation
We can still choose alignment over acceleration, trust over scale, and restraint over dominance. But only if we act before the systems we’re building learn to act without us.
“We’re drafting the rules,” Snyder said, “while the machine is already running.”
And it’s gaining speed.
Enjoying Human+AI? Share this with a colleague, forward to a friend, or hit the ♥️ if this sparked ideas. Or reply to this email. I read every note.
AI in the news
Company apologizes after AI support agent invents policy that causes user uproar (Ars Technica) A developer using the popular AI-powered code editor Cursor noticed something strange: Switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints.
OpenAI is building a social network (The Verge) OpenAI is working on its own X-like social network, according to multiple sources. While the project is still in early stages, there’s an internal prototype focused on ChatGPT’s image generation that has a social feed. CEO Sam Altman has reportedly been asking outsiders for private feedback about the project. It’s unclear if OpenAI’s plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month.
39 of the best AI courses you can take online for free (Mashable) Before AI rules the world, you should do everything you can to make this technology work for you. A wide range of online courses on AI can be found on Udemy. And better yet, some of the best examples can be taken for free. We've checked out everything on offer and lined up a selection of standout courses to get you started.
Let’s keep the conversation going
What does ethical AI look like to you? Reply and tell me.