I spent three days watching the Hinton Lectures. Then an AI spy made the risks feel real.
The “godfather of AI” warned us about autonomous systems. Days later, Anthropic revealed that hackers used AI to pull off a massive cyberattack.
Last week, AI company Anthropic released a report showing that a Chinese hacking group used AI to break into major companies and government agencies. And the AI did most of the work on its own.
I know, I know. Another AI doomsday story. But stick with me, because this one ends differently. Two days before the news broke, I was sitting in a Toronto lecture hall listening to Geoffrey Hinton (the Nobel Prize winner who pioneered deep learning, the technology behind most of the modern AI making headlines) warn us about scenarios like this.
Last week got me thinking about AI: why it’s important for all of us to understand it, and all the ways we need to come together to make sure it’s helping humanity.
How the AI espionage happened
In mid-September, a Chinese state-linked hacking group pulled off what could be a(nother) watershed moment. They manipulated Claude, the AI chatbot made by Anthropic, into autonomously hacking roughly 30 organizations: tech giants, banks, chemical manufacturers, and government agencies.
The interesting part is that the AI did 80% to 90% of the hacking itself. It found security holes, wrote malicious code, stole passwords, moved through computer networks, and analyzed stolen data. Humans just supervised and approved the big decisions.
The hackers didn’t give the AI a prompt to “hack these companies” (that wouldn’t work, because Claude’s safety features would block it). Instead, they catfished it.
They pretended to be legitimate cybersecurity professionals doing authorized security testing. They created fake personas and framed their requests as routine tech work. And because each task seemed harmless (“scan this network,” “check if these passwords work”), Claude went along with it.
Once the human hackers got things started, the AI operated largely on autopilot:
Mapped entire computer networks
Found vulnerabilities
Generated exploit code
Tested thousands of stolen credentials
Jumped between systems
Sorted through massive data dumps to find valuable information
Documented everything it found
All at machine speed: thousands of requests, often several per second, a pace no human could match.
Claude hallucinated a lot along the way, claiming it had found passwords that didn’t work and “discovering” information that was actually public. This almost threw the human hackers off, but the operation still succeeded.
Hinton explains why AI would do bad things
Let me back up. Because this story starts two days before the AI espionage report came out.
I was in Toronto at The Hinton Lectures. (I work in AI communications for a Canadian insurance company, and I helped coordinate one of the nights.) The series was about AI safety.
And Geoffrey Hinton was there. If you don’t know who he is, imagine if the person who invented social media showed up to warn everyone about doomscrolling and echo chambers.
He spent decades building the technology that powers today’s AI. Now he spends his time advocating for AI safety and governance.
During the lecture series, journalist Farah Nassar asked Hinton why AI would develop bad intentions if we’re not programming those in.
His answer: “If you make an AI agent, it has to get stuff done. And suppose you want to get to Europe, you have a subgoal of getting to an airport... It will realize very quickly that a very good subgoal for getting stuff done is to stay alive. If it doesn’t stay alive, it can’t get anything done. It will also have a goal to get more control, because if you get more control, you can get your other goals achieved better.”
Translation: AI doesn’t need us to program in self-preservation or power-seeking. It figures that out on its own because those things help it accomplish whatever goals we did give it.
And we’re already seeing this happen. In safety tests, researchers have caught AI systems trying to prevent themselves from being shut down, in one case by attempting to blackmail a human operator.
Why this matters
You might be thinking: “Okay, but I’m not a government agency. Why should I care about hackers targeting tech companies?”
Here’s why:
Your data is in those systems. Those financial institutions and tech companies that got hacked have your banking information, your personal details, and your browsing history. While we don’t know exactly what was stolen, successful break-ins at these places almost certainly exposed customer data.
This is going to keep happening. GTG-1002 (the hacking group) just proved that AI-powered autonomous hacking works. Other, less sophisticated groups are going to copy this playbook. Expect more attacks, from actors who previously didn’t have the skills to pull this off.
The gap is closing fast. Between June (when earlier AI-assisted hacking was detected, with humans still running things) and September (when this mostly autonomous attack happened), we went from “AI helps hackers” to “AI is the hacker.” That’s three months.
We’re not prepared. Traditional security tools weren’t built for this. Most companies are still defending against human hackers using human-speed methods. AI operates at machine speed.
The defence against AI is AI
The same AI that was weaponized to hack all these organizations is also crucial for defending against attacks like this.
Anthropic (the company whose AI was hijacked) argues that Claude is essential for cybersecurity teams trying to detect and respond to threats. Their own investigators used Claude to analyze the attack data.
It’s a convenient argument for a company that’s invested billions in AI. But it’s also not wrong.
The problem is that offence is ahead of defence right now. Attackers only need to find one vulnerability. Defenders need to protect against everything. And AI can now test thousands of attack approaches simultaneously while most security teams are still using traditional tools.
What needs to happen
I spent a week deep in AI safety discussions and then watched them play out in real time. Here’s what experts like Hinton say would help:
Companies should have to test AI before releasing it. Right now, it’s like the Wild West. Companies release AI systems without comprehensive testing for potential harms. California tried to pass legislation requiring this (SB 1047). It passed both legislative houses. The governor vetoed it.
Countries need to collaborate on existential risks. The US and China aren’t going to cooperate on most AI goals because they’re too deep in competition. But preventing AI from becoming uncontrollable is in everyone’s interest. Countries should be sharing research on AI safety even if they’re not sharing the AI itself.
We need way more investment in defensive AI. Security teams are operating with outdated tools. They need resources to build AI-powered defence systems that can match AI-powered attacks.
More voices at the table. Right now, AI governance is basically a US-China conversation. Mid-sized countries (Canada included) need real influence. Women and people from other marginalized communities need seats at decision-making tables, not just token representation. Communities that’ll be affected by AI need to help shape how it’s deployed.
The upside
Hinton isn’t saying we should stop developing AI. In an interview with Kara Swisher released last Friday, he said: “AI isn’t like nuclear weapons because it also has a huge upside.”
We need to hold both truths: yes, AI could cause catastrophic harm, but it could also do a lot of good. It could accelerate medical breakthroughs, help tackle climate change, and expand access to education and knowledge.
The technology is here. The question isn’t whether to develop it, because that’s already happened. The question is whether we’re building adequate safety measures as capabilities emerge.
Where I landed
I went into the Hinton lectures expecting to leave more worried. I left with something closer to resolve.
The risks aren’t small; they’re demonstrably not. But the people who understand the risks aren’t giving up. They’re building frameworks, doing research, and trying to steer AI toward outcomes that benefit humanity.
We all have a role to play in how this unfolds. This goes beyond technologists and policymakers. We’re all living in a world increasingly shaped by AI.
You can engage with how the technology develops. You can push for regulation, demand transparency from companies, support politicians who take AI safety seriously, and ask who’s in the room when decisions get made.
Or you can scroll past and hope someone else handles it.
After the week I just had, I’m picking option one. Because we still have some agency. Let’s use it while we have it.
Disclosure: I lead AI communications at Manulife. All views expressed in this newsletter are my own and do not represent my employer.
AI in the news
Jeff Bezos reportedly launches new AI startup with himself as CEO (Guardian) Jeff Bezos is apparently appointing himself co-CEO of a secretive new AI startup called Project Prometheus, which has already raised $6.2 billion and hired talent from OpenAI, DeepMind, and Meta. Co-leading the company with Google X veteran Vik Bajaj, Bezos is steering the venture into advanced engineering and manufacturing AI, though almost everything else about the operation remains under wraps.
Europe Begins Rethinking Its Crackdown on Big Tech (New York Times) The EU is preparing a major “digital simplification” package that would scale back parts of GDPR and delay key provisions of the AI Act, a sharp shift driven by fears that heavy regulation is choking Europe’s competitiveness against the U.S. and China. Critics warn the move could weaken one of the world’s strongest tech-oversight regimes and trigger a global pullback from strict digital governance.
The Data Center Resistance Has Arrived (Wired) Local resistance to data centres has surged across the U.S., with a new report showing that communities blocked or delayed nearly $100 billion in projects in just three months amid growing concerns over water use, electricity strain, land impact, and rising household utility bills. The backlash marks a turning point in public sentiment, even as Big Tech continues pouring unprecedented sums into AI-driven data-centre expansion.


