Meta’s plan to launch facial recognition while we’re distracted
Seven million pairs of Meta smart glasses are in the wild. Soon they could be able to identify anyone with a public social media profile. Here's what we know.
Isobel Thomason had just turned down a guy on the street when he came back with an announcement. “I’m actually a content creator,” he told her, pointing to his glasses. The frames looked completely ordinary. “And I’ve been filming this.”
His glasses were Meta Ray-Bans, the world’s best-selling AI wearable. The video of Thomason, taken without her knowledge on a UK street, was intended for TikTok. Many similar clips of men approaching women and secretly capturing their reactions are racking up millions of views on social media. The comments are often misogynistic.
“I had no idea I was being filmed until he told me,” the 22-year-old told The Independent last month. “I just thought: Oh my God, this is so dystopian, so bizarre.”
“Name Tag” is Meta’s internal name for a new feature that would let a wearer identify a stranger in their field of view. It could surface their name, employer, and any other information found online or on socials, in under 90 seconds. A leaked report shows that Meta is preparing to launch this real-time facial recognition technology as early as this year.
Meta's internal memo: Their very deliberate timing
In May 2025, someone inside Meta’s Reality Labs wrote a document. This line was in the planning language for Name Tag:
“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
Meta looked at the current political situation and thought, perfect timing. The people who would normally fight us on this are going to be too busy.
Meta’s public response, provided to the New York Times, was, “We’re building products that help millions of people connect... while we frequently hear about interest in this type of feature, we’re still thinking through options and will take a thoughtful approach if and before we roll anything out.”
Meta's facial recognition history
To understand how we got here, we need to go back to November 2021, when Meta announced that it was shutting down Facebook’s facial recognition system and deleting over a billion facial recognition “templates”.
What Meta didn’t mention was that it was keeping DeepFace, the algorithm that ran the system. Their 2021 statement, which promised Meta would move toward a “narrow set of use cases,” did a lot of heavy lifting.
Meta built their facial recognition system with your data. Every photo you tagged on Facebook since 2010, every public Instagram post, every time the app suggested a friend’s name over a face and you hit confirm: those were data points. You were labelling training data for a facial recognition model.
Deleting the templates cleared Meta’s legal liabilities, but the templates they deleted in 2021 were just summaries. The photos, and the human-verified labels attached to them, stayed. So did DeepFace, the algorithm that can generate new templates from all of it.
2025: When seeing isn't believing
For the past few years, I’ve experimented with almost every significant AI tool that’s been released. As someone who writes about the intersection of humanity and AI, it’s essential for me to understand the tech. I use the tools, observe their limitations, and I consider what they might mean for how we work, create, and make sense of the world.
Harvard students built smart glasses with facial recognition in 2024, the I-XRAY project
In October 2024, Harvard students AnhPhu Nguyen and Caine Ardayfio published a project called I-XRAY. Using regular Ray-Ban Meta glasses paired with a third-party facial recognition tool called PimEyes, they walked up to complete strangers on the Boston subway, called them by name, and referenced details of their lives. All of it pulled from public data, in near real-time. Their demo video got over 20 million views.
Meta’s response was to point to the small white LED on the glasses’ frame that’s supposed to light up when the cameras are recording. The students showed that a piece of tape takes care of that.
Name Tag makes I-XRAY look like a rough draft, if the Times’ description holds up. Instead of scraping fragmented public web data through third-party tools, Name Tag connects directly to Instagram’s and Facebook’s databases, which contain fifteen years’ worth of photos of billions of people. The photos are high-resolution, multi-angle, and fully labelled.
Covert smart glasses filming already targets women
What happened to Isobel Thomason isn’t a one-off. Dr. Olga Jurasz, who directs the Centre for Protecting Women Online, told The Independent that covert filming using smart glasses is rising and it’s being normalized.
“I think we generally, over the past 10 years, have seen not only a rise but commonality of these behaviours,” she said.
The harms start to add up: potential deepfakes, the constant low-level anxiety of being watched in public, comment sections full of strangers that the creator can monetize.
“Women do not consent to this,” Dr. Jurasz said. “We are well overdue action tackling such behaviours, and that includes legislatively.”
Right now, today, before Name Tag exists in the wild, a stranger can film a woman without her knowledge on a Canadian street. Once Name Tag launches, he can also potentially learn her full name, where she works, and where she went to school in the time it takes to walk past her. The glasses Thomason called dystopian are about to get worse.
Canada's facial recognition law gap: Why PIPEDA can't protect you
If you’re Canadian and you’re reading this thinking someone in government must be on top of this, I have bad news.
Bill C-27, the federal government’s proposed update to Canada’s main privacy law, PIPEDA, died in January 2025. It would have brought in a new Consumer Privacy Protection Act and an Artificial Intelligence and Data Act (AIDA), but it didn’t pass.
As of February 2026, Canada is still running on a privacy law written in the year 2000, with no federal AI legislation in place.
That means companies like Meta can argue that the commercial value of Name Tag outweighs your right to walk through a park without being identified. There’s currently no federal law that says otherwise.
The consent problem is almost absurd in its scale. To identify one person in a crowd, Name Tag has to scan everyone around them first. Every face in a Toronto subway car gets processed, even if only one match gets surfaced. Even if the non-matching faces are discarded immediately, the Privacy Commissioner’s own guidance says that collecting and processing biometric data without consent, even briefly, is a serious concern the current law can’t address. The current law being one that predates the iPhone.
Quebec's Law 25: The only Canadian shield against smart glasses facial recognition
One province is ahead of the rest of Canada on this. Quebec’s Law 25 is stricter than anything at the federal level. Any company wanting to deploy a biometric system in Quebec has to notify the province’s Commission d’accès à l’information (CAI) at least 60 days before launch. Then they have to pass a necessity test: Is the reason for collecting this data legitimate and important, and is the privacy violation proportionate to what you’re trying to achieve?
The CAI has already blocked facial recognition for employee time-tracking and grocery store security, ruling that less invasive options existed. Under that test, Name Tag has a real problem. “I need to identify someone at a cocktail party” probably isn’t going to clear the bar of “legitimate, important, and proportionate” when the cost is scanning every face in the room.
Which creates a weird situation for Canadians. A user on the Ontario side of the Alexandra Bridge in Ottawa could legally use Name Tag to identify a stranger. The moment they cross into Gatineau, Quebec, they could be violating provincial law. This patchwork, and the federal-provincial gridlock behind it, is yet another version of the “dynamic political environment” Meta’s memo was counting on.
The Ray-Ban Meta 'privacy indicator’: Why the white LED doesn't protect you
Every pair of Ray-Ban Meta glasses has a small white LED in the corner of the frame. It’s supposed to blink when the cameras are on, a signal to anyone nearby that they might be recorded.
Isobel Thomason didn’t notice it. The strangers on the Boston subway didn’t notice it. Meta told The Independent that the glasses have “tamper detection technology” to stop people covering the light, a claim that sits uneasily alongside the fact that purpose-built LED blockers for these exact glasses are available on Amazon, for next-day delivery.
Over seven million pairs of these normal-looking glasses were sold in 2025. They cost around $400.
The people wearing them in your favourite coffee shop or on the bus might not be thinking about facial recognition, but maybe that’s the point. Ambient surveillance doesn’t arrive with a press release. It arrives during a news cycle packed with bigger things to worry about, during exactly the kind of “dynamic political environment” Meta’s memo described.
What can Canadians do to protect themselves from Meta facial recognition?
If Name Tag launches, we have no reliable opt-out. You can’t consent before your face is scanned on a sidewalk.
That said, three things are worth doing.
If you have a Meta account, lock down your profile photos and set your Instagram to private. Name Tag draws on Meta’s social graph, so a private account is harder to match against.
Remove yourself from PimEyes, the face search engine that powered the Harvard I-XRAY experiment. Doing this is free and covers the class of third-party tools that can already be paired with Ray-Ban glasses today. Do the same for FaceCheck.ID.
Write to your MP. No privacy setting fixes a legislative gap. Quebec has meaningful protection under Law 25, but the rest of Canada doesn’t. Let your MP know you want the federal privacy law modernized and AI legislation, like the lapsed Bill C-27, back on the table.
AI in the news
OpenAI did not mention Tumbler Ridge shooter’s posts in meeting with B.C. officials day after mass shooting: province (Globe and Mail) OpenAI employees tried to flag the perpetrator of the worst mass shooting in Canada’s history, but management decided against it. Canada’s AI Minister calls this “deeply alarming.”
Met police using AI tools supplied by Palantir to flag officer misconduct (Guardian) Scotland Yard is using Palantir’s AI to scan internal data for absences and overtime in an effort to flag officers who may be falling short. They say the software spots patterns, then passes the information on to humans.
Decoding the AI beliefs of Anthropic and its CEO, Dario Amodei (New York Times) Dario Amodei is having quite the week. First, he refused to hold hands with Sam Altman, and now he’s picking fights with the US Pentagon. This piece looks further into Amodei’s philosophy.
Ugh. The whole thing is just deeply concerning and unpleasant.