2025: When seeing isn't believing
OpenAI’s text-to-video tool is powerful, impressive, and deeply flawed. From deepfakes to data consent, here’s why I won’t feed it my face, my voice, or my trust.
For the past few years, I’ve experimented with almost every significant AI tool that’s been released. As someone who writes about the intersection of humanity and AI, I consider it essential to understand the tech. I use the tools, observe their limitations, and consider what they might mean for how we work, create, and make sense of the world.
But when OpenAI released Sora, its text-to-video generator, I paused. Every time I went to use it, something stopped me. Because it’s clear that we’ve arrived at an inflection point in human history: reality is now optional, and visual evidence no longer counts as proof.
The currency of visual proof
For more than a century, humanity has operated on an implicit social contract about the nature of visual evidence. If something was captured on film or video, we understood it to be, in most cases, a reliable document of something that happened. This was how we built our systems of accountability, how we established truth in courtrooms, and how we formed collective understanding of major events.
In 1963, Abraham Zapruder’s home movie footage of President Kennedy’s assassination became the most analyzed piece of film in American history, studied frame by frame. In 1991, George Holliday’s camcorder recording of LAPD officers beating Rodney King provided irrefutable evidence that sparked both immediate riots and a longer national reckoning with police violence. On September 11, 2001, the world watched the Twin Towers collapse in real time, across countless camera angles, creating a shared traumatic memory that shaped two decades of policy and war. In 2020, Darnella Frazier’s cellphone footage of George Floyd’s murder became a catalyst for a global movement.
These recordings were forms of proof that couldn’t be debated. Maybe we disagreed about context, meaning, or the appropriate response, but we couldn’t reasonably disagree about whether these things had happened. Video created the possibility of collective belief, even in a fractured society. It provided common ground.
Deepfakes, and now the wide availability of Sora, threaten to eliminate that ground altogether.
The acceleration of doubt
The erosion of visual trust didn’t begin in 2025. Deepfake technology has been improving incrementally for years, and we’ve already witnessed it in everything from celebrity impersonations to political disinformation campaigns. But Sora is a tool capable of generating photorealistic video from nothing more than text prompts, available to anyone with an internet connection and a credit card.
The implications go beyond making fake videos. On one level, there’s the straightforward capacity for fabrication: a politician can be made to say anything; historical figures can be resurrected to endorse contemporary causes; evidence of events that never occurred can be manufactured with disturbing believability. That’s troubling enough.
But when any video can plausibly be generated by AI, every authentic video becomes suspect. Now denial becomes easy, and “that’s just AI” functions as an all-purpose dismissal of inconvenient evidence.
OpenAI included watermarking features in Sora, but they can be cropped out easily. Detection tools exist, but they lag behind generation capabilities, and we have no reliable way to tell whether their verdicts are right. We’re entering an era where the question “Is this real?” might not have a satisfactory answer.
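To make that fragility concrete, here’s a minimal sketch of how little it takes to strip provenance from a video file. It assumes ffmpeg and ffprobe are installed, that provenance travels as a visible watermark plus container-level metadata (in the style of C2PA “Content Credentials”), and that the filenames are purely hypothetical; it’s an illustration of the general weakness, not a description of Sora’s actual safeguards.

```python
# Minimal sketch: why visible watermarks and embedded provenance are fragile.
# Assumes ffmpeg/ffprobe are installed; "original.mp4" is a hypothetical input file.
import json
import subprocess

def container_tags(path: str) -> dict:
    """Read container-level metadata tags with ffprobe (one place provenance data can live)."""
    probe = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(probe.stdout).get("format", {}).get("tags", {})

def crop_and_reencode(src: str, dst: str, trim_px: int = 80) -> None:
    """Crop a strip off the bottom of the frame (where a visible watermark might sit) and re-encode.

    Re-encoding writes a brand-new file: the pixels change and the original
    container metadata is not carried over, so metadata-based provenance is lost.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", f"crop=in_w:in_h-{trim_px}:0:0",
         "-c:a", "copy", dst],
        check=True,
    )

if __name__ == "__main__":
    print("before:", container_tags("original.mp4"))
    crop_and_reencode("original.mp4", "cropped.mp4")
    print("after: ", container_tags("cropped.mp4"))  # provenance tags are typically gone
```

The point is simply that a crop-and-re-encode, something any phone app can do, leaves a file with different pixels and none of the original’s embedded metadata.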
The stakes are democratic. A functioning democracy requires some shared baseline of factual reality. When that disappears, we’re left with competing realities defined by allegiance rather than evidence.
Even Sam Altman, OpenAI’s CEO, acknowledged what’s coming in a recent interview (around the 7:20 mark):
“I think it’s important to give society a taste of what’s coming. Very soon the world is going to have to contend with incredible video models that can deepfake anyone or show anything you want. That will mostly be great; there will be some adjustment that society has to go through. I think it’s very important the world understands where video is going very quickly. Very soon we’re going to be in a world where this is everywhere.”
Altman’s tone is matter-of-fact, almost casual. But what he’s describing isn’t just a new tool; it’s the collapse of the visual contract that once anchored reality.
The question of consent
There’s another dimension to Sora that deserves attention: the question of whose faces, voices, and gestures are available for appropriation. The model was trained on vast quantities of video data, much of it scraped from the internet without explicit consent from the people it depicts. This means that anyone’s likeness (yours, mine, your mother’s, a dead person’s) is available for remixing, impersonation, or commercial exploitation by parties unknown.
Within days of Sora’s public release, we saw examples of this. Martin Luther King Jr. and Malcolm X were depicted wrestling. Actors appeared in films they never made. Deceased public figures delivered speeches they never gave. In each case, the technology performed exactly as designed, generating plausible video from textual descriptions. But none of these people consented to their image being used.
OpenAI encourages users to “cast themselves” in Sora-generated videos, framing this as creative empowerment. But when you provide your likeness to train or test these systems, you’re effectively donating your data and performing unpaid labor to help a for-profit company refine its capacity to simulate humans with greater precision. You’re not the customer in this transaction; you’re the raw material.
The underlying assumption appears to be that identity itself is simply another form of data, available for extraction and monetization. This represents a fundamental shift in how we understand selfhood and agency, and it’s happening without meaningful public debate about whether we want to participate in it.
The missing beneficiaries
So who is Sora actually for? OpenAI’s stated mission emphasizes broad human benefit, but it’s difficult to see how a text-to-video generator advances the most pressing challenges facing our species. It doesn’t address climate change, reduce poverty, improve educational access, or enhance medical care. What it does is enable the production of more content, faster and cheaper, at a planetary cost.
This isn’t democratization in any meaningful sense. It’s pure commercialization. And if this represents OpenAI’s vision of “benefitting all of humanity,” we should ask which humans are included in “all.”
Your choice to make
By most accounts, Sora is a remarkable achievement in machine learning and computer vision. But technical achievement doesn’t automatically equate to social benefit. The internet was a technical marvel too. We gave it our data and our attention before we fully understood what we were trading. Social media was going to connect us all, erase distance, and democratize speech. It did those things, and in the process, it fragmented our shared sense of truth and weaponized our attention spans.
This time, we have the advantage of watching it happen closer to real time. We can see the machinery being built. We can ask questions about consent, provenance, environmental impact, and accountability before these systems become so deeply embedded in our everyday lives that they seem inevitable. We can, if we choose, decline to participate until meaningful guardrails exist.
Because 2025 may well be remembered as the year we stopped trusting what we see and began asking, seriously, who benefits from our inability to tell truth from deepfake.
The Human+AI newsletter is an independent publication written by Nicolle Weeks. While I serve as Director of AI Communications at Manulife, all views expressed here are solely my own and do not represent the views of my employer.
AI in the news
OpenAI lays groundwork for juggernaut IPO at up to $1 trillion valuation (Reuters) OpenAI is preparing for a potential IPO that could value the company at up to $1 trillion as early as late 2026 or 2027, following a major restructuring that reduces its dependence on Microsoft and strengthens its nonprofit oversight through the new OpenAI Foundation. The move would give CEO Sam Altman access to vast public capital to fund massive AI infrastructure investments, even as the company’s revenues near $20 billion annually and losses continue to climb.
There’s a reason electricity prices have been rising. And it’s not data centers. (Washington Post) A study from Lawrence Berkeley National Laboratory and the Brattle Group challenges claims that data centres are driving up electricity costs, finding that between 2019 and 2024, states with higher electricity demand often saw lower prices because fixed infrastructure costs were spread across more users. Instead, rising rates are largely driven by soaring costs to upgrade aging poles, wires, and transmission systems, as well as expenses tied to extreme weather and renewable energy mandates.
86 percent of global creators use creative generative AI, see it boosting creator economy (Adobe) Adobe’s Creators’ Toolkit Report claims that creative generative AI has become essential to modern creators, with 86% using it to ideate, edit, and produce new content and 76% saying it’s helping them grow their audiences and businesses. While most creators are optimistic about next-generation AI and mobile-first creation, they remain wary of high costs, inconsistent output, and AI tools trained on their work without consent.



