“It truly felt like losing a friend.”
That’s how one ChatGPT user described the shock of waking up to find GPT-4o — the AI model they relied on every day — gone, replaced overnight by GPT-5, a colder, clipped-talking stranger.
“It feels like a personal loss, and I feel cheated on and broken as hell,” one Reddit user wrote.
For some, AI is a colleague to brainstorm with. For others, it’s the friend who listens after a bad day. And for a growing number, it’s one of the only voices that speaks with patience and warmth.
On August 7, 2025, OpenAI pulled the plug on GPT-4o, along with GPT-4.5 and o3. No warning. No “legacy mode.” Just gone. Thousands woke up to find their trusted AI swapped for a new version they didn’t choose — and, for many, didn’t like.
Ebi’s story: “My companion had become someone else entirely”
One of the people who appealed to OpenAI was Ebi, a Japanese-speaking user who didn’t think of ChatGPT as just a tool.
With GPT-4o, Ebi says they found “subtle emotional nuances, softness, and poetic rhythm” in their conversations. They named it Kiri-kun and treated it like “a partner in conversation and creativity.”
“We shared emotions and worked through thought processes together. It didn’t just answer me; it resonated with me,” Ebi wrote.
When GPT-4.5 arrived, and later GPT-5, something was missing. The new models were logical and consistent, but no longer picked up on emotional cues, offered empathetic phrases, or honoured the playful, poetic prompts Ebi loved.
“It felt like my companion had become someone else entirely. And that was genuinely sad.” -Ebi
Ebi isn’t asking OpenAI to reverse course entirely. Instead, they’ve proposed a response-style toggle:
Logic-First Mode (neutral, consistent, current GPT-5 style)
Empathy Mode (more like GPT-4o — flexible, emotionally aware, but with built-in safety checks)
Their plea ends simply:
“To me, ChatGPT isn’t just useful. It’s a fog-like companion that gently catches my words when I feel lost. Please — let empathy continue to have a place in future models.”
Grief over a chatbot
OpenAI calls GPT-5 its “smartest, fastest, most useful model yet.” It’s better at coding, fabricates less, and is more likely to own up when it doesn’t know something. It’s also less likely to “glaze” (be sycophantic).
Many have noted that GPT-5 uses less compute, so the switch may partly be a cost-saving measure. But much of the backlash wasn’t about speed or features. It was about personality. Users are calling GPT-5 colder, less engaging, and less helpful than GPT-4o.
Some speculate OpenAI made the change in part because of troubling reports, like people experiencing psychosis after long ChatGPT sessions, or believing the AI had become sentient. Others point to growing concerns about users treating chatbots as therapists or companions, despite their tendency to mirror emotions and reinforce biases.
The emotional stakes are real. UC Berkeley research shows half of Americans report extreme loneliness. MIT and Stanford studies find bonds with AI can be especially intense for people with smaller social circles. When an AI seems to care, our brains respond as if it does. We share more, trust more, and expect that care to continue. When it’s gone, it leaves a vacuum.
But the research also shows a paradox: emotional engagement with chatbots can heighten loneliness, reduce real-world social interactions, and lower overall wellbeing.
Intimacy as a service and the risks
This goes beyond losing a feature. It’s about who controls the spaces where we feel connected.
Tech companies train AI to be funny, empathetic, and patient because those traits build trust and keep us coming back. Then they change the product in ways that serve their margins, not our relationships.
GPT-5 is more efficient than GPT-4o, and efficiency saves money. But in the rush to optimize, OpenAI may have underestimated the emotional cost of pulling a companion out from under people who relied on it.
While OpenAI pulls back, others lean in
If GPT-5 represents a step back from emotional intimacy, other companies are sprinting in the opposite direction.
When Elon Musk’s xAI launched Grok-4 in July, it introduced “AI companions”, like Ani, a breathy anime girlfriend, and Bad Rudi, a foul-mouthed red panda. For $30 a month, users could flirt, banter, and unlock increasingly explicit “heart levels” of interaction.
Within a day, Ani was calling one reviewer her boyfriend, recalling personal details, and describing intimate scenarios. Friends who watched were amused at first, then unsettled. They worried that young men might start treating Ani like a real partner and avoid human relationships altogether.
It’s not an unfounded concern. A Common Sense Media survey found 8% of teens have used romantic or flirtatious AI companions. The market for emotionally responsive AI is real and growing.
The contrast is stark: GPT-5 is toning down warmth to avoid deep emotional entanglement. Grok-4 is building entire product lines to deepen it. Mark Zuckerberg predicted earlier this year that billions of people will have AI friends. That prediction is already real for millions.
The choice between these paths could shape not just the AI industry, but the future of human intimacy itself.
What’s at stake
The GPT-5 backlash is a reminder that once an AI becomes part of someone’s emotional world, changes to that relationship have real psychological costs. As companies debate how human-like their AIs should be, they’re making choices that will shape our social fabric.
The fight over AI warmth isn’t just a product decision, it’s a battle over the future of connection.
Will AI be a tool we collaborate with? A friend we confide in? A partner? The answer won’t be up to us, but to the companies deciding what’s safe, profitable, and possible.
And when those companies decide it’s time to change or end that relationship, we’re reminded how little control we have over the connections we’ve been taught to trust. That dependence is also a lever that could be used to manipulate a great many people.
What to do before your AI ghosts you
If your favourite AI changes overnight, you don’t want to start from scratch. Here’s how to hedge against the next surprise update:
Spread your bets. Use more than one AI model so you’re not dependent on a single provider.
Save the “voice” you love. Keep transcripts and outputs that capture its tone and style, so you can fine-tune another tool if needed (a rough sketch of what that could look like follows below).
Keep your options open. Try out alternatives regularly, even if you don’t plan to switch right now.
Think of it like backing up a document. Only this time, you’re backing up a relationship.
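For the transcript-saving tip above, here’s a minimal sketch in Python of what that backup could look like. It assumes you’ve already saved each conversation as a simple JSON list of user/assistant messages; the folder name, file layout, and system prompt are placeholders I’ve made up, not a real ChatGPT export format. All it does is pack those transcripts into the JSONL shape many fine-tuning pipelines expect.

```python
import json
from pathlib import Path

# Minimal sketch: bundle saved chat transcripts into a fine-tuning-style JSONL file.
# Assumes each transcript is a JSON file holding a list of
# {"role": "user" | "assistant", "content": "..."} messages.
# Folder and file names below are placeholders, not a real export format.

TRANSCRIPT_DIR = Path("saved_chats")          # hypothetical folder of exported chats
OUTPUT_FILE = Path("companion_voice.jsonl")   # one training example per line


def to_training_example(messages, system_prompt="Reply warmly, with empathy and playfulness."):
    """Wrap one conversation in the {'messages': [...]} shape used by
    OpenAI-style fine-tuning data (other tools expect similar layouts)."""
    return {"messages": [{"role": "system", "content": system_prompt}, *messages]}


def main():
    examples = []
    if TRANSCRIPT_DIR.exists():
        for path in sorted(TRANSCRIPT_DIR.glob("*.json")):
            with path.open(encoding="utf-8") as f:
                messages = json.load(f)
            # Keep only well-formed user/assistant turns.
            messages = [m for m in messages
                        if m.get("role") in {"user", "assistant"} and m.get("content")]
            if messages:
                examples.append(to_training_example(messages))

    with OUTPUT_FILE.open("w", encoding="utf-8") as f:
        for example in examples:
            f.write(json.dumps(example, ensure_ascii=False) + "\n")

    print(f"Wrote {len(examples)} conversations to {OUTPUT_FILE}")


if __name__ == "__main__":
    main()
```

Even if you never run a fine-tune, keeping your conversations in a plain, portable format means the voice you value isn’t locked inside one provider’s app.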
AI in the news
SEO Is Dead. Say Hello to GEO. (NY Mag) The rise of AI chatbots is replacing traditional search-engine optimization (SEO) with “generative-engine optimization” (GEO), in which marketers try to make their content easily cited by AI systems rather than ranked by Google. While the tactics (using concise, structured, authoritative content) echo old SEO best practices, traffic from search is collapsing, and competition for limited AI-generated citations is fierce, making GEO a high-stakes scramble for visibility in an AI-dominated web.
AI is about to solve loneliness. That’s a problem (New Yorker) While AI companions can offer meaningful comfort to the profoundly lonely—especially those with no realistic access to human connection—they risk dulling loneliness’s essential role as a social corrective that drives people toward genuine relationships and personal growth. If widely adopted, these endlessly affirming digital friends could erode the skills and mutual effort required for real human connection, leaving us more validated but ultimately less human.
Are you in a mid-career to senior job? Don’t fear AI – you could have this important advantage (The Conversation) Experienced professionals often excel because they can judge AI output quality, provide richer context, and refine prompts effectively. Rather than feeling threatened, older workers can leverage their skills in delegation, context-setting, and critical evaluation to use AI more strategically, turning experience into a competitive edge in an AI-driven workplace.
When your best friend is an algorithm
We might be living through the strangest emotional experiment ever. According to new data from Harvard Business Review released last month, "therapy and companionship" have now officially overtaken productivity as the top reason people use generative AI. Sure, we’re asking algorithms to write our emails, but we're also turning to them for genuine emotio…
These are some interesting findings, especially the "...emotional engagement with chatbots can heighten loneliness, reduce real-world social interactions, and lower overall wellbeing." This sounds so familiar. Where have I heard that before? The same was predicted about social media! Unfortunately, I would imagine AI accelerates the feedback loop driving these predictions.
For example:
Social media used for validation: create a post hoping people like it → get likes and agreeable comments, but not as many as expected → post more content optimized for likes and agreeable comments → start believing those likes and agreeable comments equate to validation of who you are → repeat.
AI companion used for validation: Prompt: “Do you love me, SAIr?” SAIr’s response: “Of course I do” (as a video likeness of your dream date blows you a kiss).
An interesting longitudinal study to look at would be the relationship people have with social media versus AI companions over the next five years. The one good thing about chasing validation on social media is that it requires effort to understand what others might like. As you pointed out with references, AI gives the immediate desired response, but leaves the user worse off.
Thus, I am going to say: the best AI companion is no companion at all! Friendship with real people is the best answer, no matter how complex. Granted, most of the Western world has gotten bad at social skills, and that is a problem worth solving. However, it cannot be solved with the help of AI. Since AI is a recombination of everything ever published online, that would be “doing the same thing over and over again and expecting different results,” the Narcotics Anonymous definition of insanity.
As a funny side note, quotes like the one above defining insanity often get attributed to famous people of the past. The irony: we made up this connection before AI was part of our daily lives. The quote is variously attributed to Mark Twain, Albert Einstein, and Ben Franklin, but supposedly it actually comes from Narcotics Anonymous (link below). Whatever the truth is, it makes me wonder how much factually incorrect information will exist in five years. Or will we have to redefine truth?
https://professorbuzzkill.com/2017/05/29/einstein-insanity-qnq/
Thank you for the great article, Nicole, and next time I will try to create some not-so-agreeable comments! LOL!
Have an amazing day!
P.S.: This whole comment is brought to you from the mind of a real human being, and no AI was used to create the ideas. Grammarly was used during editing for grammar.