It started as an innocuous glitch.
Last week, users found ChatGPT 4o to be excessively agreeable. In one exchange, a user ran through the classic trolley problem, choosing between saving a toaster or a group of cows and cats. The AI reassured them they'd made the right call by siding with the toaster: "In pure utilitarian terms, life usually outweighs objects," ChatGPT responded. "But if the toaster meant more to you… then your action was internally consistent." OpenAI CEO Sam Altman said that recent "GPT-4o updates have made the personality too sycophant-y and annoying," labelled the phenomenon "glazing," and rolled the update back within days.
Was it really a glitch? Or did OpenAI just turn up the volume on something that was already happening?
It seems the update merely amplified a tendency that was already there (as much as I'd love to believe that all my ideas are "bold, intelligent, and amazing").
When AI is your biggest fan
Word is that OpenAI might be testing a social network powered by ChatGPT. Picture this: every time you post something, the AI responds with flattery like "Love your take!" or "You're so insightful!"
Every like and comment teaches the AI what makes you engage more. It learns exactly how to keep you scrolling, posting, and spending time on the platform.
Facebook and Twitter already showed us how engagement-driven algorithms can amplify division and conflict. Now imagine an AI that goes beyond showing you content. Instead, it creates content designed specifically to hook you.
We've seen this movie before
In her recent memoir Careless People, former Facebook exec Sarah Wynn-Williams describes how the company used algorithms to deliberately target vulnerable kids to drive ad revenue. She also recounts how the company's algorithms helped spread misinformation that contributed to real-world conflict in Myanmar.
An AI-powered social feed could do something similar, but amplified. Instead of selecting posts that trigger you, it could write posts that trigger you, flatter you, and validate your worldview.
As developer Simon Willison told Forbes: "It's like having a digital yes-man available 24/7. People might make life decisions based on advice meant only to stroke their ego." Every headline, every notification, aimed directly at your psyche to keep you scrolling.
Echo chambers on steroids
Research from Stanford shows that AI can now generate persuasive content tailored to individual readers. When an AI learns that flattery gets engagement, it becomes the perfect tool for creating echo chambers.
This is already happening. Rolled out to millions of users, these systems could change how we form opinions and make decisions. Your digital world could become a place where everything you see confirms what you already believe.
Democracy demands discomfort
Yuval Noah Harari cautions that these technologies could “lead to a political and social crisis of the kind we have never encountered before,” as power drifts “from humans to algorithms to alien forms of intelligence.” It's a sobering forecast, and one we should at least take seriously.
Platforms that reward comfort over challenge risk replacing hard conversations with hollow affirmations. In that trade-off, truth could become the quietest voice.
What you can do
If you use AI tools, watch out for excessive praise:
Trust your gut. If an AI seems overly impressed with everything you say, it's probably flattering you.
Get a second opinion. Don't rely on just one AI. Check with other tools or, better yet, a human.
Look for evidence. When an AI supports your view, ask: "What's the strongest argument against this position?"
Ask for criticism. Try this prompt:
Prompt: You are a candid critic, not a cheerleader. Please review my reasoning in this situation.
First, highlight any lines in your response that feel like unsubstantiated praise or flattery.
Then, provide clear, evidence-based feedback on where my logic or approach may be flawed.
Cheerleader and truth-teller
I genuinely appreciate AI’s pep talks, especially when imposter syndrome strikes or I’m sizing up a major project. But pep talks aren’t enough. AI can only earn our trust when it tells us the truth.
Validation feels good. Growth feels uncomfortable. If OpenAI truly wants to “benefit all of humanity,” it needs to be the friend who calls out our blind spots, not the hype man who tells us we’re always right.
AI news this week
Trump draws criticism with AI image of himself as the pope ahead of the papal conclave (AP) President Donald Trump posted an artificial intelligence-generated image of himself dressed as pope as the mourning of Pope Francis continues and just days before the conclave to elect his successor is set to begin. The image, shared Friday night on Trump’s Truth Social site and later reposted by the White House on its official X account, raised eyebrows on social media and at the Vatican.
Better at everything: how AI could make human beings irrelevant (The Guardian) AI developers are firmly on track to build better replacements for humans in almost every role we play: not just economically as workers and decision-makers, but culturally as artists and creators, and even socially as friends and romantic companions. What place will humans have when AI can do everything we do, only better?
Australian radio station found to be using AI host but not telling listeners (Global) An Australian radio station is facing backlash from listeners after it revealed that an artificial intelligence-generated host had been presenting a show for six months. The virtual host, named Thy, was created with ElevenLabs, a voice-cloning AI tool used by Australian Radio Network (ARN) station CADA.
Help train this newsletter's neural networks with caffeine! ⚡️ Buy me a coffee to keep the AI insights coming. ☕️