40% of workers say they’ve been workslopped
What it means, why it’s happening, and how to stop it.
There’s a particular species of disappointment that arrives when you open a colleague’s deck or dive into their report: the formatting is immaculate, and the prose has a certain sheen. Yet, within minutes, you know: nobody really wrote this. What you’re reading is robotic ventriloquism, the uncanny simulacrum of professional work.
Last month, researchers at Stanford’s Social Media Lab and BetterUp Labs christened this phenomenon in the Harvard Business Review (HBR). They call it “workslop”: AI-generated content that wears the costume of substance while advancing precisely nothing.
The research found that each encounter with workslop devours nearly two hours of productivity, costing a 10,000-employee organization roughly $9 million annually. But the damage doesn’t stop at lost time: almost half of recipients report diminished views of the sender’s competence, creativity, and trustworthiness. Workslop doesn’t just create more work; it corrodes organizational trust.
How we got here
The story starts with the pandemic, when the boundaries between work and life collapsed entirely. Employees pushed productivity to untenable heights. Some were motivated by fear, others by solidarity, most by the simple fact that the kitchen table had become the office. Predictably, burnout followed. The Great Resignation forced companies into competition for talent, driving up salaries and loosening constraints. For a brief moment, workers held the upper hand.
Now the pendulum has swung the other way. Layoffs have returned, often rationalized as “AI-driven efficiency.” Over 10,000 US jobs were eliminated in 2025 for AI-related reasons.
In a June 2025 memo to employees, Amazon CEO Andy Jassy wrote: “We expect [we]… will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.” Last week, Amazon announced plans to cut up to 15% of its HR division.
Part of me wonders whether certain companies are seeing real AI efficiency gains or simply using AI as a scapegoat for layoffs.
So if AI isn’t actually delivering the efficiency that corporate leaders claim, the remaining workforce has to operate in overload mode to make up for the colleagues they lost. Microsoft’s 2025 Work Trend Index reveals that 68% of workers feel overwhelmed by the pace and volume of work, while 46% are experiencing burnout. Meanwhile, 80% of employees report lacking sufficient time or energy to do their job effectively, and nearly half (48%) say their work feels chaotic and fragmented.
Another shift: the junior workforce has thinned out. Many companies froze entry-level hiring, assuming AI could replace juniors. That leaves mid-level employees juggling their own workload plus the work juniors used to handle. With no one to delegate to, they turn to AI as the stand-in, and it cranks out passable drafts that are, in fact, workslop.
Instead of functioning as a tool for capability expansion, AI becomes a survival mechanism.
Two ways to use AI
AI manifests at work in two modes:
Thought partner. You leverage it to stress-test assumptions, surface blind spots, and refine arguments. The output elevates your work.
Shortcut. You paste whatever it generates directly into emails, reports, and presentations, and hit send. The output becomes someone else’s problem.
It’s tempting to brand that second pattern as pure laziness. But I would argue it’s the inevitable output of overextended employees who were never properly trained to use AI.
Shadow AI, shiny decks, and false progress
Many organizations have failed to articulate clear AI policies. They default to two extremes:
Outright bans. JPMorgan, Samsung, and others went this route. But prohibition doesn’t eliminate usage; it pushes it underground. Surveys indicate 60–78% of employees use “shadow AI”, or unauthorized AI tools. When ChatGPT experienced downtime earlier this year, “AI-free” companies discovered entire teams had been quietly dependent on it.
Blanket encouragement. Some leadership teams evangelized “AI everywhere” without operational specifics. Employees complied, flooding channels with content that projected authority but delivered minimal value. One Upwork survey found 77% of employees said AI tools increased their workload because they spent time cleaning up messy outputs or meeting inflated expectations.
Both approaches generate the same illusion: more activity, more volume, less substance.
The hidden tax
HBR frames it as the “workslop tax.” Each instance exacts multiple costs:
Time: Nearly two hours wasted per occurrence.
Money: Approximately $186 per employee per month.
Trust: Recipients downgrade their assessment of colleagues’ capabilities.
Collaboration: Teams get mired in clarification loops, rework, and silent fixes.
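Curious how those figures compound? Here’s a back-of-the-envelope sketch in Python, purely illustrative. It assumes the $186 monthly tax applies to the roughly 40% of employees who report receiving workslop (the headline stat above); that’s the reading under which the per-employee figure and the $9 million organizational figure line up:

```python
# Back-of-the-envelope estimate of the annual "workslop tax" for one organization.
# Inputs are the HBR figures cited above; the assumption that the $186/month tax
# applies only to affected employees (~40%) is mine, made so the numbers reconcile.

HEADCOUNT = 10_000        # employees in the hypothetical organization
PREVALENCE = 0.40         # share of workers who report receiving workslop
MONTHLY_TAX_USD = 186     # estimated monthly cost per affected employee

annual_cost = HEADCOUNT * PREVALENCE * MONTHLY_TAX_USD * 12
print(f"Estimated annual cost: ${annual_cost:,.0f}")
# -> Estimated annual cost: $8,928,000 (roughly the $9 million HBR reports)
```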
The threat isn’t the AI. It’s the accumulating sludge of low-fidelity output that clogs workflows and erodes professional relationships.
How to beat the slop
The answer isn’t prohibition or blanket adoption metrics. It’s infrastructure that incentivizes thoughtful deployment.
Train people. Companies like Deloitte, KPMG, and Microsoft run structured AI literacy programs. Workers who receive training report significantly higher effectiveness and confidence.
Set guardrails. PwC and the World Economic Forum recommend explicit guidance on appropriate versus inappropriate AI use cases. Stanford researchers advocate a “pilot mindset”: high agency, high optimism, with clear intentionality.
Model the behavior. Culture flows from leadership. When executives use AI to sharpen thinking rather than generate busywork, teams follow.
The choice
Every workplace technology wave delivers both productivity gains and new forms of waste. Email brought inbox paralysis. Slack brought notification overload. AI brings the capacity to work smarter, and the capacity to drown each other in credible-looking garbage.
Workslop isn’t inevitable. It’s the product of design choices, a failure of imagination and infrastructure. Organizations that invest in training, guardrails, and intentional culture will use AI to build more capable teams, not just faster ones. The rest will suffocate under mountains of decks that look good but accomplish nothing.
AI in the news
OpenAI will allow verified adults to use ChatGPT to generate erotic content (Guardian) OpenAI announced it will allow verified adult users to generate erotic content in ChatGPT starting in December, as part of a new “treat adult users like adults” policy supported by enhanced age verification systems. The update will also let users customize their chatbot’s personality and tone, following a period of stricter safety controls introduced after concerns about mental health risks and regulatory scrutiny from U.S. authorities.
California becomes first state to regulate AI companion chatbots (TechCrunch) California is the first US state to regulate AI companion chatbots with the signing of SB 243, a law requiring companies like OpenAI, Meta, and Character AI to implement safety protocols that protect children and vulnerable users. Taking effect January 1, 2026, the law mandates age verification, suicide-prevention measures, and clear labeling of AI interactions, following several tragic cases linking chatbots to youth suicides and growing calls for accountability in the AI industry.
Walmart partners with OpenAI so shoppers can buy things directly in ChatGPT (CBS News) Walmart has partnered with OpenAI to let shoppers buy products directly through ChatGPT using its new “Instant Checkout” feature, marking a major step into what it calls “agentic commerce.” The integration allows users to chat with Walmart’s AI assistant, Sparky, to plan meals, restock essentials, and make purchases seamlessly within ChatGPT, transforming online shopping from a search-based to a predictive, conversational experience.
If you read last week’s newsletter, you might appreciate this:
The bubble that knows it’s a bubble
In August, I asked whether AI’s boom was a bubble. Two months later, the Bank of England and the IMF have called it: we’re in one.