Workday, a cloud software provider, aimed to integrate AI into its operations. However, internal research revealed challenges to employee adoption of artificial intelligence.
Ashley Goldsmith sounds genuinely excited when she talks about Workday’s AI success story. The company’s chief people officer rattles off impressive statistics: 79% employee adoption, 37% increase in AI usage, glowing testimonials from “Workmates” across the organization. It’s the kind of corporate transformation narrative that gets featured in Harvard Business Review case studies.
But strip away the corporate speak, and Workday’s “Everyday AI” initiative tells a more unsettling story, one that every knowledge worker should pay attention to. Because what Workday has really accomplished isn’t just getting employees to use AI. They’ve created a masterclass in how to get workers to enthusiastically participate in their own potential displacement.
The Mandate Hidden in Plain Sight
Workday made AI adoption mandatory. Every single one of their 19,300 employees now has a personal AI goal that will be assessed by their manager at year's end. That's not a suggestion or an opportunity; it's a performance requirement disguised as "empowerment."
Think about the psychology here. Workday laid off 1,750 workers in February 2025, roughly 8.5% of their staff, while simultaneously investing heavily in AI. CEO Carl Eschenbach explicitly pointed to "increasing demand" for AI as having "the potential to drive a new era of growth" for the company when announcing the cuts. Then they turned to the remaining employees and essentially said: "Now you need to prove you can work alongside the technology that helped us eliminate your colleagues' jobs."
The company frames this as giving employees "permission" to experiment. But when experimentation becomes mandatory, when year-end reviews depend on it, that's not permission; that's coercion.
The Peer Pressure Machine
Workday discovered something that should make every employee uncomfortable: peer-to-peer AI evangelism works better than top-down mandates. So they weaponized it.
Their "prompt-a-thons" and digital academies aren't just learning opportunities; they're sophisticated peer pressure mechanisms. When a colleague presents their AI use case at an all-hands meeting, shares how much faster they're working, and gets recognition for their "innovation," everyone else is left with no choice but to join in or get left behind.
Jim Stratton, Workday’s senior VP of technology, admits that developer productivity has improved by “20% or more” with AI assistance. That’s corporate speak for “we need fewer people to do the same work.” Yet somehow, employees are supposed to celebrate these efficiency gains that logically lead to workforce reduction.
The Responsible AI Fig Leaf
The company makes much of their "responsible AI principles," emphasizing that humans will remain decision-makers for critical tasks like compensation and promotions. This sounds reassuring until you consider what's not being said.
Sure, humans might make the final call on a raise or promotion. But if AI is handling the data analysis, performance tracking, candidate screening, and recommendation generation that feeds into those decisions, how much human judgment is really left?
The more likely outcome isn't being replaced by AI; it's being managed by it.
The company's emphasis on "responsible AI" serves another purpose: it makes employees feel complicit in their own monitoring. When you're helping to "responsibly implement" the AI tools that track your productivity, analyze your communications, and evaluate your performance, you become invested in the system that's watching you.
The Innovation Theater
Perhaps most cynically, Workday has turned AI adoption into a form of corporate theater. Employees don't just use AI for its own sake; they perform their AI usage for management approval. They brainstorm problems AI can solve, develop prompts to demonstrate their engagement, and present use cases to colleagues.
It transforms every employee into both a user and a marketer of the technology, making them invested in its success regardless of its impact on their job security.
The company’s internal research revealed that 43% of employees felt they lacked time to explore AI, and over a third were uncertain about how to use these tools. Instead of addressing these concerns directly, Workday essentially made AI engagement a job requirement.
The Real Numbers Game
According to a recent report by Fortune, Workday celebrates that 79% of their workforce now uses AI, up 37% from their baseline. But what these numbers leave out is more revealing than what they show.
How many of those users are genuinely benefiting from AI versus simply checking a box to satisfy their managers? How many are using AI tools because they improve their work versus because they’re afraid not to? And most importantly, how many are using AI to make themselves more valuable versus inadvertently making themselves redundant?
These are the questions slipping through the cracks.
When Stratton talks about getting "more done with the same number of people," he's describing a future where productivity gains don't translate to better jobs or higher wages for workers; they translate to higher profits for shareholders.
Here’s what Workday won’t say directly. Their AI program isn’t really about empowering employees. It’s about creating a workforce that’s psychologically prepared for automation.
By making AI adoption mandatory, by celebrating efficiency gains, by turning employees into AI evangelists, Workday is conditioning its workforce to accept and even champion their own diminishing importance. They’re creating a culture where questioning AI adoption marks you as a Luddite, where resistance to automation becomes a career-limiting move.
The genius of the "Everyday AI" program is that it makes employees feel a sense of agency in a process designed to reduce it. You get to choose how you use AI, which problems you solve with it, how you integrate it into your workflow. You just don't get to choose whether you use it at all.
The Broader Warning
Workday’s approach will likely become the template for AI adoption across corporate America. It’s too effective not to be copied. Other companies are watching Workday’s 79% adoption rate and taking notes.
Anthropic CEO Dario Amodei recently warned that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20% in the next one to five years. Bloomberg research shows that AI could replace more than 50% of the tasks performed by market research analysts (53%) and sales representatives (67%), compared to just 9% and 21% for their managerial counterparts.
The statistics paint a disturbing picture. Approximately 40% of white-collar job seekers in 2024 failed to secure a single interview, and hiring for positions paying over $96,000 annually has reached a decade-low level.
The year 2024 already saw a staggering 152,074 tech employees lose their jobs across 546 companies, setting a difficult precedent for 2025.
Meanwhile, companies are doubling down on AI-powered surveillance.
Data shows a 78% increase in surveillance technology adoption since 2019. Almost half (45%) of employees say workplace AI monitoring has a negative effect on their mental health. Yet workers are being asked to participate in systems that track their every keystroke, analyze their communications, and score their productivity.
But employees should understand what they're really being asked to do when they're "encouraged" to embrace AI at work. They are participating in an economic restructuring that could either make them more valuable or make their own roles more precarious.
And right now, initiatives like Workday's "Everyday AI" suggest we're heading toward the latter.