Why AI Transformation Fails Before It Even Starts
There's a conversation happening in boardrooms across Europe that most consultants and analysts aren't capturing yet. I've had it at industry events, over coffee with C-level peers, and in the corridors of conferences where people speak more freely than they do on stage. And what I hear again and again is this: the challenge of AI transformation is no longer what we thought it was.
For years, the narrative was consistent — data governance, data quality, security infrastructure, centralization. Fix your data foundation, and AI will follow. That's still relevant, but it's no longer the primary obstacle. The organizations I speak with today, particularly those in ecommerce and digital-first sectors, are wrestling with something far more human: how do you put AI in the hands of every single employee — safely, responsibly, and at scale?
That question keeps more CEOs awake at night than any data architecture challenge ever did.
The Fleet Car Problem Nobody Saw Coming
Here's an analogy that crystallizes the challenge perfectly. Think about a corporate fleet program. For years, companies gave employees a fixed car — one make, one model, company-wide. Then fleet programs evolved: you get a budget, you choose your car. Employees loved the freedom. Productivity and satisfaction improved.
AI tools are following exactly the same trajectory — only faster and with far higher stakes.
Today, every professional has access to a growing portfolio of AI interfaces: ChatGPT, Claude, Perplexity, Gemini, and dozens of others emerging every quarter. Each has its strengths. Each attracts different users based on workflow, skill level, and personal preference. A copywriter might swear by one tool. A data analyst by another. An ecommerce manager by a third. And just like with fleet cars, employees increasingly expect — and demand — the freedom to choose.
The problem? Every new interface an organization activates means a new licensing agreement, a new security assessment, a new set of compliance questions, a new monitoring framework. What started as democratization quickly becomes an IT overhead of staggering proportions.
According to a 2024 McKinsey Global Survey, nine in ten employees already use generative AI for their work — yet only 13 percent of those employees consider their organization to be an early adopter. The gap between personal adoption and institutional readiness has never been wider. Employees are racing ahead. Organizations are scrambling to catch up.
The Accountability Gap at the Top
When I probe deeper in those C-level conversations, the frustration I encounter isn't really about technology. It's about accountability. Who is responsible when an employee pastes confidential client data into a public AI interface? Who owns the consequences when a prompt contains sensitive medical or financial information?
The honest answer: the CEO. Always the CEO.
And that realization is both clarifying and paralyzing. I've seen organizations scramble to embed AI usage policies into employee handbooks and labor agreements — only to discover that the legal complexity makes it nearly impossible to shift accountability to the individual employee. The instinct is understandable: just as a drunk driver bears personal responsibility for an accident with a company car, organizations want employees to own the consequences of reckless AI use. But here's the critical difference. When someone gets behind the wheel, there's a shared rulebook — a century of traffic law, licensing requirements, and established norms. With AI, we're writing the highway code while the cars are already doing 200 kilometers per hour.
McKinsey's 2025 workplace AI report put it well: "Soon after the first automobiles were on the road, there was the first car crash. But we didn't ban cars — we adopted speed limits, safety standards, licensing requirements, drunk-driving laws, and other rules of the road." The same logic applies here. The answer isn't to restrict AI. It's to build the infrastructure of responsible use — and that takes time that most organizations feel they don't have.
Two Companies, Two Philosophies
I've encountered two real-world approaches that represent opposite ends of the spectrum — and both are instructive.
The first is IKEA. They built a proprietary global platform accessible to all employees worldwide, offering access to multiple large language models in one environment. Employees can even build and deploy their own AI applications within this ecosystem. Because it sits between the organization and the underlying AI models, it provides strong governance, security, and monitoring by design. It's a control-first model. The tradeoff? Functionality is inevitably constrained. Some of the most powerful features of tools like Claude or Copilot don't survive the middleware layer.
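Stripped to its essence, that middleware pattern can be sketched in a few lines. This is a hedged illustration, not IKEA's actual design: the model allow-list, the redaction rules, and every name below are assumptions made up for the sketch. It shows the core idea — a single governed chokepoint between employees and model providers — and why the tradeoff the control-first model accepts falls out naturally: anything the gateway does not explicitly pass through (tool use, file uploads, streaming) is simply lost.

```python
import re

# Illustrative sketch of a "control-first" AI gateway. All names, rules,
# and the allow-list are hypothetical, not any company's real system.

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
IBAN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

AUDIT_LOG = []  # in production: a persistent, monitored store


def redact(prompt: str) -> str:
    """Mask obvious personal identifiers before the prompt leaves the org."""
    prompt = EMAIL.sub("[REDACTED_EMAIL]", prompt)
    prompt = IBAN.sub("[REDACTED_IBAN]", prompt)
    return prompt


def gateway(user: str, model: str, prompt: str, send) -> str:
    """Route a prompt to an approved model through one governed chokepoint.

    `send` is the actual provider call (stubbed out in tests). Only models
    on the vetted allow-list ever receive a prompt, and every forwarded
    prompt is logged in redacted form.
    """
    if model not in {"gpt-4o", "claude"}:  # hypothetical allow-list
        raise PermissionError(f"model '{model}' is not approved")
    safe_prompt = redact(prompt)
    AUDIT_LOG.append({"user": user, "model": model, "prompt": safe_prompt})
    return send(model, safe_prompt)
```

The design choice is visible in the signature: because every request funnels through `gateway`, governance and monitoring come for free — but so does the constraint that any provider feature the middleware doesn't model is unavailable to the employee.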
The second is Lighthouse, a hotel management platform. Their Chief Security Officer described a fundamentally different bet: open everything up, accept the risks, compensate through policy, and above all — invest relentlessly in training and cultural change. Their belief is that genuine behavioral change will follow genuine freedom. The speed of AI adoption across their organization has been remarkable. The risk exposure is real, but it's managed through awareness rather than architecture.
Neither approach is universally right. The right path depends on the organization's size, regulatory environment, risk appetite, and — crucially — the maturity of its culture. What I'd recommend for most mid-sized European businesses is a deliberate hybrid: full democratization of AI tool access, paired with a clear and human-readable policy framework, a pre-built crisis response plan that every employee knows, secure technical infrastructure for experimentation, and relentless internal communication that showcases wins and builds momentum.
The Klarna Warning Every Executive Should Internalize
If you need a cautionary tale about what happens when organizations get this wrong, look no further than Klarna.
Between 2022 and 2024, Klarna cut approximately 700 jobs and replaced them with AI-powered solutions. CEO Sebastian Siemiatkowski publicly admitted that the AI-driven transition negatively affected service and product quality. Following increased customer complaints and operational issues, Klarna began rehiring human staff.
Siemiatkowski's own admission said it plainly: "We focused too much on efficiency and cost. The result was lower quality, and that's not sustainable."
Klarna's story has since become something of a parable in AI circles — what some observers now call "The Klarna Effect": the arc from triumphant announcement of AI-driven headcount reduction to quiet, expensive rehiring. The lesson isn't that AI can't transform operations. It absolutely can. The lesson is that organizations that frame AI primarily as a cost-cutting instrument will pay a price that doesn't show up in the initial business case — in brand damage, customer dissatisfaction, and the eye-watering cost of unwinding a failed strategy.
Interestingly, Klarna also offers a more nuanced secondary lesson. Before the reversal, they had communicated a policy of natural attrition: no active layoffs, but no backfilling of roles when people left voluntarily. And they were actively rewarding employees for identifying AI-driven efficiency gains. That part of the model deserves more attention than it gets.
The Real Roles of HR, IT, and Legal — And Who Orchestrates It All
One of the things I find most striking in conversations about enterprise AI transformation is how often it falls into an organizational no-man's land. Everyone acknowledges it's important. Nobody owns it.
In my view, HR's role is not optional here. It needs to embed AI literacy — including risks and responsibilities — into every training program and every recruitment process. I'd go further: AI awareness should become a hiring criterion. Can this candidate demonstrate informed judgment about AI tools? Do they understand where the guardrails are? These questions belong in every job interview, not just technical roles.
McKinsey's research concluded that employees are broadly ready for AI — and that the biggest barrier to transformation success is leadership. That finding should land with some force on HR departments that are themselves still catching up with the tools their employees are already using every day.
IT's role is to build the sandboxes — secure environments where employees can experiment freely without putting the organization at risk. The goal is maximum creative freedom within minimum viable governance. Legal's role is to become a pragmatic enabler rather than a blocker: mastering the intersection of GDPR, data governance, and emerging AI regulation, and translating that complexity into guidance that empowers the business rather than freezing it.
And who orchestrates all of this? In the early phases of transformation, I believe that role belongs to the CEO. Not because CEOs necessarily have the deepest technical knowledge — they often don't — but because the decisions required cut across every department. Only someone with cross-functional authority can break the silos, make the tradeoffs, and set the pace.
Here's a pattern I've noticed, and it's almost a dirty secret: many of the most AI-forward CEOs I've spoken with are using AI tools daily — on their personal devices, on personal accounts, entirely outside the corporate infrastructure. They see the potential viscerally. The irony is that the very person who needs to lead the institutional transformation is operating outside the institution's own AI environment. That gap between personal conviction and organizational readiness is perhaps the most honest measure of where we actually stand.
The Fear Question Nobody Wants to Address Directly
In my own brand and creative team, I see it up close. Copywriters and designers are talented, experienced, and — quietly — worried. The question underneath every AI conversation in that team is the same: will I still have a job?
My answer is consistent and I believe it completely: the goal is never to do the same work with fewer people. The goal is to do more, better work with the same people. AI-augmented copywriters don't write fewer words — they write more strategically, test more variants, and spend their human energy on the work that actually requires human judgment. AI-augmented designers don't produce fewer assets — they produce faster, iterate more boldly, and focus their craft on what a model cannot replicate.
The employees who will struggle are not the ones who resist AI because they find it hard. They're the ones who refuse on principle. History has a precedent for that position: the professionals who refused to learn word processing when the PC arrived. The technology moved on. The holdouts did not.
McKinsey found that the most effective organizations reward employees not just for using AI, but for demonstrating new competencies, sharing insights with colleagues, and helping others navigate the learning curve — noting that social recognition often proves more powerful than financial incentives. That insight resonates deeply with what I'm building: a culture where AI progress is celebrated visibly, where early adopters become internal champions, and where the team's collective capability grows faster than any individual tool.
Think in Added Value Per Capita, Not in Cost Savings
If I could say one thing to every CEO reading this — something they may not want to hear but absolutely need to — it's this:
Stop measuring AI transformation in cost savings. Start measuring it in added value per capita.
The organizations that will win over the next two years are not the ones that used AI to cut the most headcount. They are the ones that used AI to make each person on their team dramatically more capable, more creative, and more impactful. McKinsey's research identifies only around 6% of organizations as genuine AI high performers — defined as those reporting significant business value and attributing more than 5% of EBIT to AI. That number is both inspiring and sobering. The gap between the leaders and the rest is not about the tools they use. It's about the culture they've built around those tools.
McKinsey estimates the annual value potential of generative AI at between $2.6 trillion and $4.4 trillion across 63 use cases — but that value will not flow to organizations that treat AI as an accounting exercise. It will flow to the ones brave enough to put it in the hands of their people, build the culture to support it, and measure success by what their teams can now do that they simply couldn't before.
The car is already moving. The question isn't whether to get in. It's whether you'll be the one behind the wheel — or watching it drive past.

