Citrix Blogs

The invisible 80%—what corporate-led AI transformations can’t see

My core thesis about AI in the workplace is that corporate-led AI initiatives fail to truly transform knowledge work, and that real transformation instead comes from individual workers finding and using AI tools on their own. One of the reasons for this is that outside observers can only see a small subset of a knowledge worker’s true work. (I’m calling that subset “20%” in this post, but that’s an arbitrary number I’m using to represent the concept.) An outsider can only see the results and outputs of the work: emails, chats, documents, deliverables, app usage, meetings attended (and transcripts), etc.

The other 80% (arguably the “real” knowledge work) lives in workers’ heads: their reasoning behind decisions, the pattern recognition built on years of experience, the thinking process that can’t be written down, etc.

This is why we have the mainstream narratives that corporate-led AI initiatives underwhelm and fail to show ROI, while worker-led adoption keeps crushing it. Since workers create all their work output based on their own full 100%, they can design AI workflows for themselves based on that same 100%. Meanwhile, IT can only design AI processes around the 20% of work they can see and measure.

A real example of what it takes to crack into the worker’s 80%

I came to this 20/80 realization via a LinkedIn post from Jack Weissenberger. He wrote about a guy named AJ who runs a home restoration company in Dallas. AJ’s been doing this for decades. His gross margins are 3x the industry average, not because of some specific technology, but because of 40 years of tacit knowledge about negotiating with insurance adjusters.

AJ knows what each insurer will compromise on. He knows how to cascade concessions and when to give up something small to hold onto something big. He knows which line items always come as packages and which can be separated. None of this is written down. He just prints out estimates, marks them up with highlighters, and makes decisions based on patterns he’s internalized over four decades.

Weissenberger’s company wanted to build an AI system to replicate AJ’s work. They had stacks of his contracts and results, and they knew how his outcomes differed from everyone else’s. But they kept failing until they flew to Dallas, sat with him for three hours, and learned how he thinks through his work.

They finally got an AI system that could match and exceed AJ, but only because of that three-hour in-person session. AJ’s knowledge was invisible to outsiders, and no amount of “waiting for better AI” was going to magically figure it out. AJ’s “80%” simply didn’t show up in the data the people building the AI could see.

Workers are their own anthropologists

Over the holidays, I vacationed with a couple of anthropologists. (That sounds like a joke setup, but they’re just friends who happen to be anthropologists.) I learned a bit about what they actually do and realized that’s essentially what Weissenberger was doing with AJ.

Anthropologists don’t just interview people, they watch them work. They look for the gap between what people say their process is and what they actually do. They pay attention to the small rituals that seem meaningless but turn out to be important. And they dig for tacit knowledge: the stuff people don’t even realize they know, because they’ve never had to explain it to anyone.

This is why worker-led AI has such an impact. Each worker is their own anthropologist. They know (even if they don’t know they know) the full picture of their own work. They know the shortcuts they’ve developed and where those shortcuts can be applied and should be avoided. They know the workarounds nobody documented. They know the patterns they’ve never articulated.

Trying to scale that across a real enterprise is impractical. Even in the AJ example, the AI they built after interviewing him for three hours wasn’t a total replacement of AJ; it just replaced a single negotiation-ruleset task (one whose outputs happened to be easily verifiable).

A typical knowledge worker doesn’t have just one “negotiation ruleset” to document. They have dozens of unseen, interrelated judgment calls across different contexts, stakeholders, and tools. And most of it they couldn’t articulate even if you gave them a week with a consultant, because they’ve never had to put it into words. It’s muscle memory and a gut feeling of “I just know.”

I’m sure some companies will think, “Great! We’ll just hire consultant anthropologists to sit with every worker to extract their tacit knowledge!” But this quickly turns into the huge consulting-process-driven, gunky, mucky, enterprise AI transformation that’s expensive, slow, and fails to live up to expectations.

The simpler path? Let workers figure it out themselves. They know what they need and where AI can help. When a worker wires Claude into their daily routine, they’re not just adopting the latest tech tool. They’re running countless mini-experiments on workflows that are invisible to everyone else. They’re incorporating their 80% directly, without having to articulate it first.

The uncomfortable truth for enterprises hoping AI transformation will happen top-down is that the 80% that matters most can’t be extracted, documented, or scaled through traditional IT processes or consultants. It can only be unlocked by the people who already have it. Your knowledge workers are their own anthropologists.

Worker-led AI isn’t “Shadow AI.” It’s the only path to real transformation.


Read more & connect

Join the conversation and discuss this post on LinkedIn. You can find all my posts on my author page (or via RSS).
