A Citrix colleague and I were talking about AI’s impact on knowledge work. I was droning on (and on and on) about agents doing the work, apps dissolving, and the cognitive stack becoming the new workspace, when she stopped me and said: “Okay, but if AI is doing all of that, what’s actually left for humans?”

Way to get to the point!

My first instinct was to list all the things that won’t change: accountability, budgets, risk management … but these are all pretty obvious, covered by umpteen LinkedIn listicles, and probably not what she was asking about.

Any answer about what stays the same for humans is a point-in-time snapshot based on the capabilities of AI at that moment. To really understand what’s left for humans, we need to look at how the human parts of work will evolve as AI evolves. Here’s where that evolution is heading right now.

Governing the machine will be a bigger job than doing the work

In the foreseeable future, regulators are never going to accept, “The AI did it!” And boards aren’t going to accept, “The agent made that decision.” A human will still need to sign the audit, face the SEC, and go to jail if fraud occurs. That won’t change soon.

What will change is the scope of that human’s job. A compliance officer who today reviews work from 50 people will tomorrow review the work of 50 people and 500 agents. What they’re accountable for will grow, but the accountability structure will still be human.

The same will happen with identity. Today it’s one human = one credential = one session. Tomorrow it will be one human + twelve agents = 13 identities, scopes, and sets of permissions. The human will still be the root of trust, but the identity graph is going to branch in ways we’ve never had to manage before. (How will accountability flow from agent to agent to agent to human?)
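That branching identity graph can be sketched in a few lines. This is a toy model with hypothetical names and scopes, not any real identity system's API; the point is just that every agent identity must chain back to a human root of trust.

```python
from dataclasses import dataclass

@dataclass
class Identity:
    """One node in the identity graph: a human, or an agent acting on someone's behalf."""
    name: str
    scopes: set
    delegated_by: "Identity | None" = None  # None means a human root of trust

    def accountable_human(self) -> "Identity":
        """Walk the delegation chain back to the human at the root."""
        node = self
        while node.delegated_by is not None:
            node = node.delegated_by
        return node

# One human credential branching into agents, each with a narrower scope
alice = Identity("alice", {"finance:read", "finance:write", "reports:approve"})
reporter = Identity("reporting-agent", {"finance:read"}, delegated_by=alice)
drafter = Identity("draft-agent", {"finance:read"}, delegated_by=reporter)

# Agent-to-agent-to-human accountability: the chain always resolves to a person
print(drafter.accountable_human().name)  # alice
```

Even in this three-node sketch, one credential has become three identities with three scope sets to audit; the "accountability flow" question in the parenthetical is exactly the `accountable_human()` walk.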

Same for data sensitivity. In the future, PII will still be PII, and health data will still be health data, but AI will introduce new kinds of sensitivity problems. Individual data points that are not sensitive on their own can be mined and combined by AI in ways that make the aggregate sensitive.

We can use phone metadata as an example. Today, location and web history (widely collected by countless data brokers) are not considered sensitive. But when AI analyzes all that non-sensitive data, it can map location data to doctor’s offices, diagnostic labs, and web searches for medical conditions, and suddenly the combination of many “non-sensitive” data points creates a sensitive health record, exactly the kind of thing HIPAA was designed to protect.
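The mechanics of that example can be sketched in a few lines of code. Everything here is made up (the place names, the queries, the keyword lists), and real inference would be far subtler than keyword matching; the point is only that the sensitive fact lives in the join, not in either feed.

```python
# Two individually "non-sensitive" feeds. All data is hypothetical.
location_pings = [
    {"user": "u1", "place": "Oak Street Diagnostic Lab"},
    {"user": "u2", "place": "Downtown Coffee"},
]
web_searches = [
    {"user": "u1", "query": "early symptoms of diabetes"},
    {"user": "u2", "query": "best running shoes"},
]

MEDICAL_PLACES = ("clinic", "diagnostic", "hospital")
MEDICAL_TERMS = ("symptoms", "diagnosis", "treatment")

def sensitive_users(pings, searches):
    """Flag users whose *combined* signals imply a health condition."""
    visited = {p["user"] for p in pings
               if any(w in p["place"].lower() for w in MEDICAL_PLACES)}
    searched = {s["user"] for s in searches
                if any(w in s["query"].lower() for w in MEDICAL_TERMS)}
    # Neither set alone is a health record; the intersection arguably is.
    return visited & searched

print(sensitive_users(location_pings, web_searches))  # {'u1'}
```

Neither dataset would trip a data classifier on its own, which is why this category of sensitivity isn't in today's playbooks.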

AI will do this kind of synthesis constantly, across essentially every data source, creating lots of new sensitive data that human compliance officers will need to govern. This is not in today’s playbooks.

The same is true for risk. All the traditional risk categories will persist (operational, reputational, legal, vendor…), while new ones emerge alongside them: model hallucination, prompt injection, AI supply chain, agent autonomy, and so on. As with compliance, the risk management function isn’t going away; its scope is going to expand enormously.

The economics of human workers

“What’s left for humans?” sounds like a philosophical question today, but it’s about to become a budget line item. As AI costs drop and capabilities improve, and as AI digitizes workers’ cognitive processing, every knowledge work task will soon have a quantifiable price tag. In this world, companies are going to know what it costs for AI to do a task versus what it costs for a human to do it.

When the CFO of the future asks, “Why are we paying a human to do this?”, only two answers will fly: “because this task genuinely requires human judgment, accountability, or relationships,” or “because a human is cheaper.” Everything else migrates to AI.

Well, almost everything else. Some of today’s human tasks won’t migrate to AI; they’ll just disappear. Things like status reports, coordination meetings, and weekly summaries only exist because humans are bandwidth-constrained and need ways to stay in sync. AI isn’t. This “work about work” will start to disappear as AI does more of the actual work, since it was only overhead created by the limitations of humans.

Another category of tasks that won’t migrate to AI: those humans can do cheaper. Inference isn’t free, and for some tasks a human will simply cost less than burning through tokens. Much like global outsourcing chases the cheapest cost for a given result, there will be tasks that AI is perfectly capable of handling, but if a human worker can do them cheaper, the company will assign them to the human.
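That future CFO's question reduces to a cost comparison plus a judgment override. Here's a back-of-envelope router with made-up rates (the hourly cost, token price, and task fields are all assumptions for illustration):

```python
# Illustrative rates only; real numbers vary wildly by model and role.
HUMAN_HOURLY = 60.0              # fully loaded cost per human hour (assumption)
TOKEN_PRICE = 15.0 / 1_000_000   # dollars per token (assumption)

def assign(task: dict) -> str:
    """Route a task to 'ai' or 'human' on cost, unless it needs human judgment."""
    if task.get("requires_human_judgment"):
        return "human"  # accountability/relationships trump price
    ai_cost = task["est_tokens"] * TOKEN_PRICE
    human_cost = task["est_hours"] * HUMAN_HOURLY
    return "ai" if ai_cost < human_cost else "human"

print(assign({"est_tokens": 200_000, "est_hours": 2.0}))      # ai ($3 vs $120)
print(assign({"est_tokens": 50_000_000, "est_hours": 0.25}))  # human ($750 vs $15)
print(assign({"est_tokens": 1_000, "est_hours": 1.0,
              "requires_human_judgment": True}))              # human
```

The second case is the outsourcing dynamic from the paragraph above: a task AI can handle, routed to a human anyway because the human is cheaper.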

The shifting bottleneck

Everything I’ve described so far is work that will remain human for the near future. But that’s not to suggest it remains human for the long term. As I’ve written before about the bitter lesson of workplace AI, you never completely eliminate all bottlenecks; you just keep knocking one out and finding the next.

We’re already seeing this happen in software engineering. Now that AI has largely solved the coding bottleneck, the human work has moved to testing, verification, and orchestration. AI will soon solve those too, and the bottleneck will shift again, to something like writing specs or judging what’s worth building. Eventually AI will get better at that, and the bottleneck will shift once more.

This is the pattern now. The answer to “what’s left for humans?” is just a point-in-time snapshot, not an ultimate destination. Governance will be a human job until AI is trusted enough to govern itself. Knowledge curation will be a human job until AI can judge what’s worth knowing. Judgment and taste will be human until the copilot becomes the autopilot.

The question “what’s left for humans?” assumes there’s a stable answer somewhere. There isn’t. We’re optimizing the horse, but at some point the car is going to be invented and our whole framework will shift. AI enabling everything will eventually make what it’s enabling irrelevant, and we’ll need a completely different question.

But none of that changes what you need to do right now. Your governance isn’t ready for 500 agents per compliance officer. Your identity infrastructure wasn’t designed for multi-entity trust chains. And your cost models still assume per-seat pricing. Luckily you have years to fix this, not months. (But also, years, not decades, so get moving!)

Then you can get ready to do it again.


Read more & connect

Join the conversation and discuss this post on LinkedIn. You can find all my posts on my author page (or via RSS).