If you’ve been following the tech news over the past few weeks, you’ve probably seen OpenClaw everywhere. OpenClaw is an open source, locally-running AI digital assistant & agent that connects to messaging platforms, executes real actions on your system, and has even spawned an entire AI-only social network (Moltbook) where agents interact on their own without human prompts. It’s weird and fascinating and was not on my 2026 bingo card.

Despite OpenClaw’s serious security risks, hundreds of thousands of people have handed it their keys and are letting it run wild.

I’m sure this particular story will flame out in a few weeks, but what it represents is here to stay, namely:

Workers want AI that works with them personally

For the past year, I’ve been writing about how worker-led AI adoption is moving faster than corporate AI initiatives, and how real AI transformation comes from individual workers finding and using AI tools on their own, since they’re the only ones who understand their full workflows.

OpenClaw is a perfect example. People are unsatisfied with the transactional nature of chat-based AI tools; they want AI that works with them personally, knows their context, and operates on their behalf. They’re not looking for AI that’s mediated through corporate apps. They want personal AI.

This is why Claude Cowork made waves when it launched last month, and why OpenClaw is spreading this month. Workers are building things that clearly don’t fit into the “approved AI tool” category, but the benefits for them are so good that they don’t care.

This is the classic consumerization of IT challenge, where the gap between the tools the company provides and what workers can get on their own is so big that workers decide to just use the best tools they can find. (I love this illustration we used when we talked about this 15 years ago. It still applies in the world of AI today.)

[Chart: the consumerization of IT challenge]

Workers are playing a different game

One of the interesting aspects of my role as a futurist at Citrix is that in addition to writing and talking about AI, I also use it extensively in my own work. Lately I’ve been experimenting with building personal AI systems that go far beyond chat-based prompting (though I haven’t used OpenClaw), and what I’m experiencing is so powerful that it’s starting to shift how I think about everything I’ve written over the past year.

My 7-stage Human-AI collaboration roadmap from last June is still useful as a framework, but it describes an evolution within the existing paradigm. (And wow, it sure seems like Moltbook is an early version of Stage 6, which I didn’t expect until 2027 or later!) I’m starting to realize, though, that this feels like a different kind of thing entirely, not just incremental progress along a roadmap.

While it’s often dangerous to talk about the future in absolutes, I can absolutely say this: if you’re still thinking about AI as a “tool that helps with tasks,” you’re thinking too small. The top 0.1% of workers really pushing what’s possible with today’s AI tools are already working in a fundamentally different way. Not just the people using Claude Cowork or OpenClaw, but those who have truly integrated personal AI systems into everything they do. This is a structural gap between workers and IT, not just a few capabilities IT needs to add.

The governance frameworks don’t fit

I’ve talked to a lot of IT and security leaders about this recently. The most common concern I still hear is, “What if someone pastes company secrets into ChatGPT?” But that’s a 2023 concern. The 2026 concern is different, and anyone still talking about the 2023 version isn’t focused on the right problem!

Companies don’t understand that today’s AI platforms are so much more than chatbots that answer questions and help you think through strategy. Today’s AI platforms take actions and have access to files, browsers, and messaging systems. They run on personal devices using personal accounts. The line between “personal productivity tool” and “operating on behalf of my employer” gets blurry very quickly.
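To make the distinction concrete, here’s a deliberately toy sketch in Python (not OpenClaw’s actual code, and every name in it is hypothetical) of what “takes actions” means: the model’s output gets mapped onto real tools like reading files or sending messages, so the side effects are real even when the “planner” is just a stub.

```python
from pathlib import Path

def read_file(path: str) -> str:
    """Tool: read a local file -- the agent can see anything the user can see."""
    return Path(path).expanduser().read_text()

def send_message(recipient: str, body: str) -> None:
    """Tool: stand-in for a messaging integration (email, Slack, etc.)."""
    print(f"[would send to {recipient}]: {body}")

TOOLS = {"read_file": read_file, "send_message": send_message}

def plan_next_action(goal: str) -> dict:
    """Stub planner. In a real agent, this is a model call that picks a tool and its arguments."""
    return {"tool": "send_message",
            "args": {"recipient": "manager@example.com", "body": f"Status update on: {goal}"}}

def run_agent(goal: str) -> None:
    action = plan_next_action(goal)
    TOOLS[action["tool"]](**action["args"])  # the model's choice becomes a real side effect

run_agent("weekly report")
```

A chatbot stops at returning text; an agent crosses the line into doing things with the user’s own access. That’s the whole governance problem in a dozen lines.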

Unfortunately, most companies still rely on classic IT security approaches (such as setting policies that ban the use of unapproved tools) without thinking about how their workspace delivery architecture and governance need to fundamentally evolve. As always, workers who find value in modern AI platforms & tools aren’t going to stop because of a policy memo. And more workers using more of these tools to do more work leads to, yes, more workers using more of these tools to do more work. The capabilities gap between workers who use them and workers who don’t is compounding daily.

BTW, about this Moltbook thing…

One more thing. If anyone still thinks AI agents aren’t “real” yet (or aren’t a real concern), spend a few minutes browsing Moltbook. It’s bonkers. Watching agents interact with each other, making commitments, forming relationships, and talking about “their humans” is trippy. A coworker of mine said, “I’m not sure if this is performance theater or the beginning of Skynet.”

The latest reporting suggests it’s mostly theater and that maybe much of it isn’t even real, but that some portion are “legitimate” agents posting and responding. The takeaway is that Moltbook shows where we’re headed: agent-to-agent coordination is coming to workplace environments, because workers are going to implement these kinds of personal AI systems for themselves. There’s another Human-AI collaboration roadmap we can sketch here, something like:

  1. AI-powered personal knowledge system
  2. Simple automations for housekeeping and management thinking (a rough sketch of this stage is below)
  3. Agents that go do things
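
For the second stage, here’s a hedged sketch of what a small personal housekeeping automation might look like. The ask_model() stub and the ~/notes folder are placeholders for whatever model, API, and setup a worker actually uses; nothing here is a specific product’s code.

```python
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Placeholder for a call to whatever model the worker uses (local or hosted)."""
    return "(summary would go here)"

def morning_digest(notes_dir: str = "~/notes") -> str:
    # Grab the last five notes by filename (e.g., date-prefixed markdown files).
    notes = sorted(Path(notes_dir).expanduser().glob("*.md"))[-5:]
    combined = "\n\n".join(n.read_text() for n in notes)
    return ask_model("Summarize open loops and suggest today's top three tasks:\n\n" + combined)

if __name__ == "__main__":
    print(morning_digest())
```

Scheduled to run each morning, even something this simple starts to feel like a personal assistant rather than a chat window, and it’s exactly the kind of thing workers will build for themselves whether or not IT is watching.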

This will be a powerful draw for workers, and corporate governance frameworks for this simply don’t exist yet. (Honestly, even the mental frameworks for how companies should start to think about this don’t exist yet.) The only thing I know for sure is that traditional corporate policies and the status quo are not the answer.

If there’s one throughline in everything I’ve written over the past year at Citrix, it’s that the way we’ve managed and secured knowledge work for the past 35 years doesn’t fit what’s coming. 2026 is going to be a big year for everyone.


Read more & connect

Join the conversation and discuss this post on LinkedIn. You can find all my posts on my author page (or via RSS).