It’s looking like summer 2025 is going to be the season of the “AI-First” CEO memo. You’ve probably seen them discussed online or talked about them with colleagues. If not, here are highlights from recent memos that the CEOs of several companies sent to all their employees:

“Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI.” (Tobi Lütke, Shopify)

“Duolingo is going to be AI-first.” (Luis von Ahn, Duolingo)

“We are focused on building an AI-first company.” (Aaron Levie, Box)

(There are more, but these are the ones that kicked off the wave.)

While these memos are very clear in their intent (“from now on, we’re an AI-first company”), they aren’t roadmaps. They light the fire and then walk away, leaving every layer of the company to figure out how to interpret, translate, and operationalize the mandate.

Each level of the company will see the memo differently and respond based on its context, incentives, and fears. Executives hear “urgency”, directors hear “resource constraints”, and individual contributors hear “I’m going to be replaced by AI”. But no one hears specific instructions about what to do. This ambiguity forces each layer to react reflexively: some will innovate, some will stall, and some will simply kick the can downstream and let the level below figure it out.

Reframing the conversation

I’ve touched a bit on “AI strategy” over the past few months (here and here). But in recent conversations with leaders, I’m realizing that many people (myself included) have been treating “AI strategy” like a checkbox. It deserves a more intentional examination, which becomes clear as people try to unpack these memos. Asking, “What’s our AI strategy?” is like asking, “What’s our electricity strategy?” It’s the wrong question. The better one is:

“How does our strategy for X change in a world where anyone can use AI tools to move faster and do more?”

In other words, many things we currently see as valuable (because they are slow, expensive, or require human effort) might no longer hold the same value in a world where AI makes them fast, cheap, and easy.

Unfortunately, most people get too fixated on the “AI-ness” of the AI-first memo. Execs scramble to align top-line objectives. Department heads start rewriting OKRs and asking who the “AI person” is on their team. Procurement starts getting requests for laptops with GPUs. Someone in IT nervously googles whether ChatGPT violates their compliance policy.

It’s a waterfall of improvisation.

It’s not about becoming AI-first. It’s about recalibrating around what provides value.

The real challenge isn’t integrating AI. It’s rethinking what you value in a world where AI is accessible to everyone. (And I don’t just mean your employees; I mean everyone, including your customers, vendors, partners, board, etc.)

If a task that once took three experts five days can now be done by a junior employee with ChatGPT in an afternoon, that is way more than simply “improving productivity.” It changes what you fund, what you reward, and what you call “strategic.” It changes what customers expect and what competitors will soon deliver. It changes what it means to be good at your job, what kind of work feels meaningful, and how we fundamentally evaluate performance & productivity.

If, throughout all this, your strategy still prioritizes pre-AI value signals (time spent, complexity, human exclusivity), then you’re optimizing for a world that no longer exists.

This is why these memos land like grenades. They don’t come with plans; they come with existential questions:

  • What does excellence look like when anyone can access world-class tools?
  • What roles or workflows were only valuable because we assumed slowness or friction was unavoidable?
  • What does it mean to be a top performer when every individual can operate like a team?

Ultimately, this all boils down to a single question: What if we’re overpaying for old value signals while underinvesting in new ones?

This is what I mean by saying that AI is recalibrating value. It’s not about adopting new tools. It’s about reassessing what’s worth doing, what’s worth rewarding, and what’s worth rethinking—in every department and at every level of your organization.

How do you enable this while maintaining control?

AI is turning individual workers into mini organizations. A single person with the right tools can write, analyze, build, automate, test, summarize, translate, and orchestrate entire workflows in ways that were unimaginable just a year or two ago. In this world, work is outgrowing the structures that used to contain it. It’s no longer tied to a device, a role, or even a person. It’s something that emerges dynamically wherever apps, data, identity, and context converge.

At Citrix, we’ve spent decades helping organizations adapt to how work actually happens, across devices, apps, networks, and people. That hasn’t changed. What has changed is the nature of the worker.

We’ve learned that you can’t manage this new world by locking things down or waiting for top-down clarity. You need infrastructure that keeps pace with how fast work is changing and how differently work happens now. It needs to flex with the worker (human or AI) while maintaining policy, security, and observability. Whether it’s a human at a laptop, a VDI session in the cloud, or AI operating a browser, the core challenges are the same:

  • How do you manage this new kind of human-AI hybrid superworker?
  • How do you support their freedom without compromising security?
  • How do you provide access to what they need while maintaining control?
  • How do you secure the work itself, without getting in its way?

This is what Citrix does. Not as a slogan, but as an architecture. We secure the work, across all devices, apps, identities, and worker types.

So what happens when your CEO sends the memo?

You probably won’t get a plan. You’ll get a sentence or two, and everyone will look at each other and ask “Now what?”

That’s where the real work begins. The hardest part isn’t receiving the memo; it’s recognizing that your existing operating assumptions no longer apply. So ask yourself:

  • Are we still working from a value system that was built before AI changed the game? (If yes, stop and figure out how to recalibrate this before proceeding.)
  • Are our strategies designed for how work used to happen or how it’s actually happening now?
  • Are we optimizing for alignment and consistency or for agility and experimentation?

Whether this memo comes this summer or next year, you can be sure it’s coming. The real question is whether you’re ready to do something meaningful with it.

End note: A better way to write the memo

Finally, if you’re a leader who will be writing a memo like this (which can be a great thing), I want to share a thought.

I’ve mentioned in the past that I love The Artificial Intelligence Show podcast, which is my top “must listen” recommendation. One of the hosts, Paul Roetzer (worth following on LinkedIn), recently shared a perspective that stuck with me. I’m paraphrasing:

When I hear “AI-first”, that tells me employees are second. That’s why we encourage companies to think in terms of “AI-forward”, “AI-native”, or “AI-emergent”—because those terms imply you’re putting people first.

That distinction matters. If you’re going to send one of these memos, recognize that many of your employees are excited but also anxious, uncertain, and vulnerable. The language you choose isn’t just semantics; it reflects how seriously you take the human impact of these changes.

The future of work can be great. AI can unlock immense potential if we approach it thoughtfully. A lot will change. It’s going to be more important than ever to build with intention: to value human contribution, support rapid progress, and stay grounded in the kind of workplace we actually want to create together.

Let’s get it right.

Join the conversation and discuss this post on LinkedIn or Bluesky. You can find all my posts on my author page on the Citrix blog (or via RSS). In future posts, I’ll explore how this shift affects security, access, and even what it means to “log in” to work.

My upcoming talks:

  • EUCtech Denmark: Closing Keynote, The Future of Work in an AI-Native World — Billund, Denmark, May 22
  • Citrix Connect London: Opening Keynote: Citrix Vision & Strategy — London, June 16-17
  • MAICON 2025: AI at Work: The Employees’ Revolution! — Cleveland, Ohio, Oct 14-16