In our Innovation Hub at Citrix Synergy, we demonstrated a number of ideas and prototypes that showed parts of our vision for how Citrix will shape the future of work, empowering users of all kinds with unified, secure, and reliable access to all of the apps and content they need to be productive.

One of our demos, the Contextual Workspace, illustrated a number of aspects of our vision, including:

  1. Personalized experiences
  2. Automation and assistance
  3. Fourth-generation human-computer interfaces

We showed a glimpse into the future using a combination of existing Citrix and third-party technologies, brand new prototypes, and novel integrations. If you want to cut to the chase, you can find a video of the demo at the end of this blog post.

The Innovation Hub showcase at Citrix Synergy 2019

Personalized Experiences

For our demo we chose the setting of a modern workplace, set up in a flexible manner without assigned desks, cubicles, or offices, but rather with a set of spaces that any employee can use. This is becoming a common model, which is great for employers, who benefit from more efficient use of space and, therefore, lower costs. But it can also be good for employees, who aren’t limited to a single kind of space. They can move around during the day and use spaces that are suited to the task they are performing at the time.

However, the lack of a personal, assigned space does have other consequences. Permanent personalization of that space is, of course, no longer possible, and whenever a user moves to a new space, they must spend some time setting it up for themselves: adjusting the chair and desk, the lighting, the window blinds, and the thermostat to their preferences. They also lose small things, like somewhere to put a photo of the family, the dog, or a favorite vacation. This might seem minor, but it’s all part of creating a work environment where the user can be comfortable and happy, and a happy, comfortable employee is a productive employee.

The Innovation Hub at Citrix Synergy 2019, showing three desks in an open office configuration.

Of course Citrix Workspace can already virtualize apps and data such that a user can securely access them from anywhere. We set out to show how we could extend this concept from software assets to the physical world. Can we virtualize physical space, too?

Our demo builds on the growing capabilities of the smart, or connected, office. Many newly constructed or refurbished buildings use connected furniture that can adjust itself to a user’s configured preferences. Lighting and temperature control are progressing from the comparatively closed world of older CAN bus systems to IP-connected systems with APIs, in much the same way that smart lighting and temperature control have become commonplace in the home.

As Citrix Workspace can already look after the personalization of your virtual desktop, why not your physical desk, too?

To achieve this, we built a prototype that stores user preferences in the cloud, using a combination of statically configured preferences and those learned from how a user manually sets up their space. When a user enters a space (which may be a desk in an open office, a meeting room, a huddle space, etc.), they associate themselves with that space. For the Synergy demo, we did this by having the user scan a QR code displayed on the screen with the Citrix Workspace app running on their mobile device, but it could equally well be done using location technologies such as beacons or NFC.
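
To make this concrete, here is a minimal sketch, in Python, of what the cloud-stored preference record and the QR-based space association could look like. Every name here (SpacePreferences, associate_user_with_space, the token registry) is an illustrative assumption for this post, not an actual Citrix API.

```python
import time
from dataclasses import dataclass, field

# Hypothetical record of a user's space preferences, combining values the user
# configured explicitly with values learned from how they adjust spaces manually.
@dataclass
class SpacePreferences:
    desk_height_cm: float = 110.0        # preferred sit-stand desk height
    lighting_level_pct: int = 70         # preferred brightness
    lighting_color_temp_k: int = 4000    # preferred color temperature
    photo_album: str = "vacation-2019"   # album for the digital photo frame

# Hypothetical record of "this user is currently using this space".
@dataclass
class SpaceAssociation:
    user_id: str
    space_id: str
    associated_at: float = field(default_factory=time.time)

def associate_user_with_space(qr_token: str, user_id: str,
                              token_registry: dict) -> SpaceAssociation:
    """Resolve the QR token displayed on the hub's screen to a space and record
    the association. A beacon or NFC tag could yield the same space_id."""
    space_id = token_registry[qr_token]   # e.g. {"qr-7f3a": "desk-12"}
    return SpaceAssociation(user_id=user_id, space_id=space_id)
```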

That association sets in motion the personalization: the Workspace hub acts as an IoT hub, communicating with the various devices in the space, including the LOGICDATA LOGIClink height controller for the sit-stand desk, the lighting APIs, and even the digital photo frame. Additionally, the user’s Citrix Workspace session roams to the workspace hub and its screen, in a conceptually similar manner to our existing Citrix Casting, but in this case for the entire Workspace rather than a single virtual app or desktop.

A personalized space with the lighting and desk height adjusted to the user’s preferences, the digital photo frame showing the user’s vacation photos, and the user’s Workspace session having been roamed to the workspace hub.
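
Once the association exists, the hub’s job is essentially a fan-out of the stored preferences to whatever devices the space contains. The sketch below assumes hypothetical per-device adapters; the real desk controller, lighting systems, and photo frame each have their own protocols and APIs, which are not shown here.

```python
# Hypothetical per-device adapters; each wraps whatever protocol or API the
# real device speaks (desk height controller, lighting, photo frame, ...).
class SitStandDesk:
    def __init__(self, controller_client):
        self.controller = controller_client
    def apply(self, prefs) -> None:
        self.controller.move_to_height(prefs.desk_height_cm)

class Lighting:
    def __init__(self, lighting_api):
        self.api = lighting_api
    def apply(self, prefs) -> None:
        self.api.set_level(prefs.lighting_level_pct)
        self.api.set_color_temperature(prefs.lighting_color_temp_k)

class PhotoFrame:
    def __init__(self, frame_client):
        self.frame = frame_client
    def apply(self, prefs) -> None:
        self.frame.show_album(prefs.photo_album)

def personalize_space(devices: list, prefs) -> None:
    """Fan the user's stored preferences out to every device in the space."""
    for device in devices:
        device.apply(prefs)
```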

All of this means that within seconds of entering the space, the user has a working environment that’s familiar to them, and they’re ready to begin work immediately. Not only does this improve the working environment, it saves the time spent setting up the space manually, enabling the user to spend more time on productive activities. Many of the Synergy attendees who saw this demo recognized the challenges created by non-assigned working spaces in their own organizations and could see how a solution based on the concepts in the demo could help address them.

Automation and Assistance

Wouldn’t it be great if everybody had an assistant that would help them to quickly find what they need and where they need to be and would take care of repetitive and mundane tasks? At Citrix we’re working on the Citrix Virtual Assistant, which will be part of Citrix Workspace and will do just that. For us, a virtual assistant (VA) is more than just a voice or chatbot front end to existing interfaces; it’s an intelligent service that can use knowledge built up by observing users to understand their needs and proactively help them to be more productive.

In our Synergy demo, we illustrated this with a number of examples.

First, we showed a simple, transactional example of finding a document using a voice search. The demo showed a simple search for recent files of a given type. However, the natural-language nature of a VA makes it suitable for more complex queries, where a traditional user interface would require the user to input or choose values in several UI fields, adding to the time taken to perform the search. Imagine being able to type or say, “Find the presentation I edited on Monday afternoon last week when I was on the train,” and having the VA turn that into a suitable query, refined by location data to determine the specific period the user was on the train.
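
As a rough illustration, assuming the VA’s natural-language understanding has already produced a structured intent, refining that query with location data might look something like this. The intent shape, the location-history format, and all field names are made up for this example.

```python
# Output the NLU layer might produce for "Find the presentation I edited on
# Monday afternoon last week when I was on the train" (illustrative only).
parsed_intent = {
    "action": "find_document",
    "doc_type": "presentation",
    "edited_by": "me",
    "when": {"day": "monday", "part_of_day": "afternoon", "weeks_ago": 1},
    "location_hint": "train",
}

def refine_with_location(intent: dict, location_history: list) -> dict:
    """Narrow the search window to the interval the user was actually on the
    train, rather than the whole of Monday afternoon."""
    query = {"type": intent["doc_type"], "modified_by": intent["edited_by"]}
    for segment in location_history:  # e.g. {"mode": "train", "start": ..., "end": ...}
        if segment["mode"] == intent["location_hint"]:
            query["modified_after"] = segment["start"]
            query["modified_before"] = segment["end"]
            break
    return query
```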

Second, we wanted to illustrate how a VA can go beyond transactional query/response cases and perform delegated tasks. The demo scenario was a case where one user wanted a second user (we pretended it was our CEO, David Henshall, in the demo) to provide some input to a spreadsheet. This activity is, of course, possible today with Citrix Workspace. The first user can create a shared folder, then add the second user to it by finding them in the address book. They then copy or move the spreadsheet into that folder and have the system send a message to the second user notifying them that the file has been shared. After providing the required input, the second user would message back to say the task had been completed.

In our demo we showed the VA automating this entire process. The first user simply states, “Ask David to edit this document.” The VA uses context to infer what the user meant. Is there a “David” in the user’s workgroup? Is there a “David” that the user often interacts with? Is there a “David” logged in to a nearby workspace hub? In this case it is the latter, based on knowing where each desk is on the map. The VA, having asked for confirmation, automates the process of creating the shared folder and sharing the document with David. It then sends a notification to David, in the form of a microapp, which enables David to see this task among his other prioritized tasks and open the document in a single click. After he makes his changes, the VA takes over and communicates the changes back to the originating user.

Inferring the user’s intentions from an ambiguous statement and context.
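
To give a feel for what happens behind the scenes, here is a simplified sketch of both halves: scoring the candidate “Davids” against the contextual signals described above, and then automating the sharing steps that would otherwise be done by hand. The workspace methods (create_shared_folder, move_document, send_microapp_notification) are placeholders, not real Citrix Workspace API calls.

```python
from typing import Optional

def resolve_contact(name: str, context: dict) -> Optional[str]:
    """Rank candidates by contextual signals: shared workgroup, how often the
    user interacts with them, and presence at a nearby workspace hub."""
    scores = {}
    for person in context["directory"]:
        if not person["display_name"].lower().startswith(name.lower()):
            continue
        score = 0
        score += 2 if person["id"] in context["workgroup_members"] else 0
        score += context["interaction_counts"].get(person["id"], 0)
        score += 5 if person["id"] in context["nearby_hub_users"] else 0
        scores[person["id"]] = score
    return max(scores, key=scores.get) if scores else None

def delegate_edit(document_id: str, owner_id: str, recipient_id: str, workspace) -> None:
    """Automate the manual steps: create the shared folder, move the file, and
    notify the recipient with an actionable microapp card."""
    folder = workspace.create_shared_folder(owner=owner_id, members=[recipient_id])
    workspace.move_document(document_id, folder)
    workspace.send_microapp_notification(recipient_id, action="edit",
                                         document_id=document_id)
```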

This short demo showed how a VA can save a few precious minutes that would otherwise have been spent on manually navigating UIs. Imagine how a few minutes saved on things like this, which occur many times every day, could add up. It also allowed David to complete his task without having to switch over to email, click on a URL in an email, and so on — he could do his work right there in Citrix Workspace.

Fourth-Generation Human-Computer Interfaces

The way we’re interacting with computers is changing. Steve Wilson, Citrix VP for Cloud, describes this progression in his blog post IoT and the Dawn of the 4th Gen User Interface. In essence, I like to think about it as the progression from “computer-shaped” interfaces to “human-shaped” interfaces, allowing people to communicate with computing systems in new ways such as voice and gesture recognition. Of course, much like many “new” technologies, voice control has been around for a long time. However, advances in speech recognition, natural-language processing and understanding, and the availability of sufficient computing power have made voice interfaces far more useful and reliable than ever before.

Several Synergy attendees noted how the use of voice interfaces, including the use cases we were demonstrating, is important in their organizations to help users with various special needs access their workspace resources.

In this demo we wanted to use voice interfaces as part of the illustration of the assistant and automation concepts described above. We chose to use Amazon Echo (“Alexa”) in the demo, partly because of the growing acceptance of this device within organizations due to the Alexa for Business service.

An Amazon Echo being used with Citrix Workspace

But this creates a problem. Typically, in a domestic environment, a user will link their other accounts to their Amazon account to enable Alexa to access things such as their calendar. This permanent account pairing isn’t suitable for a corporate account in a shared environment such as our demo desks, where it would be easy for a user to walk away and leave the device with access to their corporate resources, available for anyone else to walk up and use. For this demo we developed a new model of temporary account linking, which, just like the physical space personalization described above, is set up automatically and seamlessly when the user enters the space.
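
Conceptually, the temporary link is just a short-lived mapping whose lifetime is tied to the user’s presence in the space. A toy version, with invented names, might look like this:

```python
import time

# Hypothetical in-memory store: which user (if any) the Alexa device in each
# space is currently allowed to act for.
temporary_links: dict = {}  # alexa_device_id -> {"user_id": ..., "linked_at": ...}

def link_on_association(alexa_device_id: str, user_id: str) -> None:
    """Created automatically when the user associates with the space."""
    temporary_links[alexa_device_id] = {"user_id": user_id, "linked_at": time.time()}

def unlink_on_departure(alexa_device_id: str) -> None:
    """Torn down when the user leaves the space or logs out, so the device no
    longer has access to that user's corporate resources."""
    temporary_links.pop(alexa_device_id, None)
```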

The demo takes advantage of the fact that Citrix Endpoint Management is able to manage Alexa for Business devices and Workspace hubs, and, therefore, we can build a relationship between those two devices in each space. When a user with a logged-in mobile device associates with a Workspace hub, they are also, in effect, associating with the Alexa device. Therefore, our Alexa skill knows exactly which user is present and can access that user’s Workspace only during the time the user is associated with the space. When the user leaves the space, or otherwise logs out, the chain is broken and the Alexa skill has no user account to connect to.

A conceptual overview of what a future intelligent virtual assistant platform may look like, including how Citrix Endpoint Management is used to link Alexa for Business devices and workspace hub devices.
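
From the skill’s point of view, answering “who am I talking to?” then becomes a pair of lookups: the device id in the incoming request maps (via the pairing established through device management) to a workspace hub, and the hub maps to whichever user is currently associated with that space. The handler below is a simplified sketch under those assumptions, not the actual skill code.

```python
def handle_intent(request: dict, device_to_hub: dict, hub_to_user: dict) -> dict:
    """Resolve the current user from the Alexa device that heard the request."""
    # Simplified view of the Alexa request envelope: the device id lives under
    # context.System.device.deviceId.
    device_id = request["context"]["System"]["device"]["deviceId"]
    hub_id = device_to_hub.get(device_id)  # pairing built from device management
    user_id = hub_to_user.get(hub_id)      # None once the user has left the space
    if user_id is None:
        return {"speech": "No one is signed in at this desk right now."}
    return {"speech": f"Okay, acting on behalf of {user_id}."}
```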

This is an example of our vision for Citrix Workspace as an experience that can be accessed via a range of devices, in this case a device that is transiently brought into the user’s Workspace in a secure and seamless manner.

There’s More Innovation Ahead!

In this demo we’ve seen some of Citrix’s thinking on how personalization, automation, virtual assistants, and next-generation computer interfaces can help users to be more productive. Take a look at the demo video below to see it all come together, and stay tuned to see more innovations!