At Citrix, we’re diving deep into the Internet of Things (IoT). While we’re working with several customers on projects, we’re also starting to use these technologies ourselves.

We’ve been rolling out Smart Conference rooms in our offices across the globe. These rooms are enhanced with IoT technologies like iBeacons and smart motion sensors, which let the rooms detect beacons from physical things such as laptops and mobile devices. Those event streams are then integrated with traditional IT systems like Active Directory, Microsoft Exchange, and Skype for Business. Together, they make for a near-magical experience where the space around you just seems to know what you want to do and does it for you. We’re also working on special-purpose versions of this technology for industries such as healthcare.

Living in these spaces has started to permeate my thinking about the nature of user interfaces.  In fact, I’m now convinced we’re in the middle of a fundamental shift in how we think about Human-Computer Interaction (HCI).

We’ve now entered the era of the Fourth-Generation User Interface – and the shift is going to be dramatic over the next few years. Computer applications will no longer be contained behind glass screens, as they have been for the past few generations; they’re going to come out and interact with your physical environment in new and exciting ways.

What I’m calling the Fourth-Generation User Interface covers applications where the user interacts with multiple computing devices at once. These applications leverage technologies like ubiquitous connected devices, location-based services, speech recognition, computer vision, biometrics, and even augmented reality. This isn’t your dad’s computing environment.

So, why am I calling this a Fourth-gen experience?  Let’s look at the first three generations and then dive into the fourth.

First-Gen Computer User Interface

Early computers were designed to handle batch jobs. The computer was fed data (often painfully) through a mechanism like punch cards, and results came back via a mechanism like a printer. My dad once told me a story about one of his early programming experiences, when he literally dropped his program, a stack of punch cards, on the floor and it took hours to re-sort it.

[Image: IBM 7070/7074 mainframe]

Source: Wikipedia (Public Domain Image)

While this may seem like ancient history, this was the way computers worked for most of the 1950s, 60s, and even (in diminishing proportions) into the 70s and 80s.

Second-Gen User Interface

Character-based user interfaces gradually replaced punch cards. Users could interact with the computer in real time and get results instantly. These Command Line Interfaces (CLIs) were dramatically more efficient than the prior generation, but they required the user to understand a sometimes-arcane system of commands and syntax to be productive. Systems like MS-DOS and UNIX defined this era.

[Image: FreeDOS command-line interface]

Source: Wikipedia (Public Domain Image)

The CLI is still common among system administrators and software developers, but most end users never see these types of interfaces any longer.

Third-Gen User Interface

This is where you spend most of your time today. Originating at Xerox PARC, this approach became the Apple Mac and Microsoft Windows: graphical elements arranged around a two-dimensional desktop metaphor. Most iOS and Android apps fall into this category as well (although some are now bridging to what’s next).

[Image: Apple Macintosh desktop]

Source: Wikipedia (Public Domain Image)

[Image: Windows 10 desktop]

Source: Wikipedia (Public Domain Image)

The 1984 Mac vs. the latest and greatest Windows 10: there’s certainly a lot more graphics processing power these days, but the fundamental metaphors have changed little.

The Dawn of Fourth-Generation

This is the world we can live in now: office spaces that anticipate worker needs and automatically adjust and configure themselves. Smart hospital rooms that spare the doctor repetitive data entry by automatically displaying key information and logging vital signs. Workspaces that mix physical and virtual controls to enable the incredible.

Let’s look at a handful of examples we’re pursuing today.

Here’s an example of a Smart Conference room. We have several of these at Citrix and are continually enhancing the Citrix Octoblu flows that power it as we get ready to make it a commercial offering.  You can see more about this concept here and read more about what we’re doing in healthcare here.

[Image: Citrix Smart Conference room]
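To give a flavor of what powers a room like this, here’s a minimal sketch of the kind of logic involved when a beacon sighting turns into an action. The endpoints, IDs, and payload shapes below are hypothetical placeholders, not the actual Octoblu or Exchange APIs; the point is simply that a physical event (a laptop’s beacon appearing in a room) becomes data that can be correlated with calendar information and used to drive the room.

```typescript
// Hypothetical sketch: turning a beacon "device detected" event into a room
// action. The endpoints, IDs, and payload shapes are illustrative placeholders,
// not the actual Citrix, Octoblu, or Exchange APIs.

interface BeaconEvent {
  roomId: string;      // which conference room saw the beacon
  deviceOwner: string; // user associated with the laptop or phone beacon
  rssi: number;        // signal strength, used as a rough proximity check
}

async function onBeaconDetected(event: BeaconEvent): Promise<void> {
  if (event.rssi < -70) return; // too weak; probably a neighboring room

  // Ask a (hypothetical) calendar service that fronts Exchange for the
  // room's current booking.
  const res = await fetch(
    `https://calendar.example.com/rooms/${event.roomId}/current-meeting`
  );
  if (!res.ok) return;
  const meeting: { organizer: string; joinUrl: string } = await res.json();

  // If the person who just walked in is the organizer, tell the room to
  // start the meeting on its display.
  if (meeting.organizer === event.deviceOwner) {
    await fetch(`https://rooms.example.com/${event.roomId}/actions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ action: "join-meeting", joinUrl: meeting.joinUrl }),
    });
  }
}
```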

In another example, one of our developers created a system that uses Citrix Octoblu to bridge a consumer-grade Amazon Echo to an industrial-grade medical diagnosis engine from our partner Infermedica, creating a hands-free, artificially intelligent doctor:

[Video: Octoblu Alexa AI Doctor]
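The general pattern behind that demo looks something like the sketch below: the voice assistant hands a transcribed symptom report to a flow, which forwards it to a diagnosis API and turns the structured result back into speech. The endpoint URL, field names, and response shape here are illustrative assumptions rather than the actual Octoblu flow or the Infermedica API contract.

```typescript
// Hypothetical sketch of the "voice assistant to diagnosis engine" bridge.
// The endpoint, request shape, and field names below are illustrative
// stand-ins, not the actual Octoblu flow or Infermedica contract.

interface SymptomReport {
  sessionId: string; // conversation session from the voice assistant
  utterance: string; // e.g. "I have a headache and a fever"
}

interface DiagnosisResult {
  condition: string;
  probability: number;
  followUpQuestion?: string;
}

// Called whenever the Echo skill hands the flow a new utterance.
async function handleSymptomReport(report: SymptomReport): Promise<string> {
  // Forward the reported symptoms to a (hypothetical) diagnosis endpoint.
  const res = await fetch("https://diagnosis.example.com/assess", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ session: report.sessionId, text: report.utterance }),
  });
  const result: DiagnosisResult = await res.json();

  // Turn the structured result back into something the Echo can speak aloud.
  return result.followUpQuestion
    ? result.followUpQuestion
    : `The most likely condition is ${result.condition}, with ` +
      `${(result.probability * 100).toFixed(0)} percent confidence.`;
}
```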

And lastly, we’ve been working on a project linking Microsoft’s HoloLens to real-world devices. This enables us to use mixed-reality controls to manipulate the real world.

[Image: Developer wearing a HoloLens]

Here you see one of our developers wearing a HoloLens. Behind him are several pieces of paper marked with colored circles, along with some lamps.

However, when he turns around, he sees holographic “buttons” overlaid on top of those pages. When he pushes one of those virtual buttons (with his real finger!), it triggers an Octoblu flow that changes the color of his desk lamp.

[Image: Holographic buttons overlaid on the paper targets]

The lights are just a proxy for all kinds of augmented reality uses, as that type of gesture could be used to initiate any workflow. For example, imagine an employee moving through a manufacturing floor, viewing status “pinned” in augmented reality and having the option to start any number of workflows to alter behavior, order more supplies, or even halt a production line.
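As a rough illustration of that pattern, the sketch below shows a gesture handler posting a small message to a workflow trigger when a holographic button is tapped; the workflow on the other end then decides what to do, whether that’s changing a lamp’s color or halting a production line. The trigger URL, payload fields, and button IDs are placeholders, and this is not how the actual HoloLens demo is implemented.

```typescript
// Hypothetical sketch: a gesture handler that fires a workflow when a
// holographic button is tapped. The trigger URL, payload fields, and button
// IDs are placeholders, not the actual HoloLens or Octoblu implementation.

interface ButtonPress {
  buttonId: string;  // which holographic button was tapped
  pressedBy: string; // identity of the wearer
}

// Placeholder trigger URL; in the demo, the workflow behind it sets the lamp
// color, but it could just as easily order supplies or halt a line.
const FLOW_TRIGGER_URL = "https://triggers.example.com/flows/lamp-color";

async function onHologramTapped(press: ButtonPress): Promise<void> {
  // Map the tapped button to a desired action.
  const action = press.buttonId === "button-red" ? "set-red" : "set-blue";

  // Fire-and-forget POST to the workflow trigger.
  await fetch(FLOW_TRIGGER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ action, requestedBy: press.pressedBy }),
  });
}
```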

You can read more about this one here.

Wrapping up

For 40 years, HCI has been dominated by humans staring at a piece of glass in front of a keyboard.  These Fourth-Gen interfaces represent the biggest change to the basic interaction model since the punch card was made obsolete.

The ideas discussed here are just a few of the wildly varied ways that users are now interacting with computer systems. Together, these will usher in a new age of Human-Computer Interaction. The world is changing and it’s going to be pretty exciting to experience.
