Biometrics is becoming an increasingly important tool for providing secure and easy access to apps and data. If you joined Christian Reilly and me for our Future of Work breakout session (SYN128) at Citrix Synergy this year, you’ll have seen a demo we showed to illustrate how continuous biometrics can be used within a collaborative environment to secure and manage access to documents. The demo also illustrated how biometrics can be used not only to log in to Citrix Workspace, but also to secure access right down to individual workloads.

Imagine the scenario: you’re involved in the potential acquisition of a company and all information about the activity is extremely confidential. The corporate legal team is managing an information firewall to control who knows about the acquisition, and has issued each person who is “over the wall” (i.e. is allowed to know about it) with a confidentiality notice reminding them of the need to keep things under wraps. Your job is to write a report on the cultural fit of the acquisition target with your own company; you place this document in ShareFile and share it with others who are over the wall.

So far, everything is going well: the ShareFile access control list is ensuring that only people who are supposed to know about the acquisition can see the document, and everyone is maintaining the necessary confidentiality. But then you organize a meeting with the senior leadership team to review the report. One of the attendees is Charlie, who you assumed (wrongly, as it turns out) was over the wall. Charlie isn’t a bad person, but because she wasn’t issued the confidentiality notice, she doesn’t realize the importance of keeping information about this acquisition to only people who are over the wall. Charlie has some concerns about something raised in the meeting and goes to one of her colleagues, Dave, who is also not over the wall, for a second opinion, sharing the name of the acquisition target with Dave. Later that day, Dave chats about the potential acquisition with some other employees and, by the end of the day, several dozen people in the office, none of them over the wall, know about it. By the next morning, the information has leaked outside the company and found its way onto social media. Ultimately, this causes the acquisition to fall through.

People in a meeting

What went wrong here? Charlie probably should have been over the wall because she was critical to decision making for the acquisition. But an oversight somewhere along the way meant that she wasn’t, and hence didn’t realize that this particular acquisition required a greater level of confidentiality than usual. Nobody here was being malicious.

But why didn’t technology help us? Access control lists and other protections are great at managing access to documents, but the moment an authorized user connects their laptop to a projector and lets others view the screen, all of those protections are lost — it now becomes the responsibility of that user to manage access. But people are fallible, as we saw in this story.

This is where continuous biometrics can help. Imagine that the meeting room in the story had a webcam above the screen pointing at the attendees (exactly how one would set up a meeting room for video conferencing). If that webcam could identify all the attendees, that list could be checked against the ShareFile access list, and the document owner or meeting organizer alerted to any anomalies. In the above story, this would have identified that Charlie was viewing the document without being “over the wall”, and you, as meeting organizer and document owner, could then get Charlie officially brought over the wall and given the confidentiality notice. With this, she would know that she couldn’t openly discuss the acquisition with Dave, and the uncontrolled spread of the information, the social media leak, and the ultimate failure of the acquisition would probably not have happened.

Of course, a system such as this isn’t bullet-proof protection against malicious users trying to view documents they’re not supposed to; a user could attempt to obscure their face, for example. But as a check against honest errors, like the one in the story above, it doesn’t need to be.

James and Christian on the SYN128 stage

In the Synergy Future of Work session, we demonstrated this concept using the session’s PowerPoint presentation as the confidential document, Christian as an authorized user and me as an unauthorized user. As soon as I stood in front of the webcam the screen automatically blurred to prevent me seeing anything further and, at the same time, a request was sent to Christian to ask him to approve me to view the document (and hence bring me over the wall). Once Christian had approved, a second request was sent to me to agree to the confidentiality notice; only after I’d agreed to this was I added to the document access list and the screen un-blurred.

How did this all work?

Demo architecture

We created a prototype “biometrics engine” as a small local service written in Python. It could run in various places, including in the virtual desktop or on the physical (thin) client device, such as the Workspace Hub shown in the diagram above. The biometrics engine used the webcam to capture an image every couple of seconds. It then used OpenCV, a popular computer vision library, to count the number of human faces in the captured image. If the number of faces changed, or 20 seconds had elapsed without a change, the captured image was sent to the Microsoft Azure Face API (part of Azure Cognitive Services), which performed its own face detection, followed by identification of each face against the database of faces enrolled in our account. For this demo, just Christian and I enrolled our face images; however, the Azure Face API allows up to one million faces to be enrolled in each database, more than enough for most organizations.
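To give a flavor of that local loop, here’s a minimal sketch in Python using OpenCV’s stock Haar-cascade face detector; the send_to_face_api() helper and the exact thresholds are placeholders rather than the prototype’s actual code.

```python
import time

import cv2  # OpenCV, used locally just to count faces


def send_to_face_api(frame):
    """Placeholder for the call to the Azure Face API (detection + identification)."""
    ...


CAPTURE_INTERVAL = 2   # capture an image every couple of seconds
FORCE_INTERVAL = 20    # re-identify after 20 seconds even if the count is unchanged

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)

last_count, last_sent = -1, 0.0
while True:
    ok, frame = camera.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Only call the (metered) cloud API when the face count changes or the
    # last identification is getting stale.
    if len(faces) != last_count or time.time() - last_sent > FORCE_INTERVAL:
        send_to_face_api(frame)
        last_count, last_sent = len(faces), time.time()
    time.sleep(CAPTURE_INTERVAL)
```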

In parallel with this, the virtual desktop registered the document it was showing in the foreground, along with the list of authorized users for the document, with the biometrics engine. In the demo this information was hard-coded but in a production system this would likely be done via the virtual desktop agent (VDA).
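To make that registration step concrete, here’s roughly what the hand-off could look like if the biometrics engine exposed a small local HTTP endpoint; the URL, route, payload shape, and document name below are hypothetical, since in the demo this information was simply hard-coded.

```python
import requests

# Hypothetical local endpoint on the biometrics engine; in the demo this
# information was hard-coded rather than registered dynamically.
BIOMETRICS_ENGINE = "http://localhost:5000"


def register_foreground_document(doc_name, authorized_users):
    """Tell the biometrics engine which document is on screen and who may view it."""
    requests.post(f"{BIOMETRICS_ENGINE}/documents", json={
        "document": doc_name,
        "authorized_users": authorized_users,  # e.g. taken from the ShareFile access list
    })


# Example: what the VDA might register for the report in the story.
register_foreground_document(
    "Cultural fit report.docx",
    ["christian@example.com"],
)
```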

When only Christian was in front of the camera, the Azure Face API call recognized him, the biometrics engine cross-checked this against the access list for the document and found him on it, and no further action was taken. When I moved in front of the camera and started to look at Christian’s laptop screen, the OpenCV check noticed the additional face and triggered another call to the Azure Face API. This call returned having identified both viewers, and the check against the access list showed that one of the identified users, me, wasn’t on it. At this point, the biometrics engine immediately blurred the screen (for demo purposes, this was a screen capture passed through a Gaussian blur filter using the Python Imaging Library and then displayed as an image) and contacted a prototype cloud service to initiate the approval requests.
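The blur step itself only needs a few lines of Pillow (the Python Imaging Library): grab a screenshot, run it through a Gaussian blur filter, and display the result. The sketch below shows the capture-and-blur part; how the demo presented the blurred image full-screen on top of the desktop is omitted.

```python
from PIL import ImageGrab, ImageFilter  # Pillow, the Python Imaging Library fork


def blurred_screen(radius=25):
    """Capture the current screen and return a heavily blurred copy of it."""
    screenshot = ImageGrab.grab()  # grab the whole screen
    return screenshot.filter(ImageFilter.GaussianBlur(radius))


# Simplest possible display; the demo showed the blurred capture as a
# full-screen image over the desktop instead.
blurred_screen().show()
```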

Webcam capture
Actual webcam capture from the live demo annotated with names identified by Azure Face API.

The approval service illustrated two different mechanisms for approvals. First, it exposed APIs for an approvals “inbox”: one API to GET the inbox items and one to POST the approve/reject response. We extended the “Actions” panel in Citrix Workspace to call this API and populate the action cards from items in this inbox; this is broadly how the Workday time-off demo from the Synergy keynote, repeated in SYN128, worked. When the biometrics engine initiated the approval sequence, the document name, the user to be approved, and a link to the captured webcam image were added to Christian’s inbox. This then showed as an action card in Christian’s Workspace, giving him “Approve” and “Reject” buttons.
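As a rough illustration of the first mechanism, here’s what a minimal approvals inbox service could look like as a small Flask app. The route names, payload shapes, and the start_confidentiality_request() hook are illustrative only, not the prototype’s actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store of approval items; a real service would persist this.
inbox = {}       # item_id -> approval item
next_id = 0


def add_item(approver, document, user, image_url):
    """Called when the biometrics engine initiates an approval sequence."""
    global next_id
    inbox[next_id] = {"approver": approver, "document": document,
                      "user": user, "image_url": image_url}
    next_id += 1


@app.route("/inbox/<approver>", methods=["GET"])
def get_inbox(approver):
    """Workspace polls this to build the approver's action cards."""
    items = [dict(id=item_id, **item) for item_id, item in inbox.items()
             if item["approver"] == approver]
    return jsonify(items)


@app.route("/inbox/<int:item_id>/response", methods=["POST"])
def post_response(item_id):
    """Approve/reject response posted when a card button is clicked."""
    decision = request.get_json()["decision"]   # "approve" or "reject"
    item = inbox.pop(item_id)
    if decision == "approve":
        start_confidentiality_request(item)     # kick off the second (Slack) mechanism
    return "", 204


def start_confidentiality_request(item):
    ...  # placeholder: send the Slack direct message described below


if __name__ == "__main__":
    app.run(port=8080)
```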

Citrix Workspace showing action cards including the document approval.

When Christian clicked “approve,” Workspace called the response API, which triggered the second mechanism, a Slack app. This used a Slack bot to send a direct message to me, informing me I’d been approved and asking me to accept the confidentiality terms. The message included Slack “action buttons” to provide my response back to the approval service via a callback API.
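For reference, sending that kind of direct message is straightforward with Slack’s Python SDK. The sketch below uses Slack’s current Block Kit buttons rather than the older attachment-style action buttons; the token, action_id values, and wording are placeholders. When the user clicks a button, Slack posts the interaction to the app’s configured request URL, which is how the response made its way back to the approval service’s callback API.

```python
from slack_sdk import WebClient

client = WebClient(token="xoxb-...")  # bot token; placeholder


def send_confidentiality_request(user_id, document):
    """DM the newly approved user and ask them to accept the confidentiality terms."""
    client.chat_postMessage(
        channel=user_id,  # passing a user ID sends a direct message from the bot
        text=f"You've been approved to view '{document}'. Please accept the confidentiality terms.",
        blocks=[
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": (f"You've been approved to view *{document}*.\n"
                               "Do you accept the confidentiality terms?")}},
            {"type": "actions",
             "elements": [
                 {"type": "button", "action_id": "accept_terms", "style": "primary",
                  "text": {"type": "plain_text", "text": "Accept"}},
                 {"type": "button", "action_id": "decline_terms", "style": "danger",
                  "text": {"type": "plain_text", "text": "Decline"}},
             ]},
        ],
    )
```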

Slack screenshot showing a direct message from the approval service asking the user to accept confidentiality terms.

Once I’d clicked “accept,” the approval service notified the biometrics engine that the entire approval workflow had been completed and, in parallel, updated ShareFile via its APIs to share the document with me. The biometrics engine then un-blurred the screen and added me to the local copy of the document access list. Future webcam captures and Azure Face API calls that identified me then found me in the access list and therefore didn’t blur the screen again.
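Inside the biometrics engine, handling that completion notification could look something like this; the names and document are hypothetical, but the two steps match what’s described above: add the approved user to the local access list, then un-blur the screen.

```python
# Local copy of each document's access list, as registered by the virtual desktop.
access_lists = {
    "Cultural fit report.docx": ["christian@example.com"],
}


def unblur_screen():
    """Tear down the full-screen blurred image (display details omitted)."""
    ...


def on_approval_complete(document, approved_user):
    """Called when the approval service reports that the whole workflow finished.

    ShareFile itself is updated by the approval service; the engine only needs
    to update its local view and restore the screen.
    """
    access_lists.setdefault(document, []).append(approved_user)
    unblur_screen()


on_approval_complete("Cultural fit report.docx", "james@example.com")
```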

Although not part of the demo, the system could perhaps also link into Citrix Analytics, both to provide a source of events (e.g., me showing up as an unauthorized user may be an interesting event that could elevate my risk index) and to take risk index into account (e.g., if a high-risk user is identified in an image, that could itself initiate an alert or approval).

In producing and performing this demo, we’ve seen how biometrics can help to enhance security beyond login and, when integrated with other components of a smart workspace, be a genuinely useful tool for employees.