Most people have a fairly well-developed ability to assess trust when dealing with other people: they can sense whether a situation is safe, and whether to take what someone says at face value or to be suspicious of their intentions.
Let me give you an example, which perhaps just illustrates how some people (like me) aren’t actually that well attuned to the signs. A couple of months ago, my wife and I agreed to let a double-glazing sales rep visit; we’ve been thinking we need to replace our windows before long anyway, so when a salesman rang we thought why not get a quote? (Hey, not everyone’s sense of suspicion kicks in right away; I do try not to be cynical.)
So we agree a date and time with the sales guy on the phone, and no-one shows up. We’re a bit irritated, because we changed our plans to wait in. Next week, the phone sales guy rings again, and when confronted apologizes that the rep who was supposed to visit the first time didn’t come – he says the rep claims he visited and gave us a quote. It’s impressive how persuasive good phone sales guys can be – he says he will offer us a 60% discount because of it – and after some hesitation we agree to a new time.
This time a rep does show up, and right away apologizes for the other rep not showing first time – and says “he was fired”. (Hmmm.) Anyway, this one seems amiable enough and does the necessary inspection then gets down to the sales pitch. It is rather drawn out but finally he quotes a rather large number – then my wife points out that the phone sales guy promised us a 60% discount. The rep manages to look suitably furious and rings the office on his mobile, talks angrily for 10 minutes about how the phone sales guy has done it again and how he will have to honour the discount but it makes him so cross etc, then finishes the call and tells us the phone sales guy “will be demoted this time”.
Okay, so if you were in my position, what would you be thinking now? Is this an honest situation, where circumstances and events should be taken at face value?
We respond sceptically about the price (the one with the 60% discount), and are about to start serving supper. The rep then quickly rings the office again and comes back with “we’ve got a job near here in three weeks and could do it that day – I’ll give you an extra discount if you decide tonight”. As it happens, I’ll be travelling that day so we decline politely and show him out the door. Afterwards, we realize he seems to have forgotten to leave the written quote. (Surprising, that.) A quick web search reveals some interesting comments from previous customers about the sales tactics employed by this company, the commission structure for its reps and the quality of their products and work. Suffice to say, we aren’t regretting our decision to pass on the generous offer we were made that night.
My point is that when interacting with another person face to face (or on the phone), we usually have plenty of indicators we can look at to assess how much we should trust the other person, how risky the situation is, whether things are what they seem. “The other rep was fired”, “The phone sales guy will be demoted this time” – yes, I wonder whether he would answer today if we ring and ask for Frankie (“like the fish”), sounding as cheerful and matey as ever.
How does this relate to Citrix? Simple: trust is critical to what we do in supporting remote application delivery and how people perceive that, as customers and users.
Trust me, I’m a software developer. (I’ve got a compiler and a certificate!)
Trust is an issue in many ways and on many levels – for example, do you trust Citrix to write good products that work as they should and don’t pose a risk to your systems? If so, why do you trust Citrix to do this? Is it because of how many customers we have, who we have as customers, what customers and third parties say about our products, or because you tested our products in your own environment? Or maybe you know someone you respect who works at Citrix and that gives you confidence in what we produce?
Already you can see that often there are ways open to you to assess something of the true state of affairs, and so reduce the need to blindly trust us to have written solid products. But it isn’t so easy in every situation.
Right now, I’m wrestling with trust issues that relate to how Web Interface works and how users are or should be involved in making decisions affecting security, especially of their own system. Most of the key decisions come down to questions of trust.
The core of the problem is that people have very few indicators they can make sense of when visiting a web site on the Internet, to judge accurately whether that site is what they think it is, and whether they are safe visiting it. The current acute problem with phishing is evidence enough of that.
If that site is Web Interface, one of the first things it does is try to use an ActiveX control – that alone can trigger a warning from the browser these days. If the control is not found, Web Interface will normally provide a link to a program that must be downloaded and installed to use the site. If the user elects to do that, they are faced with a couple of rather cryptic security questions that talk obliquely about trust, but don’t really give the user anything useful to go on:
How is an employee of a business partner remotely accessing your system supposed to make an informed and accurate judgement about these questions? Nothing in either dialog tells the user that the program, if run, will actually try to install something on their computer (unless they know that .msi files are installation packages). The link just takes the user to the Citrix home page; they would have to really dig to find out what the Web Client actually does. The publisher link shows them the detailed digital certificate information – barely useful even if you know how to inspect the certificate fields and interpret them.
The business partner employee is implicitly faced with the question “do you trust Citrix to write good products” but without the benefit of the types of evidence someone purchasing the product would likely have.
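To make that detection-and-fallback sequence concrete, here is a minimal sketch in Python of the kind of decision logic involved. The function name and the capability labels are my own invention for illustration, not actual Web Interface code.

```python
def choose_launch_method(capabilities):
    """Pick how a Web Interface page might launch the ICA client.

    `capabilities` is a set of labels describing what client detection
    found in the browser, e.g. {"activex"} or the empty set.
    (Hypothetical labels, for illustration only.)
    """
    if "activex" in capabilities:
        # Silent launch via the control; may itself trigger a browser warning.
        return "activex-control"
    if "native-client" in capabilities:
        # An installed client was found some other way; use it directly.
        return "native-client"
    # Nothing usable detected: offer the download link, which leads
    # straight to the cryptic security dialogs described above.
    return "download-link"
```

The point is not the code, which is trivial, but that the final branch is exactly the one that dumps the trust decision on the user with the least context to make it.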
Even worse than the browser security dialogs are some that the ICA client itself throws up, usually when the user is in the middle of doing something else. Here is the classic example (this typically appears when browsing My Computer in the Open and Save dialogs of most applications):
I won’t go into a critique of this one now – I want to finish this post today!
Can we translate trust judgements about remote systems into judgements about things people instinctively understand?
That is the question I am trying to answer in the context of Web Interface, to improve the way users experience our products and maximize their actual level of security. I’m willing to forgo a theoretically stronger security mechanism that in practice would often be undermined or bypassed, in favour of an apparently weaker one that is so easy to use and understand that most people would receive its full benefit.
The idea is this: instead of letting security depend on individual users making judgements about technical risks they often don’t understand, let alone have the means to assess, we should design the technology so that users are only part of the chain of trust decisions when they are in a position to understand what’s happening and have more natural ways of assessing risks – more akin to the face-to-face indicators I had when judging whether the double-glazing salesman was being honest with me, or trying to lure or pressure me into accepting an offer that wasn’t really good value (for me).
Certainly that is possible in some cases. For example, employers could give employees who need access from home a CD or USB stick that will configure their home PC to trust the company’s Web Interface site without needing them to enter the correct URL or manually reconfigure their PC. The employee is making his primary trust decision on the basis of a person he knows (his employer or an IT person at work) giving him a CD for this particular purpose; he will probably assess intention and competence to some extent without even realizing it. He can also make a secondary trust decision about whether the CD or USB stick could have been tampered with or swapped for a trojan once he received it, because it is a physical object he took home with him and people are fairly good at knowing how to look after small objects. (Or at least better at it than knowing when they have visited a phishing site.)
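As a sketch of one thing such a CD or USB stick might carry: on Windows, trusting a specific site can be expressed as a small registry file that adds the site to Internet Explorer’s Trusted Sites zone. The generator below is illustrative only – the domain is a placeholder and a real deployment would need far more care – though the ZoneMap path it writes is the standard per-user location for zone assignments.

```python
def trusted_site_reg(domain):
    """Return the text of a .reg file that adds `domain` (HTTPS only)
    to Internet Explorer's Trusted Sites zone (zone 2) for the
    current user. Illustrative sketch, not a hardened installer."""
    key = (r"HKEY_CURRENT_USER\Software\Microsoft\Windows"
           r"\CurrentVersion\Internet Settings\ZoneMap\Domains"
           "\\" + domain)
    return ("Windows Registry Editor Version 5.00\r\n\r\n"
            "[" + key + "]\r\n"
            "\"https\"=dword:00000002\r\n")
```

Because the employee receives this as a file on a physical object handed over by someone they know, the primary trust decision rests on that relationship rather than on parsing a browser dialog.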
Of course the software installation process started by using that CD or USB stick needs to allow the individual sufficient control over what is happening, as well as ensuring they are properly informed (in terms they can actually understand) about what is happening to the security settings or posture of their system, why that is desirable or necessary, how the user can change their mind later etc. This is where Citrix needs to do its homework, to make sure we design our products to minimize the chance they could be used maliciously to endanger anyone’s systems. (That still doesn’t quite answer how any particular individual, who might be a third-party user of a Citrix system, can tell whether they should trust Citrix software to behave this way though.)
There are lots of even more complicated scenarios that get interesting, when you look at the relationships between application provider, application consumer, application data owner, device owner and application + device user. I’ll talk more about those actors and their relationships another time.
As I said earlier, trust is critical to how Citrix products work and are used, and there are many varied facets and levels of trust we could talk about. Let me know if I’ve oversimplified things, or missed something you think is vitally important.
PS. I’ve deliberately been imprecise in using the word trust, to try to speak to the various ‘natural’ meanings that different people assume. It turns out there are some quite interesting articles that discuss the nature of trust at length, and they make insightful reading. One that I’ve been reading recently is “Designing Systems That People Will Trust”, which is included in Security and Usability.