29.1. Users and Trust

As part of continual usability research, usability engineers at Microsoft had observed hundreds of users answering questions posed by the computer (consent dialogs) in Internet Explorer, Windows Client and Server, Microsoft applications, and other companies' products. It was clear that users often weren't following the recommendations that the products made. The question was: why not?

Having seen this behavior over multiple usability sessions, we ran some specific studies to gain more insight. We conducted in-depth interviews about trust with 7 participants, and lab-based research with 14 more. We then used the results of this work to develop user interface prototypes that incorporated design elements suggested by the initial research, and observed a further 50 participants working with various iterations of the designs in different trust scenarios. Later, we had the chance to verify the concepts and designs with participants who were helping us evaluate the interface for Windows XP Service Pack 2, both in multiple lab sessions and through feedback and instrumentation from a very large user panel.

We found that it was not just that users didn't understand the questions being posed by the computer, although that was definitely part of it. It was also that the computer was not their only source of trust information. It turns out that users aggregate many "clues" about trustworthiness and then trade those off against how much they want the item in question. Interestingly, computers weren't presenting all of the clues that they could have to help users, and some of the clues they were presenting were so obscure that they just confused users.
29.1.1. Users' Reactions to Trust Questions

Trust questions appear at many points in computer interfaces. Typically, they are shown as dialogs when the computer requires input or consent from the user before proceeding: for example, before downloading a file or before performing an action that could lead to data loss. These trust question dialogs are often designed to serve a useful dual purpose of both informing users and requesting input. During usability research at Microsoft, we found that these dialogs regularly failed on both counts from users' perspectives. Some observations we made about the information and questions in trust dialogs were:
So, users do not respond to dialogs the way we might anticipate. This is because they are often forced to make a decision that is at odds with their understanding of the situation, and the information being provided is both incomplete and only partially intelligible to them.

29.1.2. Users' Behavior in Trust Situations

The research I performed also showed that users have some interesting things going on in their heads during their interactions with trust situations on their computers:
Users do not tend to consider events requiring trust decisions in the same way that technologists do. This is because their focus is not on the technology, but on the outcome of the trust event and its impact on their lives.

29.1.3. Security Versus Convenience

The worst dilemma for users, and the one that is also the hardest to resolve through user experience design, is that from a user's perspective, increases in security are most frequently accompanied by a reduction in convenience. Likewise, when users try to accomplish a task in a convenient way, they often encounter security warnings. For instance, choosing to set the browser security level to High in Internet Explorer or other browser products will turn off many of the features of the product that can be used to exploit users. However, this same action can degrade the browsing experience to a point where most users will be dissatisfied, as they will no longer have access to the plug-in components and scripting functions that they have come to expect on a web site. It is this dilemma that user experience designers must seek to resolve for users, presenting them instead with understandable options that allow them to perform their tasks with a minimum of inconvenience.

29.1.4. Making Decisions Versus Supporting Decisions

It is important to note that the emphasis here is not on allowing the computer to make trust decisions, but on how a computer can assist users with their trust decisions. Of course, there are some instances where the computer can make that decision: for instance, when it detects the presence of a known virus in something the user plans to download. Here, the decision is easy: protect the user from the virus. Computers can be programmed to make this kind of decision. Most of the time, however, the decision is less clear cut, and so it still rests with the user.
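The split described above, letting the computer act only when the decision is easy and deferring everything else to the user, can be sketched in a few lines. This is a minimal illustration under assumed names; the `Download` class and its `matches_known_virus` flag are hypothetical, not any actual Windows or Internet Explorer API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    BLOCK = auto()     # computer can decide alone: known-bad, protect the user
    ASK_USER = auto()  # less clear cut: the decision still rests with the user

@dataclass
class Download:
    name: str
    matches_known_virus: bool  # e.g., flagged by a virus scanner (assumed input)

def triage(item: Download) -> Verdict:
    """Automate only the easy decision; defer everything else to the user."""
    if item.matches_known_virus:
        return Verdict.BLOCK   # the easy case: protect the user
    return Verdict.ASK_USER    # present the user with a decision

triage(Download("setup.exe", matches_known_virus=True))    # → Verdict.BLOCK
triage(Download("photo.jpg", matches_known_virus=False))   # → Verdict.ASK_USER
```

The point of the sketch is the default: anything the computer cannot classify with certainty falls through to the user rather than being silently allowed or blocked.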
The challenge is to achieve the correct balance between exhausting the user with multiple questions and automating the process to the point where the computer runs the risk of making erroneous decisions. Having observed that users tend to simply dismiss any dialog that gets in their way, interface designers are often tempted to remove the dialog. If the dialog can be completely removed (if the computer can make the decision), that's great. If, however, the dialog still needs to exist, our studies have shown that users make a much more secure, appropriate, reasoned decision if the dialog is presented in the context of their task. Placing the decision in an initial options screen, or hiding it in a settings dialog removed in space and time from the point where users carry out their task, requires them to think in a logical rather than an emotional way about a class of task rather than about a specific instance.

As noted earlier, users found it easier to make a specific decision than a generic one. It was much easier for them to agree to trust a specific person at a specific time for a specific transaction than to agree to trust a whole category of people every time a transaction occurred. Users could easily make a decision without too much interruption to their task if the dialog presented the facts they needed in a way they could understand. We classified this as presenting a decision, not a dilemma.

For common or repetitive tasks, obviously the fewer interruptions a user experiences, the better. In these situations, it makes sense to give the user an option to always apply his current decision to the situation. If you can scope the situation suitably, the user will be happy to have that decision applied consistently. For less common tasks, it's not necessarily the number of screens between a user and his goal that determines the quality of the interaction.
Instead, a major factor is whether all of those screens are perceived by the user to be flowing toward his end goal.

After eliciting from users some of the clues they use, and understanding the philosophies that they bring to their trust interactions, we worked out which clues could be provided by a computer, and then how and when to present them in the trust process so that they aided the decision. The tone of the interaction was dictated to a large degree by a wish to stay within users' comfort zones while simultaneously educating them.
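The "always apply my current decision" pattern described above, scoped to a specific situation rather than a whole category of transactions, might be sketched as follows. The store and its `(origin, action)` key are illustrative assumptions, not a real browser API:

```python
# Hypothetical sketch of a "remember this decision" store. Decisions are
# keyed by a narrow scope (origin, action), so consent granted in one
# specific situation is reapplied consistently without silently widening
# to a whole category of transactions.
class TrustDecisionStore:
    def __init__(self):
        self._decisions = {}  # (origin, action) -> bool (allow / deny)

    def remembered(self, origin, action):
        """Return the remembered decision, or None if the user must be asked."""
        return self._decisions.get((origin, action))

    def remember(self, origin, action, allow):
        """Record the user's choice when the 'always apply' option was chosen."""
        self._decisions[(origin, action)] = allow

# Usage: only the first request for this exact situation interrupts the task.
store = TrustDecisionStore()
store.remembered("example.com", "open-popup")            # → None: show the dialog
store.remember("example.com", "open-popup", allow=True)  # user picks "always"
store.remembered("example.com", "open-popup")            # → True: no dialog needed
store.remembered("other.com", "open-popup")              # → None: scope not widened
```

Keeping the key narrow is the design choice that matters here: a decision that quietly covered every origin would turn a specific, comfortable choice back into the generic category judgment users found hard to make.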