My November column discussed how to identify the system functions you need in a demand generation system. However, functions are only one piece of the puzzle, and many vendors argue that their real advantage lies in superior “ease of use.” How do you measure that?


This is a serious challenge. Functionality is relatively straightforward: either a system has a particular capability or it doesn’t. Ease of use is more subtle. Formal, objective measurement is difficult and expensive, so most people rely on subjective assessments instead. And even subjective assessments are hard to organize.


To make any sense at all of the topic, we need to take a couple of steps back. First, ease of use is one component of the larger and ultimately more important concept of “usability.” Usability is defined in varying terms but generally includes ease of use, ease of learning and functionality.


Second, both ease of use and usability are highly contextual. They can only be measured for specific tasks for specific users in specific circumstances. This makes perfect sense if you think about it for a moment: a system may be quite easy to use for one task, yet very difficult for another. But it means that a single “usability score” makes much less sense than a single “functionality score.” (Not that functionality scores make much sense to begin with. The point of last month’s column was to measure only the functions that matter to you.)


The good news is that accepting the contextual nature of usability is the first step toward a reasonable way to measure it. Specifically, it hints that you should start by defining the contexts you need to consider. Then you can actually assess usability within those contexts.


As mentioned, the three broad types of contexts are tasks, users and circumstances.


The tasks you need to consider are essentially the same ones identified in the use cases that drive your functional requirements. Usability analysis adds one new dimension to these: measuring the effort to do a task the first time compared with the effort to do it repeatedly. The difference is setup effort, which applies to the first iteration only. This turns out to be a significant differentiator among demand generation systems, because some are optimized for simple, one-off campaigns (not much setup but little reuse) while others are best at creating many variations within a fixed theme.


Users come in many varieties. Dimensions to consider include experience with the system, frequency of system use, skill sets (marketing, analytics, technology, etc.) and system access (end users versus administrators). First identify the characteristics of your users, recognizing that you will probably have several sets of users who fall into different groups. Then determine which types of users will perform which tasks, bearing in mind that some tasks may be shared across groups. For example, administrators who are technically skilled and frequent users may set up program templates that are then completed by infrequent end users who are primarily marketers.


Circumstances vary as well. Will your users be focused on the demand generation system or will they be in chaotic environments with many distractions? Will they be under extreme time pressure or be able to plan ahead? Is there some leeway for error or must everything be perfect the first time, perhaps for regulatory reasons? The answers bear on system attributes such as display style, alert functions, error checking capabilities, approval workflows, versioning and security. Again, different tasks will likely be performed in different circumstances.


Once you have worked through these issues, your original task list will be extended to include which types of users will perform each task, and under what circumstances. This lets you assess the usability of each task against the right criteria.
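The extended task list described above can be organized as a simple structured worksheet. Here is a minimal sketch in Python; every task name, user type and circumstance below is a hypothetical example for illustration, not drawn from any real evaluation. Each entry pairs a task with its context and leaves slots to record first-time (setup) effort separately from repeat effort, as discussed earlier.

```python
# Hypothetical extended task list for a usability assessment.
# Task names, user types and circumstances are illustrative examples only.
tasks = [
    {
        "task": "Build an email campaign from a template",
        "users": ["casual marketer"],
        "circumstances": "time pressure, many distractions",
        "setup_effort": None,   # scored while observing the demo (1 = easy, 5 = hard)
        "repeat_effort": None,  # effort on later iterations, after setup
    },
    {
        "task": "Create a reusable program template",
        "users": ["expert administrator"],
        "circumstances": "planned ahead, low tolerance for error",
        "setup_effort": None,
        "repeat_effort": None,
    },
]

def record(entry, setup, repeat):
    """Record effort scores for one task after watching it demonstrated."""
    entry["setup_effort"] = setup
    entry["repeat_effort"] = repeat

# During the demonstration, score each task in its proper context.
record(tasks[0], setup=2, repeat=1)
record(tasks[1], setup=4, repeat=2)

for t in tasks:
    print(f"{t['task']}: setup={t['setup_effort']}, repeat={t['repeat_effort']}")
```

A spreadsheet would serve the same purpose; the point is simply that each task carries its users and circumstances with it, so scores are always recorded against the right context.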


This still hasn’t answered the question of how to do the assessment itself. In an ideal world, and sometimes in the real world when the stakes are high enough, you would install a test version of the software, train the appropriate users and let them work with it. But most evaluation projects don’t have the time or resources to make such an investment.


Even if you’re limited to what can be accomplished in a couple of vendor demonstration sessions, you can ensure that you are looking at usability in the right context. Some specific suggestions include:

  • Have the vendor demonstrate tasks that would be performed by expert users. This assumes that the vendor’s demonstrator, whether a salesperson or an engineer, is likely to be an expert. The demonstrator will surely make things look easy, but your own experts will eventually learn the same tricks, so that’s okay.
  • Have the vendor show you how to perform tasks that would be performed by casual users. That is, you should operate the system rather than watch. If you are already an expert, it might be better to have a typical end user at the controls instead. The goal is to see how well someone unfamiliar with the system can work with it, at least for tasks likely to be done by such users.
  • Look for features relevant to the appropriate user groups. Expert users can invest the time to learn system shortcuts, set up reusable templates and make correct choices in unstructured environments. Casual users rely more on the intuitive choice being the right choice, expect more guidance and error checking, and may prefer graphical interfaces. Casual users may also need to rely on templates and other components produced by the experts.
  • When different kinds of users will share the system, look for options to tailor the interface to individual needs. These include capabilities to turn help messages off and on, to limit the capabilities offered to casual users and to present predefined workflows.

Be sure to prepare a specific list of these items in advance of the demonstration, so you know exactly what to look for and have a place to record your observations. The more structured your process, the more you’ll be able to cover, and the clearer you’ll be about what you actually saw.


Finally, remember to look beyond the demonstration itself. Reference clients are a particularly important source of insight. Anybody the vendor recommends is almost certain to be happy (although slip-ups are more common than you might think), but you still need to assess whether the reference context (tasks, users and circumstances) is similar to your context. If the vendor can’t connect you with someone similar, you’ll have to work harder to assess the product directly. You can also look at other elements of the customer experience, such as training, support and user forums. These have the advantage of being objectively measurable, but bear in mind that they often have more to do with the size and maturity of the vendor than the quality of the product itself.
