Selecting the right software cannot guarantee the success of a project, but picking the wrong system can ensure failure. This month’s column will focus on three common mistakes made in selecting software - overly detailed requirements, canned demonstrations and uninformed evaluations - and how to avoid them.

Overly detailed requirements. This is such a common error that it is sometimes mistaken for a best practice. Selection teams list hundreds of desired system functions and embed them in a request for proposal. Vendors are expected to answer, often in a yes/no format, without any opportunity to understand the reasons for the requirements or to explain how their solution would meet them. The selection team then scores the results, and the software with the most ticks wins.
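The weakness of tick counting can be shown with a toy sketch. The vendors, requirements and weights below are entirely made up for illustration: two products tie on raw "yes" counts, yet one covers the requirements that actually matter while the other covers only the trivia.

```python
# Toy illustration (hypothetical products and requirements): two vendors
# tie on raw "yes" counts, yet differ sharply on the requirements that
# actually matter to the business.

# Each requirement: (name, criticality weight); checklist answers are yes/no.
requirements = [("bulk import", 5), ("audit trail", 5),
                ("custom reports", 1), ("dark mode", 1)]

answers = {
    "Vendor A": [True, True, False, False],   # covers the critical needs
    "Vendor B": [False, False, True, True],   # covers only the minor ones
}

for vendor, ticks in answers.items():
    raw = sum(ticks)                          # naive tick count
    weighted = sum(w for (_, w), t in zip(requirements, ticks) if t)
    print(f"{vendor}: raw score {raw}, weighted score {weighted}")
# Both vendors score 2 on raw ticks; weighting exposes the gap (10 vs. 2).
```

Even weighting only patches the problem, of course: the deeper issue the column describes is that a checklist, weighted or not, tells you nothing about how a product meets a need.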

This approach creates a great deal of work but yields little information. Vendors cannot describe how their products would best meet the client's needs, and clients gain little insight into how the products function. It is better to help vendors build a thorough understanding of your situation and goals, and then let them propose a solution based on their product. Beyond drawing out the vendor's best ideas, this process gives your team a good sense of what the vendor will be like to work with.

This approach does not do away with the need to understand your requirements. It’s perfectly possible for a vendor to propose a solution that won’t actually meet your needs. You must build your requirements list in advance so you can compare it against the vendor’s recommendations. And, yes, you should share this list with the vendors - this is a business project, not a child’s game of “gotcha.”

Canned demonstrations. Project teams often follow their request for proposal with invitations for the most promising vendors to demonstrate their software. Demonstrations play the same role as an automobile test drive: they let you discover what it’s like to actually use the product. But just as you wouldn’t be satisfied with sitting in the passenger seat while the dealer drives for you, you can’t simply watch someone else run a piece of software. You need to take control, which includes both running the system and choosing what to test. For an automobile, this might mean driving on different roads in different weather conditions. Maybe you’d even hook up a trailer if that’s your intended use. The software equivalent is running through the relevant business processes - setting up a campaign, processing an order, handling a phone call and so on.

The first step in this type of testing is personally running the tasks on the demonstration system. This can be enlightening because vendors often structure their planned demonstrations to avoid known weaknesses in their product. On an even more basic level, something that looks simple in the hands of an experienced demonstrator can turn out to be considerably more painful when you are pushing the buttons yourself.

But you’ll usually want to go beyond the demonstration system to see how the product would function in your own environment with your own data. If actually connecting to your own systems is not practical, the vendor can still walk you through the steps required to do it. This will give you a much clearer idea of the work required to deploy the software and will help identify challenges in adapting it to your data models. It’s important that your team have the right technical experts present for this discussion so they can provide information, ask the right questions and understand the implications of the vendor’s answers.

Uninformed evaluation. Many software vendors offer evaluation copies of their products. This is the exact opposite of a controlled demonstration because users can do whatever they want. But users testing a product on their own may underestimate its capabilities because they don’t understand them properly. This is why the auto salesman shows you the controls before your test drive. Software vendors often provide an evaluation guide or tutorial that illustrates key product features. Some share the complete user documentation. For complex products, assistance may extend to offering the time of sales engineers or technical support staff.

The mistake here is failing to take advantage of those resources. Evaluators often try to learn the products just by loading and running them. They sometimes rationalize this as “testing for ease of use,” but unless you actually plan to deploy the software without training your staff, that’s a poor excuse. You will eventually run your own processes on the evaluation system, but you must first learn how it works. Remember, your ultimate goal is to gather accurate information about each product. Undervaluing a product because of a poor evaluation is as much an error as overvaluing it because of vendor hype.

Different as they are, these errors all have one thing in common: avoiding them requires creating scenarios that illustrate how the system will be used. Scenarios provide a reference point for vendor proposals, determine which features to explore during a demonstration and structure the time spent with an evaluation copy. They ensure the evaluation is grounded in actual business needs and that it covers the key processes from start to finish. Although building scenarios is hard work, it is the best way to avoid the ultimate selection nightmare: purchasing a product, installing it and discovering it doesn’t do what you really need.
