In my June 2009 column, I described the process of building a strong business case for your master data management initiative.
Now that you've finished the business case, present it to senior management and get their approval to proceed with the project. Once you've done that, give yourself a huge pat on the back, and start looking at your functional, technical and general requirements in depth.
These requirements are the starting point for your selection criteria. Make sure you capture everything that's needed to achieve the business goals of your project.
Your selection criteria will probably break down into categories similar to the following:
General, including assessment of the vendor's customer base, revenues, growth rate and geographic coverage; the depth of their partner network; their overall financial strength; the depth and breadth of their product vision and roadmap; and the availability of customer references in your industry.
Functional requirements, including the underlying design; quality of the user interface; degree of support for data governance; data profiling, data quality and integration capabilities; matching, deduplication and merging; enrichment with third-party content; survivorship rules; multilanguage support; inbound data acquisition and outbound data publishing; business services and workflow; event and metadata management; and ease of implementation.
Technical requirements, including the overall architecture, supported data domains and use cases; support for open standards; operating system, database and hardware support; ability to handle double-byte characters; support for Web services and service-oriented architecture; security and privacy controls; built-in reporting and analytical capabilities; performance and scalability; quality of technical support, training and documentation; and ease of maintenance.
Cost, including the software license, annual maintenance, consulting, change management, implementation services, training, etc. Make sure to look at both initial and ongoing costs.
Once you've developed your list of criteria, use it to build a vendor evaluation model in which each criterion is assigned a weighting factor reflecting its importance to the final decision.
For example, you might end up with 25 different criteria, so each would contribute (on average) about 4 percent to the final decision. But obviously some criteria are more important than others, so you'd bump the weighting on those up significantly, while dialing down the less important ones. Try to make the scores fall on either a 1 to 5 scale or a 1 to 100 scale, to keep things simple.
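To make the arithmetic concrete, here is a minimal sketch of such a weighted model. The criteria, weights and scores below are hypothetical, chosen only for illustration; your own model would carry the full list of criteria from the categories above.

```python
# Hypothetical criteria and weights; the weights sum to 1.0 so each
# criterion contributes its stated share of the final decision.
weights = {
    "financial strength": 0.10,
    "data quality and matching": 0.25,
    "architecture and scalability": 0.20,
    "ease of implementation": 0.15,
    "total cost of ownership": 0.30,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

def weighted_score(scores):
    """Combine per-criterion scores (here on a 1-to-5 scale) into one total."""
    return sum(weights[c] * scores[c] for c in weights)

# Illustrative 1-to-5 scores for a single vendor.
vendor_a = {
    "financial strength": 4,
    "data quality and matching": 5,
    "architecture and scalability": 3,
    "ease of implementation": 4,
    "total cost of ownership": 2,
}
print(round(weighted_score(vendor_a), 2))  # 3.45 on the 1-to-5 scale
```

Note how a strong product score can still be pulled down by a heavily weighted cost criterion; that is exactly the trade-off the weighting is meant to surface.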
This model will make your evaluation process more objective and comprehensive. It will also help your team build consensus and better understand the different vendors' offerings.
One big question that usually comes up at this point is how many vendors should be on the short list, and which ones. Try to keep the number as low as possible; otherwise, your short list won't be very short! You'll be doing a fair amount of due diligence on each vendor, so a long list only multiplies your workload. Contact a broader group for an initial round of information gathering, but be strict about who's allowed to proceed to the next round of the process. My suggestion would be to limit it to two to four vendors.
Now that you've developed your evaluation model and contacted a small group of MDM vendors to start the process, you'll probably begin with an introductory meeting with each vendor's team, including the inevitable PowerPoint presentation and high-level product demo. This is a good start, but it's not sufficient to make an objective decision.
You'll want to prepare for and then conduct scripted demonstrations or proof of concept (POC) sessions with the selected vendors, using your own realistic master data, in order to see how each product does in bringing that data into its hub and what it looks like in the user interface.
This will give you a chance to kick the tires with some actual data and check out the finer points of their functionality. The amount of time each vendor needs to prepare for and conduct the POC session will also give you an early indication of how hard it will be to implement that system, as well as what they're like to work with as a company.
After each session, debrief as a team, in order to get perspective on the vendors' presentations. Ask everyone on the selection team to give that vendor a score for each criterion, and then average everyone's scores in order to come up with that vendor's total score.
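The averaging step can be sketched the same way; the evaluator scores and weights below are again hypothetical, and a real model would cover every criterion in your list.

```python
from statistics import mean

# Hypothetical 1-to-5 scores from three evaluators for one vendor,
# keyed by criterion.
team_scores = {
    "data quality and matching": [4, 5, 4],
    "ease of implementation": [3, 3, 4],
}
# Weights as in the evaluation model (summing to 1.0 across criteria).
weights = {"data quality and matching": 0.6, "ease of implementation": 0.4}

# Average each criterion across the team, then apply the weights.
avg_scores = {c: mean(s) for c, s in team_scores.items()}
total = sum(weights[c] * avg_scores[c] for c in weights)
print(round(total, 2))  # this vendor's weighted team score
```

Averaging before weighting keeps any one evaluator's enthusiasm (or skepticism) from dominating a heavily weighted criterion.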
I don't advocate blindly awarding the project to the vendor with the best score. Sometimes, you'll have two vendors whose scores are very close, but as a team, you'll have reached a consensus that the vendor scoring second would actually be a better choice. Trust your instincts on this. The scoring system should help you objectively eliminate vendors who just aren't a good fit for your requirements and your organization.
At this point, you're ready to provide a brief report to senior management on the selection process that summarizes your findings and makes a recommendation on which vendor and product to purchase.
This may seem like a fair amount of work, but you're much better off following a transparent, objective process that can be explained in a straightforward way to people in the business or in IT. I've seen a lot of one-sided selections, and even instances where vendors were awarded large contracts through the strength of their contacts and relationships rather than the strength of their product. Making the right product choice will make a huge difference in an overall project's probability of success.