A lucky handful of marketers live in information-rich environments where sophisticated promotion measurement is long established. They have the luxury of debating the finer points of presentation methods and advanced analytical techniques.
But most marketers are still struggling with performance data that is limited, fragmented and barely accessible. They face a more fundamental challenge: how to make the best use of the information available today, while getting ready to add new information in the future.
The solution is to build a core marketing measurement platform - a beginner's dashboard, as it were. This requires a single set of performance measures that can be applied to different promotion types at different levels of detail. These measures are disseminated throughout the organization, becoming a shared reference point that demystifies marketing results and can be expanded over time to include more sophisticated information.
By definition, the core measures must be quite simple. One approach is to look at two types of information: customer data in general and promotion response in particular.
Two Types of Information
Customer information is usually available in a warehouse or marketing database. Nearly any such system will include customer demographics and purchases. Many also capture promotion history and costs of products, customer service and marketing contacts. Such information can provide basic "state of customer" measures, including counts, retention and cross-purchase rates, profitability and lifetime value. (We'll leave out the nuances of how these are calculated, except to note that the system could start with simple techniques and improve these over time.) Demographics and promotion history can often identify customers who were exposed to a particular treatment, such as people in the trading area of a store that ran a promotion. This allows inferences about results when promotion response cannot be measured directly.
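The "state of customer" measures above can be sketched in a few lines. This is a minimal illustration, not a standard: the field names, sample records, and formulas (e.g., treating lifetime value as a simple profit average) are assumptions, standing in for whatever a real warehouse provides.

```python
# Hypothetical customer table; fields and values are illustrative only.
customers = [
    {"id": 1, "active_2022": True,  "active_2023": True,  "categories": {"a", "b"}, "revenue": 500, "cost": 200},
    {"id": 2, "active_2022": True,  "active_2023": False, "categories": {"a"},      "revenue": 120, "cost": 80},
    {"id": 3, "active_2022": False, "active_2023": True,  "categories": {"a", "c"}, "revenue": 300, "cost": 150},
]

count = len(customers)

# Retention: share of last period's active customers still active now.
prior = [c for c in customers if c["active_2022"]]
retention_rate = sum(c["active_2023"] for c in prior) / len(prior)

# Cross-purchase: share of customers buying in more than one category.
cross_purchase_rate = sum(len(c["categories"]) > 1 for c in customers) / count

# A deliberately crude profitability / lifetime-value proxy, per the note
# that simple techniques can be improved over time.
avg_profit = sum(c["revenue"] - c["cost"] for c in customers) / count

print(count, retention_rate, cross_purchase_rate, avg_profit)
```

Each formula here could later be replaced by a more sophisticated version (cohort-based retention, discounted lifetime value) without changing the measure's name or place on the dashboard.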
Direct measurement generally means the order is accompanied by a code that identifies the promotion which triggered it. The code might represent a targeted contact such as a mailing or email, or anonymous media like television, print or Web ads. Direct measurement is considered ideal by many marketers, particularly those with a background in direct marketing. But this attitude is increasingly questionable in a world where customers receive many messages before making an actual purchase. It also fails to recognize that one purchase is just a fragment of the entire customer relationship and makes sense only in the larger context. Yet promotion response still provides important information. It should definitely be part of the basic marketing metrics, so long as it's treated appropriately. Specific measures might include response rate, cost per response and return on investment.
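The three response measures named above reduce to simple arithmetic. A minimal sketch, using made-up numbers for one hypothetical mailing:

```python
# Illustrative figures for a single promotion; all values are assumptions.
contacts = 10_000        # pieces mailed
responses = 250          # orders carrying this promotion's code
promo_cost = 5_000.00    # total cost of the mailing
revenue = 18_750.00      # revenue attributed to those coded orders

response_rate = responses / contacts        # 0.025, i.e. 2.5%
cost_per_response = promo_cost / responses  # 20.0 dollars
roi = (revenue - promo_cost) / promo_cost   # 2.75, i.e. 275%

print(f"{response_rate:.1%}  ${cost_per_response:.2f}  {roi:.0%}")
```

Note that attributing revenue to a single code is exactly the simplification the paragraph cautions about: in a multi-touch world these figures are a starting point, not the whole story.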
This handful of measures may not seem like a lot of information. But each measure is repeated for each unit of analysis, including customer segments, business units (regions, channels, product lines, etc.) and promotion audiences. In addition, each measure must be placed in context by comparing it against one or more reference points: plans, past periods, comparable entities, thresholds, trends, rankings and so on. In practice, even the simplest business will have hundreds of data points available.
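The multiplication is easy to underestimate, so a back-of-envelope sketch helps; the counts below are illustrative assumptions, not taken from any particular business.

```python
# How a few simple measures multiply into many data points.
measures = 6     # e.g. count, retention, cross-purchase, profit, response rate, ROI
units = 12       # customer segments x business units x promotion audiences
references = 3   # plan, prior period, peer comparison

data_points = measures * units * references
print(data_points)  # 216 values on even this small dashboard
```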
This is exactly why having a limited number of simple measures is so important. Users quickly learn the definitions, leaving them free to concentrate on the values. (An important side benefit is that the data to calculate simple measures is more likely to be available throughout the business and at finer levels of detail.) A reasonable dashboard might open with all the standard measures at the highest level relevant to a particular viewer: the entire division for a division manager, a store for a store manager, a product for a product manager and so on. This would compare current values against one or two standard reference points.
The dashboard must also identify exceptional results, either good or bad. This requires digging beneath the aggregate results through automated analysis of subsets. These would be defined by slicing the data on whatever dimensions are relevant to the viewer. Subset results would still be reported using the standard measures, although such drill-downs could also be a jumping-off point for analyses using other data and techniques. Arguably this is where the dashboard ends and other types of business intelligence begin, although the transition should be seamless from the user's viewpoint.
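One simple way to automate this kind of exception flagging is to compare each subset's value of a standard measure against the aggregate and highlight large deviations. A minimal sketch, where the subset data, aggregate and threshold are all illustrative assumptions:

```python
# Response rate by region (hypothetical values).
subsets = {
    "Region North": 0.024,
    "Region South": 0.031,
    "Region West":  0.009,   # well below the aggregate
}
aggregate_rate = 0.025       # overall response rate
threshold = 0.40             # flag deviations beyond +/-40% of aggregate

# Keep only the subsets whose relative deviation exceeds the threshold.
exceptions = {
    name: rate
    for name, rate in subsets.items()
    if abs(rate - aggregate_rate) / aggregate_rate > threshold
}
print(exceptions)  # only Region West is flagged for drill-down
```

A real implementation might use statistical tests rather than a fixed threshold, but the principle is the same: the machine scans every slice so the viewer only inspects the flagged ones.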
The physical design of the dashboard should reflect its functioning: simple, consistent displays of standard elements with highlights to identify exceptions. Again, the goal is to let users focus on the substance of the information. Interface design and user training should concentrate on the two main choices users are called on to make: how to segment the data and which comparisons to use as reference points.
Of course, no dashboard is ever complete, and the beginning dashboard is just that - a beginning. But users who have mastered the basic dashboard will find it easier to absorb new elements when they are placed in a familiar context. As the company adds information capabilities, the dashboard will grow with it.