“Know thyself”

- Socrates


“I have but one lamp by which my feet are guided, and that is the lamp of experience”

- Patrick Henry


At last, we attack the root question to which the prior three installments in this series have been leading: how do you assemble a total performance management solution? We have thus far defined what corporate performance management (CPM) is, decomposed it into discrete functional units that can be organized according to the human needs they fulfill (see Figure 1), and set forth criteria by which to evaluate the implementation path (namely, assessment of each component’s risks, return and dependencies). Now let’s consider where to start and how to proceed with implementation. To guide us, let’s return to Figure 1 and move around it, considering what risks, return and dependencies exist depending on where we are on the diagram.



First Observation: Risk generally increases as you move outward along all three axes at once. One would be ill-advised to entrust a system to interpret external events over which the enterprise has no control, determine in predictive fashion how those events will affect performance and automatically enact operating changes absent any intervening human review or approval. Call me a techno heretic, but ceding that level of operational control to a machine just strikes me as dumb – even a little spooky. Perhaps there are situations in which this level of automated, predictive operational control is warranted, but only where the parameters are tightly defined and agreed upon in advance by human stakeholders. Consider, for example, a contract between trading partners under which a firm’s products or services are automatically repriced over time in line with changes in the consumer price index (CPI). Such an arrangement might reasonably be subject to complete automation, even though it involves acting upon external, leading performance indicators. Generally speaking, however, in our fascination with the power and potential of technology, we ought not relinquish or diminish the importance of human judgment in managing business performance.
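The CPI repricing arrangement is simple enough to sketch in a few lines. The function below is purely illustrative – the names and the 5 percent cap are hypothetical stand-ins for the "tightly defined parameters agreed upon in advance by human stakeholders" – but it shows why this narrow case tolerates full automation: the external input (CPI) feeds a formula whose bounds were fixed by people up front.

```python
def reprice(base_price, base_cpi, current_cpi, cap=0.05):
    """Adjust a contract price in proportion to CPI movement.
    The cap is a hypothetical, humanly agreed guardrail - the system
    may never move a price beyond the bound set in the contract."""
    change = (current_cpi - base_cpi) / base_cpi
    change = max(-cap, min(cap, change))  # clamp to the agreed bound
    return round(base_price * (1 + change), 2)

# A 2% rise in CPI lifts a $100 price to $102; a 20% spike is capped at 5%.
print(reprice(100.0, 250.0, 255.0))  # 102.0
print(reprice(100.0, 250.0, 300.0))  # 105.0
```

The cap is the point: the machine acts on an external leading indicator, but only inside limits humans defined in advance.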


Second Observation: Much of the market focus in the CPM space is on productivity and process automation (using systems to address the needs down the Degree of Assistance axis), i.e., streamlining the workflow of the time-consuming, error-prone “spreadsheet march” that characterizes many corporate planning and budgeting cycles, along with the monitoring of actual performance against those plans. What is the risk in attacking CPM from this angle? Probably none from the perspective of business disruption, and potentially moderate from the perspective of achieving ROI (and the ROI risk is more likely related to a poor, undisciplined implementation than to poor tools). Assuming a skillful and disciplined implementation, how large a return should be expected? The answer, of course, is the ubiquitous qualifier – “it depends.” It depends on how cumbersome and costly the current process is and how repeatable it is, if a discernible current process even exists. For some companies, a CPM productivity suite is called for. For others, homegrown spreadsheet templates and email are probably sufficient. The size and complexity of the business, and the volatility of the external environment against which that business must plan and continually adjust, largely determine the complexity of the planning process and therefore the attractiveness of an off-the-shelf CPM productivity tool.


Third Observation: There is currently considerable interest within the data warehousing (DW)/business intelligence (BI) space, as well as within CPM, in incorporating more external data and better leveraging predictive analytics to help companies respond more quickly and optimally to performance-impacting trends or events (both internal and external) as they occur. That is to say, there is momentum to push back the walls of traditional DW/BI along the axes labeled Level of Control and Time Scope.


From the perspective of risk, is this a smart move? Provided that system-generated predictions are not accepted blindly, but warily, and that care is taken to integrate external, potentially unstructured data into the DW environment in a controlled manner, the risks in doing so are minimal.


What about return? The return from incorporating more external information into the DW and using advanced statistics to unlock patterns and nuggets of business value from the data morass is potentially high – potentially being the operative word. One need not be a CFO or CEO – any experience managing financial affairs will do – to appreciate the rationale. Successful financial performance has far more to do with managing and responding to one’s current environment and planning for foreseeable future events (with eyes open in preparation for a range of potential risks or opportunities) than with rehashing the past, from which one can learn but about which one can do nothing. Thus, it is an attractive proposition to harness the computational power of a machine to better track how external factors drive internal performance and to provide the sort of detailed cause-and-effect insight that might be too hard or time-consuming for a busy manager of a large, complex organization to construct. That being said, the return of systematizing these sorts of functions into a total CPM solution should be thoughtfully questioned:


  • Is systematizing increasing amounts of external data likely to offer much more value than simply leaving the data where it is for intelligent managers to discover and put to use? Put simply, if I, as a manager, have a demand forecast, some information about increasing costs up or down the value chain, or some competitive intelligence, do I really need that data formally systematized and integrated into the enterprise information architecture to make proper use of it?
  • Is the nature of the data broad and obvious enough that the actions it suggests don’t require any sort of systematic assistance to help me make the best decision? Or is the nature of the external data such that it might suggest finer, more tactical operational refinements that wouldn’t necessarily be obvious to me? And if the latter is the case, is the relationship of that external data to the company’s internal operations well enough understood that the inferences drawn by a system could be trusted?
  • Finally, is a focused effort on data mining and predictive analytics more likely to yield the sorts of breakthrough discoveries exemplified by the famed “beer and diapers” relationship discovered by one grocer, or more “no duh” findings, like the “wet wipes and diapers” relationship?

I don’t propose answers to these questions – indeed, the answers probably vary from situation to situation. I merely put forward the questions. These sorts of issues should be considered in determining the likelihood of achieving real return by increasing the amount of external data brought into the DW and the degree to which sophisticated mining techniques are used.
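The gap between a “beer and diapers” discovery and a “no duh” finding has a standard measure: the lift of an association rule, i.e., how much more often two items co-occur than independence would predict. The sketch below (baskets invented for illustration) shows the calculation; a lift well above 1 suggests a non-obvious relationship worth a manager’s attention, while a lift near 1 is the systematized restatement of the obvious.

```python
def lift(transactions, a, b):
    """Lift of the association a -> b: observed co-occurrence rate
    divided by the rate expected if a and b were independent."""
    n = len(transactions)
    p_a = sum(a in t for t in transactions) / n
    p_b = sum(b in t for t in transactions) / n
    p_ab = sum(a in t and b in t for t in transactions) / n
    return p_ab / (p_a * p_b)

# Toy baskets, invented for illustration only.
baskets = [
    {"diapers", "beer"}, {"diapers", "beer"}, {"diapers", "wipes"},
    {"beer", "chips"}, {"milk"},
]
print(round(lift(baskets, "diapers", "beer"), 2))  # 1.11 - above 1
```

Whether a real data set yields lifts worth acting on is exactly the situation-by-situation question raised above.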


Further, in weighing the potential return, the effectiveness of simpler and cheaper alternatives should also be considered. What is the likelihood that multivariate regression analysis will yield incremental value beyond what can be achieved by simply putting some intelligent design into an OLAP cube and providing analysts with high-quality, interactive data visualization tools? Again, the answers may vary from situation to situation, but the likelihood of achieving real return, and the most cost-effective way to do so, should be critically assessed.
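The “simpler alternative” amounts to little more than disciplined aggregation. The sketch below (function name and sample rows are hypothetical) rolls a measure up across chosen dimensions in a few lines of plain Python – one slice of what an OLAP cube does. If insight of this kind, paired with good visualization, already answers the analysts’ questions, the incremental value of regression modeling deserves scrutiny.

```python
from collections import defaultdict

def roll_up(rows, dims, measure):
    """Sum a measure across the given dimensions - a tiny pure-Python
    stand-in for one slice of an OLAP cube roll-up."""
    totals = defaultdict(float)
    for row in rows:
        totals[tuple(row[d] for d in dims)] += row[measure]
    return dict(totals)

sales = [  # invented sample rows
    {"region": "East", "qtr": "Q1", "amount": 100.0},
    {"region": "East", "qtr": "Q2", "amount": 150.0},
    {"region": "West", "qtr": "Q1", "amount": 80.0},
]
print(roll_up(sales, ["region"], "amount"))
# {('East',): 250.0, ('West',): 80.0}
```

The point is not that cubes replace statistics, but that the cheaper baseline sets the bar the fancier technique must clear.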


Finally, what about dependencies? Are there any prior concerns that should be addressed that might impact the value returned by predictive analytics? Put differently, what are the practical uses to which such findings can be put, and what sorts of inputs are required to maximize that end use? Traditionally, such mining efforts have focused heavily on optimizing revenue via initiatives such as customer segmentation, price-demand elasticity analysis, advertising effectiveness and fraud detection. While topline improvements are a critically important goal, the bottom line in business nevertheless remains just that – the bottom line, the margin left over after all costs of doing business have been accounted for. Bottom-line intelligence requires additional insight into a company’s cost and capacity structures: what resources it spends its money on (whether those resources are human, equipment, property or otherwise), how much it spends on those resources, and the amount of productive work those resources can deliver (capacity). The next frontier in analytic insight, therefore, involves an integration of revenue intelligence with cost and capacity intelligence for bottom-line optimization. Put another way, given the predicted effects of various external trends or events upon a business, how can the enterprise most profitably respond and deploy its resources?
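To make the revenue-plus-cost-and-capacity point concrete, here is a deliberately simple sketch (all names and numbers hypothetical): given products with known margins, resource consumption and demand, rank them by margin per hour of the constrained resource and allocate capacity greedily. Revenue intelligence alone ranks products by demand or price; only with cost and capacity data can the mix that maximizes the bottom line be chosen.

```python
def best_mix(products, hours_available):
    """Greedy allocation sketch: rank products by margin per hour of
    the constrained resource, then fill capacity in that order.
    (A real planner would likely use linear programming instead.)"""
    ranked = sorted(products,
                    key=lambda p: p["margin"] / p["hours_per_unit"],
                    reverse=True)
    plan, remaining = {}, hours_available
    for p in ranked:
        units = min(p["demand"], remaining // p["hours_per_unit"])
        if units > 0:
            plan[p["name"]] = units
            remaining -= units * p["hours_per_unit"]
    return plan

catalog = [  # invented figures
    {"name": "A", "margin": 30.0, "hours_per_unit": 2, "demand": 10},
    {"name": "B", "margin": 20.0, "hours_per_unit": 1, "demand": 50},
]
print(best_mix(catalog, 60))  # {'B': 50, 'A': 5}
```

Note that product A carries the higher unit margin, yet B wins the capacity because it earns more per constrained hour – precisely the kind of answer that requires cost and capacity intelligence alongside the revenue view.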


Many companies lack the deep, comprehensive cost and capacity insight required for systematic predictive profit analytics, let alone an integrated whole of revenue, cost and capacity intelligence. In other words, for many enterprises, the words of Socrates and Patrick Henry, quoted at the beginning, should be heeded. Either before, or in conjunction with, initiatives to improve outward- and forward-looking insight, many companies could do a better job of looking backward and inward. Or, from the perspective of Figure 1, mastery of those factors an enterprise can control (namely, how it spends its money and leverages the resources paid for), based on its historical experience, is an important prior or collateral requirement for handling the myriad performance-affecting factors it cannot control.

