“Chart a course for every endeavor that we take the people's money for, see how well we are progressing, tell the public how we are doing, stop the things that don't work, and never stop improving the things that we think are worth investing in.”

--President William J. Clinton, on signing the Government Performance and Results Act of 1993

Performance measurement (PM) is a key driver of business intelligence. I'd guess that most BI deployments today are at least in part supporting PM. Indeed, performance measurement is seen by many as an indispensable tool for assessing the effectiveness of business strategy. For the highly regarded Balanced Scorecard, “a strategic planning and management system used to align business activities to the vision and strategy of the organization, improve internal and external communications, and monitor organizational performance against strategic goals,” performance measurement is central, touching the last five of Building and Implementing the Balanced Scorecard's Nine Steps to Success. The Scorecard's definition of PM, “the process of developing measurable indicators that can be systematically tracked to assess progress made in achieving predetermined goals and using such indicators to assess progress in achieving these goals,” underscores that importance, engaging the larger methodology across the disparate areas of strategy development, tracking and assessment. If strategy can be viewed as the development of theories and hypotheses linking business activities with desirable outcomes, performance measurement is the methodology for operationalizing and testing those theories and hypotheses.

Business strategy is not alone in its contributions to the public understanding of performance measurement. Government agencies, educational systems and other social intervention organizations have long engaged in program evaluation, for which PM is an important tool for assessing program activities. In this context, performance measurement “is the ongoing monitoring and reporting of program accomplishments, particularly progress towards preestablished goals, typically conducted by program or agency management. Performance measures may address the type or level of program activities conducted (process), the direct products and services delivered by a program (outputs), and/or the results of those products and services (outcomes).” If you think of a program as “any activity, project, function, or policy that has an identifiable purpose or set of objectives,” the connection to strategy mapping is immediate and compelling. Programs are linked to outputs and outcomes in a series of cause-and-effect linkages, much as strategy is linked to changes in leading and lagging indicators in the Balanced Scorecard. Programs are akin to the Scorecard's Internal Perspective fueled by Learning and Growth, while outputs resemble the leading indicators delineated in the Customer Perspective, and outcomes the lagging indicators generally articulated in the Financial Perspective.
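To make that mapping a bit more concrete, here's a minimal sketch of a program logic model as a simple chain of cause-and-effect linkages, with the rough Balanced Scorecard analogue noted for each link. The job-training example and its measures are hypothetical shorthand of my own, not drawn from any particular program:

```python
# Hypothetical logic model for a job-training program, with the rough
# Balanced Scorecard analogue noted for each link in the chain.
logic_model = [
    {"stage": "program",  # akin to the Internal Perspective
     "measure": "training sessions delivered"},
    {"stage": "output",   # akin to a leading indicator (Customer Perspective)
     "measure": "participants completing certification"},
    {"stage": "outcome",  # akin to a lagging indicator (Financial Perspective)
     "measure": "participants employed one year later"},
]

for link in logic_model:
    print(f"{link['stage']:>8} -> {link['measure']}")
```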

Yet another angle on performance measurement comes from the arcane world of investment science. As described in Feibel's Investment Performance Measurement, this highly quantitative endeavor has as its primary goal “the calculation of portfolio return, risk and derived statistics stemming from the periodic change in market value of portfolio positions and transactions made into and within a portfolio, for use in the evaluation of historical fund or manager performance.” The four key steps in the investment PM methodology, the first two of which are sketched below, include:

  • Measurement of returns earned by portfolios, managers and benchmarks;
  • Measurement of risks taken to earn the returns;
  • Measurement of indicators of manager skill; and
  • Attribution of performance to segment and risk contributions, as distinguished from the value added by management decisions.
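Here is a minimal Python sketch of the arithmetic behind the first two steps, using made-up month-end portfolio and benchmark values and an assumed risk-free rate. It computes periodic returns, annualized return and volatility, and a simple risk-adjusted indicator (the Sharpe ratio); it illustrates the calculations only, not Feibel's full methodology:

```python
import math

# Hypothetical month-end market values (no external cash flows assumed).
portfolio_values = [100.0, 102.5, 101.3, 105.0, 108.2, 107.1, 111.6]
benchmark_values = [100.0, 101.2, 100.5, 103.1, 105.0, 104.4, 107.3]

def periodic_returns(values):
    """Simple periodic returns from successive market values."""
    return [values[i] / values[i - 1] - 1.0 for i in range(1, len(values))]

def annualized_return(returns, periods_per_year=12):
    """Geometrically link periodic returns, then annualize."""
    growth = 1.0
    for r in returns:
        growth *= (1.0 + r)
    years = len(returns) / periods_per_year
    return growth ** (1.0 / years) - 1.0

def annualized_volatility(returns, periods_per_year=12):
    """Sample standard deviation of periodic returns, annualized."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

port_rets = periodic_returns(portfolio_values)
bench_rets = periodic_returns(benchmark_values)

ann_ret = annualized_return(port_rets)
ann_vol = annualized_volatility(port_rets)
risk_free = 0.02  # assumed annual risk-free rate
sharpe = (ann_ret - risk_free) / ann_vol  # crude indicator of risk-adjusted skill

# Active (excess) return over the benchmark each period --
# the raw material for attribution of management decisions.
active_rets = [p - b for p, b in zip(port_rets, bench_rets)]

print(f"Annualized return:          {ann_ret:.2%}")
print(f"Annualized volatility:      {ann_vol:.2%}")
print(f"Sharpe ratio:               {sharpe:.2f}")
print(f"Mean monthly active return: {sum(active_rets) / len(active_rets):.2%}")
```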

The PM perspectives from the program evaluation and investment worlds have much to offer BI. In follow-up blogs I'll review several statistical studies, including one evaluating the effectiveness of a government program and another showcasing the tools of investment science to distinguish “great” from “lucky” business performance.
Steve Miller also blogs at miller.openbi.com.