Enterprise decision management depends on the ability to understand the current state of the business process and what might happen to that process if inputs changed. Historically, though, the data needed to guide management decisions has been sparse, narrowly focused and of uncertain quality. To address these limitations, many organizations have turned to data warehousing, with its promise of a single version of the truth, and considerable progress has been made. However, standardization of analysis has lagged behind standardization of data. As a result, managers are often faced with conflicting interpretations of the truth, all based on the same facts. As William Faulkner once observed, "Facts and truth really don't have much to do with each other."

Microsimulation modeling - an analytic method pioneered in the public sector - offers an innovative solution to the challenges surrounding enterprise decision management. First developed in the 1960s, microsimulation models were originally maintained on government supercomputers, where they spun through large data files to simulate the impact of proposed changes to tax and transfer programs. The ambitious scope and granularity of these models led a panel of the National Research Council to conclude in 1991, "No other type of model can match microsimulation in its potential for flexible, fine-grained analysis of proposed policy changes."

Until recently, the applicability of microsimulation models was limited to a handful of government programs due to the enormous cost of computing resources and the lack of appropriate microdata. Now, with the power of supercomputers on every desktop and with enterprise-wide information sharing enabled by mature data warehouses, an opportunity exists to repurpose microsimulation modeling for private-sector use.

One of the virtues of microsimulation models is that they establish a stable and consistent framework for conducting analysis of structured data. This quality makes microsimulation an attractive tool for bridging the gap between facts and truth, offering a platform for the comparison of alternative business strategies. This article describes the key characteristics of a microsimulation model and presents a brief description of the steps required to build and use such a model. In the future, we expect to see more data warehouse-enabled organizations using microsimulation to accelerate enterprise decision management.

Key Characteristics of a Microsimulation Model

At its core, microsimulation is a computational technique used to predict the behavior of a system by predicting the behavior of microlevel units that make up the system. Microsimulation models operate by taking a representative sample of units (e.g., individuals, households, transactions) and applying parameterized algorithms to simulate processes, behaviors and outcomes. The parameters that govern the model are then varied to simulate the impact of changes to a process, policy or procedure at the individual case level, and overall results are aggregated to address a broad range of management questions.
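To make these mechanics concrete, the following minimal Python sketch shows the basic pattern: iterate over a sample of units, apply a parameterized rule to each and aggregate the case-level outcomes. The unit type, parameter names and probabilities are illustrative assumptions, not taken from any particular model.

```python
import random
from dataclasses import dataclass

# Hypothetical microlevel unit: one customer order.
@dataclass
class Order:
    value: float
    is_repeat_customer: bool

def simulate_fulfillment(order, params, rng):
    """Apply a parameterized rule to one unit: the probability of on-time
    fulfillment depends on the unit's characteristics and a policy parameter."""
    p_on_time = params["base_on_time"] + (0.05 if order.is_repeat_customer else 0.0)
    return rng.random() < p_on_time

rng = random.Random(42)
sample = [Order(value=rng.uniform(10, 500), is_repeat_customer=rng.random() < 0.3)
          for _ in range(10_000)]

# Vary these parameters to simulate a change to the process or policy.
params = {"base_on_time": 0.90}
on_time = sum(simulate_fulfillment(o, params, rng) for o in sample)
print(f"Simulated on-time rate: {on_time / len(sample):.3f}")
```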

Microsimulation is best suited for the analysis of systems where decision-making occurs at the transaction or unit level, where interactions are numerous and complex and where it is important to understand both aggregate and distributional impacts. All of these traits are evident in enterprise management, particularly for organizations that handle large volumes of customer interactions.

Microsimulation models present a view of the business based on a detailed representation of the process used to perform a particular business function. This process is described down to a level of detail where outcomes for particular cases are determined. A case could be an individual order, customer interaction or manufactured part. Whatever the modeled process, the attributes of each observed case are measured, and the path each takes through the production process is tracked.

The process is then represented in a modular fashion, with each key process step considered in isolation. After individual models are produced for each process step, the steps are combined into a larger model, with the outputs of upstream process steps feeding into downstream steps. It is important to recognize that these steps are constructed somewhat differently from those of a simple process simulation, which can look similar to the modular structure of a microsimulation model. In a typical process simulation, where the units of observation are often identical, variation in outcomes is caused solely by randomness in the process. In contrast, microsimulation modules typically use business rules or mathematical probabilities to reflect the correlation between the characteristics of each distinct unit and the outcomes that it experiences. Note that this formulation may also include random effects, as in a simple process simulation.
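As a sketch of this modular structure, the hypothetical example below chains two modules; all field names, thresholds and probabilities are invented for illustration. Each module's outcome depends on the unit's own characteristics as well as a random draw, and the upstream module's outputs feed the downstream module.

```python
import random

rng = random.Random(7)

def order_entry(unit, params, rng):
    """Upstream module: a business rule plus a unit-dependent probability."""
    # Business rule: orders above the free-shipping threshold ship free.
    unit["free_shipping"] = unit["order_value"] >= params["free_ship_threshold"]
    # Outcome probability depends on unit characteristics, not just noise.
    p_confirm = 0.95 if unit["repeat_customer"] else 0.85
    unit["confirmed"] = rng.random() < p_confirm
    return unit

def fulfillment(unit, params, rng):
    """Downstream module: consumes the upstream module's outputs."""
    if not unit["confirmed"]:
        unit["shipped_on_time"] = False
        return unit
    delay_risk = params["base_delay_risk"] + (0.02 if unit["free_shipping"] else 0.0)
    unit["shipped_on_time"] = rng.random() >= delay_risk
    return unit

params = {"free_ship_threshold": 50.0, "base_delay_risk": 0.08}
unit = {"order_value": 72.0, "repeat_customer": True}
print(fulfillment(order_entry(unit, params, rng), params, rng))
```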

The completed microsimulation model is a reusable analytic resource targeted toward the forecasting of future results and the improvement of a specific process. In order to exploit this resource for maximum gain, the modeling team builds scenarios that introduce changes in the population, process parameters or environment. Simulation of alternative scenarios - structured to capture key features of proposed business strategies and tactics - facilitates management understanding of the drivers of organizational performance.

Building and Using a Microsimulation Model

The flowchart in Figure 1 shows the typical sequence of steps involved in the design, development and use of a microsimulation model.

Figure 1: Steps to Building a Microsimulation Model


Establish Foundational Data

Microsimulation modeling requires high-quality data that is complete and clean. As a result, the data warehouse is the foundation of the microsimulation modeling effort. First, the data warehouse should be tapped to create a data set containing records for past transactions. These transactions should be paired with customer characteristics and other information in the warehouse that might be correlated with the outcome of the transaction.
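A minimal sketch of this pairing step, with invented field names standing in for whatever the warehouse actually holds:

```python
# Hypothetical extract: pair each past transaction with customer
# characteristics that might be correlated with the transaction outcome.
transactions = [
    {"txn_id": 1, "cust_id": "A", "amount": 120.0, "on_time": True},
    {"txn_id": 2, "cust_id": "B", "amount": 35.5, "on_time": False},
]
customers = {
    "A": {"segment": "retail", "tenure_years": 4},
    "B": {"segment": "wholesale", "tenure_years": 1},
}

# Join each transaction record to the attributes of its customer.
foundation = [{**txn, **customers[txn["cust_id"]]} for txn in transactions]
print(foundation[0])
```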

Second, the business rules and parameters that describe the underlying business process must be assembled. These might include standard policies such as shipping charges, allowable wait times for telephone orders or shipping consolidation policies.

Third, data on external factors must be obtained. This could include factors such as prices of similar items available from other vendors, the cost of transportation to and from a standard retail location, weather conditions or economic factors such as changes to average household income. This data is typically assembled from multiple external sources.

Finally, the data that will be used to calibrate the microsimulation model must be obtained. It is likely that some or all of these values will be found in management reports that are currently produced, as these reports should reflect variables of interest to those running the organization. However, it is possible that some control totals will not be independently tallied. Sometimes an additional query against an existing database will produce the required values. Regardless of their source, key control totals must be obtained and validated prior to their use in the model.

Several challenges can arise in the creation of the microsimulation data mart. First, microsimulation techniques require data that is cleaner and more complete than most other modeling approaches require. Often, missing data must be replaced through imputation or other statistical methods. Similarly, important data elements may be missing from the data warehouse altogether. This is especially likely in the case of variables that represent intermediate steps in the production process or those that represent conditions external to the production process, such as economic factors and other facets of the environment.
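One simple form of imputation is group-median imputation, sketched below with invented records and field names; production imputation methods are typically more sophisticated.

```python
import statistics

# Hypothetical records with a missing income field (None).
records = [
    {"segment": "retail", "income": 48_000},
    {"segment": "retail", "income": None},
    {"segment": "wholesale", "income": 91_000},
]

# Collect the observed values within each segment.
by_segment = {}
for r in records:
    if r["income"] is not None:
        by_segment.setdefault(r["segment"], []).append(r["income"])

# Fill each missing value with the median observed in the same segment.
medians = {seg: statistics.median(vals) for seg, vals in by_segment.items()}
for r in records:
    if r["income"] is None:
        r["income"] = medians[r["segment"]]

print(records[1])
```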

Select Representative Sample

While it is possible in principle to build a microsimulation model on the entire population of historical transactions, doing so is rarely computationally feasible. As a result, a sample of past transactions is typically used to reduce the computational burden. Sample selection should follow best practices in statistical sample design and should take into account the features of interest, the size of the model and the available computational resources. At this point, a key modeling and analysis decision must be made: choosing between a simple random sample and a stratified random sample that includes additional cases from parts of the population that are of special interest. Oversampling these cases provides added detail and accuracy, but it requires sample weights to "unwind" the oversampling when results are aggregated, as shown in the sketch below. If oversampling is used, the modeling team should include an experienced statistician to navigate the complexities at the nexus of sample design, model specification and statistical estimation.
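The sketch below shows the weighting logic under assumed strata and sampling rates: each sampled case carries a weight equal to the inverse of its selection probability, so weighted aggregates estimate population totals.

```python
import random

rng = random.Random(1)

# Hypothetical population: 95% ordinary transactions, 5% high-value
# transactions that are of special interest.
population = ([{"stratum": "ordinary"}] * 95_000) + ([{"stratum": "high_value"}] * 5_000)

# Oversample the high-value stratum: 2% of ordinary cases, 20% of high-value.
rates = {"ordinary": 0.02, "high_value": 0.20}
sample = []
for unit in population:
    rate = rates[unit["stratum"]]
    if rng.random() < rate:
        # The weight "unwinds" the oversampling: each sampled case stands
        # in for 1/rate population cases when results are aggregated.
        sample.append({**unit, "weight": 1.0 / rate})

weighted_total = sum(u["weight"] for u in sample)
print(f"Sample size: {len(sample)}, weighted population estimate: {weighted_total:,.0f}")
```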

Create Modules

Microsimulation modeling uses a module-based approach to model design, with key process steps typically having their own submodel structure encapsulated within a module. These submodels might be developed using a variety of analytic methods, including tabulations of averages, statistical or econometric methods, data mining, business rule creation or a combination of these. Each module must yield an estimate of the expected output produced by that process step, conditional on the inputs, process step parameters and environmental conditions. In order to capture the key features of the modeled process, the modeling team should include modelers (e.g., statisticians, economists) as well as domain experts throughout model design and development.
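As an example of the simplest submodel form named above, a tabulation of averages, the sketch below estimates an outcome rate within each cell defined by the characteristics a hypothetical process step uses; all data and field names are invented.

```python
from collections import defaultdict

# Historical cases (illustrative): one outcome observed per case.
history = [
    {"repeat": True, "region": "east", "on_time": 1},
    {"repeat": True, "region": "east", "on_time": 1},
    {"repeat": False, "region": "west", "on_time": 0},
    {"repeat": False, "region": "west", "on_time": 1},
]

# Tabulate the observed outcome rate within each characteristic cell;
# the resulting table is the submodel for this process step.
cells = defaultdict(lambda: [0, 0])  # cell -> [successes, trials]
for case in history:
    key = (case["repeat"], case["region"])
    cells[key][0] += case["on_time"]
    cells[key][1] += 1

submodel = {key: s / n for key, (s, n) in cells.items()}
print(submodel)  # e.g., {(True, 'east'): 1.0, (False, 'west'): 0.5}
```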

Calibrate the Model

Before the model is used to estimate the effects of changes to the system, the modeling team must ensure that it produces results calibrated to those observed in reality. This will help to determine whether the individual submodels that form the basis of the microsimulation model are properly linked together and whether the model structure as a whole is sound and sensible.

In order to calibrate the model, the microsimulation should be run on the current sample with existing business rules and parameter settings, with environmental parameters set to historically accurate values. The result is a set of outcome variables for each member of the representative sample, which may be aggregated to produce summary outcome variables that match known control totals. Outcome variables should include at least the final results of interest. Areas of discrepancy between the calibration runs and the observed results should then be identified. Where such areas are found, it is often useful to establish control totals for intermediate steps in order to pinpoint the process steps where error is introduced.
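A minimal sketch of the comparison step, assuming a weighted sample and a single illustrative control total:

```python
# Aggregate simulated outcomes (weighted, if the sample was stratified)
# and compare them to validated control totals. Values are illustrative.
simulated = [{"weight": 50.0, "on_time": True}, {"weight": 50.0, "on_time": False}]
control_totals = {"on_time_rate": 0.92}

weighted_on_time = sum(u["weight"] for u in simulated if u["on_time"])
total_weight = sum(u["weight"] for u in simulated)
simulated_rate = weighted_on_time / total_weight

discrepancy = simulated_rate - control_totals["on_time_rate"]
print(f"Simulated {simulated_rate:.3f} vs control {control_totals['on_time_rate']:.3f} "
      f"(gap {discrepancy:+.3f})")
```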

Calibration brings the summary outcome variables in line with the control totals. It may take one of several forms, including adjustments to the composition of the sample of modeled transactions, adjustments to the models to fine-tune business rules or re-estimation of process submodels where intermediate control totals do not match submodel estimates. No matter which of these methods is used, careful attention should be paid to the quality of the control total data used in the calibration. Overcalibration of the microsimulation model is analogous to the "overfitting" of a statistical model and has similar consequences: an overcalibrated microsimulation model will match control totals but will make inaccurate predictions when parameters are changed. This is especially likely when the control totals themselves are measured inexactly.

Simulate Alternative Scenarios

The modular nature of the microsimulation model facilitates the examination of how the predicted behavior of the system responds to changes in the parameters, population or environment. The simplest form of scenario development involves changes to the process parameters that are under the control of management. Another facet of scenario analysis could include changes to external variables correlated with process outcomes. A more complex scenario analysis could include changes to the business rules applied to particular process steps; the changed business rules would then be implemented in one or more of the modules.
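A minimal sketch of the simplest case, a change to a single management-controlled parameter (the model stub, parameter name and values are placeholders for a real model):

```python
import random

def run_model(sample, params, seed=0):
    """Stand-in for the full microsimulation: returns an aggregate outcome.
    In a real model this would chain all process-step modules."""
    rng = random.Random(seed)
    on_time = sum(rng.random() < params["base_on_time"] for _ in sample)
    return on_time / len(sample)

sample = list(range(100_000))  # placeholder for the representative sample

baseline = run_model(sample, {"base_on_time": 0.90})
scenario = run_model(sample, {"base_on_time": 0.93})  # proposed process change
print(f"Baseline {baseline:.3f} -> scenario {scenario:.3f} "
      f"(change {scenario - baseline:+.3f})")
```

Running both scenarios from the same random seed holds the random draws constant, so the difference in results reflects only the parameter change; this is the standard variance-reduction technique known as common random numbers.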

Perhaps one of the strongest features of a microsimulation model is its ability to adapt to changing demographic and economic conditions. In many countries, particularly in North America and Europe, ongoing demographic shifts are having a profound impact on business and government. Microsimulation accounts for these shifts by "aging" the sample, which allows analysts to simulate the behavior of the system under new demographic assumptions.
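One common way to "age" a sample is static reweighting: scale each case's weight by the ratio of its group's projected population share to its current share, so that the weighted sample reflects the future demographic mix. The groups and shares below are illustrative.

```python
# Current and projected demographic shares (placeholder values).
current_shares = {"under_40": 0.55, "40_plus": 0.45}
projected_shares = {"under_40": 0.45, "40_plus": 0.55}

sample = [{"age_group": "under_40", "weight": 1.0},
          {"age_group": "40_plus", "weight": 1.0}]

for unit in sample:
    g = unit["age_group"]
    # Scale each case's weight by the ratio of future to current share.
    unit["weight"] *= projected_shares[g] / current_shares[g]

print(sample)
```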

Tabulate and Present Results

The strong structural components of a microsimulation model make it more robust than aggregate-level models when forecasting future conditions. Although the prediction for any particular case is subject to error, the overall averages of the outcomes are far more stable. Also, the modularity of the microsimulation model structure allows for the examination of intermediate results, which may point to useful changes in the overall functioning of the simulated process.

The value of the simulation is increased by the inclusion of the costs of each process step. These costs could include the amount of time or money needed to successfully complete the process step or other costs that arise if a process step is not successfully completed. The microsimulation model allows the analyst to examine the population to determine if there are particular segments that are especially costly or financially attractive under changing conditions.

An important feature of microsimulation models is that the results they produce are available at the unit or transaction level. This means that the tools typically used for the presentation of OLAP data are well suited to reporting the results of microsimulation models, including on-the-fly queries and management dashboards. One type of reporting that is especially helpful (and unique to microsimulation) is a "gainer/loser" table: the representative sample is ranked according to the increase (or decrease) in profitability to the company, order size or another key performance indicator. Patterns in the gains and losses can then be examined using business intelligence tools to see whether there is a disparate impact on different types of customers.
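A minimal sketch of a gainer/loser ranking, using invented per-customer profit figures from a baseline run and a scenario run:

```python
# Per-customer profit under each run (illustrative values).
baseline = {"c1": 110.0, "c2": 95.0, "c3": 120.0}
scenario = {"c1": 125.0, "c2": 80.0, "c3": 121.0}

# Rank cases by the change in the KPI between the two runs.
changes = sorted(
    ((cust, scenario[cust] - baseline[cust]) for cust in baseline),
    key=lambda pair: pair[1],
    reverse=True,
)
for cust, delta in changes:
    label = "gainer" if delta > 0 else "loser"
    print(f"{cust}: {delta:+.1f} ({label})")
```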

Microsimulation modeling offers an innovative solution to the challenges surrounding enterprise decision management. By creating a stable and consistent analytic framework, these models extend the single version of the truth from the data warehouse to the management dashboard and beyond. We expect that in the near future, microsimulation will become an essential element of the business intelligence toolkit.

