
Select Dashboard Visuals Using Six Sigma Design of Experiments

August 01, 2004, 1:00am EDT

My previous three columns focused on the visual aspects of the dashboard, including the framework, the color palette and the graphs. As you may recall, the framework alternatives alone included a board of directors viewer, briefing book, balanced scorecard, portfolio chart and operational dashboard. Twenty-five potential graph choices could be selected from basic, difference, distribution and specialized graph styles. With all the possible variations to choose from, how does one determine the preferred visuals to meet the needs of a particular business-user constituency?

The performance management needs of the senior executive are certainly quite different from those of a line-of-business (LOB) or operations manager. Tailored environments and "management at a glance" are now the operational mantras of the day. The Six Sigma toolkit includes an optimization technique known as design of experiments (DOE) that can streamline the selection of the preferred visuals. By presenting business users with combinations of alternatives, DOE can statistically separate the preferred visuals from the less desirable options.

Let's start with the definition and process for DOE, and then examine a case study of the selection of dashboard visuals. Although more statistical definitions exist, let's define DOE as: a structured, organized approach for effectively and efficiently exploring the cause-and-effect relationship between numerous impact variables (the Xs) and the resulting performance variable (the Y).

In our case, we can think of the Xs as the input parameters of the dashboard (the framework, color palette and graph type) while the Y is the resulting dashboard's "look and feel." Although DOE designs can become quite sophisticated with exotic names such as Box-Wilson, Plackett-Burman and Taguchi, we will focus on the more basic "full factorial" design. This will provide a fundamental understanding of DOE and the process involved without the statistical complexities of the more elaborate design schemes.

The full factorial design will help us to identify the most important sources of variation and quantify the impacts of the important Xs on the output Y. In the following section, we will walk through the steps required to successfully implement a DOE with a real case study that builds on our existing knowledge of dashboard visuals.

One of the first steps in setting up a full factorial design is to select the number of factors and levels that need to be evaluated (see Figure 1).

Figure 1: Factor/Level Matrix

Step 1: Select three factors to be tested - framework, color and graph type.

Step 2: Select two different test levels for each factor.

  • A = framework (briefing book, portfolio chart)
  • B = palette color (vivid, subdued)
  • C = graph type (stacked bar, radar chart)

This combination of factors and levels is often called a "treatment." The number of treatments required can then be calculated with the formula n^k, where "n" represents the number of levels and "k" represents the number of factors. Therefore, in our example, n^k = 2^3 = 8 treatments. This allows all variations of the levels to occur at least once so both the main impacts and interaction impacts can be calculated.
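
To make the n^k arithmetic concrete, here is a minimal Python sketch (not part of the original column) that enumerates the eight treatments from the factors and levels in Step 2. Which level of each factor is coded (+) versus (-) is an assumption, chosen to be consistent with the results reported later.

```python
# Minimal sketch: enumerate the n^k = 2^3 = 8 full factorial treatments.
# The level-to-sign coding below is an assumption consistent with the
# preferences reported later in the article.
from itertools import product

factors = {
    "A framework":  ("briefing book", "portfolio chart"),  # (+), (-)
    "B palette":    ("subdued", "vivid"),                  # (+), (-)
    "C graph type": ("stacked bar", "radar chart"),        # (+), (-)
}

treatments = list(product(*factors.values()))  # all 8 combinations
for i, combo in enumerate(treatments, start=1):
    print(f"Treatment #{i}: {combo}")
```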

Step 3: Develop treatment matrix to incorporate all variations of the selected factors and levels (see Figure 2).

Figure 2: Treatment Matrix

The interaction effects (+) or (-) are determined by multiplying the (+) or (-) of the respective main effects. For example, the AB interaction effect (-) for treatment #3 is the result of the A main effect (+) multiplied by the B main effect (-). Keep in mind those old rules from algebra where: (+) x (+) = (+); (-) x (-) = (+); (+) x (-) = (-); and (-) x (+) = (-).

In some cases, an interaction effect (e.g., briefing book combined with the subdued color palette) may have more impact than either factor individually.
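
The sign rule can also be expressed in a few lines of code. The sketch below (assumed, not from the article) builds the interaction columns as elementwise products of the main-effect columns; the treatment ordering is an assumption, chosen so that treatment #3 reproduces the A (+), B (-) example above.

```python
# Sketch of the sign-multiplication rule: each interaction column is the
# elementwise product of its parent main-effect columns (+1/-1 = (+)/(-)).
from itertools import product

rows = [dict(zip("ABC", s)) for s in product((+1, -1), repeat=3)]
for i, r in enumerate(rows, start=1):
    r["AB"]  = r["A"] * r["B"]    # e.g., treatment #3: (+) x (-) = (-)
    r["AC"]  = r["A"] * r["C"]
    r["BC"]  = r["B"] * r["C"]
    r["ABC"] = r["A"] * r["B"] * r["C"]
    print(f"#{i}: {r}")
```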

Step 4: Develop a ratio rating scale (from 1 to 10), with 10 as the preferred dashboard alternative and 1 as the least attractive dashboard visual.

Because we will be focusing on the performance management needs of the senior executives, we will need to present the various dashboard options and record their reactions on a 1 to 10 scale.

Step 5: Randomize the treatment order and select eight senior executives to review dashboard visuals.

Step 6: Record the results from testing in the last column (Run x) of the treatment matrix (see Figure 3).

Figure 3: Treatment Results

In our example, the numbers in the last column (Run x) represent the results for the eight executives interviewed, expressed as average values.

Step 7: The same result numbers are then used to populate the entire matrix while retaining the existing (+) and (-) signs from the treatment matrix.

Step 8: Add the values in each column and put the appropriate result in the "sum" cell of the treatment matrix (e.g., for Factor A = +19).

Step 9: Divide the total estimated effect of each column by half the total number of treatments; this yields the average estimated impact between the (+) level and the (-) level (e.g., for Factor A = +19/4 = 4.75).
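
For readers who want to verify Steps 7 through 9, the Python sketch below recomputes the column sums and effects. The individual Run x averages from Figure 3 are not reproduced in this excerpt, so the values used here are reconstructed from the sums and effects quoted in this article (e.g., +19 and 4.75 for Factor A).

```python
# Sketch of Steps 7-9. The run averages are reconstructed so that they
# reproduce the sums and effects quoted in the article (Figure 3 itself
# is not reproduced here).
from itertools import product

runs = [9, 8, 6, 5, 2, 4, 1, 2]   # reconstructed Run x averages, treatments #1-#8

rows = [dict(zip("ABC", s)) for s in product((+1, -1), repeat=3)]
for r in rows:
    r["AB"], r["AC"], r["BC"] = r["A"]*r["B"], r["A"]*r["C"], r["B"]*r["C"]
    r["ABC"] = r["A"] * r["B"] * r["C"]

for col in ("A", "B", "C", "AB", "AC", "BC", "ABC"):
    total = sum(r[col] * y for r, y in zip(rows, runs))  # Step 8: signed column sum
    effect = total / (len(runs) / 2)                     # Step 9: divide by N/2 = 4
    print(f"{col}: sum = {total:+}, effect = {effect:+.2f}")
# Expected output includes A: sum = +19, effect = +4.75
```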

With the grunt work calculations completed, the case analysis becomes much more interesting.

Step 10: Determine the factors (or combination of factors) which have the greatest impact by looking at the magnitude of their respective averages.

From the treatment results, we can rank the resulting averages from high to low.

Impact Levels

  • Main effect (factor A): 4.75
  • Main effect (factor B): 2.25
  • Interaction effect (factors A & C): 1.25
  • Interaction effect (factors A & B): 0.75
  • Interaction effect (factors A, B & C): 0.25
  • Main effect (factor C): -0.25
  • Interaction effect (factors B & C): -0.25

We can now see the impact, but how can we tell whether these results are statistically significant or just random variation? At this juncture, we need to develop confidence limits that will assist us in making this decision. Because the last three impacts (ABC interaction, C main effect and BC interaction) do not seem to have much effect, we can use them to estimate our error. This also gives us degrees of freedom (DF) equal to 3.

Step 11: Determine the acceptance criteria, a 95% confidence level, and calculate the confidence limits (see Figure 4).

Figure 4: Confidence Limits

The formula in Figure 4 is a simple calculation that incorporates the confidence level, DF and error terms to determine the confidence limits. Revisiting the Student's t distribution table for 3 DF and a 95% confidence level gives a value of 3.18. This is multiplied by the error estimate to obtain a confidence limit of ±0.795.
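
A sketch of the Step 11 arithmetic follows. The article does not specify an implementation; pooling the three negligible effects as a root mean square error estimate is an assumption, but one that reproduces the ±0.795 limit.

```python
# Sketch of Step 11 (assumed implementation; reproduces the article's numbers).
from scipy import stats

negligible = [0.25, -0.25, -0.25]   # ABC, C and BC effects
df = len(negligible)                # 3 degrees of freedom
# Assumed pooling: root mean square of the negligible effects as the error.
error = (sum(e * e for e in negligible) / df) ** 0.5   # 0.25
t_crit = stats.t.ppf(0.975, df)     # two-sided 95% critical value, about 3.18
limit = t_crit * error
print(f"t = {t_crit:.2f}, confidence limit = +/-{limit:.3f}")
# Prints +/-0.796; the article rounds t to 3.18, giving +/-0.795.
```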

Step 12: The main effect and interaction effect impacts can then be compared to the confidence limits to determine which effects are statistically significant.

From the impact levels listed previously, it is clear that the factor A main effect, the factor B main effect, and the A and C interaction effect are statistically significant. In plain English, this means the executives felt that the framework, the color palette and the framework/graph-type combination made the biggest difference in the visuals.
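
In code form, Step 12 reduces to a simple magnitude comparison. A self-contained sketch using the published numbers:

```python
# Sketch of Step 12: flag effects whose magnitude exceeds the confidence limit.
effects = {"A": 4.75, "B": 2.25, "C": -0.25, "AB": 0.75,
           "AC": 1.25, "BC": -0.25, "ABC": 0.25}   # from Step 9
limit = 0.795                                       # from Step 11
significant = {name: e for name, e in effects.items() if abs(e) > limit}
print(significant)   # {'A': 4.75, 'B': 2.25, 'AC': 1.25}
```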

Step 13: Decide which visual options were preferred.

From the treatment effects in Figure 3, we can draw several conclusions about the preferred visuals. Summing the run ratings at the (+) and (-) levels of each significant main effect and interaction effect shows the following (a sketch of this tally appears after the list):

Main Effect and Interaction Effects

  • Factor A (Framework): Briefing Book (28) / Portfolio Chart (9)
  • Factor B (Palette): Subdued Colors (23) / Vivid Colors (14)
  • Interaction AC (Framework/Chart Type Combination): Briefing Book/Stacked Bar (21) / Portfolio Chart or Radar Chart (16)
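
The tally itself is a one-pass sum over the treatment matrix. This sketch reuses the reconstructed run averages from the Steps 7-9 example and reproduces the level totals quoted above.

```python
# Sketch of Step 13: total the run ratings at each level of the
# significant effects (reconstructed runs, same treatment order as above).
from itertools import product

runs = [9, 8, 6, 5, 2, 4, 1, 2]
rows = [dict(zip("ABC", s)) for s in product((+1, -1), repeat=3)]
for r in rows:
    r["AC"] = r["A"] * r["C"]

for col in ("A", "B", "AC"):
    plus  = sum(y for r, y in zip(rows, runs) if r[col] > 0)
    minus = sum(y for r, y in zip(rows, runs) if r[col] < 0)
    print(f"{col}: (+) level total = {plus}, (-) level total = {minus}")
# Expected: A: 28 vs 9, B: 23 vs 14, AC: 21 vs 16
```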

The bottom line is that the executives prefer the briefing book framework, the subdued color palette and the briefing book/stacked bar framework/graph-type combination.

The result of this analysis can prove extremely helpful in designing the optimum visuals for the executive dashboard. Less time and fewer resources can now be spent developing visuals, and more effort can be devoted to content development for the dashboard. The same approach can be replicated to determine the optimum dashboard visuals for LOB or functional managers. The pièce de résistance is that the final dashboard visuals meet or exceed the tailored needs of the specific business user.
