We spend an inordinate amount of time and resources simply reporting the obvious data within our organizations. Is it accurate to associate reporting the obvious with the notion of business intelligence (BI)? The short answer is no.
BI is about actionable insight, supporting better decision making by identifying business opportunity or challenge and adapting to business change. Being able to achieve BI requires architects to go well beyond merely reporting the obvious data about business operations. Publishing pie charts of aging reports or inventory turns is simply not BI.
Building systems to support better decision making requires BI architects to answer two questions:
What is the decision-making process? There exists a deep, mature body of research regarding decision support. The notion of helping organizations make better decisions and become more efficient is hardly new. To effectively implement systems that support the decision-making process of your user communities requires that you understand the decision support subject area.
How can the BI environment make that process better? Once BI architects have a grasp of decision support, it is important that they answer two subordinate questions:
- What are the most predominant decision-making process patterns used in my organization?
- What technologies and techniques can I implement in my BI architecture to support those decision-making patterns?
It may shock some readers that not all user communities follow the same process path to make decisions, and not all decision-making requirements call for the same process. In fact, if architects truly want to support better decision making, they should define the decision-making process for each BI requirement they design and implement. This is done to some extent when concepts such as collaboration and workflow enter into our analysis.
To understand the BI architect's role in decision support, we must first understand what is involved in the decision-making process. Although there are many models that attempt to formalize this process, this article has chosen Rasmussen's Decision Ladder because it is closely associated with a growing body of research attempting to model the human-machine interaction necessary for today's complex, real-world systems. Rasmussen defines eight steps in the decision-making process, as illustrated in Figure 1.¹
Figure 1: Decision Process
Assuming that these eight steps accurately summarize the decision process, BI architects must ask how much of this process can be addressed by the BI infrastructure we implement for our organizations.
For example, the diagram on the right in Figure 2 places the majority of the decision process within human intervention. The machine's activity is limited to the more traditional role of activating the process once an event is detected, interpreting and predicting the impact of the event, and finally executing the plan chosen through human intervention. Although this balance may be effective in some cases, it simply does not scale. When we increase the data volume for a decision and couple that with real-time or near real-time analysis requirements and a fairly complex problem, leaving most of the analytic processing to a human exposes the organization to very high risk. The diagram on the left in Figure 2, by contrast, requires more machine participation in the decision-making cycle. This configuration affords far greater scalability.
Figure 2: Control Level Based on MIQ
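The trade-off between these two configurations can be sketched as a simple routing table in Python: which actor owns each decision step. The step names and the human/machine split below are illustrative assumptions, not Rasmussen's exact labels.

```python
# Illustrative subset of decision steps; names are assumptions, not
# Rasmussen's exact labels. This split mirrors the low-automation
# configuration: the machine detects, interprets, predicts and executes,
# while the human evaluates and chooses the plan.
LOW_AUTOMATION = {
    "detect": "machine",      # activate the process once an event is detected
    "interpret": "machine",   # interpret the event
    "predict": "machine",     # predict the event's impact
    "evaluate": "human",      # weigh the alternatives
    "choose_plan": "human",   # commit to a course of action
    "execute": "machine",     # carry out the chosen plan
}

def route(step, allocation):
    """Return which actor handles a given decision step (default: human)."""
    return allocation.get(step, "human")

assert route("interpret", LOW_AUTOMATION) == "machine"
assert route("choose_plan", LOW_AUTOMATION) == "human"
```

Raising the environment's MIQ then amounts to moving entries from "human" to "machine," as in the left diagram, while keeping humans in the loop where judgment is genuinely required.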
To illustrate this point, let's examine a modern nuclear reactor, a complex system requiring sophisticated controllers to ensure high performance and operational safety. The ever-increasing demand for more stringent operational requirements such as reliability, lower environmental impact and improved performance, coupled with the complexity and uncertainty of the scale and scope of the effort (the data and analytic domain), necessitates the use of autonomous, machine-based intelligent control methods. Most researchers agree that the tasks involved in nuclear reactor design and operation involve important human cognition (thinking, learning and adapting) and decision-making processes that are better handled by expert systems, fuzzy logic, neural networks and genetic algorithms. In other words, because the reactor environment is so complex and the analytic domain necessary to operate it is so vast, it is better to leave much of the decision-making process to computers.
By combining conventional control systems and machine intelligent components (fuzzy logic, neural networks, genetic algorithms and expert systems), we can achieve higher degrees of performance in reactor operations such as reactor startup, emergency shutdown, fault detection and diagnosis, and alarm processing and diagnosis. With the advancement of new technologies and computing power, it is feasible to automate most nuclear reactor control and operation, resulting in increased safety and economic benefits.²
Many individuals may consider running a nuclear reactor and managing a global company 24x7 to be activities of significantly different complexity. Granted, if a nuclear reactor has trouble, lives are at stake, which is not the case for most companies. On the other hand, when poor decisions are made in these large companies, millions of dollars vanish, jobs are lost, market shares shrink and stock prices tumble. It is frightening how quickly the fall of one domino cascades into the next.
Your Machine Intelligence Quotient (MIQ)
There was a revival of artificial intelligence (AI) and intelligent systems in the late 1990s. This was a direct result of leading information-processing systems exhibiting the ability to reason, adapt and learn - performing rational decision making with little or no human intervention - essentially, smart systems.
Professor Zadeh is credited with coining the term "MIQ" (machine intelligence quotient) to describe a measure of the intelligence of man-made systems. Zadeh believes systems exhibiting high levels of MIQ are mostly hybrid intelligent systems, combining hard and soft techniques, often implemented as a network of agents.
Financial investment planning might use neural networks to watch for patterns in the stock market, genetic algorithms to predict interest rates, or fuzzy logic (for approximate reasoning) to determine client risk tolerance with regard to financial investments. Each of these - neural networks, genetic algorithms and fuzzy logic - is considered a soft computing technique. There are, of course, hard computing techniques that can be applied to complex problems as well, such as business rules engines and expert systems. Those systems with the highest MIQ combine hard and soft techniques with traditional reporting and process control to achieve the promise of actionable insight.
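To make the fuzzy logic case concrete, here is a minimal Python sketch of approximate reasoning about client risk tolerance. The membership functions, input variables and category rules are invented for illustration; a production system would calibrate them with a domain expert.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_tolerance(age, savings_ratio):
    """Fuzzy degrees of membership in three hypothetical risk categories."""
    young = tri(age, 18, 25, 45)                    # how "young" the client is
    cushioned = tri(savings_ratio, 0.1, 0.5, 1.0)   # how well cushioned financially
    return {
        "aggressive": min(young, cushioned),          # young AND cushioned
        "moderate": max(young, cushioned) / 2,        # partial evidence either way
        "conservative": 1.0 - max(young, cushioned),  # neither factor applies
    }

# The output is a set of degrees, not a single crisp label - that is the
# essence of approximate reasoning.
profile = risk_tolerance(age=30, savings_ratio=0.4)
assert profile["aggressive"] > profile["conservative"]
```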
There is considerable debate surrounding machine intelligence - everything from the original Turing test to an array of sophisticated indexing, numerical and linguistic frameworks attempting to quantify machine intelligence.³,⁴ MIQ is a measure, a means to assess and quantify the intelligence of an autonomous system. Immersing ourselves in that debate is beyond the scope of this article. Instead, we will focus on using a simple inventory of techniques and technologies associated with machine intelligence to achieve two objectives:
- To illustrate the various components important to machine intelligence.
- To provide a means for readers to quickly, quantifiably estimate the level of MIQ in their own environments.
From the perspective of a BI architect, measuring the MIQ of your BI environment starts with a simple inventory of the technologies and techniques integrated to support better decision-making processes for your user communities. Figure 3 contains four elements that help quantify the level of machine intelligence in your BI system. Next to each element are a few examples that indicate the application of the techniques or the existence of the necessary infrastructure and the autonomy of its agents.
Figure 3: MIQ Inventory
Readers should quickly be able to discern the level of MIQ in their organizations by examining the tangible components that serve as measurable evidence of machine IQ. If you are not exploiting any of them, chances are that your environment is more of a passive repository, reporting obvious operational data to users. It may efficiently do so, but it can hardly be described as providing business intelligence. On the other hand, if you have or are implementing three or four of the components, then your MIQ is quite high, especially if you are implementing some component from each of the elements listed. A high MIQ does not guarantee that your BI environment is providing the actionable insight to all user communities, but it does suggest that you have the necessary infrastructure and foresight to do so.
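A back-of-the-envelope version of this inventory can be scored mechanically. The element and component names below are illustrative stand-ins for the entries of Figure 3, and the equal weighting is an assumption, not a published MIQ formula.

```python
# Hypothetical element/component names standing in for Figure 3's entries.
MIQ_ELEMENTS = {
    "soft computing": {"neural networks", "genetic algorithms", "fuzzy logic"},
    "hard computing": {"expert systems", "business rules engine"},
    "infrastructure": {"event detection", "real-time messaging"},
    "autonomy": {"agent network", "automated response"},
}

def miq_score(implemented):
    """Fraction of the four elements with at least one implemented component."""
    covered = sum(1 for components in MIQ_ELEMENTS.values()
                  if components & implemented)  # set intersection: any overlap?
    return covered / len(MIQ_ELEMENTS)

# A passive reporting shop scores 0.0; a shop exploiting a component from
# every element scores 1.0.
assert miq_score({"fuzzy logic", "expert systems"}) == 0.5
```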
Implementing Machine Intelligence Using Agents
It is easy to think of agents in the context of artificial intelligence or robotics; however, an agent is any object that serves a purpose. For example, my coffee cup is an agent because it holds the coffee I drink every morning. The same coffee cup can be used to store my pens and pencils, which means that it is still serving a purpose and therefore can be considered an agent.
Agents can play a range of roles. My coffee cup's role is passive, and its goals are imposed by me. That is not necessarily true of a robot. A robot may be capable of proactively manipulating its environment in the quest of achieving its goals.
Agent techniques are well suited for implementing the complex systems necessary to achieve a higher level of MIQ. An agent is an encapsulated application capable of autonomous action. It can be implemented as a standalone program invoked by traditional applications or as a component of a larger hierarchy or network of agents.
Characteristics of agents include such things as persistence, autonomy, goal-orientation, reasoning and the ability to communicate. Agents are able to essentially think, learn, remember and adapt (cognitive functions) to their environment while being productive, achieving the task they've been designed for.
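A minimal Python sketch of these characteristics - persistence (memory), autonomy and goal-orientation - might look like the following. The class and its interface are invented for illustration, not a standard agent API.

```python
class ThresholdAgent:
    """A tiny goal-oriented agent: watch a metric, alert when the goal is violated."""

    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.memory = []  # persistence: the agent remembers what it has seen

    def perceive(self, value):
        """Observe the environment and remember the observation."""
        self.memory.append(value)

    def act(self):
        """Autonomous decision: alert only when the latest reading breaches the goal."""
        if self.memory and self.memory[-1] > self.threshold:
            return "%s: threshold %s exceeded" % (self.name, self.threshold)
        return None

watcher = ThresholdAgent("inventory-watch", threshold=100.0)
watcher.perceive(80.0)
assert watcher.act() is None      # goal satisfied, no action
watcher.perceive(120.0)
assert watcher.act() is not None  # goal violated, agent acts
```

A network of such agents, each encapsulating one narrow goal and communicating results upward, is one simple way to compose the larger hierarchies described above.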
Returning to Rasmussen: he defined three layers of cognitive control that can be used in the decision-making process - skill, rules and knowledge. Depending on the situation, humans can operate at any level and even jump between them. Agent technology may not be as nimble as humans at jumping between levels; however, a network of agents can potentially achieve the same flexibility. Figure 4 defines each of these layers in terms of humans and agents (machine intelligence).
Figure 4: Human-Machine Interaction
The theory is that humans oscillate between intuition and analysis. During the intuitive phase, there is a low level of cognitive control: an unconscious, rapid processing of the situation, with a recommendation made with high confidence. When experts are forced to make quick decisions, they rely on pattern recognition and associate the pattern with a known course of action. Consequently, in complex, unfamiliar situations, intuition is quite often wrong. At the other end of the spectrum, the analytic process requires a high level of cognitive control, but it is considerably slower and the solution is often chosen with a low level of confidence.
There is plenty of room for your BI environment to improve the performance of decision making. For instance, decision tables and decision trees (not the mining form, but a decision process flow) organize alternatives, probabilities and possible outcomes. Expert systems and business rules engines encapsulate subject matter expertise that can then be applied on demand. Essentially, your BI environment can extend the cognitive decision-making abilities of your user communities in a number of ways:
- Intelligently search for information that enhances, complements and supplements the subject area.
- Highlight cues (information) pertinent to the alternatives under consideration.
- Warn of value changes.
- Identify discrepancies and outliers.
- Provide visualization of the data domain and possible solution set.
- Make predictions of outcomes, probabilities, costs and benefits.
- Suggest solution sets.
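The decision-table idea mentioned above can be sketched in a few lines of Python: each alternative carries probability/payoff pairs for its possible outcomes, and the environment suggests the alternative with the highest expected value. All figures here are invented for illustration.

```python
# A toy decision table: alternative -> list of (probability, payoff) outcomes.
# The alternatives and numbers are hypothetical.
decision_table = {
    "expand inventory": [(0.6, 250_000), (0.4, -100_000)],
    "hold steady":      [(1.0, 40_000)],
}

def expected_value(outcomes):
    """Probability-weighted sum of payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

def suggest(table):
    """Suggest the alternative with the highest expected value."""
    return max(table, key=lambda alt: expected_value(table[alt]))

assert expected_value(decision_table["expand inventory"]) == 110_000
assert suggest(decision_table) == "expand inventory"
```

This is the "suggest solution sets" role in miniature: the machine organizes the alternatives and ranks them, while the human retains the final choice.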
Two well-recognized examples of machine intelligent agent technology working for us today are grid computing and the Internet itself. A grid system uses multiple agents working together toward a single goal, exploiting available computer resources for high-performance computing. Within the Internet, agents have been used for years in intelligent searches and information-gathering activities.
Today's competitive, ever-changing business climate makes greater demands on the decision-making processes of our organizations. To remain competitive means making better decisions more quickly. It dictates widening the scope and scale of the data domain, the analytic landscape and the technological infrastructure.
Efficiently churning out solutions to these real-world forms of complex problems has remained the exclusive domain of subject matter experts. Their vast experience has taught them to spot patterns in the complexity and associate existing strategies with those patterns. They alone have a proven track record of making quick decisions in complex situations. However, as critical as these experts are to our organizations, they also represent our Achilles' heel. The simple truth is that subject matter experts are rare. They cannot be in all places, addressing all problems. They cannot monitor our organizations 24x7. And, unfortunately, they are often wrong when confronted with new situations, where they fail to recognize the pattern or misapply a familiar one.
The relationship between human and machine intelligence must be the focus of BI architects going forward. Machine intelligence has proven itself the cornerstone of our ability to manage greater complexity, and the level of MIQ will serve as the benchmark for measuring the capability of our BI environments.
It is the responsibility of the chief BI architect to lobby for and establish an intelligent service layer in the BI infrastructure, using MIQ to separate the modern technologies that support this new service layer from archaic products. Machine intelligence tools and techniques, implemented as a multi-agent network, will provide the decision support platform promised by BI teams and envisioned by company executives. Increasing the MIQ of BI environments will be the clarion call for architects.
1. Rasmussen, Jens. Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. New York: Elsevier Science Inc., 1986.
2. Basher, H. and Neal, J.S. Autonomous Control of Nuclear Power Plants.
3. Turing, Alan. "Computing Machinery and Intelligence." Mind, Vol. 59, No. 236, 1950, pp. 433-460.
4. Park, Hee-Jun. "Measuring the Machine Intelligence Quotient (MIQ) of Human-Machine Cooperative Systems." IEEE Transactions on Systems, Man, and Cybernetics, Vol. 31, No. 2, March 2001.