According to IDC, approximately $44 billion was spent worldwide on customer relationship management (CRM) initiatives in 2000. How is it possible that after billions of dollars invested to improve customer relationships, the average customer feels that enterprise-wide CRM is worse than ever?
Consider these examples:
- You call your credit card company and the cold, computerized voice says: "Please enter your account number, followed by the pound sign." So you do. Ten minutes later, when you finally make it through to a real person, what's the first question they ask? "What's your account number?"
- You sign up for a frequent flyer program online. A few flights later, you have some questions about the program. After trying in vain to glean the answers from volumes of information on the Web site, you send an e-mail but only receive a vague, automated response. So you place a call, but the call center attendant says you don't exist.
Why do these disconnects occur? The answer is actually quite simple: lack of true enterprise integration.
As today's corporate environment becomes more competitive, companies cannot afford to simply accept the limitations and disadvantages of offline data integration. They must find ways to build the intelligence they garner about customers into their operational environments, enabling them to fulfill customer demands at every touchpoint in a coordinated fashion. By operationalizing business intelligence (BI), not just data, forward-thinking companies can transform themselves into true customer-focused enterprises with distinct competitive advantages.
Operationalized Business Intelligence
Incorporating BI into the operational environment is the critical step for improving customer interactions. Through this process, a company can synchronize its enterprise around the customer rather than just synchronizing data around the customer.
The key to deploying BI is a flexible business rules management system that can house and deliver the rule results. This information hub becomes the marketing coordination system across the enterprise, housing all activities related to customers. Because the scores, rules and other intelligence are housed within the system and applied to the real-time data streams as they occur, the system does not rely on batch scoring or batch data feeds.
These business rules management systems have a variety of requirements, including:
- High-performance and scalable architectures to support large transaction volumes.
- Intelligent processing in the real-time environment to look only at the data required by the business rules.
- Low-maintenance architecture that adapts to changes in data attributes and business rules.
- Facilities for analysts to create business rules easily and assure rule validity.
- Ability to deliver rule results in many forms across multiple channels.
- Capability to use existing operational data stores, data warehouses and data marts.
- Flexibility to cross-reference and reuse business rules.
- Capacity to verify existing rules quickly and easily when the underlying data architecture is modified.
In order to achieve truly intelligent enterprise customer management, the information architecture must seamlessly integrate business intelligence. Figure 1 illustrates the business needs associated with such an architecture, the information technology (IT) issues raised by those needs and the architectural imperatives necessary to accomplish true integration.
Load of Analysis Environment
The load process for the analysis environment has very specific business needs. The system must efficiently load detailed event, clickstream and other data into a database to build analytical models of consumer needs and profiles. The database needs to contain a rolling horizon of detail data for a sample of customers as well as summarized data for all customers and historical triggered events. This type of environment will allow database marketing analysts to develop rules to identify very targeted audiences for advertising and marketing campaigns, increasing the perceived value of the network for advertisers and increasing valued response for other users. The load of the analysis environment is portrayed in Figure 2 as a traditional batch load. The load may also be a continuous data stream through real-time data warehousing techniques.
Deployment of Knowledge
The deployment of business rules will allow knowledge gained through the analysis of customer data to have impact through targeted advertising or campaigns. Business rules are typically classified into five categories (see Barbara von Halle's article, "Building a Business Rules System," Part 2 of 5, DM Review, February 2001):
- Constraint: Test data values and restrict behavior.
- Guideline: Test data values and offer warnings.
- Computation: Arrive at a new data value by applying a formula to known data values or results from other rules.
- Inference: Arrive at a conclusion by testing conditions.
- Action-Enabling: Evaluate data values prior to initiating action.
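To make the five categories concrete, here is a minimal sketch of each as a small function. Every function name, field and threshold below is hypothetical, chosen only for illustration:

```python
from datetime import date

def constraint(order_total):
    """Constraint: test data values and restrict behavior."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")

def guideline(order_total):
    """Guideline: test data values and offer a warning, not an error."""
    if order_total > 10_000:
        return "warning: unusually large order"
    return None

def computation(unit_price, quantity):
    """Computation: arrive at a new value by applying a formula."""
    return unit_price * quantity

def inference(annual_spend):
    """Inference: arrive at a conclusion by testing conditions."""
    return "high-value" if annual_spend > 5_000 else "standard"

def action_enabler(last_purchase, today):
    """Action-enabling: evaluate data values prior to initiating action,
    e.g., a win-back offer after 90 days of inactivity."""
    return (today - last_purchase).days > 90
```

The distinction that matters in practice is the consequence of each rule: constraints reject data, guidelines annotate it, computations and inferences derive new values, and action enablers initiate downstream events.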
For the purposes of this article, we have proposed additional definitions of the rules to illustrate typical real-time triggers in a marketing context.
- Simple Action Enabler: Rule using criteria such as counting to trigger a positive response (e.g., if these three events occur within a time period of three days, then trigger event A).
- Sequencing Action Enabler: Rule that requires events to occur in a specific order to trigger a positive response (e.g., if these three events occur in this order, trigger event B).
- Population Computation: Segmentation, valuation and profiling techniques that compute a value or attribute for a customer based on a comparison to other customers (e.g., customer A is rank X in the frequent, high-value retail shopper segment).
- Customer Computation: Equations developed by calibration on historical data yielding a model result per customer (e.g., value of customer Y is $232/year).
- Composite Computation: Use of multiple rules to yield a new result (e.g., if these three events occur and the customer is in the top 10 percent of the high-value retail shopper segment, then trigger event C).
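As an illustration of one of these trigger types, a sequencing action enabler could be sketched as follows. This is a simplified, in-memory version; the function name and event representation are assumptions for the example:

```python
from datetime import timedelta

def sequencing_action_enabler(events, required_order,
                              window=timedelta(days=3)):
    """events: list of (name, timestamp) pairs.
    Returns True when required_order appears as an ordered subsequence
    of the time-sorted events, with first and last matching events
    falling within the window."""
    idx = 0          # position in the required sequence
    first_ts = None  # timestamp of the first matched event
    for name, ts in sorted(events, key=lambda e: e[1]):
        if idx < len(required_order) and name == required_order[idx]:
            if idx == 0:
                first_ts = ts
            idx += 1
            if idx == len(required_order):
                return ts - first_ts <= window
    return False
```

Note that a simple action enabler would only count occurrences, whereas this version also enforces the order in which the events arrive.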
The rules will be created either by using the editor or by importing rules from integrated analysis applications. The business rules editor provides two very important features: verification that all rules created are valid and validation of existing business rules in the event of a change in any of the data stores used by the business rules. To achieve this type of verification and validation, a meta data layer is housed within the business rules repository and used by both the rule triggering engine and the business rules editor.
Operationally Optimized Systems
The real-time calculation of business rules allows companies to have a timely impact on the customer's behavior by delivering an event at the right time. During a periodic process, the business rules repository pre-calculates as much information as possible and provides it to the rule triggering engine. Then, as information arrives at the engine through messaging, events are triggered as appropriate. For example, assume the rule is "Trigger campaign A when events A, B and C occur within 20 days." During the initialization phase, the counts for customer A showed that events A and B had occurred, but event C had not. The rule triggering engine then "sniffs" the incoming data intelligently, looking only at the attributes needed to determine event C. When event C occurs, the rule triggering engine sends a message to the rule triggering interface, allowing fulfillment of campaign A.
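The pre-calculation-plus-sniffing behavior described above might be sketched as follows. The class name, message format and in-memory state are all assumptions; a production engine would persist its state and handle many rules at once:

```python
from datetime import timedelta

REQUIRED = {"A", "B", "C"}        # events the rule watches for
WINDOW = timedelta(days=20)       # rule's rolling time window

class RuleTriggeringEngine:
    def __init__(self, precomputed):
        # precomputed: {customer_id: {event_name: timestamp}}, built by
        # the periodic initialization pass over historical detail data
        self.seen = precomputed

    def on_message(self, customer_id, event_name, ts):
        """Inspect only attributes the rule needs; ignore everything else."""
        if event_name not in REQUIRED:
            return None  # the "sniff": irrelevant data is discarded cheaply
        events = self.seen.setdefault(customer_id, {})
        events[event_name] = ts
        # Fire only when all required events fall within the window.
        if REQUIRED <= events.keys():
            if max(events.values()) - min(events.values()) <= WINDOW:
                return f"trigger campaign A for {customer_id}"
        return None
```

The key efficiency point is the first test: because the rule's required attributes are known up front, the engine can reject most of the incoming stream without touching its state store.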
The architecture depicted in Figure 2 facilitates the three critical processes that transform corporate data into actionable knowledge, thereby "operationalizing" the business intelligence.
Detail Behavior Data: Data provided by ongoing operations from all channels. Examples include banking platforms, credit card systems, retail point of sale (POS), Web site data, etc.
Business Rules Repository: Repository where all business rules are stored. The business rules editor allows rules to be "published" into the business rules repository. From there, the rule triggering engine, initialization database and ETL processes can reference the rules and integrate them into their processing. Included in the business rules repository is a meta data layer housing information about all data access methods and data models within the information architecture. In this way, the processes utilizing the business rules repository can properly access and utilize information from the entire information architecture, thereby removing the need for redundant data except for performance and availability requirements.
Rule Initialization Database: The initialization database contains a rolling horizon of detail data about all customers, with a data model identical to the sample database in the analysis environment. The purpose of this data store is to provide a way to initialize rules that reference detail behavior across a time horizon. For example, if a rule is deployed that says, "Trigger campaign A when event A, event B and event C occur within 20 days," the initialization data store allows the behavior over the last 20 days to be summarized and counted rather than waiting 20 days from initial deployment for the rule to be fully populated. The option exists to create triggers if the event criteria were met in the time horizon covered by the initialization database or to just summarize events without triggering. Following are two examples using this rule to help better describe this behavior:
- Trigger historical events: Within the initialization database, count the number of times each of the three events has occurred within the past 20 days for each customer. If a customer has met the criteria, notify the rule triggering engine to take the appropriate action (actually trigger the event). While initializing, dynamically create a database housing the results of initialization for each customer and hand off to the ongoing update process.
- Don't trigger historical events: Within the initialization database, count the number of times each of the three events has occurred within the past 20 days for each customer. Even if a customer has met the criteria, take no action. While initializing, dynamically create a database housing the results of initialization for each customer and hand off to the ongoing update process.
The purpose of having these two options is to provide the ability to make new rules retroactive for the period of the rolling horizon, in this case, 20 days.
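The two initialization options can be captured in a single routine with a flag. This is an illustrative sketch; the function name, data shapes and the fixed A/B/C rule are assumptions made for the example:

```python
from datetime import timedelta

def initialize(history, as_of, required=("A", "B", "C"),
               window=timedelta(days=20), trigger_historical=True):
    """history: {customer_id: [(event_name, timestamp), ...]}.
    Returns (state, triggers): per-customer counts of the required events
    within the rolling window, handed off to the ongoing update process,
    plus the customers to notify. When trigger_historical is False, the
    counts are still built but no retroactive triggers are produced."""
    state, triggers = {}, []
    for customer, events in history.items():
        counts = {}
        for name, ts in events:
            if name in required and as_of - ts <= window:
                counts[name] = counts.get(name, 0) + 1
        state[customer] = counts
        if trigger_historical and all(counts.get(n, 0) > 0 for n in required):
            triggers.append(customer)
    return state, triggers
```

Either way, the returned state is what lets the rule triggering engine pick up mid-window rather than starting its counts from zero.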
The initialization process can place heavy demands on hardware platforms. Based on Daman's experience, the solution could take many forms, including near-line or offline storage systems, distributed parallel processing, etc. By using the meta data layer within the business rules repository, data may be distributed in a cost-effective manner, allowing timely deployment of rules to address any business needs that arise.
Raw Event Logs: Raw data on triggered events is provided by the rule triggering engine. During this process, it is transformed and loaded into the relational environment of the analysis database, either in real time or on a batch basis.
Extraction, Transformation and Load (ETL): Takes the incoming data from the daily load files, applies the business rules and loads it into the relational database supporting the analysis environment. It is important that rules on derived attributes used in the real-time engine are the same as those used in the ETL process. This provides a single point of management for computation of derived attributes and/or data transformations.
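The single-point-of-management idea for derived attributes might look like the sketch below: one registry of derivation functions, consulted unchanged by both the batch ETL path and the real-time engine. All names here are hypothetical:

```python
# Registry mapping derived-attribute names to their derivation functions.
DERIVED_ATTRIBUTES = {}

def derived(name):
    """Decorator registering a derivation under a single shared name."""
    def register(fn):
        DERIVED_ATTRIBUTES[name] = fn
        return fn
    return register

@derived("order_value")
def order_value(record):
    # One formula, defined once, used by batch ETL and real-time alike.
    return record["unit_price"] * record["quantity"]

def apply_derivations(record):
    """Return the record extended with every registered derived attribute.
    Called identically from the ETL load and the rule triggering engine."""
    return {**record, **{n: f(record) for n, f in DERIVED_ATTRIBUTES.items()}}
```

Because both paths call the same registry, a change to a formula cannot leave the offline analysis and the real-time triggers computing different values for the same attribute.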
Analysis Database: Relational database system housing three major categories of data: detail data on a sample of customers, summarized historical data for all customers and campaign data. The analysis database provides the input to the analysis tools to derive knowledge about the customers. The business rules repository feeding the ETL process provides derived attributes for the analysis database. This process also assures consistency between offline analysis and rules used to trigger real-time events. Addition of derived attributes after initial implementation is handled through the dynamic creation of tables and updates to appropriate meta data. This can all be handled as an automated process.
Analysis Environment: The analysis environment is a set of tools used to mine information from the analysis database. Typically there are three classes of tools in this environment: reporting, modeling and rule deployment. The reporting and modeling tools are off-the-shelf applications such as MicroStrategy, Brio, SAS, SPSS, etc. The business rules editor is used to create the business rule in the rules repository, either by publishing objects created by the mining tools or by allowing analysts to create combinations of rules based on coded action-enablers and core rules.
Rule Triggering Engine (RTE): The RTE manages the pre-calculation of business rules and the incorporation of additional data as it comes in from messaging or detail data updates. The RTE maintains its own data store of pre-computed values and handles the application of the business rules. It maintains "copies" of the business rules so it can immediately apply the rules to selected incoming data. On a scheduled basis, it will update the pre-computed information using the daily log files, the initialization database and other data stores as appropriate.
Behavior Trigger Interface (BTI): The BTI provides the interface into triggered rules. The BTI can range from a simple service to a service integrated into transaction and load-balancing software.
Fulfillment Database: The BTI can provide a trigger to a fulfillment database that contains the full information on the marketing promotion that is to be delivered. This may be another network, Web site, legacy system, etc. By keeping this system separate from the underlying architecture, it is more easily maintained by third parties or other marketing personnel without impacting the underlying triggering mechanisms.
Other Marketing Stakeholders: In the most generic sense, the BTI is a messaging hub that can have information requested or delivered through any network or API. The interfaces are typically for those who are either managing the customer interface or providing fulfillment of the message.
Of course, the implications of true enterprise integration go far beyond customer relationship management, although CRM applications are perhaps the easiest to grasp. Indeed, true enterprise integration benefits the entire enterprise, facilitating sales force automation, vendor relations, procurement; the list goes on and on. The key is the integration of business intelligence and the management of targeted business rules.
All of this takes on even greater importance during an economic downturn. In such a climate, enterprise efficiency provides an avenue for retaining profitability without shedding personnel en masse or shuttering whole operations. The bottom line is this: Corporations that take advantage of today's integration technologies and tactics will probably survive; those that don't will likely falter.