CPG Leverages Customer Data

  • February 1, 2007, 1:00am EST

DM Review welcomes Tony Politano as a new monthly columnist.

Consumer packaged goods (CPG) companies manufacture and sell products to the general consumer via the retail chain. CPG companies therefore market to two levels of customers, with retailers such as Wal-Mart and Target representing the first level and the individual consumer representing the second.

The duality of its customer base and its disparate data sources make maximizing customer data in business decisions very difficult for a CPG company. Like any business, a CPG company wants to better understand its customers' needs and buying habits so that it can effectively target product design and distribution. In the short term, CPG companies need to match supply with demand, accurately gauge future demand and seize growth opportunities. In the long term, they want to leverage customer buying habits, project future needs and use the data to drive development, sales, marketing and distribution.

CPG companies therefore need specific demographic information as well as overall sales data. In a recent interview, an executive at a global CPG company aptly summed up the problem: "I know I can get data about how my products are being sold at various retailers, such as Wal-Mart and Target. I also subscribe to syndicated data from Nielsen and IRI. My problem is that I cannot analyze this data in concert. Instead, I need to do individual analysis and then combine this with data from our sales system in order to answer a simple question: are my sales to retailers aligned with the sales and demand of consumers?"
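
In code, that "simple question" might look like the following minimal sketch. It assumes two hypothetical weekly extracts (the file and column names are invented for illustration): internal shipments to retailers from the sales system, and consumer sell-through from a syndicated or retailer point-of-sale (POS) feed.

```python
import pandas as pd

# Hypothetical extracts; file names and columns are illustrative only.
shipments = pd.read_csv("shipments.csv")        # product_id, week, units_shipped
sell_through = pd.read_csv("sell_through.csv")  # product_id, week, units_sold

# Put retailer sales and consumer demand side by side on a shared key.
aligned = shipments.merge(sell_through, on=["product_id", "week"], how="outer")

# A ratio near 1.0 means shipments to retailers track consumer demand;
# well above 1.0 suggests inventory piling up in the channel.
aligned["ship_to_sell_ratio"] = aligned["units_shipped"] / aligned["units_sold"]
print(aligned.sort_values("ship_to_sell_ratio", ascending=False).head())
```

The hard part, as the executive notes, is not the arithmetic but getting all three feeds onto that shared key in the first place.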

Traditionally, data syndication services such as Nielsen and IRI provided CPG companies with insight into consumer purchasing patterns. A CPG company could obtain sales data by accessing the syndicator's system through a Web site and running queries on its products' selling patterns. Now CPG companies can receive this data in house for a substantial fee. Large retailers are also contributing to these data analytics efforts by making their point-of-sale data available to CPG companies.

Despite the sizable fee, CPG companies still have to analyze large volumes of data themselves before they can derive the value they want. The problem compounds as they must analyze disparate syndicator and retailer data alongside data from their own standalone customer relationship management (CRM), sales force automation (SFA) and enterprise resource planning (ERP) systems.

CPG companies now want more value from the data at less cost, and they want IT to deliver it. One CPG executive requested simply, "a single point where the internal sales data, retailer data and syndicated data can display all sales, demand and consumption data." However, collecting, processing and analyzing customer data requires substantial resources, which the company may prefer to invest in growing its business.

Having it all doesn't mean the company should consolidate and process all of the data. Rather, it needs to take a hard look at the available data, identify what it wants to obtain and collaborate with IT to determine the best methods. In many cases, the magic wand question helps direct these efforts: "If we could wave a magic wand and get whatever sales information we needed from IT, what would it be?"

The answer may sound simple but seldom is, because business managers rarely understand the underlying complexity. Integrating data from multiple sources and presenting it logically, on demand, in a single interface is anything but simple.

"Proceed with caution" should be the data management (DM) professional's first order of business, as collecting and consolidating volumes of consumer data could quickly outgrow even the largest data warehouse projects.

Business strategy should drive these efforts. With help from senior management, DM should define the most valuable data and guide management through a divide, conquer and absorb approach: divide each data source and its supporting sources, identify exactly what to analyze and absorb the data into the larger model. Focus on category captains, letting business need dictate the process, with the highest-impact data (products or product types) being the first to grab and analyze, as sketched below.
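
As a hedged illustration of divide, conquer and absorb, the sketch below invents a short list of category captain SKUs and trims each source down to those products before absorbing the results into one working set. All names are hypothetical.

```python
import pandas as pd

# Hypothetical priority list: the category captains come first.
CATEGORY_CAPTAINS = {"SKU-1001", "SKU-1002", "SKU-1003"}

# Divide: each source is extracted and examined on its own.
sources = {
    "retailer_pos":   pd.read_csv("retailer_pos.csv"),    # sku, store, week, units
    "syndicated":     pd.read_csv("syndicated.csv"),      # sku, market, week, units
    "internal_sales": pd.read_csv("internal_sales.csv"),  # sku, customer, week, units
}

# Conquer: keep only the high-impact products for this iteration.
# Absorb: stack the trimmed extracts into the larger model, tagged by origin.
model_input = pd.concat(
    [df[df["sku"].isin(CATEGORY_CAPTAINS)].assign(source=name)
     for name, df in sources.items()],
    ignore_index=True,
)
```

Later iterations widen the SKU list or add sources; the structure of the process does not change.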

DM will still amass volumes of data from CRM, ERP and SFA systems, syndicated IRI and Nielsen services and retail POS feeds. The data model must grow horizontally to accommodate the number of sources and vertically to represent data granularity, although some dimensions can be reused across the sources. Time retention, history requirements and the chosen dimension method also increase volume. However, the model's granular data often delivers the strategic information that business leaders want, as it can track sales data down to the customer level and shipment data down to the warehouse or store level. Therefore, as new data increases granularity, the model must change to bridge the disjointed levels, adding complexity to the ETL process and the aggregation strategy.
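
To make the bridging concrete, here is a minimal sketch under invented names: store-level POS facts and warehouse-level shipment facts each roll up through a bridge table to a shared retailer grain, where they can finally be analyzed in concert.

```python
import pandas as pd

# Hypothetical bridge tables mapping each fact's native grain to a shared level.
store_to_retailer = pd.read_csv("store_bridge.csv")          # store_id, retailer_id
warehouse_to_retailer = pd.read_csv("warehouse_bridge.csv")  # warehouse_id, retailer_id

pos = pd.read_csv("pos.csv")              # store_id, sku, week, units_sold
shipments = pd.read_csv("shipments.csv")  # warehouse_id, sku, week, units_shipped

# Roll each fact up from its native grain to the conformed retailer grain.
pos_retail = (pos.merge(store_to_retailer, on="store_id")
                 .groupby(["retailer_id", "sku", "week"], as_index=False)
                 ["units_sold"].sum())
ship_retail = (shipments.merge(warehouse_to_retailer, on="warehouse_id")
                        .groupby(["retailer_id", "sku", "week"], as_index=False)
                        ["units_shipped"].sum())

# With both facts on one grain, retailer sales and shipments line up.
combined = pos_retail.merge(ship_retail, on=["retailer_id", "sku", "week"])
```

Every new, more granular source adds another bridge like these, which is exactly where the ETL and aggregation complexity accumulates.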

The business solution should be rolled out in an iterative fashion, which requires open and frequent communication between business and DM and forces data managers to deal directly with the syndicators. DM has to build a scalable architecture that can accept new and unanticipated data to support increases in granularity and volume; this could mean re-architecting the current data warehouse.
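
One hedged sketch of that flexibility, with an invented registry and column names: new feeds are accepted by registering a mapping onto the model's conformed names rather than by re-cutting the schema for every source.

```python
import pandas as pd

# Hypothetical source registry: onboarding a new feed means adding an entry,
# not restructuring the warehouse. Each entry maps source columns onto the
# model's conformed names.
SOURCE_REGISTRY = {
    "nielsen_feed": {"path": "nielsen.csv",
                     "rename": {"upc": "sku", "period": "week"}},
    "walmart_pos":  {"path": "walmart.csv",
                     "rename": {"item_nbr": "sku", "wm_week": "week"}},
}

def stage(source_name: str) -> pd.DataFrame:
    """Load one registered source and conform its column names."""
    spec = SOURCE_REGISTRY[source_name]
    df = pd.read_csv(spec["path"]).rename(columns=spec["rename"])
    df["source"] = source_name  # keep lineage for auditing each iteration
    return df

staged = pd.concat([stage(name) for name in SOURCE_REGISTRY], ignore_index=True)
```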

The business's thirst for data is high. DM has to manage risk by controlling scope, particularly the amount of outside data brought in per iteration. Business depends on data managers to navigate this highly complex domain.
