Measure by Measure

October 01, 2007, 1:00am EDT

Jim MacIntyre
Visual Sciences

Visual Sciences' CEO Jim MacIntyre launched his first software venture at age 17, which is not so long ago when you consider he just recently turned 40. An entrepreneur by nature, MacIntyre picked up a whole different set of business skills in his twenties working for the international holding company run by the Cisneros family. He learned CEO and director skills running various Latin American firms and later built early networks for the U.N. and Department of Energy. He went on to launch a pair of Internet service providers, which led him to the personalization craze of the 1990s and to targeted direct marketing, where he came to love the analytics side of the business. It all culminated with the founding of Visual Sciences, its February 2006 merger with WebSideStory and a fast-growing footprint, as MacIntyre recently explained to DM Review Editorial Director Jim Ericson.

DMR: What was the trend in marketing analytics coming out of the 1990s?
Jim MacIntyre: Through the '90s, companies had tried to integrate analytics into each different application. We found the core need was a strong analytical engine and a central, standardized method of bringing data in from multiple systems and then spitting the results of the analysis out to other systems. We founded Visual Sciences with the idea of taking data in from marketing and customer channels like phone, Web, agent management, email, point of sale and kiosk-type interfaces and being able to turn around very specific information to those systems. "If you see this visitor again, offer them X instead of Y." In 2000, we began building that next-generation analytics capability.
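A minimal sketch of that hub-and-spoke idea, in Python, with invented names (ChannelEvent, AnalyticsHub) and a toy targeting rule; the actual engine is proprietary and far more elaborate, so treat this only as an illustration of the pattern:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ChannelEvent:
    """One interaction from any channel: Web, phone, email, kiosk, POS."""
    visitor_id: str
    channel: str
    action: str          # e.g., "viewed_offer", "abandoned_cart"

class AnalyticsHub:
    """Central engine: ingest events from every channel, then hand
    targeting decisions back out to the execution systems."""
    def __init__(self):
        self.history = defaultdict(list)   # visitor_id -> [ChannelEvent]

    def ingest(self, event):
        self.history[event.visitor_id].append(event)

    def next_offer(self, visitor_id):
        # Toy rule: "If you see this visitor again, offer them X instead of Y."
        seen = {e.action for e in self.history[visitor_id]}
        return "offer_X" if "abandoned_cart" in seen else "offer_Y"

hub = AnalyticsHub()
hub.ingest(ChannelEvent("v42", "web", "viewed_offer"))
hub.ingest(ChannelEvent("v42", "phone", "abandoned_cart"))
print(hub.next_offer("v42"))   # -> offer_X
```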

DMR: I've mostly associated Visual Sciences with Web analytics, which seems like an art unto itself.
JM: That's definitely where we started and is still a primary focus. The Web has some interesting properties, including very high volumes of real-time data compared to other channels. The Web was the first channel where marketers wanted to understand all the customer interactions, all the stuff that happened in front of a purchase or a business event. They wanted to know what pages were viewed, what ads were shown, how customers reacted to different product offers and where their eye was drawn on the page. Which things on the screen were easy versus hard to use? How did people traverse different pages to get what they wanted done? People wanted instant access to the analysis, and so we had to build a special technology to address that.
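To make that data capture concrete, here is a hypothetical hit record for a single page view, with all field names invented for this sketch; note that one view already lands in the 100-to-1,000-byte range MacIntyre cites later:

```python
import json
import time

# Hypothetical hit record for one page view, carrying the kinds of
# detail described above (field names invented for this sketch).
page_view = {
    "ts": time.time(),
    "visitor_id": "v42",
    "url": "/products/widget-9000",
    "referrer": "/search?q=widget",
    "ads_shown": ["banner_summer_sale", "sidebar_loyalty"],
    "offer_variant": "B",            # which product offer was displayed
    "clicks": [{"element": "size_selector", "x": 412, "y": 530}],
}

encoded = json.dumps(page_view).encode()
print(len(encoded), "bytes for one page view")
```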

DMR: Does that explain why Web analytics have evolved separately from traditional BI and data warehousing?
JM: Data warehouses are valuable in the sense that they tell you who the customers are, what they purchased and when. The three factors that drive Web analytics are very high volumes of data, very high cardinality and a very high number of data dimensions needed for analysis. That's the perfect storm for a BI product and a relational database; they're just not built for the problem. Only the very largest companies were successful at building Web analytics systems on Teradata, Oracle or other relational databases, and those systems turned out to be very expensive and very tough on the DBAs who administered them. They'd get 30 days down the road, their tables would get corrupted and have to be reloaded, and they'd never catch up.
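A rough illustration of the storm he describes: the cells a cube-based BI tool would need grow as the product of the dimension cardinalities. The figures below are invented, but the arithmetic is the point:

```python
# Invented cardinalities for a modest Web-analytics data set. A cube-based
# BI tool would have to pre-aggregate across the cross product of these.
dimensions = {
    "page_url": 500_000,        # every URL ever published on the site
    "referrer": 2_000_000,
    "campaign": 50_000,
    "product": 1_000_000,
    "visitor_segment": 200,
}

cells = 1
for cardinality in dimensions.values():
    cells *= cardinality

print(f"{cells:.1e} potential cells")   # ~1.0e+25: hopeless to pre-aggregate
```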

DMR: Start with the high-volume issue.
JM: Most business intelligence and analytics products have focused on the transaction and what happens after it, in supply chain or accounting data. We focused on the things that happen in advance of that. One very large retailer we work with has built a 5TB data warehouse over the last five years; that same retailer produces more than 5TB in a single year in its Internet channel, because it collects every little interaction I mentioned earlier. For one complex Web page, you might want to track three or 10 things that occurred when the page was displayed, and each page view leaves you with a data record that averages between 100 and 1,000 bytes. Think about how many page views media sites get, and it turns into an awful lot of data. One credit card customer I looked at processes 10 times as much data from the Web as it does from credit card transactions. The same thing happens at an ATM: you get 100 dollars, but how many buttons did you push to get there? Or say you're a travel company: the final reservation might be 1,000 bytes, but all the customer interactions (how they found you, the campaigns, the ads, the promotions, what they perused) are a significant multiple of that final booking record. It's largely the realm of marketing and the customer interaction driven by that marketing.
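The arithmetic behind those volumes is easy to check. The traffic figure below is invented for illustration, while the bytes-per-view range is MacIntyre's:

```python
# Back-of-envelope volume; traffic invented for illustration, and the
# 100-1,000 bytes per page view comes from the interview.
page_views_per_day = 50_000_000          # a mid-sized media site (assumed)
bytes_per_view = 500                     # midpoint of the cited range

daily_bytes = page_views_per_day * bytes_per_view
yearly_tb = daily_bytes * 365 / 1e12
print(f"{daily_bytes / 1e9:.0f} GB/day, {yearly_tb:.1f} TB/year")
# -> 25 GB/day, 9.1 TB/year: one channel outgrowing a 5TB warehouse
```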

DMR: Your answer to that is a hosted data model?
JM: We tackled it by offering an ASP model alongside our licensed software products; we're 80 percent service, 20 percent licensed. We can address operating leverage and centralize the management of data with technology tuned for a purpose, whether it's IVR or kiosks or set-top boxes. Industries such as financial services might rotate in and out of the hosted service, because as soon as you put customer account information into the analysis you trigger the Gramm-Leach-Bliley Act. Airlines, when they were cutting back after 9/11, did the opposite, then moved it back in house over time.

DMR: What about the dimensional and cardinality issues?
JM: As for the dimensional issue, people working on the Web are constantly publishing more pages and new types of content and capturing more information from visitors all the time. Each field you capture, implicitly or explicitly in a form, is another dimension, so there's no fixed data model; it's an open-ended set of data dimensions based on all the stuff you're building on the site. High cardinality simply means that the lists in the data are very long. If I'm Amazon, selling many more books than any bookstore could stock, the list I need to show in a report is huge: millions of rows, versus the tens of thousands a cube might contain. You might want to know how many campaigns referred visitors to a Web site; it's a huge, growing number that doesn't fit the cube-based BI model.
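A small sketch of both problems at once, with invented events: dimensions are open-ended because any new form field becomes one, and a high-cardinality field like page is better served by a direct top-N report than by a pre-built cube:

```python
from collections import Counter

# Open-ended dimensions: each hit carries whatever fields the site happens
# to capture, so there is no fixed schema (events invented for this sketch).
events = [
    {"page": "/book/9780132350884", "campaign": "email_fall", "shirt_size": "M"},
    {"page": "/book/9781491950357", "campaign": "ppc_8841"},
    {"page": "/book/9780132350884", "campaign": "ppc_8841", "gift_wrap": "yes"},
]

# High cardinality: "page" can take millions of distinct values, so rather
# than materializing a cube over the full list, report a top-N directly.
top_pages = Counter(e["page"] for e in events).most_common(2)
print(top_pages)   # [('/book/9780132350884', 2), ('/book/9781491950357', 1)]
```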

DMR: Do you need huge volumes of traffic to make Web analytics worthwhile?
JM: We stratify capability into three tiers. The first thing people want is basic reporting: How many prospects are coming, how many turn into customers, how long is the visit? Those are status questions everyone needs answered every few days. After traffic volume has built up, you get into the range of "why" questions. Why do people navigate a certain way? Why are certain parts of my customer base doing different things? For that you need light analysis, like path analysis and basic segmentation. The highest tier takes those why questions and uses the analysis to influence other execution systems based on the real-time knowledge. The first level is something every site needs. The next level comes along once you have enough traffic that there's ROI in knowing why; that leads to larger teams that can get a better or quicker answer and have a significant impact. We have a customer, a large hotelier, that generates about $10 million in return from one analysis project; the largest return they've quoted to us creates more than $100 million of lift. For them, a 1 percent conversion rate improvement in the travel sector creates $10 million of lift. When you see big dollar improvements like that, you start to see whole teams focused on exploring the data to find the next competitive insight that will put them ahead.
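The lift arithmetic is simple to reconstruct. The sketch below reads "1 percent" as one percentage point and invents traffic and booking figures sized to match the $10 million he cites:

```python
# Invented traffic and booking figures, chosen so that one percentage point
# of conversion improvement produces the ~$10 million of lift cited above.
annual_visits = 5_000_000
avg_booking = 200.00            # dollars per reservation (assumed)
baseline_conversion = 0.04

def revenue(conversion_rate):
    return annual_visits * conversion_rate * avg_booking

lift = revenue(baseline_conversion + 0.01) - revenue(baseline_conversion)
print(f"${lift:,.0f} of annual lift")   # -> $10,000,000
```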
