We live in an information-intensive world. For at least 15 years we have heard the complaint, "There's too much information to absorb! What can I do to filter it all?" The glut of data is still growing, even after all these years! A recent, painful example is the September 2001 terrorist attacks, which succeeded in part because of the apparent inability of the government intelligence community to foresee an event of such magnitude, despite the huge amounts of information on terrorist activities and potential targets at its disposal. What seems to have been missing was a way to filter all that information, and that proved to be a critical error.

Similar situations exist in everyday business. The amount of information available to help a business make critical decisions is growing exponentially. Information internal to organizations is being amassed by the terabyte in data warehouses, data marts and operational data stores, not to mention legacy databases. Information external to the organization is growing much more quickly, however. Supply chain partners gather and share huge stores of inventory and sales information that grow by the day, and the Web grows organically, like cells replicating in an uncontrolled manner.

Just as government intelligence groups did before the World Trade Center disaster, businesses make critical errors every day because they cannot filter a growing morass of information, despite the significant growth and market acceptance of business intelligence and analytical tools.

However, there are solutions that can help.

Beyond reporting and online analytical processing (OLAP) solutions, sophisticated algorithms have been developed that can detect patterns and help make sense of huge amounts of data; however, they have yet to be widely used. Some have not achieved the scalability needed to provide value. Others have been marketed by technologists who cannot seem to translate their technology into a real business value proposition and thus have not gained wide market acceptance. Nevertheless, the technology to perform pattern definition, analysis and forecasting is here, ready to be exploited to solve the problem of information glut.
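To make the idea concrete, the sketch below shows one simplified form of pattern discovery: counting which items most often appear together in transaction records, a toy version of the market-basket analysis such tools automate. The data and the support threshold are invented purely for illustration.

```python
from itertools import combinations
from collections import Counter

# Hypothetical transaction data; in practice this would come from a
# data warehouse or operational data store.
transactions = [
    {"printer", "paper", "toner"},
    {"paper", "toner"},
    {"printer", "toner", "cable"},
    {"paper", "pens"},
    {"printer", "toner"},
]

# Count how often each pair of items is purchased together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Report pairs that co-occur in at least 40 percent of transactions --
# a previously "unknown" pattern surfaced from the raw data.
min_support = 0.4 * len(transactions)
for pair, count in pair_counts.most_common():
    if count >= min_support:
        print(f"{pair[0]} and {pair[1]} appear together in {count} transactions")
```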

On the Internet front, there have been amazing developments in the area of intelligent agents, spiders, bots, Web crawlers and other inference and reasoning mechanisms. A concept called "the semantic Web" is evolving to add machine-readable semantic tags to Web data and documents, providing information about the meaning of the data and the relationships within it. As it evolves, the semantic Web will allow for tremendously improved search capabilities as well as better systems interoperability. Agents and bots have been around for years but have seen only minimal use. Why is this so, when the need to filter information is so great?
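As a rough illustration of what machine-readable semantic tags might look like, the hypothetical sketch below uses the open-source rdflib library for Python (assumed to be installed) to describe a Web document with explicit subject-predicate-object statements that software, not just people, can interpret. The vocabulary terms are invented for this example.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical vocabulary for this illustration; real deployments would
# reuse shared ontologies so that different systems agree on meaning.
EX = Namespace("http://example.com/terms/")

g = Graph()
g.bind("ex", EX)

doc = URIRef("http://example.com/reports/q3-supply-chain")
g.add((doc, RDF.type, EX.Report))
g.add((doc, EX.topic, Literal("inventory levels")))
g.add((doc, EX.coversPartner, EX.AcmeDistribution))

# Serialize as Turtle: these explicit statements are what allow crawlers
# and agents to reason about what the document is actually about.
print(g.serialize(format="turtle"))
```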

Agents or bots search the Internet based on predefined criteria, allowing users to be selective about what reaches their desktops. The time has come to utilize intelligent agents both for internal business data discovery and for external, Internet-based self-service.

Intelligent agents do their work without human intervention, continually looking for changes in multiple databases related to a specific objective or goal. They can "learn" to search for more refined, applicable information based on user interactions and then suggest possible actions based on what they uncover. Richard Hackathorn discusses the topic he calls "Web farming" in his book and Web site of the same name; his thesis is that external Web resources can be harvested to provide valuable competitive intelligence.
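As a minimal sketch of how such an agent might work, the following Python example polls a set of hypothetical sources against a predefined keyword profile and surfaces new items that match. The source names, data and polling loop are all invented for illustration; a production agent would add learning from user feedback, real scheduling and persistence.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    """Predefined criteria the agent watches for, refined over time."""
    keywords: set[str]
    seen: set[str] = field(default_factory=set)

def fetch_source(name: str) -> list[str]:
    # Placeholder for pulling new items from a database, feed or Web site.
    # Hard-coded text snippets stand in for live data in this sketch.
    return {
        "supplier_db": ["Inventory of toner fell 30% last week"],
        "news_feed": ["Competitor announces new pricing model"],
    }.get(name, [])

def run_agent(profile: AgentProfile, sources: list[str], cycles: int = 3) -> None:
    """Repeatedly poll sources and surface items matching the profile."""
    for _ in range(cycles):
        for source in sources:
            for item in fetch_source(source):
                if item in profile.seen:
                    continue
                profile.seen.add(item)
                if any(k in item.lower() for k in profile.keywords):
                    print(f"[{source}] match: {item}")
        time.sleep(1)  # a real agent would poll on a schedule, not a tight loop

run_agent(AgentProfile(keywords={"inventory", "pricing"}),
          sources=["supplier_db", "news_feed"])
```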

Bots perform tasks on both the personal and business fronts. Of personal interest are bots such as Job Sleuth, which searches multiple sites for careers of interest, and shopping bots such as Bottom Dollar (a real-time shopping agent) and MySimon (which finds bargains on the Internet and can be taught to shop intelligently), all of which perform specific actions based on predefined criteria. Fashion Finder, which suggests gifts matching certain parameters, and EchoMail, which manages and classifies e-mail, are other examples.

On the business front, there's Company Sleuth, which gathers insider information on any listed company. MyAlert delivers requested information to your cell phone, and NewsBots provides a customized news service. The hope is that these and other new bots can be effectively used to filter information and deliver it when it is useful to the individual.

Sophisticated algorithms and intelligent agents might not have prevented the World Trade Center and Pentagon disasters. They do, however, have the potential to filter the still-growing glut of information, returning either patterns that match a specific request or previously unknown patterns discovered in the data. The relatively modest cost of investing in some of these ostensibly "futuristic" tools seems well worth the significant benefits they could deliver.
