The need to increase IT operational efficiencies, along with the need for advance knowledge of impending problems, has been a driving force behind the quest to implement technology that brings intelligence to the decision-making process. For years, scientists have used mathematical analysis techniques to understand the behavior of things and events. Just as scientists rely on real-time analytics to track a hurricane, visionary IT professionals are adopting these technologies to safeguard the health of critical IT systems and associated business services. The use of analytics is helping organizations shift from a reactionary model to a proactive, preventative one. By applying real-time analytics technology that self-learns, companies can now forecast application and service behavior before problems occur.

Today, leading financial institutions are beginning to adopt technology that constantly adapts to IT changes and provides action-oriented information on potential IT infrastructure needs. Analytics plays an important role in this capacity through its ability to identify risks and opportunities, as well as to surface best practices. Through activity-based management techniques, organizations can apply analytics to tap into collective data and get an overall picture of the current and future health of IT systems. Who knew math could be so fun?!

Billion Dollar Baby

From millions of BlackBerry users who recently lost access to their mobile email, to the millions of dollars lost when the Dow Jones computer system became overwhelmed by heavy share trading, a system outage can run up a large tab in terms of lost revenue - but for a global financial institution, an outage can cost billions.

While financial institutions must keep their IT systems functioning at all costs, there's a growing murmur of discontent about the available monitoring tools that support this objective. Traditional monitoring solutions are not flexible enough to track and understand the dynamic, ever-changing nature of enterprise systems. They rely on static performance thresholds and scripts that depend on human analysis. In a recent Gartner poll, over 70 percent of IT executives graded their traditional monitoring management solutions from CA, HP, BMC and IBM as Cs or Ds, consistent with the poll taken one year earlier.1 The report found that the majority of companies polled have little or no confidence in their present performance management solutions. Why is this?

It is because the conventional thinking around performance management is dated and no longer makes sense for organizations today. The idea of baselining (determining what's normal in a constantly changing environment) no longer works because baselines are constantly shifting from day to day, week to week and month to month as new users are added, new services deployed or procedures changed. Then there are frequent anomalies: an enterprise resource planning (ERP) system cranking extra hard at quarter's end, a payroll application working overtime as payday approaches, a Thursday evening system backup or scheduled maintenance. With the shifting of "normal behavior" on each of hundreds if not thousands of systems within the data center, it's humanly impossible to determine baselines for any given point in time. The result is a chain reaction of false alerts, lost staff time, slow problem resolution and increasing costs.
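To see why a single static threshold fails, consider a minimal sketch of time-aware baselining in Python; the data layout and function names here are illustrative assumptions, not any vendor's implementation:

    from collections import defaultdict
    from statistics import mean, pstdev

    def build_baselines(samples):
        """Learn a separate 'normal' for each (weekday, hour) slot, so the
        Thursday-evening backup window gets its own baseline instead of
        tripping a threshold tuned for a quiet Monday afternoon."""
        buckets = defaultdict(list)
        for ts, value in samples:          # samples: (datetime, float) pairs
            buckets[(ts.weekday(), ts.hour)].append(value)
        return {slot: (mean(vals), pstdev(vals)) for slot, vals in buckets.items()}

    def is_anomalous(baselines, ts, value, k=3.0):
        """Flag a reading only if it deviates from what is normal for its slot."""
        mu, sigma = baselines.get((ts.weekday(), ts.hour), (value, 0.0))
        return abs(value - mu) > k * max(sigma, 1e-9)

Even this toy version must be rebuilt every time usage patterns shift - which is exactly the maintenance burden that self-learning tools aim to remove.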

With this being the state of most IT shops today, how can organizations possibly ensure availability? Bottom line: there are far too many IT variables to learn, account for and track. Some tasks are better left to machines than humans. That's where real-time analytics come in.

Today's leaders are investing in self-learning performance management software: software that leverages real-time analytics, automates threshold administration, and eliminates false alert activity through proactive problem diagnosis. Through the use of real-time analytics - and the availability of self-learning and continuously adaptive technology - organizations automatically gain insight into system behavior, anticipate user interactions and head off problems before they occur.
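What "self-learning" means in practice can be as simple as a model whose notion of normal drifts with the data. Below is a minimal Python sketch of a continuously adaptive threshold built on an exponentially weighted mean and variance; the class name and parameters are assumptions for illustration, not a description of any specific product:

    class AdaptiveThreshold:
        """A band of 'normal' that re-centers itself on every reading,
        removing the need for hand-tuned static thresholds."""

        def __init__(self, alpha=0.05, k=3.0):
            self.alpha = alpha   # how quickly the model forgets old behavior
            self.k = k           # width of the normal band, in standard deviations
            self.mean = None
            self.var = 0.0

        def update(self, value):
            """Fold in one reading; return True if it fell outside the band."""
            if self.mean is None:        # first sample seeds the model
                self.mean = value
                return False
            deviation = value - self.mean
            outlier = self.var > 0 and abs(deviation) > self.k * self.var ** 0.5
            # Learn from every sample so the band follows gradual change.
            self.mean += self.alpha * deviation
            self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
            return outlier

The trade-off lives in alpha: set it too high and the model chases every spike, too low and it lags real change - which is why self-learning tools tune such parameters themselves rather than expose them as knobs.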

Follow the Leaders

Many Fortune 500 companies have already begun to automate more than 80 percent of the manual processes associated with their current generation of performance and systems monitoring tools. By combining analytics and automation, these organizations inject an intelligence layer into their current tools, allowing them to proactively manage incidents and problems across IT in real time - hours before degradations occur. The analytics element tells the story of system performance, identifying historical behavior patterns and future trends. This has become particularly prevalent in the financial market, due to the vast amount of critical data involved and the potential for billions in losses due to downtime.
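Forecasting a degradation hours before it happens is, at its simplest, trend extrapolation. The Python sketch below fits a least-squares line to recent samples of one metric and estimates when it will cross a known capacity; the function name, sample spacing and capacity figure are illustrative assumptions:

    def hours_until_breach(history, capacity, minutes_per_sample=5):
        """Extrapolate a straight-line trend through `history` and return the
        estimated hours until it crosses `capacity`, or None if the metric
        is flat or falling."""
        n = len(history)
        if n < 2:
            return None
        x_mean = (n - 1) / 2
        y_mean = sum(history) / n
        slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(history))
                 / sum((x - x_mean) ** 2 for x in range(n)))
        if slope <= 0:
            return None                  # no upward trend to worry about
        intercept = y_mean - slope * x_mean
        samples_left = (capacity - intercept) / slope - (n - 1)
        return max(samples_left, 0.0) * minutes_per_sample / 60

A production system would layer seasonality and confidence intervals on top of this, but even a straight line is enough to turn "the disk is 92 percent full" into "the disk fills at roughly 6 a.m. tomorrow."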

Recently, one of the largest banks in the world suffered from regular and unanticipated server degradations, resulting in slow or unsuccessful fund transfers. This bank supports hundreds of millions of dollars in daily transactions, with millions of daily interest dollars at stake. In this case, missed payments cost the bank millions in interest and the lost opportunity of reinvestment. The downtime also resulted in a loss of customers. However, through the use of real-time analytics and self-learning performance management software, the bank was able to resolve its performance issues - and today, the organization can successfully forecast potential outages hours in advance.

Self-learning performance management software takes the data produced by analytical trend analysis and automatically turns it into an actionable real-time view of the health of the IT infrastructure. Once deployed in the enterprise, behavior-based analytics technology continuously analyzes and correlates thousands of simultaneous performance variables through mathematical algorithms that automatically learn, identify and understand how variables or key performance indicators (KPIs) relate to each other. With this method, organizations are able to move from a reactive to a real-time, proactive mode with the ability to define, monitor and report against defined services, guarantee service level agreements and ultimately link IT with the business.
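One way to picture how such software "learns how variables relate to each other" is pairwise correlation: record which metrics normally move together, then alert when a learned relationship breaks. Here is a minimal Python sketch (it requires Python 3.10+ for statistics.correlation, and the data shapes and thresholds are illustrative assumptions):

    from statistics import correlation   # Python 3.10+

    def learn_relationships(kpis, min_corr=0.8):
        """From aligned history lists (e.g. {'req_rate': [...], 'cpu': [...]}),
        record which KPI pairs are strongly correlated under normal operation.
        Assumes each series has at least two points and is not constant."""
        names = list(kpis)
        learned = {}
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                r = correlation(kpis[a], kpis[b])
                if abs(r) >= min_corr:
                    learned[(a, b)] = r
        return learned

    def broken_relationships(learned, recent, tolerance=0.4):
        """Flag pairs whose live correlation drifted far from the learned one,
        e.g. load still climbing while throughput flatlines."""
        return [pair for pair, r in learned.items()
                if abs(correlation(recent[pair[0]], recent[pair[1]]) - r) > tolerance]

Real products correlate thousands of variables with far more robust statistics, but the principle - learn the relationships rather than ask a human to enumerate them - is the same.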

Analytics: The Secret to Reliable Performance Management

When billions of dollars ride on service performance, and the answers to future performance issues are buried deep in data, organizations need to accept that it is humanly impossible to harvest this information manually. Managing systems or business services with conventional monitoring tools means being constantly inundated with more data than any team can analyze and act on.

Whether it's point-of-sale systems, email applications, Web-based catalogs or online reservations, the delivery of real-time services depends on an overwhelming number of databases, applications, servers, routers, firewalls and events, all of which must work together under conditions that change by the second. Do you know which of your systems is about to fail? Your current monitoring system alone can't answer this, nor tell you what's connected or what's really happening. As outlined here, real-time analytics can provide visibility into the behavior and interdependencies of physical resources and, most importantly, the applications and business functions they support. By self-learning system, network, application and security performance, this real-time capability enables quick, well-timed decision-making and the forecasting of future problems, allowing IT departments to take corrective measures before problems occur.

Companies looking to enhance their ability to monitor the health of their IT infrastructure, supplement their event management and gain early warning of outages based on trends and patterns - while also seeing how potential issues affect the IT services delivered to the business - should consider self-learning technologies that provide a new way of looking at and managing IT.

Reference:

  1. Cameron Haight and Donna Scott, "Big Four Management Software Vendors Face Competitive Threats," Gartner, April 13, 2007.
