In the middle of the week, when no significant activity is occurring against your data warehouse, you begin to receive sporadic reports of poor performance from users querying the data store. You schedule an emergency conference call with the various technical infrastructure groups (network, operating system, middleware, DBAs, security, shared storage, etc.) to try to identify the issue. The technical groups report no faults or obvious resource issues in the infrastructure. Meanwhile, word from the user community is that reporting and ad hoc query performance appears to have returned to normal. Your data warehouse performance issue has disappeared as fast as it appeared. How would you proceed from here?
This column is the second portion of a two-part series examining various data and infrastructure issues that may affect performance in the data warehouse environment. (See Part 1 at www.dmreview.com/article_sub.cfm?articleId=1014664.) In this installment, we will look at various infrastructure issues that can affect overall data warehouse performance plus some methods for detecting and measuring these factors.
A Complete Environment View
Your extract, transform and load (ETL) processes are scheduled to run over a predetermined refresh period that the data warehouse was designed to support (e.g., near real-time, intra-day, daily, weekly). If your users can access the warehouse during this ETL processing, do you know what effect it has on response times? What effect do other infrastructure activities have on data warehouse performance? In addition to ETL, activities such as database and file system backups, security jobs, large batch jobs, database reorgs and statistics updates, and other resource-intensive processes can degrade overall performance of the data warehouse environment. Do you and your users know when these resource-intensive activities run, so that performance expectations can be set accordingly? Do you have access to a schedule detailing these activities and when they occur in your environment?

If all or part of your infrastructure is shared with other, non-warehouse applications, you need to understand their resource needs and schedules. The same insight should be obtained for shared storage (network attached storage [NAS], storage area network [SAN], etc.), because you may be sharing components with other applications, and resource constraints in a shared storage environment may not be readily apparent from the data warehouse side. The success data warehouses have had in providing information and knowledge to the business has driven an ever-growing need for 24x7 availability, which requires data warehouse managers and their infrastructure colleagues to find ways to maintain and tune the data warehouse while preserving high availability and performance.
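As a minimal illustration of the kind of activity schedule described above, the sketch below keeps a table of resource-intensive maintenance windows (the activity names and times are invented for this example, not taken from any real environment) and reports which windows, if any, overlap a given time of day:

```python
from datetime import time

# Hypothetical schedule of resource-intensive activities (names and
# times are illustrative assumptions). Each entry: (activity, start, end)
# in local server time; windows are assumed not to cross midnight.
MAINTENANCE_WINDOWS = [
    ("nightly ETL load",     time(1, 0),  time(4, 30)),
    ("database backup",      time(4, 30), time(6, 0)),
    ("stats update / reorg", time(23, 0), time(23, 59)),
]

def active_windows(t):
    """Return the names of scheduled activities running at time t."""
    return [name for name, start, end in MAINTENANCE_WINDOWS
            if start <= t <= end]

# A query at 2:15 AM overlaps the ETL window, so slower response is expected.
print(active_windows(time(2, 15)))   # -> ['nightly ETL load']
print(active_windows(time(14, 0)))   # -> []
```

Publishing even a simple table like this to the user community goes a long way toward setting performance expectations during known maintenance periods.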
There are some proactive monitoring activities that should be occurring today in your data warehouse environment to avoid the common disruptions that can affect performance and availability. These checks are probably being done to some degree in your environment. Some of the areas include file system space, database space, temp space, table spaces, log space, archiving, CPU saturation, memory saturation, I/O wait times, network channel speed, cache speeds (the rate at which data is flushed from memory to disk) and others. Depending on the size, redundancy/failover, distribution (global) and flexibility of your data warehouse environment, this monitoring may be needed across many infrastructure tiers and servers. As the data warehouse manager, you should gain an understanding of these monitoring activities, the monitoring tools used, the metrics collected and the thresholds at which alerts and proactive actions occur in the infrastructure, so that you understand the action plans and can set expectations with users.
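To make the idea of threshold-based alerting concrete, here is a minimal sketch of one such check: free space on a file system. The 10 percent threshold is an assumption chosen for illustration; real thresholds should come from your own capacity planning.

```python
import shutil

# Illustrative alert threshold (an assumption, not a recommendation):
# raise an alert when free space drops below this fraction of capacity.
FREE_SPACE_ALERT = 0.10

def check_filesystem(path):
    """Return (free_fraction, alert) for the file system containing path."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    return free_fraction, free_fraction < FREE_SPACE_ALERT

free, alert = check_filesystem("/")
print(f"free={free:.1%} alert={alert}")
```

In practice a check like this would run on a schedule across every monitored tier, with the results fed into whatever alerting tool the infrastructure teams already use.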
Simple Infrastructure Measures
There are some simple tests you or an infrastructure support team member can set up when looking for hidden or sporadic performance issues in your data warehouse environment. These tests are meant to detect issues outside of the data warehouse applications that may indicate an infrastructure problem. Simple scripts or programs can be set up to measure file system write/read times (write and read a 2GB file), SQL execution times (on the application versus the database server, if applicable) or DBMS connection times. When measured every few minutes over a 24x7 period, any large variance in timings unacceptable to you and your users needs to be investigated and thoroughly understood in terms of its impact on the data warehouse environment. For example, a moderately complex SQL statement that typically takes one minute to run against the warehouse will probably show some level of performance variance when measured every few minutes over the course of a week. You need to determine whether the variance measured is outside expected levels and which processing activity and/or infrastructure component(s) are causing this impact.
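A file system probe of the kind described above could be sketched as follows. The column suggests a 2GB test file; the demo call below uses a much smaller size purely to keep the example quick, and the probe size is a parameter you would tune for your environment.

```python
import os
import tempfile
import time

def time_file_io(size_bytes, block=1024 * 1024):
    """Write then read a throwaway test file of size_bytes;
    return (write_seconds, read_seconds)."""
    data = os.urandom(block)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    try:
        t0 = time.perf_counter()
        with open(path, "wb") as f:
            written = 0
            while written < size_bytes:
                f.write(data)
                written += block
            f.flush()
            os.fsync(f.fileno())  # include flush-to-disk in the timing
        write_s = time.perf_counter() - t0

        t0 = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block):
                pass
        read_s = time.perf_counter() - t0
    finally:
        os.remove(path)
    return write_s, read_s

# Run every few minutes (e.g., from cron) and log the timings;
# large variance over a week is what warrants investigation.
w, r = time_file_io(8 * 1024 * 1024)  # 8 MB probe for illustration
print(f"write={w:.3f}s read={r:.3f}s")
```

The same pattern, a timed probe logged on a fixed interval, applies to the SQL execution and DBMS connection checks: run a fixed, moderately complex statement or a connect/disconnect cycle on a schedule and chart the timings over the week.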
Many direct and indirect factors may affect overall performance of your data warehouse environment. A complete understanding of all the infrastructure processing activities that may influence overall data warehouse performance and availability is needed in order to set and meet user expectations. When looking for sporadic or unpredictable performance anomalies in the data warehouse environment, some simple tests and measures can be established. These tests can identify infrastructure performance variances, which can help you determine whether you have an issue with the data warehouse.