How Database Environments Lose Performance

DQI Bureau

Data volumes are expanding fast, and usage patterns are changing just as rapidly. As a result, database environments that were once thought to be well tuned and well configured quickly become congested, cluttered, and inefficient: disparate platforms, sprawling systems, a bloated application portfolio, growing costs, and less-than-satisfactory performance. Even the best among us find ourselves asking, "How did our well-tuned applications turn into resource hogs that stopped meeting expectations?"

If you took the time to review many database environments, examining everything from hardware and operating system configuration to the database architecture, and cataloging the components that affect scalability, high availability, disaster recovery, manageability, security, and tuning, you would find that many infrastructures suffer the same maladies. In the reviews we've conducted, we have found common problems: the usage pattern has changed, more users stay connected for longer, certain data tables have grown faster than expected, some data partitions or indexes are out of balance, a minor software upgrade has altered a configuration parameter, or a well-intended 'quick fix' has had disastrous long-term performance impacts. One quick fix we frequently encounter is the result of an indiscriminate indexing strategy: DBAs create too many indexes, some of them redundant. The extra indexes may improve read performance, but they will almost certainly have a calamitous impact on write performance.
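
To illustrate the write penalty, here is a minimal sketch using Python's built-in sqlite3 module (the table and index names are hypothetical, and the exact numbers will vary by engine and hardware). It times the same bulk insert with one sensible composite index versus a pile of partly redundant ones:

```python
# Sketch: measure how extra, partly redundant indexes slow down bulk inserts.
import sqlite3
import time

def timed_insert(extra_indexes):
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, status TEXT, created_at TEXT)")
    # One composite index is usually enough for lookups by customer and status.
    cur.execute("CREATE INDEX idx_cust_status ON orders (customer_id, status)")
    if extra_indexes:
        # Redundant/overlapping indexes: each must be updated on every write.
        cur.execute("CREATE INDEX idx_cust ON orders (customer_id)")    # prefix of idx_cust_status
        cur.execute("CREATE INDEX idx_status ON orders (status)")
        cur.execute("CREATE INDEX idx_created ON orders (created_at)")
    rows = [(i, i % 1000, "open", "2024-01-01") for i in range(100_000)]
    start = time.perf_counter()
    cur.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return time.perf_counter() - start

print(f"one index:    {timed_insert(False):.2f}s")
print(f"many indexes: {timed_insert(True):.2f}s")
```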

Another cause of dwindling database performance is failing to maintain a commercial software package, such as an ERP system. Over the past year, for example, we worked with a number of large and mid-sized healthcare providers that were using a commercial package for managing medical practices, offices, and small clinics without paying enough attention to the underlying database. This particular package is built on a database that, like any database, tends to develop performance and data-integrity problems over time if it is not well maintained.
This happens slowly in the background as the medical practice grows, especially where users fail to carry out proactive maintenance or upgrades. Users were reporting problems such as data loss, data corruption, and recovery failures. In some cases, it was unclear whether these problems were the result of intrusions made possible because the software wasn't upgraded regularly.

TACTICS TO IMPROVE PERFORMANCE

There are a number of places a data professional can target when things seem to be going awry. When application performance starts to falter or data discrepancies emerge, here's where to start looking:

  • Memory Configuration: Wrong memory settings can slow the server down dramatically. Memory is assigned to the database globally and separately to every database connection, and these allocations must be realigned regularly with new usage patterns. For example, more concurrent users require more available memory, and at a certain point a connection pooler should be adopted so that many users share a smaller number of connections and the memory those connections consume (see the pooling sketch after this list). Larger tables that are accessed frequently for read-only work should be held in memory for quick access.
  • OS Parameters: Operating system parameters need to be set optimally to support new releases and changing usage profiles.
  • Partitioning Strategy: Data loads change over time as an application is used, and a partitioning strategy that was terrific for the data load you had 18 months ago may no longer support the load you have today. As a result, queries can become painfully slow. DBAs need to review partitions regularly to make sure they are well balanced and that the distribution of data across partitions meets business requirements. It's equally important to verify regularly that queries continue to make efficient use of partitions (see the partition-pruning check after this list).
  • SQL Queries and Indexes: Poorly written SQL queries and misaligned indexes on commonly executed workloads can have a significant impact. A thorough analysis of where queries perform sequential scans, have sub-optimal execution plans, or could be supported by better indexes often works wonders. A good indexing strategy eliminates overlapping indexes as far as possible and finds the right balance between write performance (slowed down by indexes) and read performance (enhanced by indexes). A good strategy also considers index-only scans, which return results from the index without touching the table heap at all (see the index audit after this list).
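
On connection pooling, the following is a minimal sketch, assuming PostgreSQL and the psycopg2 driver; the connection parameters, table, and function names are placeholders. The point is that a pool caps the number of backend connections, and therefore the per-connection memory, no matter how many application users are active:

```python
# Sketch: share a bounded set of database connections across many users.
from psycopg2 import pool

db_pool = pool.SimpleConnectionPool(
    minconn=2,        # connections kept open even when idle
    maxconn=20,       # hard cap on concurrent backend connections
    host="db.example.internal",   # placeholder connection details
    dbname="appdb",
    user="app",
    password="secret",
)

def fetch_open_orders(customer_id):
    conn = db_pool.getconn()      # borrow a connection from the pool
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, status FROM orders WHERE customer_id = %s AND status = 'open'",
                (customer_id,),
            )
            return cur.fetchall()
    finally:
        db_pool.putconn(conn)     # return it to the pool instead of closing it
```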
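
For partition verification, here is a sketch that assumes a PostgreSQL table partitioned by range on a date column (the table and column names are hypothetical) and the psycopg2 driver. It asks the planner which partitions a typical query would actually scan; a representative query that touches every partition is a red flag that pruning is no longer working:

```python
# Sketch: check which partitions the planner scans for a typical query.
import json
import psycopg2

def partitions_scanned(conn, sql, params):
    with conn.cursor() as cur:
        cur.execute("EXPLAIN (FORMAT JSON) " + sql, params)
        raw = cur.fetchone()[0]
        doc = raw if isinstance(raw, list) else json.loads(raw)
    scanned = set()

    def walk(node):
        rel = node.get("Relation Name")
        if rel:
            scanned.add(rel)
        for child in node.get("Plans", []):
            walk(child)

    walk(doc[0]["Plan"])
    return scanned

conn = psycopg2.connect("dbname=appdb user=app")   # placeholder DSN
touched = partitions_scanned(
    conn,
    "SELECT count(*) FROM events WHERE event_date >= %s AND event_date < %s",
    ("2024-01-01", "2024-02-01"),
)
print("partitions scanned:", sorted(touched))       # ideally only the January partition
```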
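
For the index audit, a simple starting point on PostgreSQL is its statistics views (the DSN below is a placeholder). Indexes that are rarely or never scanned still pay their full cost on every write, so they are the first candidates to review for removal or consolidation:

```python
# Sketch: list the least-used indexes, largest first, from pg_stat_user_indexes.
import psycopg2

AUDIT_SQL = """
SELECT schemaname,
       relname       AS table_name,
       indexrelname  AS index_name,
       idx_scan      AS scans,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC
LIMIT 20;
"""

with psycopg2.connect("dbname=appdb user=app") as conn:   # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(AUDIT_SQL)
        for schema, table, index, scans, size in cur.fetchall():
            flag = "  <- rarely or never used" if scans == 0 else ""
            print(f"{schema}.{table}  {index}  scans={scans}  size={size}{flag}")
```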

But while performance degradation is often the first visible sign of data management problems, by far the worst scenarios result from deficient backup and recovery strategies. A backup strategy must plan for catastrophic failure of a device (for example, a storage system becomes corrupted), operator error (a database architect accidentally deletes a table), data corruption (a virus or other defect corrupts a table and the damage goes unnoticed for days or weeks), and compliance requirements (know the rules for archiving and retention).
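
As one small piece of such a strategy, here is a minimal sketch of a scheduled dump with retention pruning, assuming PostgreSQL's pg_dump is on the PATH; the paths, database name, and retention period are placeholders, and no script like this replaces regularly rehearsed restore tests:

```python
# Sketch: timestamped pg_dump backups with simple age-based retention.
import subprocess
from datetime import datetime, timedelta
from pathlib import Path

BACKUP_DIR = Path("/var/backups/appdb")   # placeholder backup location
RETENTION_DAYS = 30                       # align with archiving/retention rules

def backup_database(dbname="appdb"):
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    target = BACKUP_DIR / f"{dbname}_{stamp}.dump"
    # -Fc writes a compressed custom-format archive restorable with pg_restore.
    subprocess.run(["pg_dump", "-Fc", "-f", str(target), dbname], check=True)
    return target

def prune_old_backups():
    cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
    for dump in BACKUP_DIR.glob("*.dump"):
        if datetime.fromtimestamp(dump.stat().st_mtime) < cutoff:
            dump.unlink()

if __name__ == "__main__":
    print("wrote", backup_database())
    prune_old_backups()
```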