Toad World Blog

The performance teeter-totter part 2: Toad® Turnpike [Finding indexes]

Jun 16, 2020 5:45:00 AM by Gary Jerep

Finding new, effective indexes to improve database performance is hard.  How hard?  Take a short quiz at the end of this blog.  Remember, Toad® Turnpike is about real people doing real things with Toad solutions.


Previously…indexing issues that can negatively affect databases

In my previous blog post, DBA Alex described the backlash that kicked up when he introduced just one new index to speed up one user’s report. In short, that one little change on the database caused slow-down issues for other jobs. Alex and his team had to address those issues, and no one was happy. One teeter hour to introduce one index on one table—while creating a huge performance boost for one SQL statement—caused over 90 totter hours of follow-up fixes to re-work other applications whose SQL operations were touching that same table.

 

Where have all the workloads gone? Using SQL workloads to identify tables

Alex’s openness about sharing his experience fostered a healthy discussion about performance management using indexes.  Some of the healthiest tuning strategies start by identifying (SQL) workloads that, in turn, point to the tables that are good candidates for new indexes.  And there are different ways to define such workloads.  Here are a few Alex and I discussed:

  • Application-centric workloads.
    This idea might be the first one readers think about.  If you can grab all, most, or even some of the most offensive SQL generated from an important application stack, then you’ve just identified tables that contribute to performance degradation, based on the SQL operations performed on those tables.  The likelihood of finding good index candidates for such tables is very high.

  • System-centric workloads.
    What’s your database performance metric of choice? CPU consumption?  Database buffer gets?  Gets normalized per execution?  Disk reads?  Grab the most resource-intensive SQL on the database (based on your chosen performance metrics), and you have identified the objects that contribute most to that consumption. This approach has general appeal if you’re trying to tune object access patterns across the entire database, regardless of the applications or users that generate the activity.

  • Wait-centric workloads.
    What is the most offensive wait event that causes issues?  Some solutions (like Spotlight®, or Foglight®, for example) can help you grab the SQL list that is causing the most wait times on your database.  I know of one customer who was plagued by blocking lock waits on a few tables.  Placing some new indexes on partitioned versions of these tables reduced blocking lock waits considerably.

  • IO-centric workloads.
    What about identifying the SQL statements that are pounding a “hot” set of tables?  Or maybe you have a few datafiles that are running disk-read-hot.  If you have a monitoring system that can identify your hottest tablespaces or datafiles on the database, then you can use Toad solutions or other products to find the SQL operations that fire up the consumption burner. 

 

Challenges?  What challenges? Identifying SQL, combining indexes, tools to measure performance impact

Alex had thought of some of these approaches before, but they are easier to describe than to implement.  Here were Alex’s top three concerns about putting the strategies above into practice:

Challenge #1:
Alex confessed that he has a hard time identifying all the SQL. SQL can come from database-stored procedural code, or from code external to the database, sure. But what about SQL built dynamically at runtime, or one-off SQL submitted by ad hoc users?

Challenge #2:
How does Alex identify index candidates as the number of objects accessed in his SQL workload gets larger? How does he keep track of the different ways to combine those indexes?

Challenge #3:
Even if Alex *could* find a small fraction of the index possibilities, does he have the right tools to measure the performance impact of new indexes introduced to his workload? Ideally, BEFORE they affect Production?

Quest Software solutions can help with each of these challenges, and I’ll discuss them in the upcoming part three of this blog.

 

About that Quiz…How to find sets of indexes that speed up database activity on a workload basis?

The second challenge Alex mentions above is nearly impossible to tackle by hand.  It may not be difficult to find an index or two that help speed up performance for a few queries here and there.  But to find sets of indexes that speed up database activity on a workload basis?  That’s beyond hard.  Curious?  Here’s a quick quiz.

Suppose you have a SQL workload that collectively accesses only five small tables.  Each table has only three columns.  See if you can guess the answers to these questions.  To keep it simple, let’s consider only “vanilla” column-based indexes.  Composite indexes are fine, but don’t consider clustered indexes, function-based indexes, or other special index attributes.

  1. What is the total number of possible indexes on just one table? On all five tables?

  2. What is the total number of different combinations of these indexes, choosing up to three indexes at a time?

  3. How about the total number of index sets choosing up to, say, 10 indexes at a time?

  4. (Extra Credit)    What is the total number of index sets, regardless of the number of indexes in each set?

Click here to post your answers and compare to other readers’ responses.  Answer the quiz questions by July 17 and we’ll send you some cool Toad stickers. The actual answers will be posted next month in part 3 of this blog.  Stay tuned!
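If you’d like to estimate the scale of the search space while you wait for part three, here is a minimal Python sketch. The counting assumptions are mine, not necessarily the ones the official answers will use: I treat an index as an ordered list of distinct columns, so a composite index on (A, B) counts separately from one on (B, A).

```python
from itertools import permutations
from math import comb

COLS = 3    # columns per table (from the quiz)
TABLES = 5  # tables in the workload

# A "vanilla" index is an ordered sequence of 1..COLS distinct columns,
# so count the permutations of each length.
per_table = sum(
    len(list(permutations(range(COLS), k))) for k in range(1, COLS + 1)
)
total_indexes = per_table * TABLES

def index_sets(n: int, max_k: int) -> int:
    """Number of ways to choose between 1 and max_k indexes from n candidates."""
    return sum(comb(n, k) for k in range(1, max_k + 1))

print(per_table)                      # indexes possible on one table: 15
print(total_indexes)                  # across all five tables: 75
print(index_sets(total_indexes, 3))   # sets of up to 3 indexes: 70375
print(index_sets(total_indexes, 10))  # sets of up to 10 indexes
print(2 ** total_indexes - 1)         # extra credit: all non-empty index sets
```

Even under these simplifying assumptions, the extra-credit count (every possible non-empty set of candidate indexes) is astronomically large, which is exactly why evaluating index sets by hand does not scale.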

 

Interested in learning more about query/index optimization?

Check out other real-world success stories about Quest's SQL Optimizer. 

Quest Software has been helping database management professionals for nearly 25 years, providing solutions that help you simplify the complex, reduce cost and risk, and drive improved performance. Maybe that’s why we’re consistently voted #1 by DBTA readers and have a 95%+ customer satisfaction rate. Learn more about Toad for Oracle by visiting our product page and find out how you can modernize your Oracle database operation to enable business agility.

 

Free trial

Try Toad for Oracle free for 30 days.

Already a loyal Toad for Oracle user? Get SQL Optimizer in Toad for Oracle Xpert Edition and above. Upgrade now or talk to sales.

Learn more about how Toad database management tools from Quest can help with database development, performance tuning and database DevOps.

 

Got questions?

Our Toad family of products is continually evolving and getting better because we listen to you, our customers.

Do you have questions? Please post questions to the Toad for Oracle forum on Toad World®.

 

More blogs from Gary

For more real-world stories like this one, and other Toad solution insights, read Gary’s other posts.

 

Help your colleagues

If you think your colleagues would benefit from the Toad Turnpike blog series, share it now on social media with the buttons located at the top of this blog post. Thanks!

 

AUTHOR’S NOTE: This blog originally appeared on Quest's Communities site...re-posting here for the ToadWorld community.

Tags: Toad for Oracle, Spotlight on Oracle, Foglight for Databases


Written by Gary Jerep

As a Software Consultant within Quest, Gary Jerep has over 20 years of experience assisting DBAs, Developers and Analysts with relational database needs. Prior experience includes systems design and analysis, operations research, Business Intelligence, and end-user experience monitoring.