Q: DBMS_SQL vs. EXECUTE IMMEDIATE

There are many advantages to using DBMS_SQL over EXECUTE IMMEDIATE:

  • It's easier to spell and type (for me anyway – I can't spell IMMEDAITE IMMEADIATE IMMEDIATE).
  • Less latching with DBMS_SQL.
  • Fewer parses with DBMS_SQL.
  • Better scaling of your app with DBMS_SQL because of the above.

Disadvantages:

  • More typing overall with DBMS_SQL.

Here are a couple of worked examples that simply select a value from DUAL in a loop and check how much CPU time was spent parsing the statement, how many parses were required, and the total time taken to execute the entire script. One script uses EXECUTE IMMEDIATE and the other uses the DBMS_SQL package.

 

EXECUTE IMMEDIATE Script

The following statements will need to be executed a few times during the tests. To prevent them from generating an additional hard parse, if this is the first time they have been run, we execute them here once, ignore the results, and then execute them again.

In the remainder of the tests there will be one soft parse for each of these two statements; however, as all tests incur the same two soft parses, this doesn't really affect the final outcome.

SELECT sn.NAME, my.value
  FROM v$mystat my, v$statname sn 
  WHERE my.statistic# = sn.statistic#
  AND sn.NAME LIKE 'parse%';

SELECT NAME, gets, misses
  FROM v$latch
  WHERE NAME IN ('shared pool','library cache');

From this point on, unless the cache gets flushed, or the statements themselves fall out of the cache due to other unshared statements, the two statements will have a minimal effect (one soft parse each) on the results.

SELECT sn.NAME, my.value
  FROM v$mystat my, v$statname sn 
  WHERE my.statistic# = sn.statistic#
  AND sn.NAME LIKE 'parse%';

SELECT NAME, gets, misses
  FROM v$latch
  WHERE NAME IN ('shared pool','library cache');

DECLARE
  vDummy number(10);

BEGIN
  FOR x IN 1..1e6 LOOP
    -- One soft parse per iteration, even though the statement text never changes.
    execute immediate 'SELECT :something_old FROM SYS.DUAL' INTO vDummy USING x;
  END LOOP;
END;

SELECT sn.NAME, my.value
  FROM v$mystat my, v$statname sn
  WHERE my.statistic# = sn.statistic#
  AND sn.NAME LIKE 'parse%';

SELECT NAME, gets, misses
  FROM v$latch
  WHERE NAME IN ('shared pool','library cache');

 

DBMS_SQL Script

The DBMS_SQL script also needs to ensure that the queries interrogating the run's statistics do not cause a hard parse, so they are executed once here with the results ignored. After that, they behave as above, costing one soft parse each.

SELECT sn.NAME, my.value
  FROM v$mystat my, v$statname sn 
  WHERE my.statistic# = sn.statistic#
  AND sn.NAME LIKE 'parse%';

SELECT NAME, gets, misses
  FROM v$latch
  WHERE NAME IN ('shared pool','library cache');

From here on, all results are valid; the statistics queries themselves have only a minimal effect on the final outcome.

SELECT sn.NAME, my.value
  FROM v$mystat my, v$statname sn 
  WHERE my.statistic# = sn.statistic#
  AND sn.NAME LIKE 'parse%';

SELECT NAME, gets, misses
  FROM v$latch
  WHERE NAME IN ('shared pool','library cache');

DECLARE
  vDummy number(10);
  vCursor integer;
  vIgnore integer;

BEGIN
  -- Parse (and define the output column) once only, outside the loop.
  vCursor := dbms_sql.open_cursor;
  DBMS_SQL.PARSE(vCursor, 'select :something_else from sys.dual', DBMS_SQL.NATIVE);
  DBMS_SQL.DEFINE_COLUMN(vCursor, 1, vDummy);

  FOR x IN 1..1e6 LOOP
    -- Bind, execute and fetch on every iteration; no further parsing.
    DBMS_SQL.BIND_VARIABLE(vCursor, ':something_else', x);
    vIgnore := DBMS_SQL.EXECUTE(vCursor);
    vIgnore := DBMS_SQL.FETCH_ROWS(vCursor);
    DBMS_SQL.COLUMN_VALUE(vCursor, 1, vDummy);
  END LOOP;
  dbms_sql.CLOSE_CURSOR(vCursor);
END;

SELECT sn.NAME, my.value
  FROM v$mystat my, v$statname sn
  WHERE my.statistic# = sn.statistic#
  AND sn.NAME LIKE 'parse%';

SELECT NAME, gets, misses
  FROM v$latch
  WHERE NAME IN ('shared pool','library cache');
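One caveat before moving on: if anything fails between OPEN_CURSOR and CLOSE_CURSOR, the cursor is left open until the session ends. Production code would normally guard against this with an exception handler; the following is a minimal sketch of that idiom (it is not part of the timed test above):

DECLARE
  vCursor integer;

BEGIN
  vCursor := dbms_sql.open_cursor;
  DBMS_SQL.PARSE(vCursor, 'select :something_else from sys.dual', DBMS_SQL.NATIVE);
  -- ... define, bind, execute and fetch as in the script above ...
  dbms_sql.CLOSE_CURSOR(vCursor);
EXCEPTION
  WHEN OTHERS THEN
    -- Close the cursor on failure, then re-raise the original error.
    IF vCursor IS NOT NULL AND DBMS_SQL.IS_OPEN(vCursor) THEN
      dbms_sql.CLOSE_CURSOR(vCursor);
    END IF;
    RAISE;
END;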

You can see immediately that using the DBMS_SQL package takes far more typing, even for something as simple as a SELECT ... FROM DUAL. However, bear in mind that you only have to do this typing once, while the resulting code may be executed many times. Suddenly it's not so bad after all!

The following results show plainly that all the extra typing is very much worthwhile. 

 

And The Results are …

In the following results from the two scripts above, the output from the dummy statements at the start of each script has been omitted, as the values they show are simply ignored. I've also combined the before and after results to save space.

 

EXECUTE IMMEDIATE

For the EXECUTE IMMEDIATE script, we have these before and after figures:

NAME                    BEFORE_VALUE AFTER_VALUE
----------------------- ------------ -----------
parse time cpu                     0       7,545
parse time elapsed                 0       7,793
parse count (total)               10   1,000,013
parse count (hard)                 0           2
parse count (failures)             0           0

NAME                    BEFORE_GETS AFTER_GETS  BEFORE_MISSES AFTER_MISSES
----------------------- ----------- ----------- ------------- ------------
shared pool              21,776,935  26,779,393           111          111
library cache            67,530,636  78,539,324           370          391

 

DBMS_SQL

For DBMS_SQL we have these results:

NAME                    BEFORE_VALUE AFTER_VALUE
----------------------- ------------ -----------
parse time cpu                     0           2
parse time elapsed                 1           3
parse count (total)               10          14
parse count (hard)                 0           2
parse count (failures)             0           0

NAME                    BEFORE_GETS AFTER_GETS  BEFORE_MISSES AFTER_MISSES
----------------------- ----------- ----------- ------------- ------------
shared pool              26,779,511  27,779,806           111          111
library cache            78,539,535  80,539,993           391          392

The above results are summarised below, along with the run time for each run as shown by a simple SET TIMING ON. The database used for these tests was Oracle 9.2.0.4 Enterprise Edition running on a SUSE Linux Enterprise Server 9 machine. No users other than my TOAD sessions were attached to the database.

NAME                   EXECUTE IMMEDIATE   DBMS_SQL COMMENTS
---------------------- ----------------- ---------- ------------------------------------------
Run time                            6:20       4:23 DBMS_SQL ran in about 69% of the EXECUTE IMMEDIATE time.
Parse count (total)            1,000,003          4 Need I say anything else here?
Parse CPU                          7,545          2 DBMS_SQL used around 0.03% of the EXECUTE IMMEDIATE parse CPU.
Parse elapsed                      7,793          2 DBMS_SQL took around 0.03% of the EXECUTE IMMEDIATE parse time.
Shared Pool gets               5,002,458  1,000,295 DBMS_SQL used 20% of the latch gets that EXECUTE IMMEDIATE used.
Library cache gets            11,008,688  2,000,458 DBMS_SQL used 18% of the latch gets that EXECUTE IMMEDIATE used.
Library cache misses                  21          1 DBMS_SQL had under 5% of the latch misses that EXECUTE IMMEDIATE had.

Bear in mind that the database these sessions were attached to had only my TOAD sessions connected (one MOE tab for each script, with queries running in separate threads, and TOAD's own queries running in threads as well). Running EXECUTE IMMEDIATE style queries on a busy database as part of an application may well produce much bigger numbers than these simple test scripts. The excessive parsing that EXECUTE IMMEDIATE carries out is also more likely to suffer from other sessions' activity, as it may have to wait for the shared pool and library cache latches before it can even begin to parse each query.
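On a busy system you can observe this contention directly. The following is a hedged check (on this 9i release all latch waits appear under the single 'latch free' wait event, and P2 carries the latch number, which joins to V$LATCH.LATCH#):

SELECT sw.sid, sw.event, l.NAME AS latch_name
  FROM v$session_wait sw, v$latch l
  WHERE sw.event = 'latch free'
  AND sw.p2 = l.latch#;

Any rows naming the shared pool or library cache latches while the EXECUTE IMMEDIATE script runs would be exactly the waits described above.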

 

In Conclusion

So, in conclusion, there are huge performance advantages to be gained from typing a little more in the development phase.

 

Hints and Tips

If your application is client-server and your sessions connect once, do a day's work and then disconnect, you can package up your SQL (or at least the common parts) so that a DBMS_SQL.PARSE is executed only once, when first needed. The cursor is held open and is simply re-bound (if required) and re-executed each time it is needed, with an implicit close when the session disconnects, as follows.

First of all, the package specification is created. This holds nothing more (for this example) than a cursor variable, a statement, and the name of one bind variable within that statement. A test function that uses the above is also declared.

CREATE OR REPLACE PACKAGE SQLtest AS

  -- Cursor variable. NULL = never been parsed.
  gCursor integer := NULL;

  -- Statement with bind variable(s) that we execute.
  gStatement varchar2(250) := 'select :something_else from dual';

  -- The name of the bind variable in the above statement.
  gBind varchar2(25) := ':something_else';

  -- A function that runs a test of the above.
  FUNCTION test (pWhatever IN number) RETURN number;
END;

Next we create the package body. This is simply the above test function, which works by checking the package cursor variable; if the statement has never been parsed, we parse it and define its columns. Then, now that the statement has certainly been parsed, we simply bind our input parameter into the cursor, execute it, and fetch the result to return to the caller.

CREATE OR REPLACE PACKAGE BODY SQLtest AS

  FUNCTION test (pWhatever IN number) RETURN number AS

    vDummy number(10);
    vIgnore integer;

  BEGIN
    -- Simple case.
    IF (pWhatever IS NULL) THEN
      RETURN NULL;
    END IF;

    -- Have we parsed this cursor before? If so, skip it; otherwise
    -- carry out a parse, once and once only!
    IF (gCursor IS NULL) THEN
      gCursor := dbms_sql.open_cursor;
      DBMS_SQL.PARSE(gCursor, gStatement, DBMS_SQL.NATIVE);
      DBMS_SQL.DEFINE_COLUMN(gCursor, 1, vDummy);
    END IF;

    -- Do this each and every time we are called.
    DBMS_SQL.BIND_VARIABLE(gCursor, gBind, pWhatever);
    vIgnore := DBMS_SQL.EXECUTE(gCursor);
    vIgnore := DBMS_SQL.FETCH_ROWS(gCursor);
    DBMS_SQL.COLUMN_VALUE(gCursor, 1, vDummy);
    RETURN vDummy;
  END;
END;

If you compile the body above with debug mode in TOAD, you can see, by single-stepping through two separate executions of the test function, that it doesn't re-parse the statement. If TOAD runs sessions in threads (check your options), then running a debug session twice will result in two different Oracle sessions – so both will have to parse the statement.

If you have two calls to the function in one debug session, only the first will parse the statement, as follows:

DECLARE
  RetVal number;

BEGIN
  -- This one will parse.
  RetVal := NORMAN.SQLTEST.TEST ( 777 );

  -- This one will not parse.
  RetVal := NORMAN.SQLTEST.TEST ( 666 );
END;

If you single-step the above code, you will see the 777 call parsing the statement while the 666 call doesn't.
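If you would rather verify this without the debugger, one alternative (assuming you have SELECT access to V$SQL; note that the statement text must match exactly, including case) is to watch PARSE_CALLS stay put while EXECUTIONS climbs:

SELECT sql_text, parse_calls, executions
  FROM v$sql
  WHERE sql_text = 'select :something_else from dual';

After the two calls above, you would expect to see one parse call and two executions recorded against the cursor.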

One final thought. Don't ever do this:

DECLARE
  vDummy number(10);

BEGIN
  FOR x IN 1..1e6 LOOP
    -- A brand new literal statement every time: one hard parse per iteration.
    execute immediate 'SELECT ' || to_char(x) || ' FROM SYS.DUAL' INTO vDummy;
  END LOOP;
END;

If you do, or your application does, now would be a good time to run away and hide! The above code results in one hard parse for each and every iteration of the loop, and it fills your library cache with unshared (and probably unsharable) statements, most likely forcing other, potentially useful, statements to be aged out to make room for one million different SELECT <some number> FROM DUAL statements. Just say no!
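If you suspect an existing application is already doing something like this, a quick way to gauge the damage (again assuming SELECT access to V$SQL) is to count the literal-riddled variations cluttering the library cache:

SELECT COUNT(*) AS unshared_statements
  FROM v$sql
  WHERE sql_text LIKE 'SELECT % FROM SYS.DUAL';

A large and ever-growing count here is the signature of the anti-pattern above.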

 Norman Dunbar

 

Q: Maximize Database Performance via Toad

Normally, my white papers and PowerPoint presentations about database benchmarking rely upon industry standard tests such as the TPC-C or TPC-H (www.tpc.org). But that's not too useful in terms of real-world applicability for many people, because unless you're testing new hardware (such as for an upgrade or replacement), the standard tests probably will not have clear and obvious relevance. So in this paper, I'm going to show how to use Toad to monitor and diagnose your database's performance for your own applications. If your database could benefit from supporting 60% more users (load) while also improving both transaction throughput (TPS) and average response time by 30% – then read on!

 

The Methodology

Rather than spending inordinate amounts of time deciphering individual explain plans, or trying to guesstimate what (if any) database parameters might result in substantial runtime improvements, it's much easier to simply execute your application code and monitor the system's stress points with Toad. Now don't get me wrong, I really do think explain plans are a great tool for both unit testing and code reviews. But few non-trivial database applications consist of individual, isolated SQL commands that can be effectively tuned in seclusion. Plus, we are sometimes not entirely sure how best to interpret complex individual explain plans, due to their sometimes obscured dependencies.

So what's needed is a database optimization method that's simple, effective and reliable. Luckily, Toad offers just such a tool: it's called "scalability testing" (and is accomplished via integration with our Benchmark Factory software). Moreover, Toad with the optional DBA Module has sufficient monitoring and diagnostic tools to further expedite the tuning process. In fact, the process can be simplified as follows:

  1. Identify the code you wish to stress test (e.g. SQL, PL/SQL, C, C++, etc.)
  2. Establish a meaningful metric for acceptability (e.g. <= 2 seconds per transaction)
  3. Establish a scalability target for either throughput and/or max load (e.g. 100 users)
  4. Run identified code from within Toad or BMF to establish a performance baseline
  5. Run Toad’s static analysis tools to potentially identify and correct some issues
  6. Repeat the following iterative cycle until results meet target:
    1. Reset required data to initial test state (e.g. truncate and import – see the sketch after this list)
    2. Launch Toad – specifically the following 5 screens:
      1. Server Statistics with refresh of 60 seconds
      2. DB Monitor (DBA Module) with refresh of 30 seconds
      3. DB Probe (DBA Module) with refresh of 5 minutes
      4. ADDM/AWR (DBA Module) and take initial snapshot
      5. Redo Log Frequency Map (DBA Module)
    3. Run identified code from within Toad or BMF to measure iteration’s performance under increasing concurrent user load vs. goal or target
    4. Screen snapshot to file and examine the above five Toad screens looking for the most probable performance issue to address (note that often you’ll get more than one conclusive answer across these five screens, just choose whichever seems to offer the best potential, fix and then repeat the above)
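As an illustration of step 6.1, the data reset can be as simple as truncating the test tables and reloading them from a saved baseline copy. This is only a sketch – ORDERS_TEST and ORDERS_BASELINE are hypothetical names, and your own reset might instead be an import or a flashback:

-- Throw away the previous iteration's data (DDL, so it commits implicitly).
TRUNCATE TABLE orders_test;

-- Reload the pristine baseline; a direct-path insert keeps the reload quick.
INSERT /*+ APPEND */ INTO orders_test
SELECT * FROM orders_baseline;

COMMIT;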

About the Author

Steve Hilker

Steve Hilker was a Product Manager for Quest Software. Steve has over 35 years technical experience spanning application development, system administration, database management and various management positions at several software companies. Steve was the founder of RevealNet, best known for its desktop knowledge bases and unique database tools such as PL/Formatter. RevealNet was acquired by Quest Software in 2001. He's had the pleasure of being the product manager for many of Quest's database tools.
