Friday, November 12, 2010



SQL_TRACE may also be enabled at the system or session level. It generates so much output and has such a performance impact that you will almost always enable it selectively – you will rarely, if ever, enable it for the whole system in the init.ora file. SQL_TRACE has two settings as well, TRUE and FALSE. If set to TRUE, it will generate trace files in the directory specified by the init.ora parameter USER_DUMP_DEST when using dedicated servers to connect to Oracle, and BACKGROUND_DUMP_DEST when using a multi-threaded server (MTS) connection. I would recommend never attempting to use SQL_TRACE with MTS, however, as the output from your session's queries will be written to many different trace files as your session migrates from shared server to shared server. Under MTS, interpreting SQL_TRACE results is nearly impossible. Another important init.ora parameter is MAX_DUMP_FILE_SIZE. This limits the maximum size of a trace file Oracle will generate. If you discover that your trace files are truncated, you will need to increase this setting. It may be changed via an ALTER SYSTEM or ALTER SESSION command. MAX_DUMP_FILE_SIZE may be specified in one of three ways:

1) A numerical value for MAX_DUMP_FILE_SIZE specifies the maximum size in operating system blocks.

2) A number followed by a K or M suffix specifies the file size in kilobytes or megabytes.

3) The string UNLIMITED. This means that there is no upper limit on trace file size. Thus, dump files can be as large as the operating system permits.

I do not recommend UNLIMITED – it is far too easy to completely fill up a file system in this manner; a setting of 50 to 100 MB should be more than sufficient.
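For example, to cap trace files for the current session (the numeric form counts operating system blocks; the suffix form may not be accepted in every release):

alter session set max_dump_file_size = 10000;    -- 10,000 OS blocks
alter session set max_dump_file_size = '100M';   -- 100 MB, using the suffix form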

Some of the various ways to enable SQL_TRACE are:

1) ALTER SESSION SET SQL_TRACE=TRUE|FALSE – Executing this SQL will enable the default mode of SQL_TRACE in the current session. This is most useful in an interactive environment such as SQL*PLUS, or embedded in an application, so that the application may turn SQL_TRACE on and off at will. It is a nice feature to have in all applications, as it allows you to turn SQL_TRACE on and off for the application via a command line switch, menu selection, parameter, and so on.

2) SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION – This packaged procedure allows us to set SQL_TRACE on and off for any existing session in the database. All we need to do is identify the SID and SERIAL# for the session; these are available in the dynamic performance view V$SESSION.
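Putting that together, a sketch (the username and the SID/SERIAL# values here are purely illustrative):

select sid, serial#, username
  from v$session
 where username = 'MANZ';

exec sys.dbms_system.set_sql_trace_in_session( 7, 74, TRUE );

-- ... let the session do its work, then turn tracing back off
exec sys.dbms_system.set_sql_trace_in_session( 7, 74, FALSE );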

3) ALTER SESSION SET EVENTS – We can set an event to enable tracing with more information than is normally available via ALTER SESSION SET SQL_TRACE=TRUE. The SET EVENTS approach is not documented or supported by Oracle Corporation; however, its existence is generally available knowledge (search the Web for alter session set events 10046 to see the many sites that document this feature). Using this event we can not only get everything that SQL_TRACE tells us, but we can also see the values of bind variables used by our SQL, as well as the wait events (what slowed us down) for our SQL.

alter session set events '10046 trace name context forever, level N';
alter session set events '10046 trace name context off';

Where N is one of the following values:
a) N=1 – Enable the standard SQL_TRACE facility. This is no different than setting SQL_TRACE=TRUE.

b) N=4 – Enable standard SQL_TRACE but also capture bind variable values in the trace file.

c) N=8 – Enable standard SQL_TRACE but also capture wait events at the query level into the trace file.

d) N=12 – Enable standard SQL_TRACE and include both bind variables and waits.
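For example, to capture the most complete trace (binds and waits, level 12) and then turn it off again:

alter session set events '10046 trace name context forever, level 12';

alter session set events '10046 trace name context off';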

4) DBMS_MONITOR.SESSION_TRACE_ENABLE – In Oracle 10g and later, this package provides the supported interface for enabling extended SQL tracing.
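As a sketch of the DBMS_MONITOR interface (Oracle 10g and later), the same extended tracing can be enabled without resorting to the undocumented event (the SID/SERIAL# values are illustrative):

-- Trace the current session, including wait events and bind variable values
exec dbms_monitor.session_trace_enable( waits => TRUE, binds => TRUE );

-- Trace another session, identified by SID and SERIAL# from V$SESSION
exec dbms_monitor.session_trace_enable( session_id => 7, serial_num => 74, waits => TRUE, binds => TRUE );

-- Turn tracing off again
exec dbms_monitor.session_trace_disable;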

There are other methods as well; however, the above are the ones I use and see used most often.

If you have an application provided by a third party, or you have existing applications that are not SQL_TRACE-enabled, what can you do to trace them? There are two approaches that I take. One approach, if the application is a client-server application and stays connected to the database, is to have the session you want to trace start up the application and log into the database. Then, by querying
V$SESSION, you would determine that session's SID and SERIAL#. Now you can call
SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to enable tracing in that single session. Today however, many applications are web-based and this trick does not work as well. The sessions are very short and they come and go frequently. What we need is an ability to set SQL_TRACE on for a 'user' – whenever this user is in the database, we need to have set SQL_TRACE on for them. Fortunately, we can do that via the LOGON DDL trigger in the database. For example, a trigger I frequently use in Oracle 8i
(database event triggers such as AFTER LOGON are a new feature of Oracle 8.1.5 and up) is:

create or replace trigger logon_trigger
after logon on database
begin
    if ( user = 'MANZ' ) then
        execute immediate
        'alter session set sql_trace=true';
    end if;
end;

This will enable tracing any time I logon to the database. The application does not have to participate in the setting of SQL_TRACE – we'll do this ourselves.

Using and Interpreting TKPROF Output

TKPROF is nothing more than a simple command line tool to format a raw trace file into something we can read easily. It is an excellent utility that is not used enough; I believe this is mostly due to ignorance of its existence. Now that you know it exists, I would expect you to use it constantly.

What I'll do in this section is to run a query with tracing enabled. We'll take a detailed look at the TKPROF report and learn what we need to look for in these reports:

sql> show parameter timed_statistics;

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
timed_statistics                     boolean TRUE

sql> alter session set sql_trace=true;
Session altered.

sql> select owner, count(*)
from all_objects
group by owner;

OWNER                           COUNT(*)
------------------------------ ----------
DEMO11                                  3
SYS                                 11279

15 rows selected.

sql> select a.spid
2 from v$process a, v$session b
3 where a.addr = b.paddr
4 and b.audsid = userenv('sessionid')
5 /

Here I verified that timed statistics was enabled (it is almost useless to execute SQL_TRACE without it) and then enabled SQL_TRACE. I then ran a query I wanted to analyze. Finally, I ran a query to get my SPID (server process ID) – this is very useful in identifying my trace file. After I executed this query, I exited SQL*PLUS and went to the directory on the database server specified by the USER_DUMP_DEST init.ora parameter. You can retrieve the value of this parameter online by querying the V$PARAMETER view, or by using DBMS_UTILITY (which does not require access to the V$PARAMETER view):

sql> declare
         l_intval number;
         l_strval varchar2(2000);
         l_type   number;
     begin
         l_type := dbms_utility.get_parameter_value
                       ('user_dump_dest', l_intval, l_strval);
         dbms_output.put_line( l_strval );
     end;
     /

PL/SQL procedure successfully completed.
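The V$PARAMETER alternative mentioned above is a one-liner (it does, however, require SELECT access to that view):

select value
  from v$parameter
 where name = 'user_dump_dest';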

In this directory, I found:

Volume in drive C has no label.
Volume Serial Number is F455-B3C3
Directory of C:\oracle\ADMIN\tkyte816\udump
03/16/2001 02:55p      <DIR>          .
03/16/2001 02:55p      <DIR>          ..
03/16/2001 08:45a 5,114 ORA00860.TRC
03/16/2001 02:52p 3,630 ORA01112.TRC
03/16/2001 02:53p 6,183 ORA01124.TRC
3 File(s) 14,927 bytes
2 Dir(s) 13,383,999,488 bytes free

A few trace files – this is where the SPID will come in handy. My trace file is ORA01124.TRC. I know that because the SPID is part of the filename. On UNIX a similar naming convention is used that also incorporates the SPID. One issue with trace files is that the files in this directory may not be readable by you, if you are not in the administrative group for Oracle (for example, the DBA group in UNIX). If
not, you should ask your DBA to set:

_trace_files_public = true

In the init.ora file of your test and development machines. This will allow all users to read trace files on the server. You should not use this setting on a production machine, as these files can contain sensitive information. On a test or development platform, it should be safe to use. Notice that this init.ora parameter starts with an underscore; it is undocumented and unsupported by Oracle Corporation. Again, like the SET EVENTS command, it is general knowledge and widely used – search Google or any other search engine for _trace_files_public, and you'll find many references to this parameter.

Now that we have identified our trace file, we need to format it. We can (and will, further on in this section) read the raw trace file; however, about 90 percent of the information we need is easily retrieved from a nicely formatted report. The remaining 10 percent of the information is typically not needed, and when it is,
we'll have to read it from the trace file itself. In order to format the trace file, we will use the TKPROF command line utility. In its simplest form we will just execute:

C:\oracle\ADMIN\tkyte816\udump>tkprof ora01124.trc report.txt

TKPROF: Release - Production on Fri Mar 16 15:04:33 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.
The parameters to the TKPROF command are the input file name and the output file name. Now, we just need to edit REPORT.TXT, and we will find the following information:

select owner, count(*)
from all_objects
group by owner
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ----- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1.20 1.21 0 86091 4 15
------- ------ -------- ---------- ----- ---------- ---------- ----------
total 4 1.20 1.21 0 86091 4 15

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 69
Rows Row Source Operation
------- ---------------------------------------------------
21792 FILTER
21976 INDEX RANGE SCAN (object id 34)
28973 INDEX RANGE SCAN (object id 101)
654 INDEX UNIQUE SCAN (object id 36)

TKPROF is showing us a lot of information here. We'll take it piece by piece:

select owner, count(*)
from all_objects
group by owner

First, we see the original query as the server received it. We should be able to recognize our queries easily in here. In this case, it is exactly as I had typed it in. Next comes the overall execution report for this query:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ----- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1.20 1.21 0 86091 4 15
------- ------ -------- ---------- ----- ---------- ---------- ----------
total 4 1.20 1.21 0 86091 4 15

Here we see the three main phases of the query:

1) The PARSE phase — where Oracle finds the query in the shared pool (soft parse) or creates a new plan for the query (hard parse).

2) The EXECUTE phase. This is the work done by Oracle upon the OPEN or EXECUTE of the query. For a SELECT, this will often be 'empty', whereas for an UPDATE, this is where all of the work is done.

3) Then, there is the FETCH phase. For a SELECT, this will be where most of the work is done and visible, but a statement like an UPDATE will show no work (you don't 'FETCH' from an UPDATE).

The column headings in this section have the following meanings:
1) CALL – Will be one of PARSE, EXECUTE, FETCH, or TOTAL. Simply denotes which phase of query processing we are looking at.

2) COUNT – How many times the event occurred. This can be a very important number. Below, we will take a look at how to interpret the values.

3) CPU – In CPU seconds, how much time was spent on this phase of the query execution. This is only filled in if TIMED_STATISTICS was enabled.

4) ELAPSED – As measured by the wall clock; how long this phase of query execution took. This is only filled in if TIMED_STATISTICS is enabled.

5) DISK – How many physical I/Os to the disk our query performed.

6) QUERY – How many blocks we processed in consistent-read mode. This will include counts of blocks read from the rollback segment in order to 'rollback' a block.

7) CURRENT – How many blocks were read in 'CURRENT' mode. Normally, blocks are retrieved for a query as they existed when the query began; current mode blocks are retrieved as they exist right now, not from a previous point in time. During a SELECT, we might see CURRENT mode retrievals due to reading the data dictionary to find the extent information for a table to do a full scan (we need the 'right now' information on that, not the consistent read). During a modification, we will access the blocks in CURRENT mode in order to write to them.

8) ROWS – How many rows were affected by that phase of processing. A SELECT will show them in the FETCH phase. An UPDATE would show how many rows were updated in the EXECUTE phase.

The important threads or facts to look for in this section of the report are as follows:

a) A high (near 100 percent) parse count to execute count ratio when the execute count is greater than one – Here, you take the number of times you parsed this statement and divide it by the number of times you executed it. If this ratio is one, then you parsed the query each and every time you executed it, and that needs to be corrected; we would like this ratio to approach zero. Ideally, the parse count would be one and the execute count would be higher than one. A high parse count implies we are performing many soft parses of this query. If you recall from the previous section, this can drastically reduce your scalability, and will impact your run-time performance even in a single user session. You should ensure that you parse a query once per session and execute it repeatedly – you never want to have to parse your SQL for each execution.
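One way to spot this pattern instance-wide, rather than in a single trace file, is to compare PARSE_CALLS to EXECUTIONS in V$SQL; a sketch:

select sql_text, parse_calls, executions
  from v$sql
 where executions > 1
   and parse_calls >= executions
 order by parse_calls desc;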

b) Execute count of one for all or nearly all SQL – If you have a TKPROF report in which all SQL statements are executed one time only, you are probably not using bind variables (the queries all look alike, except for some constant in them). In a real application trace, we would expect very little 'unique' SQL; the same SQL should be executed more than once. Too much unique SQL typically implies you are not using bind variables correctly.

c) A large disparity between CPU time and elapsed time – This would indicate that you spent a lot of time waiting for something. If you see that you took one CPU second to execute, but it required ten seconds by the wall clock, it means you spent 90 percent of your run-time waiting for a resource. We'll see later on in this section how we can use the raw trace file to determine the cause of the wait. This wait could be for any number of reasons. For example, an update that was blocked by another session would have a very large elapsed time versus CPU time. A SQL query that performs lots of physical disk I/O might have lots of wait time for I/O to complete.

d) A large CPU or elapsed time number – These are your queries that represent the 'lowest hanging fruit'. If you can make them go faster, your program will go faster. Many times, there is one monster query gumming up the works; fix this and the application works just fine.

e) A high (FETCH COUNT)/(rows fetched) ratio – Here we take the number of FETCH calls (two in our example) and the rows fetched count (15 in our example). If this ratio is near one and the rows fetched is greater than one, our application is not performing bulk fetches. Every language/API has the ability to do this – to fetch many rows at a time in a single call. If you do not utilize this ability to bulk fetch, you will spend much more time performing round trips from the client to the server than you should. This excessive switching back and forth, in addition to generating a very 'chatty' network situation, is much slower than fetching many rows in one call. How you direct your application to bulk fetch is language/API dependent. For example, in Pro*C you would pre-compile with prefetch=NN, in Java/JDBC you would call the SETROWPREFETCH method, in PL/SQL you would use the BULK COLLECT directive in a SELECT INTO, and so on. The above example shows that SQL*PLUS (the client we used) called fetch twice in order to retrieve 15 rows, which shows it used an array size of at least eight rows. In fact, SQL*PLUS uses an array size of 15 rows by default – the second fetch it made returned zero records; it just got the end-of-file.
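As a sketch of the PL/SQL form mentioned above (the EMP table here is just the usual demo table), BULK COLLECT retrieves many rows in a single call:

declare
    type ename_tab is table of emp.ename%type;
    l_enames ename_tab;
begin
    -- one call to the server fetches all of the rows at once
    select ename
      bulk collect into l_enames
      from emp;

    dbms_output.put_line( l_enames.count || ' rows fetched in one call' );
end;
/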

f) An excessively high disk count – This is harder to evaluate as a rule of thumb; however, if DISK COUNT = QUERY + CURRENT MODE BLOCK COUNT, then all blocks were read from disk. We would hope that if the same query were executed again, some of the blocks would be found in the SGA. You should consider a high disk count value to be a red flag, something to investigate. You might have to do some SGA resizing, or work on the query to develop one that requires fewer block reads.

g) An excessively high query or current count – This indicates your query does a lot of work. Whether this is an issue or not is subjective. Some queries just hit a lot of data, as our example above does. A query that is executed frequently however, should have relatively small counts. If you add query and current mode blocks from the total line and divide that by the count column from the execute line, you would
expect a small number.

Let's go on to the next part of the report:

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 69

This is telling us that the query we executed was found in the shared pool – we did not generate a miss on the library cache during this parse. It indicates that we performed a soft parse of the query. The very first time a query is executed, we would expect this count to be one. If almost every query you execute has a one for this value, it would indicate you are not using bind variables (and you need to fix that). You are not reusing SQL.

The second line informs us of the optimizer goal that was in place during the execution of this query. This is informational only – the query plan developed and used would be affected by this setting. Finally, the USER_ID used to parse the query is presented. This can be resolved into a USERNAME via:

SQL> select * from all_users where user_id = 69;

USERNAME                          USER_ID CREATED
------------------------------ ---------- ---------
MANZ                                   69 10-MAR-01

showing that I ran it. The last section of the TKPROF report for this query is the query plan. The query plan that appears by default is shown below:

Rows Row Source Operation
------- ---------------------------------------------------
21792 FILTER
21976 INDEX RANGE SCAN (object id 34)
28973 INDEX RANGE SCAN (object id 101)
654 INDEX UNIQUE SCAN (object id 36)

This is the actual query plan that was used by Oracle at run-time. The interesting thing about this plan is that the rows that flow through each step of the plan are visible. We can see, for example, that 28,973 rows were fetched from OBJAUTH$. These counts are the row counts of the rows flowing out of that step of the execution plan (after any predicates that could be applied were applied to OBJAUTH$, it sent 28,973 on to the next step in the plan). In Oracle 8.0 and before, this count was the count of rows inspected by that phase of the execution plan (the number of rows flowing into the step). For example, if 50,000 rows were considered in OBJAUTH$, but some WHERE clause was used to exclude them, a TKPROF report from Oracle 8.0 would have reported 50,000 instead. Using this sort of information, you can see what steps you might want to avoid in a query, and either rework the query or use HINTS to come up with a more desirable plan.

You'll notice that there is a mixture of object names (for example, TABLE ACCESS BY INDEX ROWID IND$) and object IDs (for example, INDEX UNIQUE SCAN (object id 36)). This is because the raw trace file does not record all of the object names, only the object IDs for some objects. Also, by default TKPROF will not connect to the database to turn the object IDs into object names. We can easily turn this object ID into an object name via the query:

SQL> select owner, object_type, object_name
from all_objects
where object_id = 36;
OWNER                          OBJECT_TYPE        OBJECT_NAME
------------------------------ ------------------ --------------

Alternatively, we could add the EXPLAIN= parameter to TKPROF as follows:

C:\oracle\ADMIN\tkyte816\udump>tkprof ora01124.trc x.txt explain=tkyte/tkyte

in this case, we would receive the following error in the output file:

error during parse of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the view

While we can run the query, the base tables that the view accesses are not visible to us. In order to get the explain plan for this query, we would have to use the SYS account, or some other account that does have access to the underlying objects.
I prefer to never use the EXPLAIN= however and would recommend the same for you.

The reason is that the explain plan query may differ radically from the actual query used at run-time. The only plan that can be trusted is the plan saved in the trace file itself. Here is a simple example using TKPROF with the explain=userid/password that demonstrates this:

select count(object_type)
from t where object_id > 0

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ----- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.19 2.07 337 20498 0 1
------- ------ -------- ---------- ----- ---------- ---------- ----------
total 4 0.19 2.07 337 20498 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 69 (MANZ)
Rows Row Source Operation
------- ---------------------------------------------------
21791 INDEX RANGE SCAN (object id 25291)

Rows Execution Plan
------- ---------------------------------------------------

Obviously, one of the plans is incorrect; one shows an index range scan and table access by row ID, the other a simple full scan. Unless you knew that I analyzed the table after you executed your query, but before you ran TKPROF, you would be unable to explain this disparity. After I analyzed the table, the default plan for that query changed dramatically. TKPROF just uses the explain plan facility in Oracle. This will return the query plan that would be used at this point in time, not the plan that was actually used. Many features can impact the plan visible in the trace file versus the plan returned by explain plan. For example, the application could have been using stored query outlines (see Chapter 11 on Optimizer Plan Stability for more details on this feature). The query plan would have been based on a stored outline at run-time, whereas the query plan returned by explain plan would be some other plan. In general, if you do use the EXPLAIN= parameter to TKPROF, you must verify that the two plans agree with each other step by step.

TKPROF has many command line options and if you just type TKPROF on the command line, you'll get
to see them all:
C:\Documents and Settings\Desktop>tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]

The most useful option in my opinion is the sort= option. I like to sort by the various CPU and elapsed time metrics to get the 'worst' queries to pop up to the top of the trace file. You can also use this to find the queries that do too much I/O, and so on. The remaining options are self-explanatory. 99.9 percent of the time I simply use tkprof tracefilename reportfilename and nothing else. This shows the SQL more or less in the order it was executed at run-time. I might use a tool such as grep in UNIX or find in Windows to extract all of the total lines so I know what queries to zoom in on. For example, using our report.txt:

C:\oracle\ADMIN\tkyte816\udump>find "total" report.txt

total 2 0.00 0.00 0 0 0 0
total 4 0.01 0.02 1 1 4 1
total 4 1.20 1.21 0 86091 4 15
total 6 0.01 0.01 0 4 0 2
total 4 0.00 0.00 0 0 0 1
total 14 1.21 1.23 1 86092 8 17
total 6 0.01 0.01 0 4 0 2

shows me I should edit report.txt and search for 1.21 if I want to speed up this process. There are other statements in there but, obviously, this is the one I need to focus on if I want this to go faster.

Using and Interpreting Raw Trace Files

There are two types of trace files in Oracle; those generated by SQL_TRACE (these are useful to us) and those generated as a result of a failed session (a bug in the database software). The second type, from a failed session, is not useful to us directly; they are only useful for sending to Oracle Support for analysis.

The first type of trace file is very useful to us, especially if we know how to read and interpret them. Most of the time, we will use TKPROF to format the trace files for use, however from time to time we need to dig into the trace file to understand more about what happened than TKPROF will tell us. For example, suppose someone gives you a TKPROF report with:

update emp set ename = lower(ename) where empno = :b1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ----- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 54.25 0 17 8 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ----- ---------- ---------- ----------
total 2 0.00 54.25 0 17 8 1

It is obvious we have a problem here – it took almost a minute to do an update of a single row, even though it took no measurable CPU time. The problem is that we had some massive wait event on something, but TKPROF won't tell us what we were waiting on. Additionally, it would be nice if we knew which row it was that we were going after (such as what the EMPNO value in :b1 was). Information like that might help us track down how we got into this situation. Fortunately, the application was traced with the following command:

alter session set events '10046 trace name context forever, level 12';

so the trace file has both wait events and bind variables in it. Let's take a look at the raw trace file from top to bottom. The piece of code that I traced was:

SQL> alter session set events '10046 trace name context forever, level 12';
Session altered.

SQL> declare
         l_empno number default 7698;
     begin
         update emp set ename = lower(ename) where empno = l_empno;
     end;
     /

PL/SQL procedure successfully completed.


We do know the EMPNO being used in this simple example but in general, you would not. Here are the contents of the trace file interspersed with comments:

Dump file C:\oracle\admin\tkyte816\udump\ORA01156.TRC
Sat Mar 17 12:16:38 2001

ORACLE V8. - Production vsnsta=0
vsnsql=e vsnxtr=3
Windows 2000 Version 5.0 , CPU type 586
Oracle8i Enterprise Edition Release - Production
With the Partitioning option
JServer Release - Production
Windows 2000 Version 5.0 , CPU type 586
Instance name: tkyte816
Redo thread mounted by this instance: 1
Oracle process number: 11
Windows thread id: 1156, image: ORACLE.EXE

This is a standard trace file header. It is useful to identify the exact system version and database version you are executing on. It also has the Oracle SID in it (Instance Name), which is useful for confirming you are looking at the right trace file in the first place.

*** 2001-03-17 12:16:38.407
*** SESSION ID:(7.74) 2001-03-17 12:16:38.407
APPNAME mod='SQL*PLUS' mh=3669949024 act='' ah=4029777240

The above APPNAME record was made by a call to the DBMS_APPLICATION_INFO package (see
Appendix A on Necessary Supplied Packages for details on this package). Applications use this package to register themselves in the database, so that a query on V$SESSION can see 'who' they are. SQL*PLUS in particular makes use of this package. You may or may not see an APPNAME record in your trace file depending on the environment. It is an excellent idea for you to have all of your applications register themselves, so hopefully you will see this record with your own module name. The meaning of this record is:

mod - Module name as passed to DBMS_APPLICATION_INFO
mh - Module hash value
act - Action as passed to DBMS_APPLICATION_INFO
ah - Action hash value

PARSING IN CURSOR #3 len=70 dep=0 uid=54 oct=42 lid=54 tim=6184206 hv=347037164
alter session set events '10046 trace name context forever, level 12'
EXEC #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=6184206
WAIT #3: nam='SQL*Net message to client' ela= 0 p1=1111838976 p2=1 p3=0
WAIT #3: nam='SQL*Net message from client' ela= 818 p1=1111838976 p2=1 p3=0

Here we can see the actual statement used to enable tracing. It is preceded by a CURSOR record, which will always be the case (all SQL in the trace file will be preceded by a cursor record). The meaning of the fields in the cursor record is:

Parsing in Cursor #%d len=%d dep=%d uid=%ld oct=%d lid=%ld tim=%ld hv=%ld ad='%s'

Cursor # - The cursor number. You can also use this to find the maximum number of open cursors in your application, since this number is incremented and decremented each time you open a new cursor and close an existing one.

len - Length of the SQL statement to follow.

dep - The recursive depth of a SQL statement. Recursive SQL is SQL performed on behalf of some other SQL. Typically, this is SQL executed by Oracle in order to parse a query or perform space management. It can also be caused by PL/SQL (which is SQL) running SQL. So, you may find your own SQL is 'recursive'.

uid - User ID of the current schema. Note, this may be different from the lid below, especially if you use alter session set current_schema to change the parsing schema.

oct - Oracle Command Type. Numeric code indicating the type of SQL command being parsed.

lid - The user ID used to security-check the access privileges against.

tim - A timer. Its resolution is hundredths of seconds. You can subtract these times from each other as you encounter them in the trace file to see how far apart they occurred.

hv - Hash ID of the SQL statement.

ad - ADDR column of V$SQLAREA that refers to this SQL statement.

Next in the trace file, we can see the statement was actually executed right after parsing. The meanings of the values found in the EXEC record are:

EXEC Cursor#:c=%d,e=%d,p=%d,cr=%d,cu=%d,mis=%d,r=%d,dep=%d,og=%d,tim=%d

Cursor # The cursor number.

c - CPU time in hundredths of seconds.
e - Elapsed time in hundredths of seconds.
p - Number of physical reads performed.
cr - Consistent (query mode) reads (logical I/O).
cu - Current mode reads (logical I/O).
mis- Cursor miss in the library cache, indicates we had to parse the statement due to its being aged out of the shared pool, never having been in the shared pool, or it was otherwise invalidated.
r - Number of rows processed.
dep - Recursive depth of the SQL statement.

og - Optimizer goal, 1= all rows, 2 = first rows, 3 = rule, 4 = choose
tim - A timer.

There are other variations of the EXEC record. Instead of the keyword EXEC, we might also find:

PARSE - Parsing a statement.
FETCH - When fetching rows from a cursor.
UNMAP - Used to show freeing of temporary segments from intermediate results when
they are no longer needed.
SORT UNMAP - Same as UNMAP, but for sort segments.

The records for each of PARSE, FETCH, UNMAP, and SORT UNMAP contain the same information as an EXEC record does in the same order.

The last part of this section has our first reports of wait events. In this case they were:

WAIT #3: nam='SQL*Net message to client' ela= 0 p1=1111838976 p2=1 p3=0
WAIT #3: nam='SQL*Net message from client' ela= 818 p1=1111838976 p2=1 p3=0

These are typical 'busy' wait events that we discussed earlier in this chapter. The message to client is the server sending the client a message and waiting for a response. The message from client is the server waiting for the client to send a request over. In this case, the elapsed time (ela) on that event was 8.18 seconds (818 hundredths of a second). This just means that, after sending the ALTER SESSION command, I waited 8.18 seconds before sending the next command in this example. Unless you are feeding a constant and continuous stream of requests to the server, 'message from client' waits will be unavoidable and normal. The wait record includes these fields:

WAIT Cursor#: nam='%s' ela=%d p1=%ul p2=%ul p3=%ul

Cursor # - The cursor number.
nam - The name of the wait event. The Oracle Server Reference has a complete list of wait events and details on each one.
ela - Elapsed time in hundredths of seconds for the event.
p1, p2, p3 - The parameters specific to the wait event. Each event has its own set of parameters. Refer to the Oracle Server Reference for the meaning of p1, p2, p3 for a specific wait event.
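Applying the same idea to wait records (again, purely an illustrative sketch with names of my own choosing, not an official parser):

```python
import re

# Illustrative only: extract the fields of a WAIT record as formatted above.
def parse_wait(line):
    m = re.match(r"WAIT #(\d+): nam='([^']+)' ela=\s*(\d+) "
                 r"p1=(\d+) p2=(\d+) p3=(\d+)", line)
    cursor, nam = int(m.group(1)), m.group(2)
    ela, p1, p2, p3 = (int(m.group(i)) for i in range(3, 7))
    return cursor, nam, ela, (p1, p2, p3)

c, nam, ela, params = parse_wait(
    "WAIT #4: nam='enqueue' ela= 308 p1=1415053318 p2=393290 p3=2947")
print(nam, ela / 100.0)   # enqueue 3.08 -- ela is in hundredths of a second
```

Summing the ela values per event name over a whole trace file is a quick way to build your own "top waits" list.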

Now we are ready to look at our first real statement in our trace file:

PARSING IN CURSOR #3 len=110 dep=0 uid=54 oct=47 lid=54 tim=6185026 hv=2018962105

declare
    l_empno number default 7698;
begin
    update emp set ename = lower(ename) where empno = l_empno;
end;

PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4,tim=6185026
BINDS #3:
Here we see our PL/SQL block of code as we submitted it. We can see from the PARSE record that it parsed very quickly, even though it was not in the library cache yet (MIS=1). We see a BINDS record with nothing after it. That's because this particular block of code has no bind variables. We will revisit this bind record further on down. We'll continue onto the next statement in the trace file where things
get really interesting:

PARSING IN CURSOR #4 len=51 dep=1 uid=54 oct=6 lid=54 tim=6185026 hv=2518517322
update emp set ename = lower(ename) where empno = :b1

PARSE #4:c=0,e=0,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=0,tim=6185026
BINDS #4:
bind 0: dty=2 mxl=22(21) mal=00 scl=00 pre=00 oacflg=03 oacfl2=1 size=24 offset=0
bfp=07425360 bln=22 avl=03 flg=05
WAIT #4: nam='enqueue' ela= 308 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 307 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 307 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 308 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 307 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 308 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 307 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 308 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 307 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 308 p1=1415053318 p2=393290 p3=2947
WAIT #4: nam='enqueue' ela= 307 p1=1415053318 p2=393290 p3=2947

Here, we see our UPDATE statement as Oracle sees it. It is different than it was in the PL/SQL block of code; specifically our reference to l_empno (a variable) is replaced with a bind variable. PL/SQL replaces all references to local variables with bind variables before executing the statement. We can also see from the PARSING IN CURSOR record that the recursive depth (dep) is now one, not zero as it was in
the original PL/SQL block. We can tell that this is SQL executed by some other SQL (or PL/SQL); it was not submitted by the client application to the database. This is a flag we can use to tell us where to look for this SQL. If the dep is not zero, we know the SQL 'lives' in the database, not in a client application. We can use this to our advantage when tuning. SQL that lives in the database can be changed easily without affecting the application. SQL that lives in the client application requires us to find the client, change the code, recompile it, and re-deploy it. The more SQL I can get to live in my database, the better. I can fix it without touching the application itself.

We see a BINDS record in here again, this time with more information. This update statement has a bind value and we can clearly see what it is – the value of the first bind variable is 7,698. Now, if this were a problem query we were looking at (a poorly performing query), we would have almost all of the information we needed to tune it. We have the text of the query exactly. We have the values of all bind
variables (so we can run it with the same inputs). We even have the wait events that slowed us down during the execution. The only thing missing is the query plan, but that is only because we haven't paged down far enough.

The BIND record in the trace file contains this information:
Field Meaning
cursor # - The cursor number.
bind N - The bind position, starting from 0 (0 being the first bind variable).
dty - Datatype (see below).
mxl - Maximum length of the bind variable.
mal - Maximum array length (when using array binds or bulk operations).
scl - Scale.
pre - Precision.
oacflg - Internal flags. If this number is odd, the bind variable might be null (allows nulls).
oacfl2 - Continuation of internal flags.
size - Size of buffer.
offset - Used in piecewise binds.
bfp - Bind Address.
bln - Bind buffer length.
avl - Actual value length.
flg - Internal flags.

value - The character string representation (if possible; it might be a hex dump) of the bind value – this is what we really want!

The dty (data type) number may be decoded using information from the USER_TAB_COLUMNS view. If you select text from all_views where view_name = 'USER_TAB_COLUMNS', you'll see a decode function call that maps the dty numbers to their character string representations.
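For reference, a few of the dty codes you will encounter most often can be decoded by hand. This partial map (my own shorthand, not the full decode from the view text) covers the common scalar types:

```python
# Partial map of common Oracle internal datatype codes (the complete
# list is visible in the decode in the USER_TAB_COLUMNS view text).
DTY_NAMES = {1: 'VARCHAR2', 2: 'NUMBER', 8: 'LONG', 12: 'DATE', 96: 'CHAR'}

def dty_name(dty):
    return DTY_NAMES.get(dty, 'UNKNOWN(%d)' % dty)

print(dty_name(2))   # NUMBER -- the dty=2 bind in our trace is a NUMBER
```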

The really interesting information, what we were after ultimately, is right here now: the wait information. We can clearly see why the update took almost one minute of wall-clock time to complete even though it took no CPU time. The resource we were waiting on was a lock – if you recall from Chapters 3 and 4, where we discussed Locking and Concurrency and Transactions, an enqueue is one of two
locking mechanisms Oracle employs internally to serialize access to shared resources. The trace file shows us we were waiting on a lock, we were not waiting on I/O, we were not waiting on a log file sync, we were not waiting for buffers to become free – we were enqueued on some resource. Going further, we can take the p1 parameter, and decode it to see what type of lock we were waiting on. The process
for doing that:

SQL> create or replace
  function enqueue_decode( l_p1 in number ) return varchar2
  as
      l_str varchar2(25);
  begin
      select chr(bitand(l_p1,-16777216)/16777215) ||
             chr(bitand(l_p1, 16711680)/65535) || ' ' ||
             decode( bitand(l_p1, 65535),
                     0, 'No lock',
                     1, 'No lock',
                     2, 'Row-Share',
                     3, 'Row-Exclusive',
                     4, 'Share',
                     5, 'Share Row-Excl',
                     6, 'Exclusive' )
        into l_str
        from dual;
      return l_str;
  end;
  /
Function created.

SQL> select enqueue_decode( 1415053318 ) from dual;
TX Exclusive

This shows that we were waiting on an exclusive row level lock. The answer to why it took one minute to update this row is clear now. There was another session holding a lock on the row we wanted, and they held it for one minute while we waited for it. What we do about this is application-specific. For example, in the above application, I am doing a 'blind update'. If I do not want the application to block
on the update, I might process it as:

select ename from emp where empno = :bv for update NOWAIT;

update emp set ename = lower(ename) where empno = :bv;

This would avoid the locking issue. At least now we are able to absolutely determine what caused this update to take so long and we are able to diagnose it in a post-mortem fashion. We don't have to 'be there' in order to diagnose this, we just need the relevant trace information.
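The bit arithmetic in enqueue_decode can also be checked outside the database. A minimal Python equivalent (my own sketch, using shifts instead of the divide-by-power-of-two trick) looks like this:

```python
# Same decoding as the PL/SQL function above: the top two bytes of p1
# hold the lock type name, and the low two bytes hold the requested mode.
LOCK_MODES = {0: 'No lock', 1: 'No lock', 2: 'Row-Share', 3: 'Row-Exclusive',
              4: 'Share', 5: 'Share Row-Excl', 6: 'Exclusive'}

def enqueue_decode(p1):
    name = chr((p1 >> 24) & 0xFF) + chr((p1 >> 16) & 0xFF)
    return name + ' ' + LOCK_MODES[p1 & 0xFFFF]

print(enqueue_decode(1415053318))   # TX Exclusive
```

Feeding in the p1 value from the enqueue WAIT records above confirms the TX (transaction) lock held in exclusive mode.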

Finishing up the trace file we see:
PARSING IN CURSOR #5 len=52 dep=0 uid=54 oct=47 lid=54 tim=6190451 hv=1697159799
PARSE #5:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=6190451
bind 0: dty=1 mxl=2000(255) mal=25 scl=00 pre=00 oacflg=43 oacfl2=10 size=2000
bfp=07448dd4 bln=255 avl=00 flg=05
bind 1: dty=2 mxl=22(02) mal=00 scl=00 pre=00 oacflg=01 oacfl2=0 size=24 offset=0
bfp=0741c7e8 bln=22 avl=02 flg=05
WAIT #5: nam='SQL*Net message to client' ela= 0 p1=1111838976 p2=1 p3=0
EXEC #5:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=4,tim=6190451
WAIT #5: nam='SQL*Net message from client' ela= 273 p1=1111838976 p2=1 p3=0

Now, this statement is a surprise. We never executed it ourselves and it isn't recursive SQL (dep=0). It came from the client application. This actually gives us insight into how SQL*PLUS and DBMS_OUTPUT work. I have set serveroutput on in my login.sql file, so whenever I log into SQL*PLUS, DBMS_OUTPUT is enabled by default. After every statement we execute that can generate DBMS_OUTPUT data, SQL*PLUS must call the GET_LINES procedure to get the data, and dump it to the screen (see the
section on DBMS_OUTPUT in Appendix A on Necessary Supplied Packages for many more details on DBMS_OUTPUT). Here, we can see SQL*PLUS making that call. Further, we can see that the first parameter, :LINES, is in fact an array with 25 slots (mal=25). So, now we know that SQL*PLUS will retrieve the DBMS_OUTPUT buffer 25 lines at a time and dump it to the screen. The fact that we can trace the behavior of SQL*PLUS is interesting; in fact, we can trace the behavior of any piece of software that executes against Oracle, to see what it does and how it does it.

Lastly, at the bottom of our trace file we see:
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 0 p1=988 p2=0 p3=0
STAT #4 id=1 cnt=1 pid=0 pos=0 obj=0 op='UPDATE EMP '
STAT #4 id=2 cnt=2 pid=1 pos=1 obj=24767 op='TABLE ACCESS FULL EMP '

The XCTEND (transaction boundary) record is a record of our commit – but, interestingly, we never explicitly committed.

SQL*PLUS did that for us silently when we exited. The values of the XCTEND record are:

rlbk - Rollback flag. If 0, we committed. If 1, we rolled back.
rd_only - Read-only flag. If 1, the transaction was read-only. If 0, changes were made and committed (or rolled back).
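Decoding an XCTEND record mechanically is trivial; a small sketch (field names taken from the record above, the function name is mine):

```python
# Interpret the XCTEND flags described above.
def parse_xctend(line):
    fields = dict(p.split('=') for p in line.replace('XCTEND ', '').split(', '))
    committed = fields['rlbk'] == '0'
    read_only = fields['rd_only'] == '1'
    return committed, read_only

print(parse_xctend('XCTEND rlbk=0, rd_only=0'))   # (True, False): committed, changes were made
```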

Immediately after the XCTEND record, we can see we had another wait event – this time for a log file sync. If we refer to the Oracle Server Reference and look up that wait event, we'll discover that the 988 for the p1 value represents the fact that buffer 988 in the redo log buffer had to be written out, and that's what we were waiting on. The wait was less than a hundredth of a second, as evidenced by ela=0.

The last records we observe in this trace file are the STAT records. These are the actual query plans used at run-time by our SQL. This is the plan we can trust. Not only that, but this plan will have the row counts correctly associated with each step of the plan. These records are only produced after the cursor they represent is closed. Generally, this means the client application must exit in order to see these
records, simply executing an ALTER SESSION SET SQL_TRACE=FALSE will not necessarily produce these records in the trace file. The values in this record are:

cursor # - The cursor number.
id - The line of the explain plan from 1 to number of lines in the plan.
cnt - The number of rows flowing through this phase of the query plan.
pid - Parent ID of this step in the plan. Used to correctly reflect the hierarchy of the plan with indentation.
pos - Position in the explain plan.
obj - Object ID of the referenced object if applicable.
op - The textual description of the operation performed.
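Since pid links each STAT line to its parent, the indented plan can be reconstructed from the records alone. A small sketch (my own formatting, not what tkprof produces):

```python
import re

# Rebuild the plan hierarchy from STAT records using the id/pid links.
def format_stat_plan(lines):
    rows, depth = [], {0: -1}   # depth of the imaginary root (pid=0) is -1
    for line in lines:
        m = re.match(r"STAT #\d+ id=(\d+) cnt=(\d+) pid=(\d+) "
                     r"pos=\d+ obj=\d+ op='([^']+)'", line)
        sid, cnt, pid = int(m.group(1)), int(m.group(2)), int(m.group(3))
        depth[sid] = depth[pid] + 1
        rows.append('%s%s (cnt=%d)' % ('  ' * depth[sid], m.group(4).strip(), cnt))
    return rows

plan = format_stat_plan([
    "STAT #4 id=1 cnt=1 pid=0 pos=0 obj=0 op='UPDATE EMP '",
    "STAT #4 id=2 cnt=2 pid=1 pos=1 obj=24767 op='TABLE ACCESS FULL EMP '",
])
print('\n'.join(plan))
```

Run against the two STAT records above, this prints the UPDATE step with the full scan of EMP indented beneath it, each annotated with its actual row count.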

There are only two other record types that we should find in a SQL_TRACE trace file. They represent errors encountered during the execution of a query. The errors will either be:

a) PARSE Errors – The SQL statement was not valid SQL.
b) Run-time errors – such as duplicate value on index, out of space, and so on.

I have used the fact that these errors are recorded to the trace file many times in problem solving. If you are using some off-the-shelf application, a third party tool, or even many Oracle commands and you get back a less than helpful error message, it may be very useful to run the command with SQL_TRACE enabled and see what happens. In many cases, the root cause of the error can be gleaned from the trace file since all of the SQL performed on your behalf is recorded there.
In order to demonstrate what these records look like, I executed the following SQL:

SQL> create table t ( x int primary key );
Table created.

SQL> alter session set sql_trace=true;
Session altered.

SQL> select * from;
select * from
ERROR at line 1:
ORA-00903: invalid table name
SQL> insert into t values ( 1 );
1 row created.
SQL> insert into t values ( 1 );
insert into t values ( 1 )
ERROR at line 1:
ORA-00001: unique constraint (TKYTE.SYS_C002207) violated
SQL> exit

Upon reviewing the trace file I discovered:
PARSE ERROR #3:len=15 dep=0 uid=69 oct=3 lid=69 tim=7160573 err=903
select * from
PARSING IN CURSOR #3 len=27 dep=0 uid=69 oct=2 lid=69 tim=7161010 hv=1601248092
insert into t values ( 1 )
PARSE #3:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=7161010
EXEC #3:c=1,e=9,p=0,cr=9,cu=7,mis=0,r=0,dep=0,og=4,tim=7161019
ERROR #3:err=1 tim=7161019

As you can see, finding the problem SQL (the root cause of the error) is trivial with this method; we can clearly see what went wrong. This is extremely useful if you are debugging an error that is happening deep inside of a stored procedure, for example. I've seen more than one case where an error occurs in a deeply nested stored procedure, and someone has a WHEN OTHERS exception handler that catches and
ignores all exceptions. It is my opinion that the WHEN OTHERS exception handler should never be used, and all applications that use it and do not re-raise the error immediately should be erased immediately – they are bugs waiting to happen. In any case, the error happens but is caught and ignored, and no one is ever notified of it. The procedure appears to have worked, but it did not actually work. Here, a simple SQL_TRACE will show us what the procedure actually does, and we can see that there was an error. Then all we need do is figure out why in the world it is being ignored. I also use this when commands return less than useful error messages. For example, if a snapshot (materialized view) refresh fails with ORA-00942: table or view does not exist, using SQL_TRACE will be extremely useful.

You might not be aware of all of the SQL that is executed on your behalf to refresh a materialized view, and the number of tables that are 'touched'. Using SQL_TRACE you'll actually be able to figure out which table or view does not exist, and put the correct permissions in place.
The format of the PARSE ERROR record is:

len - Length of the SQL statement
dep - Recursive depth of the SQL statement
uid - Parsing schema (may be different from the privilege schema)
oct - Oracle command type
lid - Privilege schema ID, whose privilege set the statement is actually executing under
tim - A timer
err - The ORA error code. If you execute:

SQL> exec dbms_output.put_line( sqlerrm( -903 ) );
ORA-00903: invalid table name

you can get the text of the error.
The contents of the ERROR record are simply:
Field Meaning
cursor # The cursor number
err - The ORA error code
tim - A timer
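Both error record types carry the err= field, so scanning a whole trace file for errors is a one-liner. A hedged sketch (my own helper, not an Oracle tool):

```python
import re

# Pull the ORA error code out of a PARSE ERROR or ERROR record, if any.
def trace_error_code(line):
    m = re.search(r'\berr=(\d+)', line)
    return int(m.group(1)) if m else None

print(trace_error_code(
    'PARSE ERROR #3:len=15 dep=0 uid=69 oct=3 lid=69 tim=7160573 err=903'))  # 903
print(trace_error_code('ERROR #3:err=1 tim=7161019'))                        # 1
```

Filtering a trace file through this and keeping the non-None results gives you every ORA error the traced session hit, including those swallowed by WHEN OTHERS handlers.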



Creating StatsPack

The spcreate.sql script creates the PERFSTAT user and the tables that hold the StatsPack data. It will prompt for the password to be set for the PERFSTAT user, the default tablespace where the StatsPack data is to be stored, and the temporary tablespace.

StatsPack can be managed through the STATSPACK package.

Describe the STATSPACK package in order to view the procedures and functions present in it:

> desc statspack;

The SNAP procedure of STATSPACK is used to take a snapshot of the current statistics:

> exec statspack.snap;

select * from stats$level_description;

SNAP_LEVEL DESCRIPTION
---------- -----------------------------------------------------------
0 This level captures general statistics, including rollback
segment, row cache, SGA, system events, background events,
session events, system statistics, wait statistics, lock
statistics, and Latch information

5 This level includes capturing high resource usage SQL
Statements, along with all data captured by lower levels

6 This level includes capturing SQL plan and SQL plan usage
information for high resource usage SQL Statements, along
with all data captured by lower levels

7 This level captures segment level statistics, including
logical and physical reads, row lock, itl and buffer busy
waits, along with all data captured by lower levels

10 This level includes capturing Child Latch statistics, along
with all data captured by lower levels

The MODIFY_STATSPACK_PARAMETER procedure is used to modify the settings of StatsPack, such as the snapshot level:

> exec statspack.modify_statspack_parameter (i_snap_level => 7);

To automate the collection of StatsPack snapshots, we need to run spauto.sql, which is under ORACLE_HOME/rdbms/admin. By default the job will be scheduled to run every hour. If we want to change the interval, modify the spauto.sql file accordingly and then run the script.

> @?/rdbms/admin/spauto.sql;

To create the StatsPack report, run the spreport.sql script, which will ask for the start and end snap IDs in order to generate the statistics report for that time period.

> @?/rdbms/admin/spreport.sql

Snapshots will be generated every hour (or on whatever schedule we set) and will keep filling your tablespace. In order to free up space in your tablespace, you can retain the last 10 days (or a maximum of one month) of data and purge the rest of the snapshots.

You can use the sppurge.sql script to accomplish this task. It will ask for the start and end snap IDs to be purged.

> @?/rdbms/admin/sppurge.sql;

To view this job you can use the dba_jobs view.

SQL> select job, log_user, priv_user, to_char(last_date,'DD-MON-YY HH24:MI:SS'), to_char(next_date,'DD-MON-YY HH24:MI:SS') from dba_jobs;


---------- ------------------------------ ------------------------------ ------------------ ------------------
1 SYS SYS 25-NOV-10 10:00:04 25-NOV-10 11:00:00

To remove the StatsPack snap job, execute the following:

SQL> exec dbms_job.remove(1);

PL/SQL procedure successfully completed.

SQL> select job, log_user, priv_user, to_char(last_date,'DD-MON-YY HH24:MI:SS'), to_char(next_date,'DD-MON-YY HH24:MI:SS') from dba_jobs;


no rows selected

To drop all of the objects and the PERFSTAT user, execute spdrop.sql:

> @?/rdbms/admin/spdrop.sql

StatsPack is designed to be installed when connected as INTERNAL or, more appropriately, as SYSDBA (CONNECT SYS/CHANGE_ON_INSTALL AS SYSDBA), as it will execute a CONNECT INTERNAL. In order to install it, you must be able to perform that operation. In many installations, this will be a task that you must ask the DBA or administrators to perform.

Once you have the ability to connect as INTERNAL, installing StatsPack is trivial. You simply run statscre.sql in 8.1.6 or spcreate.sql in 8.1.7. These will be found in
[ORACLE_HOME]\rdbms\admin when connected as INTERNAL via SQL*PLUS. It would look
something like this:

C:\oracle\RDBMS\ADMIN>sqlplus internal
SQL*PLUS: Release - Production on Sun Mar 18 11:52:32 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Connected to:
Oracle8i Enterprise Edition Release - Production
With the Partitioning option
JServer Release - Production

SQL> @statscre
... Installing Required Packages
You'll need to know three pieces of information before running the statscre.sql script. They are:

a) The default tablespace for the user PERFSTAT that will be created.
b) The temporary tablespace for that user.
c) The name of the tablespace you would like the StatsPack objects created in. You will not be prompted for this in 8.1.7, only in 8.1.6. This tablespace needs to have sufficient space for about 60 extents (so the size you need depends on your default initial extent size).

The script will prompt you for this information as it executes. In the event you make a typo or inadvertently cancel the installation, you should use spdrop.sql (8.1.7 and up) or statsdrp.sql
(8.1.6 and before) to remove the user and installed views prior to attempting another install of StatsPack.

The StatsPack installation will create three .lis files (the names of which will be displayed to you during the installation). You should review these for any possible errors that might have occurred. They should install cleanly however, as long as you supplied valid tablespace names (and didn't already have a user PERFSTAT).

Now that StatsPack is installed, all we need to do is collect at least two points of data. The simplest way to do this is to use the STATSPACK package now owned by PERFSTAT as follows:

SQL> exec statspack.snap
PL/SQL procedure successfully completed.

Now, we just need to wait a while, let the system run 'normally', and then take another snapshot. Once we have two data points collected, running a report is just as easy. The script statsrep.sql (8.1.6) or spreport.sql (8.1.7) is available to be run for this purpose. It is a SQL*PLUS script designed to be run when connected as PERFSTAT (whose password by default is PERFSTAT – this should be immediately changed after installation!). The report format between version 8.1.6 and 8.1.7 changed
slightly, and as I prefer the 8.1.7 format to the 8.1.6 format, this is the report we will run. To run it, we will simply execute:

SQL> @spreport

DB Id DB Name Inst Num Instance
----------- ------------ -------- ------------
4080044148 ORA8I 1 ora8i

Completed Snapshots
Snap Snap

Instance DB Name Id Snap Started Level Comment
------------ ------------ ----- ----------------- ----- ----------------------
ora8i ORA8I 1 18 Mar 2001 12:44 10
2 18 Mar 2001 12:47 10

Specify the Begin and End Snapshot Ids
Enter value for begin_snap:

We are presented with a list of data points that we have collected, and will be asked to pick any two to compare. Then a default report name will be generated and we are prompted to accept it or to supply a new name. The report is then generated. The following is the 8.1.7 version of a StatsPack report section-by-section, with comments describing what to look for, and how to interpret the results.

STATSPACK report for
DB Name DB Id Instance Inst Num Release OPS Host
------------ ----------- ------------ -------- ----------- --- ------------
ORA8I 4080044148 ora8i 1 NO aria

Snap Id Snap Time Sessions
------- ------------------ --------
Begin Snap: 1 18-Mar-01 12:44:41 22
End Snap: 3 18-Mar-01 12:57:23 22
Elapsed: 12.70 (mins)

Cache Sizes
db_block_buffers: 16384 log_buffer: 512000
db_block_size: 8192 shared_pool_size: 102400000

The first part of the report is purely informational. It shows what database this report was generated against, including the DB Name and DB Id. These should be unique in your environment. The Instance variable is the Oracle SID of the database. These three bits of data should help you identify exactly which database this report was generated from. That was one of the issues with the old BSTAT/ESTAT reports – they did not show this identifying information. More than once, I was given a
report to critique, but it turned out the report wasn't run on the server with the problem. This should never happen again. The other information here is the timing information; when were these data points collected and how far apart they were. It is surprising to many that the data points do not have to be far apart from each other – the above report covers a 13 minute window. They only have to span a normal
range of activity. A StatsPack report generated with a 15 minute window is as valid as one with a one-hour window (or an even larger timeframe). In fact, as the window gets larger, it may be more difficult to come to definitive conclusions given the raw numeric data. Lastly, in this section, we see some high-level configuration information for the server. The main components of the SGA are visible for review:

Load Profile
Per Second Per Transaction
--------------- ---------------
Redo size: 5,982.09 13,446.47
Logical reads: 1,664.37 3,741.15
Block changes: 17.83 40.09
Physical reads: 15.25 34.29
Physical writes: 5.66 12.73
User calls: 3.28 7.37
Parses: 16.44 36.96
Hard parses: 0.17 0.37
Sorts: 2.95 6.64
Logons: 0.14 0.32
Executes: 30.23 67.95
Transactions: 0.44

This section shows a great deal of information, in a very small amount of space. We can see how much REDO is generated on average every second and for every transaction. Here, I can see that I generate about 5 to 6 KB of redo per second. My average transaction generates just 13 KB of redo. The next bit of information has to do with logical and physical I/O. I can see here that about 1 percent of my logical
reads resulted in physical I/O – that is pretty good. I can also see that on average, my transactions perform almost 4,000 logical reads. Whether that is high or not depends on the type of system you have.

In my case, there were some large background jobs executing, so a high read count is acceptable. Now for the really important information: my parse-related statistics. Here I can see that I do about 16 parses per second and about 0.17 of those are hard parses (SQL that never existed before). Every six seconds or so, my system is parsing some bit of SQL for the very first time. That is not bad. However, I
would prefer a count of zero in this column in a finely tuned system that has been running for a couple of days. All SQL should be in the shared pool after some point in time.

% Blocks changed per Read: 1.07 Recursive Call %: 97.39
Rollback per transaction %: 0.29 Rows per Sort: 151.30

The next section in the above shows us some interesting numbers. The % Blocks Changed per Read shows us that in this case, 99 percent of the logical reads we do are for blocks that are only read, not updated. This system updates only about 1 percent of the blocks retrieved. The Recursive Call % is very high – over 97 percent. This does not mean that 97 percent of the SQL executed on my system is
due to 'space management' or parsing. If you recall from our analysis of the raw trace file earlier from SQL_TRACE, SQL executed from PL/SQL is considered 'recursive SQL'. On my system, virtually all work is performed using PL/SQL, other than mod_plsql (an Apache web server module) and an occasional background job, everything is written in PL/SQL on my system. I would be surprised if the Recursive Call % were low in this case.

The percentage of transactions that rolled back (Rollback per transaction %) is very low, and that is a good thing. Rolling back is extremely expensive. First, we did the work, which was expensive. Then, we undid the work and, again, this is expensive. We did a lot of work for nothing. If you find that most of your transactions roll back, you are spending too much time doing work and then immediately undoing it. You should investigate why you roll back so much, and how you can rework your application to avoid that. On the system reported on, one out of every 345 transactions resulted in a rollback – this is acceptable.

Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.08 In-memory Sort %: 99.60
Library Hit %: 99.46 Soft Parse %: 98.99
Execute to Parse %: 45.61 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 87.88 % Non-Parse CPU: 100.00

Next, we have the Instance Efficiency Percentages. Here, they state the target is 100 percent, and this is mostly true. I think the one exception to that is the Execute to Parse ratio. This is a measure of how many times a statement was executed as opposed to being parsed. In a system where we parse, and then execute the statement, and never execute it again in the same session, this ratio will be zero. The above shows that for every one parse, I had about 1.8 executes (almost a two to one ratio).
It will depend on the nature of the system as to whether this is good or bad. On my particular system,we use mod_plsql to do all of our applications. A session is created, a stored procedure is executed, a web page is formed, and the session is destroyed. Unless we execute the same exact SQL many times in a single stored procedure, our execute to parse ratio will be low. On the other hand, if I had a client server or a stateful connection to the database (perhaps via a servlet interface), I would expect this ratio to be much closer to the target of 100 percent.

I understand however, that given the architecture I am using, achieving an extremely high execute to parse ratio is not something I can do. In my mind, the most important ratios are the parse ratios – they get my attention immediately. The soft
parse ratio is the ratio of how many soft versus hard parses we do. 99 percent of the parses on this system are soft parses (reused from the shared pool). That is good. If we see a low soft parse ratio, this would be indicative of a system that did not use bind variables. I would expect to see a very high ratio in this field
regardless of tools or techniques used. A low number means you are wasting resources and introducing contention. The next number to look at is the Parse CPU to Parse Elapsd. Here, I show about 88 percent. This is a little low; I should work on that. In this case for every CPU second spent parsing we spent about 1.13 seconds wall clock time. This means we spent some time waiting for a resource – if the
ratio was 100 percent, it would imply CPU time was equal to elapsed time and we processed without any waits. Lastly, when we look at Non-Parse CPU, this is a comparison of time spent doing real work versus time spent parsing queries. The report computes this ratio with round(100*(1-PARSE_CPU/TOT_CPU), 2). If the TOT_CPU is very high compared to the PARSE_CPU (as it should be), this ratio will be very near 100 percent, as mine is. This is good, and indicates most of the work
performed by the computer was work done to execute the queries, and not to parse them.
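The ratio arithmetic above is easy to verify from the Load Profile numbers. A sketch using the formulas as stated in the text (the function names are mine):

```python
# Instance-efficiency ratios, computed as described above.
def non_parse_cpu_pct(parse_cpu, tot_cpu):
    return round(100 * (1 - parse_cpu / tot_cpu), 2)

def soft_parse_pct(hard_parses, total_parses):
    return round(100 * (1 - hard_parses / total_parses), 2)

def execute_to_parse_pct(parses, executes):
    return round(100 * (1 - parses / executes), 2)

# From the Load Profile: 16.44 parses/sec, 0.17 hard parses/sec, and
# 30.23 executes/sec. These are rounded rates, so the results differ
# slightly from the report's 98.99 and 45.61.
print(soft_parse_pct(0.17, 16.44))         # 98.97
print(execute_to_parse_pct(16.44, 30.23))  # 45.62
```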

All in all, in looking at the above section, my recommendation would be to reduce the hard parses even further. There obviously are a couple of statements still not using bind variables somewhere in the system (every six seconds a new query is introduced). This in turn would reduce the overall number of parses done because a hard parse has to perform a lot of recursive SQL itself. By simply removing a
single hard parse call, we'll reduce the number of soft parses we perform as well. Everything else in that section looked acceptable. This first section we just reviewed is my favorite part of the StatsPack report; at a glance, it gives a good overview of the relative 'health' of your system. Now, on to the rest of the report:

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 75.03 75.26
% SQL with executions>1: 79.18 78.72
% Memory for SQL w/exec>1: 74.15 73.33

This little snippet gives us some insight into our shared pool utilization. The details shown above are:

a) Memory Usage – The percentage of the shared pool in use. This number should stabilize in the mid-70 percent to less-than-90 percent range over time. If the percentage is too low, you are wasting memory. If the percentage is too high, you are aging components out of the shared pool; this will cause SQL to be hard parsed if it is executed again. In a right-sized system, your shared pool usage will stay in the 75 percent to less than 90 percent range.

b) SQL with executions>1 – This is a measure of how many SQL statements were found in
the shared pool that have been executed more than once. This number must be considered carefully in a system that tends to run in cycles, where a different set of SQL is executed during one part of the day versus another (for example, OLTP during the day, DSS at night). You'll have a bunch of SQL statements in your shared pool during the observed time that were not executed, only because the processes that would execute them did not run during the period of observation. Only if your system runs the same working set of SQL continuously will this number be near 100 percent. Here I show that almost 80 percent of the SQL in my shared pool was used more than once in the 13 minute observation window. The remaining 20 percent was there already probably – my system just had no cause to execute it.

c) Memory for SQL w/exec>1 – This is a measure of how much of the memory the SQL you
used frequently consumes, compared to the SQL you did not use frequently. This number will in general be very close to the percentage of SQL with executions greater than one, unless you have some queries that take an inordinate amount of memory. The usefulness of this particular value is questionable.

So, in general you would like to see about 75 to 85 percent of the shared pool being utilized over time in a steady state. The percentage of SQL with executions greater than one should be near 100 percent if the time window for the StatsPack report is big enough to cover all of your cycles. This is one statistic that is affected by the duration of time between the observations. You would expect it to increase as the
amount of time between observations increases.
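To make the three percentages concrete, here is a small Python sketch of the arithmetic behind them. The function name and raw inputs (pool size, bytes in use, statement counts, and so on) are hypothetical stand-ins rather than actual StatsPack source columns; the sample numbers are chosen to reproduce the Begin column shown above.

```python
# Hypothetical inputs illustrating how the three shared-pool percentages
# could be derived; these are NOT real StatsPack columns.
def shared_pool_stats(pool_size, pool_used, stmts_total, stmts_reused,
                      mem_total, mem_reused):
    return {
        "Memory Usage %":            round(100 * pool_used / pool_size, 2),
        "% SQL with executions>1":   round(100 * stmts_reused / stmts_total, 2),
        "% Memory for SQL w/exec>1": round(100 * mem_reused / mem_total, 2),
    }

stats = shared_pool_stats(pool_size=100_000_000, pool_used=75_030_000,
                          stmts_total=10_000,   stmts_reused=7_918,
                          mem_total=60_000_000, mem_reused=44_490_000)
print(stats)  # the three Begin-column percentages: 75.03, 79.18, 74.15
```

With these made-up inputs the function reproduces 75.03, 79.18, and 74.15, matching the report's Begin column.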

Now, onto the next section:
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
SQL*Net more data from dblink 1,861 836 35.86
control file parallel write 245 644 27.63
log file sync 150 280 12.01
db file scattered read 1,020 275 11.80
db file sequential read 483 165 7.08
Wait Events for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Here is your 'low hanging fruit', the events that are slowing you down more than anything else. The first step is to look at the Wait Time and see if it is even worth your time to bother tuning anything based on them. For example, I spent 8.36 seconds in a 13 minute time period waiting on data from a dblink. Is that worth my time to investigate and 'fix'? In this case, I would say no: the average wait was about 0.004 seconds. Further, I know I have a background process running that is doing a large operation over a database link; the wait time is actually pretty small, all things considered.
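As a sanity check on that arithmetic: the report's times are in centiseconds, so 836 cs is 8.36 seconds, and dividing by the 1,861 waits gives the average wait per event. A quick sketch in plain Python (nothing Oracle-specific):

```python
def avg_wait_seconds(total_wait_cs, waits):
    # Report times are in centiseconds; convert to seconds, then average.
    return (total_wait_cs / 100.0) / waits

total_seconds = 836 / 100.0              # 8.36 s over the 13-minute window
avg = avg_wait_seconds(836, 1861)        # average wait per occurrence
print(total_seconds, round(avg, 4))      # 8.36 0.0045
```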

So, suppose I did have something that needed attention. Here, what you need to do is first find out what the events mean. For example, if you look up 'log file sync' in the Oracle Reference Manual, you'll discover that it is:

When a user session COMMITs, the session's redo information needs to be flushed to the redo logfile. The user session will post the LGWR to write the log buffer to the redo log file. When the LGWR has finished writing, it will post the user session.
Wait Time: The wait time includes the writing of the log buffer and the post.
Now that I understand what the wait is, I would try to come up with a way to make it 'go away'. In the case of a log file sync, this means tuning LGWR. Basically, to make that wait go away you need to make the disks go faster, generate less redo, reduce contention for the log disks, and so on. Discovering the wait event is one thing; making it go away is another. There are some 200+ events that are timed in Oracle – none of them really have a cut-and-dried 'do this and they go away' solution.
One thing to keep in mind is that you will always be waiting on something. If you remove one roadblock, another will pop up. You will never get this list of large wait events to go away – you'll always have some. If you find that you are always 'tuning to go as fast as possible', you'll never be done.

You can always make the system go one percent faster, but the amount of work and time you have to spend to get that one percent will increase exponentially over time. Tuning is something that must be done with a goal in mind, an end point. If you cannot say 'I am done when X is true', and X is something measurable, you are just wasting time.

The next section in the report:
Wait Events for DB: ORA8I Instance: ora8i Snaps: 1 –3
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
SQL*Net more data from dblin 1,861 0 836 4 5.5
control file parallel write 245 0 644 26 0.7
log file sync 150 0 280 19 0.4
db file scattered read 1,020 0 275 3 3.0

shows all of the wait events that occurred for clients of the database in the window of measurement. In addition to what was available in the Top 5 report, it shows the average wait time in thousandths of a second (the ms column), and the number of times a transaction waited on it. You can use this to identify wait events that are relevant. I should point out that in this listing, there are lots of events you should just ignore. For example, SQL*Net message from client – ignore it in systems where the client has think time. It represents the time the client sat there and didn't ask the database to do anything (on the other hand, if you see this during a data load – then the client is not feeding the database fast enough and it does indicate a problem). In this case, however, this just means the client was connected and didn't make any requests. The report heading notes that idle events are listed last. Everything from SQL*Net message from client on down is an 'idle' event; some process was waiting to be told to do something. Ignore them for the most part.
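The 'ignore idle events' advice can be sketched as a simple filter. The idle-event set below is a small illustrative subset, not Oracle's complete list, and the rows are made-up stand-ins for the listing above:

```python
# A small, illustrative subset of idle events -- not Oracle's full list.
IDLE_EVENTS = {"SQL*Net message from client", "rdbms ipc message",
               "pmon timer", "smon timer"}

def interesting_waits(rows):
    # Keep only the events that represent real work or contention.
    return [(event, cs) for event, cs in rows if event not in IDLE_EVENTS]

rows = [("db file scattered read", 275),
        ("SQL*Net message from client", 15_406),
        ("pmon timer", 76_076)]
print(interesting_waits(rows))  # [('db file scattered read', 275)]
```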
Background Wait Events for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> ordered by wait time desc, waits desc (idle events last)

Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
control file parallel write 245 0 644 26 0.7
control file sequential read 42 0 25 6 0.1
log file parallel write 374 0 14 0 1.1
db file parallel write 24 0 0 0 0.1
LGWR wait for redo copy 3 0 0 0 0.0
rdbms ipc message 1,379 741 564,886 4096 4.1
smon timer 3 3 92,163 ###### 0.0
pmon timer 248 248 76,076 3068 0.7

The above section of the StatsPack report shows the wait events that occurred for the database 'background' processes (DBWR, LGWR, and so on). Once again, idle waits are listed at the bottom of the report and generally should be ignored. These are useful for instance-wide tuning, to find out what the background processes are waiting on. It is easy for you to determine what is holding your session up; we've done that many times in the examples on bind variables and in other places by querying V$SESSION_EVENT. This report snippet shows the wait events for the background processes in much the same way that we showed them for our individual session.

SQL ordered by Gets for DB: ORA8I Instance: ora8i Snaps: 1 -3

-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100

Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
713,388 1 713,388.0 56.2 1907729738

BEGIN sys.sync_users.do_it; END;
485,161 1 485,161.0 38.3 1989876028
239,778 2 119,889.0 18.9 617705294
BEGIN statspack.snap(10); END;
This section of the report shows us our 'TOP' SQL. In this section of the report, the SQL is ordered by Buffer Gets, in other words, how many logical I/Os it performed. As noted in the comment at the top, the buffer gets for a PL/SQL unit include the buffer gets for all SQL executed by the block of code. Therefore, you will frequently see PL/SQL procedures at the top of this list, since the resources used by the individual statements executed from within the stored procedure are summed up.
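The derived columns in this listing are straightforward ratios. Using the statspack.snap row (239,778 buffer gets over 2 executions), and back-computing a window-wide total of roughly 1,269,400 buffer gets from the 56.2 percent figure (an assumption; the report does not print the total), the arithmetic looks like:

```python
# Gets per Exec is simply buffer gets divided by executions.
buffer_gets, executions = 239_778, 2
gets_per_exec = buffer_gets / executions
print(gets_per_exec)  # 119889.0, as in the report

# % Total is a statement's share of all buffer gets in the window. The
# window total below is back-computed from the report and is approximate.
window_total = 1_269_400
pct_total = round(100 * 713_388 / window_total, 1)
print(pct_total)  # 56.2 for the sync_users.do_it block
```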

In this particular example, I can see that there was a PL/SQL routine sync_users.do_it. It hit almost three quarters of a million block buffers in its single execution. Whether that is bad or not isn't clear from this report. The only thing this report can do is report on the facts – it makes no judgements based on those facts. In this case, I know that sync_users is a large batch job that is synchronizing the data dictionaries on two databases, making sure that a user created on one database is created on the other, and that all of the roles are the same and the passwords are in sync. I expect it to be large – as it turns out, this is the job that is waiting on the dblink wait event noticed above as well.

SQL ordered by Reads for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> End Disk Reads Threshold: 1000
Physical Reads Executions Reads per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
8,484 1 8,484.0 73.0 1907729738

BEGIN sys.sync_users.do_it; END;

2,810 2 1,405.0 24.2 617705294

BEGIN statspack.snap(10); END;

This next section is very similar to the one above, but instead of reporting on logical I/O, it is reporting on physical I/O. This is showing me the SQL that incurs the most read activity on the system, the physical I/O. These are the queries and processes you might want to look at if you are finding yourself to be I/O bound. The sync_users routine might be in need of a little tuning – it is by far the largest
consumer of disk resources on this particular system.

SQL ordered by Executions for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> End Executions Threshold: 100
Executions Rows Processed Rows per Exec Hash Value
------------ ---------------- ---------------- ------------
2,583 0 0.0 4044433098


This portion of the 'TOP' SQL report shows us the SQL that was executed the most during this period of time. This might be useful in order to isolate some very frequently executed queries to see if there is some way to change the logic, to avoid having to execute them so frequently. Perhaps a query is being
executed inside a loop and it could be executed once outside the loop instead – simple algorithm changes might be made to reduce the number of times you have to execute that query. Even if they run blindingly fast, anything that is executed millions of times will start to eat up a large chunk of time.

SQL ordered by Version Count for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> End Version Count Threshold: 20
Count Executions Hash Value
-------- ------------ ------------
21 415 451919557


21 110 1510890808


This report shows the SQL ordered by how many versions of the SQL appear in the shared pool. There can be many reasons why there is more than one version of the same exact SQL statement in the shared pool. Some of the major causes for this are:

a) Different users submit the same SQL, but different tables will actually be accessed.

b) The same query is executed with a radically different environment, for example the optimizer goal is different.

c) Fine Grained Access Control is being used to rewrite the query. Each version in the shared pool is really a very different query.

d) The client uses different data types/lengths on the bind variables – one program binds a character string of length 10 and another binds a character string of length 20 – this will result in a new version of the SQL statement as well.

The following example shows you how you can get many versions of the same SQL query in the shared pool. We start by flushing the shared pool to remove all statements, and then we'll get three versions of the same query loaded in there:

SQL> connect tkyte/tkyte
SQL> alter system flush shared_pool;
System altered.
SQL> select * from t where x = 5;
no rows selected
SQL> alter session set optimizer_goal=first_rows;
Session altered.
SQL> select * from t where x = 5;
no rows selected
SQL> connect scott/tiger
SQL> select * from t where x = 5;
no rows selected
SQL> connect tkyte/tkyte
SQL> select sql_text, version_count
from v$sqlarea
where sql_text like 'select * from t where x = 5%';

SQL_TEXT             VERSION_COUNT
-------------------- -------------
select * from t wher             3
e x = 5

SQL> select loaded_versions, optimizer_mode,
parsing_user_id, parsing_schema_id
from v$sql
where sql_text like 'select * from t where x = 5%';

LOADED_VERSIONS OPTIMIZER_MODE PARSING_USER_ID PARSING_SCHEMA_ID
--------------- ---------- --------------- -----------------
1 CHOOSE 69 69
1 FIRST_ROWS 69 69
1 CHOOSE 54 54

This shows why we have multiple versions in the shared pool. The first two are in there because even though the same user ID parsed them, they were parsed in different environments. The first time the optimizer goal was CHOOSE, the next time it was FIRST_ROWS. Since a different optimizer mode can result in a different plan, we need two versions of that query. The last row is in there because it is a totally different query, even though its text is the same. This query selects from SCOTT.T, not TKYTE.T – it is a totally separate query altogether.

A high version count should be avoided for the same reason that you should use bind variables and avoid soft parsing queries whenever possible; you are doing more work than you need to. Sometimes multiple versions of a query must be kept, especially when it is caused by different accounts executing the same SQL against different tables, as was the case above with TKYTE.T and SCOTT.T. The other case, where you have different environments causing multiple versions, should typically be avoided whenever possible.

In my case above, the 21 versions were caused by 21 different user accounts parsing the same query against different tables.

Instance Activity Stats for DB: ORA8I Instance: ora8i Snaps: 1 -3
Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session 14,196,226 18,630.2 41,876.8
parse count (hard) 127 0.2 0.4
parse count (total) 12,530 16.4 37.0
parse time cpu 203 0.3 0.6
parse time elapsed 231 0.3 0.7
sorts (memory) 2,242 2.9 6.6
This part of the report, Instance Activity Stats, contains lots of very detailed numbers. We've already seen many of these numbers – they were used to compute the ratios and statistics at the beginning of the report. For example, looking at parse count (hard), and (total) we find:

SQL> select round( 100 *(1-127/12530),2 ) from dual;

ROUND(100*(1-127/12530),2)
--------------------------
                     98.99

which is exactly our Soft Parse % from the beginning of the report. This detailed data was used to compute many of the ratios and summaries above.
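The same computation in plain Python, using the two Instance Activity rows above:

```python
# Soft Parse % = 100 * (1 - hard parses / total parses), taken straight
# from the 'parse count (hard)' and 'parse count (total)' rows.
hard_parses = 127
total_parses = 12_530
soft_parse_pct = round(100 * (1 - hard_parses / total_parses), 2)
print(soft_parse_pct)  # 98.99
```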

Tablespace IO Stats for DB: ORA8I Instance: ora8i Snaps: 1 -3
->ordered by IOs (Reads + Writes) desc
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
TEMP 1,221 2 0.0 2.3 628 1 0 0.0

File IO Stats for DB: ORA8I Instance: ora8i Snaps: 1 –3
->ordered by Tablespace, File

Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
DRSYS /d02/oradata/ora8i/drsys01.dbf
14 0 7.9 2.4 0 0 0

The above two reports are I/O-oriented. In general, you are looking for an even distribution of reads and writes across devices here. You want to find out what files might be 'hot'. Once your DBA understands how the data is read and written, they may be able to get some performance gain by distributing I/O across disks more evenly.

Buffer Pool Statistics for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> Pools D: default pool, K: keep pool, R: recycle pool

Free Write Buffer
Buffer Consistent Physical Physical Buffer Complete Busy
P Gets Gets Reads Writes Waits Waits Waits
- ----------- ------------- ----------- ---------- ------- -------- ----------
D 9,183 721,865 7,586 118 0 0 0

If we were using the multiple buffer pool feature, the above would show us the breakdown of usage by buffer pool. As it is, this is just reiterating information we saw at the beginning of the report.

Rollback Segment Stats for DB: ORA8I Instance: ora8i Snaps: 1 -3
->A high value for "Pct Waits" suggests more rollback segments may be required
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
------ ------------ ------- --------------- -------- -------- --------
0 5.0 0.00 0 0 0 0
1 866.0 0.00 447,312 1 0 0

Rollback Segment Storage for DB: ORA8I Instance: ora8i Snaps: 1 -3
->Optimal Size should be larger than Avg Active
RBS No Segment Size Avg Active Optimal Size Maximum Size
------ --------------- --------------- --------------- ---------------
0 663,552 7,372 663,552
1 26,206,208 526,774 26,206,208
2 26,206,208 649,805 26,206,208
The above shows rollback segment activity. Again, you are looking for an even distribution across rollback segments (with the exception of the SYSTEM rollback segment of course). Also, the report headings have the most useful information to keep in mind while inspecting this section of the report – especially the advice about Optimal being larger than Avg Active, if you use an optimal setting at all (the report above shows this database does not use optimal on the rollback segment size). As this is mostly DBA-related activity (the I/O and rollback segment information), we'll continue onto the next section:

Latch Activity for DB: ORA8I Instance: ora8i Snaps: 1 -3

->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Pct
Get Get Slps NoWait NoWait
Latch Name Requests Miss /Miss Requests Miss
----------------------------- -------------- ------ ------ ------------ ------
Active checkpoint queue latch 271 0.0 0

virtual circuit queues 37 0.0 0
Latch Sleep breakdown for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
library cache 202,907 82 12 72/8/2/0/0
cache buffers chains 2,082,767 26 1 25/1/0/0/0
Latch Miss Sources for DB: ORA8I Instance: ora8i Snaps: 1 -3
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- -------
cache buffers chains kcbgtcr: kslbegin 0 1 1
library cache kglic 0 7 0
library cache kglhdgn: child: 0 3 1
library cache kglget: child: KGLDSBYD 0 2 0
Child Latch Statistics DB: ORA8I Instance: ora8i Snaps: 1 -3
-> only latches with sleeps are shown
-> ordered by name, gets desc
Child Get
Latch Name Num Requests Misses Sleeps
---------------------- ------ -------------- ----------- ----------
Spin &
Sleeps 1->4
cache buffers chains 930 93,800 21 1
library cache 2 48,412 34 6
library cache 1 42,069 10 3
library cache 5 37,334 10 2
library cache 4 36,007 13 1

Latches are lightweight serialization devices used by Oracle. They are always either 'gotten' or not; they are not like enqueues, where a session requests a
lock and is put to sleep until it is available. With a latch, the requestor is told immediately 'you got it' or 'you didn't get it'. The requestor then 'spins' (consuming CPU) for a bit and tries again. If that doesn't work, it goes to 'sleep' for a bit, and tries again. The reports above show you this activity. For example, I
can see that the library cache latch was missed 82 times out of 202,907 attempts. Further, 72 of the 82 attempts were successful the next time they tried, 8 on the second and 2 on the third. The ratio of Gets to Misses is very near 100 percent in this system (almost 100 percent of the gets were immediately successful) so there is nothing to do. On a system that does not use bind variables, or parses queries too
frequently, you'll see lots of contention for the library cache latch. One other piece of information I can derive from this latch report above is that about 4.5 percent (93,800/2,082,767) of my cache buffers chains latch requests were for one child latch out of (at least) 930. This probably indicates that I have a hot block – one that many processes are trying to access simultaneously. They all need a latch in order to access this block, and this is resulting in some contention. That would be something to look into. The latch report is useful to identify what latch contention you are experiencing. We will need to go back to the application-level tuning to fix it. Latch contention is a symptom, not a cause of problems. In order to
get rid of the symptom we must determine the cause. Unfortunately, you cannot generate a list of recommendations in the form of 'if you have contention for this latch, you need to do this' (if only it was so easy!). Rather, once you have identified that you in fact do have latch contention, you must go back
to the application design and determine why you are contending for that resource.
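The try/spin/sleep protocol described above can be sketched in a few lines. This is purely a conceptual illustration of the willing-to-wait behavior, not Oracle's actual implementation; the spin count, backoff, and function names are all invented for the sketch.

```python
import time

def get_latch_willing_to_wait(try_acquire, spin_count=2000, max_sleeps=100):
    """try_acquire is a callable returning True when the latch is obtained."""
    sleeps = 0
    while True:
        for _ in range(spin_count):          # spin: retry in a tight CPU loop
            if try_acquire():
                return sleeps                # how many sleeps the get cost us
        sleeps += 1                          # every spin attempt missed: sleep
        if sleeps > max_sleeps:
            raise TimeoutError("possible latch contention problem")
        time.sleep(0.001 * min(sleeps, 4))   # crude backoff, purely illustrative

# A latch that is free on the third try is acquired with zero sleeps:
attempts = iter([False, False, True])
print(get_latch_willing_to_wait(lambda: next(attempts)))  # 0
```

In the report above, a library cache get that succeeded only after one sleep would show up as a miss with one sleep, exactly the 72/8/2/0/0 breakdown in the Spin & Sleeps column.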

This is a report on the dictionary cache. I don't like this report too much, since there is not very much I can do about the numbers reported back by it. The dictionary cache is totally controlled by Oracle; we cannot size its components individually. We can only size the shared pool, and as long as the shared pool is sized correctly, the dictionary cache is supposed to take care of itself. As my shared pool is at 75 percent utilized, it is large enough. If my shared pool was 'full' and the hit ratios here were bad, increasing the shared pool would improve the hit ratios.

Library Cache Activity for DB: ORA8I Instance: ora8i Snaps: 1 -3
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 5,018 0.0 5,018 0.0 0 0
CLUSTER 0 0 0 0
INDEX 1 0.0 1 0.0 0 0
OBJECT 0 0 0 0
PIPE 765 0.0 765 0.0 0 0
SQL AREA 1,283 6.9 38,321 0.6 39 0
TABLE/PROCEDURE 3,005 0.3 11,488 0.6 1 0
TRIGGER 21 0.0 21 0.0 0 0

Here we are seeing the breakdown of the library cache hit ratio. On a finely tuned system the Pct Misses will be very near zero. On a development system, or a system where objects are frequently dropped and recreated, some of these ratios will be high, as will the Invalidations column. Sizing the shared pool and ensuring that you minimize hard parses by using bind variables is the way to ensure good ratios in the above.

SGA Memory Summary for DB: ORA8I Instance: ora8i Snaps: 1 -3
SGA regions Size in Bytes
------------------------------ ----------------
Database Buffers 134,217,728
Fixed Size 69,616
Redo Buffers 532,480
Variable Size 130,187,264
sum 265,007,088

This part of the report shows the utilization of the shared pool in some detail. You can see how, over time, memory usage shifts from component to component, some sections giving up memory and others allocating more memory. I've found this section of the report most useful as a way to explain why something was reported in another section. For example, I received a series of StatsPack reports for analysis. They showed fairly constant hard and soft parse counts and then all of a sudden, the hard parses went through the roof for about an hour, then everything went back to normal. Using the reports, I was able to determine that at the same time the hard parse count shot up, the shared pool SQL area memory usage decreased by a large amount, many tens of MB. Curious about this, I asked 'did someone flush the shared pool?' and the answer was 'of course, yes'. It was part of their standard operating procedure; every six hours they flushed the shared pool. Why? No one really knew, they just always did. In fact, a job was set up to do it. Disabling this job fixed the periodic performance issues, which were totally self-induced by flushing the shared pool (and hence flushing all of the plans that they had spent the last six hours accumulating).

init.ora Parameters for DB: ORA8I Instance: ora8i Snaps: 1 -3
End value
Parameter Name Begin value (if different)
----------------------------- ---------------------------- --------------
background_dump_dest /export/home/ora816/admin/ora8i/b
compatible 8.1.0,
control_files /d01/oradata/ora8i/control01.ctl,
core_dump_dest /export/home/ora816/admin/ora8i/c
db_block_buffers 16384

End of Report

The last part of the report is a listing of the init.ora parameters that differ from the default. These are useful in determining why something is occurring, similar to the shared pool memory report above. You can see quickly which parameters have been explicitly set and determine their effects on the system.



V$EVENT_NAME

We've seen the names of many 'events' in this section. We have seen how the event has a name and then up to three parameters p1, p2, and p3. We can either flip to the manuals every time we want to look up the meanings of p1, p2, and p3 for a given event, or we can simply query V$EVENT_NAME. For example, two of the events we've encountered in this chapter are latch free and enqueue. Using this view:

SQL> select * from v$event_name where name = 'latch free';

EVENT# NAME                 PARAMETER1 PARAMETER2 PARAMETER3
---------- -------------------- ---------- ---------- ----------
2 latch free address number tries

SQL> select * from v$event_name where name = 'enqueue';

EVENT# NAME                 PARAMETER1 PARAMETER2 PARAMETER3
---------- -------------------- ---------- ---------- ----------
11 enqueue name|mode id1 id2


V$FILESTAT and V$TEMPSTAT

V$FILESTAT and V$TEMPSTAT can give you a quick glimpse at the I/O done on your system, and how long Oracle has spent reading and writing any given file. You can either use StatsPack to get snapshots of this usage, or you can take a quick snapshot of this table yourself, wait a while, and then compare the differences.


V$LOCK

This is a view we've used a couple of times already in Chapter 3 on Locking and Concurrency. This view is used to tell you who has what locked. Remember, Oracle does not store row level locks externally from the data itself, so don't go looking for them in here. In this view, you'll be able to see who has TM (DML Enqueue) locks on tables, so you'll be able to tell that session 'x,y' has some rows locked in a
given table but you cannot determine which rows they have locked.


V$MYSTAT

This view contains statistics for your session only. This is very useful to diagnose how your session is doing. The schema that creates this view must have direct access to the V$STATNAME and V$MYSTAT objects, for example:

sql> grant select on v_$statname to tkyte;
Grant succeeded.

sql> grant select on v_$mystat to tkyte;
Grant succeeded.

Notice that we used V_$STATNAME and not V$STATNAME. This is because V$STATNAME is really just a synonym for the underlying view V_$STATNAME. V$MYSTAT contains the statistic number, an internal code, not the name of the statistic that is being tracked. I typically install a view like this:

SQL> create or replace view my_stats
as
select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#;

View created.

NAME                           VALUE
------------------------------ ----------
logons cumulative 1
logons current 1
opened cursors cumulative 160
opened cursors current 1

in my systems to make it easier to query. Once we have that set up, you can create queries that give you StatsPack-like information for your session. For example, we can compute the all-important Soft Parse Ratio like this:

SQL> select round(100 *
(1-max(decode(name,'parse count (hard)',value,null))/
max(decode(name,'parse count (total)',value,null))), 2
) "Soft Parse Ratio"
from my_stats;

Soft Parse Ratio

If you create a set of queries like this, and put them in a 'logoff' trigger or embed them directly in your application, you can monitor the performance of your individual applications to see how many commits they perform, how many rollbacks, and so on by session and application.


V$OPEN_CURSOR

This view contains a list of every open cursor for all sessions. This is very useful for tracking down 'cursor leaks' and to see what SQL your session has been executing. Oracle will cache cursors even after you explicitly close them so don't be surprised to see cursors in there that you thought you closed (you might have). For example, in the same SQL*PLUS session I was using to compute the Soft Parse
Ratio above I found:

SQL> select * from v$open_cursor
where sid = ( select sid from v$mystat where rownum = 1 );

SADDR    SID USER_NAME ADDRESS  HASH_VALUE SQL_TEXT
-------- --- --------- -------- ---------- ------------------------------
8C1706A0 92 OPS$TKYTE 8AD6BB54 130268528 select lower(user) || '@' ||

As you can see, there are a number of cursors apparently open. However:

SQL> select * from my_stats where name = 'opened cursors current';

NAME                           VALUE
------------------------------ ----------
opened cursors current 1

I really only have one cursor truly opened (and that is actually the cursor used to query how many open cursors I have). Oracle is caching the other cursors in the hope that I will execute them again.


V$PARAMETER

V$PARAMETER is useful to find various settings relevant to tuning, such as the block size, the sort area size, and so on. Its relevance to tuning is that it contains all of the init.ora parameter values, and many of these will have a bearing on our performance.


V$SESSION

V$SESSION contains a row for every session in the database. As with V$STATNAME shown earlier, you will need to be granted permissions by your DBA to use this view:

SQL> grant select on v_$session to tkyte;
Grant succeeded.

To find your session's specific row, you can simply query:
SQL> select * from v$session
where sid = ( select sid from v$mystat where rownum = 1 )

I usually use this view to see what else is going on in the database. For example, I frequently use a script called showsql that shows me a listing of every session, its status (active or not), the module, action, and client_info settings, and finally for active sessions, what SQL they happen to be running.

The MODULE, ACTION, and CLIENT_INFO fields are settable by you in your applications via a call to the procedures in the DBMS_APPLICATION_INFO package (see the Appendix A on Necessary Supplied Packages for details on this package). I would encourage you to instrument every application you build with calls to this package to set this field. It can really save time trying to figure out what session is running which application – if you put that information into the V$ view, it'll be obvious.

My showsql script is simply:

column username format a15 word_wrapped
column module format a15 word_wrapped
column action format a15 word_wrapped
column client_info format a15 word_wrapped
column status format a10
column sid_serial format a15
set feedback off
set serveroutput on
select username, ''''||sid||','||serial#||'''' sid_serial, status, module,
       action, client_info
  from v$session
 where username is not null
/

column username format a20
column sql_text format a55 word_wrapped
set serveroutput on size 1000000

declare
    procedure p ( p_str in varchar2 )
    is
        l_str long := p_str;
    begin
        loop
            exit when l_str is null;
            dbms_output.put_line( substr( l_str, 1, 250 ) );
            l_str := substr( l_str, 251 );
        end loop;
    end;
begin
    for x in
    ( select username||'('||sid||','||serial#||
             ') ospid = ' || process ||
             ' program = ' || program username,
             to_char(logon_time,' Day HH24:MI') logon_time,
             to_char(sysdate,' Day HH24:MI') current_time,
             sql_address, last_call_et
        from v$session
       where status = 'ACTIVE'
         and rawtohex(sql_address) <> '00'
         and username is not null
       order by last_call_et )
    loop
        dbms_output.put_line( '--------------------' );
        dbms_output.put_line( x.username );
        dbms_output.put_line( x.logon_time || ' ' || x.current_time ||
                              ' last et = ' || x.last_call_et );
        for y in ( select sql_text
                     from v$sqltext_with_newlines
                    where address = x.sql_address
                    order by piece )
        loop
            p( y.sql_text );
        end loop;
    end loop;
end;
/

and it produces output like this:
SQL> @showsql

USERNAME        SID_SERIAL    STATUS     MODULE          ACTION CLIENT_INFO
--------------- ------------- ---------- --------------- ------ -----------
OPS$TKYTE       '30,23483'    ACTIVE     01@ showsql.sql
OPS$TKYTE(30,23483) ospid = 29832 program = sqlplus@aria (TNS V1-V3)
Sunday 20:34 Sunday 20:40 last et = 0
SELECT USERNAME || '(' || SID || ',' || SERIAL# || ') ospid
= ' || PROCESS || ' program = ' || PROGRAM USERNAME,TO_CHAR(
CTXSYS(56,32) ospid = 15610 program = ctxsrv@aria (TNS V1-V3)
Monday 11:52 Sunday 20:40 last et = 20
BEGIN drilist.get_cmd( :sid, :mbox, :pmask, :cmd_type,:disp_
id, :disp_return_address, :disp_user, :disp_command, :disp_arg1,
:disp_arg2, :disp_arg3, :disp_arg4, :disp_arg5, :disp_arg6, :di
sp_arg7, :disp_arg8, :disp_arg9, :disp_arg10 ); :error_stack :=
drue.get_stack; exception when dr_def.textile_error then :error_
stack := drue.get_stack; when others then drue.text_on_stack(sql
errm); :error_stack := drue.get_stack; END;

As you can see, SQL*PLUS has filled in the MODULE column in V$SESSION with the name of the script that is being executed currently. This can be very useful, especially when your applications take advantage of it to show their progress.


V$SESSION_EVENT

We have used this view a number of times already. I use it frequently to see what is causing a procedure or query to 'wait' for a resource. You can get similar information from a trace file with the appropriate event set, but this view makes it easy to capture the current state of your session's events, run a process, and then display the differences. This is much easier than reviewing a trace file for similar information.

This view contains the wait events for all sessions in the system, so it is useful for seeing the major wait events for sessions, other than your own as well. Similar to the way we can turn SQL_TRACE on in another session using DBMS_SYSTEM, we can use V$SESSION_EVENT to see what wait events other sessions are experiencing.


V$SESSION_LONGOPS

We will look at this view in some detail in Appendix A on Necessary Supplied Packages. This view is used by long-running processes such as index creates, backups, restores, and anything the cost-based optimizer thinks will take more than six seconds of execution time, to report their progress. Your applications can use this view as well, via the DBMS_APPLICATION_INFO package. If you have a long-running process or job, it should be instrumented with calls to DBMS_APPLICATION_INFO to report on its current progress. In this fashion, you can monitor your job easily and see if it is really 'stuck' or just taking its time to complete its task.
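A minimal sketch of that instrumentation follows. The loop body and row counts are hypothetical; SET_SESSION_LONGOPS and the SET_SESSION_LONGOPS_NOHINT constant are the documented DBMS_APPLICATION_INFO entry points that populate V$SESSION_LONGOPS:

```sql
declare
    l_rindex    binary_integer := dbms_application_info.set_session_longops_nohint;
    l_slno      binary_integer;
    l_totalwork number := 10;   -- hypothetical: 10 batches to process
begin
    for i in 1 .. l_totalwork
    loop
        -- ... process one batch of work here ...
        dbms_application_info.set_session_longops
        ( rindex    => l_rindex,
          slno      => l_slno,
          op_name   => 'batch load',   -- appears in V$SESSION_LONGOPS.OPNAME
          sofar     => i,
          totalwork => l_totalwork,
          units     => 'batches' );
    end loop;
end;
/
```

While this runs, another session can query V$SESSION_LONGOPS to see SOFAR creep toward TOTALWORK.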


V$SESSION_WAIT

This view shows all sessions currently waiting for something and how long they have been waiting. This is typically used to monitor systems that appear to have 'hung' or are running very slowly.


V$SESSTAT

V$SESSTAT is similar to V$MYSTAT, but it shows statistics for all sessions, not just yours. This is useful for monitoring another session that you have identified, to see how it is doing.


V$SESS_IO

This is useful to see how much I/O your (or any other) session has done. I use this view in a fashion similar to V$MYSTAT and V$SESSION_EVENT. I take a snapshot, run some processes, and then review the 'difference' between the two points. This shows how much I/O the process performs. I could get this information from a TKPROF report as well, but this lets me aggregate the counts up easily. TKPROF shows the I/O performed by statement; this allows me to run an arbitrary set of statements and collect the I/O statistics for them.


V$SQL and V$SQLAREA

These views show us all of the SQL that is parsed and stored in the shared pool. We have already used both of these in various places in this section.

V$SQLAREA is an aggregate view. There will be one row per SQL query in this view. The VERSION_COUNT column tells us how many rows we could expect to find in the V$SQL view for that query. Try to avoid querying this view and use V$SQL instead. Querying V$SQLAREA can be quite expensive, especially on an already busy system.

We can use the V$SQLAREA and V$SQL views to see what SQL our system executes, how many times that SQL is executed and parsed, how many logical and physical I/Os it does, and so on. We can use these views to find the SQL that is missing bind variables as well.
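One rough heuristic for spotting missing bind variables (the 60-character prefix and the count threshold are assumptions for illustration, not fixed rules): statements that share a long common prefix but occur many times usually differ only in literal values.

```sql
select substr( sql_text, 1, 60 ) sql_prefix, count(*)
  from v$sqlarea
 group by substr( sql_text, 1, 60 )
having count(*) > 10
 order by count(*) desc;
```

Prefixes with high counts are good candidates for rewriting with bind variables.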


V$STATNAME

V$STATNAME contains the mapping from statistic number to statistic name. It is useful when joined with V$MYSTAT and V$SESSTAT, to turn the statistic number into a readable name.
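For example, to turn your own session's statistic numbers into names (the LIKE filter on redo statistics is just an illustration):

```sql
select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name like '%redo%';
```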


V$SYSSTAT

Whereas V$SESSTAT keeps statistics by session, V$SYSSTAT keeps them for the instance. As sessions come and go, their entries are added and removed in the V$SESSTAT view. The data in V$SYSSTAT persists until the database is shut down. This is the information StatsPack uses to derive many of its ratios.


V$SYSTEM_EVENT

This view is to wait events what V$SYSSTAT is to statistics. It contains instance-level wait event information. It is what StatsPack uses to derive many of its reports as well.

Ref : Expert Oracle - Thomas Kyte.

--- Also some more points of Nested loop ----

Nested loop (loop over loop)

In this algorithm, an outer loop iterates over a row source containing relatively few rows, and for each of those rows an inner loop probes the second row source.


Select tab1.*, tab2.* from tab1, tab2 where tab1.col1=tab2.col2;

It is processed like:

For i in (select * from tab1) loop
For j in (select * from tab2 where col2=i.col1) loop
Display results;
End loop;
End loop;

The steps involved in a nested loop join are:

a) Identify the outer (driving) table.

b) Assign the inner (driven) table to the outer table.

c) For every row of the outer table, access the rows of the inner table.
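The steps above can be sketched in plain Python. This is an illustration only, not Oracle internals; the table contents are made up, and the dict stands in for the index on the inner table's join column:

```python
# Hypothetical row sources: tab1 is the outer (driving) table,
# tab2 the inner (driven) table, joined on col1 = col2.
tab1 = [{"col1": 10, "name": "A"}, {"col1": 20, "name": "B"}]
tab2 = [{"col2": 10, "val": "x"}, {"col2": 10, "val": "y"},
        {"col2": 30, "val": "z"}]

# Stand-in for the index on tab2(col2): join-key value -> matching rows.
index = {}
for row in tab2:
    index.setdefault(row["col2"], []).append(row)

def nested_loop_join(outer, idx):
    result = []
    for i in outer:                        # outer (driving) loop
        for j in idx.get(i["col1"], []):   # probe inner table per outer row
            result.append((i["name"], j["val"]))
    return result

print(nested_loop_join(tab1, index))  # [('A', 'x'), ('A', 'y')]
```

Without the index, the inner loop would scan all of tab2 once per outer row, which is exactly why an index on the inner table's join column matters so much here.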

In the execution plan it shows up as a NESTED LOOPS operation, with the outer (driving) table as the first child and the access to the inner table (typically via an index) as the second.
When does the optimizer use nested loops?

The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as this table is probed once for every value coming from the outer table.

The optimizer may not use a nested loop when:

The number of rows in both tables is quite high
The inner query always results in the same set of records
The access path of the inner table is independent of the data coming from the outer table

Note: You will see more use of nested loops with the FIRST_ROWS optimizer mode, as it works on the model of showing results to the user instantaneously, as they are fetched. There is no need to cache any data before it is returned to the user. In the case of a hash join such caching is needed, as explained below.

Hash join

Hash joins are used when joining large tables. The optimizer uses the smaller of the two tables to build a hash table in memory, and then scans the larger table, comparing the hash value (of rows from the large table) against this hash table to find the joined rows.

The hash join algorithm is divided into two parts:

Build an in-memory hash table on the smaller of the two tables.
Probe this hash table with the hash value of each row of the second table.

In simpler terms, it works like this:

Build phase

For each row RW1 in small (left/build) table loop
Calculate hash value on RW1 join key
Insert RW1 in appropriate hash bucket.
End loop;

Probe Phase

For each row RW2 in big (right/probe) table loop
Calculate the hash value on RW2 join key
For each row RW1 in the matching hash bucket loop
If RW1 joins with RW2
Return (RW1, RW2)
End loop;
End loop;
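The build and probe phases can be sketched in plain Python. Again this is an illustration with made-up data, not Oracle's actual implementation (the dict's own hashing plays the part of the hash function and buckets):

```python
# Hypothetical inputs joined on the first tuple element:
# 'small' is the build (left) input, 'big' the probe (right) input.
small = [(10, "DEPT10"), (20, "DEPT20")]
big = [(10, "A"), (30, "B"), (10, "C"), (20, "D")]

def hash_join(build_input, probe_input):
    # Build phase: hash the smaller input into buckets keyed on the join key.
    buckets = {}
    for rw1 in build_input:
        buckets.setdefault(rw1[0], []).append(rw1)
    # Probe phase: hash each row of the bigger input and scan only
    # the matching bucket for join partners.
    result = []
    for rw2 in probe_input:
        for rw1 in buckets.get(rw2[0], []):
            result.append((rw1, rw2))
    return result

print(hash_join(small, big))
```

Note that no row can be returned until the build phase is complete, which is why hash join output is not instantaneous.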

When does the optimizer use a hash join?

The optimizer uses a hash join when joining big tables, or when a big fraction of a small table must be joined.

Unlike a nested loop, the output of a hash join is not instantaneous, as the join is blocked until the in-memory hash table has been built.

Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of returning results only after all the rows of at least one of the tables have been hashed into the hash table.

Sort merge join

Sort merge join is used to join two independent data sources. It generally performs better than a nested loop when the volume of data is large, but not as well as a hash join.

It performs better than a hash join when the join columns are already sorted, or when no sorting is required.

The full operation is done in two parts:

Sort join operation

Both inputs are sorted on the join key.

Merge join operation

get first row RW1 from input 1
get first row RW2 from input 2
while not at the end of either input loop
if RW1 joins with RW2
return (RW1, RW2)
get next row RW2 from input 2
else if RW1 < RW2
get next row RW1 from input 1
else
get next row RW2 from input 2
end loop

Note: If the data is already sorted, the first step is avoided.
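The two phases can be sketched in plain Python (an illustration with made-up data, not Oracle internals; this variant scans ahead on the probe side so that duplicate join keys are handled):

```python
# Sort merge join over two hypothetical inputs of (key, value) tuples.
def sort_merge_join(input1, input2, key=lambda r: r[0]):
    a = sorted(input1, key=key)   # sort join: skipped if already sorted
    b = sorted(input2, key=key)
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):          # merge join: single pass
        if key(a[i]) == key(b[j]):
            # emit every b-row sharing this key with the current a-row
            k = j
            while k < len(b) and key(b[k]) == key(a[i]):
                result.append((a[i], b[k]))
                k += 1
            i += 1                             # advance a; keep j in case
        elif key(a[i]) < key(b[j]):            # a has duplicate keys too
            i += 1
        else:
            j += 1
    return result

rows = sort_merge_join([(20, "B"), (10, "A")], [(10, "x"), (20, "y"), (10, "z")])
print(rows)  # [((10, 'A'), (10, 'x')), ((10, 'A'), (10, 'z')), ((20, 'B'), (20, 'y'))]
```

Notice that each input is read only once after sorting, which is the property discussed next.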

An important point to understand is that, unlike a nested loop, where the driven (inner) table is read once per row coming from the outer table, in a sort merge join each of the tables involved is accessed at most once. So it proves to be better than a nested loop when the data set is large.

When does the optimizer use a sort merge join?

a) When the join condition is an inequality condition (like <, <=, >=). This is because a hash join cannot be used for inequality conditions, and if the data set is large, a nested loop is definitely not an option.

b) If sorting is required anyway due to some attribute other than the join, such as an ORDER BY, the optimizer prefers a sort merge join over a hash join, as it is cheaper.

Note: A sort merge join can be seen with both the ALL_ROWS and FIRST_ROWS optimizer goals, because it works on the model of first sorting both data sources and then returning the results. So if the data set is large and you have FIRST_ROWS as the optimizer goal, the optimizer may prefer a sort merge join over a nested loop because of the large data volume. And if you have ALL_ROWS as the optimizer goal and an inequality condition is used in the SQL, the optimizer may use a sort merge join over a hash join.

Posted by Sachin at Friday, March 02, 2007
Labels: CBO, Tuning

Sachin said...
I wanted to put some examples in the post itself, but missed it earlier.

Here it is:

SQL> conn scott/*****
SQL> create table e as select * from emp;

Table created.

SQL> create table d as select * from dept;

Table created.

SQL> create index e_deptno on e(deptno);

Index created.

Gather stats on D as it is:

SQL> exec dbms_stats.gather_table_stats('SCOTT','D')
PL/SQL procedure successfully completed.

Set artificial stats for E:

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'E', numrows => 100, numblks => 100, avgrlen => 124);

PL/SQL procedure successfully completed.

Set artificial stats for E_DEPTNO index

SQL> exec dbms_stats.set_index_stats(ownname => 'SCOTT', indname => 'E_DEPTNO', numrows => 100, numlblks => 10);

PL/SQL procedure successfully completed.

Check out the plan:

A) With a small number of rows (100 in E), you will see the nested loop getting used.

SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno;

Execution Plan
Plan hash value: 3204653704

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 100 | 2200 | 6 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| E | 25 | 225 | 1 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 100 | 2200 | 6 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | D | 4 | 52 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | E_DEPTNO | 33 | | 0 (0)| 00:00:01 |

B) Let us set some more artificial stats to see which plan gets used:

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'E', numrows => 1000000, numblks => 10000, avgrlen => 124);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_index_stats(ownname => 'SCOTT', indname => 'E_DEPTNO', numrows => 1000000, numlblks => 1000);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'D', numrows => 1000000,numblks => 10000 , avgrlen => 124);

PL/SQL procedure successfully completed.

Now we have 1,000,000 rows in both the E and D tables, and the index on E(DEPTNO) reflects the same.

The plan changes!

SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno;

Execution Plan
Plan hash value: 51064926

| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 250G| 5122G| | 3968K(100)| 13:13:45 |
|* 1 | HASH JOIN | | 250G| 5122G| 20M| 3968K(100)| 13:13:45 |
| 2 | TABLE ACCESS FULL| E | 1000K| 8789K| | 2246 (3)| 00:00:27 |
| 3 | TABLE ACCESS FULL| D | 1000K| 12M| | 2227 (2)| 00:00:27 |

C) Now, to test MERGE JOIN, we set a moderate number of rows and add some ordering.

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'E', numrows => 10000, numblks => 1000, avgrlen => 124);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_index_stats(ownname => 'SCOTT', indname => 'E_DEPTNO', numrows => 10000, numlblks => 100);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'D', numrows => 1000, numblks => 100, avgrlen => 124);

PL/SQL procedure successfully completed.

SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno order by e.deptno;

Execution Plan
Plan hash value: 915894881

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2500K| 52M| 167 (26)| 00:00:02 |
| 1 | MERGE JOIN | | 2500K| 52M| 167 (26)| 00:00:02 |
| 2 | TABLE ACCESS BY INDEX ROWID| E | 10000 | 90000 | 102 (1)| 00:00:02 |
| 3 | INDEX FULL SCAN | E_DEPTNO | 10000 | | 100 (0)| 00:00:02 |
|* 4 | SORT JOIN | | 1000 | 13000 | 25 (4)| 00:00:01 |
| 5 | TABLE ACCESS FULL | D | 1000 | 13000 | 24 (0)| 00:00:01 |

Hope these examples help in learning ...

Ref : Sachin blog-- Thanks
