Thursday, November 18, 2010

Tuning Waits and Latches.

Buffer busy waits:

select count(*), p1 filen, p2 blockn, session_id, session_state, blocking_session,
       wait_time, time_waited, current_obj#, current_file#, current_block#, sample_time
from   v$active_session_history
where  event = 'buffer busy waits'
group  by p1, p2, session_id, session_state, blocking_session, wait_time,
       time_waited, current_obj#, current_file#, current_block#, sample_time
/

Wait details

select a.sid, a.event, /*a.wait_time,*/ a.seconds_in_wait, a.state,
       b.object_id, b.session_id, b.oracle_username, b.os_user_name, b.process, b.locked_mode
from   v$session_wait a, v$locked_object b
where  a.sid = b.session_id
and    a.event not in ('SQL*Net message from client', 'rdbms ipc message')
/

File and block details for sessions currently waiting on buffer busy waits

select p1 "File #", p2 "Block #", p3 "Reason Code"
from
v$session_wait
where
event = 'buffer busy waits'
/

Wait Stats

SELECT time,count,class
FROM V$WAITSTAT
ORDER BY time,count
/

To find which object is causing the buffer busy waits

SELECT count(*) NUM_WAITERS, p1 FILE#, p2 BLK#, p3 CLASS
FROM v$session_wait
WHERE event = 'buffer busy waits'
GROUP BY p1, p2, p3
/


SELECT owner,segment_name,segment_type
FROM dba_extents
WHERE file_id=&file
AND &blockid BETWEEN block_id AND block_id + blocks
/


Wait Problem and Fix

Sequential Read
Indicates many index reads—tune the code (especially joins)

Scattered Read
Indicates many full table scans—tune the code; cache small tables

Free Buffer
Increase the DB_CACHE_SIZE; shorten the checkpoint; tune the code

Buffer Busy Segment header
Add freelists or freelist groups

Buffer Busy Data block
separate “hot” data; use reverse key indexes; use
smaller blocks; increase initrans (debatable); reduce block
popularity; make I/O faster

Buffer Busy Undo header
Add rollback segments or areas

Buffer Busy Undo block
Commit more often; use larger rollback segments or areas

Latch Free
Investigate the detail

Enqueue – ST
Use LMTs or preallocate large extents

Enqueue – HW
Preallocate extents above the high water mark

Enqueue – TX4
Increase initrans or use a smaller block size on the table or index

Enqueue – TX6
Fix the code that is making the block unsharable (use v$lock to find)

Enqueue – TM
Index foreign keys; check application locking of tables

Log Buffer Space
Increase the log buffer; use faster disks for the redo logs

Log File Switch
Archive destination slow or full; add more or larger redo logs

Log file sync
Commit more records at a time (in batches); use faster redo log disks; use
raw devices

Write complete waits
Add database writers; checkpoint more often; buffer cache too small

Idle Events
Ignore them.
Following are some common idle events (by type of idle event):
■ Dispatcher timer (shared server idle event)
■ Lock manager wait for remote message (RAC idle event)
■ Pipe get (user process idle event)
■ Pmon timer (background process idle event)
■ PX Idle Wait (parallel query idle event)
■ PX Deq Credit: need buffer (parallel query idle event)
■ PX Deq: Execution Msg (parallel query idle event)
■ Rdbms ipc message (background process idle event)
■ Smon timer (Background process idle event)
■ SQL*Net message from client (user process idle event)
■ Virtual circuit status (shared server idle event)


High Buffer Gets SQL

Because the buffer gets statistic is cumulative, the query with the most buffer
gets may not be the worst-performing query in the database; it may simply have been executed enough times to earn the highest ranking. Compare the cumulative number of buffer gets to the cumulative number of disk reads for the queries; if the numbers are close, evaluate the explain plan for the query to find out why it is performing so many disk reads. If the disk reads are not high but the buffer gets are high and the executions are low, the query is either using a bad index or performing a join in the wrong order. This is also a problem for your system, since you are using a lot of your memory unnecessarily.
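
As a starting point, a query along the following lines can list the heavy buffer-get statements alongside their disk reads and executions. This is a minimal sketch against V$SQLAREA; the 100,000 buffer-get cutoff is only an arbitrary threshold you would adjust for your system:

select buffer_gets, disk_reads, executions,
       round(buffer_gets / decode(executions, 0, 1, executions)) gets_per_exec,
       sql_text
from   v$sqlarea
where  buffer_gets > 100000
order  by buffer_gets desc
/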



Tuning the top 25 buffer get and top 25 physical get queries has
yielded system performance gains of anywhere from 5 percent to
5000+ percent. The SQL section of the STATSPACK report tells you which queries to consider tuning first. The top 10 SQL statements should not be substantially more than 10 percent of your buffer gets or disk reads.

Things to Look for in the Instance Statistics Section

Compare the number of sorts performed on disk to the number performed in memory; increase PGA_AGGREGATE_TARGET (or SORT_AREA_SIZE for earlier versions) to reduce disk sorts.

If physical reads are high, you are probably performing full table scans. If there are a significant number of full table scans of large tables, evaluate the
most-used queries and try to reduce this inefficiency by using indexes.

A large number for consistent gets signals potentially over-indexed or non-selective index use. If dirty buffers inspected is high (over 5 percent) relative to free buffers requested, the DB_CACHE_SIZE may be too small or you may not be checkpointing often enough.

If leaf node splits are high, consider rebuilding indexes
that have grown and fragmented.

If there are many sorts being performed to disk (greater than 1–5
percent of the total number of rows being sorted), you may need to
increase the initialization parameters associated with sorting.

Dirty buffers inspected
The number of dirty (modified) data buffers that were aged out on the LRU list. A value here indicates that the DBWR is not keeping up. If this statistic is greater than 0, consider adding more database writers.


Consistent gets
The number of blocks read from the buffer cache for queries without
the SELECT FOR UPDATE clause. The value for this statistic plus the value of the “db
block gets” statistic constitute what is referred to as logical reads (all reads cached in memory). These are usually the CURRENT version of the block, but they can also be a Consistent Read (CR) version.

DB block gets
The number of blocks read in the buffer cache that were accessed for
INSERT, UPDATE, DELETE, or SELECT FOR UPDATE statements. These are CURRENT
versions of the block. When these are changed, they are reflected in the ‘db block
changes’ value.

Physical reads
The number of data blocks that were not read from the cache. This
could be reads from disks, OS cache, or disk cache to satisfy a SELECT, SELECT FOR
UPDATE, INSERT, UPDATE, or DELETE statement.
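
To see these statistics for the instance, and to derive the buffer cache hit ratio from them, something like the following sketch against V$SYSSTAT can be used. The hit-ratio formula shown is the common approximation 1 - physical reads / (consistent gets + db block gets):

select cg.value consistent_gets, dbg.value db_block_gets, phy.value physical_reads,
       round((1 - phy.value / (cg.value + dbg.value)) * 100, 2) "Buffer Hit Ratio %"
from   v$sysstat cg, v$sysstat dbg, v$sysstat phy
where  cg.name  = 'consistent gets'
and    dbg.name = 'db block gets'
and    phy.name = 'physical reads'
/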

Enqueue timeouts The number of times that an enqueue (lock) was requested and the
specific one requested was not available. If this statistic is above 0, investigate the locking issues.

Free buffers inspected This statistic includes buffers that were skipped
because they were dirty, pinned, or busy. If you subtract those values (‘dirty buffers inspected’ and ‘buffer is pinned count’) from this statistic, the remainder is the number of buffers that could not be reused due to latch contention. A large number here is a good indicator of a buffer cache that is too small.

Parse count The number of times a SQL statement was parsed (total count).

Recursive calls
The number of recursive calls to the database. This type of call occurs
for a few reasons—misses in the Dictionary Cache, dynamic storage extension, and
when PL/SQL statements are executed. Generally, if the number of recursive calls is
more than 4 per process, you should check the Dictionary Cache Hit Ratio, and see if
there are tables or indexes with a large number of extents. Unless there is a significant use of PL/SQL, the ratio of recursive calls to user calls should be 10 percent or less.

Redo size
The size in bytes of the amount of redo information that was written to the
redo logs. This information can be used to help size the redo logs.

Sorts (disk)
The number of sorts that were unable to be performed in memory and
therefore required the creation of a temp segment in the temporary tablespace.
This statistic divided by the sorts (memory) should not be above 5 percent. If it is, you should increase the SORT_AREA_SIZE or PGA_AGGREGATE_TARGET parameter in the init.ora file.

Sorts (memory)
The number of sorts that were performed in memory.

Sorts (rows)
The total number of rows that were sorted.
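
A quick check of the disk-to-memory sort ratio can be sketched from V$SYSSTAT as follows (assuming the standard statistic names):

select d.value "Sorts (disk)", m.value "Sorts (memory)",
       round(d.value / decode(m.value, 0, 1, m.value) * 100, 2) "Disk Sort %"
from   v$sysstat d, v$sysstat m
where  d.name = 'sorts (disk)'
and    m.name = 'sorts (memory)'
/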


The “sorts (disk)” statistic divided by the “sorts (memory)” should
not be above 1–5 percent. If it is, you should increase the PGA_
AGGREGATE_TARGET (or SORT_AREA_SIZE) parameter in the
initialization file (given that physical memory is available to do this).
Remember that the memory allocated for Sort_Area_Size is a per-user
value and PGA_AGGREGATE_TARGET is across all sessions.


■ Table fetch by rowid Indicates the number of rows that were accessed by using a
ROWID. This ROWID came from either an index or a ‘where rowid = ’ statement. A
high number usually indicates a well-tuned application as far as fetching the data goes.
■ Table fetch continued row The number of rows that were fetched that were chained or
migrated.

If chained rows are indicated, the problem needs to be fixed as soon as
possible. Chained rows can cause severe performance degradation when a
large number of rows are chained.
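
To get a feel for how much continued-row fetching is going on, the two statistics can be compared directly; to pin down which table is affected, ANALYZE ... LIST CHAINED ROWS can be used. This sketch assumes the CHAINED_ROWS table has been created with utlchain.sql, and SCOTT.EMP is only a placeholder table name:

select name, value
from   v$sysstat
where  name in ('table fetch by rowid', 'table fetch continued row')
/

analyze table scott.emp list chained rows into chained_rows
/

select count(*) from chained_rows
/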

Tablespace and File I/O Statistics
The next section of the report provides the I/O statistics, first listed by I/Os and then listed by tablespace and by datafile. If the I/O activity is not properly distributed among your files, you may encounter performance bottlenecks during periods of high activity. As a rule of thumb, you don’t want more than 100 I/Os per second per 10,000 RPM disk (even with a RAID array).

If the ‘Av Rd(ms)’ column (average read time in milliseconds) is higher than 14 ms (given that a fair amount of reading is being done), you may want to investigate, since most disks should provide at least this much performance. If this column shows 1000 ms or more, you probably have some type of I/O problem, and if it shows ######, then you have a serious I/O problem of some kind (this can also be a formatting problem, but anything greater than 1000 is a problem when there are a fair number of reads being done). I have seen I/O issues that are related to other problems but show up as an I/O problem. For disks with a lot of memory cached on the disk, the I/O time is often less than 1 ms where heavy reading is being done.

You should use this section of the report to identify such bottlenecks and to measure how effectively you have resolved those problems.
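
A per-datafile view of read activity and average read time can be sketched from V$FILESTAT and V$DATAFILE. READTIM is recorded in hundredths of a second and is populated only when TIMED_STATISTICS is enabled, so it is multiplied by 10 here to approximate milliseconds:

select df.name, fs.phyrds, fs.phywrts,
       round(fs.readtim * 10 / decode(fs.phyrds, 0, 1, fs.phyrds), 2) avg_rd_ms
from   v$filestat fs, v$datafile df
where  fs.file# = df.file#
order  by fs.phyrds desc
/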

The parameter that can be set in the init.ora to help improve the read time is the
DB_FILE_MULTIBLOCK_READ_COUNT parameter, which controls the number of blocks that
can be read in one I/O when a full table scan is being performed. This can reduce the number of I/Os needed to scan a table, thus improving the performance of the full table scan. Unfortunately, the optimizer might do more full table scans as a result of setting DB_FILE_MULTIBLOCK_READ_COUNT (you don’t want this behavior), so you may also need to set the OPTIMIZER_INDEX_COST_ADJ to a number, such as 10, to eliminate this problem and drive the use of indexes.
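
As a sketch only (the values 16 and 10 are illustrations, not recommendations, and SCOPE=BOTH assumes an spfile is in use; test any change against your own workload), both parameters can be adjusted dynamically:

alter system set db_file_multiblock_read_count = 16 scope = both;
alter system set optimizer_index_cost_adj = 10 scope = both;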

If the number of physical reads is heavier on one physical disk, proper
balancing of data will probably increase performance.




Latch Statistics
Latches are low-level queueing mechanisms (they’re accurately referred to as mutual exclusion mechanisms) used to protect shared memory structures in the SGA (memory). Latches are like locks on memory that are very quickly gotten and released, consuming roughly 32 bytes. Latches are used to prevent concurrent access to a shared memory structure. If the latch is not available, then a latch free miss is recorded. Most latch problems are related to not using bind variables
(library cache latch), redo generation issues (redo allocation latch), buffer cache contention issues (cache buffers lru chain), and hot blocks in the buffer cache (cache buffers chain).

There are also latch waits related to bugs, so check MetaLink as well. When latch miss ratios are greater than 0.5 percent, you should investigate the issue. In Oracle 10gR2, the Cache Buffers Chains (CBC) latch can be shared to some degree.

There are two types of latches: “willing to wait” latches (example is a library cache latch) and “not willing to wait” latches (an example is a redo copy latch). A process that is willing to wait will try to acquire a latch. If none are available, it will spin and then request the latch again. It will continue to do this up to the _SPIN_COUNT initialization parameter (note that spinning costs CPU). If it can’t get a latch after spinning up to the _SPIN_COUNT, it will go to sleep, not do anything for a while, and then will wake up after one centisecond (one hundredth of a
second). It will do this twice. It will then start this process again, spinning up to the _SPIN_COUNT and then sleeping for twice as long (two centiseconds). After doing this again, it will double again. So the pattern is 1, 1, 2, 2, 4, 4, etc. It will do this until it gets the latch.

Every time the latch sleeps, it will create a latch sleep wait. An example of a “willing to wait” latch is a library cache latch. Some latches are “not willing to wait.” A latch of this type does not wait for the latch to become available. It immediately times out and retries to obtain the latch. A redo copy latch is an example of a “not willing to wait” latch. A “not willing to wait” latch will
generate information for the immediate_gets and the immediate_misses columns of the V$LATCH view and also in the STATSPACK report. The hit ratio for these latches should also approach 99 percent, and the misses should stay below 1 percent.

By viewing this section of STATSPACK or querying the V$LATCH view, you can see how
many processes had to wait (a latch miss) or sleep (a latch sleep) and the number of times they had to sleep. If you see ##### in any field, it usually means bad news, as the value exceeds the length of the field. V$LATCHHOLDER, V$LATCHNAME, and V$LATCH_CHILDREN are also helpful in investigating latch issues. The STATSPACK report contains three latch sections: latch activity, latch sleep, and latch miss.
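
A simple sketch against V$LATCH shows which latches are actually missing and sleeping, for both the willing-to-wait and the immediate (no-wait) counters:

select name, gets, misses, sleeps, immediate_gets, immediate_misses
from   v$latch
where  misses > 0 or immediate_misses > 0
order  by misses desc
/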

One thing to remember about processes that are sleeping: These processes may also be holding other latches that will not be released until the process is finished with them. This will cause even more processes to sleep, waiting for those latches. So, you can see how important it is to reduce contention as much as possible. The following table explains the columns in this part of the report:

Latch Name
The name of the latch.

Gets
The number of times a “willing to wait” request for a latch was
requested and it was available.

Misses
The number of times a “willing to wait” request for a latch was
initially requested but was not available.

Sleeps
The number of times a “willing to wait” request for a latch failed over
and over until the spin count was exceeded and the process went
to sleep. The number of sleeps may be higher than the misses, since
processes may sleep multiple times before obtaining the latch.

NoWait Misses
The number of times an immediate (not willing to wait) request for
a latch was unsuccessful.

■ Latch free

When ‘latch free’ is high in the wait events section of the report, then there
are problems that need to be investigated in the latch section of the report. This section will help you look for which latches are a problem. The problem could be a sleeping latch (couldn’t get the latch and sleeping until the next try) or a spinning latch (waiting and retrying based on spin count).

■ Library cache and shared pool The library cache is a hash table you access through
an array of hash buckets (similar to the buffer cache). The memory for the library cache comes from the shared pool (the dictionary cache used for Oracle internal objects is also part of the shared pool). The library cache latch serializes access to objects in the library cache. Every time a SQL or PL/SQL procedure, package, function, or trigger is executed, this library cache latch is used to search the shared pool for the exact statement so that it can be reused. A single shared pool latch protected the allocation of memory in the library cache in Oracle 8i; as of Oracle 9i, there are 7 child latches for this. Contention for the ‘shared pool,’ ‘library cache pin,’ or ‘library cache’ latches primarily occurs
when the shared pool is too small or when statements are not reused. Statements are not usually reused when bind variables are not used. Common but not exact SQL can flood the shared pool with statements. Increasing the size of the shared pool, at times, only makes the latch problem worse. You can also set the value of the initialization parameter CURSOR_SHARING=FORCE (or CURSOR_SHARING=SIMILAR) to help fix this issue and to reduce problems when bind variables are not used. CURSOR_SHARING=FORCE will substitute bind variables for literals.
CURSOR_SHARING=SIMILAR will substitute bind variables for literals if the execution plan is guaranteed to be the same.



■ Redo copy - The redo copy latch is used to copy redo records from the PGA into the redo log buffer. The number of “redo copy” latches has a default of 2*CPU_COUNT, but this can be set using the _LOG_SIMULTANEOUS_COPIES initialization parameter. Increasing this parameter may help to reduce contention for the redo copy latch.

■ Redo allocation - The redo allocation latch (allocates the space in the redo log buffer) contention can be reduced by increasing the size of the log buffer (LOG_BUFFER) or by using the NOLOGGING feature, which will reduce the load on the redo log buffer. You should also try to avoid unnecessary commits.

■ Row cache objects - The “row cache objects” latch contention usually means that there is contention in the data dictionary. This may also be a symptom of excessive parsing of SQL statements that depend on public synonyms. Increasing the shared pool usually solves this latch problem. You usually increase the shared pool for a library cache latch problem well before this one is a problem. Also, according to MetaLink Note 166474.1, “Use Locally Managed tablespaces for your application objects, especially indexes. This will decrease Row Cache locks in a surprising fashion and consequently avoid common hanging problems.”


■ Cache buffers chains (CBC) The “cache buffers chains” (CBC) latch is needed to scan
the SGA buffer cache for database cache buffers. In Oracle 10g, the CBC can be shared, eliminating some of the contention. Tuning the code to use less of these is the best solution to eliminating problems with this latch. Also, reducing the popularity of the block will reduce the length of the hash chain (as discussed in the next item).

The CBC latches are used when searching for, adding, or removing a buffer from the
buffer cache. Buffer hash table x$bh holds headers (on a hash chain protected by a CBC latch) that point to db_block buffers in memory. Buffers are “hashed to a chain,” and the _db_block_hash_buckets define the number of chains (buckets) to which a buffer will hash. The more buckets (chains) that there are, the smaller the “chain” length will be with buffers hashed to the same chain (as long as it’s a prime number).

The CBC latches are used to protect a buffer list in the buffer cache. If _db_block_hash_buckets is not set to a prime number, you get many buffers hashed to one chain and none hashed to others (causing hot blocks to tie up other blocks on the chain) because of hashing anomalies. Contention on this latch could indicate a “hot block” or a bad setting for _db_block_hash_buckets prior to 9i. Prior to version 8i, Oracle made this the prime number higher than db_block_buffers/4, and this worked pretty well, although multiple blocks still got hashed to the same chain. In 8i, Oracle made this db_block_buffers*2, but they forgot to make it prime (which, because it is a hashed value, caused many blocks to be hashed to the same chain); many users experienced severe problems with this latch (you can set _db_block_hash_buckets = next prime(db_block_buffers*2) to solve this issue in prior versions). In 9i and 10g, Oracle sets it correctly and there are
enough “hash latches,” as people often call them. You will access a lot of these, since you need one every time you access a block, but you should not have a miss ratio of over 1–2 percent on this latch.

■ Hot blocks Blocks often accessed in the buffer cache cause “cache buffers chains”
latch issues. Hot blocks may also be a symptom of poorly tuned SQL statements. A hot
record creates a hot block that can cause issues for other records inside that block as well as any block “hashed” to the same chain. To find the hot block, query v$latch_children for the address and join it to v$bh to identify the blocks protected by this latch (this will show all blocks that are affected by the hot block); a query sketch for this follows this list of latches. You can identify the object by querying DBA_EXTENTS according to the file# and dbablk found from v$bh. If the hot block is on an index, using a reverse-key index will move sequential records to other blocks so that they are not locked up by the hot block in the chain. If the hot block is the index root block, a reverse-key index won’t help.

■ Cache buffers LRU chain The “cache buffers lru chain” latch is used to scan the LRU
(least recently used) chain containing all of the blocks in the buffer cache. A small buffer cache, excessive buffer cache throughput, many cache-based sorts, and the DBWR not keeping up with the workload are all culprits that can cause this issue. Try to fix the queries that are causing the excessive logical reads and/or use multiple buffer pools.
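
For the hot-block search described under “Hot blocks” above, a commonly used sketch joins the busiest cache buffers chains child latches to the buffers they protect and then to DBA_EXTENTS. It must be run as SYS because it reads X$BH, and the sleeps threshold of 1000 is only an illustrative cutoff:

select e.owner, e.segment_name, e.segment_type
from   dba_extents e,
       (select b.file#, b.dbablk
        from   x$bh b, v$latch_children l
        where  b.hladdr = l.addr
        and    l.name   = 'cache buffers chains'
        and    l.sleeps > 1000) hot
where  e.file_id = hot.file#
and    hot.dbablk between e.block_id and e.block_id + e.blocks - 1
/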

Latch Problem and Fix
Library cache
Use bind variables; adjust the shared_pool_size.

Shared pool
Use bind variables; adjust the shared_pool_size.

Redo allocation
Minimize redo generation and avoid unnecessary commits.

Redo copy
Increase the _log_simultaneous_copies.

Row cache objects
Increase the shared pool.

Cache buffers chain
Increase _DB_BLOCK_HASH_BUCKETS or make it prime.

Cache buffers lru chain
Use multiple buffer pools or fix queries causing excessive reads.

Latches are like locks on pieces of memory (or memory buffers). If the
latch hit ratio is below 99 percent, there is a serious problem, since
not even the lock to get memory could be gotten.
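
The willing-to-wait latch hit ratio mentioned above can be sketched per latch from V$LATCH:

select name,
       round((gets - misses) / decode(gets, 0, 1, gets) * 100, 2) hit_ratio
from   v$latch
where  gets > 0
order  by hit_ratio
/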




■ The files needed to create, manage, and drop the STATSPACK objects are all in the
/rdbms/admin subdirectory under the Oracle software home directory, and all start with the letters ‘sp’.
■ Create a tablespace to hold the STATSPACK data apart from your application and
SYSTEM objects.
■ Change the PERFSTAT account’s password and consider locking the account when it is
no longer in use.
■ Select the proper level for your reporting. In general, start with level 5 and use a higher level for further investigation.
■ Avoid running STATSPACK reports for intervals that include database shutdowns.
■ Actively manage the STATSPACK data, analyzing and purging it as needed. Monitor its
space usage and include it in your backup and upgrade plans.
■ If you choose to run both, ensure that you stagger the data collection of AWR from the collection for STATSPACK (by at least 30 minutes) to avoid a performance hit as well as conflicts.
■ If you use Grid Control, you can run the AWR Report directly from Grid Control.
■ Get to know your system by reviewing and knowing the regular Load Profile of your
system. Significant changes to the Load Profile during what should be similar workloads or common times during the day may warrant further investigation.
■ Hit ratios are a great barometer of the health of your system. A large increase or drop from day to day is an indicator of a major change that needs to be investigated.
■ Generally, buffer and library cache hit ratios should be greater than 95 percent for OLTP, but they could be lower for a data warehouse that may do many full table scans.
■ Tuning by wait events is one of the best possible reactive tuning methods.
■ The top 5 wait events reveal to you the largest issues on your system at the macro level. Rarely do they point you to a specific problem. Other parts of STATSPACK will tell you why you are receiving the top 5 waits.

■ Tuning the top 25 buffer get and top 25 physical get queries has yielded system
performance gains of anywhere from 5 to 5000 percent. The SQL section of the
STATSPACK report tells you which queries to potentially tune first.
■ The top 10 percent of your SQL statements should not be more than 10 percent of your buffer gets or disk reads.
■ If the free buffers inspected divided by the free buffer scans equals less than 1, the DB_CACHE_SIZE parameter may need to be increased.
■ The “sorts (disk)” statistic divided by the “sorts (memory)” should not be above 1–5 percent. If it is, you should increase the PGA_AGGREGATE_TARGET (or SORT_AREA_SIZE) parameter in the initialization file (given that physical memory is available to do this). Remember that the memory allocated for Sort_Area_Size is a per-user value and PGA_AGGREGATE_TARGET is across all sessions.
■ Latches are like locks on pieces of memory (or memory buffers). If the latch hit ratio is below 99 percent, there is a serious problem, since not even the lock to get memory could be gotten.
■ Segment statistics are a great way to pinpoint a performance problem to a given table, index, or partition. Oracle 10gR2 contains many segment-level statistics in both the AWR Report and STATSPACK.
■ If the PINHITRATIO is less than 95 percent when the report is run for an extended
period of time, the SHARED_POOL_SIZE is probably too small for your best system
performance. If the reloads are greater than 1 percent, this also points to a
SHARED_POOL_SIZE that is too small. (A query sketch for checking this follows these tips.)
■ You do not set maxtrans in 10g (it defaults to 255).
■ Never go to the block level unless you absolutely have to go there. The block level is a great place to find hot block and ITL issues, but it takes a lot of time and energy on the part of an advanced DBA to pinpoint problems at this level.
■ The ADDM Report can be a helpful tuning utility, but ADDM is better used through
Oracle’s Grid Control for maximum benefits.
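
For the PINHITRATIO and reloads check mentioned in the tips above, a minimal sketch against V$LIBRARYCACHE is:

select namespace, gets, gethitratio, pins, pinhitratio, reloads
from   v$librarycache
/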


Top 25 Init Parameters:

1. DB_CACHE_SIZE Initial memory allocated to data cache or memory used for data itself.

2. SGA_TARGET If you use Oracle’s Automatic Shared Memory Management, this
parameter is used to automatically determine the size of your data cache, shared pool,large pool, and Java pool (see Chapter 1 for more information). Setting this to 0 disables it.

3. PGA_AGGREGATE_TARGET Soft memory cap for total of all users’ PGAs.

4. SHARED_POOL_SIZE Memory allocated for data dictionary and for SQL and PL/SQL.

5. SGA_MAX_SIZE Maximum memory that the SGA can dynamically grow to.

6. OPTIMIZER_MODE CHOOSE, RULE, FIRST_ROWS, FIRST_ROWS_n, or ALL_ROWS.
Although RULE is definitely desupported and obsolete and people are often scolded for
even talking about it, I was able to set the mode to RULE in 10g. Consider the following error I received when I set OPTIMIZER_MODE to a mode that doesn’t exist (SUPER_FAST):
SQL> alter system set optimizer_mode=super_fast;
ERROR:
ORA-00096: invalid value SUPER_FAST for parameter optimizer_mode, must be from
among first_rows_1000, first_rows_100, first_rows_10, first_rows_1, first_rows,
all_rows, choose, rule

7. CURSOR_SHARING Converts literal SQL to SQL with bind variables, reducing parse
overhead.

8. OPTIMIZER_INDEX_COST_ADJ Coarse adjustment between the cost of an index scan
and the cost of a full table scan. Set between 1 and 10 to force index use more frequently. Setting this parameter to a value between 1 and 10 would pretty much guarantee index use, even when not appropriate, so be careful, since it is highly dependent on the index design and implementation being correct. Please note that if you are using Applications 11i: Setting OPTIMIZER_INDEX_COST_ADJ to any value other than the default (100) is not supported (see Metalink Note 169935.1). Also, see bug 4483286.

9. QUERY_REWRITE_ENABLED Used to enable Materialized View and
Function-Based-Index capabilities and other features in some versions.

10. DB_FILE_MULTIBLOCK_READ_COUNT For full table scans to perform I/O more
efficiently, this reads the given number of blocks in a single I/O.

11. LOG_BUFFER Buffer for uncommitted transactions in memory (set in pfile).

12. DB_KEEP_CACHE_SIZE Memory allocated to keep pool or an additional data cache
that you can set up outside the buffer cache for very important data that you don’t want pushed out of the cache.

13. DB_RECYCLE_CACHE_SIZE Memory allocated to recycle pool or an additional data
cache that you can set up outside the buffer cache and in addition to the keep cache
described in Item 12. Usually, DBAs set this up for ad hoc user query data that has
queries that are poorly written.

14. DBWR_IO_SLAVES (also DB_WRITER_PROCESSES if you have async I/O) Number of
writers from SGA to disk for simulated async I/O. If you have async I/O, then you use
DB_WRITER_PROCESSES to set up multiple writers to more quickly write out dirty
blocks during a database write (DBWR).

15. LARGE_POOL_SIZE Total blocks in the large pool allocation for large PL/SQL and a
few other Oracle options less frequently used.

16. STATISTICS_LEVEL Used to enable advisory information and optionally keep
additional O/S statistics to refine optimizer decisions. TYPICAL is the default.

17. JAVA_POOL_SIZE Memory allocated to the JVM for JAVA stored procedures.

18. JAVA_MAX_SESSIONSPACE_SIZE Upper limit on memory that is used to keep track of
user session state of JAVA classes.

19. MAX_SHARED_SERVERS Upper limit on shared servers when using shared servers.

20. WORKAREA_SIZE_POLICY Used to enable automatic PGA size management.

21. FAST_START_MTTR_TARGET Bounds time to complete a crash recovery. This is
the time (in seconds) that the database will take to perform crash recovery of a single instance. If you set this parameter, LOG_CHECKPOINT_INTERVAL should not be
set to 0. If you don’t set this parameter, you can still see your estimated MTTR (mean time to recovery) by querying V$INSTANCE_RECOVERY for ESTIMATED_MTTR (a query sketch follows this list).

22. LOG_CHECKPOINT_INTERVAL Checkpoint frequency (in OS blocks—most OS blocks
are 512 bytes) at which Oracle performs a database write of all dirty (modified) blocks to the datafiles in the database. Oracle will also perform a checkpoint if more than one-quarter of the data buffers are dirty in the db cache and also on any log switch. The LGWR (log writer) also updates the SCN in the control files and datafiles with the SCN of the checkpoint.

23. OPEN_CURSORS Specifies the size of the private area used to hold (open) user
statements. If you get “ORA-01000: maximum open cursors exceeded,” you may need
to increase this parameter, but make sure you are closing cursors that you no longer
need. Prior to 9.2.0.5, these open cursors were also cached and at times caused issues (ORA-4031) if OPEN_CURSORS was set too high. In 9.2.0.5, SESSION_CACHED_CURSORS
now controls the setting of the PL/SQL cursor cache. Do not set the parameter
SESSION_CACHED_CURSORS as high as you set OPEN_CURSORS, or you may experience
ORA-4031 or ORA-7445 errors.

24. DB_BLOCK_SIZE Default block size for the database. A smaller block size will reduce contention by adjacent rows, but a larger block size will lower the number of I/Os needed to pull back more records. A larger block size will also help in range scans where the blocks desired are sequentially stored.

25. OPTIMIZER_DYNAMIC_SAMPLING Controls the number of blocks read by the dynamic
sampling query. Very useful with systems that are using Global Temporary Tables.
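
The estimated MTTR mentioned under FAST_START_MTTR_TARGET (item 21) can be checked with a one-line query against V$INSTANCE_RECOVERY:

select target_mttr, estimated_mttr from v$instance_recovery
/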

Top 10 Initialization Parameters Not to Forget

This section details some other important initialization parameters. These parameters may be important only in certain cases or only if you are using a certain feature or version of Oracle:

■ CONTROL_FILES This is the location of your control files.

■ COMPATIBLE Set this to the correct version, or you’ll miss things in the new version.

■ OPTIMIZER_FEATURES_ENABLE If this is not set, you are missing out on new features.

■ UNDO_MANAGEMENT Set this to AUTO for automatic UNDO management.

■ UNDO_TABLESPACE Set this to the tablespace to use for UNDO management.

■ UNDO_RETENTION The undo retention time in seconds.

■ JOB_QUEUE_PROCESSES If you want to use DBMS_JOB, you must set this parameter.

Note that DBMS_JOB has been replaced by the Scheduler in 10g, but it uses the same
parameter.

■ UTL_FILE_DIR This must be set to use the UTL_FILE package.

■ RECOVERY_PARALLELISM Recovers using the Parallel Query Option for faster recovery.

■ LICENSE_MAX_SESSIONS and LICENSE_MAX_USERS These limit concurrent and
named users.

■ LICENSE_SESSIONS_WARNING Specifies the number of sessions beyond which a
license warning is issued.

Reference: Oracle 10g Performance Tuning
