Wednesday, December 29, 2010

Points to Remember

Some useful points:-


1) A consistent get is a block obtained in read-consistent (point-in-time) mode. It MAY or MAY NOT involve reconstruction (rolling back). The statistic counts the number of times a consistent get/read has been requested for a block.

(No. of rows fetched / arraysize) + No. of blocks = No. of consistent gets.
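For example, a full scan that returns 1,000 rows from a 20-block table with the default arraysize of 15 needs roughly 1000/15 + 20 ≈ 87 consistent gets; raising arraysize to 100 cuts that to roughly 1000/100 + 20 = 30 (the figures are illustrative).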


2) db block gets are blocks gotten in CURRENT mode -- they were the blocks that told us where all of the other blocks were (they were the segment headers).

3) SET ARRAYSIZE 100 -- number of rows SQL*Plus fetches at a time; the default is 15.
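You can watch the effect yourself with autotrace; a minimal sketch (EMP is just an example table):

SQL> set autotrace traceonly statistics
SQL> set arraysize 15
SQL> select * from emp;
SQL> set arraysize 100
SQL> select * from emp;     -- compare the 'consistent gets' figures of the two runs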

More about logical I/O, physical I/O, consistent gets, etc.:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:880343948514

4) dba_tab_modifications:
a) alter table emp monitoring;
b) exec dbms_stats.flush_database_monitoring_info;
c) Query dba_tab_modifications to view the details, for example as below.
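For example (assuming EMP is owned by SCOTT; adjust the owner to your own schema):

SQL> select table_name, inserts, updates, deletes, truncated, timestamp
       from dba_tab_modifications
      where table_owner = 'SCOTT' and table_name = 'EMP';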


5) Re-creating the password file:-

1) Recreate the password file

You can recreate the password file using the ORAPWDxx command, by which you can
specify the password for INTERNAL. Before running this command to create the
new password file, make sure to delete the old password file; otherwise the
ORAPWDxx utility will give an error message.

D:\> ORAPWD file=D:\ORANT\DATABASE\PWDORCL.ORA password= entries=5

Verify if the new file has been created.

The password file might be hidden. Hidden files can be seen:
at the DOS prompt, by typing:
c:\> dir /A:H
in Windows NT Explorer:
use the View/Options 'Show all files' setting to see the file.


Remark:
-------

If the value of the "remote_login_passwordfile" parameter in the "init.ora" is
EXCLUSIVE, then you must shut down your database before recreating the password
file.

Failure to do so will leave you unable to connect as internal until the
database is stopped, for example by stopping the services.

2) Alter the password of the user 'SYS' (Read Warning 1)

a) Verify the value of "remote_login_passwordfile" in the
"init.ora" parameter file.

If its value is EXCLUSIVE, continue with e); if SHARED, continue with b).

b) Stop the database if it is started

c) Change the "remote_login_passwordfile" in the Init.ora to
"exclusive".

d) Startup the database.

e) Alter the user SYS to a new password; this is now also the password for
internal.

3) Use ORADIM

You could drop the instance and recreate it with a new password.

Suppose the SID in question is ORCL:

a) Shut down the database and stop the services.

b) D:\> ORADIM -delete -sid ORCL

c) D:\> ORADIM -new -sid ORCL -intpwd -startmode auto -pfile D:\ORANT\DATABASE\INITORCL.ORA

-----------
WARNING
-----------

Solution Description:
=====================

II) Most Common Problems
--------------------

1) You're trying to recreate the password file with ORAPWDxx and
you get the following error:

D:\> ORAPWD file=D:\ORANT\DATABASE\PWDORCL.ORA password=ORACLE

OPW-00005: File with same name exists - please delete or rename

Solution:
There is already a file with the same name in the specified
directory. Remove or rename that specific file, before creating the
new one.

The password file might be hidden; see the note above on viewing hidden files.

2) ORA-1017 or ORA-1031 when trying to connect as internal

[NOTE:114399.1] : WINNT: ORA-1017 WHEN CONNECTING AS INTERNAL ON NT
[NOTE:114401.1] : WINNT: ORA-1031 WHEN CONNECTING AS INTERNAL ON NT

3) ORA-1996 when trying to grant SYSDBA or SYSOPER to a user

: OERR ORA-1996 GRANT failed; password file is full
: ORA-1994 when granting SYSDBA to a user

You must create a larger password file that can contain more entries
and re-grant the privileges to all of the users, as sketched below.
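A minimal sketch, reusing the file name from above (the entries value is just an example; delete the old file first and re-grant the privileges afterwards):

D:\> ORAPWD file=D:\ORANT\DATABASE\PWDORCL.ORA password=ORACLE entries=30

SQL> grant sysdba to scott;     -- repeat for each user that held the privilege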

4) By default in 8.1.x, a new database will use Windows NT Native
Authentication by having the following already set in the "sqlnet.ora" file:

SQLNET.AUTHENTICATION_SERVICES = (NTS)

This will allow any privileged user defined in the NT user group (named
ORA_DBA) to log in to the database without providing a password. To disable
this feature and force all users to provide a password, simply comment out
the above line in the sqlnet.ora file:

#SQLNET.AUTHENTICATION_SERVICES = (NTS)



6) If you are creating a logical standby database on the same server, then any kind of physical file addition/modification must be performed on the primary first. Then, using the LogMiner package, take the transaction details, skip those transactions on the logical standby using dbms_logstdby.skip_transaction, and re-run the statements manually on the logical standby, modified as appropriate for it (see the sketch below).
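A minimal sketch of the skip step (the XID values here are hypothetical; on a real system take them from DBA_LOGSTDBY_EVENTS):

SQL> alter database stop logical standby apply;
SQL> exec dbms_logstdby.skip_transaction(xidusn => 1, xidslt => 13, xidsqn => 1726);
SQL> alter database start logical standby apply immediate;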



Errors Faced:-

Problem :- Unable to open the X server.

Solution :- Xming was already running; after closing it and restarting it via XLaunch, Xming started and I was able to forward the X sessions to my desktop.

Logging

Table Mode    Insert Mode    ArchiveLog Mode     Result
----------    -----------    ---------------     --------------
LOGGING       APPEND         ARCHIVE LOG         redo generated
NOLOGGING     APPEND         ARCHIVE LOG         no redo
LOGGING       no append      ARCHIVE LOG         redo generated
NOLOGGING     no append      ARCHIVE LOG         redo generated
LOGGING       APPEND         NOARCHIVE LOG       no redo
NOLOGGING     APPEND         NOARCHIVE LOG       no redo
LOGGING       no append      NOARCHIVE LOG       redo generated
NOLOGGING     no append      NOARCHIVE LOG       redo generated


Direct path operations (create table as select, alter table move, insert /*+ APPEND */, sqlldr direct=y) can skip redo generation in a NOARCHIVELOG mode database.
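You can verify this from the session's own statistics; a minimal sketch, assuming a NOLOGGING table T and a populated source table BIG_TABLE:

SQL> select n.name, m.value
       from v$mystat m, v$statname n
      where m.statistic# = n.statistic#
        and n.name = 'redo size';

SQL> insert /*+ APPEND */ into t select * from big_table;
SQL> commit;

Re-run the first query; the difference in 'redo size' shows how much redo the direct path insert generated.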

more about logging & nologging :
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5280714813869


Syntax of the WITH clause:-

SQL> with
a as (select sysdate from dual),
b as (select sysdate from dual)
select * from a, b;

SYSDATE SYSDATE
--------- ---------
03-FEB-11 03-FEB-11

Copying data between databases.

SQL> copy from manzoor/ahamed@firstdb create mytest using select * from test;

Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
Table MYTEST created.

5 rows selected from manzoor@firstdb.
5 rows inserted into MYTEST.
5 rows committed into MYTEST at DEFAULT HOST connection.


Standby Database:-

Problem:-

I have a 10.2.0.1 database and a standby database on another server that has the redo logs applied every 15 minutes.

I added a new datafile to the primary DB, but the standby server's disk was full, so when the standby tried to add the file automatically it failed.

I've sorted the space issue out, but now I get the following error:

ORA-00283: recovery session canceled due to errors
ORA-01111: name for data file 129 is unknown - rename to correct file
ORA-01110: data file 129: 'C:\ORACLE\ORA10G\DATABASE\UNNAMED00129'
ORA-01157: cannot identify/lock data file 129 - see DBWR trace file
ORA-01111: name for data file 129 is unknown - rename to correct file
ORA-01110: data file 129: 'C:\ORACLE\ORA10G\DATABASE\UNNAMED00129'

Solution:-

STEP1:- Check the name of the datafile on standby database

select name from v$datafile; ---- This command should show the datafile name as UNNAMEDxxxxx

STEP2:- ON STANDBY DATABASE
alter system set standby_file_management='manual';

STEP3:- Rename the datafile
alter database create datafile 'C:\ORACLE\ORA10G\DATABASE\UNNAMED00129' as 'C:\ORACLE\ORA10G\DATABASE\actual_dbfile_name.dbf';


STEP4:- On the standby database
alter system set standby_file_management='auto';

STEP5:- On the standby database
recover managed standby database disconnect;

Shut down and then reopen the standby database and continue applying the redo logs.

-----------

Problem:-

In the primary database all the datafiles are in the /u01 mount point, and in the
standby database all the datafiles are in the /u02 mount point.

On the standby, the db_file_name_convert parameter has been set as below:
db_file_name_convert = '/u01','/u02'

Now I wanted to create a tablespace in the primary with a 5G datafile, but there is not sufficient space left on the standby, which has only 2G free. However, I have another mount point, /u03, with 20G of space. In order to use the /u03 mount point I followed the steps below. First, alter the parameter to include the name of the datafile we are going to create in the primary.

Note:- If you are going to specify an entire filename, it has to be specified first and the parent mount point second; otherwise Oracle will try to create the datafile in the parent mount point and the full-filename mapping will be ignored.

In Standby

SQL> alter system set db_file_name_convert =
'/u01/app/oracle/oradata/mydatafile.dbf','/u03/oradata/mydatafile.dbf',
'/u01/app/oracle/oradata','/u02/app/oracle/oradata' scope=spfile;

sql> recover managed standby database cancel;
Media recovery complete.

sql> shut immediate;

In primary

sql> create tablespace mytblspc datafile '/u01/app/oracle/oradata/mydatafile.dbf' size 5G;

In standby
sql> startup mount;
sql> alter database recover managed standby database disconnect from session;

Now you can see that only that particular datafile has been created in the /u03 mount point on the standby database; this can be verified as below.
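To verify on the standby (filename from the example above):

SQL> select name from v$datafile where name like '%mydatafile%';

This should return /u03/oradata/mydatafile.dbf, while the other files remain under /u02.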


Datafile Resizing:-

This is only possible if there are no used blocks at the end of the datafile.

Eg:-

The TEST tablespace was allocated 100M and contains 3 objects. We truncated all 3 tables, so each table now has only one extent, as below.

SQL> select segment_name, extent_id, block_id, blocks, bytes/1024 "KB"
       from dba_extents
      where segment_name in ('EMP','EMP1','EMP2') and owner = 'MANZOOR';

SEGMENT_NAME EXTENT_ID BLOCK_ID BLOCKS KB
-------------------- ---------- ---------- ---------- ----------
EMP 0 9 8 64
EMP1 0 12297 8 64
EMP2 0 12681 8 64

As per the above, EMP1 and EMP2 have their extents near the end of the datafile, so we cannot resize the datafile in this case.

SQL> alter database datafile '/u01/app/oracle/oradata/orclsol/test01.dbf' resize 95m;
alter database datafile '/u01/app/oracle/oradata/orclsol/test01.dbf' resize 95m
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

Solution:-

Move the tables to another tablespace temporarily, resize the datafile, and then move them back to the original tablespace.

SQL> alter table manzoor.emp1 move tablespace users;

Table altered.

SQL> alter table manzoor.emp2 move tablespace users;

Table altered.

SQL> alter database datafile '/u01/app/oracle/oradata/orclsol/test01.dbf' resize 5m;

Database altered.

SQL> alter table manzoor.emp1 move tablespace test;

Table altered.

SQL> alter table manzoor.emp2 move tablespace test;

Table altered.

SQL> select segment_name, extent_id, block_id, blocks, bytes/1024 "KB"
       from dba_extents
      where segment_name in ('EMP','EMP1','EMP2') and owner = 'MANZOOR';

SEGMENT_NAME EXTENT_ID BLOCK_ID BLOCKS KB
-------------------- ---------- ---------- ---------- ----------
EMP 0 9 8 64
EMP1 0 17 8 64
EMP2 0 25 8 64
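To check beforehand how far a datafile can shrink, look for its highest allocated block. A minimal sketch, assuming an 8K block size and a hypothetical file_id of 7 (adjust both to your own):

SQL> select max(block_id + blocks - 1) * 8 / 1024 "High-water MB"
       from dba_extents
      where file_id = 7;

A RESIZE below roughly this value raises ORA-03297, as seen above.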



Problem:-

An instance is running on a Linux host. I am not able to connect from the Windows system using the OEM tool, even though the dbconsole is started and running on the Linux server.

Solution:-

I dropped and recreated the repository.

$ emca -deconfig all db -repos drop

STARTED EMCA at Mar 2, 2011 11:17:11 AM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Enter the following information:
Database SID: orclsol
Listener port number: 1521
Password for SYS user:
Password for SYSMAN user:
Password for SYSMAN user:
Do you wish to continue? [yes(Y)/no(N)]: Y
Mar 2, 2011 11:17:27 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/emca/orclsol/emca_2011-03-02_11-17-10-AM.log.
Mar 2, 2011 11:17:28 AM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
Mar 2, 2011 11:17:30 AM oracle.sysman.emcp.EMReposConfig dropRepository
INFO: Dropping the EM repository (this may take a while) ...
Mar 2, 2011 11:18:25 AM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully dropped
Enterprise Manager configuration completed successfully
FINISHED EMCA at Mar 2, 2011 11:18:25 AM

$ emca -config all db -repos create

STARTED EMCA at Mar 2, 2011 11:18:38 AM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Enter the following information:
Database SID: orclsol
Central agent home: /u01/app/oracle/product/10.2.0/db_1
Listener port number: 1521
Password for SYS user:
Password for DBSNMP user:
Password for SYSMAN user:
Password for SYSMAN user: Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):
-----------------------------------------------------------------

You have specified the following settings

Database ORACLE_HOME ................ /u01/app/oracle/product/10.2.0/db_1

Database hostname ................ mysolaris
Listener port number ................ 1521
Database SID ................ orclsol
Central agent home ................ /u01/app/oracle/product/10.2.0/db_1
Email address for notifications ...............
Outgoing Mail (SMTP) server for notifications ...............

-----------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: Y
Mar 2, 2011 11:19:26 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/product/10.2.0/db_1/cfgtoollogs/emca/orclsol/emca_2011-03-02_11-18-38-AM.log.
Mar 2, 2011 11:19:27 AM oracle.sysman.emcp.EMReposConfig createRepository
INFO: Creating the EM repository (this may take a while) ...
Mar 2, 2011 11:21:43 AM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully created
Mar 2, 2011 11:21:50 AM oracle.sysman.emcp.util.CentralAgentUtil registerWithCentralAgents
INFO: Registering target(s) with central agent(s) (this may take a while)...
Mar 2, 2011 11:21:59 AM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Mar 2, 2011 11:23:36 AM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: Database Control started successfully
Mar 2, 2011 11:23:36 AM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: >>>>>>>>>>> The Database Control URL is http://mysolaris:1158/em <<<<<<<<<<<
Enterprise Manager configuration completed successfully
FINISHED EMCA at Mar 2, 2011 11:23:36 AM


$ emctl status dbconsole

Oracle Enterprise Manager 10g Database Control Release 10.2.0.2.0
Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
http://mysolaris:1158/em/console/aboutApplication
Oracle Enterprise Manager 10g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/10.2.0/db_1/mysolaris_orclsol/sysman/log


Now I am able to connect to the database using OEM.


Oracle Best Practices:-
Arup Nanda has worked exclusively as an Oracle DBA for
more than 13 years and won Oracle's DBA of the Year award in 2003.

Based on his Collaborate '07 presentation titled "DBA Best Practices from the Field," this tip provides criteria for determining what makes a "best practice" as well as Nanda's top five best practices for the Oracle DBA.

In my view, a best practice can only be a best practice if it
1) can be justified as to why it's advantageous and
2) can be flexible to adapt to multiple situations. The following five essential best practices were derived from my years of experience working with Oracle systems large and small.

#1: Multiple Oracle Homes

My favorite best practice is the one about multiple Oracle Homes. Here is how it goes: when applying a patch or a patchset, I recommend against applying it to the existing Oracle Home. Instead, create a new Oracle Home and apply the patches there.

I create the first Oracle Home at /app/oracle/db_1, for instance. When a patch comes out, I install the whole Oracle
software in a different home -- /app/oracle/db_2 -- and then apply the patch there.
During the process of installation and patch application, the database is up and running, as it runs off the home /db_1. When the outage window comes, all I have to do is to shut down Oracle; change Oracle Home to db_2 and bring the database up. If there is a problem, I can reset the Oracle Home back to the old one.

So, here is the conventional approach:

1. Shut down the database
2. Apply the patch to the Oracle Home
3. Start the database
4. In case of problems:
5. Shut down the database
6. Roll back the patch
7. Start the database

Steps 2 and 6 could take as much as three hours depending on the amount of patching. The database is down during these times.
In the new approach:

1. Install the new Oracle Home
2. Apply the patch to the new Home
3. Shut down the database
4. Change Oracle Home to the new location
5. Start the database
6. In case of problems:
7. Shut down the database
8. Change Oracle Home to the old one
9. Start the database

The database is down only during steps 4 and 8, which take a couple of minutes at most, not hours.
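The switch itself is just an environment change. A minimal sketch on Unix, assuming the two homes from the example (on Windows you would repoint the service with ORADIM instead):

$ export ORACLE_HOME=/app/oracle/db_2    # the newly patched home
$ export PATH=$ORACLE_HOME/bin:$PATH
$ sqlplus / as sysdba
SQL> startup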

So, here are the advantages:

1. The downtime is significantly reduced, to one 60th of the original time.
2. The risk is reduced significantly, as there is no rolling back a patch; you just go to the older version.
3. You can perform a "diff" on these two homes to see what changed. You can see the differences across multiple homes as well.
4. You can take several databases running on the same server to the new Oracle Home one by one.
5. You can see the various Oracle Homes and what patch level they are on using the inventory.
The only negative part is the space consumption -- you need space for two Oracle Homes. But considering a typical Oracle Home takes
about 4 GB or less, this aspect of the suggestion is trivial.
#2: Set audit trail to DB

Set the audit trail to DB by placing the parameter AUDIT_TRAIL = DB in the initialization parameter file during database creation.
Setting this parameter does not start the auditing, because an explicit AUDIT command must still be given on the object. But the parameter
must be set to a value other than FALSE (the default) for the command to take effect. Because it is a non-dynamic parameter, the database must
be bounced to change the value of AUDIT_TRAIL. To save the trouble and avoid an outage, always set the value to DB, even if you never
intend to audit anything. It does not break anything, and you will always be ready to audit when the time comes.
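A minimal sketch (scott.emp is just an example object; the ALTER SYSTEM form assumes the instance uses an spfile):

SQL> alter system set audit_trail=DB scope=spfile;   -- static; takes effect after a bounce
SQL> audit select on scott.emp by access;
SQL> select username, obj_name, action_name, timestamp
       from dba_audit_trail;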

#3: Don't use .log

Don't use .log as the extension of redo logs. Someone may run a script to remove all the log files, assuming they are redundant,
and you will end up losing the online redo logs as well, forcing a database recovery. Instead, name them with the extension "redo" or "rdo."
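For example, when adding a log group (path and size are hypothetical):

SQL> alter database add logfile group 4
       ('/u01/oradata/orcl/redo04.rdo') size 50m;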

#4: Preview RMAN Restore

Preview RMAN Restore to identify all the various backup pieces that will be used in the recovery process without doing an actual recovery.
This eliminates any surprises from missing pieces during an actual recovery process.
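A minimal sketch (standard RMAN syntax):

RMAN> restore database preview;
RMAN> restore database preview summary;   -- condensed listing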

#5: Create a new Oracle user for clients running on the same server as the DB

The Oracle Database server software also contains the client piece, which allows clients on the same server to connect to the database.
But as a best practice, do not use the same user or the same software installation; use a new one. For instance, if "oracle" is the user used to install the Oracle software, create a new user called, say, "oraapp" and install the client-only software as that user. The user "oraapp" should not be part of the dba or the oinstall group, so this user can't log on to the database as sysdba. Create a new group called "appgrp" and assign the user oraapp to this group.

All the application users on the box should also be part of the appgrp group. This way they can use the sqlplus, sqlldr and other executables on the server,
but not be able to connect as sysdba.

The common practice is to run the client software as the same user as the database software owner; but starting with 10.2, Oracle changed the security policy to take away the global execution permissions from the Oracle Home. So the only alternatives are to let app users be part of the dba group or to change the permissions on the Oracle Home -- both make the database extremely vulnerable.

------------------------------------------------------

Problem :- Unable to connect to the database from the client. We have set the ORACLE_HOME and found that all the parameters are OK, but we are still not able to connect to the DB; we get the error "TNS: failed to resolve the service name".

Solution:-

After checking everything, we found that NAMES.DEFAULT_DOMAIN was set in sqlnet.ora, but the net service names in tnsnames.ora do not use any domain. Hence, whenever we run tnsping, the default domain is automatically appended to the net service name, and that qualified name is searched for in tnsnames.ora; since no net service name with the domain suffix is present, we got this error.

Hence we commented out the line in sqlnet.ora, and then it works.
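A minimal illustration (the domain and entry names are hypothetical):

# sqlnet.ora
NAMES.DEFAULT_DOMAIN = example.com

# tnsnames.ora -- the entry carries no domain suffix
ORCL = (DESCRIPTION = ...)

C:\> tnsping orcl     -- actually searches for ORCL.example.com, which is not in tnsnames.ora

Either comment out the NAMES.DEFAULT_DOMAIN line or rename the tnsnames.ora entry to ORCL.example.com.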

-------------------------------------------------------------------------



Problem:

Unable to connect to the database using sqlplus / as sysdba, even as an admin user.

Solution:-

C:\>sqlplus "/AS SYSDBA"
SQL*Plus: Release 10.2.0.3.0 - Production on Tue May 6 04:42:32 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
ERROR:
ORA-01031: insufficient privileges
Enter user-name:

If you are getting the above error when you try to login with "sqlplus /AS SYSDBA", please check if the following are properly set.
1] The Windows user ID with which you signed in is part of the ORA_DBA group.
2] SQLNET.AUTHENTICATION_SERVICES=(NTS) in sqlnet.ora file.

Now try again.....

C:\>sqlplus "/AS SYSDBA"
SQL*Plus: Release 10.2.0.3.0 - Production on Tue May 6 04:49:13 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the OLAP and Data Mining options
SQL> show user
USER is "SYS"


How to check a user's group membership in Windows:-

c:\> net user administrator

User name Administrator
Full Name
Comment Built-in account for administering the computer/domain
User's comment
Country code 000 (System Default)
Account active Yes
Account expires Never

Password last set 12/20/2010 3:58 PM
Password expires Never
Password changeable 12/20/2010 3:58 PM
Password required Yes
User may change password Yes

Workstations allowed All
Logon script
User profile
Home directory
Last logon 5/5/2011 8:17 AM

Logon hours allowed All

Local Group Memberships *Administrators *Debugger Users
*ora_dba
Global Group memberships *None
The command completed successfully.


In Linux

$ id oracle


CPU Usage

cpu used by this session - 5448
The value is given in hundredths of a second; divide it by 100 to get the number of seconds:
5448/100 ≈ 54 seconds
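This statistic comes from v$sesstat; a minimal sketch that reports it for all sessions:

SQL> select s.sid, s.username, t.value/100 "CPU seconds"
       from v$sesstat t, v$statname n, v$session s
      where t.statistic# = n.statistic#
        and t.sid = s.sid
        and n.name = 'CPU used by this session';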


Difference between RBO and CBO

Please tell me the difference between rule-based and cost-based optimization of SQL queries, in detail. Thank you.
A long time ago, the only optimizer in the Oracle database was the Rule-Based Optimizer (RBO). Basically, the RBO used a set of rules to determine how to execute a query. If an index was available on a table, the RBO rules said to always use the index. There are some cases where the use of an index slowed down a query. For example, assume someone put an index on the GENDER column, which holds one of two values, MALE and FEMALE. Then someone issues the following query:
SELECT * FROM emp WHERE gender='FEMALE';
If the above query returned approximately 50% of the rows, then using an index would actually slow things down. It would be faster to read the entire table and throw away all rows that have MALE values. Experts in Oracle query optimization have come to a rule of thumb that says if the number of rows returned is more than 5-10% of the total table volume, using an index would slow things down. The RBO would always use an index if present because its rules said to.

It became obvious that the RBO, armed with its set of discrete rules, did not always make great decisions. The biggest problem with the RBO was that it did not take the data distribution into account. So the Cost-Based Optimizer (CBO) was born. The CBO uses statistics about the table, its indexes and the data distribution to make better-informed decisions. Using our previous example, assume that the company has employees that are 95% female and 5% male. If you query for females, then you do not want to use the index. If you query for males, then you would like to use the index. The CBO has the information at hand to make these kinds of determinations, which was not available in the old RBO.
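The CBO can only see that distribution if statistics (here, a histogram on GENDER) exist; a minimal sketch of gathering them, with a hypothetical SCOTT.EMP table:

begin
  dbms_stats.gather_table_stats(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'for columns gender size 2',  -- 2-bucket histogram on GENDER
    cascade    => true);                        -- gather index statistics too
end;
/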



DBWR_IO_SLAVES vs DB_WRITER_PROCESSES
Questions about multiple DBWR processes have plagued DBAs since Oracle7. You configured multiple DBWR processes in Oracle7 by setting the parameter DB_WRITERS. In Oracle7, multiple DBWR processes were actually slave processes that were unable to perform asynchronous I/O calls on their own. The algorithm used by the Oracle7 DBWR caused it to incur waits when the delay of a single write caused additional writes to queue up until the initial write was complete.

Oracle8 and later releases' DBWR architecture corrects this problem. Oracle's DBWR now writes continuously without waiting for previous writes to complete. The new design allows DBWR to act as if it were inherently asynchronous, regardless of whether the operating system supports asynchronous I/O or not. Administrators can configure multiple DBWR processes by setting the init.ora parameter db_writer_processes. Multiple database writers became available in Oracle 8.0.4 and allow true multiple database writes; there is no master-slave relationship as in Version 7.

If you implement database writer I/O slaves by setting the dbwr_io_slaves parameter, you configure a single (master) DBWR process that has subservient slave processes to perform its I/O calls. I/O slaves can also be used to simulate asynchronous I/O on platforms that do not support asynchronous I/O or implement it inefficiently.

You can't activate both multiple DBWRs and I/O slaves. If both parameters are set in the parameter file, dbwr_io_slaves will take precedence.
To determine whether to use multiple DBWn processes or database slaves, follow these guidelines:

Use db_writer_processes for most write-intensive applications; one per CPU is the recommended setting.

Use db_writer_processes for databases that have a large data buffer cache.

Use dbwr_io_slaves for applications that are not write-intensive and run on operating systems that support asynchronous I/O.

Use dbwr_io_slaves on platforms that do not support asynchronous I/O.

Use dbwr_io_slaves on single-CPU systems; multiple DBWR processes are CPU intensive.
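A minimal sketch of each choice (set one or the other, not both; the values are examples only):

SQL> alter system set db_writer_processes = 4 scope=spfile;   -- e.g. one per CPU
-- or
SQL> alter system set dbwr_io_slaves = 4 scope=spfile;

Both parameters are static, so bounce the instance for the change to take effect.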

Points on Oracle SQL Performance Parameters:- by Donald Burleson

Oracle Corporation has invested millions of dollars in making the cost-based SQL optimizer (CBO) one of the most sophisticated tools ever created. The job of the CBO is to always choose the most optimal execution plan for any SQL statement.
However, there are some things that the CBO cannot detect, which is where the DBA comes in. The types of SQL statements, the speed of the disks and the load on the CPUs all affect the "best" execution plan for a SQL statement. For example, the best execution plan at 4:00 A.M., when 16 CPUs are idle, may be quite different from the plan for the same query at 3:00 P.M., when the system is 90 percent utilized.

Despite the name "Oracle", the CBO is not psychic, and Oracle can never know, a priori, the exact load on the Oracle system. Hence the Oracle professional must adjust the CBO behavior periodically. Most Oracle professionals make these behavior adjustments using the instance-wide CBO behavior parameters such as optimizer_index_cost_adj and optimizer_index_caching.

However, Oracle does not recommend changing the default values for many of these CBO settings because the changes can affect the execution plans for thousands of SQL statements.

Here are some of the major adjustable parameters that influence the behavior of the CBO:


optimizer_index_cost_adj: This parameter alters the costing algorithm for access paths involving indexes. The smaller the value, the cheaper the cost of index access.

optimizer_index_caching: This is the parameter that tells Oracle how much of your index is likely to be in the RAM data buffer cache. The setting for optimizer_index_caching affects the CBO's decision to use an index for a table join (nested loops), or to favor a full-table scan.

optimizer_max_permutations: This controls the maximum number of table join permutations allowed before the CBO is forced to pick a table join order. For a six-way table join, Oracle must evaluate 6-factorial, or 720, possible join orders for the tables.

db_file_multiblock_read_count: When set to a high value, the CBO recognizes that scattered (multi-block) reads may be less expensive than sequential reads. This makes the CBO friendlier to full-table scans.

parallel_automatic_tuning: When set "on", full-table scans are parallelized. Because parallel full-table scans are very fast, the CBO will give a higher cost to index access, and be friendlier to full-table scans.

hash_area_size (if not using pga_aggregate_target): The setting for hash_area_size parameter governs the propensity of the CBO to favor hash joins over nested loop and sort merge table joins.

sort_area_size (if not using pga_aggregate_target): The sort_area_size influences the CBO when deciding whether to perform an index access or a sort of the result set. The higher the value for sort_area_size, the more likely that a sort will be performed in RAM, and the more likely that the CBO will favor a sort over pre-sorted
index retrieval.

Important Note: Prior to Oracle 10g, adjusting these optimizer parameters was the only way to compensate for sample size issues with dbms_stats. As of 10g, the use of dbms_stats.gather_system_stats and improved sampling within dbms_stats has made adjustments to these parameters far less important. Ceteris paribus, always adjust CBO statistics before adjusting optimizer parameters. For more details on optimizer parameters, see my latest book "Oracle Tuning: The Definitive Reference".

The parameter optimizer_index_cost_adj controls the CBO's propensity to favor index scans over full-table scans. As we will see, in a dynamic system, the "ideal" value for optimizer_index_cost_adj may change radically in just a few minutes, as the type of SQL and load on the database changes.

Using optimizer_index_cost_adj

The optimizer_index_cost_adj is the most important parameter of all, and the default setting of 100 is incorrect for most Oracle systems. However, for OLTP systems, resetting this parameter to a smaller value (between 10 and 30) may result in huge performance gains.

10g Note: In Oracle 10g, you can achieve a similar result to reducing the value of optimizer_index_cost_adj by analyzing your workload statistics (dbms_stats.gather_system_stats). Also note that utilizing CPU costing (_optimizer_cost_model) may affect the efficiency of plans with lower values for optimizer_index_cost_adj.

Is it possible to query the Oracle environment and intelligently determine the optimal setting for optimizer_index_cost_adj? Let's examine the issue.

The optimizer_index_cost_adj parameter defaults to a value of 100 and can range in value from 1 to 10,000. A value of 100 means that equal weight is given to index vs. multiblock reads. In other words, optimizer_index_cost_adj can be thought of as a "how much do I like full-table scans?" parameter.

With a value of 100, the CBO likes full-table scans and index scans equally, and a number lower than 100 tells the CBO that index scans are faster than full-table scans. However, even with a super-low setting (optimizer_index_cost_adj=1), the CBO will still choose full-table scans for no-brainers, like tiny tables that reside on two blocks.

optimizer_index_cost_adj.sql


col c1 heading 'Average Waits for|Full Scan Read I/O' format 9999.999
col c2 heading 'Average Waits for|Index Read I/O' format 9999.999
col c3 heading 'Percent of| I/O Waits|for Full Scans' format 9.99
col c4 heading 'Percent of| I/O Waits|for Index Scans' format 9.99
col c5 heading 'Starting|Value|for|optimizer|index|cost|adj' format 999


select
a.average_wait c1,
b.average_wait c2,
a.total_waits /(a.total_waits + b.total_waits) c3,
b.total_waits /(a.total_waits + b.total_waits) c4,
(b.average_wait / a.average_wait)*100 c5
from
v$system_event a,
v$system_event b
where
a.event = 'db file scattered read'
and
b.event = 'db file sequential read'
;


Here is the output from the script:

Average Waits for   Average Waits for   Percent of I/O     Percent of I/O     Starting Value for
Full Scan Read I/O  Index Read I/O      Waits, Full Scans  Waits, Index Scans optimizer_index_cost_adj
------------------  ------------------  -----------------  ------------------ -------------------------
             1.473                .289                .02                 .98                        20


As you can see, the suggested starting value for optimizer_index_cost_adj may be too high, because 98 percent of the data waits are on index (sequential) block access. We can "weight" this starting value for optimizer_index_cost_adj to reflect the reality that this system has only two percent of waits on full-table scan reads (a typical OLTP system with few full-table scans). As a practical matter, we never want an automated value for optimizer_index_cost_adj to be less than one or more than 100.

Also, these values change constantly. As the I/O waits accumulate and access patterns change, this same script may give a very different result at a different time of the day.


Optimization

The Oracle Cost-based SQL optimizer is one of the world's most sophisticated software achievements, but it is the job of the Oracle professional to provide valid statistics for the schema and understand how the Oracle parameters affect the overall performance of the SQL optimizer. Remember, suboptimal SQL execution plans are a major reason for poorly performing Oracle databases, and because the CBO determines the execution plans, it is a critical component in Oracle optimization.


Thanks to Donald Burleson.

--------------------------------------------------------------------------------


Moving data between different versions using the exp/imp utilities:-
By Tom

To move data DOWN a version(s), you need to export using the lower version's EXP tool and import using the lower version's IMP tool.

To move data UP a version, you export using the EXP of the database that contains the data, and you import using the IMP that ships with the TARGET database.

Eg:

to move data from 8.0 to 7.3:

o run catexp7 in the 8.0 database.
o exp73 over sqlnet to the 8.0 database.
o imp73 natively into the 7.3 database.


to move data from 7.3 to 8.1

o exp73 natively on the 7.3 instance
o imp natively using the 8.1 imp.exe



Port Details:-

1521: Default port for the TNS Listener.
1522 – 1540: Commonly used ports for the TNS Listener
1575: Default port for the Oracle Names Server
1630: Default port for the Oracle Connection Manager – client connections
1830: Default port for the Oracle Connection Manager – admin connections
2481: Default port for Oracle JServer/Java VM listener
2482: Default port for Oracle JServer/Java VM listener using SSL
2483: New port for the TNS Listener
2484: New port for the TNS Listener using SSL



Use plimit to display or set the resource limits of a running process (such as the current shell) on Solaris.

Eg:-

plimit $$              # display the limits of the current shell
plimit -n 65536 $$     # raise the open-file-descriptor limit to 65536
