Sunday, November 28, 2010

ADDM Reports.

The different types of advisors are:

SQL> SELECT * FROM DBA_ADVISOR_DEFINITIONS;

ADVISOR_ID ADVISOR_NAME                   PROPERTY
---------- ------------------------------ ----------
         1 ADDM                                    1
         2 SQL Access Advisor                     15
         3 Undo Advisor                            1
         4 SQL Tuning Advisor                      7
         5 Segment Advisor                         3
         6 SQL Workload Manager                    0
         7 Tune MView                             31


ADDM (Automatic Database Diagnostic Monitor)

The AWR holds all the information on the activities and waits on an Oracle 10g database. ADDM is the primary client of AWR information, and can be considered an expert resident within Oracle 10g.

The ADDM provides suggestions and recommendations for fixing the problems.

ADDM is invoked automatically by the Oracle 10g database and performs analysis to determine any issues in the database. If it identifies issues, ADDM recommends solutions.

An ADDM analysis is performed every time an AWR snapshot is taken: the MMON process triggers the analysis each time a snapshot is taken, covering the period between the last two snapshots. This approach proactively monitors the database and detects bottlenecks before they become significant problems. It is also possible to invoke ADDM manually for a chosen snapshot range. ADDM also reports areas of the system that have no problems, which lets you quickly see that there is little to be gained by performing actions in those areas.

The results of ADDM analysis are stored in the AWR and are accessible through dictionary views and Enterprise Manager (EM).

Analysis is performed from the top down, identifying symptoms first and then refining them to reach the root cause.

The goal of the analysis is to reduce a single throughput metric called DB time. DB time is the cumulative time spent by the database server processing user requests, which includes wait time and CPU time. You can view this metric through the time model dictionary views. By reducing DB time, the database is able to support more user requests using the same resources; in other words, the database can perform the same workload in less time.
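For instance, DB time can be checked directly from the time model views; a quick sketch against the instance-wide view (both statistic names are standard in V$SYS_TIME_MODEL, values are in microseconds):

SQL> select stat_name, value
  2  from V$sys_time_model
  3  where stat_name in ('DB time', 'DB CPU');

Comparing the two shows how much of DB time is CPU work versus wait time.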

Since ADDM is integrated into the database server, running the analysis has a minor impact on the database and normally takes less than three seconds to complete. ADDM analysis results are represented as findings, and each finding belongs to one of three categories: problem (root cause), symptom, or information.

DBA_ADVISOR_FINDINGS - lists the findings in the database.
DBA_ADVISOR_RECOMMENDATIONS - lists the recommendations for the findings.
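For example, the findings and recommendations of a completed task can be listed with queries along these lines (the task name 'ADDM_REPORT' matches the example further below; adjust as needed):

SQL> select type, message from dba_advisor_findings
  2  where task_name = 'ADDM_REPORT';

SQL> select rec_id, type, rank, benefit from dba_advisor_recommendations
  2  where task_name = 'ADDM_REPORT';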

Generating ADDM Reports:-

1) @?/rdbms/admin/addmrpt.sql
2) @?/rdbms/admin/addmrpti.sql (for a particular instance).

Generating ADDM report through dbms_advisor Package:-

begin
   -- Create an ADDM advisor task.
   dbms_advisor.create_task
   (
      ADVISOR_NAME => 'ADDM',
      TASK_NAME    => 'ADDM_REPORT',
      TASK_DESC    => 'GETTING ADDM ADVICE'
   );
   -- Set the snapshot range to analyze.
   dbms_advisor.set_task_parameter
   (
      TASK_NAME => 'ADDM_REPORT',
      PARAMETER => 'START_SNAPSHOT',
      VALUE     => 43
   );
   dbms_advisor.set_task_parameter
   (
      TASK_NAME => 'ADDM_REPORT',
      PARAMETER => 'END_SNAPSHOT',
      VALUE     => 44
   );
   -- Run the analysis.
   dbms_advisor.execute_task
   (
      TASK_NAME => 'ADDM_REPORT'
   );
end;
/

SQL> set long 10000;
SQL> set pagesize 10000;
SQL> select dbms_advisor.get_Task_report('ADDM_REPORT') from dual;

DBMS_ADVISOR.GET_TASK_REPORT('ADDM_REPORT')
--------------------------------------------------------------------------------
DETAILED ADDM REPORT FOR TASK 'ADDM_REPORT' WITH ID 246
-------------------------------------------------------

Analysis Period: 01-DEC-2010 from 10:32:21 to 11:00:22
Database ID/Instance: 2728202513/1
Database/Instance Names: TOMIG/tomig
Host Name: rhel.manzoor.com
Database Version: 10.2.0.1.0
Snapshot Range: from 43 to 44
Database Time: 1 seconds
Average Database Load: 0 active sessions

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

THERE WAS NOT ENOUGH DATABASE TIME FOR ADDM ANALYSIS.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ADDITIONAL INFORMATION
----------------------

There was no significant database activity to run the ADDM.

The database's maintenance windows were active during 99% of the analysis
period.

The analysis of I/O performance is based on the default assumption that the
average read time for one database block is 10000 micro-seconds.

An explanation of the terminology used in this report is available when you
run the report with the 'ALL' level of detail.


Views:-

dba_advisor_tasks
dba_advisor_findings
dba_advisor_recommendations
dba_advisor_actions
dba_advisor_log

AWR/ ASH Reports

Automatic Workload Repository (AWR)

In 10g, Oracle introduced a new statistical report on the database called the Automatic Workload Repository (AWR) report. It is essentially an enhanced version of the Statspack report.

The AWR takes a snapshot of the database at a specified interval (the default is one hour) and stores it in the SYSAUX tablespace.

The snapshots are taken automatically if the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. If it is set to BASIC, the statistics are not gathered.

The AWR contains the performance statistics and workload information on the database. The information is gathered every hour and preserved for seven days by default. We can change the retention and interval settings at any time.

To enable AWR, set STATISTICS_LEVEL = TYPICAL or ALL.
To disable AWR, set STATISTICS_LEVEL = BASIC.

SQL> select * from V$statistics_level;

The statistics are collected and stored in memory in the SGA. The in-memory statistics collection area is a circular buffer, where old data is overwritten after being flushed to disk.

The AWR statistics snapshots are owned by the SYS schema. The MMON (Manageability Monitor) process is responsible for filtering and transferring the memory statistics to disk every hour. When the buffer fills before then, the MMNL (Manageability Monitor Light) process is responsible for flushing the information to the repository.

Active Session History (ASH)

ASH contains recent information on active sessions, sampled every second. AWR snapshots are taken only every hour, and data that is up to an hour old will not help in diagnosing issues that are current on the database. Typically, to resolve issues urgently, detailed information pertaining to the last 5 to 10 minutes is critical. Because recording session activity is expensive, ASH samples V$SESSION every second and records the events for which the sessions are waiting.

You can query ASH information through V$ACTIVE_SESSION_HISTORY. The view contains one row for each active session per sample and returns the latest sample rows first. It includes:

* SQL identifier of the SQL statement.
* Object number, file number, and block number.
* Wait event identifier and parameters.
* User identifier, session identifier, and serial number.
* Client identifier and name of the operating system program.

For example, to diagnose a performance problem for SID 12:

Query

select session_state, event, current_obj# from V$active_session_history where session_id = 12;
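ASH also lends itself to quick aggregations. For example, a sketch of the top wait events over roughly the last 10 minutes (10/1440 of a day):

SQL> select event, count(*)
  2  from V$active_session_history
  3  where sample_time > sysdate - 10/1440
  4  and session_state = 'WAITING'
  5  group by event
  6  order by count(*) desc;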

Generating AWR Reports.

a) To view the snapshot numbers and their times.

SQL> select * from dba_hist_snapshot;

b) To generate the AWR report using the SQL script.

>@?/rdbms/admin/awrrpt.sql;

c) To view the Current Setting.

SQL> select * from dba_hist_wr_control;

DB Id SNAP_INTERVAL RETENTION TOPNSQL
----------- ----------------- ----------------- --------
2728202513 +00000 01:00:00.0 +00007 00:00:00.0 DEFAULT

Here SNAP_INTERVAL shows that a snapshot is taken every hour, and the retention period has been set to 7 days.

To Change the Setting.

SQL> begin
2 dbms_workload_repository.modify_snapshot_settings
3 (
4 RETENTION => 7200,
5 INTERVAL => 30
6 );
7 end;
8 /

SQL> select * from dba_hist_wr_control;

DB Id SNAP_INTERVAL RETENTION TOPNSQL
----------- ----------------- ------------------ ----------
2728202513 +00000 00:30:00.0 +00005 00:00:00.0 DEFAULT

Now the interval has been changed to 30 minutes and the retention period has been changed from 7 days to 5 days (7200 is the number of minutes, i.e. 7200 / 60 / 24 = 5 days).
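To go back to the defaults later (60-minute interval, 7-day retention, both given in minutes), the same procedure can be called again; a sketch:

SQL> exec dbms_workload_repository.modify_snapshot_settings (RETENTION => 10080, INTERVAL => 60);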

To Create a Snap Manually.

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

To Drop snapshots.

SQL> exec dbms_workload_repository.DROP_SNAPSHOT_RANGE (31,38);

PL/SQL procedure successfully completed.

To Generate AWR reports using dbms_workload_repository package.

SQL> select snap_id , dbid from dba_hist_snapshot order by snap_id;

SNAP_ID DB Id
---------- -----------
21 2728202513
22 2728202513
23 2728202513
24 2728202513
25 2728202513
26 2728202513
27 2728202513
28 2728202513
29 2728202513
30 2728202513

HTML Report

SQL> spool awrreport.html;

SQL> select output from table
(
dbms_workload_repository.AWR_REPORT_HTML
(2728202513,1,21,30)
);

SQL> spool off;

AWR Text file Report

SQL> spool awrtext.txt

SQL> select output from table
(
dbms_workload_repository.AWR_REPORT_TEXT
(2728202513,1,21,30)
);

SQL> spool off;

Generating ASH Report.

Using SQL script file.

>@?/rdbms/admin/ashrpt.sql

Using Package.


> select output from table
(
dbms_workload_repository.ash_report_html
(
2728202513,1,to_date('20-11-2010 10:00:00','DD-MM-YYYY HH24:MI:SS'),
to_date('20-11-2010 10:10:00','DD-MM-YYYY HH24:MI:SS')
)
);

> select output from table
(
dbms_workload_repository.ash_report_text
(
2728202513,1,to_date('20-11-2010 10:00:00','DD-MM-YYYY HH24:MI:SS'),
to_date('20-11-2010 10:10:00','DD-MM-YYYY HH24:MI:SS')
)
);


====================================================================================

Friday, November 26, 2010

Partitions - Examples.

Here we can see examples of:

1. Range
2. Hash and
3. List

partitions.

1. Range Partition.

This type of partition is very useful when dealing with data that has logical
ranges into which it can be distributed, e.g. by year.


SQL>create table purchase (pur_date date, pur_item varchar2(30), pur_amount number)
Partition by range (pur_date)
(
partition PUR2008 values less than (to_date('01-01-2009','DD-MM-YYYY')) tablespace users,
partition PUR2009 values less than (to_date('01-01-2010','DD-MM-YYYY')) tablespace users,
partition OTHERS values less than (maxvalue) tablespace users
)
/

Table created.

SQL> create index glb_index on purchase (pur_date);

Index created.


SQL> create index loc_index on purchase (pur_date, pur_item) local;

Index created.

SQL> select partition_name, table_name from user_Tab_partitions where table_name = 'PURCHASE';

PARTITION_NAME TABLE_NAME
------------------------------ ------------------------------
OTHERS PURCHASE
PUR2008 PURCHASE
PUR2009 PURCHASE


SQL> select INDEX_NAME,PARTITION_NAME ,STATUS,GLOBAL_STATS from user_ind_partitions;

INDEX_NAME PARTITION_NAME STATUS GLO
------------------------------ ------------------------------ -------- ---
LOC_INDEX PUR2008 USABLE NO
LOC_INDEX PUR2009 USABLE NO
LOC_INDEX OTHERS USABLE NO


SQL> select index_name, index_type, table_name,partitioned from user_indexes where table_name = 'PURCHASE';

INDEX_NAME INDEX_TYPE TABLE_NAME PAR
------------------------------ --------------------------- ------------------------------ ---
GLB_INDEX NORMAL PURCHASE NO
LOC_INDEX NORMAL PURCHASE YES


Selecting From Partitions:-

SQL> select * from purchase partition (pur2008);

PUR_DATE PUR_ITEM PUR_AMOUNT
--------- ------------------------------ ----------
01-JAN-08 Cupboards 50000
01-JAN-08 Cupboards 50000

SQL> select * from purchase partition (pur2009);

PUR_DATE PUR_ITEM PUR_AMOUNT
--------- ------------------------------ ----------
01-OCT-09 Computers 50000
01-NOV-09 Keyboards 2000
01-DEC-09 Mouse 1000


SQL> select * from purchase partition (others);

PUR_DATE PUR_ITEM PUR_AMOUNT
--------- ------------------------------ ----------
01-MAY-10 Speakers 500
01-JUL-10 Monitors 10000


Split Partitions:-

SQL> alter table purchase split partition others at (to_date('01-01-2011','DD-MM-YYYY')) into (partition PUR2010, partition OTHERS);

Table altered.

SQL> select * from purchase partition (pur2010);

PUR_DATE PUR_ITEM PUR_AMOUNT
--------- ------------------------------ ----------
01-MAY-10 Speakers 500
01-JUL-10 Monitors 10000

SQL> select * from purchase partition (others);

no rows selected


To Drop Partitions:-

SQL> alter table purchase drop partition PUR2008 update global indexes;

Table altered.

Note:-

While dropping a partition, if that particular partition contains
any data and the table has any global indexes, then we need to add the
'UPDATE GLOBAL INDEXES' clause; otherwise the global indexes will become unusable.

Alternatively, we can also use one of the methods below so that the index
remains usable.

a)
SQL> alter table purchase drop partition PUR2008;
SQL> alter index glb_index rebuild;

or

b)
SQL> delete from purchase partition (PUR2010);
SQL> alter table purchase drop partition PUR2010;



To Add Partition:-

SQL> alter table purchase add partition PUR2011 values less than (to_date('01-01-2012','DD-MM-YYYY'));
alter table purchase add partition PUR2011 values less than (to_date('01-01-2012','DD-MM-YYYY'))
*
ERROR at line 1:
ORA-14074: partition bound must collate higher than that of the last partition


SQL> alter table purchase drop partition OTHERS update global indexes;

Table altered.


SQL> alter table purchase add partition PUR2011 values less than (to_date('01-01-2012','DD-MM-YYYY'));

Table altered.


Convert A Partition Into A Stand-alone Table Using Exchange Partition.


Syntax:-

ALTER TABLE <table_name> EXCHANGE PARTITION <partition_name>
WITH TABLE <new_table_name> [INCLUDING INDEXES]
[WITH VALIDATION] [EXCEPTIONS INTO <exceptions_table>];


SQL> create table purchase_2008 as select * from purchase where 1 =2;

Table created.

SQL> desc purchase_2008;
Name Null? Type
----------------------------------------- -------- ----------------------------
PUR_DATE DATE
PUR_ITEM VARCHAR2(30)
PUR_AMOUNT NUMBER


SQL> alter table purchase exchange partition PUR2008 with table purchase_2008;

Table altered.

SQL> select * from purchase partition (PUR2008);

no rows selected

SQL> select * from purchase_2008;

PUR_DATE PUR_ITEM PUR_AMOUNT
--------- ------------------------------ ----------
01-JAN-08 Cupboards 50000
01-FEB-08 Cots 50000


Merging the contents of one partition into another partition: once they are
merged, the source partition is removed. Once the merge is completed,
rebuild the indexes.

SQL> select * from purchase partition (PUR2011);

no rows selected

SQL> SELECT * FROM PURCHASE PARTITION (PUR2010);

PUR_DATE PUR_ITEM PUR_AMOUNT
--------- ------------------------------ ----------
01-JAN-08 Cupboards 50000
01-JAN-08 Cupboards 50000
01-OCT-09 Computers 50000
01-NOV-09 Keyboards 2000
01-DEC-09 Mouse 1000
01-MAY-10 Speakers 500
01-JUL-10 Monitors 10000

7 rows selected.


SQL> alter table purchase merge partitions pur2010, pur2011 into partition pur2011;

Table altered.

SQL> select * from purchase partition (PUR2011);

PUR_DATE PUR_ITEM PUR_AMOUNT
--------- ------------------------------ ----------
01-JAN-08 Cupboards 50000
01-JAN-08 Cupboards 50000
01-OCT-09 Computers 50000
01-NOV-09 Keyboards 2000
01-DEC-09 Mouse 1000
01-MAY-10 Speakers 500
01-JUL-10 Monitors 10000

7 rows selected.

SQL> SELECT * FROM PURCHASE PARTITION (PUR2010);
SELECT * FROM PURCHASE PARTITION (PUR2010)
*
ERROR at line 1:
ORA-02149: Specified partition does not exist


SQL> select index_name, status from user_indexes where table_name = 'PURCHASE';

INDEX_NAME STATUS
------------------------------ --------
GLB_INDEX UNUSABLE
LOC_INDEX N/A

SQL> alter index glb_index rebuild;

Index altered.

SQL> select index_name, status from user_indexes where table_name = 'PURCHASE';

INDEX_NAME STATUS
------------------------------ --------
GLB_INDEX VALID
LOC_INDEX N/A


Truncating Partitions:-


SQL> alter table purchase truncate partition pur2011 update indexes;

Table truncated.



Hash Partitions:-

Use hash partitioning if your data does not easily lend itself to range partitioning,
but you would like to partition for performance and manageability reasons.

Hash partitioning provides a method of evenly distributing data across a specified
number of partitions. Rows are mapped into partitions based on a hash value
of the partitioning key.


Eg.

Create a Hash Partitioned Table.


SQL> create table purchase (pur_date date, pur_item varchar2(30))
partition by hash (pur_date)
partitions 4
store in (users, tools, data, data);

Table created.

SQL> SELECT PARTITION_NAME, TABLE_NAME FROM USER_TAB_PARTITIONS;

PARTITION_NAME TABLE_NAME
------------------------------ ------------------------------
SYS_P21 PURCHASE
SYS_P22 PURCHASE
SYS_P23 PURCHASE
SYS_P24 PURCHASE

Add Partition:-

SQL> alter table purchase add partition;

Table altered.

SQL> SELECT PARTITION_NAME, TABLE_NAME FROM USER_TAB_PARTITIONS;

PARTITION_NAME TABLE_NAME
------------------------------ ------------------------------
SYS_P22 PURCHASE
SYS_P23 PURCHASE
SYS_P24 PURCHASE
SYS_P21 PURCHASE
SYS_P25 PURCHASE


Note :- You cannot drop a hash partition.
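To reduce the number of hash partitions use coalesce instead; it removes one partition and redistributes its rows among the remaining partitions. A minimal sketch on the table above:

SQL> alter table purchase coalesce partition;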


List partition:-

Use list partitioning when you require explicit control over how rows map to partitions.
You can specify a list of discrete values for the partitioning column in the
description for each partition.

SQL> create table customers (name varchar2(30), city varchar2(30), addr varchar2(60))
partition by list (city)
(
partition north_india values ('DELHI','BOMBAY','CALCUTTA') tablespace users,
partition south_india values ('CHENNAI','BANGALORE','KOCHIN') tablespace users
);

Table created.


SQL> select partition_name , table_name from user_tab_partitions;

PARTITION_NAME TABLE_NAME
------------------------------ ------------------------------
NORTH_INDIA CUSTOMERS
SOUTH_INDIA CUSTOMERS


Adding additional values to a partition.

SQL> alter table customers modify partition SOUTH_INDIA add values ('MANGALORE');

Table altered.


Add partition:-

SQL> alter table customers add partition others values ('OTHERS');

Table altered.

Drop partiton:-

SQL> alter table customers drop partition others update indexes;

Table altered.

Tuesday, November 23, 2010

Migrating Database to use ASM


1. Create a pfile from the spfile.

SQL> create pfile='/u01/app/oracle/oradata/tomig/inittomig.ora' from spfile;

File created.

2. Shutdown the database cleanly.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

3. Edit the below parameter in pfile

db_create_file_dest='+DATA'
db_recovery_file_dest='+FLASHBACK'

and remove the control_files parameter from the pfile.
Note :- DATA and FLASHBACK are the diskgroup names.


[oracle@rhel ~]$ vi /u01/app/oracle/oradata/tomig/inittomig.ora
[oracle@rhel ~]$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Nov 24 08:54:06 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database (not started)

RMAN> startup nomount pfile = '/u01/app/oracle/oradata/tomig/inittomig.ora';

Oracle instance started

Total System Global Area 285212672 bytes

Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes

--Restore the controlfile from the current controlfile. On issuing this,
new controlfiles will be created in the DATA and FLASHBACK diskgroups.

RMAN> restore controlfile from '/u01/app/oracle/oradata/tomig/control01.ctl';

Starting restore at 24-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: copied control file copy
output filename=+DATA/tomig/controlfile/backup.256.735900923
output filename=+FLASHBACK/tomig/controlfile/backup.256.735900925
Finished restore at 24-NOV-10

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1


--Use the backup as copy RMAN command to back up the database, giving the destination
as the DATA diskgroup.

RMAN> backup as copy database format '+DATA';

Starting backup at 24-NOV-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=/u01/app/oracle/oradata/tomig/system01.dbf
output filename=+DATA/tomig/datafile/system.257.735901005 tag=TAG20101124T085645 recid=1 stamp=735901106
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:45
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=/u01/app/oracle/oradata/tomig/sysaux01.dbf
output filename=+DATA/tomig/datafile/sysaux.258.735901113 tag=TAG20101124T085645 recid=2 stamp=735901221
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:56
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=/u01/app/oracle/oradata/tomig/undo01.dbf
output filename=+DATA/tomig/datafile/undotbs1.259.735901227 tag=TAG20101124T085645 recid=3 stamp=735901243
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=/u01/app/oracle/oradata/tomig/users01.dbf
output filename=+DATA/tomig/datafile/users.260.735901251 tag=TAG20101124T085645 recid=4 stamp=735901268
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile fno=00005 name=/u01/app/oracle/oradata/tomig/test01.dbf
output filename=+DATA/tomig/datafile/test.261.735901277 tag=TAG20101124T085645 recid=5 stamp=735901281
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=+DATA/tomig/controlfile/backup.262.735901283 tag=TAG20101124T085645 recid=6 stamp=735901291
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
Finished backup at 24-NOV-10


--Using the switch command in RMAN, switch the database to the copy
which we created in the earlier step.

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/tomig/datafile/system.257.735901005"
datafile 2 switched to datafile copy "+DATA/tomig/datafile/undotbs1.259.735901227"
datafile 3 switched to datafile copy "+DATA/tomig/datafile/sysaux.258.735901113"
datafile 4 switched to datafile copy "+DATA/tomig/datafile/users.260.735901251"
datafile 5 switched to datafile copy "+DATA/tomig/datafile/test.261.735901277"

--Do a recovery of the database.

RMAN> run
{
set until logseq 0 thread 1;
recover database;
}

executing command: SET until clause

Starting recover at 24-NOV-10
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:03

Finished recover at 24-NOV-10

RMAN> alter database open resetlogs;

database opened

RMAN> exit


Recovery Manager complete.

[oracle@rhel ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Nov 24 09:05:08 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter control;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time integer 7
control_files string +DATA/tomig/controlfile/backup
.256.735900923, +FLASHBACK/tom
ig/controlfile/backup.256.7359
00925
SQL> show parameter spfile;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string

--The database is now running using the pfile, hence create an spfile from the pfile.
Note that the control_files parameter will not exist in the new spfile,
so we need to add it to the spfile.


SQL> create spfile from pfile ='/u01/app/oracle/oradata/tomig/inittomig.ora';

File created.

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


-- We need to issue startup nomount because the control_files parameter does not
currently exist in the spfile.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes

SQL> show parameter spfile;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /u01/app/oracle/product/10.2.0
/db_1/dbs/spfiletomig.ora

Modify the control_files parameter to point to the controlfiles which exist in
the DATA and FLASHBACK diskgroups.


SQL> alter system set
control_files='+DATA/tomig/controlfile/backup.256.735900923','+FLASHBACK/tomig/controlfile/backup.256.735900925' scope=spfile;

System altered.

--Bounce the Database

SQL> shut immediate;
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Database mounted.
Database opened.


SQL> select name from V$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/tomig/datafile/system.257.735901005
+DATA/tomig/datafile/undotbs1.259.735901227
+DATA/tomig/datafile/sysaux.258.735901113
+DATA/tomig/datafile/users.260.735901251
+DATA/tomig/datafile/test.261.735901277


Now create the online redo logfiles in the diskgroups.


SQL> select member from V$logfile;

MEMBER
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/tomig/redo01.rdo
/u01/app/oracle/oradata/tomig/redo02.rdo
/u01/app/oracle/oradata/tomig/redo03.rdo
/u01/app/oracle/oradata/tomig/redo04.rdo

SQL> select * from V$log;


GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
1 1 0 20971520 2 YES UNUSED 0
2 1 1 20971520 2 NO CURRENT 1027524 24-NOV-10

SQL> alter database add logfile group 3 ('+DATA','+DATA') size 20m;

Database altered.

SQL> alter database add logfile group 4 ('+DATA','+DATA') size 20m;

Database altered.

SQL> alter database drop logfile group 1;

Database altered.

SQL> select * from V$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
2 1 1 20971520 2 NO CURRENT 1027524 24-NOV-10
3 1 0 20971520 2 YES UNUSED 0
4 1 0 20971520 2 YES UNUSED 0

SQL> alter system switch logfile;

System altered.


SQL> alter system switch logfile;

System altered.

SQL> select * from V$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
2 1 1 20971520 2 YES INACTIVE 1027524 24-NOV-10
3 1 2 20971520 2 YES INACTIVE 1028015 24-NOV-10
4 1 3 20971520 2 NO CURRENT 1028021 24-NOV-10

SQL> alter database drop logfile group 2;

Database altered.

SQL> select * from V$log;

GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
3 1 2 20971520 2 YES INACTIVE 1028015 24-NOV-10
4 1 3 20971520 2 NO CURRENT 1028021 24-NOV-10



SQL> select member from V$logfile;

MEMBER
------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/tomig/onlinelog/group_3.263.735902317
+DATA/tomig/onlinelog/group_3.264.735902329
+DATA/tomig/onlinelog/group_4.265.735902361
+DATA/tomig/onlinelog/group_4.266.735902373


SQL> select name from V$controlfile;

NAME
------------------------------------------------------------------------------------------------------------------------------------------------------
+DATA/tomig/controlfile/backup.256.735900923
+FLASHBACK/tomig/controlfile/backup.256.735900925

--All the files have been migrated to ASM. We now have only two controlfiles;
we can multiplex to four as below.

Alter the control_files parameter with the existing two files plus two more
locations, +DATA and +FLASHBACK, as below.


SQL> alter system set control_files='+DATA/tomig/controlfile/backup.256.735900923','+FLASHBACK/tomig/controlfile/backup.256.735900925','+DATA','+FLASHBACK'
2 scope=spfile;

System altered.


SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

--When we use ASM the only backup option available is RMAN, so connect to
RMAN to multiplex the files.

[oracle@rhel ~]$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Nov 24 09:25:01 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area 285212672 bytes

Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes

-- Execute the below command so that RMAN will restore this controlfile to
all the locations that are mentioned in the control_files parameter.

RMAN> restore controlfile from '+DATA/tomig/controlfile/backup.256.735900923';

Starting restore at 24-NOV-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

channel ORA_DISK_1: copied control file copy
output filename=+DATA/tomig/controlfile/backup.256.735900923
output filename=+FLASHBACK/tomig/controlfile/backup.256.735900925
output filename=+DATA/tomig/controlfile/backup.267.735902747
output filename=+FLASHBACK/tomig/controlfile/backup.260.735902749
Finished restore at 24-NOV-10

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> alter database open;

database opened

RMAN> exit


Recovery Manager complete.
[oracle@rhel ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Nov 24 09:26:24 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select name from V$controlfile;

NAME
--------------------------------------------------------------------------------
+DATA/tomig/controlfile/backup.256.735900923
+FLASHBACK/tomig/controlfile/backup.256.735900925
+DATA/tomig/controlfile/backup.267.735902747
+FLASHBACK/tomig/controlfile/backup.260.735902749


----End of Migrating a database to Use ASM--------------------------------------

Monday, November 22, 2010

Create ASM Instance - Only for Learning Purpose

The below methods can be used to create an ASM instance; this is only for learning purposes.

In Linux:-

Step 1)

Check whether all the required oracleasm packages are installed; if a package is not installed,
download it from the Oracle site based on your OS and install it on your server.

Required packages are

oracleasm-2.6.9-67.0.0.0.1.ELsmp-2.0.3-2
oracleasm-support-2.0.3-2
oracleasmlib-2.0.4-1.el4


Step 2) Perform the below steps as root user.

Create the physical files using the dd command.

# dd if=/dev/zero of=/u01/app/oracle/oradata/asmdisks/disk1 bs=2M count=1024
# dd if=/dev/zero of=/u01/app/oracle/oradata/asmdisks/disk2 bs=2M count=1024
# dd if=/dev/zero of=/u01/app/oracle/oradata/asmdisks/disk3 bs=2M count=1024

The above commands will create disk1, disk2 and disk3 with 2 GB each, i.e. 2M * 1024,
which is 2048 MB.

Now we have 3 physical files with 2gb each.

Step 3) Map the loop back devices to the files.

# /sbin/losetup /dev/loop1 /u01/app/oracle/oradata/asmdisks/disk1
# /sbin/losetup /dev/loop2 /u01/app/oracle/oradata/asmdisks/disk2
# /sbin/losetup /dev/loop3 /u01/app/oracle/oradata/asmdisks/disk3

Step 4) Configure oracleasm

# /etc/init.d/oracleasm configure

Default user....... : oracle
Default group....... : oinstall
start asm........... : y
.....................: y

Step 5 ) Create ASM Disks

# /etc/init.d/oracleasm createdisk asmdisk1 /dev/loop1
# /etc/init.d/oracleasm createdisk asmdisk2 /dev/loop2
# /etc/init.d/oracleasm createdisk asmdisk3 /dev/loop3


Step 6) Add the below entries in the /etc/rc.local file in order to restore the configuration
every time the server is rebooted.

# vi /etc/rc.local

/sbin/losetup /dev/loop1 /u01/app/oracle/oradata/asmdisks/disk1
/sbin/losetup /dev/loop2 /u01/app/oracle/oradata/asmdisks/disk2
/sbin/losetup /dev/loop3 /u01/app/oracle/oradata/asmdisks/disk3

/etc/init.d/oracleasm scandisks

(The disks were already labeled by createdisk in Step 5, so at boot we only re-create the loop devices and rescan them; re-running createdisk would fail.)

Reboot the server and login as oracle user.


Step 7)

Using DBCA


$ dbca

Using the dbca create the asm instance by selecting the configure
ASM option.


Manual Method


Set up the parameter file for ASM instance.

$ vi $ORACLE_HOME/dbs/init+ASM.ora

instance_type= ASM
db_unique_name=+ASM
large_pool_size=16m
asm_diskstring='ORCL:*'

$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba

SQL> startup
ORA-29701: unable to connect to cluster manager

SQL> exit
Disconnected

$ su - root
password :

# cd /u01/app/oracle/product/10.2.0/db_1/bin
# ./localconfig add
----
---
---
oracle css is installed and running under init(1M)

# su - oracle

$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba

SQL> startup
----
---
ORA-15110: no diskgroups mounted

SQL> select path from V$asm_disk;

ORCL:asmdisk1
ORCL:asmdisk2
ORCL:asmdisk3

SQL> create diskgroup data normal redundancy failgroup fg1 disk 'ORCL:asmdisk1'
failgroup fg2 disk 'ORCL:asmdisk2';

SQL> create diskgroup flashback external redundancy disk 'ORCL:asmdisk3';

SQL> shut immediate;

$ sqlplus / as sysdba

SQL> create spfile from pfile;

SQL> startup
---
---
---
Asm diskgroup not mounted

SQL> alter system set asm_diskgroups = 'DATA','FLASHBACK' scope=spfile;

SQL> shut immediate;

SQL> startup
---
---
ASM diskgroup mounted
-------------------



Creating ASM Instance in windows:-

C:\Documents and Settings>D:\oracle\product\10.2.0\db_1\BIN\localconfig add
Step 1: creating new OCR repository
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'administrator', privgrp ''..
Operation successful.
Step 2: creating new CSS service
successfully created local CSS service
successfully added CSS to home

C:\Documents and Settings\Administrator>asmtool -create d:\asmdisks\disk1 1024

C:\Documents and Settings\Administrator>asmtool -create d:\asmdisks\disk2 1024

C:\Documents and Settings\Administrator>oradim -new -asmsid +ASM -startmode auto
Instance created.

Create a parameter file with the below parameters for the ASM instance and save it in the Oracle home path.

instance_type=ASM
db_unique_name=+ASM
large_pool_size=16m
_asm_allow_only_raw_disks=false
asm_diskstring='D:\asmdisks\*'


C:\Documents and Settings\Administrator>set oracle_sid=+ASM

C:\Documents and Settings\Administrator>sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Nov 23 10:57:49 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 88080384 bytes
Fixed Size 1247444 bytes
Variable Size 61667116 bytes
ASM Cache 25165824 bytes
ORA-15110: no diskgroups mounted

SQL> select path from V$asm_disk;

PATH
--------------------------------------------------------------------------------
D:\ASMDISKS\DISK2
D:\ASMDISKS\DISK1

SQL> create diskgroup data normal redundancy failgroup fg1 disk 'D:\ASMDISKS\DISK1' failgroup fg2 disk 'D:\ASMDISKS\DISK2';

Diskgroup created.

SQL> shut immediate;
ASM diskgroups dismounted
ASM instance shutdown
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

C:\Documents and Settings\Administrator>sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Nov 23 11:01:07 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile;

File created.

SQL> startup ;
ASM instance started

Total System Global Area 88080384 bytes
Fixed Size 1247444 bytes
Variable Size 61667116 bytes
ASM Cache 25165824 bytes
ORA-15110: no diskgroups mounted

SQL> alter system set asm_diskgroups='DATA' scope=spfile;

System altered.

SQL> shut immediate;
ORA-15100: invalid or missing diskgroup name
ASM instance shutdown

SQL> startup ;
ASM instance started

Total System Global Area 88080384 bytes
Fixed Size 1247444 bytes
Variable Size 61667116 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted

SQL> select name, total_mb, free_mb , usable_file_mb from V$asm_diskgroup;

NAME TOTAL_MB FREE_MB USABLE_FILE_MB
------------------------------ ---------- ---------- --------------
DATA 2048 1946 973


End of Creating ASM instance in Linux and Windows Server (Testing/Learning Purpose only)

Sunday, November 21, 2010

Transport Tablespaces

Here we can see different methods that can be used to transport
tablespaces.

1. Using exp/imp
2. Using expdp/impdp
3. Using expdp/impdp with database link.
4. Cross platform transport tablespace.


There are some restrictions on transporting tablespaces, namely (queries to check some of these are sketched after the list):

a) The source and target databases must be running on the same hardware platforms – You cannot take the datafiles from Windows NT and transfer them to HP/UX, for example. A DMP file can be copied from OS to OS – the DATAFILES of a database cannot be; the datafiles are not OS independent like a DMP file is.

Note :- You can use cross-platform transportable tablespaces, a new feature
added in Oracle 10g.

b) The source and target databases must be using the same character set – You cannot take a set of files from a database with a WE8ISO8859P1 character set and attach them to a UTF8 instance, for example.

c) The target database must not have a tablespace by the same name – The tablespace name from the originating database will be used. If the target database already has a tablespace by that name, Oracle cannot attach another with that name.

d) The source and target database must have the same block sizes – You cannot attach the files from a 4 KB blocksize database to an 8 KB blocksize database.

e) You must transport a self-contained set of objects – For example, you cannot transport a tablespace that contains an index without also transporting the tablespace that contains the table, which the index is on.

f) There are some objects that cannot be transported – These include snapshots/materialized views, function-based indexes, domain indexes (such as those produced by interMedia), scoped refs, and advanced queues with more than one recipient.

g) The source database must set the tablespace to be transported in READ ONLY mode for a short period of time – This is the period of time it takes to export the tablespace meta data and copy the datafiles elsewhere.

h) SYS-owned objects cannot be transported – If a tablespace contains any object owned by SYS, the transport will fail. This means that objects like rollback segments, the system tablespace, and so on cannot be transported (which is reasonable, as there would be no reason to transport those objects in any case).
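Restrictions (a), (b) and (d) can be checked up front on both the source and target databases; a sketch using the standard dictionary views:

SQL> select d.platform_name, tp.endian_format
  2  from V$database d, V$transportable_platform tp
  3  where d.platform_id = tp.platform_id;

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

SQL> select value from V$parameter where name = 'db_block_size';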


Using exp/imp:-

Here we are going to transport a tablespace named TEST from the production database
to the test database.

production db name :- tomig
Test db name :- clntomig

$ export ORACLE_SID=tomig
$ sqlplus / as sysdba

Check for objects with dependencies outside the tablespace being transported. I.e., if you are going to transport
a tablespace and objects in it have dependent objects in another tablespace, then you need to transport that tablespace too, or move the dependent objects into the tablespace you are going to transport. For this dependency check we can use the below package.

SQL> exec dbms_tts.transport_set_check ('TEST',TRUE,TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

VIOLATIONS
----------------------------------------------------------------------------------
Index MANZOOR.EMP_PK in tablespace USERS points to table MANZOOR.EMP in tablespace TEST

SQL> alter index manzoor.emp_pk rebuild tablespace test;

Index altered.

SQL> exec dbms_tts.transport_set_check ('TEST',TRUE,TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected

SQL> alter tablespace test read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options

$ exp "'"sys/sys as sysdba"'" file=/u01/test.dmp tablespaces=test
transport_tablespace=y

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TEST ...
. exporting cluster definitions
. exporting table definitions
. . exporting table EMP
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

--Now the tablespace metadata has been exported. Next, copy the datafiles pertaining to that tablespace to the destination database.

$ cp test01.dbf /u01/app/oracle/oradata/clntomig/test01.dbf


Then Import the metadata of tablespace and link the datafiles.

$ export ORACLE_SID=clntomig

$ imp "'"sys/sys as sysdba"'" file=/u01/test.dmp transport_tablespace=y
tablespaces=test datafiles=/u01/app/oracle/oradata/clntomig/test01.dbf

-------------
Import terminated successfully without warnings.

$ sqlplus / as sysdba

SQL> select name from V$database;

NAME
--------
CLNTOMIG

SQL> select name from V$tablespace;

NAME
------
TEST
..
..

Now make the tablespace read write in both databases (tomig and clntomig).

SQL> alter tablespace test read write;


Using expdp/impdp

$ export ORACLE_SID=tomig
$ sqlplus / as sysdba

SQL> exec dbms_tts.transport_set_check ('TEST',TRUE,TRUE);

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected

SQL> alter tablespace test read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options

$ expdp "'"sys/sys as sysdba"'" directory=data_pump_dir dumpfile=trans.dmp
transport_tablespaces=test

--Now copy the datafiles and dump files to the destination database.

$ cp /u01/app/oracle/oradata/tomig/test01.dbf /u01/app/oracle/oradata/clntomig/test01.dbf

$ cp /u01/app/oracle/admin/tomig/dpdump/trans.dmp /u01/app/oracle/admin/clntomig/dpdump/trans.dmp

$ export ORACLE_SID=clntomig
$ impdp "'"sys/sys as sysdba"'" directory=data_pump_dir dumpfile=trans.dmp
transport_datafiles=/u01/app/oracle/oradata/clntomig/test01.dbf

Import: Release 10.2.0.1.0 - Production on Monday, 22 November, 2010 9:25:31

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01": 'sys/******** AS SYSDBA' directory=data_pump_dir dumpfile=usning_expdp.dmp transport_datafiles=/u01/app/oracle/oradata/clntomig/test01.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 09:25:38

..
Now make the tablespace read write in both databases.

SQL> alter tablespace test read write;

-------------------------------------

Using expdp/impdp with dblink and cross-platform transportable tablespace

A dblink is used to access a remote database by connecting to a local database.

remote database : tomig (RHEL 4 Server)
local database : firstdb (Windows Server)

Say we have a db link created in the local database which links to the remote database;
using this dblink we can access the objects present in the remote database by connecting to the local database instead of connecting to the remote database directly.

Here we are going to transport the tablespace which is present in the remote database
using the database link (dblink), and then import it into the local database.


local database (Windows Server) :-

> set oracle_sid=firstdb
> sqlplus / as sysdba

SQL> create public database link link_tomig using 'TOMIG';

SQL> grant dba to manzoor;

SQL> exit;

C:\Documents and Settings\Administrator>expdp manzoor/ahamed directory=data_pump_dir dumpfile=trans_net.dmp network_link=link_tomig transport_tablespaces=test

Export: Release 10.2.0.1.0 - Production on Monday, 22 November, 2010 10:03:42

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "MANZOOR"."SYS_EXPORT_TRANSPORTABLE_01": manzoor/******** directory=data_pump_dir dumpfile=trans_net.dmp network_link=link_tomig transport_tablespaces
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "MANZOOR"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MANZOOR.SYS_EXPORT_TRANSPORTABLE_01 is:
D:\ORACLE\PRODUCT\10.2.0\ADMIN\FIRSTDB\DPDUMP\TRANS_NET.DMP
Job "MANZOOR"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 10:03:54


-- Here using the database link we have exported the metadata of the TEST tablespace which is present in the remote server.

Now copy the database file from the remote server to the local server
using any of the below methods.

1. FTP
2. SCP
3. WINscp (or)
4. dbms_file_transfer package.

In this example we use the dbms_file_transfer package to transfer the
datafile from the remote server to the local server.

In Remote Database execute the below.

$ sqlplus / as sysdba

SQL> create or replace directory file_dir as '/u01/app/oracle/oradata/tomig';

Directory created.

SQL> grant read, write on directory file_dir to manzoor;

Grant succeeded.

Now in your local database.

$ sqlplus / as sysdba

SQL> conn manzoor/ahamed;

SQL> begin
dbms_file_transfer.get_file
(
SOURCE_DIRECTORY_OBJECT => 'FILE_DIR',
SOURCE_FILE_NAME => 'test01.dbf',
SOURCE_DATABASE => 'LINK_TOMIG',
DESTINATION_DIRECTORY_OBJECT => 'DATA_PUMP_DIR',
DESTINATION_FILE_NAME => 'TEST01.DBF'
);
end;
/

PL/SQL procedure successfully completed.



Now we have the dump of the TEST tablespace and its datafile
in our local server (Windows).

Since the datafile comes from a different platform, we need to use the cross-platform transportable tablespace method to accomplish this task.

Check the endian format of your remote and local servers.

SQL> select * from V$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ---------------------------------------- --------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
3 HP-UX (64-bit) Big
4 HP-UX IA (64-bit) Big
5 HP Tru64 UNIX Little
6 AIX-Based Systems (64-bit) Big
7 Microsoft Windows IA (32-bit) Little
8 Microsoft Windows IA (64-bit) Little
9 IBM zSeries Based Linux Big
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little

PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ---------------------------------------- --------------
12 Microsoft Windows 64-bit for AMD Little
13 Linux 64-bit for AMD Little
15 HP Open VMS Little
16 Apple Mac OS Big
17 Solaris Operating System (x86) Little
18 IBM Power Based Linux Big

17 rows selected.

If the endian format of both platforms (remote and local) is the same, then we
can directly plug the datafiles into the database; otherwise we need to convert the
datafiles using RMAN and then import them into the database.

Here

PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ------------------------------ --------------
7 Microsoft Windows IA (32-bit) Little
10 Linux IA (32-bit) Little

As per the above, both servers have the same endian format, so we can directly import
into the database. If the endian format were different, we would perform the below
steps to convert the datafile and then do the import.

> rman target /

RMAN> convert datafile 'D:\oracle\product\10.2.0\admin\firstdb\dpdump\test01.dbf' from platform 'Linux IA (32-bit)' db_file_name_convert '/u01/app/oracle','D:\oracle';

Starting backup at 22-NOV-10
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\FIRSTDB\TEST01.DBF
converted datafile=D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\DATA_D-TOMIG_I-2728202513_TS-TEST_FNO-5_02LTKQBP
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:04
Finished backup at 22-NOV-10

-- Now the converted file has been placed in the Oracle home.

C:\Documents and Settings\Administrator>impdp manzoor/ahamed directory=data_pump_dir dumpfile=TRANS_NET transport_datafiles=D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABA
28202513_TS-TEST_FNO-5_02LTKQBP

Import: Release 10.2.0.1.0 - Production on Monday, 22 November, 2010 10:38:48

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "MANZOOR"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "MANZOOR"."SYS_IMPORT_TRANSPORTABLE_01": manzoor/******** directory=data_pump_dir dumpfile=TRANS_NET transport_datafiles=D:\ORACLE\PRODUCT\10.2.0\DB_1
IG_I-2728202513_TS-TEST_FNO-5_02LTKQBP
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39151: Table "MANZOOR"."EMP" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "MANZOOR"."SYS_IMPORT_TRANSPORTABLE_01" completed with 1 error(s) at 10:38:50

Now copy the datafile from the Oracle home to the database directory, then rename it in the controlfile:

SQL> alter tablespace test read write;

Tablespace altered.

SQL> alter tablespace test offline;

Tablespace altered.

SQL> alter database rename file 'D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\DATA_D-TOMIG_I-2728202513_TS-TEST_FNO-5_02LTKQBP' to 'D:\oracle\product\10.2.0\oradata\firstdb\test01.dbf';

Database altered.

SQL> alter tablespace test online;

Tablespace altered.



-------------------------------------------------------------------------------

Friday, November 19, 2010

All Parameters

■ ANSI Compliance

BLANK_TRIMMING

BLANK_TRIMMING specifies the data assignment semantics of character datatypes.
Values:

■ TRUE
Allows the data assignment of a source character string or variable to a destination
character column or variable even though the source length is longer than the
destination length. In this case, however, the additional length over the destination
length is all blanks. This value complies with the semantics of SQL92 Transitional
Level and above.

■ FALSE
Disallows the data assignment if the source length is longer than the destination
length and reverts to SQL92 Entry Level semantics.
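A hedged illustration of the semantics (the table and values here are hypothetical): with BLANK_TRIMMING=TRUE the insert below succeeds because the overflow is all blanks; with FALSE it fails with a value-too-large error. The parameter is static, so changing it requires scope=spfile and an instance restart.

SQL> alter system set blank_trimming = true scope=spfile;

SQL> create table t (c varchar2(5));
SQL> insert into t values ('abcde ');
-- The literal is 6 characters with a trailing blank: trimmed and
-- accepted when TRUE, rejected when FALSE.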

■ Backup and Restore

BACKUP_TAPE_IO_SLAVES

BACKUP_TAPE_IO_SLAVES specifies whether I/O server processes (also called
slaves) are used by Recovery Manager to back up, copy, or restore data to tape. When
the value is set to true, Oracle uses an I/O server process to write to or read from a tape device. When the value is set to false (the default), Oracle does not use I/O
server process for backups. Instead, the shadow process engaged in the backup
accesses the tape device.

Note: You cannot perform duplexed backups unless you enable this
parameter. Otherwise, Oracle returns an error. When this parameter is
enabled, Recovery Manager will configure as many server processes
as needed for the number of backup copies requested.


RECYCLEBIN


TAPE_ASYNCH_IO

TAPE_ASYNCH_IO controls whether I/O to sequential devices (for example, backup or
restore of Oracle data to or from tape) is asynchronous—that is, whether parallel
server processes can overlap I/O requests with CPU processing during table scans. If
your platform supports asynchronous I/O to sequential devices, Oracle recommends
that you leave this parameter set to its default. However, if the asynchronous I/O
implementation is not stable, you can set TAPE_ASYNCH_IO to false to disable
asynchronous I/O. If your platform does not support asynchronous I/O to sequential
devices, this parameter has no effect.
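If the asynchronous implementation does need to be disabled, this is a static parameter; an illustrative change:

SQL> alter system set tape_asynch_io = false scope=spfile;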


■ BFILEs

SESSION_MAX_OPEN_FILES

SESSION_MAX_OPEN_FILES specifies the maximum number of BFILEs that can be
opened in any session. Once this number is reached, subsequent attempts to open
more files in the session by using DBMS_LOB.FILEOPEN() or OCILobFileOpen()
will fail. The maximum value for this parameter depends on the equivalent parameter
defined for the underlying operating system.
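For example (the value is only illustrative; this is a static parameter, so it takes effect after a restart):

SQL> alter system set session_max_open_files = 20 scope=spfile;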


■ Buffer Cache and I/O

DB_nK_CACHE_SIZE
DB_BLOCK_BUFFERS
DB_BLOCK_SIZE
DB_CACHE_ADVICE
DB_CACHE_SIZE

DB_FILE_MULTIBLOCK_READ_COUNT

DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to
minimize I/O during table scans. It specifies the maximum number of blocks read in
one I/O operation during a sequential scan. The total number of I/Os needed to
perform a full table scan depends on such factors as the size of the table, the
multiblock read count, and whether parallel execution is being utilized for the
operation.
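For example, the current value can be inspected and changed online (16 is only an illustrative value; the effective upper bound is the maximum OS I/O size divided by the database block size):

SQL> show parameter db_file_multiblock_read_count

SQL> alter system set db_file_multiblock_read_count = 16;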


DB_KEEP_CACHE_SIZE

DB_RECYCLE_CACHE_SIZE

DB_WRITER_PROCESSES

Default value : 1 or CPU_COUNT / 8, whichever is greater
Range of values : 1 to 20


DB_WRITER_PROCESSES is useful for systems that modify data heavily. It specifies
the initial number of database writer processes for an instance.



DBWR_IO_SLAVES

DBWR_IO_SLAVES is relevant only on systems with only one database writer process
(DBW0). It specifies the number of I/O server processes used by the DBW0 process. The
DBW0 process and its server processes always write to disk. By default, the value is 0 and I/O server processes are not used.

If you set DBWR_IO_SLAVES to a nonzero value, the number of I/O server processes
used by the ARCH and LGWR processes is set to 4. However, the number of I/O server
processes used by Recovery Manager is set to 4 only if asynchronous I/O is disabled
(either your platform does not support asynchronous I/O or disk_asynch_io is set
to false).

Typically, I/O server processes are used to simulate asynchronous I/O on platforms
that do not support asynchronous I/O or that implement it inefficiently. However, you
can use I/O server processes even when asynchronous I/O is being used. In that case
the I/O server processes will use asynchronous I/O.
I/O server processes are also useful in database environments with very large I/O
throughput, even if asynchronous I/O is enabled.



DISK_ASYNCH_IO

DISK_ASYNCH_IO controls whether I/O to datafiles, control files, and logfiles is
asynchronous (that is, whether parallel server processes can overlap I/O requests with CPU processing during table scans). If your platform supports asynchronous I/O to disk, Oracle recommends that you leave this parameter set to its default value.
However, if the asynchronous I/O implementation is not stable, you can set this
parameter to false to disable asynchronous I/O. If your platform does not support
asynchronous I/O to disk, this parameter has no effect.

If you set DISK_ASYNCH_IO to false, then you should also set DBWR_IO_SLAVES to
a value other than its default of zero in order to simulate asynchronous I/O.
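A sketch of that combination (both parameters are static, so a restart is needed; the slave count is illustrative):

SQL> alter system set disk_asynch_io = false scope=spfile;
SQL> alter system set dbwr_io_slaves = 4 scope=spfile;
SQL> shutdown immediate
SQL> startup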

FILESYSTEMIO_OPTIONS

Syntax FILESYSTEMIO_OPTIONS = { none | setall | directIO | asynch }

FILESYSTEMIO_OPTIONS specifies I/O operations for file system files.
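For example, to request both direct and asynchronous I/O where the platform supports them (a static parameter, so a restart is required):

SQL> alter system set filesystemio_options = setall scope=spfile;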


READ_ONLY_OPEN_DELAYED

READ_ONLY_OPEN_DELAYED determines when datafiles in read-only tablespaces are
accessed.

Values:

■ true
The datafiles are accessed for the first time only when an attempt is made to read
data stored within them.

■ false
The datafiles are accessed at database open time.


USE_INDIRECT_DATA_BUFFERS

■ Cursors and Library Cache

CURSOR_SHARING

CURSOR_SHARING determines what kind of SQL statements can share the same
cursors.

Values:

■ FORCE
Forces statements that may differ in some literals, but are otherwise identical, to
share a cursor, unless the literals affect the meaning of the statement.

■ SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to
share a cursor, unless the literals affect either the meaning of the statement or the
degree to which the plan is optimized.

■ EXACT
Only allows statements with identical text to share the same cursor.

Note: Forcing cursor sharing among similar (but not identical)
statements can have unexpected results in some DSS applications, or
applications that use stored outlines.
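
For example, a sketch using a hypothetical EMP table (with FORCE, the two queries
below can share one cursor because only the literal differs):

ALTER SESSION SET CURSOR_SHARING = FORCE;
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;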



CURSOR_SPACE_FOR_TIME

CURSOR_SPACE_FOR_TIME lets you use more space for cursors in order to save time.
It affects both the shared SQL area and the client's private SQL area.
Values:

■ TRUE
Shared SQL areas are kept pinned in the shared pool. As a result, shared SQL areas
are not aged out of the pool as long as an open cursor references them. Because
each active cursor's SQL area is present in memory, execution is faster. However,
the shared SQL areas never leave memory while they are in use. Therefore, you
should set this parameter to TRUE only when the shared pool is large enough to
hold all open cursors simultaneously.
In addition, a setting of TRUE retains the private SQL area allocated for each cursor
between executions instead of discarding it after cursor execution, saving cursor
allocation and initialization time.

■ FALSE
Shared SQL areas can be deallocated from the library cache to make room for new
SQL statements.


OPEN_CURSORS

OPEN_CURSORS specifies the maximum number of open cursors (handles to private
SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors.

It is important to set the value of OPEN_CURSORS high enough to prevent your
application from running out of open cursors. The number will vary from one
application to another. Assuming that a session does not open the number of cursors
specified by OPEN_CURSORS, there is no added overhead to setting this value higher
than actually needed.
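
To see how close sessions are to the limit, one possible query against V$SESSTAT
and V$STATNAME (a sketch; compare the values against your OPEN_CURSORS setting):

SQL> SELECT s.sid, s.value open_cursors
     FROM v$sesstat s, v$statname n
     WHERE s.statistic# = n.statistic#
     AND n.name = 'opened cursors current'
     ORDER BY s.value DESC;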


SESSION_CACHED_CURSORS

SESSION_CACHED_CURSORS specifies the number of session cursors to cache.
Repeated parse calls of the same SQL statement cause the session cursor for that
statement to be moved into the session cursor cache. Subsequent parse calls will find
the cursor in the cache and do not need to reopen the cursor. Oracle uses a least
recently used algorithm to remove entries in the session cursor cache to make room for new entries when needed.

This parameter also constrains the size of the PL/SQL cursor cache which PL/SQL
uses to avoid having to reparse as statements are re-executed by a user.
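
To judge whether the cache is effective, one possible check is to compare cache hits
against total parses (a sketch; interpret the ratio in the context of your workload):

SQL> SELECT name, value FROM v$sysstat
     WHERE name IN ('session cursor cache hits', 'parse count (total)');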


■ Database/Instance Identification

DB_DOMAIN

In a distributed database system, DB_DOMAIN specifies the logical location of the
database within the network structure. You should set this parameter if this database is or ever will be part of a distributed system. The value consists of the extension components of a global database name, consisting of valid identifiers (any
alphanumeric ASCII characters), separated by periods. Oracle recommends that you
specify DB_DOMAIN as a unique string for all databases in a domain.
This parameter allows one department to create a database without worrying that it
might have the same name as a database created by another department. If one sales
department's DB_DOMAIN is JAPAN.ACME.COM, then their SALES database
(SALES.JAPAN.ACME.COM) is uniquely distinguished from another database with
DB_NAME = SALES but with DB_DOMAIN = US.ACME.COM.

If you omit the domains from the name of a database link, Oracle expands the name by
qualifying the database with the domain of your local database as it currently exists in the data dictionary, and then stores the link name in the data dictionary. The characters valid in a database domain name are: alphanumeric characters, underscore (_), and number sign (#).


DB_NAME
INSTANCE_NAME

■ Diagnostics and Statistics

BACKGROUND_CORE_DUMP
BACKGROUND_DUMP_DEST
CORE_DUMP_DEST
DB_BLOCK_CHECKING

DB_BLOCK_CHECKING controls whether or not Oracle performs block checking for
database blocks. The checking that is performed depends on the value you supply, as
follows:

■ OFF - no block checking is performed for blocks in the user tablespaces. However,
semantic block checking for SYSTEM tablespace blocks is always turned on.

■ LOW - basic block header checks are performed after block contents change in
memory (for example, after UPDATE or INSERT statements, on-disk reads, or
inter-instance block transfers in RAC)

■ MEDIUM - all LOW checks are performed, as well as semantic block checking for all
non-index-organized table blocks

■ FULL - all LOW and MEDIUM checks are performed, as well as semantic checks for
index blocks (that is, blocks of subordinate objects that can actually be dropped
and reconstructed when faced with corruption)

Oracle checks a block by going through the data in the block, making sure it is
logically self-consistent. Block checking can often prevent memory and data
corruption. Block checking typically causes 1% to 10% overhead, depending on
workload and the parameter value. The more updates or inserts in a workload, the
more expensive it is to turn on block checking. You should set DB_BLOCK_CHECKING
to FULL if the performance overhead is acceptable. For backward compatibility, the
use of FALSE (implying OFF) and TRUE (implying FULL) is preserved.
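
For example (DB_BLOCK_CHECKING is dynamic, so no restart is needed; FULL is shown
only to match the recommendation above):

SQL> ALTER SYSTEM SET DB_BLOCK_CHECKING = FULL;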


DB_BLOCK_CHECKSUM

DB_BLOCK_CHECKSUM determines whether DBWn and the direct loader will calculate a
checksum (a number calculated from all the bytes stored in the block) and store it in
the cache header of every data block when writing it to disk. Checksums are verified
when a block is read, but only if this parameter is TYPICAL or FULL and the last
write of the block stored a checksum. In FULL mode, Oracle also verifies the checksum
before applying a change from an update or delete statement and recomputes it after
the change is applied. In addition, Oracle gives every log block a checksum before
writing it to the current log.

If this parameter is set to OFF, DBWn calculates checksums only for the SYSTEM
tablespace, but not for user tablespaces. Checksums allow Oracle to detect corruption
caused by underlying disks, storage systems, or I/O systems. If set to FULL,
DB_BLOCK_CHECKSUM also catches in-memory corruptions and stops them from making it
to disk. Turning on this feature in TYPICAL mode causes only an additional 1% to 2%
overhead. In FULL mode it causes 4% to 5% overhead. Oracle recommends that you set
DB_BLOCK_CHECKSUM to TYPICAL. For backward compatibility, the use of TRUE (implying
TYPICAL) and FALSE (implying OFF) is preserved.


EVENT

EVENT is a parameter used only to debug the system. Do not alter the value of this
parameter except under the supervision of Oracle Support Services staff.


MAX_DUMP_FILE_SIZE

MAX_DUMP_FILE_SIZE specifies the maximum size of trace files (excluding the alert
file). Change this limit if you are concerned that trace files may use too much space.


SHADOW_CORE_DUMP
STATISTICS_LEVEL


STATISTICS_LEVEL specifies the level of collection for database and operating
system statistics. The Oracle Database collects these statistics for a variety of purposes, including making self-management decisions.

The default setting of TYPICAL ensures collection of all major statistics required for database self-management functionality and provides best overall performance. The
default value should be adequate for most environments.

When the STATISTICS_LEVEL parameter is set to ALL, additional statistics are added
to the set of statistics collected with the TYPICAL setting. The additional statistics are timed OS statistics and plan execution statistics.
Setting the STATISTICS_LEVEL parameter to BASIC disables the collection of many
of the important statistics required by Oracle Database features and functionality,
including:
■ Automatic Workload Repository (AWR) Snapshots
■ Automatic Database Diagnostic Monitor (ADDM)
■ All server-generated alerts
■ Automatic SGA Memory Management
■ Automatic optimizer statistics collection
■ Object level statistics
■ End to End Application Tracing (V$CLIENT_STATS)
■ Database time distribution statistics (V$SESS_TIME_MODEL
and V$SYS_TIME_MODEL)
■ Service level statistics
■ Buffer cache advisory
■ MTTR advisory
■ Shared pool sizing advisory
■ Segment level statistics
■ PGA Target advisory
■ Timed statistics
■ Monitoring of statistics

Note: Oracle strongly recommends that you do not disable these
important features and functionality.
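
You can check which statistics and advisories are affected by the current setting
with V$STATISTICS_LEVEL, for example:

SQL> SELECT statistics_name, system_status, activation_level
     FROM v$statistics_level;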


TIMED_OS_STATISTICS

TIMED_OS_STATISTICS specifies (in seconds) the interval at which Oracle collects
operating system statistics when a request is made from the client to the server or
when a request completes.

■ On dedicated servers, Oracle collects operating system statistics at user logon and
after each subsequent client invocation through the OCI into the Oracle server as a
remote procedure call message.
■ On shared servers, Oracle collects statistics when client calls to Oracle are
processed.


A value of zero specifies that operating system statistics are not gathered. To collect statistics, set a value meaningful for your application and site needs.

Note: Gathering operating system statistics is very expensive. Oracle
recommends that you set this parameter in an ALTER SYSTEM statement rather than in the initialization parameter file, and that you reset the value to zero as soon as the needed statistics have been gathered.


TIMED_STATISTICS

TIMED_STATISTICS specifies whether or not statistics related to time are collected.

Values:
■ true
The statistics are collected and stored in trace files or displayed in the
V$SESSTAT and V$SYSSTAT dynamic performance views.
■ false
The value of all time-related statistics is set to zero. This setting lets Oracle avoid
the overhead of requesting the time from the operating system. Normally,
TIMED_STATISTICS should be false.

On some systems with very fast timer access, Oracle might enable timing even if this
parameter is set to false. On these systems, setting the parameter to true can
sometimes produce more accurate statistics for long-running operations.


TRACE_ENABLED

TRACE_ENABLED controls tracing of the execution history, or code path, of Oracle.
Oracle Support Services uses this information for debugging.

When TRACE_ENABLED is set to true, Oracle records information in specific files
when errors occur. Oracle records this information for all instances, even if only one instance terminates. This allows Oracle to retain diagnostics for an entire cluster. Although the overhead incurred from this processing is not excessive, you can improve performance by setting TRACE_ENABLED to false. You might do this, for
example, to meet high-end benchmark requirements. However, if you leave this
parameter set to false, you may lose valuable diagnostic information. Therefore,
always set TRACE_ENABLED to true to trace system problems and to reduce
diagnostic efforts in the event of unexplained instance failures.


TRACEFILE_IDENTIFIER

TRACEFILE_IDENTIFIER specifies a custom identifier that becomes part of the
Oracle Trace file name. Such a custom identifier is used to identify a trace file simply from its name and without having to open it or view its contents.
Each time this parameter is dynamically modified, the next trace dump will be written
to a trace file which has the new parameter value embedded in its name. Trace file
continuity information is automatically added to both the old and new trace files to
indicate that these trace files belong to the same process.
This parameter can only be used to change the name of the foreground process' trace
file; the background processes continue to have their trace files named in the regular format. For foreground processes, the TRACEID column of the V$PROCESS view
contains the current value of the TRACEFILE_IDENTIFIER parameter. When this
parameter value is set, the trace file name has the following format:
sid_ora_pid_traceid.trc
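
For example (the identifier MYTEST is illustrative; the process ID in the resulting
file name will vary):

SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MYTEST';

A subsequent trace dump from this session would then be written to a file with a
name such as tomig_ora_12345_MYTEST.trc.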



USER_DUMP_DEST

■ Distributed, Replication

COMMIT_POINT_STRENGTH
DISTRIBUTED_LOCK_TIMEOUT

DISTRIBUTED_LOCK_TIMEOUT specifies the amount of time (in seconds) for
distributed transactions to wait for locked resources.

GLOBAL_NAMES

GLOBAL_NAMES specifies whether a database link is required to have the same name
as the database to which it connects. If the value of GLOBAL_NAMES is false, then no check is performed. If you use or plan to use distributed processing, then Oracle recommends that you set this parameter to true to ensure the use of consistent naming conventions for databases and links in a networked environment.


HS_AUTOREGISTER

HS_AUTOREGISTER enables or disables automatic self-registration of Heterogeneous
Services (HS) agents. When enabled, information is uploaded into the server's data
dictionary to describe a previously unknown agent class or a new agent version.


Oracle recommends that you set this parameter to true. Oracle incurs less overhead
when establishing subsequent connections through the same agent if self-registered
information is available in the server's data dictionary.


OPEN_LINKS

OPEN_LINKS specifies the maximum number of concurrent open connections to
remote databases in one session. These connections include database links, as well as
external procedures and cartridges, each of which uses a separate process.
Oracle counts one open link for the following:

■ For each user that references a public or private database link

■ For each external procedure or cartridge connection when it is executed for the
first time

Both types of connections close when the session ends. You can also close a database
link connection explicitly by issuing an ALTER SESSION CLOSE DATABASE LINK
statement.

You should set this parameter to allow for the external procedure and cartridge
connections expected during the session plus the number of databases referred to in
typical distributed transactions (that is, a single SQL statement that references
multiple databases), so that all the databases can be open to execute the statement.
For example, if queries alternately access databases A, B, and C, and OPEN_LINKS is
set to 2, time will be lost waiting while one connection is broken and another made.
Increase the value if many different databases are accessed over time.


OPEN_LINKS_PER_INSTANCE

OPEN_LINKS_PER_INSTANCE specifies the maximum number of migratable open
connections globally for each database instance. XA transactions use migratable open
connections so that the connections are cached after a transaction is committed.
Another transaction can use the connection, provided the user who created the
connection is the same as the user who owns the transaction.
OPEN_LINKS_PER_INSTANCE is different from OPEN_LINKS, which indicates the
number of connections from a session. The OPEN_LINKS parameter is not applicable
to XA applications.


REPLICATION_DEPENDENCY_TRACKING

REPLICATION_DEPENDENCY_TRACKING enables or disables dependency tracking for
read/write operations to the database. Dependency tracking is essential for
propagating changes in a replicated environment in parallel.

Values:

■ TRUE
Enables dependency tracking.

■ FALSE
Allows read/write operations to the database to run faster, but does not produce
dependency information for Oracle to perform parallel propagation.

Note: Do not specify this value unless you are sure that your application will not perform any read/write operations to the replicated tables.

■ File Locations, Names, and Sizes

AUDIT_FILE_DEST
BACKGROUND_CORE_DUMP
BACKGROUND_DUMP_DEST
CONTROL_FILES
CORE_DUMP_DEST
DB_CREATE_FILE_DEST

DB_CREATE_FILE_DEST specifies the default location for Oracle-managed datafiles.
This location is also used as the default location for Oracle-managed control files and online redo logs if none of the DB_CREATE_ONLINE_LOG_DEST_n initialization
parameters are specified.

DB_CREATE_ONLINE_LOG_DEST_n

Syntax DB_CREATE_ONLINE_LOG_DEST_[1 | 2 | 3 | 4 | 5] =
directory | disk group

DB_CREATE_ONLINE_LOG_DEST_n (where n = 1, 2, 3, ... 5) specifies the default
location for Oracle-managed control files and online redo logs. If more than one DB_
CREATE_ONLINE_LOG_DEST_n parameter is specified, then the control file or online
redo log is multiplexed across the locations of the other DB_CREATE_ONLINE_LOG_
DEST_n parameters. One member of each online redo log is created in each location,
and one control file is created in each location. Specifying at least two parameters provides greater fault tolerance for the control files and online redo logs if one of the locations should fail.
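
A sketch of an Oracle-managed files setup (the directory paths are hypothetical;
each must already exist and be writable by Oracle):

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata';
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata';
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata';

CREATE TABLESPACE omf_ts;
-- the datafile is created and named automatically under /u01/oradata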


DB_FILES
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
FILE_MAPPING

FILE_MAPPING enables or disables file mapping. The FMON background process will
be started to manage the mapping information when file mapping is enabled.


IFILE

Use IFILE to embed another parameter file within the current parameter file. For
example:

IFILE = COMMON.ORA

You can have up to three levels of nesting. In this example, the file COMMON.ORA could
contain a second IFILE parameter for the file COMMON2.ORA, which could contain a
third IFILE parameter for the file GCPARMS.ORA. You can also include multiple
parameter files in one parameter file by listing IFILE several times with different
values:

IFILE = DBPARMS.ORA
IFILE = GCPARMS.ORA
IFILE = LOGPARMS.ORA


LOG_ARCHIVE_DEST_n

SPFILE

The value of this parameter is the name of the current server parameter file (SPFILE)
in use. This parameter can be defined in a client side PFILE to indicate the name of the server parameter file to use. When the default server parameter file is used by the server, the value of SPFILE is internally set by the server.

The SPFILE resides in the ORACLE_HOME/dbs directory; however, users can place it
anywhere on their machine as long as it is specified in an initialization parameter file.

■ Globalization

NLS_CALENDAR

NLS_CALENDAR specifies which calendar system Oracle uses. It can have one of the
following values:
■ Arabic Hijrah
■ English Hijrah
■ Gregorian
■ Japanese Imperial
■ Persian
■ ROC Official (Republic of China)
■ Thai Buddha
For example, suppose NLS_CALENDAR is set to "Japanese Imperial" and the date format
is "E YY-MM-DD" ("E" is the date format element for the abbreviated era name). If the
date is May 15, 1997, then SYSDATE is displayed as follows:

SELECT SYSDATE FROM DUAL;

SYSDATE
--------
H 09-05-15


NLS_COMP

Syntax NLS_COMP = { BINARY | LINGUISTIC | ANSI }

Values:
■ BINARY
Normally, comparisons in the WHERE clause and in PL/SQL blocks are binary unless
you specify the NLSSORT function.
■ LINGUISTIC
Comparisons for all SQL operations in the WHERE clause and in PL/SQL blocks
should use the linguistic sort specified in the NLS_SORT parameter. To improve
performance, you can also define a linguistic index on the column for which
you want linguistic comparisons.
■ ANSI
A setting of ANSI is for backwards compatibility; in general, you should set
NLS_COMP to LINGUISTIC.


NLS_CURRENCY

NLS_CURRENCY specifies the string to use as the local currency symbol for the L
number format element. The default value of this parameter is determined by
NLS_TERRITORY.


NLS_DATE_FORMAT

NLS_DATE_FORMAT specifies the default date format to use with the TO_CHAR and
TO_DATE functions. The default value of this parameter is determined by
NLS_TERRITORY.

The value of this parameter can be any valid date format mask, and the value must be
surrounded by double quotation marks. For example:
NLS_DATE_FORMAT = "MM/DD/YYYY"


NLS_DATE_LANGUAGE

NLS_DATE_LANGUAGE specifies the language to use for the spelling of day and month
names and date abbreviations (a.m., p.m., AD, BC) returned by the TO_DATE and
TO_CHAR functions.


NLS_DUAL_CURRENCY

NLS_DUAL_CURRENCY specifies the dual currency symbol (such as "Euro") for the
territory. The default is the dual currency symbol defined in the territory of your
current language environment.


NLS_ISO_CURRENCY

NLS_ISO_CURRENCY specifies the string to use as the international currency symbol
for the C number format element. Local currency symbols can be ambiguous. For example, a dollar sign ($) can refer to U.S. dollars or Australian dollars. ISO Specification 4217 1987-07-15 defines unique "international" currency symbols for the currencies of specific territories or countries.


NLS_LANGUAGE

NLS_LANGUAGE specifies the default language of the database. This language is used
for messages, day and month names, symbols for AD, BC, a.m., and p.m., and the
default sorting mechanism. This parameter also determines the default values of the
parameters NLS_DATE_LANGUAGE and NLS_SORT.


NLS_LENGTH_SEMANTICS

NLS_LENGTH_SEMANTICS enables you to create CHAR and VARCHAR2 columns using
either byte or character length semantics. Existing columns are not affected.
NCHAR, NVARCHAR2, CLOB, and NCLOB columns are always character-based. You may
be required to use byte semantics in order to maintain compatibility with existing
applications.

NLS_LENGTH_SEMANTICS does not apply to tables in SYS and SYSTEM. The data
dictionary always uses byte semantics.

NLS_NCHAR_CONV_EXCP

NLS_NCHAR_CONV_EXCP determines whether an error is reported when there is data
loss during an implicit or explicit character type conversion between
NCHAR/NVARCHAR and CHAR/VARCHAR2. The default value results in no error
being reported.


NLS_NUMERIC_CHARACTERS

NLS_NUMERIC_CHARACTERS specifies the characters to use as the group separator
and decimal character. It overrides those characters defined implicitly by
NLS_TERRITORY. The group separator separates integer groups (that is, thousands,
millions, billions, and so on). The decimal separates the integer portion of a number
from the decimal portion. You can specify any character as the decimal or group
separator. The two characters specified must be single-byte and must be different
from each other. The characters cannot be any numeric character or any of the
following characters: plus (+), minus sign (-), less than sign (<), greater than
sign (>). Either character can be a space.

For example, if you wish to specify a comma as the decimal character and a space as
the group separator, you would set this parameter as follows:
NLS_NUMERIC_CHARACTERS = ", "


NLS_SORT

NLS_SORT specifies the collating sequence for ORDER BY queries.

■ If the value is BINARY, then the collating sequence for ORDER BY queries is based
on the numeric value of characters (a binary sort that requires less system
overhead).

■ If the value is a named linguistic sort, sorting is based on the order of the defined linguistic sort. Most (but not all) languages supported by the NLS_LANGUAGE
parameter also support a linguistic sort with the same name.


NLS_TERRITORY

NLS_TERRITORY specifies the name of the territory whose conventions are to be
followed for day and week numbering.

This parameter also establishes the default date format, the default decimal character and group separator, and the default ISO and local currency symbols.


NLS_TIMESTAMP_FORMAT

NLS_TIMESTAMP_FORMAT defines the default timestamp format to use with the
TO_CHAR and TO_TIMESTAMP functions.

The value must be surrounded by quotation marks as follows:

NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH:MI:SS.FF'

You can specify the value of NLS_TIMESTAMP_FORMAT by setting it in the
initialization parameter file. You can specify its value for a client as a client
environment variable. You can also alter the value of NLS_TIMESTAMP_FORMAT by
changing its value in the initialization parameter file and then restarting the
instance. To alter the value during a session, use the ALTER SESSION SET statement.


NLS_TIMESTAMP_TZ_FORMAT

NLS_TIMESTAMP_TZ_FORMAT defines the default timestamp with time zone format
to use with the TO_CHAR and TO_TIMESTAMP_TZ functions.

The value must be surrounded by quotation marks as follows:

NLS_TIMESTAMP_TZ_FORMAT = 'YYYY-MM-DD HH:MI:SS.FF TZH:TZM'

You can specify the value of NLS_TIMESTAMP_TZ_FORMAT by setting it in the
initialization parameter file. You can specify its value for a client as a client
environment variable.

You can also alter the value of NLS_TIMESTAMP_TZ_FORMAT by changing its value in
the initialization parameter file and then restarting the instance. To alter the
value during a session, use the ALTER SESSION SET statement.



■ Java

JAVA_MAX_SESSIONSPACE_SIZE
JAVA_POOL_SIZE
JAVA_SOFT_SESSIONSPACE_LIMIT

■ Job Queues

JOB_QUEUE_PROCESSES

JOB_QUEUE_PROCESSES specifies the maximum number of processes that can be
created for the execution of jobs. It specifies the number of job queue processes per
instance (J000, ... J999). Replication uses job queues for data refreshes. Advanced
queuing uses job queues for message propagation. You can create user job requests
through the DBMS_JOB package.

Some job queue requests are created automatically. An example is refresh support for
materialized views. If you wish to have your materialized views updated
automatically, you must set JOB_QUEUE_PROCESSES to a value of one or higher.
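
For example, a sketch that enables job queue processes and submits a nightly job
through DBMS_JOB (the value 10, the SCOTT schema, and the interval are illustrative):

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10;

declare
l_job binary_integer;
begin
dbms_job.submit
(
job => l_job,
what => 'dbms_stats.gather_schema_stats(''SCOTT'');',
next_date => sysdate,
interval => 'SYSDATE + 1'
);
commit;
end;
/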



■ License Limits

LICENSE_MAX_SESSIONS

LICENSE_MAX_SESSIONS specifies the maximum number of concurrent user sessions
allowed. When this limit is reached, only users with the RESTRICTED SESSION
privilege can connect to the database. Users who are not able to connect receive a
warning message indicating that the system has reached maximum capacity.
A zero value indicates that concurrent usage (session) licensing is not enforced. If
you set this parameter to a nonzero number, you might also want to set
LICENSE_SESSIONS_WARNING.

Do not enable both concurrent usage licensing and user licensing. Set either
LICENSE_MAX_SESSIONS or LICENSE_MAX_USERS to zero.


LICENSE_MAX_USERS

LICENSE_MAX_USERS specifies the maximum number of users you can create in the
database. When you reach this limit, you cannot create more users. You can, however,
increase the limit.

Do not enable both concurrent usage (session) licensing and user licensing. Set either LICENSE_MAX_SESSIONS or LICENSE_MAX_USERS to zero.


LICENSE_SESSIONS_WARNING

LICENSE_SESSIONS_WARNING specifies a warning limit on the number of concurrent
user sessions. When this limit is reached, additional users can connect, but Oracle
writes a message in the alert file for each new connection. Users with RESTRICTED
SESSION privilege who connect after the limit is reached receive a warning message
stating that the system is nearing its maximum capacity.



■ Memory

LARGE_POOL_SIZE

LARGE_POOL_SIZE specifies (in bytes) the size of the large pool allocation heap. The
large pool allocation heap is used in shared server systems for session memory, by
parallel execution for message buffers, and by backup processes for disk I/O buffers.
Parallel execution allocates buffers out of the large pool only when SGA_TARGET is set.

Caution: When Oracle derives a default value, it adds 250K for each session for the shared server if DISPATCHERS is configured. The final derived value also includes a port-specific amount of memory for backup I/O buffers. The total derived default value can either be too large to allocate or can cause performance problems. In that case, set LARGE_POOL_SIZE to a number sufficiently small so that the database can start.

SHARED_POOL_RESERVED_SIZE

SHARED_POOL_RESERVED_SIZE specifies (in bytes) the shared pool space that is
reserved for large contiguous requests for shared pool memory.

You can use this parameter to avoid performance degradation in the shared pool in
situations where pool fragmentation forces Oracle to search for and free chunks of
unused pool to satisfy the current request.


SHARED_POOL_SIZE

Default value:
If SGA_TARGET is set: if the parameter is not specified, then the default is 0
(internally determined by the Oracle Database); if the parameter is specified,
then the user-specified value indicates a minimum value for the memory pool.
If SGA_TARGET is not set (32-bit platforms): 32M, rounded up to the nearest
granule size.
If SGA_TARGET is not set (64-bit platforms): 84M, rounded up to the nearest
granule size.

SHARED_POOL_SIZE specifies (in bytes) the size of the shared pool. The shared pool
contains shared cursors, stored procedures, control structures, and other structures. If you set PARALLEL_AUTOMATIC_TUNING to false, then Oracle also allocates parallel execution message buffers from the shared pool. Larger values improve performance in multi-user systems. Smaller values use less memory.
You can monitor utilization of the shared pool by querying the view V$SGASTAT.
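For example:

SQL> SELECT pool, SUM(bytes)/1024/1024 mb
     FROM v$sgastat
     GROUP BY pool;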


■ Miscellaneous

AQ_TM_PROCESSES

AQ_TM_PROCESSES enables time monitoring of queue messages. The times can be
used in messages that specify delay and expiration properties. Values from 1 to 10 specify the number of queue monitor processes created to monitor the messages. If AQ_TM_PROCESSES is not specified or is set to 0, then the queue monitor is not created.

COMPATIBLE
FIXED_DATE

Syntax FIXED_DATE = YYYY-MM-DD-HH24:MI:SS

FIXED_DATE enables you to set a constant date that SYSDATE will always return
instead of the current date. To undo a fixed date setting, specify FIXED_DATE=NONE.
This parameter is useful primarily for testing. The value can be in the format shown
above or in the default Oracle date format, without a time.


■ Networking

LOCAL_LISTENER

Syntax LOCAL_LISTENER = network_name
Default value - (ADDRESS = (PROTOCOL=TCP)(HOST=hostname)(PORT=1521))
where hostname is the network name of the local host.

LOCAL_LISTENER specifies a network name that resolves to an address or address list
of Oracle Net local listeners (that is, listeners that are running on the same machine as this instance). The address or address list is specified in the TNSNAMES.ORA file or other address repository as configured for your system.


REMOTE_LISTENER

REMOTE_LISTENER specifies a network name that resolves to an address or address
list of Oracle Net remote listeners (that is, listeners that are not running on the same machine as this instance). The address or address list is specified in the
TNSNAMES.ORA file or other address repository as configured for your system.


SERVICE_NAMES

SERVICE_NAMES specifies one or more names by which clients can connect to the
instance. The instance registers its service names with the listener. When a client
requests a service, the listener determines which instances offer the requested service and routes the client to the appropriate instance.

You can specify multiple service names in order to distinguish among different uses of the same database.
For example:
SERVICE_NAMES = sales.acme.com, widgetsales.acme.com

■ Objects and LOBs

OBJECT_CACHE_MAX_SIZE_PERCENT

Default value 10

The object cache is a memory block on the client that allows applications to store
entire objects and to navigate among them without round trips to the server.
OBJECT_CACHE_MAX_SIZE_PERCENT specifies the percentage of the optimal cache size
that the session object cache can grow past the optimal size. The maximum size is
equal to the optimal size plus the product of this percentage and the optimal size.
When the cache size exceeds this maximum size, the system will attempt to shrink the
cache to the optimal size.


OBJECT_CACHE_OPTIMAL_SIZE

The object cache is a memory block on the client that allows applications to store
entire objects and to navigate among them without round trips to the server.
OBJECT_CACHE_OPTIMAL_SIZE specifies (in bytes) the size to which the session object
cache is reduced when the size of the cache exceeds the maximum size.


■ OLAP

OLAP_PAGE_POOL_SIZE

■ Optimizer

OPTIMIZER_DYNAMIC_SAMPLING

Default value If OPTIMIZER_FEATURES_ENABLE is set to 10.0.0 or higher, then 2
If OPTIMIZER_FEATURES_ENABLE is set to 9.2.0, then 1
If OPTIMIZER_FEATURES_ENABLE is set to 9.0.1 or lower, then 0

OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed
by the optimizer.


OPTIMIZER_FEATURES_ENABLE

Syntax OPTIMIZER_FEATURES_ENABLE = { 8.0.0 | 8.0.3 | 8.0.4
| 8.0.5 | 8.0.6 | 8.0.7 | 8.1.0 | 8.1.3 | 8.1.4 |
8.1.5 | 8.1.6 | 8.1.7 | 9.0.0 | 9.0.1 | 9.2.0 |
10.0.0 | 10.1.0 | 10.1.0.3 | 10.1.0.4 | 10.2.0.1}


OPTIMIZER_FEATURES_ENABLE acts as an umbrella parameter for enabling a series
of optimizer features based on an Oracle release number.

For example, if you upgrade your database from release 8.0.6 to release 9.2, but you
want to keep the release 8.0.6 optimizer behavior, you can do so by setting this
parameter to 8.0.6. At a later time, you can try the enhancements introduced in
releases up to and including release 9.2 by setting the parameter to 9.2.0.




OPTIMIZER_INDEX_CACHING

Range of values 0 to 100

OPTIMIZER_INDEX_CACHING lets you adjust the behavior of cost-based optimization
to favor nested loops joins and IN-list iterators. The cost of executing an index using an IN-list iterator or of executing a nested loops join when an index is used to access the inner table depends on the caching of that index in the buffer cache. The amount of caching depends on factors that the optimizer cannot predict, such as the load on the system and the block access patterns of different users.

You can modify the optimizer's assumptions about index caching for nested loops
joins and IN-list iterators by setting this parameter to a value between 0 and 100 to
indicate the percentage of the index blocks the optimizer should assume are in the
cache. Setting this parameter to a higher value makes nested loops joins and IN-list
iterators look less expensive to the optimizer. As a result, it will be more likely to pick nested loops joins over hash or sort-merge joins and to pick indexes using IN-list iterators over other indexes or full table scans. The default for this parameter is 0, which results in default optimizer behavior.


OPTIMIZER_INDEX_COST_ADJ

Range of values 1 to 10000

OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path
selection to be more or less index friendly—that is, to make the optimizer more or less prone to selecting an index access path over a full table scan.
The default for this parameter is 100 percent, at which the optimizer evaluates index
access paths at the regular cost. Any other value makes the optimizer evaluate the
access path at that percentage of the regular cost. For example, a setting of 50 makes the index access path look half as expensive as normal.
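
For example, a sketch that makes index access paths look half as expensive and
assumes 90% of index blocks are cached (both values are illustrative, not
recommendations):

ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 50;
ALTER SESSION SET OPTIMIZER_INDEX_CACHING = 90;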

OPTIMIZER_MODE

Syntax OPTIMIZER_MODE =
{ first_rows_[1 | 10 | 100 | 1000] | first_rows | all_rows }

Default value all_rows

Values:

■ first_rows_n
The optimizer uses a cost-based approach and optimizes with a goal of best
response time to return the first n rows (where n = 1, 10, 100, 1000).

■ first_rows
The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.

■ all_rows
The optimizer uses a cost-based approach for all SQL statements in the session and
optimizes with a goal of best throughput (minimum resource use to complete the
entire statement).


OPTIMIZER_SECURE_VIEW_MERGING

OPTIMIZER_SECURE_VIEW_MERGING allows you to enable or disable view merging
globally for the database.

Values:

■ FALSE
Oracle does not use view merging or predicate move-around.

■ TRUE
Oracle assesses the query, considering all transformations, and chooses the
method with the lowest cost. To take advantage of query rewrite for a particular
query, you must disable the OPTIMIZER_SECURE_VIEW_MERGING parameter.

QUERY_REWRITE_ENABLED

QUERY_REWRITE_ENABLED allows you to enable or disable query rewriting globally
for the database.

Values:

■ false
Oracle does not use rewrite.
■ true
Oracle costs the query with rewrite and without rewrite and chooses the method
with the lower cost.
■ force
Oracle always uses rewrite and does not evaluate the cost before doing so. Use
force when you know that the query will always benefit from rewrite and when
reduction in compile time is important.

To take advantage of query rewrite for a particular materialized view, you must
enable query rewrite for that materialized view, and you must enable cost-based
optimization.
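
For example, a sketch using a hypothetical SALES table (the materialized view must
be created with ENABLE QUERY REWRITE for the optimizer to consider it):

ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

CREATE MATERIALIZED VIEW sales_mv
ENABLE QUERY REWRITE
AS SELECT prod_id, SUM(amount) total_amount
FROM sales
GROUP BY prod_id;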

QUERY_REWRITE_INTEGRITY

QUERY_REWRITE_INTEGRITY determines the degree to which Oracle must enforce
query rewriting. At the safest level, Oracle does not use query rewrite transformations that rely on unenforced relationships.

Values:
■ enforced
Oracle enforces and guarantees consistency and integrity.
■ trusted
Oracle allows rewrites using relationships that have been declared, but that are not
enforced by Oracle.
■ stale_tolerated
Oracle allows rewrites using unenforced relationships. Materialized views are
eligible for rewrite even if they are known to be inconsistent with the underlying
detail data.

STAR_TRANSFORMATION_ENABLED

■ Parallel Execution

PARALLEL_ADAPTIVE_MULTI_USER

PARALLEL_ADAPTIVE_MULTI_USER, when set to true, enables an adaptive
algorithm designed to improve performance in multiuser environments that use
parallel execution. The algorithm automatically reduces the requested degree of
parallelism based on the system load at query startup time. The effective degree of
parallelism is based on the default degree of parallelism, or the degree from the table or hints, divided by a reduction factor.

The algorithm assumes that the system has been tuned for optimal performance in a
single-user environment.

Tables and hints use the default degree of parallelism.


PARALLEL_EXECUTION_MESSAGE_SIZE

PARALLEL_EXECUTION_MESSAGE_SIZE specifies the size of messages for parallel
execution (formerly referred to as parallel query, PDML, Parallel Recovery,
replication).

On most platforms, the default value is 2148 bytes if PARALLEL_AUTOMATIC_TUNING
is set to false, and 4096 bytes if PARALLEL_AUTOMATIC_TUNING is set to true.
The default value is adequate for most applications. Larger values require a larger
shared pool. Larger values result in better performance at the cost of higher memory
use. For this reason, replication gets no benefit from increasing the size.

PARALLEL_MAX_SERVERS

PARALLEL_MAX_SERVERS specifies the maximum number of parallel execution
processes and parallel recovery processes for an instance. As demand increases, Oracle increases the number of processes from the number created at instance startup up to this value.


If you set this parameter too low, some queries may not have a parallel execution
process available to them during query processing. If you set it too high, memory
resource shortages may occur during peak periods, which can degrade performance.


PARALLEL_MIN_PERCENT

PARALLEL_MIN_PERCENT operates in conjunction with PARALLEL_MAX_SERVERS
and PARALLEL_MIN_SERVERS. It lets you specify the minimum percentage of parallel
execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel
execution. Setting this parameter ensures that parallel operations will not execute
sequentially unless adequate resources are available. The default value of 0 means that no minimum percentage of processes has been set.

Consider the following settings:

PARALLEL_MIN_PERCENT = 50
PARALLEL_MIN_SERVERS = 5
PARALLEL_MAX_SERVERS = 10

If 8 of the 10 parallel execution processes are busy, only 2 processes are available.
If you then request a query with a degree of parallelism of 8, the minimum 50% (that
is, 4 processes) will not be met, and the statement fails with an error (ORA-12827:
insufficient parallel query slaves available).

You can use this parameter in conjunction with PARALLEL_ADAPTIVE_MULTI_USER.
In a multi-user environment, an individual user or application can set PARALLEL_
MIN_PERCENT to a minimum value until sufficient resources are available on the
system and an acceptable degree of parallelism is returned.

PARALLEL_MIN_SERVERS

PARALLEL_MIN_SERVERS specifies the minimum number of parallel execution
processes for the instance. This value is the number of parallel execution processes
Oracle creates when the instance is started.


PARALLEL_THREADS_PER_CPU

PARALLEL_THREADS_PER_CPU specifies the default degree of parallelism for the
instance and determines the parallel adaptive and load balancing algorithms. The
parameter describes the number of parallel execution processes or threads that a CPU
can handle during parallel execution.

The default is platform-dependent and is adequate in most cases. You should decrease
the value of this parameter if the machine appears to be overloaded when a
representative parallel query is executed. You should increase the value if the system is I/O bound.

■ PL/SQL

PLSQL_NATIVE_LIBRARY_DIR
PLSQL_NATIVE_LIBRARY_SUBDIR_COUNT
PLSQL_V2_COMPATIBILITY
REMOTE_DEPENDENCIES_MODE
UTL_FILE_DIR

■ PL/SQL Compiler

PLSQL_CCFLAGS
PLSQL_CODE_TYPE
PLSQL_DEBUG
PLSQL_OPTIMIZE_LEVEL
PLSQL_WARNINGS
NLS_LENGTH_SEMANTICS

■ SGA Memory

DB_nK_CACHE_SIZE
DB_CACHE_ADVICE
DB_CACHE_SIZE
HI_SHARED_MEMORY_ADDRESS

HI_SHARED_MEMORY_ADDRESS specifies the starting address at runtime of the system
global area (SGA). It is ignored on platforms that specify the SGA's starting address
at link time.

On 64-bit platforms, use HI_SHARED_MEMORY_ADDRESS to specify the high-order 32
bits of a 64-bit address. Use SHARED_MEMORY_ADDRESS to specify the low-order 32
bits of the address. If both parameters are 0 or unspecified, the SGA address
defaults to a platform-specific location.

JAVA_POOL_SIZE
LOCK_SGA
OLAP_PAGE_POOL_SIZE
PRE_PAGE_SGA

SGA_MAX_SIZE

SGA_MAX_SIZE specifies the maximum size of the SGA for the lifetime of the instance.


SGA_TARGET

SGA_TARGET specifies the total size of all SGA components. If SGA_TARGET is
specified, then the following memory pools are automatically sized:
■ Buffer cache (DB_CACHE_SIZE)
■ Shared pool (SHARED_POOL_SIZE)
■ Large pool (LARGE_POOL_SIZE)
■ Java pool (JAVA_POOL_SIZE)
■ Streams pool (STREAMS_POOL_SIZE)
If these automatically tuned memory pools are set to non-zero values, then those
values are used as minimum levels by Automatic Shared Memory Management. You
would set minimum values if an application component needs a minimum amount of
memory to function properly.
The following pools are manually sized components and are not affected by
Automatic Shared Memory Management:
■ Log buffer
■ Other buffer caches, such as KEEP, RECYCLE, and other block sizes
■ Fixed SGA and other internal allocations
The memory allocated to these pools is deducted from the total available for SGA_
TARGET when Automatic Shared Memory Management computes the values of the
automatically tuned memory pools.
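
For example, a sketch of enabling Automatic Shared Memory Management with a minimum
floor for the shared pool (the sizes are illustrative and must fit within
SGA_MAX_SIZE):

ALTER SYSTEM SET SGA_TARGET = 600M;
ALTER SYSTEM SET SHARED_POOL_SIZE = 100M;
-- 100M now acts as a minimum; the remaining pools are tuned automatically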


SHARED_MEMORY_ADDRESS

■ Real Application Clusters

ACTIVE_INSTANCE_COUNT

ACTIVE_INSTANCE_COUNT enables you to designate one instance in a two-instance
cluster as the primary instance and the other instance as the secondary instance. This parameter has no functionality in a cluster with more than two instances.
When you set this parameter to 1, the first instance you start up becomes the primary
instance and accepts client connections. The second instance starts up as a secondary
instance and can accept client connections only if the first instance fails. In such an event, the secondary instance becomes the primary instance.
When the failed instance can once again be started up, it starts up as the secondary
instance, and will not accept client connections unless the current primary instance
fails.


CLUSTER_DATABASE

CLUSTER_DATABASE is a Real Application Clusters parameter that specifies whether
or not Real Application Clusters is enabled.


CLUSTER_DATABASE_INSTANCES

CLUSTER_DATABASE_INSTANCES is a Real Application Clusters parameter that
specifies the number of instances that are configured as part of your cluster database. You must set this parameter for every instance. Normally you should set this parameter to the number of instances in your Real Application Clusters environment. A proper setting for this parameter can improve memory use.
Oracle uses the value of this parameter to compute the default value of the
LARGE_POOL_SIZE parameter when the PARALLEL_AUTOMATIC_TUNING parameter is set to
true. Note that the PARALLEL_AUTOMATIC_TUNING parameter has been deprecated.


CLUSTER_INTERCONNECTS


GC_FILES_TO_LOCKS

Syntax GC_FILES_TO_LOCKS =
'{file_list=lock_count[!blocks][EACH][:...]}'
Spaces are not allowed within the quotation marks.


■ file_list
One or more datafiles listed by their file numbers, or ranges of file numbers, with
comma separators:

filenumber[-filenumber][,filenumber[-filenumber]]...

To find the correspondence between filenames and file numbers, query the
FILE_NAME and FILE_ID columns of the DBA_DATA_FILES data dictionary view.

■ lock_count
The number of PCM locks assigned to file_list. By default these locks are
fixed. If you set lock_count to 0, then Oracle uses fine-grain locking for these
files and takes locks as needed from the pool of releasable locks.

■ blocks

Specifies the number of contiguous blocks covered by one lock. The default is
noncontiguous blocks.

■ EACH
Indicates that each datafile in file_list is assigned a separate set of
lock_count PCM locks.

GCS_SERVER_PROCESSES

Default value If CLUSTER_DATABASE is set to true, then 2
If CLUSTER_DATABASE is set to false, then 0

GCS_SERVER_PROCESSES specifies the initial number of server processes in Global
Cache Service to serve the inter-instance traffic among Real Application Clusters
instances.


INSTANCE_GROUPS

INSTANCE_GROUPS is a Real Application Clusters parameter that you can specify
only in parallel mode. Used in conjunction with the PARALLEL_INSTANCE_GROUP
parameter, it lets you restrict parallel query operations to a limited number of
instances.

INSTANCE_NUMBER

MAX_COMMIT_PROPAGATION_DELAY

MAX_COMMIT_PROPAGATION_DELAY is a Real Application Clusters parameter. This
initialization parameter should not be changed except under a limited set of
circumstances specific to the cluster database.


PARALLEL_INSTANCE_GROUP

PARALLEL_INSTANCE_GROUP is a Real Application Clusters parameter that you can
specify in parallel mode only. Used in conjunction with the INSTANCE_GROUPS
parameter, it lets you restrict parallel query operations to a limited number of
instances.

This parameter identifies the parallel instance group Oracle will use for spawning
parallel execution processes. Parallel operations will spawn parallel execution
processes only on instances that specify a matching group in their INSTANCE_GROUPS
parameter. If the value assigned to PARALLEL_INSTANCE_GROUP is the name of a group that does not exist, then the operation runs serially. No parallelism is used.

THREAD

THREAD is a Real Application Clusters parameter that specifies the number of the redo
thread to be used by an instance. When you create a database, Oracle creates and
enables thread 1 as a public thread (one that can be used by any instance). You must
create and enable subsequent threads using the ADD LOGFILE THREAD clause and
ENABLE THREAD clause of the ALTER DATABASE statement. The number of threads you
create is limited by the MAXINSTANCES parameter specified in the CREATE DATABASE
statement.
In exclusive mode, thread 1 is the default thread. However, you can specify THREAD
for an instance running in exclusive mode if you want to use the redo log files in a
thread other than thread 1.

In parallel mode, you can specify any available redo thread number, as long as that
thread number is enabled and is not in use by another instance.
A value of zero specifies that this instance can use any available, enabled public
thread.
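
For example, a sketch of preparing a second redo thread for another instance (the
file name, group number, and size are illustrative):

ALTER DATABASE ADD LOGFILE THREAD 2
GROUP 4 ('/u01/oradata/tomig/redo2_01.log') SIZE 50M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;

The second instance would then set THREAD = 2 in its parameter file.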


■ Redo Logs, Archiving, and Recovery

CONTROL_FILE_RECORD_KEEP_TIME

CONTROL_FILE_RECORD_KEEP_TIME specifies the minimum number of days before
a reusable record in the control file can be reused. In the event a new record needs
to be added to a reusable section and the oldest record has not aged enough, the
record section expands. If this parameter is set to 0, then reusable sections never
expand, and records are reused as needed.

Note: This parameter applies only to records in the control file that
are circularly reusable (such as archive log records and various
backup records). It does not apply to records such as datafile,
tablespace, and redo thread records, which are never reused unless the
corresponding object is dropped from the tablespace.


DB_CREATE_ONLINE_LOG_DEST_n
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
FAST_START_MTTR_TARGET
LOG_ARCHIVE_CONFIG

Syntax LOG_ARCHIVE_CONFIG =
{
[ SEND | NOSEND ]
[ RECEIVE | NORECEIVE ]
[ DG_CONFIG=(remote_db_unique_name1 [, ... remote_db_unique_name9]) | NODG_CONFIG ]
}

Default value - 'SEND, RECEIVE, NODG_CONFIG'

■ SEND
Enables the sending of redo logs to remote destinations
■ NOSEND
Disables the sending of redo logs to remote destinations
■ RECEIVE
Enables the receipt of remotely archived redo logs
■ NORECEIVE
Disables the receipt of remotely archived redo logs
■ DG_CONFIG
Specifies a list of up to 9 unique database names (defined with the DB_UNIQUE_NAME
initialization parameter) for all of the databases in the Data Guard
configuration.
■ NODG_CONFIG
Eliminates the list of service provider names previously specified with the
DG_CONFIG option.

LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n

LOG_ARCHIVE_DUPLEX_DEST

LOG_ARCHIVE_DUPLEX_DEST is similar to the initialization parameter
LOG_ARCHIVE_DEST. This parameter specifies a second archive destination: the duplex
archive destination. This duplex archive destination can be either a must-succeed or
a best-effort archive destination, depending on how many archive destinations must
succeed (as specified in the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter).
The default setting of a null string ("") or (' ') indicates that a duplex archive
destination does not exist.

LOG_ARCHIVE_FORMAT

LOG_ARCHIVE_FORMAT is applicable only if you are using the redo log in
ARCHIVELOG mode. Use a text string and variables to specify the default filename
format when archiving redo log files. The string generated from this format is
appended to the string specified in the LOG_ARCHIVE_DEST parameter.
The following variables can be used in the format:
%s log sequence number
%S log sequence number, zero filled
%t thread number
%T thread number, zero filled
%a activation ID
%d database ID
%r resetlogs ID that ensures unique names are constructed for the archived log files
across multiple incarnations of the database

Using uppercase letters for the variables (for example, %S) causes the value to be
fixed length and padded to the left with zeros. An example of specifying the archive
redo log filename format follows:

LOG_ARCHIVE_FORMAT = 'log%t_%s_%r.arc'

LOG_ARCHIVE_MAX_PROCESSES

Range of values 1 to 30


LOG_ARCHIVE_MIN_SUCCEED_DEST

Range of values 1 to 10 if you are using LOG_ARCHIVE_DEST_n
1 or 2 if you are using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST


LOG_ARCHIVE_TRACE

Range of values 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192

The valid values have the following meanings:
■ 0: Disable archivelog tracing (this is the default)
■ 1: Track archival of redo log file
■ 2: Track archival status of each archivelog destination
■ 4: Track archival operational phase
■ 8: Track archivelog destination activity
■ 16: Track detailed archivelog destination activity
■ 32: Track archivelog destination parameter modifications
■ 64: Track ARCn process state activity
■ 128: Track FAL (fetch archived log) server related activities
■ 256: Track RFS Logical Client
■ 512: Track LGWR redo shipping network activity
■ 1024: Track RFS Physical Client
■ 2048: Track RFS/ARCn Ping Heartbeat
■ 4096: Track Real Time Apply
■ 8192: Track Redo Apply (Media Recovery or Physical Standby)


LOG_BUFFER

Default value 512 KB or 128 KB * CPU_COUNT, whichever is greater

LOG_BUFFER specifies the amount of memory (in bytes) that Oracle uses when
buffering redo entries to a redo log file. Redo log entries contain a record of the
changes that have been made to the database block buffers. The LGWR process writes
redo log entries from the log buffer to a redo log file.

In general, larger values for LOG_BUFFER reduce redo log file I/O, particularly if
transactions are long or numerous. In a busy system, a value of 65536 or higher is
reasonable.

LOG_CHECKPOINT_INTERVAL

LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the
number of redo log file blocks that can exist between an incremental checkpoint and
the last block written to the redo log. This number refers to physical operating
system blocks, not database blocks. Regardless of this value, a checkpoint always
occurs when switching from one online redo log file to another. Therefore, if the
value exceeds the actual redo log file size, checkpoints occur only when switching
logs. Checkpoint frequency is one of the factors that influence the time required
for the database to recover from an unexpected failure.


LOG_CHECKPOINT_TIMEOUT

LOG_CHECKPOINT_TIMEOUT specifies (in seconds) the amount of time that has passed
since the incremental checkpoint at the position where the last write to the redo log
(sometimes called the tail of the log) occurred. This parameter also signifies that
no buffer will remain dirty (in the cache) for more than integer seconds. Specifying
a value of 0 for the timeout disables time-based checkpoints. Hence, setting the
value to 0 is not recommended unless FAST_START_MTTR_TARGET is set.

LOG_CHECKPOINTS_TO_ALERT

LOG_CHECKPOINTS_TO_ALERT lets you log your checkpoints to the alert file. Doing
so is useful for determining whether checkpoints are occurring at the desired
frequency.


RECOVERY_PARALLELISM

RECOVERY_PARALLELISM specifies the number of processes to participate in instance
or crash recovery. A value of 0 or 1 indicates that recovery is to be performed serially by one process.


■ Resource Manager

RESOURCE_LIMIT

RESOURCE_LIMIT determines whether resource limits are enforced in database
profiles.
Values:
■ TRUE
Enables the enforcement of resource limits
■ FALSE
Disables the enforcement of resource limits


RESOURCE_MANAGER_PLAN

RESOURCE_MANAGER_PLAN specifies the top-level resource plan to use for an
instance. The resource manager will load this top-level plan along with all its
descendants (subplans, directives, and consumer groups). If you do not specify this
parameter, the resource manager is off by default.
You can change the setting of this parameter using the ALTER SYSTEM statement to
turn on the resource manager (if it was previously off) or to turn off the resource
manager or change the current plan (if it was previously on). If you specify a plan that does not exist in the data dictionary, Oracle returns an error message.
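
For example (SYSTEM_PLAN is one of the Oracle-supplied plans; setting the parameter
to an empty string turns the resource manager off):

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'SYSTEM_PLAN';
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';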



■ Security and Auditing

AUDIT_FILE_DEST
AUDIT_SYS_OPERATIONS

AUDIT_SYSLOG_LEVEL

Syntax AUDIT_SYSLOG_LEVEL = 'facility.level'

AUDIT_SYSLOG_LEVEL enables OS audit logs to be written to the system via the
SYSLOG utility if the AUDIT_TRAIL parameter is set to os.

The value of facility can be any of the following: USER, LOCAL0-LOCAL7, SYSLOG,
DAEMON, KERN, MAIL, AUTH, LPR, NEWS, UUCP, or CRON.

The value of level can be any of the following: NOTICE, INFO, DEBUG, WARNING, ERR,
CRIT, ALERT, EMERG.

If you use this parameter, it is best to assign a file corresponding to every
facility.level combination (especially kern.emerg) in syslog.conf. Sometimes these
are assigned to print to the console in the default syslog.conf file. This can become
annoying and will be useless as audit logs. Also, if you use this parameter, it is
best to set the maximum length of syslog messages in the system to 512 bytes.


AUDIT_TRAIL

AUDIT_TRAIL enables or disables database auditing.
Values:
■ none
Disables database auditing.
■ os
Enables database auditing and directs all audit records to the operating system's
audit trail.
■ db
Enables database auditing and directs all audit records to the database audit trail
(the SYS.AUD$ table).
■ db,extended
Enables database auditing and directs all audit records to the database audit trail
(the SYS.AUD$ table). In addition, populates the SQLBIND and SQLTEXT CLOB
columns of the SYS.AUD$ table.
■ xml
Enables database auditing and writes all audit records to XML format OS files.
■ xml,extended
Enables database auditing and prints all columns of the audit trail, including
SqlText and SqlBind values.
You can use the SQL statement AUDIT to set auditing options regardless of the setting
of this parameter.
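
For example, a sketch that enables database auditing and then audits a hypothetical
table (AUDIT_TRAIL is static, so an SPFILE and a restart are assumed):

ALTER SYSTEM SET AUDIT_TRAIL = db SCOPE=SPFILE;
-- after restarting the instance:
AUDIT SELECT ON scott.emp BY ACCESS;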



O7_DICTIONARY_ACCESSIBILITY

O7_DICTIONARY_ACCESSIBILITY controls restrictions on SYSTEM privileges. If the
parameter is set to true, access to objects in the SYS schema is allowed (Oracle7
behavior). The default setting of false ensures that system privileges that allow
access to objects in "any schema" do not allow access to objects in the SYS schema.
For example, if O7_DICTIONARY_ACCESSIBILITY is set to false, then the SELECT
ANY TABLE privilege allows access to views or tables in any schema except the SYS
schema (data dictionary tables cannot be accessed). The system privilege EXECUTE
ANY PROCEDURE allows access on the procedures in any schema except the SYS
schema. If this parameter is set to false and you need to access objects in the SYS schema, then you must be granted explicit object privileges. The following roles, which can be granted to the database administrator, also allow access to dictionary objects:
■ SELECT_CATALOG_ROLE
■ EXECUTE_CATALOG_ROLE
■ DELETE_CATALOG_ROLE


OS_AUTHENT_PREFIX

OS_AUTHENT_PREFIX specifies a prefix that Oracle uses to authenticate users
attempting to connect to the server. Oracle concatenates the value of this parameter to the beginning of the user's operating system account name. When a
connection request is attempted, Oracle compares the prefixed username with Oracle
usernames in the database.

The default value of this parameter is OPS$ for backward compatibility with previous
versions. However, you might prefer to set the prefix value to "" (a null string), thereby eliminating the addition of any prefix to operating system account names.
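A minimal sketch, assuming a null prefix and a hypothetical operating system account jsmith (the parameter is static, so it must be set in the spfile and picked up at the next startup):

SQL> ALTER SYSTEM SET OS_AUTHENT_PREFIX = '' SCOPE=SPFILE;

After the restart, an Oracle user matching the OS account can be created:

SQL> CREATE USER jsmith IDENTIFIED EXTERNALLY;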

OS_ROLES

OS_ROLES determines whether Oracle or the operating system identifies and manages
the roles of each username.
Values:
■ TRUE
The operating system completely manages the role grants for all database
usernames. When a user attempts to create a session, the username's security
domain is initialized using the roles identified by the operating system. A user can
subsequently enable as many roles identified by the operating system as specified
by the parameter MAX_ENABLED_ROLES.

Revocation by Oracle of roles granted by the operating system is ignored, as are
any roles previously granted by Oracle.

■ FALSE
Oracle identifies and manages the roles.

RDBMS_SERVER_DN
REMOTE_LOGIN_PASSWORDFILE

REMOTE_LOGIN_PASSWORDFILE specifies whether Oracle checks for a password file.
Values:
■ NONE
Oracle ignores any password file. Therefore, privileged users must be
authenticated by the operating system.
■ SHARED
One or more databases can use the password file. The password file can contain
SYS as well as non-SYS users.

Note:-
The value EXCLUSIVE is supported for backward
compatibility. It now has the same behavior as the value SHARED.


REMOTE_OS_AUTHENT

REMOTE_OS_AUTHENT specifies whether remote clients will be authenticated with the
value of the OS_AUTHENT_PREFIX parameter.


REMOTE_OS_ROLES

REMOTE_OS_ROLES specifies whether operating system roles are allowed for remote
clients. The default value, false, causes Oracle to identify and manage roles for
remote clients.


SQL92_SECURITY

The SQL92 standards specify that security administrators should be able to require
that users have SELECT privilege on a table when executing an UPDATE or DELETE
statement that references table column values in a WHERE or SET clause. SQL92_
SECURITY specifies whether users must have been granted the SELECT object
privilege in order to execute such UPDATE or DELETE statements.

■ Sessions and Processes

CPU_COUNT

PROCESSES

PROCESSES specifies the maximum number of operating system user processes that
can simultaneously connect to Oracle. Its value should allow for all background
processes such as locks, job queue processes, and parallel execution processes.
The default values of the SESSIONS and TRANSACTIONS parameters are derived from
this parameter. Therefore, if you change the value of PROCESSES, you should evaluate
whether to adjust the values of those derived parameters.

SESSIONS

SESSIONS specifies the maximum number of sessions that can be created in the
system. Because every login requires a session, this parameter effectively determines
the maximum number of concurrent users in the system. You should always set this
parameter explicitly to a value equivalent to your estimate of the maximum number of
concurrent users, plus the number of background processes, plus approximately 10%
for recursive sessions.

Oracle uses the default value of this parameter as its minimum. Values between 1 and
the default do not trigger errors, but Oracle ignores them and uses the default instead.

The default values of the ENQUEUE_RESOURCES and TRANSACTIONS parameters are
derived from SESSIONS. Therefore, if you increase the value of SESSIONS, you
should consider whether to adjust the values of ENQUEUE_RESOURCES and
TRANSACTIONS as well. (Note that ENQUEUE_RESOURCES is obsolete as of Oracle
Database 10g release 2 (10.2).)
In a shared server environment, the value of PROCESSES can be quite small. Therefore,
Oracle recommends that you adjust the value of SESSIONS to approximately 1.1 *
total number of connections.
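As a worked example of the sizing rule above, assume an estimate of 200 concurrent users and roughly 20 background processes: 200 + 20 = 220, plus about 10% for recursive sessions gives a SESSIONS value of roughly 242. The current settings of the related parameters can be checked with a query such as:

SQL> SELECT name, value FROM v$parameter
  2  WHERE name IN ('processes', 'sessions', 'transactions');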


■ Shared Server Architecture

CIRCUITS
DISPATCHERS

Syntax DISPATCHERS = 'dispatch_clause'
dispatch_clause::=
(PROTOCOL = protocol) |
(ADDRESS = address) |
(DESCRIPTION = description )
[options_clause]

options_clause::=
(DISPATCHERS = integer |
SESSIONS = integer |
CONNECTIONS = integer |
TICKS = seconds |
POOL = {1 | ON | YES | TRUE | BOTH |
({IN | OUT} = ticks) | 0 | OFF | NO | FALSE |
ticks} |
MULTIPLEX = {1 | ON | YES | TRUE |
0 | OFF | NO | FALSE | BOTH | IN | OUT} |
LISTENER = tnsname |
SERVICE = service |
INDEX = integer)
Default value If SHARED_SERVERS is greater than 0, then DISPATCHERS defaults to
'(PROTOCOL=tcp)', causing 1 TCP/IP dispatcher to be created.

DISPATCHERS configures dispatcher processes in the shared server architecture. The
parsing software supports a name-value syntax to enable the specification of attributes in a position-independent, case-insensitive manner. For example:

DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)'

Attributes may be specified using the full attribute name or any substring beginning with the first 3 characters. For example, SESSIONS can be specified as SES, SESS, SESSI, and so on.
Specify only one of the following attributes: PROTOCOL, ADDRESS, or DESCRIPTION.
If you specify either ADDRESS or DESCRIPTION, then you can specify additional
network attributes. Doing so supports multi-homed hosts.
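Continuing the example from the text, the statement below sets three TCP dispatchers on a running instance; the V$DISPATCHER view can then be queried to confirm that they started:

SQL> ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)';
SQL> SELECT name, status FROM v$dispatcher;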


MAX_DISPATCHERS

MAX_DISPATCHERS specifies the maximum number of dispatcher processes allowed
to be running simultaneously. It can be overridden by the DISPATCHERS parameter
and is maintained for backward compatibility with older releases.


MAX_SHARED_SERVERS

MAX_SHARED_SERVERS specifies the maximum number of shared server processes
allowed to be running simultaneously. Setting this parameter enables you to reserve
process slots for other processes, such as dedicated servers.

When you want to reduce the range of shared servers, you can reduce MAX_SHARED_
SERVERS before reducing SHARED_SERVERS. If MAX_SHARED_SERVERS is lower than
SHARED_SERVERS, then the number of shared servers will not vary but will remain at
the constant level specified by SHARED_SERVERS. If MAX_SHARED_SERVERS is not
specified, then a shared server process may be spawned as long as the number of free
process slots is greater than 1/8 the maximum number of processes, or 2 if
PROCESSES is less than 24.



SHARED_SERVER_SESSIONS

SHARED_SERVER_SESSIONS specifies the total number of shared server sessions to
allow. Setting this parameter enables you to reserve user sessions for dedicated
servers.


SHARED_SERVERS

SHARED_SERVERS specifies the number of server processes that you want to create
when an instance is started. If system load decreases, then this minimum number of
servers is maintained. Therefore, you should take care not to set SHARED_SERVERS
too high at system startup.

■ Standby Database

ARCHIVE_LAG_TARGET

ARCHIVE_LAG_TARGET limits the amount of data that can be lost and effectively
increases the availability of the standby database by forcing a log switch after the
specified amount of time elapses.

A value of 0 disables the time-based thread advance feature; otherwise, the value
represents the number of seconds. Values larger than 7200 seconds are not of much use
in maintaining a reasonable lag in the standby database. The typical, or recommended,
value is 1800 (30 minutes). Extremely low values can result in frequent log switches,
which could degrade performance; such values can also make the archiver process too
busy to archive the continuously generated logs.
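For example, to force a log switch at least every 30 minutes, in line with the recommended value (the parameter is dynamic):

SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800;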


DB_FILE_NAME_CONVERT

DB_UNIQUE_NAME

DB_UNIQUE_NAME specifies a globally unique name for the database. Databases with
the same DB_NAME within the same DB_DOMAIN (for example, copies of a database
created for reporting or a physical standby) must have a unique DB_UNIQUE_NAME.
Every database's DB_UNIQUE_NAME must be unique within the enterprise.

The value of DB_UNIQUE_NAME can be up to 30 characters and is case insensitive. The
following characters are valid in a database name: alphanumeric characters,
underscore (_), number sign (#), and dollar sign ($).


DG_BROKER_CONFIG_FILEn

DG_BROKER_CONFIG_FILEn (where n = 1, 2) specifies the names for the Data Guard
broker configuration files. Every database that is part of a Data Guard broker configuration has two broker configuration files, which contain entries that describe the state and properties of the configuration (such as the sites and databases that are part of the configuration, the roles and properties of each of the databases, and the state of each of the elements of the configuration). Two files are provided so as to always maintain the last known good state of the configuration.

If DG_BROKER_CONFIG_FILEn is not explicitly defined, then it is set to an operating
system-specific default value at instance startup. The parameter can only be altered
when the DMON (Data Guard broker) process is not running.

DG_BROKER_START

DG_BROKER_START enables Oracle to determine whether or not the Data Guard
broker (DMON) process should be started. DMON is a non-fatal Oracle background
process and exists as long as the instance exists, whenever this parameter is set to
true.

If the site is never going to be configured in a Data Guard broker configuration, then you can leave the parameter unspecified and accept the default value of false. If the site is part of a Data Guard broker configuration, then automatic startup of the DMON process is simplified by setting DG_BROKER_START to true in the initialization parameter file.
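For instance, to start the DMON process on a site that is part of a broker configuration (the parameter is dynamic):

SQL> ALTER SYSTEM SET DG_BROKER_START = TRUE;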



FAL_CLIENT

FAL_CLIENT specifies the FAL (fetch archive log) client name that is used by the FAL
service, configured through the FAL_SERVER parameter, to refer to the FAL client. The
value is an Oracle Net service name, which is assumed to be configured properly on
the FAL server system to point to the FAL client (standby database).
Given the dependency of FAL_CLIENT on FAL_SERVER, the two parameters should
be configured or changed at the same time.


FAL_SERVER

FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The
value is an Oracle Net service name, which is assumed to be configured properly on
the standby database system to point to the desired FAL server.
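Since FAL_CLIENT and FAL_SERVER should be configured together, a sketch of the pair on the standby side might look as follows (primary_tns and standby_tns are hypothetical Oracle Net service names):

SQL> ALTER SYSTEM SET FAL_SERVER = 'primary_tns';
SQL> ALTER SYSTEM SET FAL_CLIENT = 'standby_tns';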


LOG_FILE_NAME_CONVERT
STANDBY_ARCHIVE_DEST

STANDBY_ARCHIVE_DEST is relevant only for a standby database in managed
recovery mode. It specifies the location of archive logs arriving from a primary
database. Oracle uses STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT to
fabricate the fully qualified standby log filenames and stores the filenames in the
standby control file. You can see the value of this parameter by querying the
V$ARCHIVE_DEST dynamic performance view.


STANDBY_FILE_MANAGEMENT

STANDBY_FILE_MANAGEMENT enables or disables automatic standby file
management. When automatic standby file management is enabled, operating system
file additions and deletions on the primary database are replicated on the standby
database.
Values:

■ MANUAL
disables automatic standby file management
■ AUTO
enables automatic standby file management
Setting STANDBY_FILE_MANAGEMENT to AUTO causes Oracle to automatically create
files on the standby database and, in some cases, overwrite existing files. Care must be taken when setting STANDBY_FILE_MANAGEMENT and DB_FILE_NAME_CONVERT so
that existing standby files will not be accidentally overwritten.

If the standby database is on the same system as the primary database, then ensure
that the primary and standby systems do not point to the same files.



■ Temporary Sort Space

BITMAP_MERGE_AREA_SIZE
CREATE_BITMAP_AREA_SIZE
HASH_AREA_SIZE


PGA_AGGREGATE_TARGET

PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all
server processes attached to the instance.

Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of automatically
setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL
working areas used by memory-intensive SQL operators (such as sort, group-by,
hash-join, bitmap merge, and bitmap create) will be automatically sized. A nonzero
value for this parameter is the default since, unless you specify otherwise, Oracle sets it to 20% of the SGA or 10 MB, whichever is greater.
Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_
POLICY parameter to MANUAL. This means that SQL workareas are sized using the *_
AREA_SIZE parameters.

Oracle attempts to keep the amount of private memory below the target specified by
this parameter by adapting the size of the work areas to private memory. When you
increase the value of this parameter, you indirectly increase the memory allotted to work areas. Consequently, more memory-intensive operations are able to run fully in memory, and fewer will spill over to disk.

When setting this parameter, you should examine the total memory on your system
that is available to the Oracle instance and subtract the SGA. You can assign the
remaining memory to PGA_AGGREGATE_TARGET.
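A sketch of the sizing rule above: on a host with, say, 4 GB available to the instance and a 1.5 GB SGA, the remainder could be assigned to the PGA target, and V$PGASTAT then shows how the target is being consumed:

SQL> ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 2500M;
SQL> SELECT name, value FROM v$pgastat
  2  WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');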


SORT_AREA_RETAINED_SIZE


SORT_AREA_SIZE
WORKAREA_SIZE_POLICY

■ Transactions

COMMIT_WRITE

COMMIT_WRITE is an advanced parameter used to control how redo for transaction
commits is written to the redo logs. The IMMEDIATE and BATCH options control how
redo is batched by Log Writer. The WAIT and NOWAIT options control when the redo
for a commit is flushed to the redo logs.


DML_LOCKS

A DML lock is a lock obtained on a table that is undergoing a DML operation (insert,
update, delete). DML_LOCKS specifies the maximum number of DML locks—one for
each table modified in a transaction. The value should equal the grand total of locks on tables currently referenced by all users. For example, if three users are modifying data in one table, then three entries would be required. If three users are modifying data in two tables, then six entries would be required.
The default value assumes an average of four tables referenced for each transaction.
For some systems, this value may not be enough.

Enqueues are shared memory structures that serialize access to database resources. If
you set the value of DML_LOCKS to 0, enqueues are disabled and performance is
slightly increased. However, you should be aware of the following restrictions when
you set DML_LOCKS to 0:

■ You cannot use DROP TABLE or CREATE INDEX statements.

■ You cannot use explicit lock statements such as LOCK TABLE IN EXCLUSIVE
MODE.

■ Enterprise Manager cannot run on any instance for which DML_LOCKS is set to 0.

Oracle holds more locks during parallel DML than during serial execution. Therefore,
if your database supports a lot of parallel DML, you may need to increase the value of this parameter.


FAST_START_PARALLEL_ROLLBACK

FAST_START_PARALLEL_ROLLBACK determines the maximum number of processes
that can exist for performing parallel rollback. This parameter is useful on systems in which some or all of the transactions are long running.

Values:

■ FALSE indicates that parallel rollback is disabled
■ LOW limits the number of rollback processes to 2 * CPU_COUNT
■ HIGH limits the number of rollback processes to 4 * CPU_COUNT

TRANSACTIONS

TRANSACTIONS specifies the maximum number of concurrent transactions. Greater
values increase the size of the SGA and can increase the number of rollback segments
allocated. The default value is greater than SESSIONS (and, in turn, PROCESSES) to
allow for recursive transactions.



■ Undo Management

RESUMABLE_TIMEOUT

RESUMABLE_TIMEOUT enables or disables resumable statements and specifies
resumable timeout at the system level.


ROLLBACK_SEGMENTS

Syntax ROLLBACK_SEGMENTS =
(segment_name [, segment_name] ... )
Default value If you do not specify this parameter, the instance uses public rollback
segments by default, unless the UNDO_MANAGEMENT initialization
parameter is set to AUTO. In that case, the ROLLBACK_SEGMENTS
parameter is ignored and automatic undo management is used.

ROLLBACK_SEGMENTS allocates one or more rollback segments by name to this
instance. If you set this parameter, the instance acquires all of the rollback segments named in this parameter, even if the number of rollback segments exceeds the
minimum number required by the instance (calculated as TRANSACTIONS /
TRANSACTIONS_PER_ROLLBACK_SEGMENT).

You cannot change the value of this parameter dynamically, but you can change its
value and then restart the instance. Although this parameter usually specifies private rollback segments, it can also specify public rollback segments if they are not already in use.

To find the name, segment ID number, and status of each rollback segment in the
database, query the data dictionary view DBA_ROLLBACK_SEGS.
When UNDO_MANAGEMENT is set to AUTO, ROLLBACK_SEGMENTS is ignored.



TRANSACTIONS_PER_ROLLBACK_SEGMENT

TRANSACTIONS_PER_ROLLBACK_SEGMENT specifies the number of concurrent
transactions you expect each rollback segment to have to handle. The minimum
number of rollback segments acquired at startup is TRANSACTIONS divided by the
value for this parameter. For example, if TRANSACTIONS is 101 and this parameter is
10, then the minimum number of rollback segments acquired would be the ratio
101/10, rounded up to 11.

You can acquire more rollback segments by naming them in the parameter
ROLLBACK_SEGMENTS.


UNDO_MANAGEMENT

UNDO_MANAGEMENT specifies which undo space management mode the system should
use. When set to AUTO, the instance starts in automatic undo management mode. In
manual undo management mode, undo space is allocated externally as rollback
segments.


UNDO_RETENTION

UNDO_RETENTION specifies (in seconds) the low threshold value of undo retention.
For AUTOEXTEND undo tablespaces, the system retains undo for at least the time
specified in this parameter, and automatically tunes the undo retention period to
satisfy the undo requirements of the queries. For fixed-size undo tablespaces, the
system automatically tunes for the maximum possible undo retention period, based on
undo tablespace size and usage history, and ignores UNDO_RETENTION unless
retention guarantee is enabled.

The setting of this parameter should account for any flashback requirements of the
system. Automatic tuning of undo retention is not supported for LOBs. The
RETENTION value for LOB columns is set to the value of the UNDO_RETENTION
parameter.

The UNDO_RETENTION parameter can only be honored if the current undo tablespace
has enough space. If an active transaction requires undo space and the undo
tablespace does not have available space, then the system starts reusing unexpired
undo space. This action can potentially cause some queries to fail with a "snapshot too old" message.

The length of time for which undo is currently being retained in the current undo
tablespace can be obtained by querying the TUNED_UNDORETENTION column of
the V$UNDOSTAT dynamic performance view.
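For example, the low threshold can be raised to 15 minutes dynamically, and the retention the system is actually achieving can be read from the column mentioned above:

SQL> ALTER SYSTEM SET UNDO_RETENTION = 900;
SQL> SELECT MAX(tuned_undoretention) FROM v$undostat;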

UNDO_TABLESPACE

UNDO_TABLESPACE specifies the undo tablespace to be used when an instance starts
up. If this parameter is specified when the instance is in manual undo management
mode, then an error will occur and startup will fail.
If the UNDO_TABLESPACE parameter is omitted, the first available undo tablespace in
the database is chosen. If no undo tablespace is available, the instance will start
without an undo tablespace. In such cases, user transactions will be executed using the SYSTEM rollback segment. You should avoid running in this mode under normal
circumstances.

You can replace an undo tablespace with another undo tablespace while the instance is
running.
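A sketch of such a replacement, assuming a hypothetical new undo tablespace UNDOTBS2 with an illustrative datafile path:

SQL> CREATE UNDO TABLESPACE undotbs2
  2  DATAFILE '/u01/oradata/undotbs2_01.dbf' SIZE 500M;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = undotbs2;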


Others:-


UTL_FILE_DIR

UTL_FILE_DIR lets you specify one or more directories that Oracle should use for
PL/SQL file I/O. If you are specifying multiple directories, you must repeat the UTL_
FILE_DIR parameter for each directory on separate lines of the initialization
parameter file.

All users can read or write to all files specified by this parameter. Therefore all
PL/SQL users must be trusted with the information in the directories specified by this parameter.
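For example, entries for two directories would be repeated on separate lines of the initialization parameter file, as described above (the paths are illustrative; the parameter is static, so a restart is needed):

UTL_FILE_DIR = /u01/app/oracle/outbound
UTL_FILE_DIR = /u01/app/oracle/inbound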

FILEIO_NETWORK_ADAPTERS


FILEIO_NETWORK_ADAPTERS specifies a list of network adapters that can be used to
access the disk storage. On platforms where the database files reside in network
attached storage, this parameter provides the storage access library with the list of network adapters that can be used to access the storage.

The network adapter name is a fully qualified address name of the network card that
can be accessed through the host name database or using the Network Information
Service. The components of the adapter name are separated by periods. For example,
the following is a fully qualified adapter name:
ib1.oracle.com

SKIP_UNUSABLE_INDEXES

SKIP_UNUSABLE_INDEXES enables or disables the use and reporting of tables with
unusable indexes or index partitions.

Values:
■ true
Disables error reporting of indexes and index partitions marked UNUSABLE. This
setting allows all operations (inserts, deletes, updates, and selects) on tables with
unusable indexes or index partitions.

Note: If an index is used to enforce a UNIQUE constraint on a table,
then allowing insert and update operations on the table might violate
the constraint. Therefore, this setting does not disable error reporting
for unusable indexes that are unique.

■ false
Enables error reporting of indexes marked UNUSABLE. This setting does not
allow inserts, deletes, and updates on tables with unusable indexes or index
partitions.


SQL_TRACE

SQL_TRACE enables or disables the SQL trace facility. Setting this parameter to true
provides information on tuning that you can use to improve performance. You can
change the value using the DBMS_SYSTEM package.

Caution: Using this initialization parameter to enable the SQL trace
facility for the entire instance can have a severe performance impact.
Enable the facility for specific sessions using the ALTER SESSION
statement. If you must enable the facility on an entire production
environment, then you can minimize performance impact by:

■ Maintaining at least 25% idle CPU capacity
■ Maintaining adequate disk space for the USER_DUMP_DEST
location
■ Striping disk space over sufficient disks

Note: This parameter is deprecated. Oracle recommends that you
use the DBMS_MONITOR and DBMS_SESSION packages instead. SQL_
TRACE is retained for backward compatibility only.
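Following the note above, a minimal sketch of per-session tracing with DBMS_MONITOR (the username, SID, and SERIAL# values are hypothetical and would come from V$SESSION):

SQL> SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 131, serial_num => 57, waits => TRUE, binds => FALSE);
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 131, serial_num => 57);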

SQLTUNE_CATEGORY

SQLTUNE_CATEGORY specifies the category name for use by sessions to qualify the
lookup of SQL profiles during SQL compilation.


ASM Instance Parameters:-

ASM_DISKGROUPS

ASM_DISKGROUPS specifies a list of names of disk groups to be mounted by an
Automatic Storage Management instance at instance startup or when an ALTER
DISKGROUP ALL MOUNT statement is issued.
Automatic Storage Management (ASM) automatically adds a disk group to this
parameter when the disk group is successfully created or mounted, and automatically
removes a disk group from this parameter when the disk group is dropped or
dismounted.

Issuing the ALTER DISKGROUP...ALL MOUNT or ALTER DISKGROUP...ALL
DISMOUNT command does not affect the value of this parameter.

ASM_DISKSTRING

ASM_DISKSTRING specifies an operating system-dependent value used by Automatic
Storage Management to limit the set of disks considered for discovery. When a new
disk is added to a disk group, each Automatic Storage Management instance that has
the disk group mounted must be able to discover the new disk using the value of
ASM_DISKSTRING.

In most cases, the default value will be sufficient. Using a more restrictive value may reduce the time required for Automatic Storage Management to perform discovery,
and thus improve disk group mount time or the time for adding a disk to a disk group.
A "?" at the beginning of the string gets expanded to the Oracle home directory.
Depending on the operating system, wildcard characters can be used.
It may be necessary to dynamically change ASM_DISKSTRING before adding a disk so
that the new disk will be discovered.

An attempt to dynamically modify ASM_DISKSTRING will be rejected and the old
value retained if the new value cannot be used to discover a disk that is in a disk
group that is already mounted.


ASM_POWER_LIMIT

ASM_POWER_LIMIT specifies the maximum power on an Automatic Storage
Management instance for disk rebalancing. The higher the limit, the faster rebalancing will complete. Lower values will take longer, but consume fewer processing and I/O resources.

If the POWER clause of a rebalance operation is not specified, then the default power
will be the value of ASM_POWER_LIMIT.
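For example, on an ASM instance the default power can be raised dynamically, and an individual rebalance operation can still override it with the POWER clause (DATA is a hypothetical disk group name):

SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 5;
SQL> ALTER DISKGROUP data REBALANCE POWER 8;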