Thursday, February 10, 2011

All about Data Guard


There are two types of standby databases:

• Physical standby database

a) Identical to the primary database on a block-for-block basis
b) Synchronized with the primary database through application of
redo data received from the primary database.

• Logical standby database

a) Shares the same schema definition
b) Synchronized with the primary database by transforming the data
in the redo received from the primary database into SQL
statements and then executing the SQL statements.
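A quick way to confirm the current role of any database in the configuration is to query v$database (a minimal check; the values returned depend on your configuration):

SQL> SELECT database_role, open_mode, protection_mode FROM v$database;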

Types of Services

There are three types of services provided with Data Guard:

• Redo transport services
– LGWR SYNC / ASYNC
– ARCn ASYNC

• Log apply services
– Redo Apply
– SQL Apply

• Role-management services
- Switchover
- Planned role reversal
— Used for OS or hardware maintenance
- Failover
— Unplanned role reversal
— Use in emergency
— Zero or minimal data loss depending on choice
of data protection mode
— Can be initiated automatically when fast-start failover is
enabled.
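As a rough sketch of what role management does for a physical standby, a manual switchover boils down to two statements (the broker or Enterprise Manager can drive the same steps for you):

On the primary:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

On the standby, once the end-of-redo from the primary has been applied:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

Each database is then restarted in its new role.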

Data Protection Modes

• Maximum protection
• Maximum availability
• Maximum performance


Maximum Protection

This protection mode guarantees that no data loss occurs if the primary database fails. To provide this level of protection, the redo data that is needed to recover each transaction must be written to both the local online redo log and the standby redo log (used to store redo data received from another database) on at least one standby database before the transaction commits. To ensure that data loss does not occur, the primary database shuts down if a fault prevents it from writing its
redo stream to at least one remote standby redo log. For multiple- instance Real Application Clusters (RAC) databases, Data Guard shuts down the primary database if it is unable to write the redo records to at least one properly configured database instance.

Maximum Availability

This protection mode provides the highest possible level of data protection without compromising the availability of the primary database. As with maximum protection mode, a transaction does not commit until the redo needed to recover that transaction is written to the local online redo log and to at least one remote standby redo log. Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo stream to a remote standby
redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected and all the gaps in the redo log files are resolved. When all the gaps are resolved, the primary database automatically resumes operating in maximum availability mode. This mode guarantees that no data loss occurs if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Maximum Performance (Default)

This default protection mode provides the highest possible level of data protection without affecting the performance of the primary database. This is accomplished by allowing a transaction to commit as soon as the redo data needed to recover that transaction is written to the local online redo log. The primary database’s redo data stream is also written to at least one standby database, but that redo stream is written asynchronously with respect to the commitment of the transactions
that create the redo data.

When network links with sufficient bandwidth are used, this mode provides a level of data protection that approaches that of maximum availability mode with minimal impact on primary database performance.
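You can check which mode is configured and which level is actually in effect with:

SQL> SELECT protection_mode, protection_level FROM v$database;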


Benefits of Implementing Oracle Data Guard

Oracle Data Guard provides the following benefits:

• Continuous service through disasters or crippling data failures
• Complete data protection against corruptions and data loss
• Efficient use of system resources
• Elimination of idle standby systems
• Flexible configuration of your system to meet business protection and
recovery requirements
• Centralized management

Data Guard Operational Requirements:

a) Hardware and Operating System

• The hardware can be different for the primary and standby databases.
• The operating system and platform architecture for the primary and
standby databases must be the same.
• The operating system releases for the primary and standby databases can
be different.
• If all databases are on the same system, verify that the OS allows you
to mount more than one database with the same name.

b) Oracle Database Software

• Same release of Oracle Database Enterprise Edition must be
installed for all databases.
• SYSDBA privileges are required for the accounts used to manage the
database instances.
• Each database must have its own control file.
• Primary database must operate in ARCHIVELOG mode.
• Enable FORCE LOGGING on the primary database before taking data file
backups for standby creation.
• If any databases use ASM and/or OMF, all should use the same combination.
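The ARCHIVELOG and FORCE LOGGING requirements above can be verified on the primary with a single query:

SQL> SELECT log_mode, force_logging FROM v$database;

LOG_MODE should report ARCHIVELOG and FORCE_LOGGING should report YES.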

Data Guard Architecture:-

• Log writer (LGWR) process:

LGWR collects transaction redo information and updates the online redo logs. In synchronous mode, it ships redo information directly to the remote file server (RFS) process on the standby database and waits for a confirmation before proceeding.

In asynchronous mode, it ships the redo information directly but does not wait before
proceeding. In asynchronous mode, LGWR submits the network I/O request to the network
server (LNSn) process for that destination.

• Archiver (ARCn) process:

ARCn, or a SQL session performing an archival operation,
creates a copy of the online redo logs locally for use in a primary database recovery. The ARCn process may also ship the redo stream to the RFS process while simultaneously archiving the online log. ARCn is also responsible for proactively detecting and resolving gaps on all standby databases.

• Fetch archive log (FAL) (physical standby databases only):

FAL provides a client/server mechanism for resolving gaps detected in the range of archived redo logs that are generated at the primary database and received at the standby database. This process is started only when needed and shuts down as soon as it is finished. It is very likely you will not see this process running.

• Remote file server (RFS) process:
RFS receives redo information from the primary database. RFS can write the redo into standby redo logs or directly to archived redo logs.

Each LNSn and ARCn process from the primary database has its own RFS process.

• Managed recovery process (MRP):

For physical standby databases only, MRP applies archived redo log information to the physical standby database. If you start the managed recovery with the
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE SQL statement, this foreground session performs the recovery. If you use the optional DISCONNECT [FROM SESSION] clause, the MRP background process starts.

If you use Data Guard broker to manage your standby databases, the broker always starts the MRP background process for a physical standby database.

• Logical standby process (LSP):

For logical standby databases only, LSP controls the applying of archived redo log information to the logical standby database.
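On the standby you can watch most of these processes (RFS, ARCH, MRP) at work through the v$managed_standby view, for example:

SQL> SELECT process, status, thread#, sequence#, block# FROM v$managed_standby;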


Standby Redo Logs

A standby redo log is used only when the database is in the standby role to store redo data received from the primary database. Standby redo logs form a separate pool of log file groups.

Configuring standby redo log files is highly recommended on all databases in a Data Guard configuration, including the primary database, to aid in role reversal.

A standby redo log is required to implement:
• The maximum protection and maximum availability levels of data protection
• Real-time apply
• Cascaded redo log destinations

Standby redo logs are recommended for maximum performance data protection mode.
Unless you are using the real-time apply feature, standby redo logs must be archived before the data can be applied to the standby database. The standby archival operation occurs automatically.

You must create at least the same number of standby redo log files as there are online redo log groups on the primary database. It is highly recommended that you have one more standby redo log group than you have online redo log groups on the primary database. In addition, the files must be the same size or larger than the primary database's online redo logs. If your online redo log files are of different sizes, the RFS process automatically uses a standby redo log of the same size as the online redo log file.

The RFS process writes to an archive redo log file if any of the following conditions are met:

• There are no standby redo logs.
• It cannot find the same size standby redo log as the incoming online redo log file.
• All of the standby redo logs of the correct size have not yet been archived.
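Following the sizing rules above, a minimal sketch of adding standby redo log groups (the group numbers, file paths, and 50M size are illustrative; match the size of your online redo logs and create one extra group):

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/srl04.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/oradata/srl05.log') SIZE 50M;

SQL> SELECT group#, bytes, status FROM v$standby_log;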

Redo Apply:

The Data Guard physical standby Redo Apply architecture consists of:

• A production (primary) database, which is linked to one or more standby databases (up to nine) that are identical copies of the production database.

The limit of nine standby databases is imposed by the LOG_ARCHIVE_DEST_n
parameter. In Oracle Database 10g, the maximum number of destinations is 10. One is
used as the local archive destination, leaving the other nine for uses such as the standby database.

Note: You can use the Cascaded Redo Log Destinations feature to incorporate more than
nine standby databases in your configuration.

The primary database is open and active. The standby databases are either in recovery
mode or open in read-only mode, but not both.

Redo is applied to each standby database by using standard Oracle recovery techniques.


Data Guard SQL Apply:

In a logical standby database configuration, Data Guard SQL Apply uses redo information shipped from the primary system. However, instead of using media recovery to apply changes (as in the physical standby database configuration), archived redo log information is transformed into equivalent SQL statements by using LogMiner technology. These SQL statements are then applied to the logical standby database. The logical standby database is open in read/write mode and is available for reporting capabilities.


Real-Time Apply

When you enable the optional real- time apply feature, log apply services apply the redo data from standby redo log files in real time (at the same time the log files are being written to) as opposed to recovering redo from archived redo log files when a log switch occurs. If for some reason the apply service is unable to keep up (for example, if you have a physical standby in READ ONLY mode for a period of time), then the apply service automatically goes to the archived redo log files
as needed. The apply service also tries to catch up and go back to reading the standby redo log files as soon as possible.

Real-time application of redo information provides a number of benefits, including quicker switchover and failover operations, instantly up-to-date results after you change a physical standby database to read-only, up-to-date reporting from a logical standby database, and the ability to leverage larger log files.

Having larger log files with real-time apply is desirable because the apply service stays with a log longer and the overhead of switching has less impact on the real-time apply processing.

If you define a delay on a destination (with the DELAY attribute) and use real- time apply, the delay is ignored.

For physical standby databases, the managed recovery process (MRP) applies the redo from the standby redo log files after the remote file server (RFS) process finishes writing. To start real- time apply for a physical standby database, issue the following command:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

For logical standby databases, the logical standby process (LSP) applies the redo from the standby redo log files after the RFS process finishes writing. To start real- time apply for a logical standby database, issue the following command:

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Note: Standby redo log files are required for real-time apply. It is highly recommended that you have one more standby redo log group than the number of online log groups on the primary database.

Real-time apply is supported by the broker.


Standby Database Modes

You can maintain the standby data in one of the following
modes:

• For physical standby databases
– Redo Apply
– Open read-only mode

• For logical standby databases
– Open read/write mode

Open read/write mode: In this mode, log apply services continue to manage the application of log information from archived redo logs. In addition, the database is open for reporting. The logs are being applied to the logical standby database while users are allowed to perform queries on the tables that are being updated by the log apply service. Users are not allowed to perform DML on the tables in the schemas that the log apply service is maintaining. However, users can modify database objects in other schemas that are not being maintained by the log apply service.
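For a physical standby, moving between its two modes is a matter of stopping and restarting Redo Apply, for example (a sketch):

To open read-only:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;

To return to Redo Apply:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;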


Create Physical Standby Database


Primary Database : firstdb
Standby Database : firstsby


Step 1:

Create a password file for the primary database if one is not already present. Note the password, since we need to create the password file for the standby database with the same password.

Unix
$orapwd file=$ORACLE_HOME/dbs/orapwfirstdb password=admin

Windows
>orapwd file=D:\oracle\product\10.2.0\db_1\database\PWDfirstdb.ora password=admin


Step 2:

Check whether the database is running in ARCHIVELOG mode; if not, alter the database to run in ARCHIVELOG mode.

SQL> ARCHIVE LOG LIST;

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP MOUNT;

SQL> ALTER DATABASE ARCHIVELOG;

SQL> ALTER DATABASE OPEN;


Step 3:
Enable force logging mode for the database. This is to make sure that all changes are logged to redo, even data that is loaded using direct path operations.

SQL> ALTER DATABASE FORCE LOGGING;

Step 4:-

Add the parameters below to the pfile/spfile.

------Primary Database : Primary Role Parameters----

db_name=firstdb
db_unique_name=firstdb
log_archive_config='DG_CONFIG=(firstdb,firstsby)'
log_archive_dest_1='LOCATION=/firstdb/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=firstdb'
log_archive_dest_2='SERVICE=firstsby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=firstsby'
log_archive_dest_state_1=enable
log_archive_dest_state_2=defer
archive_lag_target=1800


------Primary Database : Standby Role Parameters----

fal_server=firstsby
fal_client=firstdb
db_file_name_convert='/oradata/firstsby','/oradata/firstdb'
log_file_name_convert='/oradata/firstsby','/oradata/firstdb'
standby_file_management=auto
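If the database runs on an spfile, the same parameters can be set dynamically with ALTER SYSTEM instead of editing the pfile; a sketch of two of them:

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(firstdb,firstsby)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=firstsby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=firstsby' SCOPE=BOTH;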


Descriptions:-

DB_NAME – The name of the primary database.

DB_UNIQUE_NAME – The name of the current database.

LOG_ARCHIVE_CONFIG – The name of the primary database followed by the names of all the standby databases.

LOG_ARCHIVE_DEST_1 – The location where the archived log files are to be stored, and the role for which the log files are to be archived.

LOG_ARCHIVE_DEST_2 – The Oracle Net service name of the standby database. In VALID_FOR we mention ONLINE_LOGFILE, PRIMARY_ROLE, meaning that this destination is valid only when the current database is running in the primary role.

LOG_ARCHIVE_DEST_STATE_1 – Enable the primary database archiving location.

LOG_ARCHIVE_DEST_STATE_2 – Defer (disable) this standby archiving destination for now, since we have not yet configured the standby database. Enable it once the standby database has been configured.

FAL_SERVER – Fetch archive log server. Archived log files are fetched from this database to resolve gaps on the standby. We mention the standby database here because when the primary database takes the standby role, the standby database becomes the primary.

FAL_CLIENT – The current database will be the client.

DB_FILE_NAME_CONVERT – Needed when the datafile locations differ from the primary database. If we are creating the standby database with the same name and locations as the primary on another server, this is not required.

LOG_FILE_NAME_CONVERT – Same as above, for the redo log files.


Step 5:

Shut down the database, take a cold backup, and then start up the database with the modified init.ora file (pfile/spfile).

Step 6:

Create a standby controlfile for the standby database, and a pfile backup of the primary database.

SQL> alter database create standby controlfile as '/tmp/control01.ctl';

SQL> create pfile='/tmp/initfirstsby.ora' from spfile;

Step 7:

Create the directory structure for the standby database, then copy all the backed-up files (cold backup, pfile, and standby controlfile) into the appropriate locations.


Step 8:

Modify the init parameter as below.

------Standby Database : Primary Role Parameters----

db_name=firstdb
db_unique_name=firstsby
log_archive_config='DG_CONFIG=(firstdb,firstsby)'
log_archive_dest_1='LOCATION=/firstsby/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=firstsby'
log_archive_dest_2='SERVICE=firstdb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=firstdb'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable




------Standby Database : Standby Role Parameters----

fal_server=firstdb
fal_client=firstsby
db_file_name_convert='/oradata/firstdb','/oradata/firstsby'
log_file_name_convert='/oradata/firstdb','/oradata/firstsby'
standby_file_management=auto

Note: You also have to modify the location parameters accordingly, such as the controlfile location, udump, bdump, adump, etc.

Step 9:

Configure the tnsnames.ora file as per the host and service.

E.g., if the primary and standby are on the same system:

FIRSTDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = home-593392fa60)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = firstdb)
)
)

FIRSTSBY =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = home-593392fa60)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = firstsby)
)
)

Step 10:

Using the tnsping utility, check the primary and standby connections.
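For example:

$ tnsping firstdb
$ tnsping firstsby

Both should end with an OK (nn msec) line; if not, recheck the tnsnames.ora entries from Step 9.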

Step 11:

Create a password file for the standby database; the password must be the same as that of the primary database.
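The syntax is the same as in Step 1, just with the standby SID, for example:

Unix
$ orapwd file=$ORACLE_HOME/dbs/orapwfirstsby password=admin

Windows
>orapwd file=D:\oracle\product\10.2.0\db_1\database\PWDfirstsby.ora password=admin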

Step 12:

On Windows, create a new Oracle service.

>oradim -new -sid firstsby

Step 13:

Mount the database.


Unix
$ export ORACLE_SID=firstsby

Windows
set ORACLE_SID=firstsby

$ sqlplus sys/sys as sysdba

SQL> create spfile from pfile='...';
SQL> STARTUP MOUNT;

Step 14:

Start applying the redo on the standby.

SQL> alter database recover managed standby database disconnect from session;

Step 15:

Do some log file switches in the primary database and check whether the archived log files are being transported to the standby database and the redo is being applied on the standby.


In primary

SQL> alter system switch logfile;

In Standby

SQL> select sequence#, archived, applied from V$archived_log;


*************************End of Creating Physical Standby*************

Standby Database:-

Problem:-

I have a 10.2.0.1 database and a standby database on another server that has the redo logs applied every 15 minutes.

I added a new datafile to the primary database, but the standby server's disk was full, so when the standby tried to add the file automatically, it failed.

I've sorted the space issue out, but now I get the following error:

ORA-00283: recovery session canceled due to errors
ORA-01111: name for data file 129 is unknown - rename to correct file
ORA-01110: data file 129: 'C:\ORACLE\ORA10G\DATABASE\UNNAMED00129'
ORA-01157: cannot identify/lock data file 129 - see DBWR trace file
ORA-01111: name for data file 129 is unknown - rename to correct file
ORA-01110: data file 129: 'C:\ORACLE\ORA10G\DATABASE\UNNAMED00129'

Solution:-

STEP1:- Check the name of the datafile on standby database

select name from v$datafile; ---- This command should show the datafile name as UNNAMEDxxxxx

STEP2:- ON STANDBY DATABASE
alter system set standby_file_management='manual';

STEP3:- Rename the datafile
alter database create datafile 'C:\ORACLE\ORA10G\DATABASE\UNNAMED00129' as 'C:\ORACLE\ORA10G\DATABASE\actual_dbfile_name.dbf';


STEP4:- On the standby database
alter system set standby_file_management='auto';

STEP5:- On the standby database
recover managed standby database disconnect;

Shut down and then reopen the standby database, and continue applying the redo logs.

-----------

Problem:-

In the primary database all the datafiles are present in the /u01 mount point, and in the standby database all the datafiles are present in the /u02 mount point.

In the standby, the db_file_name_convert parameter has been set as below:
db_file_name_convert='/u01','/u02'

Now I wanted to create a tablespace in the primary with a 5G datafile, but there is not sufficient space left in the standby database, which has only 2G free. However, I have another mount point, /u03, with 20G of space. In order to use the /u03 mount point, I followed the steps below. First you need to alter the parameter with the same name as the datafile we are going to create in the primary.

Note: If you are going to specify the entire filename, it has to be specified first and then the parent mount point; otherwise it will try to create the datafile in the parent mount point and the whole datafile name will be ignored.

In Standby

SQL> alter system set db_file_name_convert =
'/u01/app/oracle/oradata/mydatafile.dbf','/u03/oradata/mydatafile.dbf',
'/u01/app/oracle/oradata','/u02/app/oracle/oradata' scope=spfile;

sql> recover managed standby database cancel;
Media recovery complete.

sql> shut immediate;

In primary

sql> create tablespace mytblspc datafile '/u01/app/oracle/oradata/mydatafile.dbf' size 5G;

In standby
sql> startup mount;
sql> alter database recover managed standby database disconnect from session;

Now you can see that that particular datafile alone has been created in the /u03 mount point on the standby database.


-----------------------------------------------------------------------------------

Creating a Standby Database Using RMAN

If we are using ASM storage, then the only possible way to take a backup is RMAN; hence we need to follow the method below to create a standby database using RMAN.

1) Create the pfile for the standby (using the above-mentioned parameters) and create a standby controlfile from the primary (take a backup as "backup current controlfile for standby format '/u01/standby_controlfile'" or "copy current controlfile for standby to '/u01/standby_controlfile'").
2) Copy the pfile and controlfile to the destination server.
3) On the destination server, register the standby database statically with the listener.
4) Create the password file for the standby database.
5) On the primary, take the backup of the database using RMAN.
6) Once the backup is completed, take a backup of the backupset as below.

RMAN> BACKUP BACKUPSET 27 format '/u01/full_bkp_of_db';

7) Once the backup of the backupset is completed, copy the above file to the same path on the destination server, i.e., copy the full_bkp_of_db file to the same path on the destination server.

8) Now create the spfile from the pfile, start up the standby database in NOMOUNT state, and exit from the session.

$ export ORACLE_SID=orclstdb
$ sqlplus / as sysdba

Connected to idle instance
SQL> create spfile from pfile = '/u01/initorclstdb.ora';
File Created
sql> startup nomount;
sql> exit;

Note: Please ensure that the ASM disk groups are properly configured on the standby database server.

9) Now, on the primary server, connect to the target and the auxiliary (standby) instance using RMAN.

[oracle@rhel u01]$ rman target / auxiliary sys/admin@orclstdb

Recovery Manager: Release 10.2.0.1.0 - Production on Sat Jan 22 09:25:04 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: ORCL (DBID=1266452760)
connected to auxiliary database: ORCL (not mounted)

RMAN> run
2> {
3> allocate auxiliary channel a1 device type disk;
4> duplicate target database for standby nofilenamecheck;
5> }

using target database control file instead of recovery catalog
allocated channel: a1
channel a1: sid=156 devtype=DISK

Starting Duplicate Db at 22-JAN-11

contents of Memory Script:
{
restore clone standby controlfile;
sql clone 'alter database mount standby database';
}
executing Memory Script

Starting restore at 22-JAN-11

channel a1: starting datafile backupset restore
channel a1: restoring control file
channel a1: reading from backup piece +FLASH/orcl/backupset/2011_01_22/ncnnf0_tag20110122t091459_0.287.741086099
channel a1: restored backup piece 1
failover to piece handle=/u01/standby_controlfile_ORCL tag=TAG20110122T091459
channel a1: restore complete, elapsed time: 00:00:16
output filename=+DATA1/orclstdb/controlfile/current.256.741086717
output filename=+DATA1/orclstdb/controlfile/backup.257.741086719
Finished restore at 22-JAN-11

sql statement: alter database mount standby database

contents of Memory Script:
{
set newname for tempfile 1 to
"+DATA1/orcl/tempfile/temp.265.738672101";
switch clone tempfile all;
set newname for datafile 1 to
"+DATA1/orcl/datafile/system.262.738672057";
set newname for datafile 2 to
"+DATA1/orcl/datafile/undotbs1.263.738672087";
set newname for datafile 3 to
"+DATA1/orcl/datafile/sysaux.264.738672089";
set newname for datafile 5 to
"+DATA1/orcl/datafile/users1.270.740827167";
restore
check readonly
clone database
;
}
executing Memory Script

executing command: SET NEWNAME

renamed temporary file 1 to +DATA1/orcl/tempfile/temp.265.738672101 in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 22-JAN-11

channel a1: starting datafile backupset restore
channel a1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to +DATA1/orcl/datafile/system.262.738672057
restoring datafile 00002 to +DATA1/orcl/datafile/undotbs1.263.738672087
restoring datafile 00003 to +DATA1/orcl/datafile/sysaux.264.738672089
restoring datafile 00005 to +DATA1/orcl/datafile/users1.270.740827167
channel a1: reading from backup piece /u01/full_database_741085808_ORCL
channel a1: restored backup piece 1
piece handle=/u01/full_database_741085808_ORCL tag=TAG20110122T091008
channel a1: restore complete, elapsed time: 00:00:50
Finished restore at 22-JAN-11

contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy recid=6 stamp=741086785 filename=+DATA1/orclstdb/datafile/system.258.741086735
datafile 2 switched to datafile copy
input datafile copy recid=7 stamp=741086785 filename=+DATA1/orclstdb/datafile/undotbs1.260.741086741
datafile 3 switched to datafile copy
input datafile copy recid=8 stamp=741086785 filename=+DATA1/orclstdb/datafile/sysaux.259.741086739
datafile 5 switched to datafile copy
input datafile copy recid=9 stamp=741086785 filename=+DATA1/orclstdb/datafile/users1.261.741086745
Finished Duplicate Db at 22-JAN-11
released channel: a1

RMAN> exit;

10) Now start the MRP process on the standby database and check whether the logs are transferred to the standby and applied.

--------------------------------------------------------



Defining the Redo Transport Mode

Use the attributes of LOG_ARCHIVE_DEST_n:

• ARCH and LGWR
Specify whether the archiver process or the log writer process is responsible for transmitting redo to the standby destination. ARCH is the default.

• SYNC and ASYNC (LGWR only)
Specify whether network I/O operations are to be performed synchronously or asynchronously when using LGWR. SYNC is the default.

• AFFIRM and NOAFFIRM
Ensure that redo has been successfully written to disk on the standby destination

Maximum Protection
• Enables zero data loss
• Redo data must be written to both the local online redo log and the standby redo log on at least one standby database.
• Primary database shuts down if a fault prevents it from writing its redo stream to at least one remote standby redo log.

• Configuration requirements:
– Standby redo log files on at least one standby database
– SYNC, LGWR, and AFFIRM attributes for at least one standby database

Maximum Availability

• Enables zero data loss
• Provides the highest possible level of data protection
without compromising the availability of the primary database
• Redo data must be written to both the local online redo log and the standby redo log on at least one standby database.
• Primary database does not shut down if a fault prevents it from writing its redo stream.

• Configuration requirements:
– Standby redo log files on at least one standby database
– SYNC, LGWR, and AFFIRM attributes
for at least one standby database

Maximum Performance

• Default level of data protection
• Provides the highest possible level of data protection without affecting the performance of the primary database
• Transactions can commit as soon as the redo data is
written to the local online redo log.
• Redo stream is written asynchronously with respect to
the commitment of the transactions that create the redo data.

Setting the Protection Mode by Using SQL

You must set attributes to support the type of protection desired, i.e., configure the redo transport mode and standby redo log files.
• Issue the ALTER DATABASE statement on the primary database at the mount stage if upgrading the protection mode, or in the open stage if downgrading:

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE [PROTECTION | AVAILABILITY | PERFORMANCE];

LOG_ARCHIVE_DEST_n Parameters

Delaying the Application of Redo

You can delay the application of changes to standby databases, thereby providing protection from user errors or corruptions. You can protect against the application of corrupted or erroneous data to the standby database. The apply process also revalidates the log records to prevent application of log corruptions.

For example, if a critical table is accidentally dropped from the primary database, you can prevent this action from affecting the standby database by delaying the application of this change in the standby database.

If operating in maximum protection or maximum availability mode, Data Guard will ensure zero data loss even with the delayed apply in effect.

If you define a delay for a destination that has real- time apply enabled, the delay is ignored.

Note: You can use Flashback Database as an alternative to the Apply Delay configuration option.

Setting LOG_ARCHIVE_DEST_n to Delay the Application of Redo

Use the attributes of LOG_ARCHIVE_DEST_n to control the application of redo:
• DELAY: number of minutes to delay application of redo (default: 30 minutes)
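For example, to hold received redo for 60 minutes before it is applied (reusing the service name from the earlier examples; the value is illustrative):

log_archive_dest_2='SERVICE=firstsby LGWR ASYNC DELAY=60 VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=firstsby'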


Alternate

You can specify one alternate destination for the LOG_ARCHIVE_DEST_n parameter:
• Allows a failed destination to change destinations
– Disk full: switch to a new disk
– Oracle Net link fails: switch to a new network link
• Requires REOPEN=0 or MAX_FAILURE
• Enabled with LOG_ARCHIVE_DEST_STATE_n

log_archive_dest_3='SERVICE=stby1_path1 REOPEN=0
ALTERNATE=LOG_ARCHIVE_DEST_4'
log_archive_dest_4='SERVICE=stby1_path2 REOPEN=0
OPTIONAL'
log_archive_dest_state_3=ENABLE
log_archive_dest_state_4=ALTERNATE


MAX_FAILURE

MAX_FAILURE[=count]
• Number of times redo transport services attempts to
reestablish communication
• Requires REOPEN
• No default count
log_archive_dest_3='SERVICE=o10g1 LGWR MAX_FAILURE=30
REOPEN'

NET_TIMEOUT

• Enables the LGWR process to avoid a network timeout
issue
• Valid with SYNC or ASYNC destinations
• Value supplied is the number of seconds to wait.
• Range of values for NET_TIMEOUT: 15 to 1200
• Default: 180
• Use caution in maximum protection mode.
log_archive_dest_2='SERVICE=o10g2 LGWR SYNC
NET_TIMEOUT=30'

REOPEN

• REOPEN[=seconds]
– Minimum number of seconds to wait before retrying a
failed destination at log switch
– Failures can be network failures, quota exceptions, disk
full, and so on.
– Default: 300 seconds (5 minutes)
• REOPEN=0
– Failed destinations remain disabled until:
— Manually reenabled
— ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE
issued
— Instance restart
– Required when using ALTERNATE destinations with
MAX_FAILURE=0 attributes
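A minimal example (the values are illustrative):

log_archive_dest_2='SERVICE=firstsby LGWR ASYNC REOPEN=60 MAX_FAILURE=5'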


Enabling Multiple Connections for Remote Archival of Redo

The MAX_CONNECTIONS attribute of LOG_ARCHIVE_DEST_n is used to set the number of
parallel connections that are used for transmitting archived redo log files to a remote destination.

The MAX_CONNECTIONS attribute defaults to 1, indicating that a single connection is
established for the communication and transfer of data. The maximum value for
MAX_CONNECTIONS is 5.

Note: You must set the LOG_ARCHIVE_MAX_PROCESSES initialization parameter to be
greater than or equal to the value of MAX_CONNECTIONS to achieve the desired number of parallel connections. If the value of the MAX_CONNECTIONS attribute exceeds the value of LOG_ARCHIVE_MAX_PROCESSES, Data Guard will use the available ARCn processes.
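A sketch of a destination using multiple connections (the service name is reused from the earlier examples):

LOG_ARCHIVE_MAX_PROCESSES=5
log_archive_dest_2='SERVICE=firstsby MAX_CONNECTIONS=5 DB_UNIQUE_NAME=firstsby'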

Creating a Logical Standby Database

Benefits of Implementing a Logical Standby Database

• Provides an efficient use of system resources:
– Open, independent, and active production database
– Additional indexes and materialized views can be created
for improved query performance.

• Reduces workload on the primary database by offloading the following to a logical standby database:

– Reporting
– Summations
– Queries


Securing Your Logical Standby Database

• Configure the database guard to control user access to tables.
• ALTER DATABASE GUARD command keywords:

– ALL: prevents users from making changes to any data in the database.
– STANDBY: prevents users from making changes to any data maintained by
Data Guard SQL Apply.
– NONE: normal security
• Query GUARD_STATUS column in V$DATABASE.
• The database guard level is set to ALL automatically by the broker on the logical standby database.
• Database guard level applies to all users except SYS.
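For example, to set and then verify the guard level on the logical standby:

SQL> ALTER DATABASE GUARD STANDBY;
SQL> SELECT guard_status FROM v$database;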

Preparing to Create a Logical Standby Database
1. Check for unsupported data types.
• Log apply services automatically exclude tables with unsupported data types when applying redo data to the logical standby database.

• Unsupported data types:
– BFILE, ROWID, and UROWID
– User-defined types
– Object types REFs
– Varrays
– Nested tables
– XMLtype

You can query the DBA_LOGSTDBY_UNSUPPORTED data dictionary view to see all of the
tables that contain data types that are not supported by logical standby databases. These tables are not maintained (do not have DML applied) in the logical standby database.

Any changes made to unsupported data types, tables, sequences, or views on the primary database are neither propagated to the logical standby database, nor is an error message returned.

It is a good idea to query this view on the primary database to ensure that those tables necessary for critical applications are not in this list before you create the logical standby database.

If the primary database includes unsupported tables that are critical, consider using a physical standby database instead.

Note: This view does not show any tables from the SYS schema because changes to the SYS schema object are not applied to the logical standby database. In addition, this view does not show tables with table compression.
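A typical check to run on the primary before building the logical standby:

SQL> SELECT DISTINCT owner, table_name
FROM dba_logstdby_unsupported
ORDER BY owner, table_name;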

2. Be aware of unsupported DDL commands.
• ALTER DATABASE
• ALTER SESSION
• ALTER MATERIALIZED VIEW
• ALTER MATERIALIZED VIEW LOG
• ALTER SYSTEM
• CREATE CONTROL FILE
• CREATE DATABASE
• CREATE DATABASE LINK
• CREATE PFILE FROM SPFILE
• CREATE SCHEMA AUTHORIZATION
• CREATE MATERIALIZED VIEW
• CREATE MATERIALIZED VIEW LOG
• CREATE SPFILE FROM PFILE
• DROP DATABASE LINK
• DROP MATERIALIZED VIEW
• DROP MATERIALIZED VIEW LOG
• EXPLAIN
• LOCK TABLE
• SET CONSTRAINTS
• SET ROLE
• SET TRANSACTION

Not all data definition language (DDL) commands that are executed on the primary database are applied to the logical standby database. If you execute any of these commands (mentioned above) on the primary database, they are not executed on any logical standby database in your configuration. You must execute them on the logical standby database to maintain consistency between the primary database and the logical standby database.

3. Ensure row uniqueness.

• Query DBA_LOGSTDBY_NOT_UNIQUE on the primary database to find tables without a unique identifier:

SQL> desc DBA_LOGSTDBY_NOT_UNIQUE
 Name           Null?    Type
 -------------- -------- ------------
 OWNER          NOT NULL VARCHAR2(30)
 TABLE_NAME     NOT NULL VARCHAR2(30)
 BAD_COLUMN              VARCHAR2(1)

• BAD_COLUMN values:
– Y: Data type is unbounded.
– N: Table contains enough column information.

• Add a primary key or unique index to ensure that SQL
Apply can efficiently apply data updates

Because the row IDs on a logical standby database might not be the same as the row IDs on the primary database, a different mechanism must be used to match the updated row on the primary database to its corresponding row on the logical standby database. Primary keys and unique indexes can be used to match the corresponding rows. It is recommended that you add a primary key or a unique index to tables on the primary database (whenever appropriate and possible) to ensure that SQL Apply can efficiently apply data updates to the logical standby database.

You can query the DBA_LOGSTDBY_NOT_UNIQUE view to identify tables in the primary
database that do not have a primary key or unique index with NOT NULL columns. Issue the following query to display a list of tables that SQL Apply might not be able to uniquely identify:

SQL> SELECT OWNER, TABLE_NAME,BAD_COLUMN
FROM DBA_LOGSTDBY_NOT_UNIQUE
WHERE TABLE_NAME NOT IN
(SELECT TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED);

The key column in this view is BAD_COLUMN. If this view returns a row for a given table, you may want to consider adding a primary or unique key constraint on the table.

A value of Y indicates that the table does not have a primary or unique constraint and that the column is defined using an unbounded data type, such as CLOB. If two rows in the table match except for values in their LOB column, then the table cannot be maintained properly and SQL Apply stops.

A value of N indicates that the table does not have a primary or unique constraint but that it contains enough column information to maintain the table in the logical standby database.

However, the redo transport services and log apply services run more efficiently if you add a primary key. You should consider adding a disabled RELY constraint to these tables.


SQL> ALTER TABLE hr.employees
ADD PRIMARY KEY (employee_id, last_name)
RELY DISABLE;

If your application ensures that the rows in a table are unique, you can create a disabled primary key RELY constraint on the table without incurring the overhead of maintaining a primary key on the primary database.

The RELY constraint tells the system to log the named columns (in this example, employee_id and last_name) to identify rows in this table. Be careful to select columns for the disabled RELY constraint that uniquely identify the row. If the columns selected for the RELY constraint do not uniquely identify the row, SQL Apply does not apply redo information to the logical standby database. Supplemental logging must be set to MINIMAL or FULL.


4. Verify that the primary database is configured for
ARCHIVELOG mode.

5. Enable supplemental logging

Supplemental logging must be enabled on the primary database to support a logical standby database. Because an Oracle Database logs only the columns that were modified, this is not always sufficient to uniquely identify the row that changed, and additional information must be put into the stream of redo data.

The supplemental information that is added to the redo data helps
SQL Apply correctly identify and maintain tables in the logical standby database.

The supplemental_db_logging clauses of the ALTER DATABASE command instruct the Oracle Database server to add or stop adding supplemental data to the log stream. Full supplemental logging (or identification key logging) enables database-wide, before-image logging of primary keys or unique indexes (in the absence of primary keys) for all updates. With this type of logging, an application can identify updated rows logically rather than resorting to row IDs. This type of logging is required by SQL Apply.

On the primary database, issue the following statement to add primary key and unique index information to the archived redo log file:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA
(PRIMARY KEY, UNIQUE INDEX) COLUMNS;


You must enable full supplemental logging before you create the logical standby database. The reason is that the logical standby database cannot use archived redo logs that contain both supplemental log data and nonsupplemental log data.

Note: Enterprise Manager automatically enables supplemental logging when you create the logical standby database.

The following columns in the V$DATABASE view have a YES value after supplemental logging has been enabled. YES represents the following for the respective columns:

• SUPPLEMENTAL_LOG_DATA_MIN: LogMiner has sufficient information to support
chained rows and various storage arrangements.
• SUPPLEMENTAL_LOG_DATA_PK: All columns of the primary key are placed in the redo
log whenever there is an update.
• SUPPLEMENTAL_LOG_DATA_UI: If any unique key columns are modified, all other
columns belonging to the unique key are also logged.

Note: If you enable full supplemental logging on your primary database and you have already created physical standby databases, then you must enable supplemental logging on each physical standby database to ensure that future switchovers work correctly.
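All three columns can be verified with a single query after enabling supplemental logging:

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;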

6. Log apply services automatically exclude unsupported objects when applying redo data to the logical standby database.
• Unsupported objects:
– Tables and sequences in the SYS schema
– Tables using table compression
– Tables used to support materialized views
– Global temporary tables
– Tables with unsupported data types

7. Verifying Values of Initialization Parameters

Verify the values of the following initialization parameters on the primary database before creating the logical standby database (the parameters also apply on the standby):

• PARALLEL_MAX_SERVERS: Value must be set to 5 or greater. The recommended value is
9. The SQL Apply Service on the logical standby site uses several parallel processes when applying the SQL to the logical database.

• LOG_PARALLELISM: Value must be set to 1. This is the default value. If this value is changed, LogMiner on the logical standby site cannot read the redo logs.

• SHARED_POOL_SIZE: Value should be 160 MB or greater. This is a recommendation.
Your configuration may operate with a lower value, but in a production environment a lower value may cause performance degradation.

Automatic Deletion of Redo Log Files by SQL Apply

In Oracle Database 10g Release 2, archived redo logs on the logical standby database are automatically deleted by SQL Apply after they have been applied. This feature reduces the amount of space consumed on the logical standby database and eliminates the manual step of deleting the archived redo log files.

* Enabling the auto-delete feature:
EXECUTE dbms_logstdby.apply_set('LOG_AUTO_DELETE','TRUE');

* Disabling the auto-delete feature:
EXECUTE dbms_logstdby.apply_set('LOG_AUTO_DELETE','FALSE');

Step 1:-
a) Create the Physical Standby database.
b) Ensure that the physical standby database is caught up to the primary database.

Step 2:-

• Before converting the physical standby database to a logical standby database, stop Redo Apply.
• Required to avoid applying changes past the redo that
contains the LogMiner dictionary

SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
Media Recovery Complete

Step 3:-
a. Prepare the primary database for role transitions by setting LOG_ARCHIVE_DEST_n parameters appropriately.

b. Set the LOG_ARCHIVE_DEST_3 initialization parameter for transitioning to a logical standby role.

LOG_ARCHIVE_DEST_3=
'LOCATION=/arch/chicago/
VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE)
DB_UNIQUE_NAME=chicago'

LOG_ARCHIVE_DEST_STATE_3=enable

This parameter takes effect only when the primary database is transitioned to the standby database role.

Step 4: Build a LogMiner Dictionary in the Redo Data

• Build a LogMiner dictionary in the redo data so that SQL Apply can properly interpret changes in the redo.

• Supplemental logging is automatically enabled.

SQL> EXECUTE DBMS_LOGSTDBY.BUILD;

Note: The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete so long-running transactions executing on the primary database will affect its operation.

Step 5: Transition to a Logical Standby Database

a. Convert to a logical standby database.


Perform the following steps to prepare the physical standby database to transition to a logical standby database:

a. Issue the ALTER DATABASE RECOVER TO LOGICAL STANDBY db_name command to
continue applying redo data to the physical standby database until it is ready to convert to a logical standby database. Specify a database name (db_name) to identify the new logical standby database.

The redo log files contain the information needed to convert your physical
standby database to a logical standby database. The statement applies redo data until the LogMiner dictionary is found in the redo log files.

b. Shut down the logical standby database.

c. Create a new password file using the orapwd utility. You must create a new password file because the conversion process changes the database name for the logical standby database.

d. Start it in MOUNT mode.
e. Modify the LOG_ARCHIVE_DEST_n parameters to specify separate local destinations for:
- Archived redo log files that store redo data generated by the logical standby database
- Archived redo log files that store redo data received from the primary database
Modify the LOG_ARCHIVE_DEST_1 parameter to be valid for
(ONLINE_LOGFILES,ALL_ROLES).

Step 6: Open the Logical Standby Database
a. Open the new logical standby database with the
RESETLOGS option:

SQL> ALTER DATABASE OPEN RESETLOGS;

b. Start the application of redo data to the logical standby
database:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Step 7: Verify That the Logical Standby Database Is Performing Properly

a. Verify that the archived redo log files were registered:

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME,
DICT_BEGIN, DICT_END
FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

b. Verify that redo data is being applied correctly:

SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';

c. View the V$LOGSTDBY view to see current SQL Apply activity:

SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

d. Check the overall progress of SQL Apply:

SELECT APPLIED_SCN, LATEST_SCN FROM V$LOGSTDBY_PROGRESS;


Additional Configuration Tasks

Perform the following tasks as appropriate for your configuration:

• Configure standby redo logs.
• Enable Flashback Database.
• Upgrade the data protection mode.

Setting up a Skip Handler for a DDL Statement

The DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters are not honored once a physical standby database is converted to a logical standby database. This can be a problem, for example, when adding a datafile to the primary database and the datafile paths are different between the primary and standby. This section describes the steps necessary to register a skip handler and provide SQL Apply with a replacement DDL string to execute by converting the path names of the primary database datafiles to the standby datafile path names.

This may or may not be a problem for everyone. For example, if you are using Oracle Managed Files (OMF), SQL Apply will successfully execute DDL statements generated from the primary to CREATE and ALTER tablespaces and their associated system generated path name on the logical standby.


# ---[ Primary ] --- #

SQL> select name, value from v$spparameter
2 where name in ('db_create_file_dest', 'db_unique_name');

NAME VALUE
------------------------- ------------------
db_create_file_dest /u02/oradata
db_unique_name modesto

SQL> create tablespace data2 datafile size 5m;

Tablespace created.

SQL> select tablespace_name, file_name
2 from dba_data_files
3 where tablespace_name = 'DATA2';

TABLESPACE_NAME FILE_NAME
------------------ ---------------------------------------------------------
DATA2 /u02/oradata/MODESTO/datafile/o1_mf_data2_6lwh7wgv_.dbf


# ---[ Logical Standby ] --- #

SQL> select name, value from v$spparameter
2 where name in ('db_create_file_dest', 'db_unique_name');

NAME VALUE
------------------------- ------------------
db_create_file_dest /u02/oradata
db_unique_name turlock

----------------------- alert.log -----------------------
Wed Jan 12 18:45:28 EST 2011
Completed: create tablespace data2 datafile size 5m
---------------------------------------------------------

SQL> select tablespace_name, file_name
2 from dba_data_files
3 where tablespace_name = 'DATA2';

TABLESPACE_NAME FILE_NAME
------------------ ---------------------------------------------------------
DATA2 /u02/oradata/TURLOCK/datafile/o1_mf_data2_6lwh8q9d_.dbf

If, on the other hand, you attempt to specify a physical path name in the CREATE/ALTER TABLESPACE statement that does not exist on the logical standby, SQL Apply will not succeed in processing the statement and will fail. Whenever SQL Apply encounters an error while applying a SQL statement, it stops and provides the DBA with an opportunity to correct the statement and restart SQL Apply.


# ---[ Primary ] --- #

SQL> alter tablespace data2 add datafile '/u05/oradata/MODESTO/data02.dbf' size 5m;

Tablespace altered.


# ---[ Logical Standby ] --- #

----------------------- alert.log -----------------------
Wed Jan 12 19:59:36 EST 2011
alter tablespace data2 add datafile '/u05/oradata/MODESTO/data02.dbf' size 5m
Wed Jan 12 19:59:36 EST 2011
ORA-1119 signalled during: alter tablespace data2 add datafile '/u05/oradata/MODESTO/data02.dbf' size 5m...
LOGSTDBY status: ORA-01119: error in creating database file '/u05/oradata/MODESTO/data02.dbf'
ORA-27040: file create error, unable to create file
Linux Error: 2: No such file or directory
LOGSTDBY Apply process P004 pid=31 OS id=28497 stopped
Wed Jan 12 19:59:36 EST 2011
Errors in file /u01/app/oracle/admin/turlock/bdump/turlock_lsp0_28465.trc:
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '/u05/oradata/MODESTO/data02.dbf'
ORA-27040: file create error, unable to create file
Linux Error: 2: No such file or directory
LOGSTDBY Analyzer process P003 pid=30 OS id=28495 stopped
LOGSTDBY Apply process P005 pid=32 OS id=28499 stopped
LOGSTDBY Apply process P006 pid=33 OS id=28501 stopped
LOGSTDBY Apply process P007 pid=34 OS id=28503 stopped
LOGSTDBY Apply process P008 pid=35 OS id=28505 stopped
---------------------------------------------------------

SQL> select event_timestamp, event, status
2 from dba_logstdby_events;

EVENT_TIMESTAMP EVENT Status
----------------------------- ------------------------------ ------------------------------
12-JAN-11 07.59.36.134349 PM alter tablespace data2 add dat ORA-01119: error in creating d
afile '/u05/oradata/MODESTO/da atabase file '/u05/oradata/MOD
ta02.dbf' size 5m ESTO/data02.dbf'
ORA-27040: file create error,
unable to create file
Linux Error: 2: No such file o
r directory


-- ----------------------------------------------
-- Disable the database guard for this session so
-- we can modify the logical standby.
-- ----------------------------------------------

SQL> alter session disable guard;

Session altered.


-- ----------------------------------------------
-- Issue a compensating transaction or statement
-- on the logical standby. For example, issue the
-- CREATE TABLESPACE or ALTER TABLESPACE command
-- that failed, but use the correct file path
-- name.
-- ----------------------------------------------

SQL> alter tablespace data2 add datafile '/u05/oradata/TURLOCK/data02.dbf' size 5m;

Tablespace altered.


-- ----------------------------------------------
-- Re-enable the database guard for this session.
-- ----------------------------------------------

SQL> alter session enable guard;

Session altered.


-- ----------------------------------------------
-- Finally, restart logical apply with a clause
-- that will cause the failed transaction to be
-- automatically skipped.
-- ----------------------------------------------

SQL> alter database start logical standby apply immediate skip failed transaction;

Database altered.


-- ----------------------------------------------
-- Verify results.
-- ----------------------------------------------

SQL> select tablespace_name, file_name
2 from dba_data_files
3 where tablespace_name = 'DATA2';

TABLESPACE_NAME FILE_NAME
------------------ ---------------------------------------------------------
DATA2 /u02/oradata/TURLOCK/datafile/o1_mf_data2_6lwh8q9d_.dbf
DATA2 /u05/oradata/TURLOCK/data02.dbf


As stated at the beginning of this section, it is possible to avoid errors of this nature on the logical standby database by registering a skip handler and providing SQL Apply with a replacement DDL string to execute, converting the path names of the primary database datafiles to the standby datafile path names. The steps to perform this are presented below. The actions below should be run on the logical standby database.


First, create the PL/SQL 'skip procedure' to handle tablespace DDL transactions.

create or replace procedure sys.handle_tbs_ddl (
old_stmt in varchar2
, stmt_typ in varchar2
, schema in varchar2
, name in varchar2
, xidusn in number
, xidslt in number
, xidsqn in number
, action out number
, new_stmt out varchar2
) as
begin

-- --------------------------------------------------------
-- Any primary file specification that contains the
-- '/u05/oradata/MODESTO' directory should be changed to
-- the '/u05/oradata/TURLOCK' directory specification.
-- --------------------------------------------------------
new_stmt := replace(old_stmt, '/u05/oradata/MODESTO', '/u05/oradata/TURLOCK');
action := dbms_logstdby.skip_action_replace;

exception
when others then
action := dbms_logstdby.skip_action_error;
new_stmt := null;

end handle_tbs_ddl;
/

Stop the LSP process.

SQL> alter database stop logical standby apply;

Database altered.

Register the skip procedure with SQL Apply.

SQL> execute dbms_logstdby.skip(stmt => 'TABLESPACE', proc_name => 'sys.handle_tbs_ddl');

PL/SQL procedure successfully completed.

Start LSP Process.

SQL> alter database start logical standby apply immediate;

Database altered.

Test:-

# ---[ Primary ] --- #

SQL> alter tablespace data2 add datafile '/u05/oradata/MODESTO/data03.dbf' size 5m;

Tablespace altered.


# ---[ Logical Standby ] --- #

----------------------- alert.log -----------------------
Wed Jan 12 20:51:58 EST 2011
LOGSTDBY status: ORA-16110: user procedure processing of logical standby apply DDL
LOGSTDBY status: ORA-16202: Skip procedure requested to replace statement
Wed Jan 12 20:51:58 EST 2011
alter tablespace data2 add datafile '/u05/oradata/TURLOCK/data03.dbf' size 5m
Completed: alter tablespace data2 add datafile '/u05/oradata/TURLOCK/data03.dbf' size 5m
---------------------------------------------------------

SQL> select tablespace_name, file_name
2 from dba_data_files
3 where tablespace_name = 'DATA2';

TABLESPACE_NAME FILE_NAME
------------------ ---------------------------------------------------------
DATA2 /u02/oradata/TURLOCK/datafile/o1_mf_data2_6lwh8q9d_.dbf
DATA2 /u05/oradata/TURLOCK/data02.dbf
DATA2 /u05/oradata/TURLOCK/data03.dbf

Recovering from a Failed DML Transaction

When SQL Apply receives an error during a DDL transaction, it is often safe for the user to manually issue a compensating DDL transaction, skip the failed transaction, and restart SQL Apply. A failed DML transaction, however, can become problematic and should be carefully reviewed before simply trying to skip it. For example, suppose you have a transaction that consists of 100 inserts and 2 updates. If one of the inserts fails and the user decides to simply skip that single insert, then the entire transaction will be considered logically corrupt, creating data integrity issues on the standby database.

In most cases, a failed DML transaction is the result of a table associated with that transaction being out of synch with its corresponding table on the primary. The following is a list of common reasons why a table on the logical standby may become out of synch with the primary:


Having the SQL Apply engine skip transactions without properly issuing a compensating transaction.

Providing users improper access to objects being maintained by SQL Apply.

Performing unsupported operations on the primary.

Modifying user data as the SYS user.


The best way to resolve this type of error is to instantiate the table so that it is once again in sync with the primary. Fortunately, Oracle provides the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure as an easy and efficient method to bring a table on a logical standby back into sync with the version on the primary database.



About the INSTANTIATE_TABLE Procedure

This procedure creates and populates a table in the standby database from a corresponding table in the primary database. The procedure requires the name of a database link (dblink) as an input parameter. If the table already exists in the logical standby database, it is dropped and re-created based on the table definition at the primary database. This procedure brings over only the data associated with the table, not the associated indexes and constraints.

Use the INSTANTIATE_TABLE procedure to:


Add a table to a standby database.
Re-create a table in a standby database.

Syntax

DBMS_LOGSTDBY.INSTANTIATE_TABLE (
schema_name IN VARCHAR2,
table_name IN VARCHAR2,
dblink IN VARCHAR2);
Parameters

Parameter Description
------------- -----------------------------------------------------------
schema_name Name of the schema
table_name Name of the table to be created or re-created in the
standby database
dblink Name of the database link to the primary database, connecting
to an account that has privileges to read and lock the table
in the primary database, as well as the SELECT_CATALOG_ROLE
on the primary database
Usage Notes


Use this procedure to create and populate a table in a way that keeps the data on the standby database transactionally consistent with the primary database.

This table will not be synchronized with the rest of the tables being maintained by SQL Apply and SQL Apply will not start to maintain it until SQL Apply encounters redo that occurred after the table was instantiated from the primary. The SCN at which the table was instantiated from the primary database is available in the DBA_LOGSTDBY_EVENTS view.

The specified table must be a table that is supported by logical standby (that is, it does not appear in the DBA_LOGSTDBY_UNSUPPORTED_TABLES view on the primary database).

If there are any skip rules that specifically name this table (without any wild cards), those skip rules will be dropped as part of INSTANTIATE_TABLE, so that the table will be properly maintained by SQL Apply in the future. If there are skip rules that indirectly reference this table (match a skip rule with a wild card in the schema_name or table_name, and have a TABLE, DML, or SCHEMA_DDL statement type), INSTANTIATE_TABLE will fail with an ORA-16278 error. Any multi-object skip rules that pertain to the table must be dropped or changed before re-attempting the INSTANTIATE_TABLE call.
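
As a concrete example, the instantiation SCN mentioned above can be retrieved afterwards with a query along these lines (a sketch; the STATUS text matches what appears in the test output later in this section):

SQL> select event_time, status
  2  from dba_logstdby_events
  3  where status like 'Instantiation SCN%';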


The first step is to create a database link on the logical standby that points to the primary.


SQL> connect sys/manager@turlock as sysdba
Connected.

SQL> create database link modesto
2 connect to scott identified by tiger
3 using 'modesto.idevelopment.info';

Database link created.

Next, let's modify the structure of the SCOTT.EMP table on the logical standby, which will result in a failed DML transaction on the logical standby when updates for this table arrive from the primary.

SQL> connect sys/manager@turlock as sysdba
Connected.

SQL> alter database guard none;

Database altered.

SQL> connect scott/tiger@turlock
Connected.

SQL> alter table emp add (new_column varchar2(10));

Table altered.

From the primary database, modify a record or two in the SCOTT.EMP table. After modifying the table records, perform a log switch on the primary to send its redo to the logical standby.


SQL> connect scott/tiger@modesto
Connected.

SQL> update emp set monthly_salary=22083 where emp_id=50;

1 row updated.

SQL> commit;

Commit complete.

SQL> connect sys/manager@modesto as sysdba
Connected.

SQL> alter system switch logfile;

System altered.

Query the DBA_LOGSTDBY_EVENTS view on the logical standby to look for the failed DML transaction.

SQL> connect sys/manager@turlock as sysdba
Connected.

SQL> select a.event_time, a.status
2 from (select b.event_time, b.status
3 from dba_logstdby_events b
4 order by b.event_time desc
5 ) a
6 where rownum < 5;

EVENT_TIME STATUS
-------------------- ---------------------------------------------------------
03-FEB-2011 16:11:39 ORA-26676: Table 'SCOTT.EMP' has 9 columns in the LCR
and 10 columns in the replicated site
03-FEB-2011 16:11:38 ORA-16222: automatic Logical Standby retry of last action
03-FEB-2011 16:11:38 ORA-16111: log mining and apply setting up
03-FEB-2011 16:11:38 ORA-26676: Table 'SCOTT.EMP' has 9 columns in the LCR
and 10 columns in the replicated site

To start the recovery process for the failed DML transaction, we first need to stop logical standby apply.

SQL> alter database stop logical standby apply;

Database altered.

Next, use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure on the logical standby to put the SCOTT.EMP table back in sync with the primary.

SQL> exec dbms_logstdby.instantiate_table('SCOTT', 'EMP', 'MODESTO.IDEVELOPMENT.INFO');

PL/SQL procedure successfully completed.

Restart logical standby apply.

SQL> alter database start logical standby apply immediate;

Database altered.

Verify the integrity of the logical standby and the new SCOTT.EMP table.


SQL> select a.event_time, a.status
2 from (select b.event_time, b.status
3 from dba_logstdby_events b
4 order by b.event_time desc
5 ) a
6 where rownum < 5;

EVENT_TIME STATUS
-------------------- -------------------------------------------------------------------------------------
03-FEB-2011 16:26:04 ORA-16111: log mining and apply setting up
03-FEB-2011 16:23:49 Instantiation SCN: 2704695
03-FEB-2011 16:11:39 ORA-26676: Table 'SCOTT.EMP' has 9 columns in the LCR and 10 columns in the replicated site
03-FEB-2011 16:11:38 ORA-16222: automatic Logical Standby retry of last action

SQL> select monthly_salary from scott.emp where emp_id = 50;

MONTHLY_SALARY
--------------
22083

Set the GUARD on the logical standby back to its original setting.

SQL> alter database guard standby;

Database altered.
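
To confirm the guard setting, a quick sanity check against V$DATABASE (the GUARD_STATUS column reports ALL, STANDBY, or NONE):

SQL> select guard_status from v$database;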

--------------------


Features of Data Guard Broker

• Automated creation of Data Guard configurations incorporating a primary database, a new or existing (physical or logical) standby database, redo transport services, and log apply services.
Note: Any of the databases in the configuration can be a RAC database.

• Adding up to eight new or existing (physical or logical, RAC or non-RAC) standby databases to each existing Data Guard configuration, for a total of one primary database, and from one to nine standby databases in the same configuration.

• Managing an entire Data Guard configuration (including all databases, redo transport services, and log apply services) through a client connection to any database in the configuration.

• Invoking switchover or failover with a single command to initiate and control complex role changes across all databases in the configuration.

• Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the log apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing, and performance tools.

Data Guard Broker: Components

• Client-side:
– Oracle Enterprise Manager 10g Grid Control
– DGMGRL (command-line interface)

• Server-side: Data Guard monitor
– DMON process
– Configuration files

The Data Guard monitor is a broker component that is integrated with the Oracle
database. It comprises the Data Guard monitor process (DMON) and the broker configuration files, through which you can control the databases of a configuration, modify their behavior at run time, monitor the overall health of the configuration, and receive notification of other operational characteristics.
The configuration file contains profiles that describe the states and properties of the databases in the configuration. Associated with each database are various properties that the DMON process uses to control the database’s behavior. The properties are recorded in the configuration file as a part of the database’s object profile that is stored there. Many database properties are used to control database initialization parameters related to the Data Guard environment.
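
For example, a property such as ArchiveLagTarget (visible in the SHOW DATABASE VERBOSE output later in this post) maps to an initialization parameter and can be changed through the broker instead of via ALTER SYSTEM; a minimal sketch:

DGMGRL> edit database orcl set property ArchiveLagTarget = 1200;
Property "archivelagtarget" updated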

A broker configuration consists of:

• A configuration object: A named collection of database profiles. A database profile is a description of a database object, including its current state, current status, and properties.

• Database objects: Objects corresponding to primary or standby databases.

• Instance objects: A database object may comprise one or more instance objects if it is a RAC database.


The Data Guard monitor process (DMON) is an Oracle background process that runs on every site that is managed by the broker. When you start the Data Guard broker, a DMON process is created.

When you use Enterprise Manager or the Data Guard command- line interface (CLI), the DMON process is the server-side component that interacts with the local instance and the DMON processes that are running on other sites to perform the requested function.

The DMON process is also responsible for monitoring the health of the broker configuration and for ensuring that every site has a consistent copy of the configuration files in which the DMON process stores its configuration data.

There are two multiplexed versions of the configuration file on each site.
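
To see where these files live on a given database, a quick check from SQL*Plus:

SQL> show parameter dg_broker_config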

Data Guard Broker: Requirements

• Enterprise Edition of Oracle Database 10g
• Single-instance or multi-instance environment
• COMPATIBLE must be set to 9.2.0.1.0 or higher for primary and standby databases.
• Oracle Net network files must be configured for databases that you add to the configuration.
• The value of the LOCAL_LISTENER initialization parameter on each instance that is part of your Data Guard broker configuration must resolve to a listener address that is reachable by all members of the configuration.
• To enable the Data Guard broker CLI to restart instances during the course of broker operations, a service with a specific name must be statically registered with the local listener of each instance. The value of the GLOBAL_DBNAME attribute must be set to a concatenation of db_unique_name_DGMGRL.db_domain.
• DG_BROKER_START = TRUE
• The primary database must be in ARCHIVELOG mode.
• All databases must be in MOUNT or OPEN mode.

• If any database in your configuration is a RAC database, you must configure the
DG_BROKER_CONFIG_FILEn initialization parameters for that database so that they point
to the same shared files for all instances of that database. You cannot use the default values for these parameters.

Note: The shared files could be files on a cluster file system, if available, or on raw devices.

• If any database in your configuration is a RAC database, the START_OPTIONS for that
database must be set to MOUNT in the Oracle Cluster Registry (OCR) by using SRVCTL, as
follows:

SRVCTL ADD DATABASE -d <db_unique_name> -o <$ORACLE_HOME> -s MOUNT
or
SRVCTL MODIFY DATABASE -d <db_unique_name> -o <$ORACLE_HOME> -s MOUNT

• You must use a server parameter file (SPFILE) for initialization parameters.
• Using the SPFILE enables the Data Guard broker to keep its configuration file and the database SPFILE consistent.

Data Guard Monitor: DMON Process

The Data Guard monitor comprises two components: the DMON process and the configuration file.
The Data Guard Monitor process (DMON) is an Oracle background process that is part of each database instance managed by the broker. When you start the Data Guard broker, a portion of the SGA is allocated and a DMON process is created. The amount of memory allocated is typically less than 50 KB per site; the actual amount on your system varies.

When you use Enterprise Manager or the CLI, the DMON process is the server-side component that interacts with the local instance and the DMON processes running on other sites to perform the requested function.

The DMON process is also responsible for monitoring the health of the broker configuration and for ensuring that every database has a consistent copy of the broker configuration files in which the DMON process stores its configuration data.

Data Guard Monitor: Configuration File

The DMON process maintains persistent configuration data about all the databases in the broker configuration in a broker configuration file.

Every database that is part of the Data Guard broker configuration has two broker configuration files, which the broker maintains and keeps synchronized. One of the files is in use and the other acts as a backup.

The configuration files are binary files and cannot be edited.

When the broker is started for the first time, the configuration files are created and named automatically by using a default name (dr1.dat and dr2.dat). You can override this default name by setting the DG_BROKER_CONFIG_FILEn initialization parameters. You can also change the configuration file names dynamically by issuing the ALTER SYSTEM SQL statement.
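
For example, a sketch of relocating the files (the paths here are hypothetical, and the broker typically must be stopped, DG_BROKER_START=FALSE, before these parameters can be changed):

SQL> alter system set dg_broker_config_file1 = '/u01/app/oracle/admin/orcl/dr1orcl.dat' scope=both;
SQL> alter system set dg_broker_config_file2 = '/u01/app/oracle/admin/orcl/dr2orcl.dat' scope=both;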

The configuration files contain entries that describe the state and properties of the databases in the configuration. For example, the files record the databases that are part of the configuration, the roles and properties of each of the databases, and the state of each of the databases in the configuration.

Two files are maintained so that there is always a record of the last known valid
state of the configuration. The broker uses the data in the configuration file to configure and start the databases, control each database’s behavior, and provide information to the CLI and Enterprise Manager.


Benefits of Using the Data Guard Broker

By automating the tasks required to configure and monitor a Data Guard configuration, the broker enhances the high-availability, data protection, and disaster protection capabilities that are inherent in Oracle Data Guard.

If the primary database fails, the broker streamlines the process for any one of the standby databases to replace the primary database and take over production processing.

Using the broker, you can logically define and create a Data Guard configuration consisting of a primary database and a (physical or logical, RAC or non-RAC) standby database.

The broker enables easy configuration of additional standby databases. After you create a Data Guard configuration consisting of a primary and a standby database, you can add up to eight standby databases (new or existing, physical or logical) to each Data Guard configuration.

The broker provides simplified, centralized, and extended management:
* Automates switchover and failover to a specified standby database in the
broker configuration.
* Automatically communicates between the databases in
a Data Guard configuration using Oracle Net Services.
* Provides built-in validation that monitors the health of
all the databases in the configuration.


Configuring Data Guard Broker

Step 1:-

Set dg_broker_start = true in both the primary and standby databases.

Execute the below in both primary and standby:-
alter system set dg_broker_start = true scope=both;

Set up the local_listener parameter on both the primary and standby databases if the listener port is not the default 1521.

alter system set local_listener = 'LISTENER_DG' scope=both;
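
For the LISTENER_DG alias above to resolve, the tnsnames.ora on each server needs a matching entry; a sketch, assuming a non-default port 1522 (both the alias and the port are illustrative):

LISTENER_DG =
  (ADDRESS = (PROTOCOL = TCP)(HOST = rhel.manzoor.com)(PORT = 1522))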

Step 2:-

By default, two configuration files get created in the Oracle home path:

dg_broker_config_file1 and dg_broker_config_file2

Step 3:-

Register the database statically with the listener on both the primary and standby databases. Set the GLOBAL_DBNAME parameter equal to db_unique_name_DGMGRL.db_domain.

See the below example.

Primary database:-



SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = orcl_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (SID_NAME = orcl)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rhel.manzoor.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )

Standby Database:-

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = orclstdb)
      (GLOBAL_DBNAME = orclstdb_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = linux.manzoor.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )


Step 4:-

Configure the tnsnames.ora file with the service name equal to the GLOBAL_DBNAME on both the primary and standby databases.

Primary:-

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rhel.manzoor.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl_DGMGRL)
    )
  )

ORCLSTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux.manzoor.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orclstdb_DGMGRL)
    )
  )


Standby:-

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rhel.manzoor.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl_DGMGRL)
    )
  )

ORCLSTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = linux.manzoor.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orclstdb_DGMGRL)
    )
  )

Step 5:-

* Connect to dgmgrl CLI.
* Create a new configuration.
* Add the standby database.
* Enable the configuration.

[oracle@rhel admin]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/admin@orcl
Connected.
DGMGRL> help

The following commands are available:

add Add a standby database to the broker configuration
connect Connect to an Oracle instance
create Create a broker configuration
disable Disable a configuration, a database, or Fast-Start Failover
edit Edit a configuration, database, or instance
enable Enable a configuration, a database, or Fast-Start Failover
exit Exit the program
failover Change a standby database to be the primary database
help Display description and syntax for a command
quit Exit the program
reinstate Change a disabled database into a viable standby database
rem Comment to be ignored by DGMGRL
remove Remove a configuration, database, or instance
show Display information about a configuration, database, or instance
shutdown Shutdown a currently running Oracle instance
start Start Fast-Start Failover observer
startup Start an Oracle database instance
stop Stop Fast-Start Failover observer
switchover Switch roles between the primary database and a standby database

Use "help " to see syntax for individual commands

DGMGRL> help create

Create a broker configuration

Syntax:

CREATE CONFIGURATION <configuration name> AS
PRIMARY DATABASE IS <database name>
CONNECT IDENTIFIER IS <connect identifier>;

DGMGRL> create configuration prod_data_guard as primary database is orcl connect identifier is orcl;
Configuration "prod_data_guard" created with primary database "orcl"

DGMGRL> show configuration verbose

Configuration
Name: prod_data_guard
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database

Current status for "prod_data_guard":
DISABLED

DGMGRL> help add

Add a standby database to the broker configuration

Syntax:

ADD DATABASE <database name> AS
CONNECT IDENTIFIER IS <connect identifier>
MAINTAINED AS {PHYSICAL|LOGICAL};

DGMGRL> add database orclstdb as connect identifier is orclstdb maintained as logical;
Database "orclstdb" added

DGMGRL> show configuration verbose;

Configuration
Name: prod_data_guard
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orclstdb - Logical standby database

Current status for "prod_data_guard":
DISABLED

DGMGRL> show database verbose orcl

Database
Name: orcl
Role: PRIMARY
Enabled: NO
Intended State: OFFLINE
Instance(s):
orcl

Properties:
InitialConnectIdentifier = 'orcl'
LogXptMode = 'ASYNC'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
LsbyASkipTxnCfgPr = '0,0,0'
LsbyDSkipTxnCfgPr = '0,0,0'
LsbyASkipCfgPr = ''
LsbyDSkipCfgPr = ''
LsbyASkipErrorCfgPr = ''
LsbyDSkipErrorCfgPr = ''
LsbyMaxEventsRecorded = '0'
LsbyTxnConsistency = ''
LsbyRecordSkipErrors = ''
LsbyRecordSkipDdl = ''
LsbyRecordAppliedDdl = ''
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '2'
LogArchiveMinSucceedDest = '1'
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
LsbyParameters = '(monitor)'
LsbySkipTxnTable = '(monitor)'
LsbySkipTable = '(monitor)'
LsbyFailedTxnInfo = '(monitor)'
HostName = 'rhel.manzoor.com'
SidName = 'orcl'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=rhel.manzoor.com)(PORT=1521))'
StandbyArchiveLocation = '+DATA1'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
LsbyMaxSga = '0'
LsbyMaxServers = '0'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'

Current status for "orcl":
DISABLED

DGMGRL> help enable

Enable a configuration, a database, or Fast-Start Failover

Syntax:

ENABLE CONFIGURATION;

ENABLE DATABASE <database name>;

ENABLE FAST_START FAILOVER;

DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration verbose

Configuration
Name: prod_data_guard
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orclstdb - Logical standby database

Current status for "prod_data_guard":
SUCCESS


DGMGRL> help switchover

Switch roles between the primary database and a standby database

Syntax:

SWITCHOVER TO <database name>;

DGMGRL> switchover to orclstdb
Performing switchover NOW, please wait...
Switchover succeeded, new primary is "orclstdb"

DGMGRL> switchover to orcl
Performing switchover NOW, please wait...
Error: ORA-16821: logical standby database dictionary not yet loaded

Failed.
Unable to switchover, primary database is still "orclstdb"

----------------------------------------------------------------------------
ORA-16821: logical standby database dictionary not yet loaded

Cause: Logical standby apply had not finished loading the dictionary. This warning is flagged by the broker's health check mechanism. This error is also flagged by failover and switchover if the target standby database has not loaded its dictionary.

Action: Start SQL Apply on the logical standby database and wait for it to reach the APPLYING state
-----------------------------------------------------------------------------

Retry after some time.

DGMGRL> switchover to orcl
Performing switchover NOW, please wait...
Switchover succeeded, new primary is "orcl"
------------------------------------------------------------------------------

Enabling Fast Start Failover:-

1) Check whether the primary and standby databases are configured with standby redo log files; if not, add them. For example:

alter database add standby logfile group 3 ('/u01/stdredo.rdo') size 20m;
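
A quick way to verify the standby redo logs on each database is to query V$STANDBY_LOG:

SQL> select group#, thread#, bytes/1024/1024 mb, status from v$standby_log;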


2) Ensure that the log transport mode is set to 'SYNC' on both databases:

DGMGRL> edit database orcl set property LogXptMode = SYNC;
Property "logxptmode" updated
DGMGRL> edit database orclstdb set property LogXptMode = SYNC;
Property "logxptmode" updated

3) Set the fast-start failover target database for both databases.

DGMGRL> edit database orcl set property FastStartFailoverTarget = orclstdb;
Property "faststartfailovertarget" updated
DGMGRL> edit database orclstdb set property FastStartFailoverTarget = orcl;
Property "faststartfailovertarget" updated

4) Upgrade the Protection Mode to Maximum Availability.

DGMGRL> show configuration verbose;

Configuration
Name: prod_data_guard
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orclstdb - Logical standby database

Current status for "prod_data_guard":
SUCCESS

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Operation requires shutdown of instance "orcl" on database "orcl"
Shutting down instance "orcl"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "orcl" on database "orcl"
Starting instance "orcl"...
ORACLE instance started.
Database mounted.
DGMGRL> show configuration verbose;

Configuration
Name: prod_data_guard
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orclstdb - Logical standby database

Current status for "prod_data_guard":
SUCCESS

5) Enable Flashback Database on the Primary and Standby databases if not already enabled.
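
A minimal sketch of enabling Flashback Database (run on each database; it assumes the flash recovery area, db_recovery_file_dest, is already configured, and in 10g the database must be mounted rather than open):

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database flashback on;
SQL> alter database open;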

6) Start the fast-start failover observer. It has to run in the background on the server, so we can create a script and run it in the background.

Script:-

$ vi start_observer.sh

#!/bin/ksh
# Start the fast-start failover observer in a background DGMGRL session
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
dgmgrl << EOF
connect sys/admin@orcl
start observer
EOF

$ chmod 777 start_observer.sh

$ nohup ./start_observer.sh &

7) Now enable the Fast start Failover.

DGMGRL> connect sys/admin@orcl
Connected.
DGMGRL> show configuration verbose;

Configuration
Name: prod_data_guard
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: DISABLED
Databases:
orcl - Primary database
orclstdb - Logical standby database

Current status for "prod_data_guard":
SUCCESS

DGMGRL> help enable

Enable a configuration, a database, or Fast-Start Failover

Syntax:

ENABLE CONFIGURATION;

ENABLE DATABASE ;

ENABLE FAST_START FAILOVER;

DGMGRL> enable fast_start failover
Enabled.
DGMGRL> show configuration verbose;

Configuration
Name: prod_data_guard
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
orcl - Primary database
orclstdb - Logical standby database
- Fast-Start Failover target

Fast-Start Failover
Threshold: 30 seconds
Observer: rhel.manzoor.com

Current status for "prod_data_guard":
SUCCESS
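
As a cross-check from SQL*Plus, V$DATABASE exposes the fast-start failover state on both databases (the column is available in 10.2):

SQL> select fs_failover_status from v$database;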

Configuring the Application to Direct Connections Automatically to the New Primary After a SWITCHOVER or FAILOVER

Transparent Application Failover & Connect Time Failover

Here are the steps to configure the application to redirect its connection to the new primary database after a switchover or failover.

Step 1:-

First, we make sure that the client uses a tnsnames.ora with a connect descriptor that uses a SERVICE_NAME instead of a SID:

MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.3)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.4)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db_prod)
    )
  )

The primary database orcl runs on host 192.168.1.3, and the standby database orclstdb runs on host 192.168.1.4.

Second, we take care that the service db_prod is offered only on the right database, namely the primary. Notice that the PMON background processes of both databases must be able to communicate with the (local) listeners in order to register the service db_prod. If you don't use the default listener port 1521, they can't do so automatically; in that case you have to point to the listener with the initialization parameter LOCAL_LISTENER.
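
A sketch of pointing PMON at a non-default listener (the port 1522 here is illustrative):

SQL> alter system set local_listener = '(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.3)(PORT=1522))' scope=both;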

We now create and start the service db_prod manually on the primary:

SQL> alter system set service_names = db_prod scope=both;

System altered.

lsnrctl status:-

Service "db_prod" has 1 instance(s).
Instance "orcl", status READY, has 1 handler(s) for this service...

Alternatively, you can create and start the service using the DBMS_SERVICE package:

begin
  dbms_service.create_service('db_prod','db_prod');
end;
/
begin
  dbms_service.start_service('db_prod');
end;
/

Then we create a trigger that ensures that this service is offered only if the database is in the primary role:

create trigger myapptrigg after startup on database
declare
  v_role varchar2(30);
begin
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    dbms_service.start_service('db_prod');
  else
    dbms_service.stop_service('db_prod');
  end if;
end;
/



The AFTER STARTUP event is fired when an instance changes from status MOUNT to OPEN. If you use a logical standby, it is not fired, because the logical standby remains in status OPEN; you may use the AFTER DB_ROLE_CHANGE event in that case (see the sketch below). The creation of the trigger and of the service is accompanied by redo (the data dictionary has changed) and is therefore also present at orclstdb without any additional work for the DBA. With this setup, we have already achieved Connect Time Failover: clients can use the same connect descriptor (MYDB) to get to the right (primary) database, regardless of switchover or failover.
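
A minimal sketch of that variant for a logical standby, reusing the same service logic in an AFTER DB_ROLE_CHANGE trigger:

create trigger myroletrigg after db_role_change on database
declare
  v_role varchar2(30);
begin
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    dbms_service.start_service('db_prod');
  else
    dbms_service.stop_service('db_prod');
  end if;
end;
/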

But sessions that are connected to orcl are disconnected if a switchover or failover to orclstdb takes place; they then have to connect again. We can change that so that a runtime failover is possible. Under ideal circumstances, that failover is even completely transparent to the client and proceeds without error messages. To achieve this, you don't have to touch the tnsnames.ora on the client side. Instead, you do the following on the primary database:

begin
  dbms_service.modify_service(
    'db_prod',
    FAILOVER_METHOD  => 'BASIC',
    FAILOVER_TYPE    => 'SELECT',
    FAILOVER_RETRIES => 200,
    FAILOVER_DELAY   => 1);
end;
/

Connections to the service db_prod are now automatically failed over, together with the service, to the new primary. If they were doing nothing during the failover/switchover, or even if they were running a select statement, they will not receive any error but only notice a short interruption (about 20 seconds in a typical case). Only sessions with open transactions during the failover/switchover will receive error messages ("transaction must roll back") when they then try to commit.
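
To verify from the database side that TAF is in effect for a connected session, a quick check against V$SESSION (these columns exist in 10g):

SQL> select username, failover_type, failover_method, failed_over
  2  from v$session
  3  where username = 'SCOTT';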

I like to demonstrate this with a select on a table with 100,000 rows that starts on the primary. Then I kill the SMON of that primary; the select stops at row 30,000 or so, waits a couple of seconds (at most 200, with the above settings), and then continues on the new primary after the failover, fetching exactly the 100,000 rows. That is always quite impressive and shows how robust Oracle databases are, especially when combined with Data Guard.

Reference:- Material from Uwe's site. -- Thanks, Uwe

Switchover and Failover:-

Switchover

– Planned role reversal
– Used for OS or hardware maintenance

Failover
– Unplanned role reversal
– Used in an emergency
– Minimal or no data loss depending on the data-protection mode
– Fast-start failover can be enabled for automatic failover

Switchover
You can use the switchover feature to switch the role of the primary database to one of the available standby databases. The chosen standby database becomes the primary database, and the original primary database then becomes a standby database. There is no need to re-create any of the databases in the switchover operation. There is no data divergence between the original and the new primary database after the successful completion of the database switchover.

If the switchover operation involves a physical standby database, both the primary database and the physical standby database switching over to the primary role will be shut down and restarted.

However, there is no need to shut down and restart any other standby databases that are not participants in the switchover operation. If the switchover operation involves a logical standby database, there is no need to shut down and restart either the primary database or any of the standby databases.

Note: All extra RAC instances will be shut down and restarted.

Failover
You invoke a failover operation when a catastrophic failure occurs on the primary database, and there is no possibility of recovering the primary database in a timely manner. During a failover operation, the incapacitated primary database is removed from the Data Guard environment and a standby database assumes the primary database role. You invoke the failover operation on the standby database that you want to fail over to the primary role.

Note: You can enable fast-start failover to fail over automatically when the primary database becomes unavailable. When fast-start failover is enabled, the broker determines if a failover is necessary and initiates the failover to the specified target standby database automatically, with no need for DBA intervention and with no loss of data. Fast-start failover is discussed in detail in the lesson titled “Enabling Fast-Start Failover.”

Performing Switchover to Physical Standby using SQL

Execute steps 1 through 3 on the original primary database:

1. Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary
database to verify that it is possible to perform a switchover operation, as shown below.

A value of "TO STANDBY" in the SWITCHOVER_STATUS column indicates that it is possible to
switch the primary database to the standby role.
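
For reference, the check itself is a one-liner:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;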

2. To transition the primary database to a physical standby database role, execute the following
SQL statement:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN WAIT;

The WAIT option specifies that control is not returned to you until the statement completes.

3. Shut down the instance and restart the database in MOUNT mode:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

Execute steps 4 through 7 on the original standby database:

4. After you switch the primary database to the standby role and the switchover notification has been received by the standby database, verify that the notification has been processed by querying the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the standby database. You should see a value of TO PRIMARY.

5. Execute the following SQL statement on the physical standby database that you want to switch to the primary role:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

6. If the standby database was opened read-only, shut down and restart the new primary database.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

If it was not opened read-only, open the database with the ALTER DATABASE OPEN
command. The selected physical standby database is now transitioned to the primary database role.
There is no need to shut down and restart any other standby databases that were online at the time of the switchover operation.

7. Issue the following statement on the new primary database to start redo transport:
SQL> ALTER SYSTEM SWITCH LOGFILE;

Performing a Switchover to a Logical Standby by Using SQL

Perform steps 1 and 2 on the original primary database:

1. Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary
database to verify that it is possible to perform a switchover operation.
A value of "TO STANDBY" or "SESSIONS ACTIVE" in the SWITCHOVER_STATUS column
indicates that it is possible to switch the primary database to the standby role.
2. Issue the following SQL statement to prepare the current primary database for a logical
standby database role:
SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;

This statement notifies the current primary database that it will soon switch to the logical standby role and begin receiving redo data from a new primary database.

Perform step 3 on the logical standby database:
3. Issue the following statement to build a LogMiner dictionary on the logical standby database that is the target of the switchover:

SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;

This statement also starts redo transport services on the logical standby database to begin transmitting its redo data to the current primary database and to other standby databases in the Data Guard configuration. The sites receiving redo data from this logical standby database accept the redo data, but they do not apply it.


Perform steps 4 and 5 on the original primary database:
4. Verify that the LogMiner dictionary was received by the primary database by querying the SWITCHOVER_STATUS column of V$DATABASE on the primary database. When the
query returns TO LOGICAL STANDBY in the SWITCHOVER_STATUS column, proceed
with step 5.
5. Issue the following SQL statement to transition the primary database to a logical standby
database role:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;

This statement waits for all current transactions on the primary database to end and prevents any new users from starting new transactions. It also puts a marker in the redo data to provide a synchronization point for logical standby database operations.
Executing this statement also prevents users from making any changes to the data being maintained in the logical standby database. For faster execution, make sure that the primary database is in a quiet state with no update activity before issuing the switchover statement. You can query V$TRANSACTION for the status of any in-progress transactions that could delay execution of this statement.

Perform steps 6, 7, and 8 on the new primary database (original logical standby database):
6. Verify that the switchover notification was processed by the target standby database by querying the SWITCHOVER_STATUS column of the V$DATABASE fixed view on the target standby database. The SWITCHOVER_STATUS value is updated to show progress during the switchover. When the status is TO PRIMARY, proceed with step 7.

7. Issue the following SQL statement to switch the logical standby database to the primary role:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

8. Issue the following statement to perform a log switch and to ensure that all logical standby databases begin receiving redo data from the new primary database:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

Perform step 9 on the new logical standby database:
9. Issue the following statement to start SQL Apply:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;


Consider the following when performing a switchover to a logical standby database:
• Unlike a switchover to a physical standby database, a switchover to a logical standby database does not require a shutdown of the primary database.
• If you are switching over to a logical standby database, you do not need to terminate applications that are connected to the current primary database or to the logical standby database, because neither database is shut down during the switchover operation. However, because sessions on the old primary database may fail after the switchover operation completes and the database guard is turned on, you should terminate such open sessions now.

The database guard prevents users from making changes in the logical standby database.

• If you switch over to a logical standby database, there may be a loss of data if the logical standby database contains only a subset of the data that is present in the primary database.

The following situations prevent the execution of a switchover operation:
• Archived redo log files are unavailable: If there is a gap in the archived redo log files on the standby database, you are not able to switch over to that standby database.

• Point-in-time recovery is required: When you perform a switchover to a standby database, you always switch over to the current state of the primary database. You cannot switch over to a time in the past.

• Production database is not open and cannot be opened: A switchover is initiated on the primary database while it is in the open state. If you cannot open the primary database, a switchover is not possible.

Failover
You invoke a failover operation when a catastrophic failure occurs on the primary database and there is no possibility of recovering the primary database in a timely manner. During a failover operation, the primary database is removed from the Data Guard environment and a standby database assumes the primary database role. Failing over to a standby database is a permanent operation. You cannot undo the failover and return the database to its former role as a standby database. Because of this, you should invoke a failover operation only in an emergency.

It is not always necessary to fail over to the standby database. In some cases, recovery of the primary database may be faster. Most failures can be resolved at a primary database within a reasonable amount of time.

In a failover operation:
• The original primary database is presumed to be lost. You must re-create (or flash back) the original primary database as a new standby database, if desired.
• Standby databases that are online at the time of the failover operation, but are not involved in the role transition, do not need to be shut down and restarted.

Failover Considerations
During a failover operation, a standby database transitions to the primary role and the old primary database is rendered unable to participate in the configuration. Depending on the protection mode under which the old primary database was operating before the failover, there may be some or no data loss during a failover. A failover is typically used only when a primary database becomes incapacitated and there is no possibility of performing a switchover or successfully repairing the primary database within a reasonable amount of time. The specific actions that are performed
during a failover vary depending on whether a logical or physical standby database is involved in the failover operation, the state of the configuration at the time of the failover, and the specific SQL commands that are used to initiate the failover.
There is also a special-case failover in which the standby database is activated. This should be avoided unless absolutely necessary because it causes all other databases in the configuration to become permanently disabled.

Perform the following steps to fail over to a physical standby database by using SQL.
Note: If the target standby database was operating in maximum protection mode, no gaps in the archived redo log files should exist, and you can proceed directly to step 4. Otherwise, begin with step 1 to determine if any manual gap resolution steps must be performed.

1. To determine if there are gaps in the archived redo log files on the target standby database, query the V$ARCHIVE_GAP view. This view contains the sequence numbers of the archived redo log files that are known to be missing for each thread. The data returned reflects the highest gap only.

SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
If possible, copy all of the identified missing archived redo log files to the target standby database from the primary database and register them. Execute the following command to register the redo log files:

SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
Perform this step for each thread.

2. Repeat step 1 until all gaps are resolved. The query that is executed in step 1 displays information for the highest gap only. Repeat step 1 until the query returns no rows.

3. To determine if there are any other missing archived redo log files, query the
V$ARCHIVED_LOG view on the target standby database to obtain the highest sequence
number for each thread.

SQL> SELECT UNIQUE THREAD# AS THREAD, MAX(SEQUENCE#) OVER (PARTITION BY thread#) AS LAST FROM V$ARCHIVED_LOG;
Copy to the target standby database any available primary database archived redo log files that contain sequence numbers higher than the highest sequence number that is available on the target standby database. Then register those redo log files by issuing the following SQL statement:

SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
Perform this step for each thread.
After registering all available archived redo log files, query the V$ARCHIVE_GAP view (as described in step 1) to verify that no additional gaps were introduced in step 3.

4. Execute the following SQL statement to initiate the failover:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;

5. Transition the physical standby database to the primary database role by issuing the following SQL statement:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
After issuing this SQL statement, you can no longer use this database as a standby database.

Any subsequent redo that is received from the original primary database cannot be applied.

During the failover process, the standby redo log files are automatically archived and recovered on all other standby databases that are derived from the original primary database if the standby destinations are correctly defined on the new primary database.

Perform step 6 on the new primary database.
6. If the physical standby database has not been opened in read-only mode since the last time it was started, open the new primary database as follows and proceed to step 7:

SQL> ALTER DATABASE OPEN;

If the physical database has been opened in read-only mode since the last time it was started, shut down the target standby database (new primary database) and restart it as follows:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
7. Perform an open backup of the database. Although performing a backup immediately is not required, it is recommended, because you cannot recover changes made after the failover without a complete backup copy of the database.

8. After a failover, the original primary database no longer participates in the configuration. After performing a failover, you may be able to optionally restore the failed primary database as a new standby database using either of the following methods:

- Use Flashback Database to restore the failed primary database to a point in time before the failover occurred, and then convert it into a standby database.

- Re-create the failed database and add it to the configuration as a new standby database.

To reuse the old primary database in the new configuration, you must re-create it as a standby database using a backup copy of the new primary database.
After the failed primary database is restored and is operating in the standby role,
you can optionally perform a switchover to transition the databases to their original
(pre-failure) roles.

Performing a Failover to a Logical Standby Database by Using SQL

To perform failover for logical standby databases using SQL:
1. If redo logs exist on the primary database that have not yet been applied on the logical standby database, manually copy the redo logs to that standby database.

2. You must register the redo log files that you manually copied from the original primary database. Issue the following SQL statement for each missing redo log file on the logical standby:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE 'filespec';

3. On the logical standby database, check whether there is a partially written archived log file. If one exists, it has a sequence number that is one greater than the last registered archived log file; register the partially filled archived log file.

4. On the target logical standby database, ensure that the remaining redo logs have been applied by checking the V$LOGSTDBY_PROGRESS view. When the values in the APPLIED_SCN and LATEST_SCN columns are equal, all available redo has been applied and the logical standby database contains as much data as possible from the primary database. Issue the following query:

SQL> SELECT APPLIED_SCN, LATEST_SCN FROM V$LOGSTDBY_PROGRESS;

5. To stop the RFS process, apply the remaining redo, stop SQL Apply, and activate the database in the primary role, issue the following statement on the logical standby database that you are transitioning to the new primary role:

SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE FINISH APPLY;

6. Enable archiving of redo logs to all remote logical standby destinations, as in the following example:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

In general, when the database operates in the primary role, you must enable archiving of redo logs to remote destinations. When the database operates in the standby role, you must disable archiving of redo logs to remote destinations.

7. Issue the following command to begin SQL Apply operations on all logical standby databases in the configuration:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY NEW PRIMARY dblink;
Note: Any logical standby databases that are more current (have applied more redo) than the standby database to which the primary database has failed over must be re-created from a backup of the new primary database and added back to the configuration.


Activating a Standby Database

When you are unable to fail over, you can activate a standby database by issuing one of the following commands (depending on the role of the database):

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
SQL> ALTER DATABASE ACTIVATE LOGICAL STANDBY DATABASE;

You can use the ALTER DATABASE ACTIVATE STANDBY DATABASE command to force the
standby database into the primary role. Specify the appropriate option, PHYSICAL or LOGICAL, for the type of database that you are activating. You can issue this command in a situation in which you cannot perform a failover operation.

Restoring Databases After a Role Transition

Additional steps may be needed to restore your disaster-recovery solution after a role transition:

• Databases may be disabled, but are not removed from the broker configuration.
• Reenable broker management of the databases:

– Reinstate databases using the REINSTATE DATABASE command or through
Enterprise Manager.
– Re-create databases from a copy of the primary database and reenable the database.

If a database can be reinstated, it has the following status after a complete failover: ORA-16661:
the standby database needs to be reinstated. The “Database must be reinstated” message is displayed in the Status column in Enterprise Manager. Reinstate the database using the DGMGRL REINSTATE DATABASE command or the reinstate option in Enterprise Manager.
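
A sketch of the DGMGRL reinstate, using the database names from the broker examples above (this requires Flashback Database to be enabled on the failed primary):

DGMGRL> connect sys/admin@orclstdb
Connected.
DGMGRL> reinstate database orcl;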

A database that must be re-created from a copy of the new primary database, has the following status: ORA-16795: the broker detects that database re-creation is required. Re-create the standby database from a copy of the primary database and then reenable it.


Using Flashback Database After Failover
You invoke a failover operation when a catastrophic failure occurs on the primary database and there is no possibility of recovering the primary database in a timely manner. After a failover operation, the old standby database becomes the new primary database, and the old primary database is placed in the “needs reinstatement” state.
You can use the Flashback Database feature to convert the old primary database into a new standby database without re-creating the database. You can flash back the old primary database so that it contains only those changes that are already applied to the old standby database. This enables you to convert the old primary database into a new standby database without restoring the old primary database.

Using Flashback Database After Failover
Physical Standby Configuration

In a physical standby configuration, use the following procedure to avoid reinstantiating the old primary database after a failover:

Using SQL:

1. On the new primary database, issue the following query to determine the system change
number (SCN) at which the old standby database became the new primary database:
SELECT standby_became_primary_scn FROM v$database;

2. After the old primary database site is available, mount the old primary database.
STARTUP MOUNT;

3. Flash back the old primary database to the “standby became primary” SCN that you
determined in step 1:
FLASHBACK DATABASE TO SCN <standby_became_primary_scn>;

4. On the old primary database, issue the following command to convert the control file to a standby control file:

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

5. Shut down the old primary instance.
SHUTDOWN IMMEDIATE;

6. Mount the old primary database. The old primary database is now your new standby
database.

STARTUP MOUNT;

7. On the new primary database, enable redo transport to the old primary database (new standby database). Check the status of the archive destinations and enable any that are not enabled.

SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE,
DESTINATION, ERROR, SRL FROM V$ARCHIVE_DEST_STATUS;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;

Then, archive a new log to the new standby by issuing the following command:
ALTER SYSTEM ARCHIVE LOG CURRENT;

8. On the new standby database, start managed standby recovery. The role reversal is now complete:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

If you are using real-time apply:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
USING CURRENT LOGFILE DISCONNECT;


Using Flashback Database After Failover (continued)
Logical Standby Configuration

In a logical standby configuration, use the following procedure to avoid reinstantiating the old
primary database after a failover:
1. On the new primary database, issue the following query to determine the SCN to which you
want to flash back the failed primary database:
SELECT applied_scn AS flashback_scn
FROM v$logstdby_progress;
2. On the new primary database, issue the following query to determine if you need to copy any
redo log files to the failed primary database so that Flashback Database reaches a consistent
state:
SELECT NAME FROM DBA_LOGSTDBY_LOG
WHERE NEXT_CHANGE# >
(SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS
WHERE NAME = 'STANDBY_BECAME_PRIMARY_SCN')
AND FIRST_CHANGE# <= (FLASHBACK_SCN from step 1);
3. Shut down (if required) and mount the failed primary database.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
4. Flash back the old primary database to the “standby became primary” SCN that you
determined in step 1:
FLASHBACK DATABASE TO SCN <flashback_scn>;
5. Enable the Data Guard guard to prevent the job queue from executing:
ALTER DATABASE GUARD ALL;
6. Open the database with the RESETLOGS option:
ALTER DATABASE OPEN RESETLOGS;
7. Create a database link to the new primary database:
CREATE PUBLIC DATABASE LINK mylink
CONNECT TO system IDENTIFIED BY password
USING 'service_name_of_new_primary_database';
8. Start SQL Apply:
ALTER DATABASE START LOGICAL STANDBY
APPLY NEW PRIMARY mylink;



Fast Start Failover:-

When Will Fast-Start Failover Occur?
Fast-start failover occurs when any of the following conditions occurs:
• Loss of connectivity between both the primary database and the observer, and between the primary database and the fast-start failover target standby database, exceeds the fast-start failover threshold
• Database health-check mechanism determines that the primary database data files are offline
• An instance crash occurs for a single-instance database
• All instances of a Real Application Clusters (RAC) primary database crash
• Shutdown abort of the primary database occurs


Configuring Fast-Start Failover
1. Specify the target standby database.
2. Set the FastStartFailoverThreshold property (see the sketch after this list).
3. Enable fast-start failover.
4. Start the observer.
5. Verify the configuration.
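
A sketch for step 2, using the DGMGRL property syntax shown earlier (the 45-second value is arbitrary; the default threshold of 30 seconds appears in the configuration output above):

DGMGRL> edit configuration set property FastStartFailoverThreshold = 45;
Property "faststartfailoverthreshold" updated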
