Wednesday, December 31, 2008

RAC for Beginners

Real Application Clusters (RAC)


Oracle RAC, introduced with Oracle9i, is the successor to Oracle Parallel Server (OPS). Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out; and because all nodes access the same database, the failure of one instance does not cause loss of access to the database.

Oracle RAC 10g uses a shared-disk architecture. All nodes in the cluster must be able to access all of the data files, redo log files, control files, and parameter files for all nodes in the cluster. The data disks must be globally available so that all nodes can access the database. Each node has its own redo log files and UNDO tablespace, but the other nodes must be able to access them (and the shared control file) in order to recover that node in the event of a system failure.

The key difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one node to another required the data to be written to disk first before the requesting node could read it. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.

With Oracle RAC 10g, the data files, redo log files, control files, and archived log files reside on shared storage: raw devices, NAS, ASM, or a clustered file system.

Oracle RAC is composed of two or more database instances.
Each instance is made up of memory structures and background processes, just like a single-instance database.

Oracle RAC instances use two additional services:

==> GES (Global Enqueue Service)
==> GCS (Global Cache Service), which enables Cache Fusion.
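
A quick way to see Cache Fusion at work is to query the global cache statistics from all instances. This is only a minimal sketch; the statistic names below exist in 10g and 11g, adjust for your release if needed:

SQL> select inst_id, name, value
     from gv$sysstat
     where name in ('gc cr blocks received', 'gc current blocks received')
     order by inst_id, name;

Rising values indicate that blocks are being shipped across the interconnect instead of being re-read from disk.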

Oracle RAC instances are composed of the following background processes:

ACMS—Atomic Controlfile to Memory Service (ACMS)
GTX0-j—Global Transaction Process
LMON—Global Enqueue Service Monitor
LMD—Global Enqueue Service Daemon
LMS—Global Cache Service Process
LCK0—Instance Enqueue Process
RMSn—Oracle RAC Management Processes (RMSn)
RSMN—Remote Slave Monitor
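
To see which background processes are actually running in your instance, a simple check against the standard v$bgprocess view can be used (a sketch only):

SQL> select name, description
     from v$bgprocess
     where paddr <> '00'
     order by name;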


LMON

The Global Enqueue Service Monitor (LMON) background process monitors the entire cluster to manage global resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as Cluster Group Services (CGS).

This process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations.


LCKx

The LCK process manages instance global enqueue requests and cross-instance call operations. Workload is automatically shared and balanced when there are multiple Global Cache Service Processes (LMSx).

Known as the instance enqueue process, LCK manages non-Cache Fusion resource requests such as library cache and row cache requests.

LMSx

The Global Cache Service Processes (LMSx) handle remote Global Cache Service (GCS) messages. Current Real Application Clusters software provides for up to 10 Global Cache Service Processes. The number of LMSx processes varies depending on the amount of messaging traffic among nodes in the cluster. The LMSx handles acquisition-interrupt and blocking-interrupt requests from remote instances for Global Cache Service resources. For cross-instance consistent read requests, LMSx creates a consistent read version of the block and sends it to the requesting instance. The LMSx also controls the flow of messages to remote instances.

Known as the Global Cache Service process, LMS maintains the status of data files and of each cached block by recording information in the Global Resource Directory (GRD). It also controls the flow of messages to remote instances, manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.


LMDx

The Global Enqueue Service Daemon (LMD) is the resource agent process that manages Global Enqueue Service (GES) resource requests. The LMD process also handles deadlock detection Global Enqueue Service (GES) requests. Remote resource requests are requests originating from another instance.

Known as the Global Enqueue Service daemon, LMD manages incoming remote resource requests within each instance.

DIAG

The diagnosability daemon (DIAG) is a Real Application Clusters background process that captures diagnostic data on instance process failures. No user control is required for this daemon.

ACMS

ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures distributed SGA memory updates are either globally committed on success or globally aborted in the event of a failure.

GTX0-j

This process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.

RMSn

Known as the Oracle RAC management processes, the RMSn processes perform manageability tasks for Oracle RAC, including the creation of Oracle RAC-related resources when new instances are added to the cluster.

RSMN

Known as the Remote Slave Monitor, this background slave process manages background slave process creation and communication on remote instances, performing tasks on behalf of a coordinating process running in another instance.

CRS

CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters that provides a standard cluster interface on all platforms and performs new high availability operations not available in previous versions. CRS manages cluster database functions including node membership, group services, global resource management, and high availability. CRS serves as the clusterware software for all platforms. It can be the only clusterware or run on top of vendor clusterware such as Sun Cluster, HP Serviceguard, etc.


CRS automatically starts the following resources:
· Nodeapps
o Virtual Internet Protocol(VIP) address for each node
o Global Services Daemon
o Oracle Net Listeners
o Oracle Notification Service (ONS)
· Database Instance
· Services
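
You can check the state of these CRS-managed resources from the OS using the standard 10g clusterware tools (a sketch; replace the node and database names with your own):

$ crs_stat -t
$ srvctl status nodeapps -n <node_name>
$ srvctl status database -d <db_name>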


Oracle Clusterware (Cluster Ready Services in 10g / Cluster Manager in 9i) provides the infrastructure that binds multiple nodes so that they operate as a single server. Clusterware monitors all components, such as instances and listeners. There are two important components in Oracle Clusterware: the Voting Disk and the OCR (Oracle Cluster Registry).

OCR & Voting Disk

With 10g RAC, Oracle provides its own clusterware stack, called CRS. The main file components of CRS are the Oracle Cluster Registry (OCR) and the Voting Disk.
The OCR contains cluster and database configuration information for RAC and Cluster Ready Services (CRS), including the cluster node list, cluster database instance-to-node mapping information, and the CRS application resource profiles. The OCR also contains configuration details for the cluster database and for high availability resources such as services and Virtual IP (VIP) addresses.

The Voting Disk is used by the Oracle cluster manager in various layers. The Node Monitor (NM) uses the Voting Disk for the disk heartbeat, which is essential in the detection and resolution of cluster "split brain".
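
To verify the OCR and the Voting Disk locations, the standard clusterware utilities can be used (run as the clusterware owner or root as appropriate):

$ ocrcheck
$ crsctl query css votedisk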


Cache Fusion:-

Oracle RAC is composed of two or more instances. When a block of data has been read from a data file by one instance in the cluster and another instance needs the same block, it is faster to ship the block image from the instance that holds it in its SGA than to read it again from disk. To enable this inter-instance communication, Oracle RAC uses the interconnect. The Global Cache Service (GCS) and the Global Enqueue Service (GES) manage Cache Fusion.

Cache Fusion and Global Cache Service (GCS)
Memory-to-memory copies between buffer caches over high-speed interconnects

· fast remote access times
· memory transfers for write or read access
· transfers for all block types (e.g. data, index, undo, headers)
· Cache coherency across the cluster
· globally managed access permissions to cached data
· GCS always knows whether and where a data block is cached
· a local cache miss may result in remote cache hit or disk read

@@@ Article is still under edit... will be adding more info @@@@

Monday, December 29, 2008

11g RAC



Oracle Real Application Clusters

To learn more about 11g RAC and its new features, refer to the following official ORACLE site for 11g RAC:

Oracle Real Application Clusters 11g

Friday, December 26, 2008

The Courseware, Materials & Guides needed for OCA/OCP/OCE Exam Preparation

Oracle certifications are among the most recognized and valued certifications you can acquire to add more weight to your profile.

These certifications not only provide knowledge but also give you better scope for good opportunities in the industry.

The experience and recognition I have gained through these certifications is immense.
I am certified on Oracle 9i and 10g OCA/OCP/OCE and 10g OCM on the Oracle Database product.
So the following information on these certifications may be useful to many people who aspire to build a career in ORACLE technology.

The following tips may be useful to attain these certifications:

1. First and foremost, you need to attend the Oracle University course to
   understand the technology and concepts of the Oracle Database for the particular version.

2. It is suggested to use the official Oracle material/student guide to get a better
   understanding of the concepts and features.

3. The course material / Oracle ILT materials have a set of theory and practical
   questions which will help you understand and test your knowledge of the product
   before proceeding to the OCP/OCE certifications.

4. The Oracle courseware is written in simple English, is very easy to understand,
   and contains very precise and handy information.

5. The practice exams from Self Test Software and Transcender are
   recommended for preparing for the OCA/OCP/OCE exams.

6. The OCA/OCP/OCE exams are objective (multiple-choice) type, so you choose the right
   answer for each question.

7. Except for the OCM, all Oracle certifications are objective type; they are
   combined with some practice-oriented questions, but no hands-on practical is involved.

8. The OCP/OCE questions are very tricky, with confusing answer choices, so it is
   advised to understand the concepts well in order to choose the right answer.

9. DO NOT mug up or copy the practice exam questions and answers; instead, try to
   understand the question and the answer.

Here is link for Oracle Student Guides/ILT (Instructor Led Training) Materials

# Where to get Student Guides/ILT (Instructor Led Training) Materials

# To know more about ORACLE Database OCA/OCP/OCE on all Database versions

# ORACLE Suggests ==> Self Test Software and Transcender Test engines

# For any suggestions, tips, experience of Others on OCA/OCP/OCE
Refer OCP Blog ==> ORACLE CERTIFIED PROFESSIONALS

# OTN Forum for Oracle Certifications

# Oracle Magazine Inside OCP Columns

# Oracle Certification Preparation (OCP)/Hidden Treasures

All the Best !! for better career through ORACLE!
Blog me anytime for any tips/suggestions :)

Oracle Database OCA OCP OCE OCM




Oracle Certification Program - Oracle Database

Oracle Database 11g
Oracle Database 10g
Oracle9i Database
News and Information

Database and Grids

The Courses and Certifications Offered for Database and Grids

Select a topic to view a course listing and its related categories.

* Oracle Database 11g
* Oracle Database 10g
* 9i Database
* 8i Database
* Oracle Enterprise Manager 10g

* Oracle Enterprise Manager 9i
* Oracle Application Testing Suite
* 9i Lite
* Oracle Berkeley
* Oracle TimesTen


For More Details, Refer the Official ORACLE Education Link ==> Database and Grids

Oracle Certification

The following Link is official Oracle Site for all ORACLE Product Certification and courses offered by Oracle University

ORACLE UNIVERSITY - Courses & Certifications on all ORACLE Products

How to change the database listener port with EBS system

How to change the database listener port with EBS system

1. In the database tier context (XML) file under the database Oracle home's appsutil directory, change
the database port entry to reflect the right port, for example:

1541

2. Under the database Oracle home's network/admin directory (the context-specific subdirectory),

change the port values manually in listener.ora and tnsnames.ora

3. Run autoconfig on the db tier. If it completes successfully, perform the following:

4. In the apps tier context (XML) file under the instance's admin directory, change the database port
entry to reflect the right port, for example:

1541

5. Run autoconfig in apps tier
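
For reference, autoconfig is normally run as follows (a sketch; <CONTEXT_NAME> is a placeholder for your instance's context name).

On the db tier:
$ cd $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>
$ ./adautocfg.sh

On the apps tier:
$ cd $ADMIN_SCRIPTS_HOME
$ ./adautocfg.sh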

How to enable the multi-lingual option for some modules in R12

How to enable the multi-lingual option in R12
=============================================

Refer the following Metalink notes:

Note 252422.1:Requesting Translation Synchronization Patches
Note 550206.1:Oracle Applications NLS Release Notes, Release 12.0.4

Thursday, December 25, 2008

Implementing Dataguard on 11g RAC

Creating RAC Standby Database

Configuration Details:

• Primary Host Names are RAC_PRIM01 and RAC_PRIM02

• Standby Host Names are RAC_STDBY01 and RAC_STDBY02

• The primary database is RAC_PRIM

• Virtual Names are RAC_PRIM01-vip, RAC_PRIM02-vip, RAC_STDBY01-vip and RAC_STDBY02-vip

• Both the primary and standby databases use ASM for storage

• The following ASM disk groups are being used +DATA (for data) and +FRA for Recovery/Flashback

• The standby database will be referred to as RAC_STDBY

• Oracle Managed Files will be used.

• ORACLE_BASE is set to /u01/app/oracle



1. Configure Primary and Standby sites

For Better and Simpler configuration of Data Guard, it is recommended that the Primary and Standby machines have exactly the same structure, i.e.

• ORACLE_HOME points to the same mount point on both sites.
• ORACLE_BASE/admin points to the same mount point on both sites.
• ASM Disk Groups are the same on both sites


2. Install Oracle Software on each site.

• Oracle Clusterware

• Oracle database executables for use by ASM

• Oracle database executables for use by the RDBMS

3. Server Names / VIPs

The Oracle Real Application Clusters 11g virtual server names and IP addresses are used and maintained by Oracle Cluster Ready Services (CRS).

Note: Both short and fully qualified names will exist.

Server Name/Alias/Host Entry Purpose
RAC_PRIM01.local Public Host Name (PRIMARY Node 1)
RAC_PRIM02.local Public Host Name (PRIMARY Node 2)
RAC_STDBY01.local Public Host Name (STANDBY Node 1)
RAC_STDBY02.local Public Host Name (STANDBY Node 2)
RAC_PRIM01-vip.local Public Virtual Name (PRIMARY Node 1)
RAC_PRIM02-vip.local Public Virtual Name (PRIMARY Node 2)
RAC_STDBY01-vip.local Public Virtual Name (STANDBY Node 1)
RAC_STDBY02-vip.local Public Virtual Name (STANDBY Node 2)

4. Configure Oracle Networking

4.1 Configure Listener on Each Site

Each site will have a listener defined which will be running from the ASM Oracle Home. The following listeners have been defined in this example configuration.

Primary site listeners:
Listener_RAC_PRIM01
Listener_RAC_PRIM02

Standby site listeners:
Listener_RAC_STDBY01
Listener_RAC_STDBY02


4.2 Static Registration

Oracle must be able to access all instances of both databases whether they are in an open, mounted or closed state. This means that these must be statically registered with the listener.

These entries will have a special name which will be used to facilitate the use of the Data Guard Broker, discussed later.


4.3 Sample Listener.ora

LISTENER_RAC_STDBY01 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521)
(IP = FIRST))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01)(PORT = 1521)
(IP = FIRST))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
)
)
SID_LIST_LISTENER_RAC_STDBY01 =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME=RAC_STDBY_dgmgrl.local)
(SID_NAME = RAC_STDBY1)
(ORACLE_HOME = $ORACLE_HOME)
)
)





4.4 Configure TNS entries on each site.

In order to make things simpler the same network service names will be generated on each site. These service names will be called:

Alias Comments
RAC_PRIM1_DGMGRL.local Points to the RAC_PRIM instance on RAC_PRIM01 using the service name RAC_PRIM_DGMGRL.local. This can be used for creating the standby database.
RAC_PRIM1.local Points to the RAC_PRIM instance on RAC_PRIM01. using the service name RAC_PRIM.local
RAC_PRIM2.local Points to the RAC_PRIM instance on RAC_PRIM02 using the service name RAC_PRIM.local
RAC_PRIM.local Points to the RAC_PRIM database i.e. Contains all database instances.
RAC_STDBY1_DGMGRL.local Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY1_DGMGRL ** This will be used for the database duplication.
RAC_STDBY1.local Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY.local
RAC_STDBY2.local Points to the RAC_STDBY instance on RAC_STDBY02 using the service name RAC_STDBY.local
RAC_STDBY.local Points to the RAC_STDBY database i.e. Contains all the database instances
listener_DB_UNIQUE_NAME.local This will be a TNS alias entry consisting of two address lines. The first address line will be the address of the listener on Node 1 and the second will be the address of the listener on Node 2. Placing both of the above listeners in the address list will ensure that the database automatically registers with both nodes. There must be two sets of entries: one for the standby nodes, called listener_RAC_STDBY, and one for the primary nodes, called listener_RAC_PRIM.


Sample tnsnames.ora (RAC_PRIM01)


RAC_PRIM1_DGMGRL.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM_DGMGRL.local)
)
)

RAC_PRIM1.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM.local)
(INSTANCE_NAME = RAC_PRIM1)
)
)

RAC_PRIM2.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM.local)
(INSTANCE_NAME = RAC_PRIM2)
)
)

RAC_PRIM.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM.local)
)
)

RAC_STDBY1_DGMGRL.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY_DGMGRL.local)
)
)


RAC_STDBY2.local=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY.local)
(INSTANCE_NAME=RAC_STDBY2)
)
)

RAC_STDBY1.local=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY.local)
(INSTANCE_NAME=RAC_STDBY1)
)
)

RAC_STDBY.local=
(DESCRIPTION =
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521)))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY.local)
)
)

LISTENERS_RAC_PRIM.local=
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
)



4.5 Configure ASM on each Site

Certain initialisation parameters are only applicable when a database is running in either a standby or primary database role. Defining ALL of the parameters on BOTH sites will ensure that, if the roles are switched (Primary becomes Standby and Standby becomes the new Primary), then no further configuration will be necessary.

Some of the parameters will however be node-specific; therefore there will be one set of parameters for the Primary site nodes and one for the Standby site nodes.


4.6 Primary Site Preparation

The following initialisation parameters should be set on the primary site prior to duplication. Whilst they are only applicable to the primary site, they will be equally configured on the standby site.
dg_broker_config_file1 - point this to a file within the ASM disk group (the file need not exist)
dg_broker_config_file2 - point this to a file within the ASM disk group (the file need not exist)
db_block_checksum - enables data block integrity checking (OPTIONAL)
db_block_checking - enables data block consistency checking (OPTIONAL)

As long as the performance implications allow and existing SLAs are not violated, it is strongly recommended to have db_block_checksum and db_block_checking enabled.
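
For example, these parameters can be set with ALTER SYSTEM (a sketch only; the broker file names and locations below are just example values, adjust to your own disk groups):

SQL> alter system set dg_broker_config_file1='+DATA/RAC_PRIM/dg_broker1.dat' scope=both sid='*';
SQL> alter system set dg_broker_config_file2='+FRA/RAC_PRIM/dg_broker2.dat' scope=both sid='*';
SQL> alter system set db_block_checksum=TRUE scope=both sid='*';
SQL> alter system set db_block_checking=TRUE scope=both sid='*';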

Additionally, the following must also be configured:

Archive Log Mode

The primary database must be placed into archive log mode.
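
A minimal sketch of enabling archivelog mode on the RAC primary (the database and instance names follow this example's configuration; on older releases you may additionally need to set cluster_database=false first):

$ srvctl stop database -d RAC_PRIM
$ export ORACLE_SID=RAC_PRIM1
$ sqlplus / as sysdba
SQL> startup mount
SQL> alter database archivelog;
SQL> shutdown immediate
SQL> exit
$ srvctl start database -d RAC_PRIM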

Forced Logging

The standby database is kept up to date by applying, on the standby site, the changes that have been recorded in the primary's online redo logs. In some environments that have not previously used Data Guard, the NOLOGGING option may have been used to enhance database performance. Use of this feature in a Data Guard protected environment is strongly discouraged.
From version 9.2, Oracle provides a way to prevent NOLOGGING operations from occurring, known as forced logging mode. To enable forced logging, issue the following command on the primary database:

alter database force logging;

Password File
The primary database must be configured to use an external password file. This is generally done at the time of installation. If not, then a password file can be created using the following command:

orapwd file=$ORACLE_HOME/dbs/orapwRAC_PRIM1 password=mypasswd

Before issuing the command ensure that the ORACLE_SID is set to the appropriate instance – in this case RAC_PRIM1.

Repeat this for each node of the cluster.

Also ensure that the initialisation parameter remote_login_passwordfile is set to ‘exclusive’.

As of Oracle 11.1, the Oracle Net sessions for redo transport can alternatively be authenticated through SSL (see also section 6.2.1 in the Data Guard Concepts manual).



Standby Site Preparation

Initialization Parameter File :

As part of the duplication process a temporary initialisation file will be used. For the purposes of this document this file will be called /tmp/initRAC_PRIM.ora and will contain one line:

db_name=RAC_PRIM

Password File

The standby database must be configured to use a password file. This must be created by copying the password file from the primary site to the standby site and renaming it to reflect the standby instances.

Repeat this for each node of the cluster.

Additionally ensure that the initialisation parameter remote_login_passwordfile is set to 'exclusive'.
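
A sketch of copying and renaming the password file (this relies on both sites using the same ORACLE_HOME path, as set out in section 1; adjust the names if yours differ):

$ scp $ORACLE_HOME/dbs/orapwRAC_PRIM1 oracle@RAC_STDBY01:$ORACLE_HOME/dbs/orapwRAC_STDBY1
$ scp $ORACLE_HOME/dbs/orapwRAC_PRIM1 oracle@RAC_STDBY02:$ORACLE_HOME/dbs/orapwRAC_STDBY2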

Create Audit File Destination

Create a directory on each node of the standby system to hold audit files.
mkdir /u01/app/oracle/admin/RAC_STDBY/adump

Start Standby Instance

Now that everything is in place the standby instance needs to be started ready for duplication to commence:

export ORACLE_SID=RAC_STDBY1
sqlplus / as sysdba
startup nomount pfile=’/tmp/initRAC_PRIM.ora’

Test Connection

From the primary database test the connection to the standby database using the command:

sqlplus sys/mypasswd@RAC_STDBY1_dgmgrl as sysdba

This should successfully connect.


Duplicate the Primary database

The standby database is created from the primary database. Up to Oracle10g, this meant taking a backup of the primary database, transferring it to the standby, and restoring it. Oracle RMAN 11g simplifies this process by providing a new method which allows an 'on the fly' duplicate to take place. This is the method used here (the pre-11g method is described in the Appendices).

From the primary database invoke RMAN using the following command:

export ORACLE_SID=RAC_PRIM1
rman target / auxiliary sys/mypasswd@RAC_STDBY1_dgmgrl

NOTE: If RMAN returns the error "rman: can't open target", ensure that $ORACLE_HOME/bin appears first in the PATH, because there is a Linux utility also named rman.

Next, issue the following duplicate command:

duplicate target database for standby from active database
spfile
set db_unique_name=’RAC_STDBY’
set control_files=’+DATA/RAC_STDBY/controlfile/control01.dbf’
set instance_number=’1’
set audit_file_dest=’/u01/app/oracle/admin/RAC_STDBY/adump’
set remote_listener=’LISTENERS_RAC_STDBY’
nofilenamecheck;



Create an SPFILE for the Standby Database

By default the RMAN duplicate command will have created an spfile for the instance located in $ORACLE_HOME/dbs.

This file will contain entries that refer to the instance names on the primary database. As part of this creation process the database name is being changed to reflect the DB_UNIQUE_NAME for the standby database, and as such the spfile created is essentially worthless. A new spfile will now be created using the contents of the primary database’s spfile.



Get location of the Control File

Before starting this process, note down the value of the control_files parameter from the currently running standby database
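
For example, on the running standby instance:

SQL> show parameter control_files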


Create a text initialization pfile

The first stage in the process requires that the primary databases initialisation parameters be dumped to a text file:

set ORACLE_SID=RAC_PRIM1
sqlplus “/ as sysdba”
create pfile=’/tmp/initRAC_STDBY.ora’ from spfile;

Copy the created file ‘/tmp/initRAC_STDBY.ora’ to the standby server.


Edit the init.ora

On the standby server, edit the /tmp/initRAC_STDBY.ora file:

NOTE: Change every occurrence of RAC_PRIM with RAC_STDBY with the exception of the parameter DB_NAME which must NOT change.

Set the control_files parameter to reflect the value noted earlier (see "Get location of the Control File" above). This will most likely be +DATA/RAC_STDBY/controlfile/control01.dbf.

Save the changes.


Create SPFILE

Having created the textual initialisation file it now needs to be converted to a spfile and stored within ASM by issuing:

export ORACLE_SID=RAC_STDBY1
sqlplus “/ as sysdba”
create spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora' from pfile='/tmp/initRAC_STDBY.ora';


Create Pointer File

With the spfile now being in ASM, the RDBMS instances need to be told where to find it.

Create a file in the $ORACLE_HOME/dbs directory of standby node 1 (RAC_STDBY01 ) called initRAC_STDBY1.ora . This file will contain one line:

spfile=’+DATA/RAC_STDBY/spfileRAC_STDBY.ora’

Create a file in the $ORACLE_HOME/dbs directory of standby node 2 (RAC_STDBY02) called initRAC_STDBY2.ora . This file will also contain one line:

spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora'

Additionally remove the RMAN created spfile from $ORACLE_HOME/dbs located on standby node 1 (RAC_STDBY01 )

Create secondary control files

When the RMAN duplicate completed, it created a standby database with only one control file. This is not good practice, so the next step in the process is to create extra control files.
This is a two-stage process:

1. Shutdown and startup the database using nomount :

shutdown immediate;
startup nomount;


2. Change the value of the control_files parameter to '+DATA','+FRA':

alter system set control_files='+DATA','+FRA' scope=spfile;

3. Shutdown and startup the database again :

shutdown immediate;
startup nomount;

4. Use RMAN to duplicate the control file already present:

export ORACLE_SID=RAC_STDBY1
rman target /
restore controlfile from ‘+DATA/RAC_STDBY/controlfile/control01.dbf’

This will create a control file in both the ASM Disk group’s +DATA and +FRA. It will also update the control file parameter in the spfile.

If you wish to have three control files, simply update the control_files parameter to include the original control file as well as the ones just created.


Cluster-enable the Standby Database

The standby database now needs to be brought under clusterware control, i.e. registered with Cluster Ready Services.

Before commencing, check that it is possible to start the instance on the second standby node (RAC_STDBY02):

export ORACLE_SID=RAC_STDBY2
sqlplus “/ as sysdba”
startup mount;

Ensure Server Side Load Balancing is configured

Check whether the init.ora parameter remote_listener is defined in the standby instances.

If the parameter is not present then create an entry in the tnsnames.ora files (of all standby nodes) which has the following format:

LISTENERS_RAC_STDBY.local =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip.local)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip.local)(PORT = 1521))
)
)

Then set the value of the parameter remote_listener to LISTENERS_RAC_STDBY.local.


Register the Database with CRS

Issue the following commands to register the database with Oracle Cluster Ready Services:

srvctl add database -d RAC_STDBY -o $ORACLE_HOME -m local -p "+DATA/RAC_STDBY/spfileRAC_STDBY.ora" -n RAC_PRIM -r physical_standby -s mount
srvctl add instance -d RAC_STDBY -i RAC_STDBY1 -n RAC_STDBY01
srvctl add instance -d RAC_STDBY -i RAC_STDBY2 -n RAC_STDBY02


Test

Test that the above has worked by stopping any running standby instances and then starting the database (all instances) using the command:

srvctl start database -d RAC_STDBY

Once started check that the associated instances are running by using the command:

srvctl status database -d RAC_STDBY


Temporary Files

Temporary files associated with a temporary tablespace are automatically created with a standby database.


Create Standby Redo Logs

Standby Redo Logs (SRLs) are used to store redo received from the primary database when redo transport is configured to use the Log Writer (LGWR), which is the default.

Each standby redo log file must be at least as large as the largest redo log file in the primary database. It is recommended that all redo log files in the primary database and the standby redo logs in the respective standby database(s) be of the same size.

The recommended number of SRLs is :

(# of online redo logs per primary instance + 1) * # of instances .
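
For example, with 2 online redo log groups per primary instance and 2 instances, this works out to (2 + 1) * 2 = 6 standby redo log groups.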

Whilst standby redo logs are only used by the standby site, they should be defined on both the primary as well as the standby sites. This will ensure that if the two databases change their roles (primary-> standby and standby -> primary) then no extra configuration will be required.

The standby database must be mounted (mount as ‘standby’ is the default) before SRLs can be created.

SRLs are created as follows (the size given below is just an example and has to be adjusted to the current environment):


1. sqlplus '/ as sysdba'

2. startup mount

3. alter database add standby logfile SIZE 100M;

NOTE: Standby redo logs are also created in log file groups, but be aware that their group numbers must be greater than the group numbers associated with the ORLs (online redo logs) in the primary database. With respect to group numbering, Oracle makes no distinction between ORLs and SRLs.

NOTE: Standby Redo Logs need to be created on both databases.
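
To verify the standby redo logs after creating them, the standard v$standby_log view can be queried:

SQL> select group#, thread#, bytes/1024/1024 as size_mb, status from v$standby_log;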

The standby database is now created. The next stage in the process concerns enabling transaction synchronisation. There are two ways of doing this:

1. Using SQL*Plus

2. Using the Data Guard Broker


Configuring Data Guard using SQL*Plus


Configure the Standby Database

The following initialisation parameters need to be set on the standby database:

Parameter / Value (RAC_STDBY01) / Value (RAC_STDBY02)
db_unique_name RAC_STDBY
db_block_checking TRUE (OPTIONAL)
db_block_checksum TRUE (OPTIONAL)
log_archive_config dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes 5
fal_client RAC_STDBY1.local / RAC_STDBY2.local
fal_server 'RAC_PRIM1.local','RAC_PRIM2.local'
standby_file_management AUTO
log_archive_dest_2 service=RAC_PRIM LGWR SYNC AFFIRM db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance Mode) service=RAC_PRIM ARCH db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
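
As an illustration, several of these can be set with ALTER SYSTEM on the standby (a sketch only; values follow the table above):

SQL> alter system set log_archive_config='dg_config=(RAC_PRIM,RAC_STDBY)' scope=both sid='*';
SQL> alter system set standby_file_management=AUTO scope=both sid='*';
SQL> alter system set fal_server='RAC_PRIM1.local','RAC_PRIM2.local' scope=both sid='*';
SQL> alter system set fal_client='RAC_STDBY1.local' scope=both sid='RAC_STDBY1';
SQL> alter system set fal_client='RAC_STDBY2.local' scope=both sid='RAC_STDBY2';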


Configure the Primary Database

The following initialisation parameters need to be set on the primary database:


Parameter / Value (RAC_PRIM01) / Value (RAC_PRIM02)
db_unique_name RAC_PRIM
db_block_checking TRUE (OPTIONAL)
db_block_checksum TRUE (OPTIONAL)
log_archive_config dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes 5
fal_client RAC_PRIM1.local / RAC_PRIM2.local
fal_server 'RAC_STDBY1.local','RAC_STDBY2.local'
standby_file_management AUTO
log_archive_dest_2 service=RAC_STDBY LGWR SYNC AFFIRM db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance Mode) service=RAC_STDBY ARCH db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
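
Similarly, the redo transport destination on the primary can be set as follows (a sketch, shown here for the LGWR SYNC style transport):

SQL> alter system set log_archive_config='dg_config=(RAC_PRIM,RAC_STDBY)' scope=both sid='*';
SQL> alter system set log_archive_dest_2='service=RAC_STDBY LGWR SYNC AFFIRM db_unique_name=RAC_STDBY valid_for=(all_logfiles,primary_role)' scope=both sid='*';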


Set the Protection Mode

In order to specify the protection mode, the primary database must be mounted but not opened.

NOTE: The database must be mounted in exclusive mode, which effectively means that all RAC instances but one must be shut down and the remaining instance started with the parameter setting cluster_database=false.
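
A minimal sketch of that restart sequence (names follow this example's configuration):

SQL> alter system set cluster_database=false scope=spfile sid='*';
$ srvctl stop database -d RAC_PRIM
$ export ORACLE_SID=RAC_PRIM1
$ sqlplus / as sysdba
SQL> startup mount
-- issue the appropriate ALTER DATABASE SET STANDBY DATABASE statement shown below, then:
SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shutdown immediate
$ srvctl start database -d RAC_PRIM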

Once this is the case then the following statement must be issued on the primary site:

If using Maximum Protection mode then use the command:

Alter database set standby database to maximize protection;

If using Maximum Availability mode then use the command:

Alter database set standby database to maximize availability;

If using Maximum Performance mode then use the command:

Alter database set standby database to maximize performance;


Enable Redo Transport & Redo Apply

Enabling the transport and application of redo to the standby database is achieved by the following:


Standby Site

The standby database needs to be placed into Managed Recovery mode. This is achieved by issuing the statement:

Alter database recover managed standby database disconnect;

Oracle 10gR2 introduced Real Time redo apply (SRLs required). Enabling real time apply is achieved by issuing the statement:

alter database recover managed standby database using current logfile disconnect;


Primary Site:

Set:

log_archive_dest_state_2=enable

in the init.ora file, or issue via SQL*Plus:

alter system set log_archive_dest_state_2=enable;

For Complete and More details, Please refer the following ORACLE HA Best Practices Article:

Data Guard 11g Installation and Configuration Best Practices on Oracle RAC



====================================================================================

Monday, December 15, 2008

Business Continuity - Implement Dataguard on R12

IMPLEMENTING Dataguard Physical Standby on R12 on 11g Database Using RMAN Hot Backup
=====================================================================


==> Machine 01 has an E-Business Suite Release 12.0.4 (PROD) DB_NAME=PROD01

==> Machine 02 is clean and will be used as the Standby machine DB_NAME=PROD02

==> Machine 03 will be clean and used as the Observer

1. Prepare the Existing Database as PRIMARY ==> Machine 01

1.Enable Archivelog mode in PRIMARY

SQL>archive log list

>> The output of the above command will show whether archiving is enabled or not <<
>> If the output looks as follows, proceed to the next step in this section to ENABLE ARCHIVELOG mode <<

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /oracle/PROD/11.1.0/db_1/RDBMS/
Oldest online log sequence 386
Current log sequence 387

SQL> shutdown immediate
SQL> startup mount

SQL> alter database force logging;
SQL> alter database archivelog;




2. Create Database Password Files

cd $ORACLE_HOME/dbs
orapwd file=orapw<ORACLE_SID> password=<password> ignorecase=Y

3. Create IFILE and add ifile entry in the pfile

Add the following Entry at end of pfile

IFILE=/oracle/PROD/db/tech_st/11.1.0/dbs/PROD_emeaicm01_ifile.ora

Create IFILE as follows:

[oracle@emeaicm01 dbs]$ vi PROD_emeaicm01_ifile.ora


4. Create directory called " PROD_emeaicm01" and place all network files under this folder

[oracle@emeaicm01 PROD_emeaicm01]$ mkdir PROD_emeaicm01

Add an IFILE entry at the end of both TNSNAMES.ORA and LISTENER.ORA:

In Listener.ora
IFILE=/oracle/PROD/db/tech_st/11.1.0/network/admin/PROD_emeaicm01/listener_ifile.ora

In Tnsnames.ora

IFILE=/oracle/PROD/db/tech_st/11.1.0/network/admin/PROD_emeaicm01/PROD_emeaicm01_ifile.ora

Now, Create common IFILE for both Listener and Tnsnames entry

[oracle@emeaicm01 PROD_emeaicm01]$ vi PROD_emeaicm01_ifile.ora



3. Add the following entries for the standby configuration

PROD01=
(DESCRIPTION=
(ADDRESS_LIST=
(LOAD_BALANCE=YES)
(FAILOVER=YES)
(ADDRESS=(PROTOCOL=tcp)(HOST=emeaicm01.uk.oracle.com)(PORT=1533))
)
(CONNECT_DATA=
(SID=PROD)
)
)


PROD02=
(DESCRIPTION=
(ADDRESS_LIST=
(LOAD_BALANCE=YES)
(FAILOVER=YES)
(ADDRESS=(PROTOCOL=tcp)(HOST=emeaicm02.uk.oracle.com)(PORT=1533))
)
(CONNECT_DATA=
(SID=PROD)
)
)



4. At PRIMARY , Set following parameters in IFILE

LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY'
LOG_ARCHIVE_FORMAT='%t_%s_%r.dbf'
DB_RECOVERY_FILE_DEST_SIZE = 107374182400
DB_RECOVERY_FILE_DEST= '/oracle/archive'
DB_UNIQUE_NAME=PROD01
LOG_ARCHIVE_CONFIG='dg_config=(PROD01,PROD02)'
LOG_ARCHIVE_DEST_2='service=PROD02 valid_for=(online_logfiles,primary_role) db_unique_name=PROD02 LGWR ASYNC=20480 OPTIONAL REOPEN=15 NET_TIMEOUT=30'
LOG_ARCHIVE_DEST_STATE_2=enable
FAL_SERVER='PROD02'
FAL_CLIENT='PROD01'
#standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
standby_file_management=AUTO
parallel_execution_message_size=8192


5. Create standby redolog

>>>Create standby redo logs on the primary database to support the standby role. The standby redo logs must be the same size as the primary database online redo logs.
It is recommended that the number of standby redo logs is one more than the number of online redo logs. <<<<

Assume, We have 2 Online logs in PRIMARY, then we need to add 3 standby log files

SQL> alter database add standby logfile thread 1 group 3 ( '/oracle/PROD/db/apps_st/data/stdbylog03a.dbf', '/oracle/PROD/db/apps_st/data/stdbylog03b.dbf') size 1G;
SQL> alter database add standby logfile thread 1 group 4 ( '/oracle/PROD/db/apps_st/data/stdbylog04a.dbf', '/oracle/PROD/db/apps_st/data/stdbylog04b.dbf') size 1G;
SQL> alter database add standby logfile thread 1 group 5 ( '/oracle/PROD/db/apps_st/data/stdbylog05a.dbf', '/oracle/PROD/db/apps_st/data/stdbylog05b.dbf') size 1G;


Clone the Database Software to Standby
-----------------------------------------

6. Prepare the Database Tier for Cloning

As ORACLE user on one of the production database servers, run the adpreclone.pl script.
For Example:

cd $ORACLE_HOME/appsutil/scripts/
perl adpreclone.pl dbTier

In our case

cd $ORACLE_HOME/appsutil/scripts/PROD_emeaicm01
perl adpreclone.pl dbTier

Supply the APPS password when requested
Use the same port pool as Source

7. Copy Database Tier to Standby

Use the following command to tar the source files, copy them, and untar them on the target ==> the command below tars, copies, and untars on the target system in one step

$ tar czvf - . |ssh -l oracle emeaicm02.uk.oracle.com "cd /oracle/PROD/db/tech_st; tar xvzf -"


>>>> It is recommended to use the same directory names on the standby site. <<<<<<


8. Configure the Standby Database Tier using Rapid Clone

At Standby Site, Execute the following adcfgclone.pl script for new database Oracle home:

$ cd $ORACLE_HOME/appsutil/clone/bin
perl adcfgclone.pl dbTechStack

Respond to the prompts appropriately:

Target instance is a Real Application Cluster (RAC) instance (y/n):
"n" in the single instance case
Target System database name:
Note, db_name, not db_unique_name
Number of DATA_TOP's on the target system [4]:
Usually only one for ASM.
Target system DATA_TOP 1:
For example, /oracle/PROD

Source The New Environment

$ cd /oracle/PROD/db/tech_st/11.1.0/
$ . ./PROD_emeaicm02.env

9. Configure New Database for Communication between Primary and Standby

When adjusting parameters on the production site for the Data Guard setup, you created an include file
holding TNS service definitions. Copy that file to the $TNS_ADMIN directory at your standby site and
name it <CONTEXT_NAME>_ifile.ora (here: PROD_emeaicm02_ifile.ora).

$ cd $TNS_ADMIN
$ mv PROD_emeaicm01_ifile.ora PROD_emeaicm02_ifile.ora


10. Configure Standby Database Data Guard Parameters

$ vi PROD_emeaicm02_ifile.ora
>> Add/change the following parameters <<

LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY'
LOG_ARCHIVE_FORMAT='%t_%s_%r.dbf'
DB_RECOVERY_FILE_DEST_SIZE = 107374182400
DB_RECOVERY_FILE_DEST= '/oracle/archive'
log_archive_config='dg_config=(PROD_IN01,PROD_IN02)'
LOG_ARCHIVE_DEST_STATE_2=enable
#standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
standby_file_management=AUTO
parallel_execution_message_size=8192
db_unique_name=PROD_IN02
log_archive_dest_2='service=PROD_IN01 valid_for=(online_logfiles,primary_role) db_unique_name=PROD_IN01 LGWR ASYNC=20480 OPTIONAL REOPEN=15 NET_TIMEOUT=30'
fal_server='PROD_IN01'
fal_client='PROD_IN02'
log_file_name_convert='xx','xx'



Clone Application Tier to Standby
----------------------------------

# Use the same ORACLE_BASE as Source i.e /oracle/PROD
# Use the same port pool as Source, i.e port pool 12
# Setup SSH on both Primary and Standby nodes - Follow - How to setup SSH.doc
# Setup cron to periodically synchronise concurrent manager log and out files - Follow - How to use crontab to rsync CCM log and out files.doc


11. Prepare the Application Tier for Cloning

Log in to the APPLMGR user and run the following commands to prepare the application tier for
cloning:

$ cd $INST_TOP/admin/scripts
$ perl adpreclone.pl appsTier


12. Copy the Application Tier to Standby

$ tar czvf - . |ssh -l oracle emeaicm02.uk.oracle.com "cd /oracle/PROD/apps; tar xzvf -"

13. Configure Standby Application Tier using Rapid Clone
Run adcfgclone.pl first to start configuring the new application tier file systems. These steps must be run
on each application tier node:

$ cd $COMMON_TOP/clone/bin
$ perl adcfgclone.pl atTechStack

Answer prompts appropriately:

Target system database SID:
Note: db_name, not db_unique_name
Target system database server node:
Database server hostname; this could be any RAC node if RAC is used

To complete the configuration of environment run AutoConfig using the following commands:

$ cd $APPL_TOP/ad/12.0.0/bin
adconfig.sh contextfile=$INST_TOP/appl/admin/<CONTEXT_NAME>.xml run=INSTE8

>>> AutoConfig will report errors regarding the database connection; these errors can be ignored as the
database is unavailable. <<<<

14. Setup cron to periodically synchronise concurrent manager log and out files

emeaicm01.uk.oracle.com is machine A (Primary machine)
emeaicm02.uk.oracle.com is machine B (Standby machine)

On the Standby machine:

mkdir -p /oracle/PROD/inst/apps/PROD_emeaicm01/logs/appl/conc/log
mkdir -p /oracle/PROD/inst/apps/PROD_emeaicm01/logs/appl/conc/out

On the Primary Machine

mkdir -p /oracle/PROD/inst/apps/PROD_emeaicm02/logs/appl/conc/log
mkdir -p /oracle/PROD/inst/apps/PROD_emeaicm02/logs/appl/conc/out

You can set the timings to push files according to your requirement.
In our case, we have set the timing to every 30 minutes, as follows:

From machine A
===============

The files will be pushed from machine A every 30 minutes, on the hour and half hour, so our entries will be:

$ crontab -e

#
## Push log and out files from this machine to emeaicm02 every 30 minutes, on the hour and half hour
#
00,30 * * * * rsync -av /oracle/PROD/inst/apps/PROD_emeaicm01/logs/appl/conc oracle@emeaicm02:/oracle/PROD/inst/apps/PROD_emeaicm01/logs/appl --rsync-path=/usr/bin/rsync >> /tmp/rsync_01_00-30.log
#

As crontab uses “vi” syntax, save changes using :wq!

From machine B
==============

We will also be pushing files from machine B every 30 minutes, on the quarter hour and three quarter hour, so our entries will be:

#
## Push log and out from this machine to emeaicm01 every 30 minutes at quarter past, and quarter to.
#
15,45 * * * * rsync -av /oracle/PROD/inst/apps/PROD_emeaicm02/logs/appl/conc oracle@emeaicm01:/oracle/PROD/inst/apps/PROD_emeaicm02/logs/appl --rsync-path=/usr/bin/rsync >> /tmp/rsync_01_15-45.log
#


15. Setup SSH on both Primary and Standby nodes

On Primary Node: (Machine 1)
----------------------------

· Log in as oracle
· Generate the ssh key by issuing the following command:
ssh-keygen -t rsa (then press enter 3 times)

$ ssh-keygen -t rsa
Generating public/private rsa key pair.

· Press the Enter key three times.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
5a:58:7e:65:ae:71:1a:82:2c:95:98:5b:48:b3:52:ac oracle@emeaicm15.uk.oracle.com
· CD into the .ssh directory, which is located under the home directory. Note that a file called id_rsa.pub is created.

$ ls -lrt $HOME/.ssh/
total 24
-rw-r--r-- 1 oracle dba 948 Mar 3 10:57 known_hosts
-rw-r--r-- 1 oracle dba 240 Mar 4 10:15 id_rsa.pub
-rw------- 1 oracle dba 887 Mar 4 10:15 id_rsa

· Update authorized_keys file to enable ssh ‘loopback’

$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

· Use SCP to copy id_rsa.pub to the /tmp directory of the Remote Nodes.
$ scp $HOME/.ssh/id_rsa.pub oracle@emeaicm16.uk.oracle.com:/tmp
oracle@emeaicm16.uk.oracle.com's password:
id_rsa.pub 100% 240 0.2KB/s 00:00

On the Remote Node(s):

· Login as the oracle user
· Remove specialized .login, .profile or similar files which cause extra verbiage to be displayed to the screen.
· Create a directory called .ssh in the home directory
$ mkdir $HOME/.ssh
mkdir: cannot create directory `/home/oracle/.ssh': File exists

· Copy the content of /tmp/id_rsa.pub (copied from the Primary Node) into a new file named authorized_keys. (If you already have a file called authorized_keys in this directory, open it in vi, go to the bottom of the file, add one blank line and then copy and paste the contents of the file /tmp/id_rsa.pub into the existing file and save it.)

$ cat /tmp/id_rsa.pub >> $HOME/.ssh/authorized_keys

· Verify that the .ssh directory permission is set to 700
$ chmod 700 $HOME/.ssh
· Verify that the authorized_keys file permission is set to 600
$ chmod 600 $HOME/.ssh/authorized_keys
· Verify that the $HOME directory does not have write permissions for group and others.
$ chmod go-w $HOME
· Ensure that the ssh daemon is running
$ ps -ef |grep -i ssh

root 2418 22004 0 10:25 ? 00:00:00 sshd: oracle [priv]
oracle 2420 2418 0 10:25 ? 00:00:00 sshd: oracle@pts/1
oracle 2690 2421 0 10:37 pts/1 00:00:00 grep -i ssh
root 22004 1 0 Feb29 ? 00:00:00 /usr/sbin/sshd

Test the installation:
~~~~~~~~~~~~~~~~~~~~~


· Login to the PrimaryNode as oracle user.

Execute the following command, replacing [user] with the username on the Remote Node, and [SERVER] with the host name of the Remote Node:
ssh -l [user] [SERVER]
$ uname -n
emeaicm15.uk.oracle.com

$ ssh -l oracle emeaicm15 (connect to same machine)
Last login: Tue Mar 4 10:41:52 2008 from emeaicm15.uk.oracle.com

$ uname -n
emeaicm15.uk.oracle.com
· When prompted to confirm the identity, respond "Yes".
· Disconnect from the server.

$ exit
Connection to emeaicm15 closed

$ ssh -l oracle emeaicm16 (connect to machine 2)
Last login: Tue Mar 4 10:41:52 2008 from emeaicm15.uk.oracle.com

$ uname -n
emeaicm16.uk.oracle.com
· When prompted to confirm the identity, respond "Yes".
· Disconnect from the server.

$ exit
Connection to emeaicm16 closed
· Execute the following commands:

$ uname -n
emeaicm15.uk.oracle.com

$ ssh oracle@emeaicm16 uname -n
emeaicm16.uk.oracle.com

The remote hostname should appear, followed by the prompt, and you should not be asked to enter an ID or password. If you are prompted for a password, verify the following:
§ The content of the authorized_keys file contains the content of id_rsa.pub of the client
§ The permissions of the authorized_keys file are set to 600.
§ The permissions of the .ssh directory are set to 700.
§ The permissions of the $HOME directory do not include write for group and other.


Now repeat the steps again as follows (to enable SSH in the opposite direction)

On Standby Node: (machine 2)

· Log in as oracle
· Generate the ssh key by issuing the following command:
ssh-keygen -t rsa (then press enter 3 times)

$ ssh-keygen -t rsa
Generating public/private rsa key pair.

· Press the Enter key three times.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
6c:7d:4c:5c:21:9d:d6:8e:1f:6d:9c:52:2f:88:1c:3d oracle@emeaicm16.uk.oracle.com
· CD into the .ssh directory, which is located under the home directory. Note that a file called id_rsa.pub is created.

$ ls -lrt $HOME/.ssh/
-rw-r--r-- 1 oracle dba 703 Feb 18 09:54 known_hosts
-rw-r--r-- 1 oracle dba 240 Mar 4 10:32 authorized_keys
-rw-r--r-- 1 oracle dba 240 Mar 4 10:49 id_rsa.pub
-rw------- 1 oracle dba 887 Mar 4 10:49 id_rsa

· Update authorized_keys file to enable ssh ‘loopback’

$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
· Use SCP to copy id_rsa.pub to the /tmp directory of the Remote Nodes.

$ scp $HOME/.ssh/id_rsa.pub oracle@emeaicm15.uk.oracle.com:/tmp
oracle@emeaicm15.uk.oracle.com's password:
id_rsa.pub 100% 240 0.2KB/s 00:00

On the Primary Node: (Machine 1)

· Login as the oracle user
· Copy the content of /tmp/id_rsa.pub (copied from the Primary Node) into file named authorized_keys
$ cat /tmp/id_rsa.pub >> $HOME/.ssh/authorized_keys
· Verify that the .ssh directory permission is set to 700
$ chmod 700 $HOME/.ssh
· Verify that the authorized_keys file permission is set to 600
$ chmod 600 $HOME/.ssh/authorized_keys
· Verify that the $HOME directory does not have write permissions for group and others.
$ chmod go-w $HOME
· Ensure that the ssh daemon is running
$ ps -ef |grep -i ssh

root 2418 22004 0 10:25 ? 00:00:00 sshd: oracle [priv]
oracle 2420 2418 0 10:25 ? 00:00:00 sshd: oracle@pts/1
oracle 2690 2421 0 10:37 pts/1 00:00:00 grep -i ssh
root 22004 1 0 Feb29 ? 00:00:00 /usr/sbin/sshd

Test the installation at both Node(s):
--------------------------------------

· Login to the Remote Node as oracle user (Machine 2).
· Execute the following command, replacing [user] with the username on the Remote Node, and [SERVER] with the host name of the Remote Node:
ssh -l [user] [SERVER]

$ uname -n
emeaicm16.uk.oracle.com

$ ssh -l oracle emeaicm16 (connect to same machine)
Last login: Tue Mar 4 10:41:52 2008 from emeaicm16.uk.oracle.com

$ uname -n
emeaicm16.uk.oracle.com

$ exit
Connection to emeaicm16 closed

$ ssh -l oracle emeaicm15 (connect to machine 1)
Last login: Tue Mar 4 10:41:52 2008 from emeaicm16.uk.oracle.com

$ uname -n
emeaicm15.uk.oracle.com
· When prompted to confirm the identity, respond "Yes".
· Disconnect from the server.

$ exit
Connection to emeaicm15 closed
· Execute the following commands:

$ uname -n
emeaicm16.uk.oracle.com

$ ssh oracle@emeaicm15 uname -n
emeaicm15.uk.oracle.com

The remote hostname should appear, followed by the prompt, and you should not be asked to enter an ID or password. If you are prompted for a password, verify the following:
§ The content of the authorized_keys file contains the content of id_rsa.pub of the client
§ The permissions of the authorized_keys file are set to 600.
§ The permissions of the .ssh directory are set to 700.
§ The permissions of the $HOME directory do not include write for group and other.



16. Establish the Standby Database

Create a hot backup using Recovery Manager (RMAN)

Take Backup of Primary Database and Copy the backup to Standby(Machine02) using RMAN HOT BACKUP

a) Source the RDBMS ORACLE_HOME environment file
b) $ rman target /

RMAN> run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
backup database plus archivelog;
}

>>>>For RMAN: Monitoring Recovery Manager Jobs ==> Ref Note 144640.1

To monitor the RMAN backup job, execute the following in SQL*Plus as sysdba:

SQL> SELECT sid, spid, client_info FROM v$process p, v$session s
WHERE p.addr = s.paddr
AND client_info LIKE '%id=rman%';



SQL> SELECT sid, serial#, context, sofar, totalwork,round(sofar/totalwork*100,2) "% Complete"
FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork
/


SQL> SELECT sid, seconds_in_wait AS sec_wait, event FROM v$session_wait
WHERE wait_time = 0
ORDER BY sid;

Take backup of current controlfile at primary for standby site in RMAN

For Backup as Backupset:

RMAN> run
{
allocate channel c1 type disk;
backup current controlfile for standby;
}


For Backup as COPY :

RMAN> run
{
allocate channel c1 type disk;
backup as copy current controlfile for standby;
}

Once the backup is done, copy the backup, together with the archive logs and the standby control file, to the remote (standby) location:

rsync -av /oracle/archive oracle@machine02.oracle.com:/oracle/archive

17. At Standby (Machine02), Startup mount the Standby database

The following command will automatically duplicate the database using backup taken at primary and
mount the database

$ rman target sys/oracle@prod_in01 auxiliary sys/oracle@prod_in02

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY
DORECOVER NOFILENAMECHECK;


Run the following command to check the standby database details

SQL> select database_role,switchover_status,checkpoint_change#,switchover#,dataguard_broker,guard_status,current_scn,flashback_on from v$database

The output should be similar to the following:


DATABASE_ROLE SWITCHOVER_STATUS CHECKPOINT_CHANGE# SWITCHOVER# DATAGUAR GUARD_S CURRENT_SCN FLASHBACK_ON
--------------- ------------------ ---------------------- ----------- --------------- ------------------- ------------------
PHYSICAL STANDBY NOT ALLOWED 5965111847227 83208404 ENABLED NONE 5965112465873 YES


18. On the primary database enable the previously deferred remote destination by executing this command
from SQL*Plus connected as sysdba:

SQL> alter system set log_archive_dest_state_2=enable SID='*';

19. Place the standby database in managed recovery by executing this command from SQL*Plus
connected as sysdba:

SQL> recover managed standby database using current logfile disconnect;

20. Verify whether the standby is correctly applying redo from the primary.
On the primary database, archive the current log using the following SQL statement:

>>Find the current Sequence and last archived log sequence:

SQL> archive log list;

>>Archive the current redolog sequence:

SQL> alter system archive log current;


On the standby database, query the v$archived_log view to verify that the logs are received and applied:

SQL> select thread#, sequence#, applied,
to_char(first_time, 'mm/dd/yy hh24:mi:ss') first,
to_char(next_time, 'mm/dd/yy hh24:mi:ss') next,
to_char(completion_time, 'mm/dd/yy hh24:mi:ss') completion
from v$archived_log order by first_time;

You can also execute the following query to find the shipping details on both primary and standby

SQL> select process,status,sequence# from v$managed_standby;

21. If the logs are not shipping, check the following:

a) Check the alert log file on the primary and see whether log_archive_dest_2 is reachable; check for any TNS
errors. If the standby destination is not reachable, fix the TNS issue.

b) Check the status of the log_archive_dest_2

SQL> select status,error from v$archive_dest;

>> If the above statement returns any error, you can defer the destination temporarily

SQL> alter system set log_archive_dest_state_2=defer;

c) Once the error on log_archive_dest_2 is rectified, execute the following to enable it again

SQL> alter system set log_archive_dest_state_2=enable;

d) If logs are not shipping and you want to apply them manually, you can move/copy the missing archive logs
from the primary to the standby using the OS scp command and run the following command to register them
so the changes can be applied:

SQL> alter database register logfile '/oracle/archive/PROD_IN02/archivelog/2008_11_18/o1_mf_1_34_4l5bb7b4_.arc';


SWITCHOVER
============

1. Switchover Primary Database to Remote Standby Database

a) Shut down the production E-Business Suite:
Shut down the E-Business Suite application tier. Ensure the application is completely shut down.

b) On the PRIMARY database, issue the following command to commit to switchover to
standby:

SQL> alter database commit to switchover to standby with session shutdown;

c) Shut down the PRIMARY database
SQL> shutdown immediate

d) Stop the PRIMARY database listener on all database nodes:

lsnrctl stop

e) On the STANDBY database, verify that it is ready to be converted to the new primary:

SQL> select switchover_status from v$database;

You should see the following value:
SWITCHOVER_STATUS
-----------------
TO PRIMARY

f) On the STANDBY database, execute the following command to convert it to be the new primary:

SQL> alter database commit to switchover to primary;

g) Enable Flashback on the STANDBY database (Optional)

SQL> alter database flashback on;

h) Open the STANDBY Database (New Primary database)

SQL> alter database open;

i) Remove the Old Application Topology
Connect to the new primary database using SQL*Plus as user APPS and execute the following
commands:

SQL> EXEC FND_CONC_CLONE.SETUP_CLEAN;
SQL> commit;

j) Configure the Standby Database Tier
Run AutoConfig on STANDBY database(New Primary) node to configure the Oracle home for use
by the E-Business Suite.

$ cd $ORACLE_HOME/appsutil/scripts/
$ ./adautocfg.sh

k) Start Original PRIMARY as Standby
Start the database listener on database node at the original primary site:

$ lsnrctl start

Start and mount database instance and then start managed recovery:

SQL> startup mount;
SQL> recover managed standby database using current logfile disconnect;

On the New PRIMARY database at the standby site enable remote archiving by executing this
command from SQL*Plus connected as sysdba:

SQL> alter system set log_archive_dest_state_2=enable SID='*';

Verify whether logs are switching from New PRIMARY to this Standby database as described in
Step 20.

l) Perform the Cloning Finishing Tasks as per Note 406982.1

Wednesday, December 10, 2008

How to change the characterset of Oracle 10g DB

Decide on the character set you want to change to, and check whether the new character set is a superset of the old character set.
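
To check the current database character set before deciding (a standard query):

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';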

1.SQL> shutdown immediate
2.SQL> startup open restrict
3.SQL> alter database character set internal_use UTF8;
4.SQL> shutdown immediate
5.SQL> startup

Refer the following Metalink Note for more details:

Note 225912.1 Changing the Database Character Set - a short overview
Note 227330.1 Character Sets & Conversion - Frequently Asked Questions

Tuesday, December 9, 2008

How to kill all ORACLE processes in one command

At the OS prompt, execute the following command to kill all ORACLE processes belonging to the PROD instance:

$ kill -9 `ps -ef |grep PROD |awk '{print $2}'`
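
A slightly safer variant (a sketch; the [P]ROD pattern stops grep from matching its own process):

$ ps -ef | grep '[P]ROD' | awk '{print $2}' | xargs kill -9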

Wednesday, December 3, 2008

RMAN Backup Validation Check

To Test and Check the Integrity for Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The database prevents operations that result in unusable backup files or corrupt restored datafiles. The database server automatically does the following:

* Blocks access to datafiles while they are being restored or recovered
* Allows only one restore operation for each datafile at a time
* Ensures that incremental backups are applied in the correct order
* Stores information in backup files to allow detection of corruption

To Detect Logical Block Corruption
--------------------------------------

This tests data and index blocks for logical corruption, such as corruption of a row piece or index entry. If RMAN finds logical corruption, then it logs the block in the alert.log

1. Start RMAN in nocatalog mode and connect to your database:

From the operating system prompt issue at database host :

$ rman target / nocatalog

2. From the RMAN> prompt issue the validate command with the "check logical"
clause:

The following example shows how to validate a single datafile:

run {
allocate channel d1 type disk;
backup check logical validate datafile 77;
release channel d1;
}


If you wish to monitor the progress of RMAN backup validate, you may issue this
query:

SQL> select sid, serial#, context, sofar, totalwork,
round(sofar/totalwork*100,2) "%_complete"
from v$session_longops
where opname like 'RMAN%'
and opname not like '%aggregate%'
and totalwork != 0
and sofar <> totalwork
/


3. Once the validate process is complete, you either check the alert log or a
view depending on the version of Oracle being used.

In Oracle9i and beyond you can query the view name V$DATABASE_BLOCK_CORRUPTION
to determine what corruption, if any, was found by RMAN.
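
For example:

SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;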

Tuesday, December 2, 2008

Query to find All Background_processes in the Database

Background Processes

Background processes support various types of Oracle processing

Each version of Oracle has a set of associated background processes. Each background process has a descriptor in the fixed area of the SGA. Background process descriptors are of type ksbdp. The descriptors can be seen in the level 2 GLOBAL_AREA dump for example:

ALTER SESSION SET EVENTS 'immediate trace name global_area level 2';

Every background process descriptor has an internal (structure) name e.g. ktmprc_ and an external (display) name e.g. SMON. A list of background process descriptors together with the external and internal names can be obtained using the following query:

SELECT ksbdd.ksbddidn, ksmfsv.ksmfsnam, ksbdd.ksbdddsc
FROM x$ksbdd ksbdd, x$ksbdp ksbdp, x$ksmfsv ksmfsv
WHERE ksbdd.indx = ksbdp.indx
AND ksbdp.addr = ksmfsv.ksmfsadr
ORDER BY ksbdd.ksbddidn;

How to change the EXPIRED status of user accounts

SQL> select username,account_status from dba_users where username='SCOTT';

USERNAME ACCOUNT_STATUS
----------------------- ---------------
SCOTT EXPIRED & LOCKED


SQL> alter user SCOTT account unlock;

User altered.

SQL> select username,account_status from dba_users where username='SCOTT';

USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT EXPIRED

SQL> select password from dba_users where username='SCOTT';

PASSWORD
------------------------------
CE4A36B8E06CA59C

SQL> alter user scott identified by values 'CE4A36B8E06CA59C';

User altered.

SQL> select username,account_status from dba_users where username='SCOTT';

USERNAME ACCOUNT_STATUS
------------------------------ --------------------------------
SCOTT OPEN

Difference Between AFN & RFN

RELATIVE FILE NO.

The relative file number identifies a datafile within a single tablespace.
Every tablespace refers to its own datafiles by relative file number,
so relative file numbers can be repeated across tablespaces.
E.g., tablespace USERS (TS#1) has 2 datafiles, with relative file numbers 1 and 2 within tablespace #1.
Tablespace USERDATA (TS#2) also has 2 datafiles, with relative file numbers 1 and 2 within tablespace #2.

A tablespace can have at most 1022 datafiles, so the relative file number goes up to 1022 within each tablespace.

ABSOLUTE FILE NO.

The absolute file number, in contrast, is a datafile number that is unique across the entire database.
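
You can see both numbers side by side with a simple query (dba_data_files shows the absolute FILE_ID and the per-tablespace RELATIVE_FNO):

SQL> select tablespace_name, relative_fno, file_id, file_name
     from dba_data_files
     order by tablespace_name, relative_fno;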