Real Application Clusters (RAC)
Oracle RAC, introduced with Oracle9i, is the successor to Oracle Parallel Server (OPS). Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out; at the same time, because all nodes access the same database, the failure of one instance does not cause loss of access to the database.
Oracle RAC 10g uses a shared-disk architecture. All nodes in the cluster must be able to access all of the data files, redo log files, control files, and parameter files for all nodes in the cluster. The data disks must be globally available so that all nodes can access the database. Each node has its own redo log files and UNDO tablespace, but the other nodes must be able to access them (and the shared control file) in order to recover that node in the event of a system failure.
The key difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one node to another required the data to be written to disk before the requesting node could read it. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm.
With Oracle RAC 10g, the data files, redo log files, control files, and archived log files reside on shared storage: raw devices, NAS, ASM, or a clustered file system.
Oracle RAC is composed of two or more database instances, each built from the same memory structures and background processes as a single-instance database. In addition, Oracle RAC instances use two services that together enable Cache Fusion:
==> GES (Global Enqueue Service)
==> GCS (Global Cache Service)
Oracle RAC instances also run the following background processes:
ACMS—Atomic Controlfile to Memory Service
GTX0-j—Global Transaction Process
LMON—Global Enqueue Service Monitor
LMD—Global Enqueue Service Daemon
LMS—Global Cache Service Process
LCK0—Instance Enqueue Process
RMSn—Oracle RAC Management Processes
RSMN—Remote Slave Monitor
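To see which of these processes are actually running on a given instance, you can query the v$bgprocess view. A minimal sketch (the exact set of process names varies by version):
select name, description
from v$bgprocess
where paddr <> '00'
order by name;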
LMON
The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage global resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources and performs global enqueue recovery operations. LMON-provided services are also known as Cluster Group Services (CGS).
LCKx
The LCK process, also called the instance enqueue process, manages instance global enqueue requests and cross-instance call operations. It handles non-Cache Fusion resource requests such as library and row cache requests. Workload is automatically shared and balanced when there are multiple Global Cache Service Processes (LMSx).
LMSx
The Global Cache Service Processes (LMSx) are the processes that handle remote Global Cache Service (GCS) messages. Current Real Application Clusters software provides for up to 10 Global Cache Service Processes. The number of LMSx varies depending on the amount of messaging traffic among nodes in the cluster. The LMSx handles the acquisition interrupt and blocking interrupt requests from the remote instances for Global Cache Service resources. For cross-instance consistent read requests, the LMSx will create a consistent read version of the block and send it to the requesting instance. The LMSx also controls the flow of messages to remote instances.
The LMSx also maintains the status of data files and of each cached block by recording information in the Global Resource Directory (GRD), manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.
LMDx
The Global Enqueue Service Daemon (LMD) is the resource agent process that manages Global Enqueue Service (GES) resource requests. The LMD process also handles deadlock detection Global Enqueue Service (GES) requests. Remote resource requests are requests originating from another instance.
LMD is also known as the global enqueue service daemon; it manages incoming remote resource requests within each instance.
DIAG
The diagnose daemon (DIAG) is a Real Application Clusters background process that captures diagnostic data on instance process failures. No user control is required for this daemon.
ACMS
ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures distributed SGA memory updates are atomic; that is, SGA updates are either globally committed on success or globally aborted in the event of a failure.
GTX0-j
This process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.
RMSn
These Oracle RAC management processes perform manageability tasks for Oracle RAC, including the creation of resources related to Oracle RAC when new instances are added to the cluster.
RSMN
The Remote Slave Monitor (RSMN) is a background process that manages background slave process creation and communication on remote instances. It performs tasks on behalf of a coordinating process running in another instance.
CRS
CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters that provides a standard cluster interface on all platforms and performs new high availability operations not available in previous versions. CRS manages cluster database functions including node membership, group services, global resource management, and high availability. CRS serves as the clusterware software for all platforms. It can be the only clusterware or run on top of vendor clusterware such as Sun Cluster, HP Serviceguard, etc.
CRS automatically starts the following resources:
· Nodeapps
o Virtual IP (VIP) address for each node
o Global Services Daemon
o Oracle Net Listeners
o Oracle Notification Service (ONS)
· Database Instance
· Services
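A quick way to verify that these resources are up in 10g is the crs_stat utility, which prints the name, type, target, and state of every registered resource. A sketch, assuming the Clusterware bin directory is on the PATH:
crs_stat -t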
Oracle Clusterware (Cluster Ready Services in 10g, Cluster Manager in 9i) provides the infrastructure that binds multiple nodes so that they operate as a single server. Clusterware monitors all components, such as instances and listeners. There are two important components in Oracle Clusterware: the Voting Disk and the OCR (Oracle Cluster Registry).
OCR & Voting Disk
With 10g RAC, Oracle introduced its own clusterware stack, CRS. The main file components of CRS are the Oracle Cluster Registry (OCR) and the Voting Disk.
The OCR contains cluster and database configuration information for RAC and Cluster Ready Services (CRS), including the cluster node list, cluster database instance-to-node mapping information, and the CRS application resource profiles. The OCR also contains configuration details for the cluster database and for high availability resources such as services and virtual IP (VIP) addresses.
The Voting Disk is used by the Oracle cluster manager in various layers. The Node Monitor (NM) uses the Voting Disk for the disk heartbeat, which is essential in the detection and resolution of cluster "split brain".
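Both components can be inspected from the command line; a sketch, again assuming the Clusterware bin directory is on the PATH:
ocrcheck
crsctl query css votedisk
ocrcheck reports the OCR version, used/free space, and location; crsctl query css votedisk lists the configured voting disks.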
Cache Fusion
Oracle RAC is composed of two or more instances. When a block of data has been read from a data file by one instance in the cluster and another instance needs the same block, it is faster to ship the block image from the instance that holds it in its SGA than to read it again from disk. To enable this inter-instance communication, Oracle RAC uses the interconnect. The Global Enqueue Service (GES) monitor and the instance enqueue process manage Cache Fusion.
Cache Fusion and Global Cache Service (GCS)
Memory-to-memory copies between buffer caches over high-speed interconnects
· fast remote access times
· memory transfers for write or read access
· transfers for all types (e.g., data, index, undo, headers)
· Cache coherency across the cluster
· globally managed access permissions to cached data
· GCS always knows whether and where a data block is cached
· a local cache miss may result in remote cache hit or disk read
11g RAC
Oracle Real Application Clusters
To learn more about 11g RAC and its new features, refer to Oracle's official site for Oracle Real Application Clusters 11g.
Implementing Data Guard on 11g RAC
Creating RAC Standby Database
Configuration Details:
• Primary Host Names are RAC_PRIM01 and RAC_PRIM02
• Standby Host Names are RAC_STDBY01 and RAC_STDBY02
• The primary database is RAC_PRIM
• Virtual Names are RAC_PRIM01-vip, RAC_PRIM02-vip, RAC_STDBY01-vip and RAC_STDBY02-vip
• Both the primary and standby databases use ASM for storage
• The following ASM disk groups are used: +DATA (for data) and +FRA (for recovery/flashback)
• The standby database will be referred to as RAC_STDBY
• Oracle Managed Files will be used.
• ORACLE_BASE is set to /u01/app/oracle
1. Configure Primary and Standby sites
For a better and simpler Data Guard configuration, it is recommended that the primary and standby machines have exactly the same structure, i.e.
• ORACLE_HOME points to the same mount point on both sites.
• ORACLE_BASE/admin points to the same mount point on both sites.
• ASM Disk Groups are the same on both sites
2. Install Oracle Software on each site.
• Oracle Clusterware
• Oracle database executables for use by ASM
• Oracle database executables for use by the RDBMS
3. Server Names / VIPs
The Oracle Real Application Clusters 11g virtual server names and IP addresses are used and maintained by Oracle Cluster Ready Services (CRS).
Note: Both short and fully qualified names will exist.
Server Name/Alias/Host Entry Purpose
RAC_PRIM01.local Public Host Name (PRIMARY Node 1)
RAC_PRIM02.local Public Host Name (PRIMARY Node 2)
RAC_STDBY01.local Public Host Name (STANDBY Node 1)
RAC_STDBY02.local Public Host Name (STANDBY Node 2)
RAC_PRIM01-vip.local Public Virtual Name (PRIMARY Node 1)
RAC_PRIM02-vip.local Public Virtual Name (PRIMARY Node 2)
RAC_STDBY01-vip.local Public Virtual Name (STANDBY Node 1)
RAC_STDBY02-vip.local Public Virtual Name (STANDBY Node 2)
4. Configure Oracle Networking
4.1 Configure Listener on Each Site
Each site will have a listener defined which will be running from the ASM Oracle Home. The following listeners have been defined in this example configuration.
Listener Name Role
Listener_RAC_PRIM01 Primary
Listener_RAC_PRIM02 Primary
Listener_RAC_STDBY01 Standby
Listener_RAC_STDBY02 Standby
4.2 Static Registration
Oracle must be able to access all instances of both databases whether they are in an open, mounted or closed state. This means that these must be statically registered with the listener.
These entries will have a special name which will be used to facilitate the use of the Data Guard Broker, discussed later.
4.3 Sample Listener.ora
LISTENER_RAC_STDBY01 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521)
(IP = FIRST))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01)(PORT = 1521)
(IP = FIRST))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
)
)
SID_LIST_LISTENER_RAC_STDBY01 =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME=RAC_STDBY_dgmgrl.local)
(SID_NAME = RAC_STDBY1)
(ORACLE_HOME = $ORACLE_HOME)
)
)
4.4 Configure TNS entries on each site.
In order to make things simpler the same network service names will be generated on each site. These service names will be called:
Alias Comments
RAC_PRIM1_DGMGRL.local Points to the RAC_PRIM instance on RAC_PRIM01 using the service name RAC_PRIM_DGMGRL.local. This can be used for creating the standby database.
RAC_PRIM1.local Points to the RAC_PRIM instance on RAC_PRIM01. using the service name RAC_PRIM.local
RAC_PRIM2.local Points to the RAC_PRIM instance on RAC_PRIM02 using the service name RAC_PRIM.local
RAC_PRIM.local Points to the RAC_PRIM database i.e. Contains all database instances.
RAC_STDBY1_DGMGRL.local Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY_DGMGRL.local ** This will be used for the database duplication.
RAC_STDBY1.local Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY.local
RAC_STDBY2.local Points to the RAC_STDBY instance on RAC_STDBY02 using the service name RAC_STDBY.local
RAC_STDBY.local Points to the RAC_STDBY database i.e. Contains all the database instances
listener_DB_UNIQUE_NAME.local This will be a TNS alias entry consisting of two address lines. The first address line will be the address of the listener on node 1 and the second will be the address of the listener on node 2. Placing both listeners in the address list ensures that the database automatically registers with both nodes. There must be two sets of entries: one for the standby nodes, called listener_RAC_STDBY, and one for the primary nodes, called listener_RAC_PRIM.
Sample tnsnames.ora (RAC_PRIM01)
RAC_PRIM1_DGMGRL.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM_DGMGRL.local)
)
)
RAC_PRIM1.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM.local)
(INSTANCE_NAME = RAC_PRIM1)
)
)
RAC_PRIM2.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM.local)
(INSTANCE_NAME = RAC_PRIM2)
)
)
RAC_PRIM.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_PRIM.local)
)
)
RAC_STDBY1_DGMGRL.local =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY_DGMGRL.local)
)
)
RAC_STDBY2.local=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY.local)
(INSTANCE_NAME=RAC_STDBY2)
)
)
RAC_STDBY1.local=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY.local)
(INSTANCE_NAME=RAC_STDBY1)
)
)
RAC_STDBY.local=
(DESCRIPTION =
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521)))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC_STDBY.local)
)
)
LISTENERS_RAC_PRIM.local=
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
)
4.5 Configure ASM on each Site
Certain initialisation parameters are only applicable when a database is running in either a standby or primary database role. Defining ALL of the parameters on BOTH sites will ensure that, if the roles are switched (Primary becomes Standby and Standby becomes the new Primary), then no further configuration will be necessary.
Some of the parameters will however be node-specific; therefore there will be one set of parameters for the Primary site nodes and one for the Standby site nodes.
4.6 Primary Site Preparation
The following initialisation parameters should be set on the primary site prior to duplication. Whilst they are only applicable to the primary site, they will be equally configured on the standby site.
Dg_broker_config_file1 Point this to a file within the ASM disk group – note that the file need not exist.
Dg_broker_config_file2 Point this to a file within the ASM disk group – note that the file need not exist.
db_block_checksum To enable datablock integrity checking (OPTIONAL)
db_block_checking To enable datablock consistency checking (OPTIONAL)
As long as the performance impact allows and does not violate existing SLAs, db_block_checksum and db_block_checking should be enabled.
Additionally, the following must also be configured:
Archive Log Mode
The primary database must be placed into archive log mode.
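Whether it is already enabled can be checked with 'archive log list' in SQL*Plus. A sketch of switching a RAC database to archivelog mode, assuming a pre-11gR2 release where one instance must temporarily run with cluster_database=false (names as per this configuration):
srvctl stop database -d RAC_PRIM
export ORACLE_SID=RAC_PRIM1
sqlplus / as sysdba
startup mount
alter system set cluster_database=false scope=spfile sid='*';
shutdown immediate
startup mount
alter database archivelog;
alter system set cluster_database=true scope=spfile sid='*';
shutdown immediate
exit
srvctl start database -d RAC_PRIM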
Forced Logging
The standby database is kept up to date by applying, on the standby site, the transactions that have been recorded in the online redo logs. In some environments that have not previously used Data Guard, the NOLOGGING option may have been used to enhance database performance. Use of this feature in a Data Guard protected environment is strongly discouraged.
From Oracle version 9.2, Oracle introduced a method to prevent NOLOGGING transactions from occurring. This is known as forced logging mode of the database. To enable forced logging, issue the following command on the primary database:
alter database force logging;
Password File
The primary database must be configured to use an external password file. This is generally done at the time of installation. If not, then a password file can be created using the following command:
orapwd file=$ORACLE_HOME/dbs/orapwRAC_PRIM1 password=mypasswd
Before issuing the command ensure that the ORACLE_SID is set to the appropriate instance – in this case RAC_PRIM1.
Repeat this for each node of the cluster.
Also ensure that the initialisation parameter remote_login_passwordfile is set to ‘exclusive’.
As of Oracle 11.1, the Oracle Net sessions for redo transport can alternatively be authenticated through SSL (see also section 6.2.1 in the Data Guard Concepts manual).
Standby Site Preparation
Initialization Parameter File:
As part of the duplication process a temporary initialisation file will be used. For the purposes of this document this file will be called /tmp/initRAC_PRIM.ora and will have one line:
db_name=RAC_PRIM
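A sketch of creating this file on the first standby node:
echo "db_name=RAC_PRIM" > /tmp/initRAC_PRIM.ora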
Password File
The standby database must be configured to use a password file. This must be created by copying the password file from the primary site to the standby site and renaming it to reflect the standby instances.
Repeat this for each node of the cluster.
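A sketch of the copy, assuming scp connectivity between the sites and the usual orapw<SID> naming convention (paths are identical on both sites, per the configuration above):
scp $ORACLE_HOME/dbs/orapwRAC_PRIM1 oracle@RAC_STDBY01:$ORACLE_HOME/dbs/orapwRAC_STDBY1
scp $ORACLE_HOME/dbs/orapwRAC_PRIM1 oracle@RAC_STDBY02:$ORACLE_HOME/dbs/orapwRAC_STDBY2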
Additionally ensure that the initialisation parameter remote_login_passwordfile is set to 'exclusive'.
Create Audit File Destination
Create a directory on each node of the standby system to hold audit files.
mkdir /u01/app/oracle/admin/RAC_STDBY/adump
Start Standby Instance
Now that everything is in place the standby instance needs to be started ready for duplication to commence:
export ORACLE_SID=RAC_STDBY1
sqlplus / as sysdba
startup nomount pfile='/tmp/initRAC_PRIM.ora'
Test Connection
From the primary database test the connection to the standby database using the command:
sqlplus sys/mypasswd@RAC_STDBY1_DGMGRL as sysdba
This should successfully connect.
Duplicate the Primary database
The standby database is created from the primary database. Up to Oracle10g, achieving this required a backup of the primary database to be made, transferred to the standby, and restored. Oracle RMAN 11g simplifies this process by providing a new method that allows an 'on the fly' duplicate to take place. This is the method used here (the pre-11g method is described in the appendices).
From the primary database invoke RMAN using the following command:
export ORACLE_SID=RAC_PRIM1
rman target / auxiliary sys/mypasswd@RAC_STDBY1_dgmgrl
NOTE: If RMAN returns the error "rman: can't open target", ensure that $ORACLE_HOME/bin appears first in the PATH, because there is a Linux utility also named rman.
Next, issue the following duplicate command:
duplicate target database for standby from active database
spfile
set db_unique_name='RAC_STDBY'
set control_files='+DATA/RAC_STDBY/controlfile/control01.dbf'
set instance_number='1'
set audit_file_dest='/u01/app/oracle/admin/RAC_STDBY/adump'
set remote_listener='LISTENERS_RAC_STDBY'
nofilenamecheck;
Create an SPFILE for the Standby Database
By default the RMAN duplicate command will have created an spfile for the instance located in $ORACLE_HOME/dbs.
This file will contain entries that refer to the instance names on the primary database. As part of this creation process the database name is being changed to reflect the DB_UNIQUE_NAME for the standby database, and as such the spfile created is essentially worthless. A new spfile will now be created using the contents of the primary database’s spfile.
Get location of the Control File
Before starting this process, note down the value of the control_files parameter from the currently running standby database.
Create a text initialization pfile
The first stage in the process requires that the primary database's initialisation parameters be dumped to a text file:
export ORACLE_SID=RAC_PRIM1
sqlplus "/ as sysdba"
create pfile='/tmp/initRAC_STDBY.ora' from spfile;
Copy the created file ‘/tmp/initRAC_STDBY.ora’ to the standby server.
Edit the init.ora
On the standby server, edit the /tmp/initRAC_STDBY.ora file:
NOTE: Replace every occurrence of RAC_PRIM with RAC_STDBY, with the exception of the parameter DB_NAME, which must NOT change.
Set the control_files parameter to reflect the value obtained in "Get location of the Control File" above. This will most likely be +DATA/RAC_STDBY/controlfile/control01.dbf.
Save the changes.
Create SPFILE
Having created the textual initialisation file, it now needs to be converted to an spfile and stored within ASM by issuing:
export ORACLE_SID=RAC_STDBY1
sqlplus "/ as sysdba"
create spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora' from pfile='/tmp/initRAC_STDBY.ora';
Create Pointer File
With the spfile now being in ASM, the RDBMS instances need to be told where to find it.
Create a file called initRAC_STDBY1.ora in the $ORACLE_HOME/dbs directory of standby node 1 (RAC_STDBY01). This file will contain one line:
spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora'
Create a file called initRAC_STDBY2.ora in the $ORACLE_HOME/dbs directory of standby node 2 (RAC_STDBY02). This file will also contain one line:
spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora'
Additionally, remove the RMAN-created spfile from $ORACLE_HOME/dbs on standby node 1 (RAC_STDBY01).
Create secondary control files
When the RMAN duplicate completed, it created a standby database with only one control file. This is not good practice, so the next step in the process is to create extra control files.
This is a four-step process:
1. Shutdown and startup the database using nomount :
shutdown immediate;
startup nomount;
2. Change the value of the control_files parameter to '+DATA','+FRA':
alter system set control_files='+DATA','+FRA' scope=spfile;
3. Shutdown and startup the database again :
shutdown immediate;
startup nomount;
4. Use RMAN to duplicate the control file already present:
export ORACLE_SID=RAC_STDBY1
rman target /
restore controlfile from '+DATA/RAC_STDBY/controlfile/control01.dbf';
This will create a control file in both of the ASM disk groups, +DATA and +FRA. It will also update the control_files parameter in the spfile.
If you wish to have three control files, simply update the control_files parameter to include the original control file as well as the ones just created.
Cluster-enable the Standby Database
The standby database now needs to be brought under clusterware control, i.e. registered with Cluster Ready Services.
Before commencing, check that it is possible to start the instance on the second standby node (RAC_STDBY02):
export ORACLE_SID=RAC_STDBY2
sqlplus "/ as sysdba"
startup mount;
Ensure Server Side Load Balancing is configured
Check whether the init.ora parameter remote_listener is defined in the standby instances.
If the parameter is not present then create an entry in the tnsnames.ora files (of all standby nodes) which has the following format:
LISTENERS_RAC_STDBY.local =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip.local)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip.local)(PORT = 1521))
)
)
Then set the value of the parameter remote_listener to LISTENERS_RAC_STDBY.local.
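A sketch of checking and, if needed, setting the parameter across all standby instances:
show parameter remote_listener
alter system set remote_listener='LISTENERS_RAC_STDBY.local' scope=both sid='*';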
Register the Database with CRS
Issue the following commands to register the database with Oracle Cluster Ready Services:
srvctl add database -d RAC_STDBY -o $ORACLE_HOME -m local -p "+DATA/RAC_STDBY/spfileRAC_STDBY.ora" -n RAC_PRIM -r physical_standby -s mount
srvctl add instance -d RAC_STDBY -i RAC_STDBY1 -n RAC_STDBY01
srvctl add instance -d RAC_STDBY -i RAC_STDBY2 -n RAC_STDBY02
Test
Test that the above has worked by stopping any running standby instances and then starting the database (all instances) using the command:
srvctl start database -d RAC_STDBY
Once started check that the associated instances are running by using the command:
srvctl status database -d RAC_STDBY
Temporary Files
Temporary files associated with a temporary tablespace are automatically created with a standby database.
Create Standby Redo Logs
Standby Redo Logs (SRLs) are used to store redo data from the primary database when the transport is configured using the log writer (LGWR), which is the default.
Each standby redo log file must be at least as large as the largest redo log file in the primary database. It is recommended that all redo log files in the primary database and the standby redo logs in the respective standby database(s) be of the same size.
The recommended number of SRLs is :
(# of online redo logs per primary instance + 1) * # of instances .
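For example, with two instances and two online redo logs per instance, this works out to (2 + 1) * 2 = 6 standby redo log groups.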
Whilst standby redo logs are only used by the standby site, they should be defined on both the primary as well as the standby sites. This will ensure that if the two databases change their roles (primary-> standby and standby -> primary) then no extra configuration will be required.
The standby database must be mounted (mount as ‘standby’ is the default) before SRLs can be created.
SRLs are created as follows (the size given below is just an example and has to be adjusted to the current environment):
1. sqlplus '/ as sysdba'
2. startup mount
3. alter database add standby logfile SIZE 100M;
NOTE: Standby redo logs are also created in logfile groups, but be aware that their group numbers must be greater than the group numbers associated with the ORLs in the primary database. With respect to group numbering, Oracle makes no distinction between ORLs and SRLs.
NOTE: Standby Redo Logs need to be created on both databases.
The standby database is now created. The next stage in the process is enabling transaction synchronisation. There are two ways of doing this:
1. Using SQL*Plus
2. Using the Data Guard Broker
Configuring Data Guard using SQL*Plus
Configure the Standby Database
The following initialisation parameters need to be set on the standby database:
Parameter Value (RAC_STDBY01) Value (RAC_STDBY02)
db_unique_name RAC_STDBY
db_block_checking TRUE (OPTIONAL)
db_block_checksum TRUE (OPTIONAL)
log_archive_config dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes 5
fal_client RAC_STDBY1.local (node 1) RAC_STDBY2.local (node 2)
fal_server 'RAC_PRIM1.local','RAC_PRIM2.local'
standby_file_management auto
log_archive_dest_2 service=RAC_PRIM LGWR SYNC AFFIRM db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance mode) service=RAC_PRIM ARCH db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
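For reference, a sketch of setting the cluster-wide values above from SQL*Plus on the standby; the per-node fal_client values use sid='<instance>'. The primary-side parameters in the next section are set the same way with the roles reversed.
alter system set log_archive_config='dg_config=(RAC_PRIM,RAC_STDBY)' scope=both sid='*';
alter system set log_archive_max_processes=5 scope=both sid='*';
alter system set standby_file_management=auto scope=both sid='*';
alter system set fal_server='RAC_PRIM1.local','RAC_PRIM2.local' scope=both sid='*';
alter system set fal_client='RAC_STDBY1.local' scope=both sid='RAC_STDBY1';
alter system set fal_client='RAC_STDBY2.local' scope=both sid='RAC_STDBY2';
alter system set log_archive_dest_2='service=RAC_PRIM LGWR SYNC AFFIRM db_unique_name=RAC_PRIM valid_for=(ALL_LOGFILES,PRIMARY_ROLE)' scope=both sid='*';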
Configure the Primary Database
The following initialisation parameters need to be set on the primary database:
Parameter Value (RAC_PRIM01) Value (RAC_PRIM02)
db_unique_name RAC_PRIM
db_block_checking TRUE (OPTIONAL)
db_block_checksum TRUE (OPTIONAL)
log_archive_config dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes 5
fal_client RAC_PRIM1.local (node 1) RAC_PRIM2.local (node 2)
fal_server 'RAC_STDBY1.local','RAC_STDBY2.local'
standby_file_management auto
log_archive_dest_2 service=RAC_STDBY LGWR SYNC AFFIRM db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance mode) service=RAC_STDBY ARCH db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
Set the Protection Mode
In order to specify the protection mode, the primary database must be mounted but not opened.
NOTE: The database must be mounted in exclusive mode, which effectively means that all but one RAC instance must be shut down and the remaining instance started with a parameter setting of cluster_database=false.
Once this is the case then the following statement must be issued on the primary site:
If using Maximum Protection mode then use the command:
Alter database set standby database to maximize protection;
If using Maximum Availability mode then use the command:
Alter database set standby database to maximize availability;
If using Maximum Performance mode then use the command:
Alter database set standby database to maximize performance;
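Putting this together, a sketch of the complete sequence for, say, Maximum Availability (instance and database names as per this configuration):
srvctl stop database -d RAC_PRIM
export ORACLE_SID=RAC_PRIM1
sqlplus / as sysdba
startup mount
alter system set cluster_database=false scope=spfile sid='*';
shutdown immediate
startup mount
alter database set standby database to maximize availability;
alter system set cluster_database=true scope=spfile sid='*';
shutdown immediate
exit
srvctl start database -d RAC_PRIM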
Enable Redo Transport & Redo Apply
Enabling the transport and application of redo to the standby database is achieved by the following:
Standby Site
The standby database needs to be placed into Managed Recovery mode. This is achieved by issuing the statement:
Alter database recover managed standby database disconnect;
Oracle 10gR2 introduced real-time redo apply (SRLs are required). Enabling real-time apply is achieved by issuing the statement:
alter database recover managed standby database using current logfile disconnect;
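Once apply is running, its progress can be checked from the standby. A sketch:
select process, status, thread#, sequence#, block#
from v$managed_standby;
With real-time apply active, the MRP0 row should show a status of APPLYING_LOG.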
Primary Site:
Set:
log_archive_dest_state_2=enable
in the init.ora file, or issue via SQL*Plus:
alter system set log_archive_dest_state_2=enable;
For complete and more detailed information, please refer to the following Oracle HA best practices article:
Data Guard 11g Installation and Configuration Best Practices on Oracle RAC
====================================================================================
Oracle 11g Release 1 RAC On Linux Using VMware Server
This article describes the installation of Oracle 11g release 1 (11.1) RAC on Linux (Oracle Enterprise Linux 5) using VMware Server with no additional shared disk devices.
- Introduction
- Download Software
- VMware Server Installation
- Virtual Machine Setup
- Guest Operating System Installation
- Oracle Installation Prerequisites
- Install VMware Client Tools
- Create Shared Disks
- Clone the Virtual Machine
- Install the Clusterware Software
- Install the Database Software and Create an ASM Instance
- Create a Database using the DBCA
- TNS Configuration
- Check the Status of the RAC
Introduction
One of the biggest obstacles preventing people from setting up test RAC environments is the requirement for shared storage. In a production environment, shared storage is often provided by a SAN or high-end NAS device, but both of these options are very expensive when all you want to do is get some experience installing and using RAC. A cheaper alternative is to use a FireWire disk enclosure to allow two machines to access the same disk(s), but that still costs money and requires two servers. A third option is to use VMware Server to fake the shared storage. Using VMware Server you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks, overcoming the obstacle of expensive shared storage (a sketch of the disk settings involved appears after the list below).
Before you launch into this installation, here are a few things to consider.
- The finished system includes the host operating system, two guest operating systems, two sets of Oracle Clusterware, two ASM instances and two Database instances all on a single server. As you can imagine, this requires a significant amount of disk space, CPU and memory. I was able to complete this installation on a 3.4G Pentium 4 with 3G of memory, but it was extremely slow.
- This procedure provides a bare bones installation to get the RAC working. There is no redundancy in the Clusterware installation or the ASM installation. To add this, simply create double the amount of shared disks and select the "Normal" redundancy option when it is offered. Of course, this will take more disk space.
- During the virtual disk creation, I always choose not to preallocate the disk space. This makes virtual disk access slower during the installation, but saves on wasted disk space.
- This is not, and should not be considered, a production-ready system. It's simply to allow you to get used to installing and using RAC.
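For reference, faking the shared storage typically comes down to attaching the same virtual disks to both VMs on a second, shared SCSI bus and disabling disk locking. A sketch of the .vmx entries commonly used for this (the entries are an assumption based on typical setups and can vary by VMware Server version; the disk path is illustrative):
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/u01/VM/shared/asm1.vmdk"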
Download Software
Download the following software.
VMware Server Installation
Regardless of the host OS, the setup of the virtual machines should be similar. First, install the VMware Server software. On Linux you do this with the following command as the root user.
# rpm -Uvh VMware-server-*.rpm
Preparing...                ########################################### [100%]
   1:VMware-server         ########################################### [100%]
#
Then finish the configuration by running the vmware-config.pl script as the root user. Most of the questions can be answered with the default response by pressing the return key. The output below shows my responses to the questions.
# vmware-config.pl
Making sure services for VMware Server are stopped.
Stopping VMware services:
Virtual machine monitor [ OK ]
You must read and accept the End User License Agreement to continue.
Press enter to display it.
VMWARE, INC.
SOFTWARE BETA TEST AGREEMENT
*** Edited out license agreement ***
Do you accept? (yes/no) yes
Thank you.
Configuring fallback GTK+ 2.4 libraries.
In which directory do you want to install the mime type icons?
[/usr/share/icons]
What directory contains your desktop menu entry files? These files have a
.desktop file extension. [/usr/share/applications]
In which directory do you want to install the application's icon?
[/usr/share/pixmaps]
Trying to find a suitable vmmon module for your running kernel.
The module bld-2.6.9-5.EL-i686smp-RHEL4 loads perfectly in the running kernel.
Do you want networking for your virtual machines? (yes/no/help) [yes]
Configuring a bridged network for vmnet0.
The following bridged networks have been defined:
. vmnet0 is bridged to eth0
All your ethernet interfaces are already bridged.
Do you want to be able to use NAT networking in your virtual machines? (yes/no)
[yes]
Configuring a NAT network for vmnet8.
Do you want this program to probe for an unused private subnet? (yes/no/help)
[yes]
Probing for an unused private subnet (this can take some time)...
The subnet 172.16.210.0/255.255.255.0 appears to be unused.
The following NAT networks have been defined:
. vmnet8 is a NAT network on private subnet 172.16.210.0.
Do you wish to configure another NAT network? (yes/no) [no]
Do you want to be able to use host-only networking in your virtual machines?
[yes] no
Trying to find a suitable vmnet module for your running kernel.
The module bld-2.6.9-5.EL-i686smp-RHEL4 loads perfectly in the running kernel.
Please specify a port for remote console connections to use [902]
Stopping xinetd: [ OK ]
Starting xinetd: [ OK ]
Configuring the VMware VmPerl Scripting API.
Building the VMware VmPerl Scripting API.
Using compiler "/usr/bin/gcc". Use environment variable CC to override.
The installation of the VMware VmPerl Scripting API succeeded.
Do you want this program to set up permissions for your registered virtual
machines? This will be done by setting new permissions on all files found in
the "/etc/vmware/vm-list" file. [no] yes
Generating SSL Server Certificate
In which directory do you want to keep your virtual machine files?
[/var/lib/vmware/Virtual Machines] /u01/VM
Do you want to enter a serial number now? (yes/no/help) [no] yes
Please enter your 20-character serial number.
Type XXXXX-XXXXX-XXXXX-XXXXX or 'Enter' to cancel: ENTER-YOUR-SERIAL-NUMBER
Starting VMware services:
Virtual machine monitor [ OK ]
Virtual ethernet [ OK ]
Bridged networking on /dev/vmnet0 [ OK ]
Host-only networking on /dev/vmnet8 (background) [ OK ]
NAT service on /dev/vmnet8 [ OK ]
Starting VMware virtual machines... [ OK ]
The configuration of VMware Server e.x.p build-22874 for Linux for this running
kernel completed successfully.
#
On the "Connect to Host" dialog, accept the "Local host" option by clicking the "Connect" button.
You are then presented with the main VMware Server Console screen.
The VMware Server is now installed and ready to use.
Virtual Machine Setup
Now we must define the two virtual RAC nodes. We can save time by defining one VM, then cloning it when it is installed. Click the "Create a new virtual machine" button to start the "New Virtual Machine Wizard". Click the "Next" button on the welcome page.
Select the "Custom" virtual machine configuration and click the "Next" button.
Select the "Linux" guest operating system option, and set the version to "Red Hat Enterprise Linux 4", then click the "Next" button. The RHEL4 option is used because at the time of writing, the current version of VMware server doesn't explicitly support RHEL5, but this setting works fine.
Enter the name "RAC1" and the location should default to "/u01/VM/RAC1", then click the "Next" button.
Select the required number of processors and click the "Next" button.
Uncheck the "Make this virtual machine private" checkbox and click the "Next" button.
Select the amount of memory to associate with the virtual machine. Remember, you are going to need two instances, so don't associate too much, but you are going to need approximately 1G (1024M) to complete the installation successfully.
Accept the "Use bridged networking" option by clicking the "Next" button.
Accept the "LSI Logic" option by clicking the "Next" button.
Select the "Create a new virtual disk" option and click the "Next" button.
Accept the "SCSI" option by clicking the "Next" button. It's a virtual disk, so you can still use this option even if your physical disk is IDE or SATA.
Set the disk size to "10.0" GB and uncheck the "Allocate all disk space now" option. The latter will make disk access slower, but will save you wasting disk space.
Accept "RAC1.vmdk" as the disk file name and complete the VM creation by clicking the "Finish" button.
On the "VMware Server Console" screen, click the "Edit virtual machine settings" button.
On the "Virtual Machine Settings" screen, highlight the "Floppy 1" drive and click the "- Remove" button.
Click the "+ Add" button, select a hardware type of "Ethernet Adapter", then click the "Next" button.
Accept the "Bridged" option by clicking the "Finish" button.
Finish by clicking the "OK" button on the Virtual Machine Settings dialog.
The virtual machine is now configured so we can start the guest operating system installation.
Guest Operating System Installation
Place the first OEL 5 disk in the CD drive and start the virtual machine by clicking the "Power on this virtual machine" button. The right pane of the VMware Server Console should display a boot loader, then the OEL installation screen. Continue through the OEL 5 installation as you would for a normal server. It should be a server installation with a minimum of 2G swap, firewall and SELinux disabled, and the following package groups installed:
- GNOME Desktop Environment
- Editors
- Graphical Internet
- Text-based Internet
- Development Libraries
- Development Tools
- Server Configuration Tools
- Administration Tools
- Base
- System Tools
- X Window System
In addition, use the following hostname and network settings:
- hostname: rac1.localdomain
- IP Address eth0: 192.168.2.101 (public address)
- Default Gateway eth0: 192.168.2.1 (public address)
- IP Address eth1: 192.168.0.101 (private address)
- Default Gateway eth1: none
Once the basic installation is complete, install the following packages whilst logged in as the root user.
# From Enterprise Linux 5 Disk 1
cd /media/cdrom/Server
rpm -Uvh binutils-2.*
rpm -Uvh elfutils-libelf-0.*
rpm -Uvh glibc-2.*
rpm -Uvh glibc-common-2.*
rpm -Uvh libaio-0.*
rpm -Uvh libgcc-4.*
rpm -Uvh libstdc++-4.*
rpm -Uvh make-3.*
cd /
eject
# From Enterprise Linux 5 Disk 2
cd /media/cdrom/Server
rpm -Uvh compat-libstdc++-33*
rpm -Uvh elfutils-libelf-devel-*
rpm -Uvh glibc-headers*
rpm -Uvh glibc-devel-2.*
rpm -Uvh libgomp*
rpm -Uvh gcc-4.*
rpm -Uvh gcc-c++-4.*
rpm -Uvh libaio-devel-0.*
rpm -Uvh libstdc++-devel-4.*
rpm -Uvh unixODBC-2.*
rpm -Uvh unixODBC-devel-2.*
cd /
eject
# From Enterprise Linux 5 Disk 3
cd /media/cdrom/Server
rpm -Uvh sysstat-7.*
cd /
eject
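Before moving on, it is worth confirming the packages actually installed. The following check is not part of the original procedure, just a suggested sanity test; any package reported as "not installed" should be loaded from the appropriate disk before continuing.
# rpm -q binutils elfutils-libelf glibc glibc-common libaio libgcc libstdc++ make gcc gcc-c++ sysstat unixODBC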
Oracle Installation Prerequisites
Perform the following steps whilst logged into the RAC1 virtual machine as the root user.
The /etc/hosts file must contain the following information.
127.0.0.1 localhost.localdomain localhost
# Public
192.168.2.101 rac1.localdomain rac1
192.168.2.102 rac2.localdomain rac2
# Private
192.168.0.101 rac1-priv.localdomain rac1-priv
192.168.0.102 rac2-priv.localdomain rac2-priv
# Virtual
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip
Add the following lines to the /etc/sysctl.conf file.
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
Run the following command to change the current kernel parameters.
/sbin/sysctl -p
Add the following lines to the /etc/security/limits.conf file.
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Add the following lines to the /etc/pam.d/login file, if it does not already exist.
session required /lib/security/pam_limits.so
session required pam_limits.so
Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows.
SELINUX=disabled
Alternatively, this alteration can be done using the GUI tool (System > Administration > Security Level and Firewall). Click on the SELinux tab and disable the feature.
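To confirm the settings took effect, the values can be read back. This quick check is illustrative only, not part of the original article.
/sbin/sysctl -n kernel.sem # should return: 250 32000 100 128
/usr/sbin/getenforce # should return: Disabled (after a reboot)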
Create the new groups and users.
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmadmin
useradd -u 500 -g oinstall -G dba,oper,asmadmin oracle
passwd oracle
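To verify the new user and its group memberships, a quick check follows; this is not in the original article, and the group IDs on your system may differ.
# id oracle # expect uid=500 and the groups oinstall, dba, oper and asmadmin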
Create the directories in which the Oracle software will be installed.
mkdir -p /u01/crs/oracle/product/11.1.0/crs
mkdir -p /u01/app/oracle/product/11.1.0/db_1
chown -R oracle:oinstall /u01
Login as the oracle user and add the following lines at the end of the .bash_profile file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
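The new environment only applies to fresh logins, so either log out and back in, or source the file manually. A minimal check, assuming the entries above were added verbatim:
$ . ~/.bash_profile
$ echo $ORACLE_HOME # should display /u01/app/oracle/product/11.1.0/db_1
$ echo $ORACLE_SID # should display RAC1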
Install VMware Client Tools
Login as the root user on the RAC1 virtual machine, then select the "VM > Install VMware Tools..." option from the main VMware Server Console menu. This should mount a virtual CD containing the VMware Tools software. Double-click on the CD icon labelled "VMware Tools" to open the CD. Right-click on the ".rpm" package and select the "Open with 'Install Packages'" menu option.
Click the "Continue" button on the "Completed System Preparation" screen and wait for the installation to complete.
Once the package is loaded, the CD should unmount automatically. You must then run the "vmware-config-tools.pl" script as the root user.
# vmware-config-tools.pl
Accept all the default settings and pick the screen resolution of your choice. Ignore any warnings or errors. The VMware client tools are now installed. Reboot the server before proceeding. After the reboot, it is possible the monitor will not be recognised. If this is the case, don't panic. Follow the instructions provided on the screen and reconfigure the monitor settings, which will allow the X server to function correctly.
Create Shared Disks
Shut down the RAC1 virtual machine using the following command.
# shutdown -h now
Create a directory on the host system to hold the shared virtual disks.
# mkdir -p /u01/VM/shared
On the VMware Server Console, click the "Edit virtual machine settings" button. On the "Virtual Machine Settings" screen, click the "+ Add" button.
Click the "Next" button on the welcome screen, then select the hardware type of "Hard Disk" and click the "Next" button.
Accept the "Create a new virtual disk" option by clicking the "Next" button.
Accept the "SCSI" option by clicking the "Next" button.
Set the disk size to "10.0" GB and uncheck the "Allocate all disk space now" option, then click the "Next" button.
Set the disk name to "/u01/VM/shared/ocr.vmdk" and click the "Advanced" button.
Set the virtual device node to "SCSI 1:1" and the mode to "Independent" and "Persistent", then click the "Finish" button.
Repeat the previous hard disk creation steps 4 more times, using the following values:
- File Name: /u01/VM/shared/votingdisk.vmdk
  Virtual Device Node: SCSI 1:2
  Mode: Independent and Persistent
- File Name: /u01/VM/shared/asm1.vmdk
  Virtual Device Node: SCSI 1:3
  Mode: Independent and Persistent
- File Name: /u01/VM/shared/asm2.vmdk
  Virtual Device Node: SCSI 1:4
  Mode: Independent and Persistent
- File Name: /u01/VM/shared/asm3.vmdk
  Virtual Device Node: SCSI 1:5
  Mode: Independent and Persistent
Edit the contents of the "/u01/VM/RAC1/RAC1.vmx" file using a text editor, making sure the following entries are present. Some of the entries will already be present, some will not.
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "/u01/VM/shared/ocr.vmdk"
scsi1:1.deviceType = "plainDisk"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "/u01/VM/shared/votingdisk.vmdk"
scsi1:2.deviceType = "plainDisk"
scsi1:2.redo = ""
scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.fileName = "/u01/VM/shared/asm1.vmdk"
scsi1:3.deviceType = "plainDisk"
scsi1:3.redo = ""
scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.fileName = "/u01/VM/shared/asm2.vmdk"
scsi1:4.deviceType = "plainDisk"
scsi1:4.redo = ""
scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.fileName = "/u01/VM/shared/asm3.vmdk"
scsi1:5.deviceType = "plainDisk"
scsi1:5.redo = ""
Use the "fdisk" command to partition the disks sdb to sdf. The following output shows the expected fdisk output for the sdb disk.# cd /dev
# ls sd*
sda sda1 sda2 sdb sdc sdd sde sdf
#
In each case, the sequence of answers is "n", "p", "1", "Return", "Return", "p" and "w".# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): p
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 1305 10482381 83 Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
#
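Rather than answering the fdisk prompts interactively for each of the remaining disks (sdc, sdd, sde and sdf), the same answer sequence can be piped in. This loop is a convenience sketch, not part of the original procedure; verify the results with "fdisk -l" afterwards.
for disk in sdc sdd sde sdf; do
  # answers: new partition, primary, number 1, default first/last cylinder, write
  printf "n\np\n1\n\n\nw\n" | fdisk /dev/$disk
done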
Once all the disks are partitioned, the results can be seen by repeating the previous "ls" command.
# cd /dev
# ls sd*
sda sda1 sda2 sdb sdb1 sdc sdc1 sdd sdd1 sde sde1 sdf sdf1
#
Add the following commands to the /etc/rc.local file.
chown oracle:oinstall /dev/sdb1
chown oracle:oinstall /dev/sdc1
chown oracle:oinstall /dev/sdd1
chown oracle:oinstall /dev/sde1
chown oracle:oinstall /dev/sdf1
chmod 600 /dev/sdb1
chmod 600 /dev/sdc1
chmod 600 /dev/sdd1
chmod 600 /dev/sde1
chmod 600 /dev/sdf1
The shared disks are now configured. We don't have to worry about defining raw devices, which are deprecated in Enterprise Linux 5.
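After the next reboot you can confirm that rc.local has reapplied the ownership and permissions; an indicative check:
# ls -l /dev/sd[b-f]1
Each partition should show the owner "oracle", the group "oinstall" and read/write permissions for the owner only.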
Clone the Virtual Machine
The current version of VMware Server does not include an option to clone a virtual machine, but the following steps illustrate how this can be achieved manually. Shut down the RAC1 virtual machine using the following command.
# shutdown -h now
Copy the RAC1 virtual machine using the following command.
# cp -R /u01/VM/RAC1 /u01/VM/RAC2
Edit the contents of the "/u01/VM/RAC2/RAC1.vmx" file, making the following change.
displayName = "RAC2"
Ignore discrepancies with the file names in the "/u01/VM/RAC2" directory. This does not affect the action of the virtual machine.
In the VMware Server Console, select the File > Open menu options and browse for the "/u01/VM/RAC2/RAC1.vmx" file. Once opened, the RAC2 virtual machine is visible on the console. Start the RAC2 virtual machine by clicking the "Power on this virtual machine" button and click the "Create" button on the subsequent "Question" screen.
Ignore any errors during the server startup. We are expecting the networking components to fail at this point.
Log in to the RAC2 virtual machine as the root user and start the "Network Configuration" tool (System > Administration > Network).
Remove the devices with the ".bak" nicknames. To do this, highlight a device, deactivate it, then delete it. This will leave just the regular "eth0" and "eth1" devices. Highlight the "eth0" interface, click the "Edit" button on the toolbar, and alter the IP address to "192.168.2.102" in the resulting screen.
Click on the "Hardware Device" tab and click the "Probe" button. Then accept the changes by clicking the "OK" button.
Repeat the process for the "eth1" interface, this time setting the IP address to "192.168.0.102", and making sure the default gateway is not set for the "eth1" interface.
Click on the "DNS" tab and change the host name to "rac2.localdomain", then click on the "Devices" tab.
Once you are finished, save the changes (File > Save) and activate the network interfaces by highlighting them and clicking the "Activate" button.
Edit the /home/oracle/.bash_profile file on the RAC2 node to correct the ORACLE_SID and ORACLE_HOSTNAME values.
ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_HOSTNAME=rac2.localdomain; export ORACLE_HOSTNAME
Start the RAC1 virtual machine and restart the RAC2 virtual machine. When both nodes have started, check they can both ping all the public and private IP addresses using the following commands.
ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv
At this point the virtual IP addresses defined in the /etc/hosts file will not work, so don't bother testing them.
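If any ping fails, double-check the address actually assigned to each interface. An optional sanity check, run on each node:
# ifconfig eth0 | grep "inet addr"
# ifconfig eth1 | grep "inet addr"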
Configure SSH on each node in the cluster. Log in as the "oracle" user and perform the following tasks on each node.
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa # Accept the default settings.
The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file.
Log in as the "oracle" user on RAC1, generate an "authorized_keys" file on RAC1 and copy it to RAC2 using the following commands.
su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/
The "authorized_keys" file on both servers now contains the public keys generated on all RAC nodes.su - oracle
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys rac1:/home/oracle/.ssh/
The "authorized_keys" file on both servers now contains the public keys generated on all RAC nodes.
To enable SSH user equivalency on the cluster member nodes issue the following commands on each node.
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
You should now be able to SSH and SCP between servers without entering passwords.
ssh rac1 date
ssh rac2 date
ssh rac1.localdomain date
ssh rac2.localdomain date
Before installing the clusterware, check the prerequisites have been met using the "runcluvfy.sh" utility in the clusterware root directory.
/mountpoint/clusterware/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
If you get any failures, be sure to correct them before proceeding.
It's a good idea to take a snapshot of the virtual machines, so you can repeat the following stages if you run into any problems. To do this, shutdown both virtual machines and issue the following commands.
# cd /u01/VM
# tar -cvf RAC-PreClusterware.tar RAC1 RAC2 shared
# gzip RAC-PreClusterware.tar
The virtual machine setup is now complete.
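If you later need to roll back to this point, the snapshot can be restored by reversing the archive steps. A sketch, assuming both virtual machines are shut down first and you accept overwriting the existing directories:
# cd /u01/VM
# gunzip RAC-PreClusterware.tar.gz
# tar -xvf RAC-PreClusterware.tar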
Install the Clusterware Software
Start the RAC1 and RAC2 virtual machines, log in to RAC1 as the oracle user and start the Oracle installer.
./runInstaller
On the "Welcome" screen, click the "Next" button.
Accept the default inventory location by clicking the "Next" button.
Enter the appropriate name and path for the Oracle Home and click the "Next" button.
Wait while the prerequisite checks are done. If you have any failures, correct them and retry the tests before clicking the "Next" button.
The "Specify Cluster Configuration" screen shows only the RAC1 node in the cluster. Click the "Add" button to continue.
Enter the details for the RAC2 node and click the "OK" button.
Click the "Next" button to continue.
The "Specific Network Interface Usage" screen defines how each network interface will be used. Highlight the "eth0" interface and click the "Edit" button.
Set the "eht0" interface type to "Public" and click the "OK" button.
Leave the "eth1" interface as private and click the "Next" button.
Click the "External Redundancy" option, enter "/dev/raw/raw1" as the OCR Location and click the "Next" button. To have greater redundancy we would need to define another shared disk for an alternate location.
Click the "External Redundancy" option, enter "/dev/raw/raw2" as the Voting Disk Location and click the "Next" button. To have greater redundancy we would need to define another shared disk for an alternate location.
On the "Summary" screen, click the "Install" button to continue.
Wait while the installation takes place.
Once the install is complete, run the orainstRoot.sh and root.sh scripts on both nodes as directed on the following screen.
The output from the orainstRoot.sh file should look something like that listed below.
# cd /u01/app/oraInventory
# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete
#
The output of the root.sh will vary a little depending on the node it is run on. The following text is the output from the RAC1 node. Ignore the directory ownership warnings. We should really use a separate directory structure for the clusterware so it can be owned by the root user, but it has little effect on the finished results.
# cd /u01/crs/oracle/product/11.1.0/crs
# ./root.sh
WARNING: directory '/u01/crs/oracle/product/11.1.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/crs/oracle/product/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/crs/oracle/product' is not owned by root. Changing owner to root
The directory '/u01/crs/oracle' is not owned by root. Changing owner to root
The directory '/u01/crs' is not owned by root. Changing owner to root
The directory '/u01' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node:
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/sdc1
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
rac1
Cluster Synchronization Services is inactive on these nodes.
rac2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
#
The output from the RAC2 node is listed below.
# cd /u01/crs/oracle/product/11.1.0/crs
# ./root.sh
WARNING: directory '/u01/crs/oracle/product/11.1.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/crs/oracle/product/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/crs/oracle/product' is not owned by root. Changing owner to root
The directory '/u01/crs/oracle' is not owned by root. Changing owner to root
The directory '/u01/crs' is not owned by root. Changing owner to root
The directory '/u01' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node:
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
rac1
rac2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
#
Here you can see that some of the configuration steps are omitted, as they were done by the first node. In addition, the final part of the script ran the Virtual IP Configuration Assistant (VIPCA) in silent mode.
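At this point the health of the cluster stack can also be confirmed from the command line on either node. Two indicative checks, assuming the clusterware bin directory (/u01/crs/oracle/product/11.1.0/crs/bin) is in your path:
# crsctl check crs
# crs_stat -t
The first confirms the CSS, CRS and EVM daemons are healthy; the second lists the registered resources (VIPs, GSD, ONS) and their states.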
You should now return to the "Execute Configuration Scripts" screen on RAC1 and click the "OK" button.
Wait for the configuration assistants to complete.
When the installation is complete, click the "Exit" button to leave the installer.
It's a good idea to take a snapshot of the virtual machines, so you can repeat the following stages if you run into any problems. To do this, shutdown both virtual machines and issue the following commands.
# cd /u01/VM
# tar -cvf RAC-PostClusterware.tar RAC1 RAC2 shared
# gzip RAC-PostClusterware.tar
The clusterware installation is now complete.
Install the Database Software and Create an ASM Instance
Start the RAC1 and RAC2 virtual machines, log in to RAC1 as the oracle user and start the Oracle installer.
./runInstaller
On the "Welcome" screen, click the "Next" button.
Select the "Enterprise Edition" option and click the "Next" button.
Enter the name and path for the Oracle Home and click the "Next" button.
Select the "Cluster Install" option and make sure both RAC nodes are selected, the click the "Next" button.
Wait while the prerequisite checks are done. If you have any failures, correct them and retry the tests before clicking the "Next" button.
Select the "Configure Automatic Storage Management (ASM)" option, enter the SYS password for the ASM instance, then click the "Next" button.
Select the "External" redundancy option (no mirroring), select all three raw disks (/dev/sdd1, /dev/sde1 and /dev/sdf1), then click the "Next" button.
The candidate disks may not be listed at first. If this is the case, click the "Change Disk Discovery Path..." button, enter the value "/dev/sd*", then click the "OK" button. After a short pause, the candidate disks should be listed.
Click the "next" button to avoid the Oracle Configuration Manager Registration.
On the "Summary" screen, click the "Install" button to continue.
Wait while the database software installs.
Once the installation is complete, wait while the configuration assistants run.
Execute the "root.sh" scripts on both nodes, as instructed on the "Execute Configuration scripts" screen, then click the "OK" button.
When the installation is complete, click the "Exit" button to leave the installer.
It's a good idea to take a snapshot of the virtual machines, so you can repeat the following stages if you run into any problems. To do this, shutdown both virtual machines and issue the following commands.
# cd /u01/VM
# tar -cvf RAC-PostASM.tar RAC1 RAC2 shared
# gzip RAC-PostASM.tar
The database software installation and ASM creation step is now complete.
Create a Database using the DBCA
Start the RAC1 and RAC2 virtual machines, log in to RAC1 as the oracle user and start the Database Configuration Assistant.
dbca
On the "Welcome" screen, select the "Oracle Real Application Clusters database" option and click the "Next" button.
Select the "Create a Database" option and click the "Next" button.
Highlight both RAC nodes and click the "Next" button.
Select the "Custom Database" option and click the "Next" button.
Enter the values "RAC.WORLD" and "RAC" for the Global Database Name and SID Prefix respectively, then click the "Next" button.
Accept the management options by clicking the "Next" button. If you are attempting the installation on a server with limited memory, you may prefer not to configure Enterprise Manager at this time.
Enter the database passwords, then click the "Next" button.
Select the "Automatic Storage Management (ASM)" option, then click the "Next" button.
Select the "DATA" disk group, then click the "Next" button.
Accept the "Use Oracle-Managed Files" database location by the "Next" button.
Check both the "Specify Flash Recovery Area" and "Enable Archiving" options. Enter "+DATA" as the Flash Recovery Area, then click the "Next" button.
Uncheck all but the "Enterprise Manager Repository" option, then click the "Standard Database Components..." button.
Uncheck all but the "Oracle JVM" and "Oracle XML DB" options, then click the "OK" button, followed by the "Next" button on the previous screen. If you are attempting the installation on a server with limited memory, you may prefer not to install the JVM at this time.
Accept the default memory settings by clicking the "Next" button.
Accept the default security settings by clicking the "Next" button.
Accept the default automatic maintenance task settings by clicking the "Next" button.
Accept the database storage settings by clicking the "Next" button.
Accept the database creation options by clicking the "Finish" button.
Accept the summary information by clicking the "OK" button.
Wait while the database is created.
Once the database creation is complete, you are presented with a summary screen. Make a note of the information on it and click the "Exit" button.
The RAC database creation is now complete.
TNS Configuration
Once the installation is complete, the "$ORACLE_HOME/network/admin/listener.ora" file on each RAC node will contain entries similar to the following.
LISTENER_RAC1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521)(IP = FIRST))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.2.101)(PORT = 1521)(IP = FIRST))
)
)
The "$ORACLE_HOME/network/admin/tnsnames.ora" file on each RAC node will contain entries similar to the following.
RAC =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC.WORLD)
)
)
LISTENERS_RAC =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
)
RAC2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC.WORLD)
(INSTANCE_NAME = RAC2)
)
)
RAC1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RAC.WORLD)
(INSTANCE_NAME = RAC1)
)
)
This configuration allows direct connections to a specific instance, or a load-balanced connection to the main service, as demonstrated below.
$ sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 22 09:09:01 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> CONN sys/password@rac1 AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;
INSTANCE_NAME HOST_NAME
---------------- ----------------------------------------------------------------
RAC1 rac1.localdomain
SQL> CONN sys/password@rac2 AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;
INSTANCE_NAME HOST_NAME
---------------- ----------------------------------------------------------------
RAC2 rac2.localdomain
SQL> CONN sys/password@rac AS SYSDBA
Connected.
SQL> SELECT instance_name, host_name FROM v$instance;
INSTANCE_NAME HOST_NAME
---------------- ----------------------------------------------------------------
RAC1 rac1.localdomain
SQL>
Check the Status of the RAC
There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
$ srvctl config database -d RAC
rac1 RAC1 /u01/app/oracle/product/11.1.0/db_1
rac2 RAC2 /u01/app/oracle/product/11.1.0/db_1
$
$ srvctl status database -d RAC
Instance RAC1 is running on node rac1
Instance RAC2 is running on node rac2
$
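The srvctl utility supports several other useful status queries. For example, the following commands (illustrative, output omitted) report on the node applications and the ASM instance for a given node.
$ srvctl status nodeapps -n rac1
$ srvctl status asm -n rac1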
The V$ACTIVE_INSTANCES view can also display the current status of the instances.
$ sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 22 09:09:01 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
SQL> SELECT * FROM v$active_instances;
INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
1 rac1.localdomain:RAC1
2 rac2.localdomain:RAC2
SQL>
Finally, the GV$ views allow you to display global information for the whole RAC.
SQL> SELECT inst_id, program, sid, serial# FROM gv$session;
SQL> /
INST_ID PROGRAM SID SERIAL#
---------- ------------------------------------------------ ---------- ----------
1 oracle@rac1.localdomain (q002) 121 46
.
.
1 racgimon@rac1.localdomain (TNS V1-V3) 170 11
2 sqlplus@rac2.localdomain (TNS V1-V3) 120 51
.
.
2 oracle@rac2.localdomain (RSMN) 170 3
77 rows selected.
SQL>
If you have configured Enterprise Manager, it can be used to view the configuration and current status of the database using a URL like "https://rac1.localdomain:1158/em".
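Most dynamic performance views have a GV$ counterpart that adds an INST_ID column, so cluster-wide summaries are straightforward. For example, a quick per-instance session count (illustrative only):
SQL> SELECT inst_id, COUNT(*) FROM gv$session GROUP BY inst_id;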
For more information see:
- Oracle 11g Release 1 RAC On Linux Using NFS
- Oracle 10g RAC On Linux Using VMware Server
- Clusterware Installation Guide for Linux
- Real Application Clusters Installation Guide for Linux and UNIX
- Oracle Database Installation Guide 11g Release 1 (11.1) for Linux
- Direct and Asynchronous I/O
You may refer to the following link for the original version of this article:
http://www.oracle-base.com/articles/11g/OracleDB11gR1RACInstallationOnOEL5UsingVMware.php