Thursday, September 20, 2012

How to set up master-slave replication in MySQL

First, configure /etc/my.cnf on both the master and slave servers.
1. Master server minimum configuration needed

server_id=10 (it can be any number, but must be unique)
sync_binlog=1
Then specify the binary log location:
log_bin_index=/myarch/master_bin_log.index
log-bin=/myarch/master_bin_log

/myarch should have mysql:mysql ownership

2.Slave server minimum configuration needed

server_id=20 (should be different from master)
sync_binlog=1
log_bin_index=/myarch/slave_bin_log.index
log-bin=/myarch/slave_bin_log
relay-log-info-file=/myreplog/slave_rep.info
relay-log=/myreplog/slave_rep.log
relay-log-index=/myreplog/slave_rep.index

/myreplog should also have mysql:mysql ownership

3. Procedure for creating master-slave replication

1. Stop any application that points to the production database we are going to sync.

2. Log in as root and create the replication user on the master server.
$mysql -u root -p
mysql> GRANT REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'slaveuser'@'%' IDENTIFIED BY 'slave';
mysql>FLUSH PRIVILEGES;

3. Check all the open connections to the server:
mysql> SHOW PROCESSLIST;
Kill any remaining processes:
mysql> KILL processid;

4. Take a full database backup:
$mysqldump -v -u root -p --all-databases >/root/dumpfilename.bkp

5. Close all open tables and lock all tables in all databases with a global read lock:
mysql>FLUSH TABLES WITH READ LOCK;

Leave the terminal open; otherwise the database will be unlocked. Write down the File and Position values shown by SHOW MASTER STATUS (step 7 below); we will need them later.
6. Copy this SQL dump backup to the slave server, for example with scp.
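For example (slavehost here is just a placeholder for the slave server's hostname or IP):
$scp /root/dumpfilename.bkp root@slavehost:/root/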

7. Determine the current binary log file name and position and note them down; we will need them later.

mysql > SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 | 3303     |              |                  |
+------------------+----------+--------------+------------------+
The File column shows the name of the log file and Position shows the position within the file. In this example, the binary log file is mysql-bin.000003   and the position is 3303. Record these values. You need them later when you are setting up the slave.

4. Restoring and syncing the database at DR

1. Restore the full backup (or a particular database backup) on the DR/slave server
(here we are going to sync all databases from the master server):
$ mysql -v -u root -p  </thepathofthedumpfile.bkp
2. Stop the slave replication threads:
mysql> STOP SLAVE;

To use CHANGE MASTER TO, the slave replication threads must be stopped (use STOP SLAVE if necessary).
3. Now issue the following to mysql using the right parameters taken from SHOW MASTER STATUS above:
   CHANGE MASTER TO MASTER_HOST='10.10.40.212',
   MASTER_USER='slaveuser', MASTER_PASSWORD='slave',
   MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=3303;

4. Then start the slave threads:
mysql> START SLAVE;

5. Check the slave status:
mysql> SHOW SLAVE STATUS\G
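In the output, the key fields to check are Slave_IO_Running and Slave_SQL_Running (both should say Yes) and Seconds_Behind_Master. For example (values shown here are illustrative):
            Slave_IO_Running: Yes
           Slave_SQL_Running: Yes
       Seconds_Behind_Master: 0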

6. Finally, release the lock held on the tables on the master server. Go to the live server and execute:
mysql> UNLOCK TABLES;
UNLOCK TABLES explicitly releases any table locks held by the current session. It also releases the global read lock acquired with the FLUSH TABLES WITH READ LOCK statement.

New features in Oracle Database 11g: AMM, Data Guard, expdp, RMAN, etc.

1. New features in Automatic Memory Management (AMM)
Oracle simplified memory management over the last few versions of the database. Oracle 9i automated PGA management by introducing PGA_AGGREGATE_TARGET parameter. Oracle 10g continued this trend by automating SGA management using the SGA_TARGET parameter. Oracle 11g takes this one step further by allowing you to allocate one chunk of memory, which Oracle uses to dynamically manage both the SGA and PGA.
Automatic memory management is configured using two new initialization parameters:
•MEMORY_TARGET: The amount of shared memory available for Oracle to use when dynamically controlling the SGA and PGA. This parameter is dynamic, so the total amount of memory available to Oracle can be increased or decreased, provided it does not exceed the MEMORY_MAX_TARGET limit. The default value is "0".
•MEMORY_MAX_TARGET: This defines the maximum size the MEMORY_TARGET can be increased to without an instance restart. If the MEMORY_MAX_TARGET is not specified, it defaults to MEMORY_TARGET setting.
When using automatic memory management, the SGA_TARGET and PGA_AGGREGATE_TARGET act as minimum size settings for their respective memory areas. To allow Oracle to take full control of the memory management, these parameters should be set to zero.
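As a minimal sketch (the sizes here are illustrative, adjust them to your server), AMM can be enabled like this:
SQL> ALTER SYSTEM SET memory_max_target=2G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET memory_target=1500M SCOPE=SPFILE;
SQL> ALTER SYSTEM SET sga_target=0 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET pga_aggregate_target=0 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP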

2. New features in Data Guard
2.a Active Data Guard
In Oracle Database 10g and below you could open the physical standby database for read-only activities, but only after stopping the recovery process. 
In Oracle 11g, you can query the physical standby database in real time while the archived logs are being applied. This means the standby continues to stay in sync with the primary, while you can still use it for reporting.
Let us see the steps now..
First, cancel the managed standby recovery:
SQL> alter database recover managed standby database cancel;
Database altered.
Then, open the database as read only: 
SQL> alter database open read only;
Database altered.
While the standby database is open in read-only mode, you can resume the managed recovery process. 
SQL> alter database recover managed standby database disconnect;
Database altered.

2.b Snapshot Standby database 
In Oracle Database 11g, a physical standby database can be temporarily converted into an updateable one, called a snapshot standby database.
In that mode, you can make changes to the database. Once testing is complete, you can roll back the changes and convert the database back into a standby undergoing normal recovery. This is accomplished by creating a restore point in the database and using the Flashback Database feature to flash back to that point and undo all the changes.
Steps:
Configure the flash recovery area, if it is not already done.
SQL> alter system set db_recovery_file_dest_size = 2G;
System altered.
SQL> alter system set db_recovery_file_dest= '+FRADG';
System altered.
Stop the recovery. 
SQL> alter database recover managed standby database cancel;
Database altered.
Convert this standby database to snapshot standby using command
SQL> alter database convert to snapshot standby;
Database altered.
Now recycle the database
SQL> shutdown immediate
...
SQL> startup
ORACLE instance started.
Database is now open for read/write operations
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
---------- ----------------
READ WRITE SNAPSHOT STANDBY
After your testing is completed, convert the snapshot standby database back to a regular physical standby database by following the steps below:
SQL> connect / as sysdba
Connected. 
SQL> shutdown immediate
SQL> startup mount
...
Database mounted.
SQL> alter database convert to physical standby; 
Database altered.
Now shutdown, mount the database and start managed recovery. 
SQL> shutdown
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
...
Database mounted.
Start the managed recovery process
SQL> alter database recover managed standby database disconnect;
Now the standby database is back in managed recovery mode. When the database was in snapshot standby mode, the archived logs from primary were not applied to it. They will be applied now.

2.c Redo Compression
In Oracle Database 11g you can compress the redo that goes across to the standby server via SQL*Net by setting the COMPRESSION attribute of the archive destination to ENABLE. This works only for the logs shipped during gap resolution. Here is the command you can use to enable compression:
alter system set log_archive_dest_2 = 'service=STDBYDB LGWR ASYNC
valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=STDBYDB compression=enable';

3.New features in expdp
COMPRESSION parameter in expdp
One of the big issues with Data Pump was that the dumpfile couldn't be compressed while getting created. In Oracle Database 11g, Data Pump can compress the dumpfiles while creating them by using the COMPRESSION parameter on the expdp command line. The parameter has four options:
METADATA_ONLY - only the metadata is compressed
DATA_ONLY - only the data is compressed; the metadata is left alone. 
ALL - both the metadata and data are compressed. 
NONE - this is the default; no compression is performed. 
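A minimal example, assuming a directory object named dpump_dir1 already exists (the same one used in the REMAP_TABLE example below); note that compressing table data (ALL or DATA_ONLY) requires the Advanced Compression option:
$ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=ALL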
Encryption

The dumpfile can be encrypted while getting created. The encryption uses the same technology as TDE (Transparent Data Encryption) and uses the wallet to store the master key. This encryption occurs on the entire dumpfile, not just on the encrypted columns as it was in the case of Oracle Database 10g. 
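A hedged example, assuming a wallet for TDE is already configured and open (the file name is illustrative):
$ expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp ENCRYPTION=ALL ENCRYPTION_MODE=TRANSPARENT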
Data Masking 

When you import data from production to a test system, you may want to make sure sensitive data is altered in such a way that it is not identifiable. Data Pump in Oracle Database 11g enables you to do that by creating a masking function and then using it during import.
The sensitive data is masked using the new REMAP_DATA parameter available in the Oracle 11g Data Pump utility.
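A hedged sketch, assuming you have already created a hypothetical packaged function hr.mask_pkg.mask_salary that takes a NUMBER and returns a masked NUMBER; the salary column is then scrambled on the fly during import:
$ impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=hr.employees REMAP_DATA=hr.employees.salary:hr.mask_pkg.mask_salary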

REMAP_TABLE
Allows you to rename tables during an import operation. 
 Example
The following is an example of using the REMAP_TABLE parameter to rename the employees table to a new name of emps:
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=hr.employees REMAP_TABLE=hr.employees:emps

4.New Features in RMAN
Advice on recovery
To find out failure...
RMAN> list failure;
To get the advice on recovery
RMAN> advise failure;
Recovery Advisor generates a script that can be used to repair the datafile or resolve the issue. The script does all the work.
To verify what the script actually does ...
RMAN> repair failure preview;
Now execute the actual repair by issuing...
RMAN> repair failure;

Proactive Health Checks
In Oracle Database 11g, a new command in RMAN, VALIDATE DATABASE, can check database blocks for physical corruption.
RMAN> validate database;
Parallel backup of the same datafile
In 10g, each datafile is backed up by only one channel. In Oracle Database 11g, multiple RMAN channels can back up a single datafile in parallel by breaking the datafile into chunks known as "sections."
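A minimal example (the section size and datafile number are illustrative); with several channels configured, each 500M section can be backed up by a different channel:
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
RMAN> BACKUP SECTION SIZE 500M DATAFILE 4;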

Optimized backup of undo tablespace.

In 10g, when the RMAN backup runs, it backs up all the data from the undo tablespace. But during recovery, the undo data related to committed transactions are no longer needed.
In Oracle Database 11g, RMAN bypasses backing up the committed undo data that is not required in recovery. The uncommitted undo data that is important for recovery is backed up as usual. This reduces the size and time of the backup.

Improved Block Media Recovery Performance
If flashback logs are present, RMAN will use these in preference to backups during block media recovery (BMR), which can significantly improve BMR speed.

Block Change Tracking Support for Standby Databases
Block change tracking is now supported on physical standby databases, which in turn means fast incremental backups are now possible on standby databases.
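A sketch of enabling it on the physical standby (the +DATA disk group is an assumption taken from this environment):
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA';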

Faster Backup Compression
RMAN now supports the ZLIB binary compression algorithm as part of the Oracle Advanced Compression option. The ZLIB algorithm is optimized for CPU efficiency, but produces larger compressed backups than the previously available BZIP2 algorithm, which is optimized for compression ratio.
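A hedged example of switching the algorithm and taking a compressed backup:
RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;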

Archived Log Deletion Policy Enhancements
The archived log deletion policy of Oracle 11g has been extended to give greater flexibility and protection in a Data Guard environment. The Oracle 10g and Oracle 11g syntax is displayed below.

# Oracle 10g Syntax.
CONFIGURE ARCHIVELOG DELETION POLICY {CLEAR | TO {APPLIED ON STANDBY | NONE}}
# Oracle 11g Syntax.
CONFIGURE ARCHIVELOG DELETION POLICY {CLEAR | TO {APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier | NONE | SHIPPED TO [ALL] STANDBY} [ {APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier | NONE | SHIPPED TO [ALL] STANDBY} ]...}
The extended syntax allows for configurations where logs are eligible for deletion only after being applied to, or transferred to, one or more standby database destinations.
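For example, to make archived logs eligible for deletion only after they have been applied on all standby destinations:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;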

5.Read only tables
Read-Only Tables in Oracle Database 11g 
Oracle 11g allows tables to be marked as read-only using the ALTER TABLE command.
ALTER TABLE table_name READ ONLY;
ALTER TABLE table_name READ WRITE;
Any DML statements that affect the table data and SELECT ... FOR UPDATE queries result in an ORA-12081 error message.
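A small sketch, assuming a hypothetical table named emp with a sal column:
SQL> ALTER TABLE emp READ ONLY;
SQL> UPDATE emp SET sal = sal * 1.1;   -- fails with ORA-12081
SQL> ALTER TABLE emp READ WRITE;       -- DML is allowed again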

Tuesday, September 18, 2012

Transparent application failover (TAF) - A practical study

When considering the availability of the Oracle database, Oracle RAC 11g provides a superior solution with its advanced failover mechanisms. Oracle RAC 11g includes the required components, all working within a clustered configuration, that are responsible for providing continuous availability; when one of the participating systems fails within the cluster, users are automatically migrated to the other available systems.
 
A major component of Oracle RAC 11g that is responsible for failover processing is the Transparent Application Failover (TAF) option. All database connections (and processes) that lose connections are reconnected to another node within the cluster. The failover is completely transparent to the user.

One important note is that TAF happens automatically within the OCI libraries. Thus your application (client) code does not need to change in order to take advantage of TAF. Certain configuration steps, however, will need to be done on the Oracle TNS file tnsnames.ora.

I created the following entry in the client tnsnames.ora file on my laptop:

rac =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.211)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.212)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC.localdomain)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

Note: Both IPs used here are the VIPs of the cluster nodes. Also, TYPE, METHOD, RETRIES, and DELAY are important parameters when configuring TAF.
Before going ahead, check the tnsping connectivity.

C:\Users\DELL>tnsping rac

TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 18-SEP-2012 14:34:49
Copyright (c) 1997, 2010, Oracle.  All rights reserved.
Used parameter files:
e:\oracle11g\oracle\product\11.2.0\dbhome_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.211)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.212)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = RAC.localdomain) (FAILOVER_MODE = (TYPE = SELECT) (METHOD = BASIC) (RETRIES = 180) (DELAY = 5))))
OK (280 msec)

SQL Query to Check the Session's Failover Information

The following SQL query can be used to check a session's failover type, failover method, and whether a failover has occurred. We will be using this query throughout this example. Here I connect to my RAC instance as the MAHI user.


col instance_name for a15
col host_name for a17
col failover_method for a17
col failed_over for a18
set lines 300
select distinct  v.instance_name as instance_name, v.host_name as host_name,
s.failover_type as failover_type, s.failover_method as failover_method,
s.failed_over as failed_over  from v$instance v , v$session s
where s.username ='MAHI';

We can see that we are connected to instance RAC1, which is running on racha1.localdomain. Now we stop this instance without disconnecting the client. You can use either srvctl or SQL*Plus to shut down the RAC1 instance. Here I used the srvctl command. I use separate env files for the RDBMS and Grid environments, so I sourced the grid.env file in order to invoke the srvctl utility.
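As a sketch, using the database and instance names from this setup (the -o abort option is one way to simulate an instance crash):
[oracle@racha1 ~]$ . grid.env
[oracle@racha1 ~]$ srvctl stop instance -d RAC -i RAC1 -o abort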


Now let's go back to our SQL session on the client and rerun the SQL statement:


We can see that the above session has now failed over to instance RAC2 on racha2.localdomain. After completing the test, start the RAC1 instance again.


hope it will help you...:)

Monday, September 17, 2012

PGA_AGGREGATE_TARGET concept

In order to understand the PGA_AGGREGATE_TARGET parameter, let's first have a look at the *_AREA_SIZE parameters.

SQL> SHOW PARAMETER _AREA_SIZE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
bitmap_merge_area_size               integer     1048576
create_bitmap_area_size              integer     8388608
hash_area_size                       integer     131072
sort_area_size                       integer     65536
workarea_size_policy                 string      AUTO

Here we see the parameter workarea_size_policy is set to AUTO because we have set non-zero value to pga_aggregate_target.

SQL> SHOW PARAMETER PGA_AGGREGATE_TARGET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target                 big integer 525M

Now we try to set pga_aggregate_target to a zero value.

SQL> ALTER SYSTEM SET pga_aggregate_target=0;
ALTER SYSTEM SET pga_aggregate_target=0
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00093: pga_aggregate_target must be between 10M and 4096G-1

Even when we try to set it with SCOPE=SPFILE it is not accepted, because 0 is not a valid value.
So, I set it to zero in a pfile instead.

SQL> CREATE PFILE='/export/home/oracle/pfile.ora' FROM SPFILE;
File created.

SQL> !vi /export/home/oracle/pfile.ora
*.pga_aggregate_target=0

Now I have started database with this pfile.

SQL> STARTUP FORCE PFILE='/export/home/oracle/pfile.ora';
ORACLE instance started.

Total System Global Area 1660944384 bytes
Fixed Size 2021216 bytes
Variable Size 218106016 bytes
Database Buffers 1426063360 bytes
Redo Buffers 14753792 bytes
Database mounted.
Database opened.

Now have a look at the values. We will see that workarea_size_policy parameter is set to MANUAL.

SQL> SHOW PARAMETER _AREA_SIZE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
bitmap_merge_area_size               integer     1048576
create_bitmap_area_size              integer     8388608
hash_area_size                       integer     131072
sort_area_size                       integer     65536
workarea_size_policy                 string      MANUAL
SQL> SHOW PARAMETER PGA_AGGREGATE_TARGET

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target                 big integer 0

Some notes on pga_aggregate_target


• PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance.

• The default value for PGA_AGGREGATE_TARGET is non-zero. Unless you specify otherwise, Oracle sets its value to 20% of the SGA or 10 MB, whichever is greater.

• Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of automatically setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL working areas used by memory-intensive SQL operators such as sort, group-by, hash-join, bitmap merge, and bitmap create will be automatically sized. In that case we don’t have to bother about settings of sort_area_size , hash_area_size etc.

• If you set PGA_AGGREGATE_TARGET to 0 then oracle automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL. This means that SQL workareas are sized using the *_AREA_SIZE parameters.

• Oracle attempts to keep the amount of private memory below the target specified by this parameter by adapting the size of the work areas to private memory.

• The memory allocated for PGA_AGGREGATE_TARGET has no relation to SGA_TARGET; the only similarity is that both are taken from the total memory of the system.

• The minimum value of this parameter is 10 MB and maximum is 4096 GB – 1.

source:- http://samadhandba.wordpress.com

Saturday, August 18, 2012

RAC Licensing in standard and enterprise edition

Standard Edition
RAC option free
Maximum two nodes
Maximum four CPUs
Must use Oracle Clusterware
Must use Automatic Storage Management (ASM)
No extended clusters

Enterprise Edition
RAC option 50% extra (per EE license)
Maximum number of Nodes 100
No limit on number of CPUs
Can use any shared storage (ASM, CFS or NFS)
Can use Enterprise Manager Packs (Diagnostics, Tuning..)

Friday, August 17, 2012

Features in Oracle 9i/10g/11g RAC

Oracle 9i RAC
  • OPS (Oracle Parallel Server) was renamed as RAC
  • CFS (Cluster File System) was supported
  • OCFS (Oracle Cluster File System) for Linux and Windows
  • watchdog timer replaced by hangcheck timer
Oracle 10g R1 RAC

  • Cluster Manager replaced by CRS
  • ASM introduced
  • Concept of Services expanded
  • ocrcheck introduced
  • ocrdump introduced
  • AWR was instance specific
Oracle 10g R2 RAC

  • CRS was renamed as Clusterware
  • asmcmd introduced
  • CLUVFY introduced
  • OCR and Voting disks can be mirrored
  • Can use FAN/FCF with TAF for OCI and ODP.NET
Oracle 11g R1 RAC
  • Oracle 11g RAC parallel upgrades - Oracle 11g have rolling upgrade features whereby RAC database can be upgraded without any downtime.
  • Hot patching - Zero downtime patch application.
  • Oracle RAC load balancing advisor - Starting from 10g R2 we have RAC load balancing advisor utility. 11g RAC load balancing advisor is only available with clients who use .NET, ODBC, or the Oracle Call Interface (OCI).
  • ADDM for RAC - Oracle has incorporated RAC into the Automatic Database Diagnostic Monitor, for cross-node advisories. The addmrpt.sql script gives a report for a single instance and will not report on all instances in the RAC; this is known as instance ADDM. Using the new DBMS_ADDM package, we can generate a report for all instances of the RAC; this is known as database ADDM.
  • ADR command-line tool - Oracle Automatic Diagnostic repository (ADR) has a new command-line interface named ADRCI, ADR Command Interface. ADRCI can be used to access the 11g alert log:
    $adrci
    adrci> show alert
  • Optimized RAC cache fusion protocols - moves on from the general cache fusion protocols in 10g to deal with specific scenarios where the protocols could be further optimized.
  • Oracle 11g RAC Grid provisioning - The Oracle grid control provisioning pack allows us to "blow-out" a RAC node without the time-consuming install, using a pre-installed "footprint".
  • Data Guard - Standby snapshot - The new standby snapshot feature allows us to encapsulate a snapshot for regression testing. We can collect a standby snapshot and move it into our QA database, ensuring that our regression test uses real production data.
  • Quick Fault Resolution - Automatic capture of diagnostics (dumps) for a fault.
Oracle 11g R2 RAC

  • We can store everything on the ASM. We can store OCR & voting files also on the ASM.
  • ASMCA
  • Single Client Access Name (SCAN) - eliminates the need to change tns entry when nodes are added to or removed from the Cluster. RAC instances register to SCAN listeners as remote listeners. SCAN is fully qualified name. Oracle recommends assigning 3 addresses to SCAN, which create three SCAN listeners.
  • AWR is consolidated for the database.
  • 11g Release 2 Real Application Cluster (RAC) has server pooling technologies so it’s easier to provision and manage database grids. This update is geared toward dynamically adjusting servers as corporations manage the ebb and flow between data requirements for datawarehousing and applications.
  • By default, LOAD_BALANCE is ON.
  • GSD (Global Services Daemon), gsdctl introduced.
  • GPnP profile.
  • Oracle RAC OneNode is a new option that makes it easier to consolidate databases that aren’t mission critical, but need redundancy.
  • raconeinit - to convert database to RacOneNode.
  • raconefix - to fix RacOneNode database in case of failure.
  • racone2rac - to convert RacOneNode back to RAC.
  • Oracle Restart - the feature of Oracle Grid Infrastructure's High Availability Services (HAS) to manage associated listeners, ASM instances and Oracle instances.
  • Oracle Omotion - Oracle 11g release2 RAC introduces new feature called Oracle Omotion, an online migration utility. This Omotion utility will relocate the instance from one node to another, whenever instance failure happens.
  • Omotion utility uses Database Area Network (DAN) to move Oracle instances. Database Area Network (DAN) technology helps seamless database relocation without losing transactions.
  • Oracle Local Registry (OLR) - New in Oracle 11gR2 as part of Oracle Clusterware. OLR is the node's local repository, similar to the OCR (but local), and is managed by OHASD. It holds data for the local node only and is not shared among the other nodes.
  • Cluster Time Synchronization Service (CTSS) - a new feature in Oracle 11g R2 RAC, used to synchronize time across the nodes of the cluster.

    Source: http://satya-racdba.blogspot.in

11gR2 RAC Step By Step Configuration On VMWare part5

STAGE 5
Oracle rdbms instance configuration  

Go to Node1 and login as root


[root@racha1 ~]# xhost +

access control disabled, clients can connect from any host
[root@racha1 ~]# su - oracle
[oracle@racha1 ~]$xclock ---> it should return clock display
[oracle@racha1 ~]$ cd /install/disk1
[oracle@racha1 disk1]$ ls  
doc install response rpm runInstaller sshsetup stage welcome.html
[oracle@racha1 disk1]$./runInstaller
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 80 MB. Actual 9852 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3887 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-07-25_08-46-27PM. Please wait ...

Uncheck the security updates checkbox and click the "Next" button.


  



 Accept the "Create and configure a database" option by clicking the "Next" button.





Accept the "Server Class" option by clicking the "Next" button.



  

Make sure both nodes are selected, then click the "Next" button.



   

 Accept the "Typical install" option by clicking the "Next" button.




Enter "/u01/app/oracle/product/11.2.0/dbhome_1" for the software location. The storage type should be set to "Automatic Storage Manager". Enter the appropriate passwords and database name, in this case "RAC.localdomain".


  



Wait for the prerequisite check to complete. If there are any problems either fix them, or check the "Ignore All" checkbox and click the "Next" button.



  

If you are happy with the summary information, click the "Finish" button.



  







Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.



  

Once the Database Configuration Assistant (DBCA) has finished, click the "OK" button.



  

When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the "OK" button.



  


Node 1:
[root@racha1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) 
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@racha1 ~]#

Node 2: 
[root@racha2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@racha2 ~]#

Two-node RAC installation successfully completed. Click the "Close" button to exit the installer.


Check the Status of the RAC
There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database. For that you have to set the environment for the Grid Infrastructure. I made a grid.env file in my oracle user's home directory, and it contains:

[oracle@racha1 ~]$ cat grid.env
export GRID_HOME=/u01/app/11.2.0/grid
export PATH=$GRID_HOME/bin:$PATH
export ORACLE_SID=+ASM1
[oracle@racha1 ~]$ . grid.env
$ srvctl config database -d RAC
Database unique name: RAC
Database name: RAC
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/RAC/spfileRAC.ora
Domain: localdomain
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RAC
Database instances: RAC1,RAC2
Disk Groups: DATA
Services: 
Database is administrator managed
$

$ srvctl status database -d RAC
Instance RAC1 is running on node racha1
Instance RAC2 is running on node racha2
$
The V$ACTIVE_INSTANCES view can also display the current status of the instances. 

Just Check
Node 1:
[oracle@racha1 ~]$ . .bash_profile (sourcing the RDBMS home environment)
[oracle@racha1 ~]$ export ORACLE_SID=RAC1
[oracle@racha1 ~]$ sqlplus
SQL*Plus: Release 11.2.0.1.0 Production on Tue Jul 26 13:00:31 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Enter user-name: / as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select name from v$database;
NAME
---------
RAC

SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
RAC1

SQL> SELECT inst_name FROM v$active_instances;

INST_NAME
-------------------------------------------------------------------
racha1.localdomain:RAC1
racha2.localdomain:RAC2


    
                                         I hope this may help you , enjoy ..!!!