Sunday, May 17, 2015

Convert a single instance database to a two-node RAC cluster database.

Here I am going to explain how to convert a single instance database to a two-node RAC database. Keep in mind that even if your source database is a RAC database, there is no direct method to clone it into another RAC database: first we duplicate it as a single instance database, and then we add the newly configured instance to the cluster.

1. First, take a full database backup of the single instance database:

BACKUP AS COMPRESSED BACKUPSET FULL TAG 'DFWEBFULL' DATABASE FORMAT '/nfs_1/DB_ADHOC_BACKUPS/DFWEBFULL/Backup_%d_%T_%s'
INCLUDE CURRENT CONTROLFILE PLUS ARCHIVELOG format '/nfs_1/DB_ADHOC_BACKUPS/DFWEBFULL/ARCHIVE_%d_%T_%s';

Note:- Change the permissions on the backup directory so that the Oracle software owner can read those files; otherwise you will receive an error.
$cd /nfs_1/DB_ADHOC_BACKUPS/DFWEBFULL
$chmod 777 *
Here I took my RMAN backups to an NFS mount point that is also accessible from the auxiliary host, so there is no need to copy the entire backup set.
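Before starting the duplication, it is worth confirming the backup pieces are visible from an RMAN session on the source; a quick sanity check:
$ rman target /
RMAN> LIST BACKUP OF DATABASE SUMMARY;
RMAN> LIST BACKUP OF ARCHIVELOG ALL;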

2. Take an init parameter file from the cluster (if your cluster already has some instances configured) where you want to configure your RAC instances; otherwise create a new one. Keep local_listener and remote_listener as they are and change the other parameters accordingly. Here I am going to configure the RAC database TTWEB with two instances, TTWEB1 and TTWEB2. Below is my init parameter file, named initTTWEB1.ora.
Note:- Comment out *.cluster_database=TRUE
$vi initTTWEB1.ora
TTWEB2.__db_cache_size=956301312
TTWEB1.__db_cache_size=1610612736
TTWEB2.__java_pool_size=67108864
TTWEB1.__java_pool_size=67108864
TTWEB2.__large_pool_size=83886080
TTWEB1.__large_pool_size=83886080
TTWEB2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
TTWEB1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
TTWEB2.__pga_aggregate_target=1493172224
TTWEB1.__pga_aggregate_target=1493172224
TTWEB2.__sga_target=2801795072
TTWEB1.__sga_target=2801795072
TTWEB2.__shared_io_pool_size=0
TTWEB1.__shared_io_pool_size=0
TTWEB2.__shared_pool_size=1660944384
TTWEB1.__shared_pool_size=1006632960
TTWEB2.__streams_pool_size=0
TTWEB1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/TTWEB/adump'
*.audit_trail='db'
#*.cluster_database=TRUE
*.compatible='11.2.0.4'
*.control_files='+ORADATA/TTWEB/control1.ctl','+ORAFRA/TTWEB/control2.ctl'
*.db_block_size=8192
*.db_create_file_dest='+ORADATA'
*.db_create_online_log_dest_1='+ORADATA'
*.db_create_online_log_dest_2='+ORAFRA'
*.db_domain=''
*.db_name='TTWEB'
*.db_recovery_file_dest='+ORAFRA'
*.db_recovery_file_dest_size=107374182400
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TTWEBXDB)'
TTWEB1.instance_number=1
TTWEB2.instance_number=2
TTWEB1.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=10.81.155.146)(PORT=1521))'
TTWEB2.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=10.81.155.157)(PORT=1521))'
*.memory_target=4G
*.nls_language='AMERICAN'
*.open_cursors=300
*.pga_aggregate_target=536870912
*.processes=500
*.remote_listener='ptgracscan:1521'
*.remote_login_passwordfile='exclusive'
*.sga_target=1610612736
TTWEB1.thread=1
TTWEB2.thread=2
TTWEB1.undo_tablespace='UNDOTBS1'
TTWEB2.undo_tablespace='UNDOTBS2'
Copy initTTWEB1.ora into $ORACLE_HOME/dbs/ (on node1).
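If the cluster already hosts a running database, an easy way to get a starting template is to dump its spfile to a pfile and edit that; a minimal sketch (the path is just an example):
SQL> create pfile='/tmp/initTEMPLATE.ora' from spfile;
Then adjust the parameters (db_name, instance_number, thread, undo_tablespace, listeners) for TTWEB as shown above.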

3. Make adump directories on both node,
mkdir -p /u01/app/oracle/admin/TTWEB/adump (in both the nodes)

4. Edit /etc/oratab (on both nodes):
in node1
TTWEB1:/u01/app/oracle/product/11.2.0.4/db_1:N
TTWEB:/u01/app/oracle/product/11.2.0.4/db_1:N
in node2
TTWEB2:/u01/app/oracle/product/11.2.0.4/db_1:N
TTWEB:/u01/app/oracle/product/11.2.0.4/db_1:N
If you don't add 'TTWEB' to the /etc/oratab file, it will be added automatically by oraagent once your instances are configured through the srvctl command.

5. Set the environment with oraenv on node1 and start the RMAN duplication:
[oracle@dbnode1]:[] $ . oraenv
ORACLE_SID = [oracle] ? TTWEB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@dbnode1]:[TTWEB1] $sqlplus / as sysdba
SQL>startup nomount;
[oracle@dbnode1]:[TTWEB1] $rman auxiliary /
DUPLICATE DATABASE TO TTWEB BACKUP LOCATION '/nfs_1/DB_ADHOC_BACKUPS/DFWEBFULL';
Once the duplication finishes, carry out the rest of the work as given below.
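If you prefer an unattended run, the same duplication can be scripted with a here-document and a log file; a minimal sketch (the log path is illustrative):
$ rman auxiliary / log=/tmp/duplicate_TTWEB.log <<EOF
DUPLICATE DATABASE TO TTWEB BACKUP LOCATION '/nfs_1/DB_ADHOC_BACKUPS/DFWEBFULL';
EXIT;
EOF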

6. Create an spfile in one of the ASM disk groups:
SQL> create spfile='+ORADATA' from pfile;
File created.
SQL>
Check the location of the created spfile via asmcmd:
[oracle@dbnode1]:[TTWEB1] $ su - grid
Password:
[grid@dbnode1 ~]$. oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[grid@dbnode1 ~]$ asmcmd
ASMCMD> cd ORADATA
ASMCMD> ls
ASMCMD> cd TTWEB/
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
control1.ctl
spfileTTWEB.ora
ASMCMD> cd PARAMETERFILE/
ASMCMD> ls
spfile.1465.871230913
ASMCMD> ls -lt
Type Redund Striped Time Sys Name
PARAMETERFILE UNPROT COARSE FEB 09 16:00:00 Y spfile.1465.871230913
ASMCMD> pwd

7. Make an entry in initTTWEB1.ora pointing to the spfile:
spfile='+ORADATA/TTWEB/PARAMETERFILE/spfile.1465.871230913'
--- note down the spfile name from the asmcmd output above
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2260088 bytes
Variable Size 2315256712 bytes
Database Buffers 1946157056 bytes
Redo Buffers 12107776 bytes
Database mounted.
Database opened.
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +ORADATA/ttweb/parameterfile/s
pfile.1465.871230913
SQL>

8. Add an undo tablespace for the second instance:
SQL> create undo tablespace UNDOTBS2 datafile '+ORADATA' size 100M;
Tablespace created.
SQL>
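To confirm that each instance now has its own undo tablespace, a quick check:
SQL> SELECT tablespace_name FROM dba_tablespaces WHERE contents = 'UNDO';
It should list both UNDOTBS1 and UNDOTBS2.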

9. Before adding redo log groups for the second instance, we have to verify how the redo logs are configured for the first instance:
SQL> set lines 300
col member for a80
SELECT a.group#,b.thread#, a.member, b.bytes/1024/1024 FROM v$logfile a, v$log b WHERE a.group# = b.group#;
GROUP# THREAD# MEMBER B.BYTES/1024/1024
---------- ---------- -------------------------------------------------------------------------------- -----------------
3 1 +ORADATA/ttweb/onlinelog/group_3.1460.871230663 50
3 1 +ORAFRA/ttweb/onlinelog/group_3.1775.871230663 50
2 1 +ORADATA/ttweb/onlinelog/group_2.1459.871230661 50
2 1 +ORAFRA/ttweb/onlinelog/group_2.9378.871230661 50
1 1 +ORADATA/ttweb/onlinelog/group_1.1458.871230661 50
1 1 +ORAFRA/ttweb/onlinelog/group_1.464.871230661 50
6 rows selected.
SQL>
Here we have 3 redo log groups with two members each, so we will configure redo log groups 4, 5 and 6 for the second instance.

10. Add the redo log groups for the second instance:
alter database add logfile thread 2 group 4 ('+ORADATA','+ORAFRA') size 50m reuse;
alter database add logfile thread 2 group 5 ('+ORADATA','+ORAFRA') size 50m reuse;
alter database add logfile thread 2 group 6 ('+ORADATA','+ORAFRA') size 50m reuse;
SQL> SELECT a.group#,b.thread#, a.member, b.bytes/1024/1024 FROM v$logfile a, v$log b WHERE a.group# = b.group#;
GROUP# THREAD# MEMBER B.BYTES/1024/1024
---------- ---------- -------------------------------------------------------------------------------- -----------------
3 1 +ORADATA/ttweb/onlinelog/group_3.1460.871230663 50
3 1 +ORAFRA/ttweb/onlinelog/group_3.1775.871230663 50
2 1 +ORADATA/ttweb/onlinelog/group_2.1459.871230661 50
2 1 +ORAFRA/ttweb/onlinelog/group_2.9378.871230661 50
1 1 +ORADATA/ttweb/onlinelog/group_1.1458.871230661 50
1 1 +ORAFRA/ttweb/onlinelog/group_1.464.871230661 50
4 2 +ORADATA/ttweb/onlinelog/group_4.1468.871234269 50
4 2 +ORAFRA/ttweb/onlinelog/group_4.1825.871234269 50
5 2 +ORADATA/ttweb/onlinelog/group_5.1469.871234297 50
5 2 +ORAFRA/ttweb/onlinelog/group_5.7489.871234297 50
6 2 +ORADATA/ttweb/onlinelog/group_6.1470.871234303 50
GROUP# THREAD# MEMBER B.BYTES/1024/1024
---------- ---------- -------------------------------------------------------------------------------- -----------------
6 2 +ORAFRA/ttweb/onlinelog/group_6.8822.871234303 50
12 rows selected.
Now we have a second thread of online redo logs, which is required to start instance 2, so we can enable thread 2.
SQL> alter database enable public thread 2;
Database altered.
SQL>
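You can verify the result in v$thread; thread 2 should now show ENABLED = PUBLIC:
SQL> SELECT thread#, status, enabled FROM v$thread;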

11. Set cluster_database to true before registering the database with the srvctl utility.
cluster_database is a static parameter, so the database has to be bounced for the change to take effect.
SQL> show parameter cluster
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cluster_database boolean FALSE
cluster_database_instances integer 1
cluster_interconnects string
SQL>
SQL> alter system set cluster_database=true scope=spfile sid='*';
System altered.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2260088 bytes
Variable Size 2399142792 bytes
Database Buffers 1862270976 bytes
Redo Buffers 12107776 bytes
Database mounted.
Database opened.
SQL> show parameter cluster
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cluster_database boolean TRUE
cluster_database_instances integer 2
cluster_interconnects string
SQL>

12. Copy initTTWEB1.ora into the second node's $ORACLE_HOME/dbs directory:
[oracle@dbnode1]:[TTWEB1] $ scp initTTWEB1.ora oracle@dbnode2:/u01/app/oracle/product/11.2.0.4/db_1/dbs
On node2, go to /u01/app/oracle/product/11.2.0.4/db_1/dbs and rename the init file:
$mv initTTWEB1.ora initTTWEB2.ora

13. Register the database instances with the CRS framework using the srvctl command:
[oracle@dbnode1]:[TTWEB1] $ cat /etc/oratab | grep TTWEB
TTWEB:/u01/app/oracle/product/11.2.0.4/db_1:N
TTWEB1:/u01/app/oracle/product/11.2.0.4/db_1:N
[oracle@dbnode1]:[TTWEB1] $ srvctl add database -d TTWEB -o /u01/app/oracle/product/11.2.0.4/db_1
[oracle@dbnode1]:[TTWEB1] $ srvctl add instance -d TTWEB -i TTWEB1 -n dbnode1
[oracle@dbnode1]:[TTWEB1] $ srvctl add instance -d TTWEB -i TTWEB2 -n dbnode2
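Optionally, you can also record the spfile location in the database resource with the -p option (using the spfile name noted in step 7) and then verify the configuration:
[oracle@dbnode1]:[TTWEB1] $ srvctl modify database -d TTWEB -p '+ORADATA/TTWEB/PARAMETERFILE/spfile.1465.871230913'
[oracle@dbnode1]:[TTWEB1] $ srvctl config database -d TTWEB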
Now that the instances are registered with CRS, use srvctl to stop and start the database.
[oracle@dbnode1]:[TTWEB1] $ srvctl status database -d TTWEB
Instance TTWEB1 is not running on node dbnode1
Instance TTWEB2 is not running on node dbnode2
The first time, srvctl won't detect the status of the instances even though they are up and running, so stop and start the database again through the srvctl command:
[oracle@dbnode1]:[TTWEB1] $ srvctl stop database -d TTWEB
[oracle@dbnode1]:[TTWEB1] $ srvctl start database -d TTWEB
[oracle@dbnode1]:[TTWEB1] $ srvctl status database -d TTWEB
Instance TTWEB1 is running on node dbnode1
Instance TTWEB2 is running on node dbnode2
[oracle@dbnode1]:[TTWEB1] $
Now check the pmon processes on node2; you can see the new instance TTWEB2 running there:
[oracle@dbnode2]:[] $ ps -ef | grep pmon | grep TT
oracle 15312 1 0 17:55 ? 00:00:00 ora_pmon_TTWEB2
[oracle@dbnode2]:[] $
You can also see that TTWEB was automatically added to /etc/oratab on node2:
[oracle@dbnode2]:[] $ cat /etc/oratab | grep TT
TTWEB:/u01/app/oracle/product/11.2.0.4/db_1:N # line added by Agent
[oracle@dbnode2]:[] $
You have to add an entry for TTWEB2 if it is not already there. Once the environment is set, check the status of the instances from node2. Note that srvctl expects the database name, not the instance name:
[oracle@dbnode2]:[TTWEB2] $ srvctl status database -d TTWEB2
PRCD-1120 : The resource for database TTWEB2 could not be found.
PRCR-1001 : Resource ora.ttweb2.db does not exist
[oracle@dbnode2]:[TTWEB2] $
[oracle@dbnode2]:[TTWEB2] $ srvctl status database -d TTWEB
Instance TTWEB1 is running on node dbnode1
Instance TTWEB2 is running on node dbnode2
[oracle@dbnode2]:[TTWEB2] $

14. Finally, create a password file for each instance:
[oracle@dbnode1]:[TTWEB1] $ orapwd file=$ORACLE_HOME/dbs/orapwTTWEB1 password=xxxxxxx
[oracle@dbnode2]:[TTWEB2] $ orapwd file=$ORACLE_HOME/dbs/orapwTTWEB2 password=xxxxxxx
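To verify each password file is usable, query v$pwfile_users from the corresponding instance; SYS should be listed with the SYSDBA privilege:
SQL> SELECT username, sysdba FROM v$pwfile_users;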

Check the logs:-
It is strongly advised to check all the logs related to the cluster, the database and the instances whenever you perform installations, system changes, patch updates, etc. Review the log files for any unexpected issues.
CRS_HOME/log/hostname/crsd/ – The log files for the CRS daemon
CRS_HOME/log/hostname/cssd/ – The log files for the CSS daemon
CRS_HOME/log/hostname/evmd/ – The log files for the EVM daemon
CRS_HOME/log/hostname/client/ – The log files for the Oracle Cluster Registry (OCR)
CRS_HOME/log/hostname/racg/ – The log files for the Oracle RAC high availability component
CRS_HOME/log/hostname/alert.log – The alert.log for Clusterware issues.
Please note that CRS_HOME is the directory in which the Oracle Clusterware software was installed and hostname is the name of the node.
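Alongside the log files, the overall clusterware and resource state can be checked with the standard commands (run from the Grid Infrastructure environment):
$ crsctl check crs
$ crsctl stat res -t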
