At present:
Two-node RAC with nodes sa-alpha and sa-beta, hosting a database MYDB with instances MYDB1 and MYDB2.
Aim:
Add a new node, sa-gamma, to this two-node cluster and create a third instance, MYDB3, on the new node sa-gamma.
Prerequisites:-
1. OS certification, compatibility, and patchset levels should match those of the other nodes in the cluster.
2. All the packages required by Oracle should be installed on the new node.
3. If you are using ASMLIB, the appropriate binaries should be installed on the new node.
4. OS limits should match the other nodes (/etc/security/limits.conf).
5. OS kernel parameters should be in place to match the other nodes (/etc/sysctl.conf).
6. DNS configuration should match the other nodes: check the contents of /etc/resolv.conf and compare them with the other nodes. If you don't have DNS for name resolution, make sure you have an identical /etc/hosts file across all nodes.
7. Configure passwordless (auto) SSH connectivity for the grid user among all three nodes (see the SSH sketch after this list).
8. Time should be synchronized across all nodes in the cluster; if there is a problem, try restarting ntpd on each node. From Oracle 11g onwards we can use CTSS instead of relying on operating-system NTP, so if your cluster is configured with CTSS, you need to de-configure ntpd on the new node:
#service ntpd stop
#chkconfig ntpd off
#mv /etc/ntp.conf /etc/ntp.conf.original
#rm /var/run/ntpd.pid
9. Create job role separation operating system privilege groups, users, and directories. While creating the new users, make sure the UID and GID of oracle/grid are identical to those on the other RAC nodes (see the user-creation sketch after this list),
where oracle --> RDBMS software owner
and grid --> Grid Infrastructure owner.
10. Configure network components: while configuring the network components, make sure your public/private NICs have the same names as on the other nodes in the cluster. For example, if eth0 is configured for the public network on the first and second nodes, you have to choose the same NIC for the public network on the new node.
Note:- the public network is a routable network, so you give a default gateway address while configuring the public NIC; the private network should not be routable, so we do not configure a default gateway for it.
11. Configure access to the shared storage: either ask your system administrator to get this done, or accomplish it yourself. If you are using iSCSI (NAS), after configuring the storage do the following from the new node.
As root, execute the following commands:
service iscsi restart
lsscsi - should list all the available LUNs
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks - should list all ASM-labelled disks
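For prerequisite 7, passwordless SSH can be set up with the standard OpenSSH tools; this is only a minimal sketch (shown for the grid user on the new node, node names as used in this walkthrough), and the installer's own SSH-equivalence setup is an alternative:
$ssh-keygen -t rsa (accept the defaults)
$ssh-copy-id grid@sa-alpha
$ssh-copy-id grid@sa-beta
$ssh-copy-id grid@sa-gamma
$ssh sa-alpha date (repeat for every node; no password prompt should appear)
Repeat the key distribution from sa-alpha and sa-beta as well, so that every node can reach every other node (and itself) without a password.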
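For prerequisite 9, the groups and users can be created on the new node with groupadd/useradd. The numeric IDs below are placeholders only; look up the real values on an existing node with id oracle and id grid, and reuse exactly those UIDs/GIDs:
#groupadd -g 1000 oinstall
#groupadd -g 1020 asmadmin
#groupadd -g 1021 asmdba
#groupadd -g 1031 dba
#useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
#useradd -u 1101 -g oinstall -G dba,asmdba oracle
#id grid; id oracle (compare the output with the existing nodes before going further)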
Verify New Node using cluvfy
Use CVU as the Oracle Grid Infrastructure owner to determine the integrity of the cluster and whether it is ready for the new Oracle RAC node to be added.
From the GRID_HOME, as the grid owner, execute the following:
$cluvfy stage -pre nodeadd -n sa-gamma -verbose
You can ignore the PRVF-5449 message if you have already configured and verified that the ASM shared disks are visible on the third node. The error was a result of having the voting disks stored in Oracle ASM storage, a new feature of Oracle 11g Release 2.
If you had any ignorable errors in the previous step, do the following before actually invoking the addNode.sh script:
export IGNORE_PREADDNODE_CHECKS=Y
Extend the GRID_HOME to the new node
$export IGNORE_PREADDNODE_CHECKS=Y
$cd $GRID_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={sa-gamma}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={sa-gamma-vip}"
Basically it will copy and configure the GRID software on the remote node and finish by asking you to execute the following scripts as root: orainstRoot.sh and root.sh. root.sh is the main script; it will create the OLR, update the OCR, and start all the clusterware daemons on the third node, along with ASM.
#$GRID_HOME/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful. OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node ayu2, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Here you can see that the new node is joining the cluster.
Post verification:-
$cluvfy stage -post nodeadd -n sa-gamma
------------------------------------------------------------------------------------------
Extend the RDBMS ORACLE_HOME to include the new node
From an existing node, as the database software owner, run the following command to extend the Oracle database software to the new node "sa-gamma".
As the oracle user:
$cd $ORACLE_HOME/oui/bin
$./addNode.sh -silent "CLUSTER_NEW_NODES={sa-gamma}"
Note: remember that this time addNode.sh is invoked from ORACLE_HOME/oui/bin. It will copy and configure the RDBMS software on the remote node and finish by asking you to execute the following script as root:
#$ORACLE_HOME/root.sh
Add New Instance to Clustered Database--
Invoke DBCA from any running instance, select Instance Management, and then choose the following.
Welcome: Select Oracle Real Application Clusters (RAC) database as the database type that you would like to create or administer.
Operations: Select Instance Management.
Instance Management: Select Add an instance.
List of cluster databases: Enter the sys username and password.
List of cluster database instances: Verify the existing database instances.
Instance naming and node selection: Verify the instance name (MYDB3) to be added and the node (sa-gamma) on which to add the instance.
Instance Storage: Verify the storage configuration.
Summary: Verify the setup and click OK.
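If you prefer the command line to the GUI, DBCA can also add the instance in silent mode. This is only a minimal sketch using the names from this walkthrough; the sys password is a placeholder:
$dbca -silent -addInstance -nodeList sa-gamma -gdbName MYDB -instanceName MYDB3 -sysDBAUserName sys -sysDBAPassword <sys_password>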
Basically it will do the following tasks for you:
-> add instance-specific parameters to the spfile,
-> add an undo tablespace for the third instance,
-> add redo log groups for thread 3,
-> update the metadata of the new instance in the OCR.
If you want to do it manually, execute the following commands from any of the running nodes:
SQL> alter database add logfile thread 3 group 5 ('+DATADG','+FRADG') size 50M, group 6 ('+DATADG','+FRADG') size 50M;
Database altered.
SQL> alter system set thread=3 scope=spfile sid='MYDB3';
System altered.
SQL> alter database enable public thread 3;
Database altered.
SQL> create undo tablespace undotbs3 datafile '+DATADG' size 200M autoextend on;
Tablespace created.
SQL> alter system set undo_tablespace='undotbs3' scope=spfile sid='MYDB3';
System altered.
SQL> alter system set instance_number=3 scope=spfile sid='MYDB3';
System altered.
SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';
System altered.
Update Oracle Cluster Registry (OCR)
The OCR will be updated to account for the new instance, "MYDB3", being added to the "MYDB" cluster database. Add the "MYDB3" instance to the "MYDB" database and verify:
#srvctl config database -d MYDB
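If you went the manual (SQL) route, the new instance also has to be registered in the OCR; that registration command is not shown above. A sketch using the names from this walkthrough, run as the database software owner:
$srvctl add instance -d MYDB -i MYDB3 -n sa-gamma
$srvctl config database -d MYDB (MYDB3 should now be listed)
$srvctl start instance -d MYDB -i MYDB3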
Reference:- https://www.youtube.com/watch?v=ZV_P5-qpLhs