Sunday, May 24, 2015

Data Guard Configuration for RAC through RMAN active database duplication.
In this topic I am going to cover how we can configure Data Guard in a RAC environment.
The main advantage of setting up Data Guard in 11g/12c is that the standby database can be opened in read-only mode, allowing users to run reports against the physical standby database while it is still in recovery mode. In other words, the physical standby database keeps applying redo and, at the same time, can be used for reporting purposes.
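Once the standby is built and redo apply is running (covered later in this post), this read-only-with-apply mode can be enabled on the standby roughly as follows, a minimal sketch assuming you are licensed for Active Data Guard:

SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect from session;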

Assumptions
I have two two-node clusters (physical or VMs) with the operating system and Oracle software installed on all nodes. In this case I've used Oracle Linux 5.6 and Oracle Database 12.1.0.2.
The primary cluster has a running RAC database with instances TDDWH1 & TDDWH2.
The standby cluster has a software-only installation.

Here is my environment setup. I am going to create a standby database for my cluster database TDDWH.
pdnode1 - node 1 primary 
pdnode2 - node 2 primary 
stnode1 - node 1 standby -10.81.155.146
stnode2 - node 2 standby -10.81.155.147

Primary Server Setup

Enable Archivelog Mode
Check that the primary database is in archivelog mode.

SQL>SELECT log_mode FROM v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL>

If it is in noarchivelog mode, switch it to archivelog mode.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1241513984 bytes
Fixed Size                  1273420 bytes
Variable Size             318767540 bytes
Database Buffers          905969664 bytes
Redo Buffers               15503360 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE;

Here %t is the thread number, %s is the log sequence number and %r is the resetlogs ID, which ensures unique names are constructed for the archived log files across multiple incarnations of the database.
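For example, with this format an archived log from thread 1 with sequence 36 would get a name like 1_36_<resetlogsID>.arc, where the last part is the resetlogs ID of the current database incarnation.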

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     34
Next log sequence to archive   36
Current log sequence           36

Note:- 

Enable force logging by issuing the command shown below.
Any nologging operations performed on the primary database are not fully logged in the redo stream. Since Oracle Data Guard relies on the redo stream to maintain the standby database, this can result in data inconsistencies between the primary and standby, along with a massive headache for the DBA to resolve. To prevent this from occurring, one solution is to place the primary database into force logging mode. In this mode, all nologging operations are permitted to run without error; however, the changes are placed in the redo stream anyway.

SQL> ALTER DATABASE FORCE LOGGING;

To verify force logging is enabled for the database:


SQL> select force_logging from v$database;

FORCE_LOGGING
--------------
YES
Configure a Standby Redo Log
A standby redo log is added to enable Data Guard Maximum Availability and Maximum Protection modes. It is important to configure the standby redo logs (SRLs) to be the same size as, or larger than, the online redo logs. Oracle recommends creating one more standby redo log file group per thread than the number of online redo log file groups on the primary database.


A best practice generally followed is to create the standby redo logs on both the primary and the standby database so as to make role transitions smoother. By creating the standby redo logs at this stage, it is assured that they will exist on both the primary and the newly created standby database. From the primary database, connect as SYS and run the following to create three standby redo log file groups per thread:

SQL>select thread#, group# ,bytes/1024/1024 from v$log order by 1,2;
  THREAD#     GROUP# BYTES/1024/1024
---------- ---------- ---------------
         1          1              50
         1          2              50
         2          3             100
         2          4             100

Here each thread has two online redo log groups, so we need to create three standby redo log groups for each of thread 1 and thread 2.

Thread 1 uses 50MB online redo logs and thread 2 uses 100MB, so the standby redo logs must be 50MB for thread 1 and 100MB for thread 2.

SQL> alter database add standby logfile thread 1 group 5 size 50M,group 6 size 50M,group 7  size 50M;

SQL> alter database add standby logfile thread 2 group 8 size 100M,group 9 size 100M,group 10 size 100M;

Check the standby redo logs with:

SQL> select GROUP#,TYPE from v$logfile ;
SQL> SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM V$STANDBY_LOG;

Create a Password File
As part of the new redo transport security and authentication features, it is now mandatory that each database in an Oracle Data Guard configuration use a password file. In addition, the SYS password must be identical on every database in order for redo transport to function. If a password file does not exist for the primary database, create a new one; otherwise, copy the existing password file from node 1 of the primary to the remaining boxes.
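If the password file needs to be created from scratch on node 1 of the primary, a minimal sketch (the password is just the SYS password used later for the RMAN connections; entries=10 is an arbitrary value):

[oracle@pdnode1]:[TDDWH1] $ orapwd file=$ORACLE_HOME/dbs/orapwTDDWH1 password=Orcl1234 entries=10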

[oracle@pdnode1]:[TDDWH1] $ scp orapwTDDWH1 pdnode2:/u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwTDDWH2  -- node 2 of primary
[oracle@pdnode1]:[TDDWH1] $ scp orapwTDDWH1 10.81.155.146:/u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwTGDWH1 -- node 1 of standby 
[oracle@pdnode1]:[TDDWH1] $ scp orapwTDDWH1 10.81.155.147:/u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwTGDWH2 -- node 2 of standby 

After creating the password file, set the remote_login_passwordfile initialization parameter to EXCLUSIVE in the spfile on the primary database. Since this parameter cannot be dynamically modified for a running instance, the change has to be made in the spfile and the database bounced for it to take effect:

SQL> alter system set remote_login_passwordfile=exclusive scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.
Total System Global Area 1241513984 bytes
Fixed Size                  1273420 bytes
Variable Size             318767540 bytes
Database Buffers          905969664 bytes
Redo Buffers               15503360 bytes
Database mounted.

Database opened.

Note:- 
If you have implemented any kind of encryption in your primary database, you need to copy the wallet directory to the standby boxes. If you forget to copy the wallet directory, the MRP process won't come up on the standby side.

[oracle@pdnode1]:[TDDWH1] $ scp -r TDDWH1  10.81.155.146:/etc/ORACLE/WALLETS/
Go to 10.81.155.146 and rename /etc/ORACLE/WALLETS/TDDWH1 to TGDWH1
[oracle@pdnode1]:[TDDWH1] $ scp -r TDDWH1  10.81.155.147:/etc/ORACLE/WALLETS/
Go to 10.81.155.147 and rename /etc/ORACLE/WALLETS/TDDWH1 to TGDWH2

Configure the Primary Database Initialization Parameters
The configuration examples use the names shown in the following table:

Database            DB_UNIQUE_NAME    Oracle Net Service Name
Primary             TDDWH             TDDWH
Physical standby    TGDWH             TGDWH
Check the setting for the DB_NAME and DB_UNIQUE_NAME parameters. In this case they are both set to "TDDWH" on the primary database.

SQL> show parameter db_name

NAME      TYPE  VALUE
------------------------------------ ----------- ------------------------------
db_name       string  TDDWH

SQL> show parameter db_unique_name

NAME      TYPE  VALUE
------------------------------------ ----------- ------------------------------
db_unique_name      string  TDDWH

The DB_NAME of the standby database will be the same as that of the primary, but it must have a different DB_UNIQUE_NAME value. The DB_UNIQUE_NAME values of the primary and standby database should be used in the DG_CONFIG setting of the LOG_ARCHIVE_CONFIG parameter. For this example, the standby database will have the value "TGDWH".

Set the standby-related parameters as follows:
SQL> alter system set log_archive_dest_1='location=+ORADATA/TDDWH/ARCHIVELOG valid_for=(all_logfiles,all_roles) db_unique_name=TDDWH' sid='*' scope=both;

SQL> alter system set log_archive_dest_2='service=TGDWH NOAFFIRM LGWR ASYNC COMPRESSION=ENABLE VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TGDWH' sid='*' scope=both;

SQL> alter system set LOG_ARCHIVE_DEST_STATE_1=ENABLE sid='*' scope=both;
SQL> alter system set LOG_ARCHIVE_DEST_STATE_2=ENABLE sid='*' scope=both;

SQL>alter system set log_archive_config='DG_CONFIG=(TDDWH,TGDWH)' sid='*' scope=both;

Here TDDWH & TGDWH are the db_unique_name values for the primary and standby respectively (not the Oracle Net entries).

Set FAL_CLIENT and FAL_SERVER Parameters
Under certain circumstances, either because of network failures or a busy source environment, a gap builds up between the redo the primary has sent and the changes the standby has received and applied. Since the MRP/LSP process has no direct communication link with the primary database to obtain a copy of the missing archive files, such archive gaps are resolved by the fetch archive log (FAL) client and server, identified by the initialization parameters FAL_CLIENT and FAL_SERVER.

FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name, which is assumed to be configured properly on the standby database system to point to the desired FAL server.

FAL_CLIENT specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the FAL_SERVER parameter, to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database). Given the dependency of FAL_CLIENT on FAL_SERVER, the two parameters should be configured or changed at the same time.

FAL_CLIENT and FAL_SERVER are initialization parameters used to configure log gap detection and resolution at the standby database side of a physical database configuration. This functionality is provided by log apply services and is used by the physical standby database to manage the detection and resolution of archived redo logs.


FAL_CLIENT and FAL_SERVER only need to be defined in the initialization parameter file for the standby database(s). It is possible; however, to define these two parameters in the initialization parameter for the primary database server to ease the amount of work that would need to be performed if the primary database were required to transition its role.
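To check whether the standby currently has an unresolved archive gap, you can query v$archive_gap on the standby side (no rows means no gap):

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;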

Create Oracle Net (TNS) entries for TGDWH and also for TDDWH on both the primary and standby nodes, for example as shown below.
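A minimal tnsnames.ora sketch for these two entries; the standby SCAN name ptgracscan is taken from the tnsping output further below, while pddwhscan is just an assumed placeholder for the primary SCAN:

TDDWH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = pddwhscan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = TDDWH))
  )

TGDWH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ptgracscan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = TGDWH)(UR=A))
  )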

SQL> alter system set fal_client='TDDWH' sid='*' scope=both;

SQL> alter system set fal_server='TGDWH' sid='*' scope=both;

Make sure the TNS entries for both TDDWH and TGDWH are present on all 4 boxes, i.e. you should be able to tnsping TDDWH and TGDWH from all 4 boxes.

The initialization parameter, STANDBY_FILE_MANAGEMENT, enables you to control whether or not adding a datafile to the primary database is automatically propagated to the standby database, as follows:

SQL>ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

Duplicating the standby database from the active database

Add (UR=A) to the TNS entry for the auxiliary instance on both the standby and primary sides; otherwise, create a static listener entry for the standby database (see the sketch below).
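If you go the static registration route instead, a minimal listener.ora sketch on the standby node (the listener name LISTENER is an assumption; in a Grid Infrastructure setup the entry goes into the listener configuration used by the Grid home):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = TGDWH)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/db_1)
      (SID_NAME = TGDWH1)
    )
  )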

From node 1 of standby

[oracle@stnode1]:[TGDWH1] $ tnsping TGDWH

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 28-APR-2015 17:40:53

Copyright (c) 1997, 2014, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ptgracscan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = TGDWH)(UR=A)))
OK (0 msec)
[oracle@stnode1]:[TGDWH1] $

From node 1 of primary

[oracle@pdnode1]:[TDDWH1] $ tnsping TGDWH

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 28-APR-2015 17:41:40

Copyright (c) 1997, 2014, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.81.155.146)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = TGDWH)(UR=A)))
OK (0 msec)
[oracle@pdnode1]:[TDDWH1] $

Make a proper initTGDWH1.ora and put it in the dbs directory on 10.81.155.146.
The important parameters to consider:
*.db_unique_name='TGDWH'
*.db_name='TDDWH'
*.compatible='same as that of primary'
*.cluster_database=FALSE
*.db_create_file_dest='+ORADATA'
*.db_recovery_file_dest='+ORAFRA'
*.db_recovery_file_dest_size=4000M
*.control_files='+ORADATA/tgdwh/controlfile/standby_control1.ctl','+ORAFRA/tgdwh/controlfile/standby_control2.ctl'
*.fal_server='TDDWH'
*.fal_client='TGDWH'
*.log_archive_config='DG_CONFIG=(TDDWH, TGDWH)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=TGDWH'
*.log_archive_dest_2='service=TDDWH NOAFFIRM LGWR ASYNC COMPRESSION=ENABLE valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=TDDWH'
*.log_archive_dest_state_2='defer'
*.STANDBY_FILE_MANAGEMENT=AUTO

Note:- If you are using use_db_recovery_file_dest, you have to define db_recovery_file_dest. You don't have to worry about other parameters like adump, as Oracle will create them automatically while doing the duplicate.

SQL> startup nomount pfile='initTGDWH1.ora';
 



====> Duplicate the standby database:
[oracle@stnode1]:[TGDWH1] $rman TARGET sys/Orcl1234@TDDWH AUXILIARY sys/Orcl1234@TGDWH
run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database dorecover;
}

Once the duplicate database command finishes its work, start redo apply:

SQL> alter database recover managed standby database using current logfile disconnect from session;
SQL> select thread#, sequence#, name, applied from v$archived_log order by sequence#;

SQL> select process,status,sequence#,block#,blocks from v$managed_standby;

---- If the RFS process does not appear, check whether LOG_ARCHIVE_DEST_STATE_2 is set to ENABLE on the primary, as shown below.
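On the primary, a quick way to check and fix this (the enable command is the same one used earlier):

SQL> show parameter log_archive_dest_state_2
SQL> alter system set log_archive_dest_state_2=enable sid='*' scope=both;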

Once the MRP process starts applying the redo records, check for errors from the primary:

set linesize 500
col DESTINATION for a30
SELECT inst_id,DEST_ID , STATUS ,DESTINATION , ERROR FROM gV$ARCHIVE_DEST WHERE DEST_ID <4 order by 1,2;

Adding instances to the cluster
Once MRP starts applying archives, we can register the new standby database with the cluster (the output below is from an environment where the standby db_unique_name is TGHTB; substitute your own standby name):

[oracle@stnode1]:[TGHTB1] $ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Apr 29 18:22:36 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Advanced Analytics, Oracle Database Vault and Real Application Testing options

SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> 
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Advanced Analytics, Oracle Database Vault and Real Application Testing options
[oracle@stnode1]:[TGHTB1] $
[oracle@stnode1]:[TGHTB1] $ vi initTGHTB1.ora      <-- here I changed cluster_database='TRUE'
[oracle@stnode1]:[TGHTB1] $ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Apr 29 18:26:33 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup  nomount;
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             989859456 bytes
Database Buffers          603979776 bytes
Redo Buffers               13848576 bytes
SQL>
SQL> create spfile='+ORAFRA' from pfile;

File created.

------------------------> note down the location of the spfile that was created.
SQL> shut immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@stnode1]:[TGHTB1] $ cp initTGHTB1.ora initTGHTB1.ora_old
[oracle@stnode1]:[TGHTB1] $ vi initTGHTB1.ora      <-- here I added spfile='+ORAFRA/TGHTB/PARAMETERFILE/spfile.11344.878322445'
[oracle@stnode1]:[TGHTB1] $ pwd
/u01/app/oracle/product/12.1.0.2/db_1/dbs
[oracle@stnode1]:[TGHTB1] $ scp initTGHTB1.ora stnode2:/u01/app/oracle/product/12.1.0.2/db_1/dbs/initTGHTB2.ora
initTGHTB1.ora                                                                                                             100%   60     0.1KB/s   00:00
[oracle@stnode1]:[TGHTB1] $
[oracle@stnode1]:[TGHTB1] $ cat /etc/oratab | grep TGHTB1
TGHTB1:/u01/app/oracle/product/12.1.0.2/db_1:N          # line added by Agent
[oracle@stnode1]:[TGHTB1] $ srvctl add database -d TGHTB -o /u01/app/oracle/product/12.1.0.2/db_1
[oracle@stnode1]:[TGHTB1] $ srvctl add instance -d TGHTB -i TGHTB1 -n stnode1
[oracle@stnode1]:[TGHTB1] $ srvctl add instance -d TGHTB -i TGHTB2 -n stnode2
[oracle@stnode1]:[TGHTB1] $ srvctl status database -d TGHTB
Instance TGHTB1 is not running on node stnode1
Instance TGHTB2 is not running on node stnode2
[oracle@stnode1]:[TGHTB1] $ srvctl start database -d TGHTB
[oracle@stnode1]:[TGHTB1] $ srvctl status database -d TGHTB
Instance TGHTB1 is running on node stnode1
Instance TGHTB2 is running on node stnode2
[oracle@stnode1]:[TGHTB1] $
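Optionally, you can also tell clusterware that this database is a physical standby that should start in mount mode, so srvctl brings it up in the correct state; a hedged sketch, verify the exact srvctl options for your version:

[oracle@stnode1]:[TGHTB1] $ srvctl modify database -d TGHTB -r physical_standby -s mount
[oracle@stnode1]:[TGHTB1] $ srvctl config database -d TGHTB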
