Tuesday, April 12, 2016

Oracle 12c OCP sample questions and answers - part 1


I am going through the Oracle 12c OCP sample questions and would like to share the answers that came to my mind. I have also tried to gather some useful notes for each question that I went through. I am not sure all the answers given here are correct, so I would like to hear from you if any of my answers are wrong.


1)Your multitenant container (CDB) contains two pluggable databases (PDB), HR_PDB and
ACCOUNTS_PDB, both of which use the CDB tablespace. The temp file is called temp01.tmp.
A user issues a query on a table on one of the PDBs and receives the following error:
ERROR at line 1:
ORA-01565: error in identifying file ‘/u01/app/oracle/oradata/CDB1/temp01.tmp’
ORA-27037: unable to obtain file status
Identify two ways to rectify the error.

A.
Add a new temp file to the temporary tablespace and drop the temp file that produced the
error.

B.
Shut down the database instance, restore the temp01.tmp file from the backup, and then restart
the database.

C.
Take the temporary tablespace offline, recover the missing temp file by applying redo logs, and
then bring the temporary tablespace online.

D.
Shutdown the database instance, restore and recover the temp file from the backup, and then
open the database with RESETLOGS.

E.
Shut down the database instance and then restart the CDB and PDBs.

Correct answer is A and E.
Notes:-
The temporary tablespace is never backed up by RMAN, so there is nothing to restore or recover. You can either create a new temp file and drop the old one, or restart the CDB, in which case the temp file is re-created automatically.
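A minimal sketch of option A (the tablespace name and new file name/size are assumptions; only the failing path comes from the question):

```sql
-- Add a fresh temp file to the temporary tablespace
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/app/oracle/oradata/CDB1/temp02.tmp' SIZE 100M;

-- Drop the temp file that raised ORA-01565
ALTER TABLESPACE temp
  DROP TEMPFILE '/u01/app/oracle/oradata/CDB1/temp01.tmp';
```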

----------------------------------------------------------------------------------------------------------------------------------------------
2)You are administering a database stored in Automatic Storage Management (ASM). You use RMAN to back up the database and the MD_BACKUP command to back up the ASM metadata regularly. You lost an ASM disk group DG1 due to hardware failure. In which three ways can you re-create the lost disk group and restore the data?

A.
Use the MD_RESTORE command to restore metadata for an existing disk group by passing
the existing disk group name as an input parameter and use RMAN to restore the data.

B.
Use the MKDG command to restore the disk group with the same configuration as the backed-up disk group and data on the disk group.

C.
Use the MD_RESTORE command to restore the disk group with the changed disk group
specification, failure group specification, name, and other attributes and use RMAN to restore the
data.

D.
Use the MKDG command to restore the disk group with the same configuration as the backed-up disk group name and same set of disks and failure group configuration, and use RMAN to
restore the data.

E.
Use the MD_RESTORE command to restore both the metadata and data for the failed disk
group.

F.
Use the MKDG command to add a new disk group DG1 with the same or different specifications
for failure group and other attributes and use RMAN to restore the data.

Correct answer is C,D,F

Read more about Disk group restore  :- http://www.oracle-class.com/?p=1930
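A rough sketch of the metadata backup/restore flow with asmcmd (the backup file path is illustrative; only DG1 comes from the question):

```shell
# Back up the DG1 metadata regularly
ASMCMD> md_backup /backup/dg1_md.bkp -G DG1

# After losing the disk group, re-create it from the metadata backup
ASMCMD> md_restore /backup/dg1_md.bkp --full -G DG1

# Then restore and recover the data with RMAN
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
```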
----------------------------------------------------------------------------------------------------------------------------------------------
3)Your multitenant container database, CDB1, is running in ARCHIVELOG mode and has two pluggable databases, HR_PDB and ACCOUNTS_PDB. An RMAN backup exists for the database. You issue the command to open ACCOUNTS_PDB and find that the USERDATA.DBF data file for the default permanent tablespace USERDATA belonging to ACCOUNTS_PDB is corrupted. What should you do before executing the commands to restore and recover the data file in ACCOUNTS_PDB?

A.
Place CDB1 in the mount stage and then take the USERDATA tablespace offline in
ACCOUNTS_PDB.

B.
Place CDB1 in the mount stage and issue the ALTER PLUGGABLE DATABASE accounts_pdb
CLOSE IMMEDIATE command.

C.
Issue the ALTER PLUGGABLE DATABASE accounts_pdb RESTRICTED command.

D.
Take the USERDATA tablespace offline in ACCOUNTS_PDB.

Correct answer:- D
Read more about this:- http://blog.itpub.net/28552278/viewspace-1248526/ 
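A sketch of answer D followed by the restore/recover (tablespace and PDB names are from the question; the RMAN session is assumed to be connected to the root):

```sql
-- In the PDB: take the tablespace offline
ALTER SESSION SET CONTAINER = accounts_pdb;
ALTER TABLESPACE userdata OFFLINE IMMEDIATE;

-- In RMAN, connected to the root:
-- RESTORE TABLESPACE accounts_pdb:userdata;
-- RECOVER TABLESPACE accounts_pdb:userdata;

-- Back in the PDB: bring it online again
ALTER TABLESPACE userdata ONLINE;
```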

Want to know how point-in-time recovery works with a PDB? Here is a brief note.

Performing Point-In-Time Recovery for a Pluggable Database
SQL>alter pluggable database pdb2 close;

Then connect RMAN to the root container and run:

run {
set until SCN = 2263440 ;
restore pluggable database pdb2;
recover pluggable database pdb2 auxiliary destination='/stage/db_backup_files/cdb1';
alter pluggable database pdb2 open resetlogs;
}

Read more about this:- http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/pdb/pdb_pitr/pdb_pitr.html

----------------------------------------------------------------------------------------------------------------------------------------------

4)Your multitenant container (CDB) containing three pluggable databases (PDBs) is running in ARCHIVELOG mode. You find that the SYSAUX tablespace is corrupted in the root container.
The steps to recover the tablespace are as follows:
1. Mount the CDB.
2. Close all the PDBs.
3. Open the database.
4. Apply the archive redo logs.
5. Restore the data file.
6. Take the SYSAUX tablespace offline.
7. Place the SYSAUX tablespace online.
8. Open all the PDBs with RESETLOGS.
9. Open the database with RESETLOGS.
10. Execute the command SHUTDOWN ABORT.
Which option identifies the correct sequence to recover the SYSAUX tablespace?

A.
6, 5, 4, 7

B.
10, 1, 2, 5, 8

C.
10, 1, 2, 5, 4, 9, 8

D.
10, 1, 5, 8, 10

Correct answer:- A

Notes:-
RMAN> ALTER TABLESPACE sysaux OFFLINE IMMEDIATE;
RMAN> RESTORE TABLESPACE sysaux;
RMAN> RECOVER TABLESPACE sysaux;
RMAN> ALTER TABLESPACE sysaux ONLINE;
----------------------------------------------------------------------------------------------------------------------------------------------
5)Which three are direct benefits of the multiprocess, multithreaded architecture of Oracle Database 12c when it is enabled?

A.
Reduced logical I/O

B.
Reduced virtual memory utilization

C.
Improved parallel Execution performance

D.
Improved Serial Execution performance

E.
Reduced physical I/O

F.
Reduced CPU utilization

Correct answer:- B,C,F
Notes:-
Oracle Database on Unix machines (Linux, Solaris, AIX, etc.) has always been a multiprocess database, with each new version or release adding a new set of processes to the database architecture. On Windows, by contrast, Oracle Database has always run as a single multi-threaded process.

Multi-threading has a wide range of advantages over the multiprocess architecture. A process, in the general sense, is an address space, and creating a new process requires creating a new address space in memory. Creating a thread, on the other hand, is less expensive because it does not require a new address space; the thread runs within the current process's address space. Further, switching between threads takes much less time than switching between processes, mainly because switching between threads does not require switching between address spaces.
What is new?

The wait is finally over: Oracle has introduced the multi-threaded architecture for Unix systems with Oracle Database 12c, where an Oracle Database on a Unix machine can utilize a multi-threaded process model.

How to configure Multithreading for Oracle Database

Configuring Multi-threaded process model is very simple. In Oracle Database 12c, Multi-threading is facilitated by means of the database initialization parameter THREADED_EXECUTION
THREADED_EXECUTION=FALSE or TRUE

FALSE: This is the default value and causes Oracle to run with Multiprocess architecture.
TRUE: If the value is set to TRUE, it implements Multi-threaded architecture for the database.
However, this setting alone does not give a completely multi-threaded process model. When we set THREADED_EXECUTION=TRUE, by default it converts almost all of the background processes (such as CKPT, LGWR, DBWn, SMON, etc.) into threads, while a few critical ones still run as operating system processes.
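A quick sketch of enabling it (the parameter is not dynamic, so a restart is needed):

```sql
ALTER SYSTEM SET threaded_execution = TRUE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP   -- OS authentication no longer works; connect with a password

-- Check what now runs as a thread versus a process
SELECT pid, pname, execution_type FROM v$process;
```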

When this initialization parameter is set to TRUE, which enables the multithreaded Oracle model,
operating system authentication is not supported. Attempts to connect to the database using
operating system authentication (for example, CONNECT / AS SYSDBA or CONNECT /) when
this initialization parameter is set to TRUE receive an ORA-01031 "insufficient privileges" error.

Read more about this :- http://www.oraclebuffer.com/oracle-internals/oracle-database-12c-the-multi-threaded-process-model/
----------------------------------------------------------------------------------------------------------------------------------------------
6)In order to exploit some new storage tiers that have been provisioned by a storage administrator, the partitions of a large heap table must be moved to other tablespaces in your Oracle 12c database.
Both local and global partitioned B-tree Indexes are defined on the table.
A high volume of transactions access the table during the day and a medium volume of
transactions access it at night and during weekends.
Minimal disruption to availability is required.
Which three statements are true about this requirement?

A.
The partitions can be moved online to new tablespaces.

B.
Global indexes must be rebuilt manually after moving the partitions.

C.
The partitions can be compressed in the same tablespaces.

D.
The partitions can be compressed in the new tablespaces.

E.
Local indexes must be rebuilt manually after moving the partitions.

Correct answer:- A,C,D

Read more on:- http://orabase.org/index.php/exam-prepare/1z0-060-prepare/1z0-060-q10/
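A sketch of an online partition move with compression (table, partition, and tablespace names are made up for illustration):

```sql
-- Move a partition online to a new tier; UPDATE INDEXES keeps the
-- global and local indexes usable, so no manual rebuild is needed
ALTER TABLE sales MOVE PARTITION p_2015
  TABLESPACE new_tier_ts
  ROW STORE COMPRESS ADVANCED
  UPDATE INDEXES ONLINE;
```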
----------------------------------------------------------------------------------------------------------------------------------------------
7)You notice that the performance of your production 24/7 Oracle database has significantly degraded. Sometimes you are not able to connect to the instance because it hangs. You do not want to restart the database instance. How can you detect the cause of the degraded performance?

A.
Enable Memory Access Mode, which reads performance data from SGA.

B.
Use emergency monitoring to fetch data directly from SGA analysis.

C.
Run Automatic Database Diagnostic Monitor (ADDM) to fetch information from the latest
Automatic Workload Repository (AWR) snapshots.

D.
Use Active Session History (ASH) data and hang analysis in regular performance monitoring.

E.
Run ADDM in diagnostic mode.

Correct answer:- E??

Emergency Monitoring

The Emergency Monitoring feature enables the database administrator to connect to an unresponsive database through a special proprietary mechanism and diagnose performance issues when normal mode connection is not possible. It has an in-built hang analysis capability that enables you to identify blocking sessions and kill these blockers with a click of a button.
Real-time Automatic Database Diagnostic Monitor

Real-Time Automatic Database Diagnostic Monitor (ADDM) is an innovative way to analyze problems in extremely slow or hung databases, which would have traditionally required a database restart. Real-Time ADDM can help resolve issues such as deadlocks, hangs, shared pool contentions, and many other exception situations without resorting to a restart of the database.

Learn more here :- https://www.youtube.com/watch?v=FkMOzsIecQU
----------------------------------------------------------------------------------------------------------------------------------------------
8)In your multitenant container database (CDB) containing pluggable databases (PDB), users complain about performance degradation.
How does real-time Automatic database Diagnostic Monitor (ADDM) check performance
degradation and provide solutions?

A.
It collects data from SGA and compares it with a preserved snapshot.

B.
It collects data from SGA, analyzes it, and provides a report.

C.
It collects data from SGA and compares it with the latest snapshot.

D.
It collects data from both SGA and PGA, analyzes it, and provides a report.
Correct answer:- B
Notes:-
Real-Time ADDM accesses just the recent ASH activity in the SGA; it does not compare snapshots.
----------------------------------------------------------------------------------------------------------------------------------------------
9) Your multitenant container database (CDB) is running in ARCHIVELOG mode. You connect to the CDB using RMAN.
Examine the following command and its output: 


You execute the following command:
RMAN > BACKUP DATABASE PLUS ARCHIVELOG;
Which data files will be backed up?

A.
Data files that belong to only the root container

B.
Data files that belong to the root container and all the pluggable databases (PDBs)

C.
Data files that belong to only the root container and PDB$SEED

D.
Data files that belong to the root container and all the PDBs excluding PDB$SEED

Correct answer:B
----------------------------------------------------------------------------------------------------------------------------------------------
10)You are administering a database stored in Automatic Storage management (ASM). The files are stored in the DATA disk group. You execute the following command:
SQL > ALTER DISKGROUP data ADD ALIAS ‘+data/prod/myfile.dbf’ FOR ‘+data.231.45678’;
What is the result?

A.
The file ‘+data.231.54769’ is physically relocated to ‘+data/prod’ and renamed as ‘myfile.dbf’.

B.
The file ‘+data.231.54769’ is renamed as ‘myfile.dbf’, and copied to ‘+data/prod’.

C.
The file ‘+data.231.54769’ remains in the same location and a synonym ‘myfile.dbf’ is created.

D.
The file ‘myfile.dbf’ is created in ‘+data/prod’ and the reference to ‘+data.231.54769’ in the data dictionary removed.

Correct answer:C
----------------------------------------------------------------------------------------------------------------------------------------------
11)Which three functions are performed by the SQL Tuning Advisor?

A.
Building and implementing SQL profiles

B.
Recommending the optimization of materialized views

C.
Checking query objects for missing and stale statistics

D.
Recommending bitmap, function-based, and B-tree indexes

E.
Recommending the restructuring of SQL queries that are using bad plans

Correct answer: ?? is it A,C,D, E ?

----------------------------------------------------------------------------------------------------------------------------------------------
12) Examine the following command:
ALTER SYSTEM SET enable_ddl_logging=FALSE;
Which statement is true?

A.
None of the data definition language (DDL) statements are logged in the trace file.

B.
Only DDL commands that resulted in errors are logged in the alert log file.

C.
A new log.xml file that contains the DDL statements is created, and the DDL command details
are removed from the alert log file.

D.
Only DDL commands that resulted in the creation of new database files are logged.

Correct answer: A

Note:- if you set enable_ddl_logging=TRUE, Oracle creates a file named ddl_INSTNAME.log inside $ORACLE_BASE/diag/rdbms/DBNAME/INSTNAME/log
and a log.xml inside $ORACLE_BASE/diag/rdbms/DBNAME/INSTNAME/log/ddl.
Both of these files contain information about the DDL statements executed.
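A small sketch of the behavior (the table name is invented):

```sql
-- The parameter is dynamic: enable, run some DDL, then disable
ALTER SYSTEM SET enable_ddl_logging = TRUE;
CREATE TABLE ddl_demo (id NUMBER);   -- recorded in the DDL log
ALTER SYSTEM SET enable_ddl_logging = FALSE;
DROP TABLE ddl_demo;                 -- no longer recorded
```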

----------------------------------------------------------------------------------------------------------------------------------------------
13)Your multitenant container database (CDB) contains three pluggable database (PDBs). You find that the control file is damaged. You plan to use RMAN to recover the control file. There are no startup triggers associated with the PDBs.
Which three steps should you perform to recover the control file and make the database fully operational?

A.
Mount the container database (CDB) and restore the control file from the control file auto
backup.

B.
Recover and open the CDB in NORMAL mode.

C.
Mount the CDB and then recover and open the database, with the RESETLOGS option.

D.
Open all the pluggable databases.

E.
Recover each pluggable database.

F.
Start the database instance in the nomount stage and restore the control file from control file auto backup.

Correct answer: F,C and D
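The answer sequence (F, C, D) sketched as RMAN commands:

```sql
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
RMAN> ALTER PLUGGABLE DATABASE ALL OPEN;
```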
----------------------------------------------------------------------------------------------------------------------------------------------
14)Identify three valid options for adding a pluggable database (PDB) to an existing multitenant container database (CDB).

A.
Use the CREATE PLUGGABLE DATABASE statement to create a PDB using the files from the
SEED.

B.
Use the CREATE DATABASE . . . ENABLE PLUGGABLE DATABASE statement to provision a
PDB by copying file from the SEED.

C.
Use the DBMS_PDB package to clone an existing PDB.

D.
Use the DBMS_PDB package to plug an Oracle 12c non-CDB database into an existing CDB.

E.
Use the DBMS_PDB package to plug an Oracle 11 g Release 2 (11.2.0.3.0) non-CDB database into an existing CDB.
Correct answer: A and D
To clone an existing PDB you don't need the DBMS_PDB package; you simply use ALTER PLUGGABLE DATABASE ... UNPLUG INTO 'file.xml' and then CREATE PLUGGABLE DATABASE ... USING 'file.xml'.
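A sketch of option A and the unplug/plug alternative (PDB names, the admin password, and the file paths are invented for illustration):

```sql
-- A: create a PDB from the seed
CREATE PLUGGABLE DATABASE hr_pdb
  ADMIN USER pdbadmin IDENTIFIED BY secret
  FILE_NAME_CONVERT = ('/pdbseed/', '/hr_pdb/');

-- Clone via unplug/plug, without DBMS_PDB
ALTER PLUGGABLE DATABASE pdb1 CLOSE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
CREATE PLUGGABLE DATABASE pdb1_copy USING '/tmp/pdb1.xml' COPY
  FILE_NAME_CONVERT = ('/pdb1/', '/pdb1_copy/');
```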
----------------------------------------------------------------------------------------------------------------------------------------------
15)SQL > exec DBMS_STATS.SET_TABLE_PREFS (‘SH’, ‘CUSTOMERS’, ‘PUBLISH’, ‘false’);
Which three statements are true about the effect of this command?

A.
Statistics collection is not done for the CUSTOMERS table when schema stats are gathered.

B.
Statistics collection is not done for the CUSTOMERS table when database stats are gathered.

C.
Any existing statistics for the CUSTOMERS table are still available to the optimizer at parse
time.

D.
Statistics gathered on the CUSTOMERS table when schema stats are gathered are stored as
pending statistics.

E.
Statistics gathered on the CUSTOMERS table when database stats are gathered are stored as
pending statistics.
Correct answer: ??? is it C,D,E
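How pending statistics then behave, as a sketch:

```sql
-- With PUBLISH=FALSE, newly gathered stats go to the pending area
EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS');

-- Inspect them
SELECT num_rows FROM dba_tab_pending_stats
 WHERE owner = 'SH' AND table_name = 'CUSTOMERS';

-- Try them in this session only, then publish if the plans look good
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SH', 'CUSTOMERS');
```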
----------------------------------------------------------------------------------------------------------------------------------------------
16)Examine the following impdp command to import a database over the network from a pre-12c Oracle database (source):

Which three are prerequisites for successful execution of the command?

A.
The import operation must be performed by a user on the target database with the
DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the source database with the DATAPUMP_EXP_FULL_DATABASE role.

B.
All the user-defined tablespaces must be in read-only mode on the source database.

C.
The export dump file must be created before starting the import on the target database.

D.
The source and target database must be running on the same platform with the same
endianness.

E.
The path of data files on the target database must be the same as that on the source database.

F.
The impdp operation must be performed by the same user that performed the expdp operation.
Correct answer: A,B ,D
----------------------------------------------------------------------------------------------------------------------------------------------
17)Which three statements are true concerning unplugging a pluggable database (PDB)?

A.
The PDB must be open in read only mode.

B.
The PDB must be closed.

C.
The unplugged PDB becomes a non-CDB.

D.
The unplugged PDB can be plugged into the same multitenant container database (CDB)

E.
The unplugged PDB can be plugged into another CDB.

F.
The PDB data files are automatically removed from disk.
Correct answer:BDE
Option A applies to PDB cloning, where the source PDB is opened read-only and copied, not unplugged.
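A sketch of the unplug flow (the PDB name and manifest path are made up):

```sql
-- B: the PDB must be closed first
ALTER PLUGGABLE DATABASE hr_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE hr_pdb UNPLUG INTO '/tmp/hr_pdb.xml';

-- D/E: the manifest can be plugged into the same or another CDB;
-- to re-plug into the same CDB, first drop it keeping the files
DROP PLUGGABLE DATABASE hr_pdb KEEP DATAFILES;
CREATE PLUGGABLE DATABASE hr_pdb USING '/tmp/hr_pdb.xml' NOCOPY;
```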
----------------------------------------------------------------------------------------------------------------------------------------------
18)Examine the following command:
CREATE TABLE PRODUCTS (prod_id number(4),
Prod_name varchar2 (20),
Category_id number(30),
Quantity_on_hand number (3) INVISIBLE);
Which three statements are true about using an invisible column in the PRODUCTS table?

A.
The %ROWTYPE attribute declarations in PL/SQL to access a row will not display the invisible
column in the output.

B.
The DESCRIBE commands in SQL *Plus will not display the invisible column in the output.

C.
Referential integrity constraint cannot be set on the invisible column.

D.
The invisible column cannot be made visible and can only be marked as unused.

E.
A primary key constraint can be added on the invisible column.
Correct answer: A,B,E
Notes:- the following will not display or work with invisible table columns:

    -SELECT * FROM in SQL statements
    -The DESCRIBE command in SQL*Plus
    -The %ROWTYPE attribute in PL/SQL variable declarations

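The behavior can be tried out like this (the inserted values are made up):

```sql
INSERT INTO products (prod_id, prod_name, category_id, quantity_on_hand)
VALUES (1, 'Widget', 10, 25);

-- Works when the invisible column is named explicitly
SELECT prod_id, quantity_on_hand FROM products;

-- And it can be made visible again, so option D is false
ALTER TABLE products MODIFY (quantity_on_hand VISIBLE);
```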
----------------------------------------------------------------------------------------------------------------------------------------------
19)Which statement is true concerning dropping a pluggable database (PDB)?

A.
The PDB must be open in read-only mode.

B.
The PDB must be in mount state.

C.
The PDB must be unplugged.

D.
The PDB data files are always removed from disk.

E.
A dropped PDB can never be plugged back into a multitenant container database (CDB).
Correct answer:B & C

Notes:-
From the Oracle documentation, the following prerequisite must be met for dropping a PDB:
The PDB must be in mounted mode, or it must be unplugged.
D --> is wrong. When dropping a PDB, the default is KEEP DATAFILES; you have the option to use the INCLUDING DATAFILES clause as well.
E --> you certainly can, provided the data files are still there and you have the XML manifest file.
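A sketch of the drop (the PDB name is invented):

```sql
ALTER PLUGGABLE DATABASE hr_pdb CLOSE IMMEDIATE;   -- leaves it mounted

DROP PLUGGABLE DATABASE hr_pdb KEEP DATAFILES;     -- the default
-- or: DROP PLUGGABLE DATABASE hr_pdb INCLUDING DATAFILES;
```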
----------------------------------------------------------------------------------------------------------------------------------------------
20)You notice a high number of waits for the db file scattered read and db file sequential read events in the recent Automatic Database Diagnostic Monitor (ADDM) report. After further investigation, you find that queries are performing too many full table scans and indexes are not being used even though the filter columns are indexed. Identify three possible reasons for this.

A.
Missing or stale histogram statistics

B.
Undersized shared pool

C.
High clustering factor for the indexes

D.
High value for the DB_FILE_MULTIBLOCK_READ_COUNT parameter

E.
Oversized buffer cache
Correct answer:A,C,D
Notes:-
A, C, D: indexes can be defeated by missing statistics/histograms, a high clustering factor, or a high value of DB_FILE_MULTIBLOCK_READ_COUNT, which can make a full table scan cheaper than index access for small tables.
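One way to spot a poor clustering factor, as a sketch (the schema and table are just examples):

```sql
-- A clustering factor near BLOCKS is good; near NUM_ROWS is poor
SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
  FROM dba_indexes i
  JOIN dba_tables  t
    ON t.owner = i.table_owner AND t.table_name = i.table_name
 WHERE i.table_owner = 'SH' AND i.table_name = 'CUSTOMERS';
```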

----------------------------------------------------------------------------------------------------------------------------------------------
21)You notice a performance change in your production Oracle 12c database. You want to know which change caused this performance difference.
Which method or feature should you use?

A.
Compare Period ADDM report

B.
AWR Compare Period report

C.
Active Session History (ASH) report

D.
Taking a new snapshot and comparing it with a preserved snapshot
Correct answer: A
----------------------------------------------------------------------------------------------------------------------------------------------
22)Which three statements are true about Automatic Workload Repository (AWR)?

A.
All AWR tables belong to the SYSTEM schema.

B.
The AWR data is stored in memory and in the database.

C.
The snapshots collected by AWR are used by the self-tuning components in the database

D.
AWR computes time model statistics based on time usage for activities, which are displayed in
the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views.

E.
AWR contains system wide tracing and logging information.
Correct answer:B,C,D
Notes:-
The statistics collected and processed by AWR include: Time model statistics based on time usage for activities, displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views
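For example, option D's views can be queried directly:

```sql
SELECT stat_name, value
  FROM v$sys_time_model
 WHERE stat_name IN ('DB time', 'DB CPU');
```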
----------------------------------------------------------------------------------------------------------------------------------------------
23)
You upgraded your database from pre-12c to a multitenant container database (CDB) containing pluggable databases (PDBs).

Examine the query and its output: 


Which two tasks must you perform to add users with SYSBACKUP, SYSDG, and SYSKM privilege
to the password file?

A.
Assign the appropriate operating system groups to SYSBACKUP, SYSDG, SYSKM.

B.
Grant SYSBACKUP, SYSDG, and SYSKM privileges to the intended users.

C.
Re-create the password file with SYSBACKUP, SYSDG, and SYSKM privilege and the FORCE
argument set to No.

D.
Re-create the password file with SYSBACKUP, SYSDG, and SYSKM privilege, and FORCE
arguments set to Yes.

E.
Re-create the password file in the Oracle Database 12c format.

Correct answer: B & D

Notes:-
B and D are correct because in 12c you can create a password file carrying these administrative privileges.

Check – http://docs.oracle.com/database/121/ADMIN/dba.htm#ADMIN11059

Creating a Database Password File with ORAPWD
The syntax of the ORAPWD command is as follows, where FORCE=y permits overwriting an existing password file.

orapwd FILE=filename [ENTRIES=numusers] [FORCE={y|n}] [ASM={y|n}]
[DBUNIQUENAME=dbname] [FORMAT={12|legacy}] [SYSBACKUP={y|n}] [SYSDG={y|n}]
[SYSKM={y|n}] [DELETE={y|n}] [INPUT_FILE=input-fname]
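A concrete sketch (the password file name and grantee names are illustrative):

```shell
# Re-create the password file in 12c format with the new privileges
orapwd FILE=$ORACLE_HOME/dbs/orapwCDB1 FORMAT=12 FORCE=y \
       SYSBACKUP=y SYSDG=y SYSKM=y

# Then grant the privileges to the intended (common) users:
# SQL> GRANT SYSBACKUP TO c##bkpadmin;
# SQL> GRANT SYSDG     TO c##dgadmin;
# SQL> GRANT SYSKM     TO c##kmadmin;
```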
----------------------------------------------------------------------------------------------------------------------------------------------
24)You are about to plug a multi-terabyte non-CDB into an existing multitenant container database (CDB).
The characteristics of the non-CDB are as follows:
Version: Oracle Database 11g Release 2 (11.2.0.2.0) 64-bit
Character set: AL32UTF8
National character set: AL16UTF16
O/S: Oracle Linux 6 64-bit
The characteristics of the CDB are as follows:
Version: Oracle Database 12c Release 1 64-bit
Character Set: AL32UTF8
National character set: AL16UTF16
O/S: Oracle Linux 6 64-bit
Which technique should you use to minimize down time while plugging this non-CDB into the
CDB?

A.
Transportable database

B.
Transportable tablespace

C.
Data Pump full export/import

D.
The DBMS_PDB package

E.
RMAN
Correct answer: B
Notes:-
Full transportable export/import is only available when the source database is 11.2.0.3 or later.
Data Pump full export/import is slow compared to transportable tablespace.
The DBMS_PDB package is not available for an 11.2.0.2.0 source.
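The transportable tablespace route, roughly (the tablespace name, directory object, connect string, and data file path are placeholders):

```shell
# On the 11.2.0.2 source: make the user tablespaces read-only
# SQL> ALTER TABLESPACE users READ ONLY;

# Export the tablespace metadata
expdp system DUMPFILE=tts.dmp DIRECTORY=dp_dir TRANSPORT_TABLESPACES=users

# Copy the data files and dump file to the target host, then
# import the metadata into the target PDB
impdp system@cdb1_pdb1 DUMPFILE=tts.dmp DIRECTORY=dp_dir \
      TRANSPORT_DATAFILES='/u01/app/oracle/oradata/CDB1/PDB1/users01.dbf'
```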
----------------------------------------------------------------------------------------------------------------------------------------------
25)An administrator account is granted the CREATE SESSION and SET CONTAINER system privileges. A multitenant container database (CDB) instance has the following parameter set:
THREADED_EXECUTION = FALSE
Which four statements are true about this administrator establishing connections to root in a CDB that has been opened in read only mode?

A.
You can connect as a common user by using the connect statement.

B.
You can connect as a local user by using the connect statement.

C.
You can connect by using easy connect.

D.
You can connect by using OS authentication.

E.
You can connect by using a Net Service name.

F.
You can connect as a local user by using the SET CONTAINER statement.
Correct answer: A,C,D and E
Notes:-
The question tells you an administrator user can login (CREATE SESSION) and can issue SET CONTAINER commands. It also tells you it is a CDB and that the multi-threaded architecture is disabled. That means OS authentication will work. So D is correct. As long as TCP/IP is configured, then using EZ Connect would work, so C is correct. Assuming there’s a listener and a tnsnames.ora file correctly configured would make E correct. Since the connection is to the ROOT container, that immediately eliminates any answer referencing LOCAL users because ROOT cannot have any LOCAL users.

When this initialization parameter is set to TRUE, which enables the multithreaded Oracle model,
operating system authentication is not supported. Attempts to connect to the database using
operating system authentication (for example, CONNECT / AS SYSDBA or CONNECT / ) when
this initialization parameter is set to TRUE receive an ORA-01031 "insufficient privileges" error.
----------------------------------------------------------------------------------------------------------------------------------------------



 

Saturday, March 26, 2016

Steps involved in node addition in oracle RAC

At present:
A two-node RAC with nodes sa-alpha and sa-beta, hosting a database MYDB with instances MYDB1 and MYDB2.
Aim:
Add a new node, sa-gamma, to this two-node cluster and create instance MYDB3 on it.
Prerequisites:-
 
1. OS certification, compatibility and patchset levels should match the other nodes in the cluster.
2. All the packages required by Oracle should be installed on the new node.
3. If you are using ASMLIB, the appropriate binaries should be installed on the new node.
4. OS limits should match the other nodes (/etc/security/limits.conf).
5. OS kernel parameters should be in line with the other nodes (/etc/sysctl.conf).
6. DNS configuration should match the other nodes:
    Check the contents of /etc/resolv.conf and compare with the other nodes.
    If you don't have DNS for name resolution, make sure you have a similar /etc/hosts file across all nodes.
7. Configure password-less SSH connectivity for the grid user among all 3 nodes.
8. Time should be synchronized across all nodes in the cluster; if there is a problem, try
    restarting ntpd on each node. From Oracle 11g onwards we can use CTSS
    instead of relying on operating system NTP, so if your cluster is configured
    with CTSS, you need to de-configure ntpd on the new node:
    #service ntpd stop
    #chkconfig ntpd off
    #mv /etc/ntp.conf /etc/ntp.conf.original
    #rm /var/run/ntpd.pid
9. Create job role separation operating system privilege groups, users, and directories.
    While creating the new users, make sure the UID and GID of oracle/grid are
    identical to those on the other RAC nodes,
    where oracle --> RDBMS software owner
    and grid --> grid infrastructure owner.
10. Configure network components:
     While configuring network components, make sure the public/private NICs have the
     same names as on the other nodes in the cluster.
     For example, if eth0 is configured for the public network on the first and second
     nodes, you have to choose the same NIC for the public network on the new node.
     Note:- the public network is a routable network: you give the default
     gateway address while configuring its NIC. The private network should
     not be routable, so we do not configure a default gateway for it.
             
11. Configure access to the shared storage:
       Either ask your system administrator to get this done, or accomplish it yourself.
       If you are using iSCSI (NAS), after configuring the storage do the following on the new node:
                As root, execute the following commands -->
                 service iscsi restart
                 lsscsi                            # should list all available LUNs
                 /etc/init.d/oracleasm scandisks
                 /etc/init.d/oracleasm listdisks   # should list all ASM-labeled disks
 
Verify the new node using cluvfy
Use CVU as the Oracle Grid Infrastructure owner to determine the integrity of the cluster and whether it is ready for the new Oracle RAC node to be added.
From the GRID_HOME, as the grid owner, execute the following:
$cluvfy stage -pre nodeadd -n sa-gamma -verbose
You can ignore the PRVF-5449 message if you have already configured and verified that the ASM shared disks are visible on the third node. The error is a result of having the voting disks stored in Oracle ASM storage, a new feature of Oracle 11g Release 2.
If you hit any ignorable errors here, set the following before actually invoking the addNode.sh script:
export IGNORE_PREADDNODE_CHECKS=Y
Extend the GRID_HOME to the new node 
$export IGNORE_PREADDNODE_CHECKS=Y
$cd $GRID_HOME/oui/bin   
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={sa-gamma}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={sa-gamma-vip}"
Basically it will copy and configure the Grid software on the remote node and will end up asking you to execute the following scripts as root --> orainstRoot.sh and root.sh,
where root.sh is the main script: it will create the OLR, update the OCR, and start all the clusterware daemons on the third node along with ASM.

#$GRID_HOME/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...  
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful. OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.

ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.  

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node ayu2, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded  
Here you can see the new node joining the cluster.

Post verification:-
$cluvfy stage -post nodeadd -n sa-gamma
------------------------------------------------------------------------------------------
Extend the RDBMS ORACLE_HOME to include the new node
From an existing node, as the database software owner, run the following command to extend the Oracle database software to the new node "sa-gamma".
As oracle user 
$cd $ORACLE_HOME/oui/bin    
$./addNode.sh -silent "CLUSTER_NEW_NODES={sa-gamma}" 

Note: remember that this time addNode.sh is invoked from ORACLE_HOME/oui/bin.
It will copy and configure the RDBMS software on the remote node and will end up asking you to execute the following script as root:
    
 #$ORACLE_HOME/root.sh
   
 Add New Instance to Clustered Database--
Invoke DBCA from any running instance, select Instance Management, and then choose the following:
Welcome: Select Oracle Real Application Clusters (RAC) database as the database type that you would like to create or administer.
Operations: Select Instance Management.
Instance Management: Select Add an instance.
List of cluster databases: Enter the sys username and password.
List of cluster database instances: Verify the existing database instances.
Instance naming and node selection: Verify the instance name (MYDB3) to be added and the node (sa-gamma) on which to add the instance.
Instance Storage: Verify the storage configuration.
Summary: Verify the set up and click on OK. 
  Basically it will do the following tasks for you -
  -> add instance-specific parameters to the spfile,
  -> add an undo tablespace for the third instance,
  -> add redo log groups for thread 3,
  -> update the metadata of the new instance in the OCR.
 
If you want to do it manually, execute the following commands from any of the running nodes:
   
SQL> alter database add logfile thread 3 group 5 ('+DATADG','+FRADG') size 50M, group 6 ('+DATADG','+FRADG') size 50M; 
 
Database altered. 
SQL> alter system set thread=3 scope=spfile sid='MYDB3'; 
System altered. 
SQL> alter database enable public thread 3; 
 
Database altered. 
SQL> create undo tablespace undotbs3 datafile '+DATADG' size 200M autoextend on; 
 
Tablespace created. 
  SQL> alter system set undo_tablespace='undotbs3' scope=spfile sid='MYDB3'; 
 
System altered. 
 
SQL> alter system set instance_number=3 scope=spfile sid='MYDB3'; 
 
System altered. 
 
SQL> alter system set cluster_database_instances=3 scope=spfile sid='*'; 
 
System altered.    


Update Oracle Cluster Registry (OCR)
The OCR will be updated to account for a new instance -- "MYDB3" -- being added to the "MYDB" cluster database. Add the "MYDB3" instance to the "MYDB" database and verify:
#srvctl add instance -d MYDB -i MYDB3 -n sa-gamma
#srvctl config database -d MYDB
             
 Reference:-  https://www.youtube.com/watch?v=ZV_P5-qpLhs

Saturday, March 12, 2016

Things to know about Block media recovery in Oracle

Block Media Recovery
Oracle9i introduced the concept of block media recovery (BMR) to the recovery process.
You can use block media recovery to recover one or more corrupted data blocks within data files. If you are using Oracle Database 10g or Oracle9i Database, use the BLOCKRECOVER command to perform block media recovery. From 11g onwards, instead of the BLOCKRECOVER command we can simply use the RECOVER command.

To perform BMR, the database must be open or mounted, must be in archivelog mode, and must have a current, usable control file. The database must not be a standby database. You must use level 0 or full backups for the restore.
All of the required archived redo logs must be available for the recovery process. If you have enabled Flashback Database, then RMAN will search the flashback logs for uncorrupted versions of the required blocks (from 11g onwards). So if your blocks are available in the flashback logs, RMAN won't look for any backups and will recover the corrupt block straight from the flashback logs, which is usually faster.

Identification of Corrupt Blocks
The V$DATABASE_BLOCK_CORRUPTION view displays blocks marked corrupt by database components such as RMAN, ANALYZE, dbv, and SQL queries. So any SQL or RMAN session that encounters a corrupted block will update V$DATABASE_BLOCK_CORRUPTION. The following types of corruption result in rows added to this view:

Physical corruption (sometimes called media corruption)
The database does not recognize the block: the checksum is invalid, the block contains all zeros, or the block header is corrupt. BMR can completely recover physical corruption of a block.

Logical corruption
The block has a valid checksum, the header and footer match, and so on, but the contents are logically inconsistent. Block media recovery may not be able to repair all logical block corruptions. In these cases, alternate recovery methods, such as tablespace point-in-time recovery, or dropping and re-creating the affected objects, may repair the corruption.

If you are trying to take an RMAN database backup and your database has some corrupted blocks, your backup will fail. RMAN stops at the first corrupt block, logging just that single one. To get the complete list of corrupt blocks you should issue a BACKUP VALIDATE command:

So, to check the datafiles for any physical or logical corruption and to verify whether a datafile is in the correct directory, use the BACKUP VALIDATE command. By default, RMAN checks for physical corruption. By using the CHECK LOGICAL syntax, we can check for logical corruption as well.

To validate the tablespace, use the following command:  

RMAN> backup validate tablespace USERS;

The above command does nothing but validate the USERS tablespace for a backup operation; if it finds any corrupt blocks, it will update v$database_block_corruption.

To validate a specific datafile, use the following command:

RMAN> backup validate datafile 8;

To check the whole database along with archivelog use the following command:

RMAN> backup validate database archivelog all;

The above commands will look only for physical corruption. Add the CHECK LOGICAL clause to include checks for logical corruption.

RMAN> backup validate check logical database archivelog all;

Note:- From 11g onwards, instead of BACKUP VALIDATE we can use the simpler VALIDATE command.
RMAN> validate datafile 8;

Any block corruptions are visible in the V$DATABASE_BLOCK_CORRUPTION view. You can identify the objects containing a corrupt block using a query like this.
COLUMN owner FORMAT A20
COLUMN segment_name FORMAT A30

SELECT DISTINCT owner, segment_name
FROM   v$database_block_corruption dbc
JOIN dba_extents e ON dbc.file# = e.file_id AND dbc.block# BETWEEN e.block_id and e.block_id+e.blocks-1
ORDER BY 1,2;

Recover a specific data block:
RMAN> recover datafile 8 block 13;
RMAN> recover datafile 8 block 13 datafile 10 block 12;

Recovering All Blocks in V$DATABASE_BLOCK_CORRUPTION
The RECOVER CORRUPTION LIST command will recover all the blocks that are marked corrupt.
RMAN> recover corruption list;
From the following output you can see that RMAN recovered the corrupt block from the flashback logs:
RMAN> blockrecover corruption list;
Starting recover at 16-NOV-10
using channel ORA_DISK_1
searching flashback logs for block images
finished flashback log search, restored 1 blocks
starting media recovery
media recovery complete, elapsed time: 00:00:01
Finished recover at 16-NOV-10
Automatic Block Corruption Repair (in 11g Active Data Guard)
In addition to the real-time query capability of the 11g Active Data Guard feature, we can also improve our high-availability capability by using the Automatic Block Media Repair (ABMR) feature, whereby data block corruptions on the primary database can be repaired by obtaining those blocks from the standby site -- all performed by a background process (ABMR), transparent to the application.

Basic concept:-
At the first encounter of corruption, Oracle marks the block as media corrupt, no longer keeps that block in the buffer cache, and writes the corrupted block to disk. No subsequent read of the block is successful until the block is recovered. You perform block media recovery with the RECOVER ... BLOCK command. By default, RMAN searches the flashback logs for good copies of the blocks, and then searches for the blocks in full or level 0 incremental backups. When RMAN finds good copies, it restores them and performs media recovery on the blocks.

Advantages:

  1. With BMR, you can restore individual data blocks from your RMAN backups and recover those blocks to the point of the failure. During the block recovery process, Oracle will continue to allow access to all blocks that are not corrupted. The only user impact will be on those users who want to access the corrupt blocks, and they would have been impacted anyway.
  2. Enables affected data files to remain online during recovery; that is, there is no need to take the affected tablespace/datafile offline.
  3. Lowers the mean time to recover (MTTR) because only blocks needing recovery are restored and recovered
  4. Without block media recovery, if even a single block is corrupt, then you must take the data file offline and restore a backup of the data file. You must apply all redo generated for the data file after the backup was created. The entire file is unavailable until media recovery completes. With block media recovery, only the blocks actually being recovered are unavailable during the recovery.
  5. Block media recovery is most useful for physical corruption problems that involve a small, well-known number of blocks. It is not intended for cases where the extent of data loss or corruption is unknown and the entire data file requires recovery. In such cases, data file media recovery is the best solution.


Wednesday, March 2, 2016

How to check the disk utilization where ASMLIB is configured


Get the list of all physical disks that are mapped to ASM disks by using the following shell script:

[root@todbex01 mp]# cat asm_to_dik.sh
#!/bin/bash
for asmlibdisk in `ls /dev/oracleasm/disks/*`
  do
    echo "ASMLIB disk name: $asmlibdisk"
    asmdisk=`kfed read $asmlibdisk | grep dskname | tr -s ' '| cut -f2 -d' '`
    echo "ASM disk name: $asmdisk"
    majorminor=`ls -l $asmlibdisk | tr -s ' ' | cut -f5,6 -d' '`
    device=`ls -l /dev | tr -s ' ' | grep -w "$majorminor" | cut -f10 -d' '`
    echo "Device path: /dev/$device"
  done
[root@todbex01 mp]# sh asm_to_dik.sh
ASMLIB disk name: /dev/oracleasm/disks/COR01
ASM disk name: OCR_VOTE_0000
Device path: /dev/dm-40
ASMLIB disk name: /dev/oracleasm/disks/COR02
ASM disk name: OCR_VOTE_0001
Device path: /dev/dm-37
ASMLIB disk name: /dev/oracleasm/disks/COR03
ASM disk name: OCR_VOTE_0002
Device path: /dev/dm-34
ASMLIB disk name: /dev/oracleasm/disks/DATA01
ASM disk name: ORADATA_0000
Device path: /dev/dm-24

You can get the same result by using the oracleasm utility too:

[root@todbex01 mp]# /usr/sbin/oracleasm querydisk -v DATA01
Disk "DATA01" is a valid ASM disk
[root@todbex01 mp]# /usr/sbin/oracleasm querydisk -v -d DATA01
Disk "DATA01" is a valid ASM disk on device [252,24]
[root@todbex01 mp]# ls -l /dev | grep 252,|grep 24
brw-rw----  1 root root     252,  24 May 26  2015 dm-24
[root@todbex01 mp]#
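The manual major/minor lookup above can be scripted. The sketch below is only an illustration: the ls -l /dev listing is embedded as sample text so the logic is self-contained, whereas on a live system you would feed it the real listing.

```shell
#!/bin/bash
# Sketch: given the major,minor pair reported by `oracleasm querydisk -d`
# (e.g. "252,24"), find the matching block device in an `ls -l /dev` listing.
# The listing is passed in as text for illustration; on a live system use:
#   find_device "252,24" "$(ls -l /dev)"
find_device() {
    local majmin="$1" listing="$2"
    local major="${majmin%,*}" minor="${majmin#*,}"
    # For device files, ls -l shows "MAJOR,  MINOR" where regular files show the size;
    # match block devices (mode starts with 'b') on those two fields.
    echo "$listing" | awk -v ma="$major," -v mi="$minor" \
        '$1 ~ /^b/ && $5 == ma && $6 == mi { print $NF }'
}

# Sample listing taken from the transcript above.
listing='brw-rw----  1 root root     252,  24 May 26  2015 dm-24
brw-rw----  1 root root     252,  37 May 26  2015 dm-37'

find_device "252,24" "$listing"    # prints dm-24
```

This simply automates the ls -l /dev | grep step shown in the transcript above.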

Once you find out the physical disk associated with your diskgroup, you can make use of iostat to get the I/O statistics for that particular disk.

Some information about iostat:-

iostat - Report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.

Commonly used option:-
-c     The -c option is exclusive of the -d option and displays only the CPU usage report.
-d     The -d option is exclusive of the -c option and displays only the device utilization report.
-k     Display statistics in kilobytes per second instead of blocks per second.
-m     Display statistics in megabytes per second instead of blocks per second.
-n     Display only the device and NFS statistics.
-x     Display extended statistics.

Eg,
       iostat -d 2
              Display a continuous device report at two second intervals.(until you press Ctl-C).
       iostat -d 2 6
              Display six reports at two second intervals for all devices.

       iostat -x hda hdb 2 6
              Display six reports of extended statistics at two second intervals for devices hda and hdb.

       iostat -p sda 2 6
              Display six reports at two second intervals for device sda and all its partitions (sda1, etc.)

   
[root@todbex01 mp]#iostat -dkx /dev/dm-24
Linux 2.6.39-400.214.4.el5uek (todbex01)       03/02/2016

Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
dm-24             0.00     0.00 174.69  9.00 16721.00    41.19   182.50     0.16    0.88   0.12   2.20

The main figure that we have to consider is the %util field.

%util: When this figure is consistently above 80%, you will need to take one of the following actions -

    increase RAM so dependence on disk reduces
    increase RAID controller cache so disk dependence decreases
    increase the number of disks so disk throughput increases (more spindles working in parallel)

[root@todbex01 mp]#

To get the input/output statistics for all disks at two-second intervals, use the following command:

[root@todbex01 mp]#iostat -dkx 2
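The %util check described above can be wrapped in a small awk filter. The sketch below embeds sample iostat output for illustration; on a real system you would pipe live iostat -dkx output into the filter instead.

```shell
#!/bin/bash
# Sketch: flag any device whose %util column (the last field of `iostat -dkx`
# extended output) exceeds a given threshold. Sample output is embedded here
# so the logic is self-contained; on a live system run e.g.:
#   iostat -dkx | tail -n +4 | flag_busy_disks 80
flag_busy_disks() {
    local threshold="$1"
    # Skip the header row, then compare the last column numerically.
    awk -v t="$threshold" \
        'NR > 1 && $NF+0 > t { printf "%s is %.1f%% utilized\n", $1, $NF }'
}

# Sample extended statistics, based on the transcript above plus a
# hypothetical busy device (dm-40 figures are made up for illustration).
sample='Device:         rrqm/s  wrqm/s    r/s   w/s    rkB/s   wkB/s avgrq-sz avgqu-sz await svctm %util
dm-24             0.00    0.00 174.69  9.00 16721.00   41.19   182.50     0.16  0.88  0.12  2.20
dm-40             0.00    0.00 500.00 90.00 52000.00  900.00   182.50     3.10 12.88  1.90 91.40'

echo "$sample" | flag_busy_disks 80    # flags only dm-40
```

Only dm-40 crosses the 80% threshold in the sample, so only that device is reported.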