Wednesday, September 4, 2013

Partitioning an existing table using datapump - a sample demo

First, create a sample table for testing purposes:
SQL>CREATE TABLE big_table (
  id            NUMBER(10),
  created_date  DATE,
  lookup_id     NUMBER(10),
  data          VARCHAR2(50)
);

Execute the following PL/SQL block to insert a large number of rows:

DECLARE
  l_lookup_id    NUMBER(10);
  l_create_date  DATE;
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    IF MOD(i, 3) = 0 THEN
      l_create_date := ADD_MONTHS(SYSDATE, -24);
      l_lookup_id   := 2;
    ELSIF MOD(i, 2) = 0 THEN
      l_create_date := ADD_MONTHS(SYSDATE, -12);
      l_lookup_id   := 1;
    ELSE
      l_create_date := SYSDATE;
      l_lookup_id   := 3;
    END IF;
    INSERT INTO big_table (id, created_date, lookup_id, data)
    VALUES (i, l_create_date, l_lookup_id, 'This is some data for ' || i);
  END LOOP;
  COMMIT;
END;
/
-- Apply a primary key constraint and an index to the table.

SQL> ALTER TABLE big_table ADD (CONSTRAINT big_table_pk PRIMARY KEY (id));

SQL>CREATE INDEX bita_created_date_i ON big_table(created_date);
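Optionally, gather optimizer statistics on the table so that later plan output shows realistic estimates; a minimal sketch using the standard DBMS_STATS package:

SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'BIG_TABLE', cascade => TRUE);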

Now big_table is created as a regular (non-partitioned) table:

SQL>  select table_name, tablespace_name, partitioned
      from dba_tables where table_name like 'BIG_%';

TABLE_NAME                     TABLESPACE_NAME                PAR
------------------------------ ------------------------------ ---
BIG_TABLE                      EXAMPLE                        NO  ----->(No partition)

Export the table using Data Pump:
C:\Windows\system32>expdp admin/admin@oradb1 tables=sample.big_table directory=ar1 dumpfile=big_table.dmp parallel=4

Screen shot of my expdp:
--------------------------
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "ADMIN"."SYS_EXPORT_TABLE_01":  admin/********@oradb1 tables=sample.big_table directory=ar1 dumpfile=big_table.dmp parallel=4
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 55 MB
. . exported "SAMPLE"."BIG_TABLE"                        46.61 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "ADMIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ADMIN.SYS_EXPORT_TABLE_01 is:
  /u01/software/datapump/big_table.dmp
Job "ADMIN"."SYS_EXPORT_TABLE_01" successfully completed at 14:52:38

Now drop and recreate the table, this time partitioned by range on CREATED_DATE:
SQL> drop table big_table;

Table dropped.

SQL> CREATE TABLE big_table (
    id            NUMBER(10),
    created_date  DATE,
    lookup_id     NUMBER(10),
    data          VARCHAR2(50)
)
PARTITION BY RANGE (created_date)
(PARTITION big_table_2003 VALUES LESS THAN (TO_DATE('01/01/2004', 'DD/MM/YYYY')),
 PARTITION big_table_2004 VALUES LESS THAN (TO_DATE('01/01/2005', 'DD/MM/YYYY')),
 PARTITION big_table_2005 VALUES LESS THAN (MAXVALUE));

SQL>  select table_name, tablespace_name, partitioned
      from dba_tables where table_name like 'BIG_%';

TABLE_NAME                     TABLESPACE_NAME                PAR
------------------------------ ------------------------------ ---
BIG_TABLE                      EXAMPLE                        YES ----> TABLE IS PARTITIONED

SQL> select count(*) from big_table;

  COUNT(*)
----------
         0  ----------> no data

Import the table using Data Pump:
C:\Windows\system32> impdp admin/admin@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append parallel=4

Note: the job will complete with one error because of the TABLE_EXISTS_ACTION=append parameter (use ignore=y if you are using the old imp utility).
You can exclude the metadata and copy only the data to avoid the error caused by the above parameter, as shown in the sketch below.
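For instance, a data-only import (a sketch reusing the same dump file; CONTENT is a standard Data Pump parameter) skips the metadata step entirely:

C:\Windows\system32> impdp admin/admin@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log content=data_only parallel=4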

Screen shot of my impdp (the append run):
-------------
Import: Release 11.2.0.1.0 - Production on Tue Jun 28 15:11:53 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "ADMIN"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_01":  admin/********@oradb1 directory=ar1 dumpfile=big_table.dmp logfile=big_table.log table_exists_action=append parallel=4
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SAMPLE"."BIG_TABLE" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SAMPLE"."BIG_TABLE"                        46.61 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "ADMIN"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 15:12:49

SQL> select count(*) from big_table;

  COUNT(*)
----------
   1000000


SQL> insert into big_table (id, created_date, lookup_id, data)
  2  values(1001,'27-jun-03',5,'This is some old data');

1 row created.


SQL> select * from big_table partition(big_table_2004);

no rows selected

SQL> select * from big_table partition(big_table_2003);

        ID CREATED_D  LOOKUP_ID DATA
---------- --------- ---------- -------------------------
      1001 27-JUN-03          5 This is some old data

TESTING PARTITION PRUNING

SQL>  SET AUTOTRACE TRACEONLY EXPLAIN
SQL> select * from big_table
  2  where created_date='27-JUN-03';

Execution Plan
----------------------------------------------------------
Plan hash value: 1710340555

------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |     1 |    37 |     3   (0)| 00:00:01 |       |       |
|   1 |  PARTITION RANGE SINGLE|           |     1 |    37 |     3   (0)| 00:00:01 |   KEY |   KEY |
|*  2 |   TABLE ACCESS FULL    | BIG_TABLE |     1 |    37 |     3   (0)| 00:00:01 |   KEY |   KEY |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CREATED_DATE"='27-JUN-03')
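
Note that the literal '27-JUN-03' relies on the session's NLS_DATE_FORMAT. A safer variant of the same query, with an explicit date conversion, prunes to a single partition just the same:

SQL> select * from big_table
  2  where created_date = TO_DATE('27-JUN-2003','DD-MON-YYYY');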

Source:- http://arundaskalathil.blogspot.in/2011/06/partitioning-existing-table-using.html

Tuesday, September 3, 2013

Oracle Cluster tables - an efficient way to reduce the I/O

If two or more tables are joined on a single column and you issue join queries on them most of the time, consider creating a cluster of these tables.
A cluster is a group of tables that share the same data blocks, i.e. all the tables are physically stored together.

For example EMP and DEPT table are joined on DEPTNO column. If you cluster them, Oracle physically stores all rows for each department from both the emp and dept tables in the same data blocks.
Since a cluster stores related rows of different tables in the same data blocks, disk I/O is reduced and access time improves for joins of clustered tables.
Each cluster key value is stored only once in the cluster and the cluster index, no matter how many rows of different tables contain the value.
Therefore, less storage might be required to store related table and index data in a cluster than is necessary in non-clustered table format.

Creating a Clustered Table
To create clustered tables, first create a cluster and an index on it, then create the tables in it.
For example, to create a cluster of the EMP and DEPT tables in which DEPTNO is the cluster key, first create the cluster by typing the following command.

SQL>create cluster emp_dept (deptno number(2));

Then create the cluster index on it.

SQL>create index emp_dept_index on cluster emp_dept;

Now create the tables in the cluster like this:

SQL>create table dept (deptno number(2),
                    name varchar2(20),
                    loc varchar2(20))
                    cluster emp_dept (deptno);

SQL>create table emp (empno number(5),
            name varchar2(20),
            sal number(10,2),
            deptno number(2)) cluster emp_dept (deptno);

Dropping Clusters
To drop a cluster, use the DROP CLUSTER statement. For example, to drop the emp_dept cluster, give the following command.

SQL>drop cluster emp_dept;

This will drop the cluster if the cluster is empty, i.e. no tables exist in it. If there are tables in the cluster, first drop the tables and then drop the cluster. If you want to drop the cluster even when it contains tables, give the following command.

SQL>drop cluster emp_dept including tables;

Listing Information about Clusters
To see how many clusters there are in your schema, give the following statement.

SQL>select * from user_clusters;

To see which tables are part of a cluster, give the following command.

SQL>select * from tab;

TNAME        TABTYPE   CLUSTERID
----------   -------   ---------
EMP          TABLE             1
SALGRADE     TABLE
CUSTOMER     TABLE
DEPT         TABLE             1
In the above example notice the CLUSTERID column: for the EMP and DEPT tables the cluster id is 1, which means these tables are in the cluster whose cluster id is 1. You can look up the name of that cluster in the USER_CLUSTERS view, as in the queries below.
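For example, the data dictionary gives a friendlier listing (a minimal sketch using the standard USER_TABLES and USER_CLUSTERS views):

SQL>select table_name, cluster_name
    from user_tables
    where cluster_name is not null;

SQL>select cluster_name, tablespace_name
    from user_clusters;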
Source:- http://www.oracle-dba-online.com/

Friday, August 23, 2013

Installing Oracle 10g RAC on Oracle VM in windows 2008 using openfiler as shared storage

Dear friends, those who are interested in knowing more about RAC architecture need to prepare a RAC system for their study. Here I found some helpful information on YouTube that provides detailed instruction on installing Oracle 10g RAC on Windows.
Thanks to Mr. Ahmed D. Sherif for his nice presentation. In his demonstration he uses Oracle VM as the virtualization platform, Windows Server 2003 (32-bit) as the guest operating system, Openfiler plus the Microsoft iSCSI initiator as shared storage, and Oracle database and Clusterware software (10.2.0.1).

But some of the software listed above was not available to me, so I completed the same setup using Windows Server 2008 Standard Edition (64-bit) and Oracle 10.2.0.4 software (both Clusterware and database).
The YouTube links that I followed are given below:
Part1
http://www.youtube.com/watch?v=6BmwrR3Vcpc
Part2
http://www.youtube.com/watch?v=w7foLjbac4c
Part3
http://www.youtube.com/watch?v=BMPd_XAmtdc
Part4
http://www.youtube.com/watch?v=shVTEpyBohI
Part5
http://www.youtube.com/watch?v=shVTEpyBohI

You can complete the RAC setup by following the above links, but I got stuck at several stages and it ate up my valuable time. Here I want to share that experience.

1. You can install the Openfiler software by watching the video; during installation you will be prompted for the root user password. After installation you will get a URL for the Openfiler interface, something like
https://192.168.1.141:446, which will ask for a username and password.
Here you have to give 'openfiler' as the username, not 'root'. As I didn't know the default password for the 'openfiler' user, I reset it as the 'root' user.

2. You don't need to download the Microsoft iSCSI software for Windows Server 2008, as it comes built in with the Windows Server 2008 editions.

3. Once you have configured the shared disks for RAC in Openfiler, you have to configure the disks on each node through the Microsoft iSCSI initiator. After configuring it, you can see that the shared disks have appeared in the 'Disk Management' window on every node. Each disk should be in 'online' mode. After that you have to partition each disk (here 5) without any file system or drive letter. At this stage I struggled a lot, because the Clusterware installation did not detect my previously created shared disks. After googling and studying a lot of articles for more than a week, I came to know that on Windows, Clusterware does not support partitions created as primary partitions, and Windows Server 2008 by default creates 'primary' partitions. So you have to use the diskpart command to create extended partitions in Windows 2008. I deleted every partition that I had mounted from the storage, then created extended partitions using the diskpart command.
How to use the diskpart command
1. Open a command prompt and run the diskpart command, then type
DISKPART>list disk
It will show all the available disks.
2. Select the disk on which you want to create the extended partition:
DISKPART>select disk 1
3. Create the extended partition on the selected disk:
DISKPART> create partition extended
You have to follow the same steps for the remaining disks. Once you have created an extended partition on every allocated disk, you can create the logical partitions by right-clicking each disk in the 'Disk Management' window. While creating the logical drives, keep in mind not to format the drives and not to assign any drive letters. The end result is 5 logical disks (as per the video), ready to be recognized by the Clusterware software. Once the disks are ready,
you can go ahead with the Clusterware installation.

Note:- 
In Oracle 10g, raw devices are used for storing the voting disk and OCR, but from 11g onwards you can keep these in ASM. So here the first two raw devices are used for the voting disk and OCR (both configured with external redundancy, having only one disk each) and the remaining three raw devices are used for ASM.
The rest of my installation never got stuck at any stage, and I completed my two-node RAC setup.

Hope it helps somebody interested in Oracle RAC architecture.
Enjoy the reading :)
  

Wednesday, August 14, 2013

Different types of Failover mechanism in Oracle

FAILOVER: 
In the context of Oracle Net, failover refers to the mechanism of switching over to an alternate resource when the connection to the primary resource gets terminated for any reason. Connection failures may be broadly categorized as:
Those that occur while making the initial connection. (Connect Time Failover)
Those that occur after a connection has been successfully established. (TAF)

The first category of connection failure can be dealt with comparatively easily. If your attempt to connect to an instance fails, you can make the connection attempt again, but this time to a backup instance. As long as you have backup instances configured, you can continue trying to connect, until a successful connection is established. This process is technically termed as Connect Time Failover.

Connect Time Failover
The connect-time failover feature allows a client to connect to another listener if the initial connection to the first listener fails. Multiple listener locations are specified in the client tnsnames.ora file. If a connection attempt to the first listener fails, a connection request to the next listener in the list is attempted. You can achieve this by adding multiple listener addresses within an address list in tnsnames.ora. Here is an example of what a tnsnames.ora file looks like with connect-time failover enabled.

The TNS parameter for connect-time failover is FAILOVER, and its default value is ON.
(failover=on) is the default for ADDRESS_LISTs and DESCRIPTION_LISTs, so we do not need to specify this parameter explicitly to turn it on.

Sample TNS net service alias for client-side connect-time failover:-

myrac_failover =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = myrac)
    )
  )

Notice the additional entries under the ADDRESS_LIST section: two listeners are specified. If a connection attempt to the rac1-vip host on port 1522 is unsuccessful, a connection attempt is made to the rac2-vip host on port 1522.

Note:- Entries in the ADDRESS_LIST do not have to be RAC nodes. The example explained here is for a two-node RAC database, but you can also configure connect-time failover with a standby database, so that one of the entries in the list is a standby database. As long as the defined service (database) in the ADDRESS_LIST is available on the other node, the client is able to connect to that node.
For example,

failover_norac=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS= (PROTOCOL=TCP) (HOST=prod_db) (PORT=1521))
(ADDRESS= (PROTOCOL=TCP) (HOST=standby_db) (PORT=1521))
(FAILOVER= TRUE)
)
(CONNECT_DATA=
(SERVICE_NAME= orcl)
)
)
The other criterion, of course, is that the addresses in the list allow the client to get to the data needed by the application. With Oracle RAC we know that the data is always the same, as it is the same database. In the case of a physical or logical standby database, whether or not the data is accessible depends upon how the standby database is configured.

Note:- An important issue to be aware of at this point is that Connect Time Failover only works for you if you are using dynamic registration.

Transparent application failover (TAF)
Now, let’s look at how TAF works. Unlike connect time failover, which is invoked before the connection is made, TAF comes into play after the connection is made (and then, broken). If the connection is lost while the application is running, Oracle Net will transparently reconnect the application to another instance accessing the same database.

TAF supports two types of failover: SESSION and SELECT. A SESSION failover connection is not over-ambitious: it just fails over to a backup instance, and all work in progress at that point is irrevocably lost. SELECT failover is more intricate, inasmuch as it enables some types of read-only application to fail over without losing the work in progress. If a SELECT statement was in progress at the time of the termination of the connection, then as the connection is re-established to a backup instance, Oracle Net re-executes the SELECT statement and positions the cursor so that the client can seamlessly continue fetching the rows. But that’s about all that TAF has to offer. It doesn’t have any mechanism to recover DML statements that were in progress when the failure occurred, and even for SELECT statements you lose global temporary tables, package states and session settings.
TAF supports two failover methods: BASIC and PRECONNECT. In BASIC method, you connect to the backup instance when the primary connection fails. In the PRECONNECT method, you connect to the backup instance at the same time you connect to the primary instance. This has the obvious benefit of having a backup connection available all of the time, thus reducing the time of ‘failover’. But the downside is that you have to pay the extra ‘cost’ in terms of resources spent, of having a backup connection open all the while.
TAF is configured by adding a FAILOVER_MODE parameter to the CONNECT_DATA section of a net service name. Below is an example of configuring TAF with a two-node RAC.
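A sketch of such an entry, reusing the myrac service and VIP hosts from the earlier example (the TYPE, METHOD, RETRIES and DELAY values are illustrative):

myrac_taf =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = myrac)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 5)
      )
    )
  )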

Note:-
The BACKUP parameter specifies a different net service name to be used to establish the backup connection. A BACKUP should be specified when using PRECONNECT to pre-establish connections. Specifying a BACKUP is strongly recommended for the BASIC method as well; otherwise, reconnection might first attempt the instance that has just failed, adding additional delay until the client reconnects.
http://oracleinquisitive.blogspot.in/2012/09/transparent-application-failover-taf.html

Tuesday, August 13, 2013

How does load balancing work in Oracle?

Load Balancing
Oracle RAC systems provide two types of load balancing for automatic workload management:
* Server load balancing distributes processing workload among Oracle RAC nodes.
* Client load balancing distributes new connections among Oracle RAC nodes so that no one server is overwhelmed with connection requests. For example, when a connection fails over to another node because of hardware failure, client load balancing ensures that the redirected connection requests are distributed among the other nodes in the RAC.
The primary difference between these two methods is that the former distributes processing and the latter distributes connection attempts. Let's look at the details.

Client side load balancing
Client side load balancing is determined by configuring tnsnames.ora with multiple addresses in the description and the parameter LOAD_BALANCE=yes (you can also specify 'ON'), as in this example.
myrac =
  (DESCRIPTION =
   (LOAD_BALANCE=ON)
   (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
   (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
    (CONNECT_DATA =
     (SERVICE_NAME = myrac)
    )
  )

Client load balancing helps distribute new connections in your environment so that no one server is overwhelmed with connection requests. When client load balancing is enabled, connection attempts are made randomly among RAC nodes. Suppose you have an Oracle RAC environment with multiple nodes: A, B, C and D. Without client load balancing enabled, connection attempts may be front-loaded, meaning that most connection attempts would try node A first, then node B, and so on until a connection attempt is successful. This creates a situation where node A and node B can become overloaded with connection requests. Thus, in client side load balancing, the client will randomly pick an address to try, without regard to the availability of the node or to how heavily loaded the node may be.

Server Load Balancing
Server side load balancing is controlled by two parameters: LOCAL_LISTENER and REMOTE_LISTENER. By default LOCAL_LISTENER is blank; setting the LOCAL_LISTENER parameter is mandatory when using a port other than the default. LOCAL_LISTENER on each node should point to the listener on that node. An Oracle database has the ability to automatically register its presence with an existing default listener with the help of PMON, but you must set the LOCAL_LISTENER initialization parameter for a non-default listener.
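For example, a sketch of registering an instance with a non-default listener port (the instance name myrac1 is illustrative; rac1-vip follows the earlier examples):

SQL> ALTER SYSTEM SET local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1522))' SCOPE=BOTH SID='myrac1';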

A remote listener is a listener residing on one computer that redirects connections to a database instance on another computer. Remote listeners are typically used in an Oracle Real Application Clusters (Oracle RAC) environment. You can configure registration to remote listeners, such as in the case of Oracle RAC, for dedicated server or shared server environments. The REMOTE_LISTENER parameter in the spfile should point to a TNSNAMES entry on each server node, which in turn lists the IPs and ports of all available listeners in the cluster. DBCA will set these entries up for you if you use it to create your databases (recommended).

The remote_listener parameter is then set to listeners_myrac in the spfile

*.remote_listener='LISTENERS_MYRAC'

In tnsnames.ora you can see entry like this,

LISTENERS_MYRAC =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip )(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip )(PORT = 1521))
  )

There are two types of server-side load balancing: 
Load Based — Server side load balancing redirects connections by default depending on node load. This is the default.
Session Based — Session based load balancing takes into account the number of sessions connected to each node and then distributes the connections to balance the number of sessions across the different nodes.

Note:- Use listener_<instance_name> as the alias name for the local listener, and listeners_<servicename> for the remote listener alias.
The way you set the remote listener changed in RAC 11.2: instead of registering with the local listeners, the database registers with the SCAN listeners.
This approach is similar to the above, but the pain of configuring individual nodes/their VIPs in the client tnsnames.ora is eliminated. All we need is round-robin-enabled DNS name resolution and the SCAN DNS name in the client tnsnames.ora. If you do not use DBCA, you should set the REMOTE_LISTENER database parameter to scan_name:scan_port.
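For example, a minimal sketch on 11.2 (the SCAN name myrac-scan is illustrative):

SQL> ALTER SYSTEM SET remote_listener='myrac-scan:1521' SCOPE=BOTH SID='*';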

Server-side load balancing divides the connection load evenly between all available listeners by determining the total number of connections on each listener, and then distributing new user session connection requests to the least loaded listener(s) based on the total number of sessions already connected. While a bit more complex to implement because it requires configuration of multiple listeners, it most definitely helps to even out connections across all available listeners in a database system.

To implement server-side load balancing, at least two listeners must be configured. Also, the REMOTE_LISTENERS initialization parameter must be added to the database’s PFILE or SPFILE so that the database knows to search out the value provided in that parameter in the database server’s TNSNAMES.ORA configuration file. When server-side load balancing is activated, each listener that contributes a listening endpoint communicates with the other listener(s) via each database instance’s PMON process. Oracle then determines how many user connections each listener is servicing, and it will distribute any new connection requests so that the load is balanced evenly across all servers. The entries in TNSNAMES.ORA direct the listeners to share information about the relative load connectivity.
   

Tuesday, May 14, 2013

Data Guard Protection Modes and redo transport modes in oracle

Log Transport Services enables and controls the automated transfer of redo data within a Data Guard configuration from the primary site to each of its standby sites.
Log transport services also controls the level of data protection for your database. The DBA will configure log transport services to balance data protection and availability against database performance. Log transport services will also coordinate with Log Apply Services and Role Transition Services for switchover and failover operations.

Maximum Performance Mode
Maximum Performance mode is the default protection mode and provides the highest level of data protection that is possible without affecting the performance or availability of the primary database. With this protection mode, a transaction is committed as soon as the redo data needed to recover the transaction is written to the local (online) redo log.
When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, log transport services can be set to use either LGWR / ASYNC or ARCH. In order to reduce the amount of data loss on the standby destination if the primary database were to fail, set the LGWR and ASYNC attribute. Using this configuration, the primary database writes its redo stream to the standby redo logs on the standby database asynchronously with respect to the commitment of the transactions that create the redo data. When the nodes are connected with sufficient bandwidth, this mode provides a level of data protection that approaches that of Maximum Availability mode with minimal impact on primary database performance.
Note that the use of standby redo logs while operating in Maximum Performance mode is only necessary when configuring log transport services to use LGWR. When log transport services is configured to use ARCH, standby redo logs are not required.
The minimum requirements are described in the following table:
Redo archival process:      LGWR or ARCH
Network transmission mode:  ASYNC (when using LGWR)
Disk write option:          NOAFFIRM
Standby redo logs:          required only when using LGWR

For example,
log_archive_dest_2='service=testdb_standby ARCH NOAFFIRM'
or
log_archive_dest_2='service=testdb_standby LGWR ASYNC NOAFFIRM'

After modifying these setting in init file ,execute the following command from sql prompt 
SQL>  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE  ;
The above command is actually not required, as MAXIMIZE PERFORMANCE is the default protection mode in Data Guard.
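You can verify the current mode at any time from V$DATABASE:

SQL> SELECT protection_mode, protection_level FROM v$database;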

Maximum Protection Mode
Maximum Protection mode offers the ultimate in data protection. It guarantees no data loss will occur in the event the primary database fails. In order to provide this level of protection, the redo data needed to recover each transaction must be written to both the local (online) redo log and to a standby redo log on at least one standby database before the transaction can be committed. In order to guarantee no loss of data can occur, the primary database will shut down if a fault prevents it from writing its redo data to at least one remote standby redo log.
In a multiple-instance RAC database environment, Data Guard will shut down the primary database if it is unable to write the redo data to at least one properly configured database instance (see minimum requirements below).
In order to participate in Maximum Protection mode:
At least one standby instance has to be configured with standby redo logs.
When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, you must use the LGWR, SYNC, and AFFIRM attributes.
The minimum requirements are described in the following table:
Redo archival process:      LGWR
Network transmission mode:  SYNC
Disk write option:          AFFIRM
Standby redo logs:          required on at least one standby database

For example:
log_archive_dest_2='service=testdb_standby LGWR SYNC AFFIRM'
and
SQL>  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION   ;
will keep the Data Guard configuration in maximum protection mode.
The SYNC attribute with the LGWR process specifies that network I/O is to be performed synchronously for the destination. This means that once the I/O (redo data) is initiated, the archiving process waits for the I/O to complete before continuing. When specifying the SYNC attribute, all network I/O operations are performed synchronously, in conjunction with each write operation to the online redo log. The transaction is not committed on the primary database until the redo data necessary to recover that transaction is received by the destination.

It is highly recommended that a Data Guard configuration operating in Maximum Protection mode contain at least two physical standby databases that meet the requirements listed in the table above. That way, the primary database can continue processing if one of the physical standby databases cannot receive redo data from the primary database. If only one standby database is configured with the minimum requirements listed above, the primary database will shut down when the physical standby databases cannot receive redo data from the primary database!

Net Timeout
The Data Guard environment works by sending the redo data to the standby server by connecting to the database instance there. If the instance does not respond in time, the log shipping service waits for a specified timeout value and then gives up. This timeout value can be set in Oracle Database using an attribute called NET_TIMEOUT. In maximum protection mode, the log shipping service will retry 20 times before giving up.
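For example, a sketch adding a 30-second timeout (the value is illustrative) to the destination shown earlier:

log_archive_dest_2='service=testdb_standby LGWR SYNC AFFIRM NET_TIMEOUT=30'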


Maximum Availability Mode
Maximum Availability mode provides the highest level of data protection that is possible without affecting the availability of the primary database. This protection mode is very similar to Maximum Protection where a transaction will not commit until the redo data needed to recover that transaction is written to both the local (online) redo log and to at least one remote standby redo log. Unlike Maximum Protection mode; however, the primary database will not shut down if a fault prevents it from writing its redo data to a remote standby redo log. Instead, the primary database will operate in Maximum Performance mode until the fault is corrected and all log gaps have been resolved. After all log gaps have been resolved, the primary database automatically resumes operating in Maximum Availability mode.

Please note that Maximum Availability mode guarantees that no data will be lost if the primary fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Just like Maximum Protection mode, Maximum Availability requires:
At least one standby instance has to be configured with standby redo logs.
When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, you must use the LGWR, SYNC, and AFFIRM attributes.
The minimum requirements are described in the following table:
Redo archival process:      LGWR
Network transmission mode:  SYNC
Disk write option:          AFFIRM
Standby redo logs:          required on at least one standby database

For example:
log_archive_dest_2='service=testdb_standby LGWR SYNC AFFIRM'
and
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
will keep the Data Guard configuration in maximum availability mode.

Oracle Data Guard: Synchronous vs. Asynchronous Redo Transport

Data Guard Redo Transport Services coordinate the transmission of redo from a primary database to the standby database. While the LGWR process on the primary database is writing redo to its online redo log files (ORL), a separate Data Guard process called the Log Network Server (LNS) reads from the redo buffer in the SGA and passes redo to Oracle Net Services for transmission to the standby database. Redo records transmitted by the LNS are received at the standby database by another Data Guard process called the Remote File Server (RFS), which writes them to a sequential file called a standby redo log file (SRL).

Synchronous Redo Transport
It is also called the “zero data loss” method, as the LGWR is not allowed to acknowledge that a commit has succeeded until the LNS confirms that the redo needed to recover the transaction has been written to disk at the standby site.

Here is how it works:

1. A user commits a transaction. The LGWR reads the redo record from the log buffer, writes it to the online redo log file, and waits for confirmation from the LNS.

2. The LNS reads the same redo record from the log buffer and transmits it to the standby database using Oracle Net Services. The RFS receives the redo at the standby database and writes it to a standby redo log file.

3. When the RFS receives a write-complete from the disk, it transmits an acknowledgment back to the LNS process on the primary database, which in turn notifies the LGWR that transmission is complete. The LGWR then sends a commit acknowledgment to the user.

This transport mode is used by the Maximum Protection and Maximum Availability data protection modes

Asynchronous Redo Transport
In asynchronous transport (ASYNC), the LGWR process does not wait for the acknowledgment from the LNS. This creates a near-zero performance impact on the primary database regardless of the distance between the primary and standby locations. The asynchronous redo transport mode transmits redo data asynchronously with respect to transaction commitment: a transaction can commit without waiting for the redo generated by that transaction to be successfully sent to any redo transport destination that uses the asynchronous redo transport mode. This transport mode is used by the Maximum Performance data protection mode.



Tuesday, April 30, 2013

Detailed information on different methods of generating explain plan

Whenever a SQL statement is executed, Oracle (that is, the RBO or CBO) designs an execution plan for it. This execution plan is basically a step-by-step instruction for how the statement must be executed: the order in which tables are read, whether indexes are used, which join methods are used to join tables, and so on. The execution plan for a SQL statement can be viewed with the EXPLAIN PLAN statement. The query execution plan is elaborated during a SQL statement's parse phase.

For generating an execution plan (explain plan) we can use different methods, depending on the situation:
1. By using the EXPLAIN PLAN FOR command
2. By using the package DBMS_XPLAN
3. By using the V$SQL_PLAN view to display the execution plan of a SQL statement
4. Using SQL*Plus Autotrace

You can execute the EXPLAIN PLAN FOR command from SQL*Plus; however, it will not show the plan but saves it into a table named PLAN_TABLE.

The PLAN_TABLE Output Table
Starting with release 10g, this table is automatically available as a global temporary table. With previous releases, you have to create it in each schema as needed. You can use the SQL script utlxplan.sql to manually create a local PLAN_TABLE in your schema. The exact name and location of this script depends on your operating system. On Unix, it is located in the $ORACLE_HOME/rdbms/admin directory.
SQL>conn sys/password as sysdba -- or mention your schema/password
SQL>@$ORACLE_HOME/rdbms/admin/utlxplan.sql
SQL>CREATE PUBLIC SYNONYM plan_table FOR sys.plan_table;
SQL>GRANT ALL ON sys.plan_table TO public;
PLAN_TABLE is the default sample output table into which the EXPLAIN PLAN statement inserts rows describing execution plans. Oracle Corporation recommends that you drop and rebuild your local PLAN_TABLE table after upgrading the version of the database because the columns might change. This can cause scripts to fail or cause TKPROF to fail, if you are specifying the table.

1. By using explain plan for command 
The EXPLAIN PLAN method doesn't require the query to be run, greatly reducing the time it takes to get an execution plan for long-running queries compared to AUTOTRACE. First the query must be explained. To explain a SQL statement, we use the EXPLAIN PLAN FOR clause immediately before the statement. For example:

SQL> EXPLAIN PLAN FOR
  2  SELECT *
  3  FROM   emp e, dept d
  4  WHERE  e.deptno = d.deptno
  5  AND    e.ename  = 'SMITH';
Explained.
SQL>

Displaying PLAN_TABLE Output
After you have explained the plan, use the following method to retrieve the explain plan. 
The explain plan for command explains the plan into the PLAN_TABLE table. You can then select the execution plan from PLAN_TABLE. For that you have to query PLAN_TABLE

SQL> rem  Show plan_table execution from Plan table.
col COL1        format A6 head 'Id-Pid'
col parent_id   format 99 head pnt
col operation   format A18
col object_name format A18 truncate 
col object_type format A18
col options     format A20 truncate
col optimizer   format A15  head 'Optmzer'
select    id||'-'||parent_id COL1
         ,operation
         ,object_name
         ,object_type
         ,options
         ,optimizer
from plan_table 
order by id
;
The output lists each plan step with its id, parent id, operation, object name, object type, options and optimizer mode.

Querying PLAN_TABLE directly is too much of a headache, as we have to know each field in detail, and it is rarely done that way. From version 8.1.5, Oracle has supplied two scripts to extract formatted explain plans from plan tables: one is for serial plans and the other is for parallel plans. They can be found under $ORACLE_HOME/rdbms/admin. Examples of their usage are below.

To obtain a formatted execution plan for serial plans:
SQL> set lines 150
SQL> set head off
SQL> @?/rdbms/admin/utlxpls.sql

To obtain a formatted execution plan for parallel plans:

SQL> set lines 150
SQL> set head off
SQL> @?/rdbms/admin/utlxplp.sql

For example, we can use utlxpls.sql to display the formatted explain plan after executing the EXPLAIN PLAN FOR command.

Identifying Statements for EXPLAIN PLAN
With multiple statements, you can specify a statement identifier and use it to identify your specific execution plan. Before reusing a statement ID, remove any existing rows for that statement ID. The above example can be rewritten as follows:

SQL> EXPLAIN PLAN
  SET STATEMENT_ID = 'st1' FOR
  SELECT * FROM   emp e, dept d WHERE  e.deptno = d.deptno 
  AND    e.ename  = 'SMITH';
  Explained.
SQL>
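Before re-explaining under the same identifier, the old rows can be cleared out first; a minimal housekeeping sketch:

SQL> DELETE FROM plan_table WHERE statement_id = 'st1';
SQL> COMMIT;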

Specifying Different Tables for EXPLAIN PLAN
You can use the INTO clause to specify a different table. Here is an example:

SQL> EXPLAIN PLAN
INTO my_plan_table
FOR
SELECT last_name FROM employees;
Explained.
SQL>

You can also specify a statement Id when using the INTO clause.
SQL>EXPLAIN PLAN
 SET STATEMENT_ID = 'st1'
 INTO my_plan_table
 FOR
 SELECT last_name FROM employees;
 Explained.
 SQL>

2. By using the package DBMS_XPLAN
From Oracle 9i onward, you should display execution plans using the DBMS_XPLAN package. The DBMS_XPLAN package is used to format the output of an explain plan. It was introduced in Oracle 9i as a replacement for the "utlxpls.sql" script or custom queries of the plan table. Subsequent database versions have increased the functionality of the package.

2.1 Display function:- 
The DISPLAY function allows us to display the execution plan stored in the plan table. First we explain a SQL statement using the EXPLAIN PLAN FOR command. The DBMS_XPLAN.DISPLAY function accepts mainly three optional parameters, in the following order:

  • table_name - Name of the PLAN_TABLE, default value 'PLAN_TABLE'.
  • statement_id - Statement id of the plan to be displayed. The default value is NULL, which displays the most recent execution plan in the PLAN_TABLE.
  • format - Controls the level of detail displayed, default value 'TYPICAL'. Other values include 'BASIC', 'ALL', 'SERIAL'. There is also an undocumented 'ADVANCED' setting.

Example:- 
SQL> EXPLAIN PLAN FOR
SELECT * FROM   emp e, dept d
WHERE  e.deptno = d.deptno
AND    e.ename  = 'SMITH';
Explained.
SQL>
You can use the DBMS_XPLAN DISPLAY function to generate the execution plan, and you can pass the format parameter (for example 'BASIC') to give the plan a different look; the basic usage patterns are listed below.

Basic usage of display function

To display the result of the last EXPLAIN PLAN command stored in the plan table:
SELECT * FROM table (DBMS_XPLAN.DISPLAY);
To display from other than the default plan table, "my_plan_table":
SELECT * FROM table (DBMS_XPLAN.DISPLAY('my_plan_table'));
To display the minimum plan information:
SELECT * FROM table (DBMS_XPLAN.DISPLAY('plan_table', null, 'basic'));
To display the plan for a statement identified by 'foo', such as statement_id='foo':
SELECT * FROM table (DBMS_XPLAN.DISPLAY('plan_table', 'foo'));


2.2 Display_cursor function:- 
In Oracle 10g Release 1 Oracle introduced the DISPLAY_CURSOR function. Rather than displaying an execution plan from the PLAN_TABLE, it displays the actual execution plan used to run a query stored in the cursor cache. This information is gathered from the V$SQL_PLAN_STATISTICS_ALL, V$SQL and V$SQL_PLAN views, so the user must have access to these. It accepts three optional parameters in the following order:
  • sql_id - The SQL_ID of the statement in the cursor cache. The SQL_ID as available from the V$SQL and V$SQLAREA views, or from the V$SESSION view using the PREV_SQL_ID column. If omitted, the last cursor executed by the session is displayed.
  • child_number - The child number of the cursor specified by the SQL_ID parameter. If not specified, all cursors for the specified SQL_ID are displayed.
  • format - In addition to the setting available for the DISPLAY function, this function also has 'RUNSTATS_LAST' and 'RUNSTATS_TOT' to display the last and total runtime statistics respectively. These additional format options require "STATISTICS_LEVEL=ALL".
Example:- 
The following example shows the advanced output from a query on the SCOTT schema.

SQL> CONN / AS SYSDBA
Connected.
SQL> GRANT SELECT ON v_$session TO scott;
Grant succeeded.
SQL> GRANT SELECT ON v_$sql TO scott;
Grant succeeded.
SQL> GRANT SELECT ON v_$sql_plan TO scott;
Grant succeeded.
SQL> GRANT SELECT ON v_$sql_plan_statistics_all TO scott;
Grant succeeded.
SQL> CONN scott/scott
Connected.


SQL> SELECT * FROM   emp e, dept d WHERE  e.deptno = d.deptno AND    e.ename  = 'SMITH';

--- out put omitted ---


SQL> SET LINESIZE 130
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(null,null,'ADVANCED'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  am2k9vaqsq9c9, child number 0
-------------------------------------
SELECT * FROM   emp e, dept d WHERE  e.deptno = d.deptno AND    e.ename  =
'SMITH'

Plan hash value: 351108634

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |       |       |     4 (100)|          |

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |         |     1 |    57 |     4   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL          | EMP     |     1 |    37 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS BY INDEX ROWID| DEPT    |     1 |    20 |     1   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | PK_DEPT |     1 |       |     0   (0)|          |
----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
   2 - SEL$1 / E@SEL$1

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
   3 - SEL$1 / D@SEL$1
   4 - SEL$1 / D@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
      ALL_ROWS

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "E"@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "D"@"SEL$1" ("DEPT"."DEPTNO"))
      LEADING(@"SEL$1" "E"@"SEL$1" "D"@"SEL$1")
      USE_NL(@"SEL$1" "D"@"SEL$1")
      END_OUTLINE_DATA
  */
Predicate Information (identified by operation id):
---------------------------------------------------
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
   2 - filter("E"."ENAME"='SMITH')
   4 - access("E"."DEPTNO"="D"."DEPTNO")

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "E"."EMPNO"[NUMBER,22], "E"."ENAME"[VARCHAR2,10], "E"."JOB"[VARCHAR2,9],
       "E"."MGR"[NUMBER,22], "E"."HIREDATE"[DATE,7], "E"."SAL"[NUMBER,22],
       "E"."COMM"[NUMBER,22], "E"."DEPTNO"[NUMBER,22], "D"."DEPTNO"[NUMBER,22],
       "D"."DNAME"[VARCHAR2,14], "D"."LOC"[VARCHAR2,13]
   2 - "E"."EMPNO"[NUMBER,22], "E"."ENAME"[VARCHAR2,10], "E"."JOB"[VARCHAR2,9],

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
       "E"."MGR"[NUMBER,22], "E"."HIREDATE"[DATE,7], "E"."SAL"[NUMBER,22],
       "E"."COMM"[NUMBER,22], "E"."DEPTNO"[NUMBER,22]
   3 - "D"."DEPTNO"[NUMBER,22], "D"."DNAME"[VARCHAR2,14], "D"."LOC"[VARCHAR2,13]
   4 - "D".ROWID[ROWID,10], "D"."DEPTNO"[NUMBER,22]

60 rows selected.

SQL>
Basic usage of display_cursor function 

To display the execution plan of the last SQL statement executed by the current session:
SELECT * FROM table (DBMS_XPLAN.DISPLAY_CURSOR);
To display the execution plan of all children associated with the SQL ID 'atfwcg8anrykp':
SELECT * FROM table (DBMS_XPLAN.DISPLAY_CURSOR('atfwcg8anrykp'));
To display runtime statistics for the cursor included in the preceding statement:
SELECT * FROM table (DBMS_XPLAN.DISPLAY_CURSOR('atfwcg8anrykp', NULL, 'ALLSTATS LAST'));


2.3 Other Functions
There are some other useful functions in this package, but they are not used very often, so they are just summarized below.
DISPLAY_AWR - Introduced in Oracle 10g Release 1, this function displays the execution plan of a SQL statement stored in the Automatic Workload Repository (AWR).
DISPLAY_SQLSET - Introduced in Oracle 10g Release 2, this function displays the execution plan of a given statement stored in a SQL tuning set.
DISPLAY_SQL_PLAN_BASELINE - Introduced in Oracle 11g Release 1, this function displays one or more execution plans for the specified SQL handle of a SQL plan baseline.
DISPLAY_PLAN - Introduced in Oracle 11g Release 2, this function displays the contents of the plan table in a variety of formats.

Of these, DISPLAY_AWR is more commonly used than the others.
2.4 Display_awr function:- 
This table function displays the contents of an execution plan stored in the AWR. Its syntax is:
DBMS_XPLAN.DISPLAY_AWR( 
   sql_id            IN      VARCHAR2,
   plan_hash_value   IN      NUMBER DEFAULT NULL,
   db_id             IN      NUMBER DEFAULT NULL,
   format            IN      VARCHAR2 DEFAULT 'TYPICAL');


sql_id
Specifies the SQL_ID of the SQL statement. You can retrieve the appropriate value for the SQL statement of interest by querying the column SQL_ID in DBA_HIST_SQLTEXT.
plan_hash_value
Specifies the PLAN_HASH_VALUE of a SQL statement. This parameter is optional. If omitted, the table function will return all stored execution plans for a given SQL_ID.
db_id
Specifies the database_id for which the plan of the SQL statement, identified by SQL_ID, should be displayed. If not supplied, the database_id of the local database is used, as shown in V$DATABASE.
format
Controls the level of details for the plan. It accepts four values:
  • BASIC: Displays the minimum information in the plan—the operation ID, the operation name and its option.
  • TYPICAL: This is the default. Displays the most relevant information in the plan (operation id, name and option, #rows, #bytes and optimizer cost). Pruning, parallel and predicate information are only displayed when applicable. Excludes only PROJECTION, ALIAS and REMOTE SQL information (see below).
  • SERIAL: Like TYPICAL except that the parallel information is not displayed, even if the plan executes in parallel.
  • ALL: Maximum user level. Includes the information displayed at the TYPICAL level with additional information (PROJECTION, ALIAS and information about REMOTE SQL if the operation is distributed).
Basic usage of display_awr function 
To display the different execution plans associated with the SQL ID 'atfwcg8anrykp':
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR('atfwcg8anrykp'));
To display all execution plans of all stored SQL statements containing the string 'TOTO':
SQL>SELECT tf.* FROM DBA_HIST_SQLTEXT ht, table
    (DBMS_XPLAN.DISPLAY_AWR(ht.sql_id,null, null,  'ALL' )) tf 
     WHERE ht.sql_text like '%TOTO%';

3. By using the V$SQL_PLAN view (commonly used in 9i and earlier)
Many DBAs query views such as V$SQL or V$SQLAREA to find queries to tune. They then take a query and run it through explain plan. The problem with the EXPLAIN PLAN statement, used in this manner, is that it determines how Oracle would execute the given SQL statement; it does not tell you how Oracle actually executed that SQL statement.
Oracle 9i introduced the V$SQL_PLAN dynamic performance view. The purpose of this view is to tell you how Oracle has executed a specific SQL statement. With this view, the DBA can be sure of how the SQL statement was executed. Querying this view directly is common on Oracle 9i and earlier, as DBMS_XPLAN.DISPLAY_CURSOR, introduced later in 10g, can display the execution plan from V$SQL_PLAN and its associated structures in a better format.
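For example, a minimal sketch of reading the plan straight from V$SQL_PLAN (this assumes 10g or later, where the SQL_ID column exists; on 9i you would key on ADDRESS and HASH_VALUE instead):

SQL> SELECT id, parent_id, operation, options, object_name, cost
  2  FROM   v$sql_plan
  3  WHERE  sql_id = 'am2k9vaqsq9c9'
  4  AND    child_number = 0
  5  ORDER  BY id;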

4. Using SQL*Plus Autotrace
The autotrace utility is a very underutilized feature of SQL*Plus. It offers statement tracing and instant feedback on any successful SELECT, INSERT, UPDATE or DELETE statement. The utility requires a plan table (for the Explain Plan) under the current user’s schema. In addition, it requires the plustrace or DBA role for the user executing the trace. 
The AUTOTRACE facility is only available in SQL*Plus. Unlike the EXPLAIN PLAN command, the AUTOTRACE facility must use a table called PLAN_TABLE. The EXPLAIN PLAN command can use a plan table of any name. 


Setting up AUTOTRACE

Follow these steps to set up the AUTOTRACE facility to display an explain plan.
1. Ensure that the PLAN_TABLE table exists. To see if your schema has a
PLAN_TABLE, try to describe it:
SQL> desc plan_table;
If the PLAN_TABLE doesn’t exist, you need to create one. Run this script to create the PLAN_TABLE in
your schema:
SQL> @?/rdbms/admin/utlxplan.sql
2. Your schema also needs access to the PLUSTRACE role. You can verify access to
the PLUSTRACE role using the following:
select username,granted_role from user_role_privs
where granted_role='PLUSTRACE';
If you don’t have access to the PLUSTRACE role, run steps 3 and 4 as the SYS schema:
3. Connect as SYS and run the plustrce.sql script:
SQL> conn / as sysdba
SQL> @?/sqlplus/admin/plustrce.sql
4. Grant the PLUSTRACE role to developers (or to a specific role) who want to use
the AUTOTRACE facility:
SQL> grant plustrace to star1;

AUTOTRACE OPTIONS: -

1) SET AUTOTRACE OFF - No AUTOTRACE report is generated. This is the default. Queries are run as normal.
2) SET AUTOTRACE ON - The query execution takes place and it gives the output
and the AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.
3) SET AUTOTRACE ON EXPLAIN - The query is run as normal and it gives the output
and the AUTOTRACE report shows only the optimizer execution path.
4) SET AUTOTRACE ON STATISTICS - The query is run as normal  and it gives the output
and the AUTOTRACE report shows only the SQL statement execution statistics.
5) SET AUTOTRACE TRACEONLY - Like SET AUTOTRACE ON, but suppresses the printing of the query output, if any.
6) SET AUTOTRACE TRACEONLY EXPLAIN - Like SET AUTOTRACE TRACEONLY,but suppresses the display of the execution statistics,showing only the query plan. This setting does not actually execute the query. It only parses and explains the query.
7) SET AUTOTRACE TRACEONLY STATISTICS - Like SET AUTOTRACE TRACEONLY,but suppresses the display of the query plan. It shows only the execution statistics.

Note:- Just SET AUTOTRACE ON returns all the rows of the query followed by the execution plan and statistics of the query run. SET AUTOTRACE TRACE, which is synonymous with SET AUTOTRACE TRACEONLY, does not return the rows of the query but returns the count of rows (as it always does at the end of every SELECT query) along with the execution plan and the statistics of the query.

This implies that SET AUTOTRACE ON is really of no use for us DBAs, since it returns the result set of the query before it shows the explain plan and the statistics. If the result set is of, say, 10 million records, then one has to wait for all the records to come up before getting a chance to look at the explain plan and statistics. It is suggested to use SET AUTOTRACE TRACE(ONLY), as it gives us the number of rows returned, the explain plan and the statistics.
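
For example, a quick session sketch using the EMP table from earlier:

SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT * FROM emp WHERE deptno = 10;

(the execution plan and statistics are displayed, but the rows themselves are suppressed)

SQL> SET AUTOTRACE OFF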