Monday, April 22, 2013

Different methods to trace a session in oracle

Sometimes a user may be running an application that executes hundreds of queries. In such scenarios it can be hard to pinpoint exactly which SQL statement is causing the performance problem, so you can turn on SQL tracing to capture statistics for all SQL statements run by that user. Through tracing you can instruct Oracle to capture the resources consumed and the execution plans for all SQL statements run by a session.
Here are the general steps to take to trace a session:
1. Enable tracing.
2. Run the SQL statements you want traced.
3. Disable tracing.
4. Use a utility such as tkprof, trcsess, or the Oracle Trace Analyzer to translate
the trace file into a human-readable format (a small tkprof sketch follows).
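As a quick illustration of step 4, a tkprof run could look like this (the trace and output file names are only placeholders):

$ tkprof orcl_ora_12345.trc orcl_ora_12345.txt sys=no sort=exeela,fchela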

Oracle provides a wide variety of methods to generate a SQL resource usage trace file (quite frankly, to the point of being confusing), including:
  • DBMS_SESSION
  • DBMS_MONITOR (available from 10g onwards; the recommended method)
  • DBMS_SYSTEM
  • DBMS_SUPPORT  (not installed by default)
  • ALTER SESSION
  • ALTER SYSTEM
  • oradebug (commonly used in Oracle 8i and older versions)
The method you use depends on your personal preferences and various aspects of your
environment, such as the version of the database and PL/SQL packages installed. Each of these tracing methods is described briefly in the following subsections.

Trace levels
Trace levels can be used to define the type of information captured in the trace file. There are a number of different trace levels, which generate different amounts of logging information (an example follows the list):
0 – trace is off
1 – standard SQL trace (parse, execute, fetch) with no wait events or bind variables; level 1 is the default trace level
2 – same as level 1; not commonly used
4 – as level 1, plus bind variables
8 – as level 1, plus wait events
12 – as level 1, plus bind variables and wait events
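For example (the event syntax itself is explained in the Diagnostic events section later in this post), a level 8 or level 12 trace of the current session can be requested like this:

SQL> alter session set events '10046 trace name context forever, level 8';
SQL> alter session set events '10046 trace name context forever, level 12';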

Init.ora/spfile parameters
There are a few init.ora parameters that need to be set for the sessions you want to trace.  They are as follows:
timed_statistics=true
statistics_level=typical|all
These are only really required for the wait events or SQL timing information and should be enabled by default on most systems.
max_dump_file_size=unlimited
This allows the trace file to grow as required by the trace. If this is not set to unlimited, you may find your trace file does not cover all activity, and you will see a message similar to “*** DUMP FILE IS LIMITED TO xxxxx BYTES ***” at the end of your trace file.
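A minimal sketch of setting these parameters (the scope you choose depends on your environment):

SQL> alter system set max_dump_file_size=unlimited scope=both;
SQL> alter session set timed_statistics=true;
SQL> alter session set statistics_level=typical;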

11g R1 and above: 
With the new diagnosability infrastructure introduced in Oracle Database 11g Release 1, trace and core files are placed in a location controlled by the DIAGNOSTIC_DEST initialization parameter.
To show the location of the DIAGNOSTIC_DEST, the following command can be used: 
 


SQL> show parameter diagnostic_dest

Oracle allows you to set the TRACEFILE_IDENTIFIER parameter at session level, allowing you to include some recognizable text into the trace file name.

SQL>alter session set tracefile_identifier='TEST1';

The trace file name will be something like <instance>_ora_<spid>.trc.
If you set a trace file identifier, for example with alter session set tracefile_identifier='TEST1';, then the trace file name will be something like <instance>_ora_<spid>_TEST1.trc.

Using DBMS_SESSION  (Available from oracle 7.3.4)
The DBMS_SESSION package is the simplest way to trace your own session. Here’s a brief summary of how to enable and disable tracing using DBMS_SESSION:
SQL> exec dbms_session.set_sql_trace(sql_trace=>true);
SQL> -- run sql statements that you want to trace...
SQL> exec dbms_session.set_sql_trace(sql_trace=>false);

Using DBMS_SYSTEM  (Available from oracle 7.3.4)
To enable SQL tracing in another session, you can use the DBMS_SYSTEM package. You first must identify the session you want to trace:
SQL> select username,sid,serial# from v$session;
Pass the appropriate values to the following lines of code:
SQL> exec dbms_system.set_sql_trace_in_session(sid=>200,serial#=>5, sql_trace=>true);
Run the following to disable tracing for the session:
SQL> exec dbms_system.set_sql_trace_in_session(sid=>200,serial#=>5, sql_trace=>false);
You can also use DBMS_SYSTEM to capture wait events:
SQL> exec dbms_system.set_ev(si=>123, se=>1234, ev=>10046, le=>8, nm=>' ');
SQL> exec dbms_system.set_ev(si=>123, se=>1234, ev=>10046, le=>0, nm=>' ');
where si=sid, se=serial#, ev=event, le=level and nm=name.

Using DBMS_SUPPORT  (Available from oracle 8.0.6)
This technique requires that you first load the DBMS_SUPPORT package (it’s not created by default):
SQL> @?/rdbms/admin/dbmssupp.sql
Use the following syntax to enable and disable tracing in your current session:
SQL> exec dbms_support.start_trace(waits=>TRUE, binds=>TRUE);
SQL> -- run sql statements that you want to trace...
SQL> exec dbms_support.stop_trace;
Use this syntax to enable and disable tracing in a session other than your own:
SQL> exec dbms_support.start_trace_in_session(sid=>123, serial=>1234,-
> waits=>TRUE, binds=>TRUE);
SQL> exec dbms_support.stop_trace_in_session(sid=>123, serial=>1234);

Using DBMS_MONITOR (Available from oracle 10.1)
If you are using Oracle Database 10g or higher, it is recommended to use the DBMS_MONITOR package, which offers a high degree of flexibility for tracing. To enable and disable tracing within the current session, use the following statements:
SQL> exec dbms_monitor.session_trace_enable;
SQL> -- run sql statements that you want to trace...
SQL> exec dbms_monitor.session_trace_disable;
Use the WAITS and BINDS parameters to enable tracing with wait event and bind variable information:
SQL> exec dbms_monitor.session_trace_enable(waits=>TRUE, binds=>TRUE);
Note:- Wait events track the amount of time spent waiting for a resource. Bind variables are substitution variables used in place of literal values.
Use the SESSION_ID and SERIAL_NUM parameters to enable and disable tracing for an already connected session. First run the following query to determine the SESSION_ID and SERIAL_NUM of the target session:
column line format a88
select 'ospid: ' || p.spid || ' # ''' || s.sid || ',' || s.serial# || ''' ' ||
       s.osuser || ' ' || s.machine || ' ' || s.username || ' ' || s.program line
from   v$session s, v$process p
where  p.addr = s.paddr
and    s.username = '&USER' and s.status = 'ACTIVE' and s.type = 'USER';

 
Now use the appropriate values when calling DBMS_MONITOR:
SQL> exec dbms_monitor.session_trace_enable(session_id=>1234, serial_num=>12345);
To stop the session-level tracing:
SQL> exec dbms_monitor.session_trace_disable(session_id=>1234, serial_num=>12345);
You can also enable the tracing of wait and bind variable information as follows:
SQL> exec dbms_monitor.session_trace_enable(session_id=>1234, serial_num=>12345, waits=>TRUE, binds=>TRUE);

To check which sessions have tracing enabled:
SQL> SELECT sql_trace, sql_trace_waits, sql_trace_binds FROM v$session;

From 11g onwards, you can identify the trace file for a session with the following query:
SQL> SELECT p.tracefile FROM v$session s JOIN v$process p ON s.paddr = p.addr WHERE s.sid = &sid;
You can also trace by client identifier (CLIENT_ID) using DBMS_MONITOR:
SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall');
SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall', waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.client_id_trace_disable(client_id=>'tim_hall');
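The client identifier itself is set from within the application session (DBMS_MONITOR then traces every session that carries that identifier); for example:

SQL> exec DBMS_SESSION.SET_IDENTIFIER('tim_hall');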
You can also turn on tracing at the database level:
SQL> exec DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits => TRUE, binds => TRUE, instance_name => NULL);
To turn it off:
SQL> exec DBMS_MONITOR.DATABASE_TRACE_DISABLE(instance_name => NULL);

To check which traces have been enabled:
SQL>SELECT * FROM dba_enabled_traces;




Altering Your Session
You can use ALTER SESSION to turn tracing on and off:
SQL> alter session set sql_trace=true;
SQL> -- run sql commands...
SQL> alter session set sql_trace=false;

Note:-  The SQL_TRACE parameter is deprecated. Oracle recommends that you use the DBMS_MONITOR or DBMS_SESSION packages to enable tracing.

Altering the System
You can turn on tracing for all sessions in the database by using this ALTER SYSTEM statement:
SQL> alter system set sql_trace=true;
Use the following SQL to disable system-wide tracing:
SQL> alter system set sql_trace=false;
Caution:- It is not recommended to set SQL tracing at the system level, as it can severely degrade system performance.

Using oradebug
The oradebug utility can be used to enable and disable tracing for a session. You need SYSDBA privileges to run this utility.
You can identify a process to trace either by its operating system process ID or its Oracle process ID.
To determine these IDs, run the following SQL query for the username you want to trace (in this example the user is mahi ):
SQL> select spid os_pid, pid ora_pid from v$process
where addr=(select paddr from v$session where username='MAHI');
Here is the output for this example:
OS_PID ORA_PID
------------------------ ----------
8828 17
Next, use oradebug with either the SETOSPID or SETORAPID option to attach oradebug to a session. This example uses the SETOSPID option:
SQL> oradebug setospid 8828    (this attaches oradebug to OS process 8828; tracing is not enabled until the next step)
Now you can set tracing on for the session:
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT FOREVER, LEVEL 8;
You can verify that tracing is enabled by viewing the trace file name:
SQL> oradebug TRACEFILE_NAME;
Use the following syntax to disable tracing:
SQL> oradebug EVENT 10046 TRACE NAME CONTEXT OFF;
Use the HELP option to view all features available in the oradebug utility:
SQL> oradebug help

Diagnostic events
When insufficient information is available to resolve a given problem, we can use Oracle
diagnostic events to produce additional diagnostic information. The Oracle database kernel is instrumented with over four hundred so-called “pseudo-error debugging events.”
Most of these events are used to produce additional diagnostic information, to change Oracle's behaviour, or to trace the inner workings of Oracle, because these events can enable some of the undocumented features of Oracle. The most commonly used diagnostic events are 10046, 10053, etc. Oracle diagnostic events are classified into four major categories based on their usage:

Immediate dump events
An immediate dump event dumps the diagnostic information to a trace file as soon as the command is fired. Immediate dumps are normally used to dump the system state, process state, and file headers (controlf, redo_hdrs, file_hdrs). They cannot be invoked by setting the EVENT parameter in the init file; they are triggered with the alter session set events command or by using oradebug.
 SQL> alter session set events 'immediate trace name systemstate level 8';
The preceding command creates a trace file which contains detailed information about every Oracle process active on the system, as well as the details about the resources held/requested by those processes. Another example of an immediate dump event is

 SQL>alter session set events 'immediate trace name controlf level 10';

On error dump events
On-error dump events are similar to immediate dump events, but they are invoked only when the specified error occurs. They are used to dump the system state, process state, or error stack when a particular Oracle error occurs. They are normally set using the EVENT parameter in the init.ora file. The following example shows an entry in the init.ora file to dump the error stack to a trace file when ORA-04020 (deadlock detected while waiting for an object) is encountered.

event= "4020 trace name errorstack level 1"
On-error dump events are most commonly used to identify the cause of a particular error, and Oracle Support may ask you to dump the error stack for further investigation. For example, to diagnose a shared pool memory leak (ORA-04031) you would need the following setting in the init.ora file:
event = "4031 trace name errorstack level 10"
Usually, on-error dump levels are set between 1 and 10.

Change behaviour events
Change-behaviour events are set using the EVENT parameter in the init.ora file. These are very powerful events and should be used with care. They are used to enable or disable certain functionality of the Oracle kernel. Unlike other diagnostic events, they do not have level numbers because they do not create a trace file.

Change-behaviour events take the additional keyword forever; without this keyword the event is fired only once and no subsequent actions are triggered. Change-behaviour events use a reserved set of numbers to bring about the change in Oracle behaviour.

For example, the following event disables the automatic shrinking of rollback segments by the SMON process, which usually happens every 12 hours:
event = "10512 trace name context forever"

Many of the change-behaviour events are potentially dangerous to the database, as they change its default behaviour.

Process trace events
Process trace events are used to get additional information while a process is running. These events are normally harmless, and they do generate a trace file. The following are valid examples of setting two events:

event="4031 trace name context forever, level 10"
evnet="10046 trace name context forever, level 12"

or you can
event="4031 trace name context forever, level 10: 10046 trace name context forever, level 12"or by
SQL>alter session set events '10046 trace name context forever, level 12';


We now know what this event does: it enables extended SQL trace. But why such a difficult syntax?
This syntax actually reveals some of the power of KSD diagnostic event syntax (KSD=kernel service debug):
10046

The first word in the event string (10046) specifies when an action should be taken. Event 10046 enables SQL statement timing, and everything after it (trace name context forever, level 12) is the action.
trace
The first word in the action (trace) specifies what type of action Oracle should take when that event (10046) occurs. In event syntax, “trace” is the most generic action, used for tracing, behaviour changing and Oracle dumps.
name
The “name” specifies that the name of what to dump/trace will follow. The “name” is always present when “trace” option is used (as far as I know).
context
The next keyword (context) is where you define whether you want Oracle to dump something when the event is hit or just do context-specific tracing. If you replace “context” with “errorstack”, for example, you wouldn’t get SQL trace output, but rather an Oracle errorstack dump whenever event 10046 is hit. You can use “oradebug dumplist” to find all the possible dump commands that can be set as actions for an event.
forever
The next keyword (forever) is actually an option to the action, not an action keyword itself. Forever means: keep invoking the action whenever the 10046 event is hit, forever (or until explicitly disabled). If we don’t specify forever, the action is invoked only once and the event disables itself after that.
level 12
The “level 12” is also just another option to the action, specifying that the value for that event handler in the given session should be 12. Whenever some Oracle function checks whether that event is set, it is returned the value 12; the calling function interprets that value and acts appropriately (tracing both binds and waits in our case).


Note:-
1. You can also set events using the Oracle-supplied package procedure DBMS_SYSTEM.SET_EV.
2. alter session set sql_trace = true; is functionally equivalent to alter session set events '10046 trace name context forever, level 1'; and to dbms_session.set_sql_trace(true);


Wednesday, April 17, 2013

what is histogram in oracle and its importance

DBMS_STATS by default calculates the number of rows in a table, the number of distinct values in a column, and so on. Without additional information, the optimizer assumes that values are evenly distributed, but if they are not, inefficient execution plans can result. A histogram holds the data distribution of values within a column of a table: the number of occurrences for a specific value/range.

Histograms are collected using DBMS_STATS. Histograms are a CBO feature and help the optimizer determine how data is skewed (distributed) within a column. A histogram is good to create for columns that appear in the WHERE clause and are highly skewed. Histograms help the optimizer decide whether to use an index or a full-table scan, and help it determine the fastest table join order.

When a query executes in the database, the optimizer records the predicate usage in sys.col_usage$ (such as the number of equijoins, non-equijoins, BETWEEN operators and LIKE operators). These recorded values in sys.col_usage$ are then used by dbms_stats to decide whether histograms are needed on the columns or not.
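As a rough illustration (this queries SYS-owned internal tables, so run it as a privileged user; the table name 'SALES' is just an example):

SQL> select o.name table_name, c.name column_name,
            u.equality_preds, u.equijoin_preds, u.range_preds, u.like_preds
     from   sys.col_usage$ u, sys.obj$ o, sys.col$ c
     where  u.obj#    = o.obj#
     and    u.obj#    = c.obj#
     and    u.intcol# = c.intcol#
     and    o.name    = 'SALES';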

Types of Histograms
Histograms are classified into frequency histograms and height-balanced histograms.
The type of histogram is stored in the HISTOGRAM column of the *_TAB_COL_STATISTICS views (USER/DBA), and its value will be 'HEIGHT BALANCED', 'FREQUENCY', or 'NONE'.
For example,

SQL>select column_name, num_distinct, histogram FROM user_tab_col_statistics
WHERE table_name = 'SALES';
Histogram and Buckets
A bucket is a range of values for a particular column. For example, if we had a column named true_or_false that could only have two values, true or false, then we could have up to 2 buckets, and these two buckets would describe the number of rows that were true and false. When histograms are created, the number of buckets can be specified; otherwise Oracle automatically chooses an appropriate value. It is this number that controls the type of histogram created.
When Distinct Values <= Buckets 
– Then Frequency Histogram is created
– Else Height-Balanced Histogram is created

One limitation of Oracle's frequency histograms is the hard limit of 254 buckets.
If you have a small number of distinct values (say, fewer than 100), queries are frequently executed against these columns for specific values, and the column data is skewed, then frequency histograms are your best option. If the number of distinct values is more than 254, frequency histograms are out of the question; you will have to use height-balanced histograms.
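You can verify which type of histogram was created and how many buckets it actually has, for example:

SQL> select column_name, num_distinct, num_buckets, histogram
     from   user_tab_col_statistics
     where  table_name = 'SALES';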

Frequency Histograms
Each value of the column corresponds to a single bucket of the histogram. Each bucket contains the number of occurrences of that single value.Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified. 




Height-Balanced Histograms
In a height-balanced histogram, the column values are divided into bands so that each
band contains approximately the same number of rows. For instance, if we have 10 distinct values in the column and only five buckets, Oracle creates a height-based (height-balanced) histogram and spreads the values evenly through the buckets. A height-balanced histogram is used when there are more distinct values than buckets, and the histogram statistics show a range of rows across the buckets. The useful information the histogram provides is where in the range of values the endpoints fall.





** EPN = endpoint number, EPV = endpoint value
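The endpoint numbers (EPN) and endpoint values (EPV) can be inspected through the *_TAB_HISTOGRAMS views; a minimal sketch (the column name is only a placeholder):

SQL> select endpoint_number, endpoint_value
     from   user_tab_histograms
     where  table_name = 'SALES' and column_name = 'AMOUNT_SOLD'
     order  by endpoint_number;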


How to Create a Histogram
You have the following options to create histograms using Oracle:
ANALYZE TABLE 
As part of the ANALYZE TABLE command for compiling statistics, you can specify
FOR ALL [INDEXED] COLUMNS [SIZE <buckets>]
to create histograms (<buckets> > 1). If you do not specify the SIZE, 75 buckets are used. For example, with the following command, histograms are created with 75 buckets for all indexed columns of the T100 table:
SQL> ANALYZE TABLE T100 COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;

DBMS_STATS
You can also specify the METHOD_OPT argument as part of the DBMS_STATS GATHER_*_STATS procedures; the contents of this argument are syntactically identical to the FOR expression of ANALYZE TABLE.
Therefore, the following is comparable with the above ANALYZE TABLE example:
SQL> exec DBMS_STATS.GATHER_TABLE_STATS('SAPR3', 'T100', METHOD_OPT=>'FOR ALL INDEXED COLUMNS');

Method_opt parameter: this is the parameter that controls histogram creation while collecting statistics. The default is FOR ALL COLUMNS SIZE AUTO in Oracle 10g; in Oracle 9i the default is FOR ALL COLUMNS SIZE 1, which turns off histogram collection.

METHOD_OPT dictates how histograms are gathered:
  • for all columns - gather histograms on all columns; hugely expensive and rarely should be used
  • for all indexed columns - gather histograms on all columns that are currently indexed. This is typically the "most useful", although you can arguably skip nearly unique columns such as primary keys
  • for all hidden columns - gather histograms on columns you cannot see; useful if using the object-relational features, which incorporate tons of hidden columns in your objects
  • for columns <list> - gather histograms on these specific columns; useful to skip primary key indexes or to get just the 2 or 3 columns of interest
The following options are available for the SIZE clause:
  • integer : number of histogram buckets; must be in the range [1,254] (if you do not specify the integer, Oracle takes its default of 75)
  • REPEAT : does the histograms just like they were done last time; it reads the data dictionary to figure out what to do
  • AUTO : Oracle determines the columns on which to collect histograms based on the data distribution and the workload of the columns. The table sys.col_usage$ stores information about column usage, and dbms_stats uses this information to determine whether a histogram is required for a column
  • SKEWONLY : Oracle determines the columns on which to collect histograms based only on the data distribution of the columns
For example,

SQL> exec dbms_stats.gather_table_stats(ownname => 'oe', tabname => 'inventories', method_opt => 'FOR COLUMNS SIZE 10 quantity_on_hand');

Demonstration of how the optimizer works with and without a histogram
SQL> create table t nologging as
  2  select case when rownum = 1 then 5 else mod(rownum,4) end as id, a.* from all_objects a;

Table created.

SQL> select count(*),id from t group by id;

  COUNT(*)         ID
---------- ----------
     12508          1
     12508          2
         1          5
     12508          3
     12508          0

SQL> create index t_ind on t(id) nologging;

Index created.

SQL> begin
  2  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HR', TABNAME =>     'T',ESTIMATE_PERCENT =>
  3  10, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',CASCADE => TRUE);
  4  end;
  5  /
PL/SQL procedure successfully completed.

Note:- Histogram collection is disabled by "FOR ALL COLUMNS SIZE 1"

SQL> set autotrace traceonly explain;
SQL> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 12455 |  1167K|   162   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T    | 12455 |  1167K|   162   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=1)

SQL> select * from t where id = 5;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  4152 |   389K|   162   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T    |  4152 |   389K|   162   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=5)

SQL>
Conclusion: The optimizer uses a full table scan for the query which returns 12508 records, but it also uses a full table scan for the query which returns only one record. This is obviously not ideal. The problem is resolved by collecting a histogram; let us see the next scenario.
Scenario 2: Let us gather the statistics with a histogram and see what kind of execution path the optimizer uses. FOR COLUMNS SIZE 5 ID creates 5 buckets for column ID. If we are not sure of the number of distinct values in the column, we can use the AUTO option to collect the histogram. With this histogram, the optimizer knows that the column ID is highly skewed: there are 5 buckets, four of them hold around 12508 records each with values 0 to 3, and the remaining bucket holds only one record with ID equal to 5. Now, depending on the query, the optimizer decides whether to use the index or a full table scan.


SQL> set autotrace off;
SQL> begin
  2  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HR', TABNAME => 'T',ESTIMATE_PERCENT =>
  3  10, METHOD_OPT => 'FOR COLUMNS SIZE 5 ID',CASCADE => TRUE);
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain;
SQL> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 12150 |  1139K|   162   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T    | 12150 |  1139K|   162   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ID"=1)

SQL> select * from t where id = 5;

Execution Plan
----------------------------------------------------------
Plan hash value: 1376202287

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |  2025 |   189K|   119   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |  2025 |   189K|   119   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | T_IND |  2025 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=5)

SQL>

Conclusion: The optimizer uses a full table scan for the query which returns 12508 records, while it uses an index scan for the other query which returns one record. In this scenario, the optimizer chooses the right execution plan based on the query WHERE clause.

Rules On When To Create Histograms

  • First there are no rules of thumb that are always correct.
  • When creating using dbms_stats, the use of the DEGREE (for parallel) can skew the value of density
  • Running dbms_stats can drop histograms, so always check whether a histogram exists on the table before DBMS_STATS is run. Use METHOD_OPT => 'FOR ALL COLUMNS SIZE REPEAT' to preserve existing histogram data.
  • Histograms are stored in the dictionary.
  • There is a maintenance and space cost for using histograms.
  • Only create histograms on columns that are used in WHERE clauses of queries and have a highly skewed data distribution. Method_opt "SIZE AUTO" is supposed to do just that but does not always get the second part right.
When Not To Use Histograms
  • all predicates on the column use bind variables 
  • the column data is uniformly distributed 
  • the column is not used in WHERE clauses of queries 
  • the column is unique and is used only with equality predicates
  • If your database exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling histogram generation (METHOD_OPT) for any future dbms_stats analysis (see the sketch below). This approach will use the number of distinct values to determine the selectivity of a column.
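As a rough sketch of that last point, re-gathering the statistics with SIZE 1 (which, as noted earlier, turns histogram collection off) removes existing histograms on the table's columns; the schema and table names are the sample ones used earlier in this post:

SQL> exec dbms_stats.gather_table_stats(ownname => 'OE', tabname => 'INVENTORIES', method_opt => 'FOR ALL COLUMNS SIZE 1');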
Reference :- http://rajeshwaranbtech.blogspot.in &  http://myorastuff.blogspot.in 

Thursday, April 4, 2013

Automatic Storage Management (ASM) lessons 4

Using ASM Storage
We have discussed management of an ASM instance. This section covers how to actually
use ASM from an Oracle instance. You can put all sorts of Oracle-related files into an
ASM instance, including these:
  • Oracle datafiles
  • Database tempfiles
  • Online redo logs
  • Archived redo logs
  • Control files
  • Spfiles
  • RMAN backup sets
  • The flash recovery area (FRA)
  • Data-pump dump sets

What Are ASM Files?

We have already created ASM disk groups. To actually use the ASM disk groups, we have
to populate them with ASM files. In this section, we will discuss what ASM files are and
then we will discuss the different kinds of ASM filenames that you might deal with.

ASM Files
ASM files are created in a number of different ways; for example, when you execute the
create tablespace command and you indicate that the resulting datafile(s) should be
stored in an ASM disk group, the result will be the creation of ASM files in that ASM
disk group. A goodly number of Oracle file types can be stored in ASM, including datafiles, control files, redo logs, and archived redo logs. There are some Oracle files that cannot be stored in an ASM disk group; these are mostly administrative files like trace files, the alert log, and so on.

ASM Filename Types
When a file is created on an ASM disk, the filename is generated by ASM. There are a number of different kinds of ASM filename types:
  • Fully qualified ASM filenames
  • Numeric ASM filenames
  • Alias ASM filenames
  • Alias filenames with templates
  • Incomplete filenames
  • Incomplete filenames with templates .
For example,
  • 1. Fully Qualified ASM Filename: +group/dbname/file_type/file_type_tag.file.incarnation
    Example:
    +dgroup2/sample/controlfile/Current.256.541956473
  • 2. Numeric ASM Filename: +group.file.incarnation
    Example:
    +dgroup2.257.541956473
  • 3. Alias ASM Filenames: +group/dir_1/…/dir_n/filename
    Example:
    +dgroup1/myfiles/control_file1
    +dgroup2/mydir/second.dbf
  • 4. Alias ASM Filename with Template: +group(template_name)/alias
    Example:
    +dgroup1(my_template)/config1
  • 5. Incomplete ASM Filename:+group
    Example:
    +dgroup1
  • 6. Incomplete ASM Filename with Template: +group(template_name)
    Example:
    +dgroup1(my_template)
Note:- Incomplete ASM file names are the type most commonly used by DBAs. Incomplete ASM file names are used only for file creation operations. They consist of a disk group name only. ASM will then use the appropriate default template to translate the incomplete ASM file name, as defined by its file type. For example, here is the SQL command I executed originally to create the TBSASM tablespace in the ASM1DG1 disk group:

SQL> CREATE TABLESPACE tbsasm DATAFILE '+ASM1DG1' SIZE 32M;

Defining ASM as the Default Destination for Database Files

If you decide you want to allow Oracle to create all file types as ASM file types, you can set the values of various parameters so that ASM will automatically be employed. One of the big benefits of this feature is the standardization of your database, ensuring that all files get placed where they belong and in the ASM structure to which they belong. You can define default ASM destinations by defining incomplete ASM filenames. The following database parameters accept incomplete ASM filenames:

  • DB_CREATE_FILE_DEST
  • DB_CREATE_ONLINE_LOG_DEST_n
  • DB_RECOVERY_FILE_DEST
  • CONTROL_FILES
  • LOG_ARCHIVE_DEST_n (LOG_ARCHIVE_FORMAT will be ignored)
  • LOG_ARCHIVE_DEST (LOG_ARCHIVE_FORMAT will be ignored)
  • STANDBY_ARCHIVE_DEST

Here is an example of using an incomplete name when setting the DB_CREATE_FILE_DEST
parameter so that it will use the ASM disk group +cooked_dgroup1:

SQL> alter system set db_create_file_dest='+cooked_dgroup1' scope=both;

Creating a Tablespace Using an ASM Disk Group as the Destination
There are different ways to create tablespaces using ASM disks. In this section, we will first look at creating an ASM tablespace, allowing the default ASM disk location to be used (as a result of having set the DB_CREATE_FILE_DEST parameter as we did earlier). We will then look at how to create a tablespace datafile by explicitly referencing the ASM disk group that it is supposed to be assigned to.

Creating Tablespaces Using Default ASM Assignments

Now that you have seen how to define a default ASM location, you can use the create
tablespace command to create a tablespace that will have a file in the ASM disk group by
default, as shown in this example:

SQL> create tablespace test_rgf datafile size 100k;
Let’s see where Oracle put the datafile now by querying the DBA_DATA_FILES view:

SQL> select tablespace_name, file_name
from dba_data_files where tablespace_name='TEST_RGF';
TABLESPACE FILE_NAME
---------- ---------------------------------------------------------
TEST_RGF +COOKED_DGROUP1/11gDB/datafile/test_rgf.256.613064385


You can have a mix of ASM datafiles and normal datafiles assigned to a tablespace, as shown in this create tablespace statement:
SQL> create tablespace part_asm_tbs datafile 'c:\oracle\oradata\11gDB\part_asm_tbs_01.dbf' size 10m, '+COOKED_DGROUP1' size 100k;

Let’s look and see where the datafiles were created:
SQL> select tablespace_name, file_name from dba_data_files where tablespace_name='PART_ASM_TBS';
TABLESPACE_NAME FILE_NAME
--------------- ------------------------------------------------------------
PART_ASM_TBS C:\ORACLE\ORADATA\11GDB\PART_ASM_TBS_01.DBF
PART_ASM_TBS +COOKED_DGROUP1/11GDB/datafile/part_asm_tbs.256.613066047


Note:- If you drop the PART_ASM_TBS tablespace, only the ASM files related to that tablespace would be removed from disk when you issue a plain drop tablespace command. In cases such as these, you need to make sure you include the including contents and datafiles clause with the drop tablespace command.
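For example, a sketch of dropping the mixed tablespace created above so that both the ASM file and the file system datafile are removed:

SQL> drop tablespace part_asm_tbs including contents and datafiles;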

Creating Tablespaces Referencing Specific ASM Disk Groups
There are going to be many times when you will not want to define a default ASM disk
group to write all tablespaces to. In this case, you will want to reference the specific ASM
disk group that you want a datafile created in when you issue the create tablespace command. Here is an example:
SQL> create tablespace another_test datafile '+COOKED_DGROUP1' size 100k;


Wednesday, April 3, 2013

Automatic Storage Management (ASM) lessons 3

Manually rebalancing disks within ASM is typically not required since ASM will perform
this operation automatically. However, in cases where you might want to have some more
granular control over the disk-rebalance process, you can use the alter diskgroup command
along with the rebalance parameter to manually rebalance ASM disks.

When we discuss rebalancing disks in ASM, we often discuss the power level that is
assigned to that rebalance operation. Setting power with regard to a rebalance operation
really defines the urgency of that operation with respect to other operations occurring on
the system (for example, other databases or applications). When a rebalance operation
occurs with a low power setting (for example, 1, the typical default), that operation is not given a high priority on the system and, as a result, the rebalance can take some time. When a higher power setting is used (for example, 11, the maximum), the rebalance operation is given a higher priority and completes more quickly.

Note: What is rebalancing?
ASM ensures that file extents are evenly distributed across all disks in a disk group. This is true for the initial file creation and for file resize operations, so we should always have a balanced space distribution across all disks in a disk group.

You can set the default power limit for the ASM instance by changing the asm_power_limit parameter. Here is an example of starting a manual rebalance of a disk group:

SQL> alter diskgroup cooked_dgroup1 rebalance power 5 wait;

In this example, you will notice that we used the wait parameter. This makes the rebalance operation synchronous for our session; thus, when the SQL prompt returns, we know that the rebalance operation has completed. The default is nowait, which makes the operation asynchronous. You can check the status of the rebalance operation using the V$ASM_OPERATION view during asynchronous rebalance operations.
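For example, you could monitor the progress of an asynchronous rebalance with a query such as:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes
     from v$asm_operation;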

Finally, you can also use the rebalance parameter along with the power parameter when
adding, dropping, or resizing disks within a disk group, as shown in this example:
SQL> alter diskgroup cooked_dgroup1 resize all rebalance power 5;

Note:- ASM automatically initiates a rebalance after storage configuration changes, such as when you add, drop, or resize disks. The power setting parameter determines the speed with which rebalancing operations occur.

Checking the Consistency of a Disk Group
On occasion you might wonder if there is some problem with an ASM disk group, and you
will want to check the consistency of the ASM disk group metadata. This need might arise
because of an error that occurs when the ASM instance is started or as the result of an Oracle database error that might be caused by some ASM corruption. To perform this check, simply use the alter diskgroup command with the check all parameter, as shown in this example:
SQL> alter diskgroup sp_dgroup2 check all;
When you execute the alter diskgroup check all command the results are written to
the alert log of the instance. ASM will attempt to correct any errors that are detected.

Creating ASM Disk Group Directories
When you create an ASM disk group, it includes a system-generated directory structure for
the ASM files that will be stored in that disk group. The system-generated directory structure takes on the following format, where disk_group is the root of the directory hierarchy:
+disk_group/database_name/object_type/ASM_file_name
for Eg, +DATA/oracle/datafile/system.259.724180831
The database name will be the name of the database that the data is associated with. The
object_type is the type of object being stored (for example, datafile) and the ASM_file_
name is the system-generated filename assigned to that ASM file.
ASM allows you to create your own directories within these predefined structures. This
allows you to give alias names to the ASM files that you will create. This can make working
with ASM files easier.
To create a directory structure within a diskgroup, you use the alter diskgroup command with the add directory parameter, as shown in this example:
SQL> ALTER DISKGROUP cooked_dgroup1 ADD DIRECTORY '+cooked_dgroup1/stuff';
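Once the directory exists, you can attach a user-friendly alias to an existing ASM file inside it; the fully qualified file name below is purely illustrative:

SQL> ALTER DISKGROUP cooked_dgroup1
     ADD ALIAS '+cooked_dgroup1/stuff/my_data_01.dbf'
     FOR '+cooked_dgroup1/11gDB/datafile/test_rgf.256.613064385';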

Using the ASMCMD Command-Line Utility

The ASMCMD tool is a command-line utility that allows you to manage ASM instances
and the disk structures and files within those instances. With ASMCMD, you can do the
following:
  • List the contents of ASM disk groups
  • Perform searches (like directory listings)
  • Add or remove directories
  • Display space availability and utilization 

Starting ASMCMD
To start ASMCMD, simply set your ORACLE_SID to +ASM and then type asmcmd from the
command line, as shown here:
C:\>set ORACLE_SID=+ASM
C:\>asmcmd
Or from Unix:
/opt/oracle>export ORACLE_SID=+ASM
/opt/oracle>asmcmd

Note:- You will need to make sure that perl.exe is in the path before you run ASMCMD. If you have installed more than one ORACLE_HOME, it may take some setup to get the environment set correctly; make sure ORACLE_HOME, PATH, PERL5LIB and PERLBIN point to the correct Oracle home.

ASMCMD Commands
ASMCMD has a basic set of commands, many of which mimic Unix commands. You can
see these commands from the ASMCMD prompt if you type in help. The commands are
pretty straightforward and easy to use. The table below lists the commonly used ASMCMD commands and their purposes (a short sample session follows the list):

cd Changes the current directory to the specified directory.

du Displays the total disk space occupied by ASM files in the
specified ASM directory and all its subdirectories, recursively.

exit Exits ASMCMD.

find Lists the paths of all occurrences of the specified name (with
wildcards) under the specified directory.

help Displays the syntax and description of ASMCMD commands.

ls Lists the contents of an ASM directory, the attributes of the
specified file, or the names and attributes of all disk groups.

lsct Lists information about current ASM clients.

lsdg Lists all disk groups and their attributes.

mkalias Creates an alias for a system-generated filename.

mkdir Creates ASM directory.

pwd Displays the path of the current ASM directory.

rm Deletes the specified ASM files or directories.

rmalias Deletes the specified alias, retaining the file that the alias points to.
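A short sample ASMCMD session (the disk group and path are only illustrative):

ASMCMD> lsdg
ASMCMD> cd +COOKED_DGROUP1/11gDB/DATAFILE
ASMCMD> ls -l
ASMCMD> du
ASMCMD> exit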

Overview of ASM Data Dictionary Views


Several data dictionary views exist to help manage ASM. These data dictionary views are available both when connected to the ASM instance and when connected to an Oracle 10g database. Each view is slightly different in its presentation depending on whether the instance you are looking at is an ASM instance or a database instance (and some views are only used in the ASM instance). The table below shows these views (an example query follows the table).


View Name / Description
V$ASM_DISKGROUP
In an ASM Instance: This view will describe a given disk group.
In a Database: This view contains a single row for each ASM disk group that is mounted by the local ASM instance. Note that discovery will occur each time you query this view; this can have performance impacts.
V$ASM_DISK
In an ASM Instance: This view describes each disk that was discovered by the ASM instance. All disks are reported, even those not assigned to disk groups.
In a Database: This view describes each disk that is assigned to that database. Note that discovery will occur each time you query this view; this can have performance impacts.
V$ASM_DISKGROUP_STAT
This view is equivalent to the V$ASM_DISKGROUP view. However, this view will not cause discovery to occur. This can help with performance, but the data in the view may not be accurate.
V$ASM_FILE
In an ASM Instance: Displays each ASM file contained in the ASM instance.
In a Database: Not used in a database instance.
V$ASM_DISK_STAT
This view is equivalent to the V$ASM_DISK view. However, this view will not cause discovery to occur. This can help with performance, but the data in the view may not be accurate.
V$ASM_TEMPLATE
In an ASM Instance: Displays each ASM template contained in the ASM instance by disk group.
In a Database: Not used in a database instance.
V$ASM_ALIAS
In an ASM Instance: Displays each alias contained in the ASM instance by disk group.
In a Database: Not used in a database instance.
V$ASM_OPERATION
In an ASM Instance: Displays each long running operation occurring on the ASM instance.
In a Database: Not used in a database instance
V$ASM_CLIENT
In an ASM Instance: Displays each database that is using at least one disk group managed by the ASM instance.
In a Database: Not used in a database instance.
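For example, a quick look at the mounted disk groups and their space usage:

SQL> select group_number, name, state, type, total_mb, free_mb
     from v$asm_diskgroup;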