
Wednesday, April 17, 2013

What is a histogram in Oracle and why is it important

By default, DBMS_STATS calculates the number of rows in a table, the number of distinct values in each column, and so on. Without additional information, the optimizer assumes that column values are evenly distributed; if they are not, inefficient execution plans can result. A histogram captures the data distribution of values within a column of a table, i.e. the number of occurrences of a specific value or range of values.

Histograms are collected using DBMS_STATS. They are a CBO feature that helps the optimizer determine how data is skewed (distributed) within a column. A histogram is worth creating on a highly skewed column that appears in WHERE clauses. It helps the optimizer decide whether to use an index or a full-table scan, and helps it determine the fastest table join order.

When a query executes, the optimizer records predicate usage in sys.col_usage$ (the number of equijoins, non-equijoins, BETWEEN operators, LIKE operators, and so on). DBMS_STATS later uses the values recorded in sys.col_usage$ to decide whether a column needs a histogram or not.
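For example, you can inspect what the optimizer has recorded for a table's columns. This is a sketch only: sys.col_usage$ is an internal dictionary table, so it must be queried with SYS privileges and its columns are version-dependent.

SQL> SELECT c.name column_name,
  2         u.equality_preds, u.equijoin_preds,
  3         u.range_preds, u.like_preds
  4  FROM   sys.col_usage$ u, sys.obj$ o, sys.col$ c
  5  WHERE  o.obj#    = u.obj#
  6  AND    c.obj#    = u.obj#
  7  AND    c.intcol# = u.intcol#
  8  AND    o.name    = 'SALES';

A column with many recorded equality predicates and skewed data is a strong histogram candidate.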

Types of Histograms
Histograms are classified into frequency histograms and height-balanced histograms.
The type of histogram is stored in the HISTOGRAM column of the *_TAB_COL_STATISTICS views (USER/DBA), and its value will be 'HEIGHT BALANCED', 'FREQUENCY', or 'NONE'.
For example,

SQL> SELECT column_name, num_distinct, histogram
  2  FROM   user_tab_col_statistics
  3  WHERE  table_name = 'SALES';
Histogram and Buckets
A bucket is a range of values for a particular column. For example, if we had a column named true_or_false that could only hold two values, true or false, we could have up to 2 buckets, each recording the number of rows that were true or false. When histograms are created, the number of buckets can be specified; otherwise Oracle automatically chooses an appropriate number. It is this number that controls the type of histogram created.
When Distinct Values <= Buckets 
– Then Frequency Histogram is created
– Else Height-Balanced Histogram is created
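Once gathered, the buckets themselves can be inspected in the *_TAB_HISTOGRAMS views. A sketch (AMOUNT_SOLD is a hypothetical column of the SALES table used above):

SQL> SELECT endpoint_number, endpoint_value
  2  FROM   user_tab_histograms
  3  WHERE  table_name  = 'SALES'
  4  AND    column_name = 'AMOUNT_SOLD'
  5  ORDER  BY endpoint_number;

The number of rows returned tells you how many buckets the histogram has, which together with NUM_DISTINCT determines its type per the rule above.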

One limitation of Oracle's frequency histograms is the hard limit of 254 buckets.
If you have a small number of distinct values, say fewer than 100, queries are frequently executed against these columns with specific values, and the column is skewed, then frequency histograms are your best option. If the number of distinct values exceeds 254, frequency histograms are out of the question; you will have to use height-balanced histograms.

Frequency Histograms
Each distinct value of the column corresponds to a single bucket of the histogram, and each bucket contains the number of occurrences of that value. Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified.




Height-Balanced Histograms
In a height-balanced histogram, the column values are divided into bands so that each band contains approximately the same number of rows. For instance, if we have 10 distinct values in the column and only five buckets, Oracle creates a height-balanced histogram and spreads the values evenly through the buckets. A height-balanced histogram is used when there are more distinct values than buckets, and its statistics show a range of rows across the buckets. The useful information the histogram provides is where in the range of values the endpoints fall.





** EPN - endpoint number & EPV - endpoint value
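In a frequency histogram, the endpoint number (EPN) is a cumulative row count, so the number of rows for a given endpoint value (EPV) is its EPN minus the previous EPN. A sketch using the analytic LAG function (CUST_ID is a hypothetical skewed column of the SALES table):

SQL> SELECT endpoint_value,
  2         endpoint_number
  3         - LAG(endpoint_number, 1, 0)
  4           OVER (ORDER BY endpoint_number) AS row_count
  5  FROM   user_tab_histograms
  6  WHERE  table_name  = 'SALES'
  7  AND    column_name = 'CUST_ID'
  8  ORDER  BY endpoint_number;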


How to Create a Histogram
You have the following options to create histograms using Oracle:
ANALYZE TABLE 
As part of the ANALYZE TABLE command for compiling statistics, you can specify
FOR ALL [INDEXED] COLUMNS [SIZE <buckets>]
to create histograms (<buckets> > 1). If you do not specify SIZE, 75 buckets are used. For example, the following command creates histograms with 75 buckets for all indexed columns of the T100 table:
SQL> ANALYZE TABLE T100 COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;

DBMS_STATS
You can also specify the METHOD_OPT argument of the DBMS_STATS GATHER_*_STATS procedures; the contents of this argument are syntactically identical to the FOR expression of ANALYZE TABLE.
Therefore, the following is comparable to the ANALYZE TABLE example above:
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SAPR3', 'T100', METHOD_OPT => 'FOR ALL INDEXED COLUMNS');

Method_opt parameter: this is the parameter that controls histogram creation while collecting statistics. The default is FOR ALL COLUMNS SIZE AUTO in Oracle 10g. In Oracle 9i, the default is FOR ALL COLUMNS SIZE 1, which turns off histogram collection.

METHOD_OPT dictates how histograms are gathered:
  • for all columns - gather histograms on all columns; hugely expensive and rarely should be used
  • for all indexed columns - gather histograms on all columns that are currently indexed. This is typically the most useful, although you can arguably skip nearly unique columns such as primary keys
  • for all hidden columns - gather histograms on columns you cannot see; useful if you are using the object-relational features, which incorporate many hidden columns in your objects
  • for columns <list> - gather histograms on these specific columns; useful to skip primary key indexes or to get just the 2 or 3 columns of interest
The following options are available for the SIZE clause:
  • integer : number of histogram buckets, in the range [1,254] (if you do not specify the integer, Oracle uses its default of 75)
  • REPEAT : do the histograms just like they were done last time; Oracle reads the data dictionary to figure out what to do
  • AUTO : Oracle determines the columns on which to collect histograms based on the data distribution and the workload of the columns. The sys.col_usage$ table stores information about column usage, and dbms_stats uses this information to determine whether a histogram is required for a column
  • SKEWONLY : Oracle determines the columns on which to collect histograms based only on the data distribution of the columns
For example,

SQL> execute dbms_stats.gather_table_stats (ownname => 'oe', tabname => 'inventories', -
>      METHOD_OPT => 'FOR COLUMNS SIZE 10 quantity_on_hand');

Demonstration of how the optimizer works with and without a histogram
SQL> create table t nologging as
  2  select case when rownum = 1 then 5 else mod(rownum,4) end as id, a.* from all_objects a;

Table created.

SQL> select count(*),id from t group by id;

  COUNT(*)         ID
---------- ----------
     12508          1
     12508          2
         1          5
     12508          3
     12508          0

SQL> create index t_ind on t(id) nologging;

Index created.

SQL> begin
  2  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HR', TABNAME =>     'T',ESTIMATE_PERCENT =>
  3  10, METHOD_OPT => 'FOR ALL COLUMNS SIZE 1',CASCADE => TRUE);
  4  end;
  5  /
PL/SQL procedure successfully completed.

Note:- Histogram collection is disabled by "FOR ALL COLUMNS SIZE 1"

SQL> set autotrace traceonly explain;
SQL> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 12455 |  1167K|   162   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T    | 12455 |  1167K|   162   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=1)

SQL> select * from t where id = 5;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  4152 |   389K|   162   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T    |  4152 |   389K|   162   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=5)

SQL>
Conclusion: The optimizer uses a full table scan for the query that returns 12508 records, and also for the query that returns just one record. The latter is obviously suboptimal. This problem is resolved by collecting a histogram. Let us see in the next scenario.
Scenario 2: Let us gather the statistics with a histogram and see what kind of execution path the optimizer uses. FOR COLUMNS SIZE 5 ID will create 5 buckets for column ID. If we are not sure of the number of distinct values in the column, we can use the AUTO option to collect the histogram. With this histogram, the optimizer knows that column ID is highly skewed: of the 5 buckets, 4 hold around 12508 records each with values 0 to 3, and the remaining bucket holds the single record with ID equal to 5. Now, depending on the query, the optimizer decides whether to use the index or a full table scan.


SQL> set autotrace off;
SQL> begin
  2  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HR', TABNAME => 'T',ESTIMATE_PERCENT =>
  3  10, METHOD_OPT => 'FOR COLUMNS SIZE 5 ID',CASCADE => TRUE);
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain;
SQL> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 12150 |  1139K|   162   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T    | 12150 |  1139K|   162   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ID"=1)

SQL> select * from t where id = 5;

Execution Plan
----------------------------------------------------------
Plan hash value: 1376202287

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |  2025 |   189K|   119   (0)| 00:00:02 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |  2025 |   189K|   119   (0)| 00:00:02 |
|*  2 |   INDEX RANGE SCAN          | T_IND |  2025 |       |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ID"=5)

SQL>

Conclusion: The optimizer uses a full table scan for the query that returns 12508 records, while it uses an index scan for the query that returns one record. In this scenario, the optimizer chooses the right execution plan based on the query's WHERE clause.

Rules On When To Create Histograms

  • First, there are no rules of thumb that are always correct.
  • When creating histograms using dbms_stats, the use of the DEGREE argument (for parallelism) can skew the value of density.
  • Running dbms_stats can drop histograms, so always check whether a histogram exists on the table before DBMS_STATS is run. Use METHOD_OPT with FOR ALL COLUMNS SIZE REPEAT to prevent deletion of histogram data.
  • Histograms are stored in the dictionary.
  • There is a maintenance and space cost for using histograms.
  • Only create histograms on columns that are used in WHERE clauses of queries and have a highly skewed data distribution. Method_opt "size auto" is supposed to do just that but does not always get the second part right.
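A minimal sketch of re-gathering statistics while preserving existing histograms, using the HR.T demo table from the earlier scenario:

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HR', TABNAME => 'T', METHOD_OPT => 'FOR ALL COLUMNS SIZE REPEAT');

With SIZE REPEAT, columns that already have a histogram get it refreshed with the same number of buckets, and columns without one stay histogram-free.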
When Not To Use Histograms
  • All predicates on the column use bind variables.
  • The column data is uniformly distributed.
  • The column is not used in WHERE clauses of queries.
  • The column is unique and is used only with equality predicates.
  • If your database exclusively uses bind variables, Oracle recommends deleting any existing histograms and disabling histogram generation (via method_opt) for any future dbms_stats analysis. This approach uses the number of distinct values to determine the selectivity of a column.
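A sketch of removing an existing histogram from a single column (the HR.T table and ID column are the demo objects from the earlier scenario). DELETE_COLUMN_STATS drops the column statistics including the histogram; the follow-up gather with SIZE 1 restores basic column statistics without a histogram:

SQL> EXEC DBMS_STATS.DELETE_COLUMN_STATS(OWNNAME => 'HR', TABNAME => 'T', COLNAME => 'ID');
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HR', TABNAME => 'T', METHOD_OPT => 'FOR COLUMNS SIZE 1 ID');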
Reference :- http://rajeshwaranbtech.blogspot.in &  http://myorastuff.blogspot.in 

Thursday, February 7, 2013

How Oracle locks the rows on behalf of the transactions by using ITL

How Oracle locks rows on behalf of transactions? How does the data block grant the locks? Where will it store the locking information for the rows the transaction is asking for? Here comes the structure – Interested Transaction List (ITL). This is a small structure in oracle block, which holds the information about the list of transactions, which are interested in some of the rows of that block. That’s why it’s called Interested Transaction List. ITL is present in the variable portion of the Oracle block. 

The variable header info grows from the top down (just below the fixed header) if necessary, and rows are inserted from the bottom up (just above the tail), subject to PCTFREE; also, the size of the variable part of the data block header cannot exceed 50% of the block size. It is this portion of the data block that holds the locking information.

This variable portion of the data block contains slots for transactions to put the locking information. When a row in the block is locked for the first time, the transaction places a lock in one of the slots with the rowid of the row that is locked. In other words, the transaction makes it known that it is interested in the row (hence the name “Interested Transaction List”). When the same transaction or another one locks another row, the information is stored in another slot, and so on.

The next logical question that comes up is, how many slots are typically available? During the table creation, the INITRANS parameter defines how many slots are initially created in the ITL. When the transactions exhaust all the available slots and a new transaction comes in to lock a row, the ITL grows to create another slot. The ITL can grow up to the number defined by the MAXTRANS parameter of the table, provided there is space in the block. Nevertheless, if there is no more room in the block, even if the MAXTRANS is high enough, the ITL cannot grow.

What if there are not enough slots available in the ITL? Obviously, the new transaction asking for locks on the rows has to wait until an existing transaction finishes and frees up a slot. This is the ITL wait event. Once an existing transaction finishes and frees its slot, the new transaction can proceed.
When the ITL cannot grow due to lack of space in the block, the session waits on the event "enq: TX - allocate ITL entry".
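You can check which segments have been suffering ITL waits via V$SEGMENT_STATISTICS (a sketch; assumes the default STATISTICS_LEVEL so that segment-level statistics are collected):

SQL> SELECT owner, object_name, value
  2  FROM   v$segment_statistics
  3  WHERE  statistic_name = 'ITL waits'
  4  AND    value > 0
  5  ORDER  BY value DESC;

Segments near the top of this list are the candidates for the INITRANS tuning described below.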


ITL Wait in detail
Let’s see this description of the wait in action. Assume our table has INITRANS of one and MAXTRANS 11. A typical data block right after the creation of the table will look like figure 1 below.
Since the INITRANS is one, there is only one slot for the ITL. The rest of the block is empty. Now we inserted three rows into the table. These will go into this block, and the block will look like figure 2.
Note how the empty space is reduced. At this point, a transaction called Txn1 updates Row1, but does not commit. This locks Row1, and the transaction places the lock in the slot number one in the ITL as shown in figure 3.
Then another transaction, Txn2, updates the row Row2 and wants to lock the row. However, there are no more slots in the ITL available to service the transaction. The MAXTRANS entry is 11, meaning the ITL can grow up to 11 slots and the block has empty space. Therefore, ITL can grow by another slot and Slot number two is created and allocated to Txn2 (refer to figure 4).
Now the empty space in the block is severely limited and will not fit another ITL slot. If at this time another transaction comes in to update row three, it must get a free slot in the ITL. MAXTRANS is 11 and only two slots have been created so far, so another is possible; but since there is no room in the block to grow, the slot cannot be created. Therefore, Txn3 has to wait until one of the other transactions commits or rolls back and the slot it holds becomes free. At this time the session experiences an ITL wait event, as seen in the view V$SESSION_WAIT.
How to Reduce ITL Waits
The primary cause of ITL waits is the unavailability of free slots in the ITL. These can be due to ,

A low setting of MAXTRANS, which places a hard limit on the number of transactions that can hold locks on a block; or a block so packed that there is no room for the ITL to grow; or both. Therefore, setting a high value of INITRANS will make sure that there are enough free slots in the ITL and there will be minimal or no dynamic extension of the ITL. However, this also means there is less space in the block for actual data, increasing wasted space.

Note:- The MAXTRANS parameter is deprecated starting from 10g; whatever value we set is completely ignored by Oracle.
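A sketch of raising INITRANS on an existing table (emp and emp_idx are hypothetical names). Note that ALTER TABLE ... INITRANS only affects newly formatted blocks, so a MOVE is needed to apply it to existing blocks, after which the indexes must be rebuilt:

SQL> ALTER TABLE emp INITRANS 10;
SQL> ALTER TABLE emp MOVE;          -- rewrites existing blocks with the new INITRANS
SQL> ALTER INDEX emp_idx REBUILD;   -- indexes are left unusable by the MOVE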

Source:- http://avdeo.com  , http://www.proligence.com/itl_waits_demystified.html


Saturday, January 19, 2013


Automatic Shared Memory Management in Oracle 10g

An Oracle instance's memory consists of two major areas: the SGA (system/shared global area) and the PGA (program/private global area). The SGA is shared by all sessions and consists of several pools for different purposes. A few of them are as follows:

1. DEFAULT buffer cache :- stores Oracle data blocks as they are read or updated.
2. DEFAULT nK buffer cache :- stores Oracle data blocks having a different size than the default block size (db_block_size). (non-ASMM)
3. KEEP buffer cache :- stores Oracle data blocks from objects that are not supposed to age out of memory. (non-ASMM)
4. RECYCLE buffer cache :- stores Oracle data blocks from objects that are not supposed to be kept in memory. (non-ASMM)
5. log buffer :- stores redo entries used to reconstruct operations in case of an instance crash. (non-ASMM)
6. shared pool :- used to parse and store session queries, execution plans for queries, etc.
7. large pool :- used for backup/recovery operations, batch job processing, etc.
8. java pool :- all sessions' Java-related activities are done here.
9. streams pool :- used for Oracle Streams.

Sizing these pools manually in the SGA is a great pain, and it is almost impossible to distribute all available memory efficiently across the different pools. Let's take a scenario where the database serves an OLTP application in the daytime and some huge batch jobs are scheduled to run every night. We have 1G of memory available for the SGA, of which we have given 400m to the DB Buffer Cache, 300m to the Shared Pool, 100m to the Large Pool, and the rest, i.e. 200m, to the other pools in the SGA.

In the daytime the DB Buffer Cache is used extensively for OLTP transactions, while only a very little of the Large Pool, say 5 to 10 megabytes, is used. So even when the DB Buffer Cache is in contention and 400m is not sufficient for it, we are wasting a lot of memory in the Large Pool where nothing is happening.

During the nights, when there is no OLTP activity and we need more memory for the Large Pool, a lot of memory is wasted in the DB Buffer Cache.
With this problem in hand, let's go through the ASMM (Automatic Shared Memory Management) feature introduced in Oracle 10g and see how it can help.

When switched on, ASMM controls the sizes of certain components in the SGA, making sure each gets the memory it needs: it shrinks components that are not using all the memory allocated to them and grows the ones that need more than their allocation. ASMM adapts to workload changes and maximizes the utilization of memory. This happens with the help of the MMAN (Memory Manager) background process, which continuously captures the workload during the instance run and uses the memory advisors to decide what the size of each component should be.

Components like the db_nk_caches, the keep/recycle buffer caches and the log buffer remain manually tuned. ASMM auto-tunes the following pools:

1. DEFAULT buffer cache
2. Shared Pool
3. Large Pool
4. Java Pool
5. Streams Pool (10g R2+)

When ASMM is disabled the following initialization parameters are used to set the sizes for auto tuned pools in SGA. In oracle 10g these initialization parameters are called "auto tuned parameters".

1. db_cache_size
2. shared_pool_size
3. large_pool_size
4. java_pool_size
5. streams_pool_size
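Under ASMM, a non-zero value for any of these parameters acts as a minimum size for that component rather than a fixed size. A sketch (the 200m value is illustrative):

SQL> ALTER SYSTEM SET shared_pool_size = 200M;  -- with SGA_TARGET set, this is a floor, not a fixed size

This is useful when you know a component must never shrink below a certain size, for example a shared pool that must accommodate a large library cache.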

To switch to ASMM you need to set the initialization parameter SGA_TARGET to a non-zero value, which must be less than or equal to the value of the SGA_MAX_SIZE parameter.

$ sqlplus / as sysdba
SQL> alter system set sga_max_size=1G scope=spfile;
System altered.
SQL> alter system set sga_target=500m scope=both;
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 Total System Global Area 1073741824 bytes
Fixed Size                  1223540 bytes
Variable Size             738198668 bytes
Database Buffers          327155712 bytes
Redo Buffers                7163904 bytes
Database mounted.
Database opened.
SQL> show parameter sga_target
 NAME                        TYPE        VALUE
--------------------------- ----------- ---------------------
sga_target                  big integer 500M
SQL> show parameter sga_max_size
 NAME                        TYPE        VALUE
--------------------------- ----------- ---------------------
sga_max_size                big integer 1G
SQL>

According to the configuration we just made, SGA_TARGET is 500m, which means the total size of the SGA will be 500m: after allocating the defined sizes to the non-auto-tuned pools and memory areas, Oracle will dynamically manage all the auto-tuned pools in the remaining space. The total size of the SGA will never exceed 500m. I have met people with the perception that sga_target is just a starting point and that, if the current sizes are not enough for the components in the SGA, it may grow up to sga_max_size; that is incorrect. The SGA stays within the boundaries of sga_target, period.
The reason sga_max_size is usually larger than sga_target lies in the relationship between the two and the static nature of the sga_max_size parameter. Say you set both sga_target and sga_max_size to 500m initially, but a couple of months later you find that 500m is not enough memory for your components to be managed efficiently. If you now want to increase sga_target to 1G, you will also have to increase sga_max_size to 1G, because sga_target cannot be larger than sga_max_size. On the other hand, if you initially set sga_target to 500m and sga_max_size to 1G, you have a window of 500m to increase sga_target without shutting down your database. sga_max_size is nothing more than an upper limit on how big sga_target can be; it does not affect memory allocation for the SGA in the Oracle instance. Whenever an Oracle instance is started it allocates SGA memory equal to the value of sga_target, so it does not really matter how big you set sga_max_size.
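A sketch of growing the SGA online within that window (the 800m value is illustrative; the statement succeeds only while the new value stays at or below sga_max_size):

SQL> ALTER SYSTEM SET sga_target = 800M SCOPE=BOTH;  -- dynamic, no restart needed while <= sga_max_size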

Now let's come back to our example where we set sga_target to 500m. It does not mean all of this 500m will be used for the auto-tuned pools; the memory used for them is (sga_target - sum of the non-auto-tuned areas' sizes). ASMM is not supposed to touch the size of manually tuned memory areas. If the total size of all non-auto-tuned areas (log buffer, keep/recycle buffer caches, etc.) is 100m, then the remaining 400m is used for adjusting the sizes of the auto-tuned pools according to the workload.

After we enable automatic shared memory management, Oracle starts managing the pools for us and sets reasonable sizes for all of them according to their nature and the type of work they do. Let's have a look at the current allocation of the auto-tuned pools:

$ sqlplus / as sysdba
 SQL> show parameter db_cache_size
 NAME                        TYPE        VALUE
--------------------------- ----------- ---------------------
db_cache_size               big integer 0
SQL> show parameter pool_size
 NAME                        TYPE        VALUE
--------------------------- ----------- ---------------------
global_context_pool_size    string
java_pool_size              big integer 0
large_pool_size             big integer 0
olap_page_pool_size         big integer 0
shared_pool_size            big integer 0
streams_pool_size           big integer 0
SQL>     
           
SQL> select component , round(current_size/1024/1024,2) size_mb
  2  from v$sga_dynamic_components
  3  where component like '%pool' 
  4  OR component ='DEFAULT buffer cache';
 COMPONENT                          SIZE_MB
------------------------------- ----------
shared pool                            172
large pool                               4
java pool                                4
streams pool                             0
DEFAULT buffer cache                   292

Notice all the auto-tuned parameters are set to 0 (we will discuss this later). v$sga_dynamic_components shows the current sizes of these components in the SGA. Summing them up (172 + 4 + 4 + 292 = 472) gives the total size in which auto-tuning is supposed to happen. The remaining 28m is for the other areas in the SGA, such as the log buffer, the keep/recycle caches and any db_nk_cache_sizes, if configured. Now let's open another console and connect as a normal Oracle user to put ASMM to the test.

 /*  This is another console where we login with user scott. The sysdba session is still intact in the other console. */

$ sqlplus scott/tiger@mydb
 SQL> create or replace package myPack
  2     is
  3      TYPE myType is table of char(2000) index by binary_integer;
  4      myTable myType;
  5     end;
  6       /
 Package created.

 SQL> begin
  2    for i in 1..100000 loop
  3       myPack.myTable(i) := i; 
  4    end loop;
  5  end;
  6  /
 PL/SQL procedure successfully completed.
 SQL> exit 

I established this session in shared server mode, so any variables I declare are stored in the Large Pool, where my UGA is maintained. I created a packaged PL/SQL table of type char(2000) and inserted 100000 records into it. Being char(2000), each element's size is 2000 bytes no matter what I assign to it; hence, after population, the PL/SQL table is around 200m (2000 * 100000 bytes) in size. Since it is a package variable, it stays in my Large Pool until I exit the session. Once I exit, the variable is cleaned up, releasing space from the Large Pool.

/* Now we are back to the sysdba session */

SQL> select component , round(current_size/1024/1024,2) size_mb
  2  from v$sga_dynamic_components
  3  where component like '%pool' 
  4  OR component ='DEFAULT buffer cache';
 COMPONENT                             SIZE_MB
---------------------------------- ----------
shared pool                               132
large pool                                232
java pool                                   4
streams pool                                0
DEFAULT buffer cache                      104
Notice the current sizes of these pools. It is clear that when the Large Pool needed space, Oracle squeezed both the buffer cache and the shared pool and gave the required space to the Large Pool. Also notice (132 + 232 + 4 + 104 = 472). These sizes stay like this until some pool needs more memory than it has, at which point they adjust again.
Let's go back to our normal session and create a heavy workload on the DEFAULT buffer cache.

/* Open another console and connect with a normal oracle user. The sysdba session is still open in the other console. */

$ sqlplus scott/tiger@mydb
SQL> create table big_table as select * from all_objects;
Table created.
SQL> insert into big_table select * from big_table;
40697 rows created.
SQL> /
81394 rows created.
SQL> /
162788 rows created.
SQL> /
325576 rows created.
SQL> /
651152 rows created.
SQL> /
1302304 rows created.
SQL> commit;
Commit complete.
SQL> analyze table big_table compute statistics;
Table analyzed.
SQL> select table_name , round(blocks*8192/1024/1024,2) size_mb
  2  from user_tables
  3  where table_name = 'BIG_TABLE';
TABLE_NAME                        SIZE_MB
------------------------------ ----------
BIG_TABLE                          283.21
SQL> 

We have processed about 1.3 million rows in the table big_table, which is 283m in size, i.e. quite a bit larger than the whole buffer cache at its current size of 104m. When these queries are processed there is extensive aging out and loading of big_table blocks in the buffer cache, causing high physical reads and a low cache hit ratio. MMAN (the memory manager) captures this and signals an adjustment in the size of the buffer cache, as there is plenty of free space in the Large Pool. Let's go back to the sysdba session and see the current sizes of these components.

/* Now we are back to the sysdba session */

SQL> select component , round(current_size/1024/1024,2) size_mb
  2  from v$sga_dynamic_components
  3  where component like '%pool' 
  4  OR component ='DEFAULT buffer cache';
COMPONENT                              SIZE_MB
----------------------------------- ----------
shared pool                                108
large pool                                   4
java pool                                    4
streams pool                                 0
DEFAULT buffer cache                       356

See, the buffer cache is now larger than all the others. The free space has been taken away from the Large Pool, and even some from the Shared Pool, to accommodate the heavy workload on the buffer cache. As the current component sizes stand (108 + 4 + 4 + 356 = 472), Oracle is managing them within the boundary of 472m.

The dynamically adjusted sizes are retained across instance shutdowns if you are using a server parameter file (spfile).

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size                  1223540 bytes
Variable Size             671089804 bytes
Database Buffers          394264576 bytes
Redo Buffers                7163904 bytes
Database mounted.
Database opened.

SQL> select component , round(current_size/1024/1024,2) size_mb
  2  from v$sga_dynamic_components
  3  where component like '%pool' 
  4  OR component ='DEFAULT buffer cache';
COMPONENT                            SIZE_MB
--------------------------------- ----------
shared pool                              108
large pool                                 4
java pool                                  4
streams pool                               0
DEFAULT buffer cache                     356
Note: even after a shutdown and restart, the components retain the same sizes.

 SQL> SELECT
  2     a.ksppinm "Parameter",
  3     b.ksppstvl "Value"
  4  FROM
  5     x$ksppi a,
  6     x$ksppsv b
  7  WHERE
  8     a.indx = b.indx AND
  9     a.ksppinm LIKE '/_/_%' escape '/' AND
 10    (a.ksppinm LIKE '%db_cache_size%' 
 11     OR a.ksppinm LIKE '%pool_size%');
Parameter                      Value
------------------------------ ----------
__shared_pool_size             113246208
__large_pool_size              4194304
__java_pool_size               4194304
__streams_pool_size            0
__db_cache_size                373293056

How does this happen? Whenever a component's size changes, the new size is recorded in an undocumented (double-underscore) sibling of the corresponding auto-tuned parameter. On instance startup these undocumented parameters are read and the components are sized accordingly. The query above shows how to read undocumented Oracle parameters and their values from the tables x$ksppi (parameter names) and x$ksppsv (parameter values for the instance), owned by SYS.
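Note that the hidden parameters store sizes in bytes, while v$sga_dynamic_components was queried in megabytes. Converting the byte values above confirms they are the same sizes we saw earlier:

```python
MB = 1024 * 1024  # one megabyte in bytes

# Byte values copied from the x$ksppi / x$ksppsv query output above.
hidden_params = {
    "__shared_pool_size": 113246208,
    "__large_pool_size": 4194304,
    "__java_pool_size": 4194304,
    "__streams_pool_size": 0,
    "__db_cache_size": 373293056,
}

# Each value converts to the MB figure reported by v$sga_dynamic_components.
for name, size_bytes in hidden_params.items():
    print(f"{name:22} {size_bytes // MB:>4} MB")
```

For example, 113246208 bytes is exactly 108 MB (the shared pool) and 373293056 bytes is exactly 356 MB (the buffer cache).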

Here are some other useful columns in the view v$sga_dynamic_components, where you can see the resize operations that have occurred for these components, along with their minimum and maximum sizes.

SQL> set lines 10000
SQL> column component format a20
SQL> select component , round(current_size/1024/1024,2) size_mb, 
  2  LAST_OPER_TYPE, OPER_COUNT, MIN_SIZE, MAX_SIZE
  3  from v$sga_dynamic_components
  4  where component like '%pool' 
  5  OR component ='DEFAULT buffer cache';
COMPONENT            SIZE_MB LAST_OPER_TYP OPER_COUNT 
-------------------- ------- ------------- ---------- 
shared pool              112 GROW                   1  
large pool                 4 SHRINK               114
java pool                  4 STATIC                 0
streams pool               0 STATIC                 0
DEFAULT buffer cache     352 GROW                 115
SQL> 

A few more things to know about ASMM:

When ASMM is switched on, all auto-tuned component sizes are managed by Oracle itself, and any size you set manually for one of these parameters is no longer treated as the maximum size that component may have; instead, it is treated as that component's minimum size.

$ sqlplus / as sysdba
 SQL> show parameter db_cache_size
 NAME                       TYPE        VALUE
-------------------------- ----------- --------------------
db_cache_size              big integer 0
SQL> show parameter pool_size
NAME                       TYPE        VALUE
-------------------------- ----------- --------------------
global_context_pool_size   string
java_pool_size             big integer 0
large_pool_size            big integer 0
olap_page_pool_size        big integer 0
shared_pool_size           big integer 0
streams_pool_size          big integer 0
SQL>
Here all the auto-tuned parameter values are set to 0, which means 0 is the minimum size of each component, and a component can be shrunk all the way down to let the other components use its memory. If we change one of them as follows:

SQL> alter system set db_cache_size=100m scope=both;
System altered.
SQL> show parameter db_cache_size
NAME                       TYPE        VALUE
-------------------------- ----------- --------------------
db_cache_size              big integer 100M
SQL> 
This means 100m is now the minimum size of the buffer cache; no matter what happens to the other components, these 100m will never be taken away from the default buffer cache. Setting the auto-tuned parameters manually is useful when you don't want one of the components to be squeezed too hard by the automatic resizing.
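The floor-not-ceiling rule can be sketched in a few lines of Python. This is a toy model of the behaviour described above, not Oracle's actual resize algorithm; sizes are in MB:

```python
# Toy model of ASMM shrinking a component: a manually set auto-tuned
# parameter acts as a floor, never as a ceiling.
def shrink(current_mb, requested_shrink_mb, manual_min_mb=0):
    """Return the component's size after an ASMM shrink request."""
    return max(current_mb - requested_shrink_mb, manual_min_mb)

# db_cache_size left at 0: the buffer cache may give up all of its memory.
print(shrink(356, 400, manual_min_mb=0))    # 0
# db_cache_size set to 100m: the cache is never squeezed below 100 MB.
print(shrink(356, 400, manual_min_mb=100))  # 100
```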

If you manually set an auto-tuned parameter to a value larger than the component's current size, and the increase can be supported by shrinking the other components, the change takes effect immediately.

SQL> select component , round(current_size/1024/1024,2) size_mb
  2  from v$sga_dynamic_components
  3  where component like '%pool' 
  4  OR component ='DEFAULT buffer cache';
COMPONENT               SIZE_MB
-------------------- ----------
shared pool                 112
large pool                    4
java pool                     4
streams pool                  0
DEFAULT buffer cache        352
SQL> alter system set shared_pool_size = 120m scope=both;
System altered.
SQL> select component , round(current_size/1024/1024,2) size_mb
  2  from v$sga_dynamic_components
  3  where component like '%pool' 
  4  OR component ='DEFAULT buffer cache';
COMPONENT               SIZE_MB
-------------------- ----------
shared pool                 120
large pool                    4
java pool                     4
streams pool                  0
DEFAULT buffer cache        344
We increased the shared pool from 112m to 120m; since space was available in the buffer cache to give to the shared pool, our change took effect immediately. When there is not enough space available to grow the pool to the size you specified, you will get an error like this:

SQL> alter system set shared_pool_size = 450m scope=both;
alter system set shared_pool_size = 450m scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified  value is invalid
ORA-04033: Insufficient memory to grow pool
SQL>
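Both outcomes follow from the same rule: a grow succeeds only if the other components can donate enough memory without dropping below their own minimum sizes. Here is a toy Python model of that rule (sizes in MB, copied from the demo; the resize logic is illustrative only, not Oracle's actual algorithm):

```python
# Toy model of growing one auto-tuned component at the expense of the others.
def grow(components, name, target_mb, floors=None):
    floors = floors or {}
    needed = target_mb - components[name]
    donors = [c for c in components if c != name]
    # Space the other components could give up, down to their minimum sizes.
    available = sum(components[c] - floors.get(c, 0) for c in donors)
    if needed > available:
        raise MemoryError("ORA-04033: Insufficient memory to grow pool")
    # Take the memory from the largest donor (real ASMM is more gradual).
    donor = max(donors, key=lambda c: components[c] - floors.get(c, 0))
    components[donor] -= needed
    components[name] = target_mb

sga = {"shared pool": 112, "large pool": 4, "java pool": 4,
       "streams pool": 0, "DEFAULT buffer cache": 352}
floors = {"DEFAULT buffer cache": 100}     # db_cache_size=100m set earlier

grow(sga, "shared pool", 120, floors)      # enough free space: immediate
print(sga["DEFAULT buffer cache"])         # 344, as in the demo output

try:
    grow(sga, "shared pool", 450, floors)  # the donors cannot cover 330 MB
except MemoryError as e:
    print(e)                               # the ORA-04033 situation above
```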
When an auto-tuned parameter is set to a value lower than the component's current size, the change does not take effect immediately.

SQL> alter system set shared_pool_size = 100m scope=both;
System altered.
SQL> select component , round(current_size/1024/1024,2) size_mb
  2  from v$sga_dynamic_components
  3  where component like '%pool' 
  4  OR component ='DEFAULT buffer cache';
COMPONENT               SIZE_MB
-------------------- ----------
shared pool                 120
large pool                    4
java pool                     4
streams pool                  0
DEFAULT buffer cache        344
SQL> 
Now, if the workload later forces Oracle to shrink the shared pool, it will be squeezed down to 100m and the shrinking will stop there, since 100m is now the shared pool's minimum size.

If you disable ASMM, all the components that were being managed by the auto-tuning are frozen at their current sizes and become static until you enable ASMM again.

SQL> alter system set sga_target=0 scope=both;
System altered.
SQL> show parameter db_cache_size
NAME                        TYPE        VALUE
--------------------------- ----------- ---------------------
db_cache_size               big integer 344M
SQL> show parameter pool_size
NAME                        TYPE        VALUE
--------------------------- ----------- ---------------------
global_context_pool_size    string
java_pool_size              big integer 4M
large_pool_size             big integer 4M
olap_page_pool_size         big integer 0
shared_pool_size            big integer 120M
streams_pool_size           big integer 0
SQL>