Saturday, March 30, 2013

Basics of RAID every DBA must know

Every article I have posted so far has been related to DBA work, so here is a small diversion. Here I am sharing some good notes about RAID. Since RAID is used for different purposes across the IT industry, a DBA should know at least the basic functioning of RAID. Hope it helps somebody. :)

A Redundant Array of Independent Drives (or Disks), also known as a Redundant Array of Inexpensive Drives (or Disks) (RAID), is a term for data storage schemes that divide and/or replicate data among multiple hard drives. RAID can be designed to provide increased data reliability or increased I/O performance, though one goal may compromise the other. RAID provides performance through striping and fault tolerance through mirroring or parity.

RAID can be implemented at the hardware or software level. At the hardware level, you can have hard disks connected to a RAID hardware controller, usually a special PC card; your operating system then accesses storage through the RAID hardware controller. Alternatively, you can implement RAID in software: a software RAID controller is a program that manages access to hard disks treated as RAID devices. The software version lets you use ordinary IDE hard disks as RAID disks. Linux uses the MD driver, supported in the 2.4 kernel, to implement a software RAID controller. Linux software RAID supports six levels (linear, 0, 1, 4, 5, and 6), whereas hardware RAID supports many more; hardware RAID levels such as 7-10 provide combinations of greater performance and reliability. Before you can use RAID on your system, make sure your kernel supports the RAID levels you want to use. If it does not, you will have to rebuild the kernel: check the Multi-Device Support component and specify support for any or all of the RAID levels.

Note: we get redundancy only when parity or mirroring is present in the RAID configuration.
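Since the Linux MD (software RAID) driver is mentioned above, a quick way to check what your running kernel already supports is to look at /proc/mdstat (its Personalities line lists the RAID levels the md driver has loaded) and at the kernel build configuration; the config file path below is an assumption that varies by distribution:

# cat /proc/mdstat
# grep -i raid /boot/config-$(uname -r)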

Commonly used RAID levels for UNIX / Linux and Windows server
RAID 0 (Striping)
This level is achieved by grouping two or more hard disks into a single unit with a total size equaling that of all the disks used. Practical example: three disks, each 80GB in size, can be used in a 240GB RAID 0 configuration. You can also use disks of different sizes.
RAID 0 works by breaking data into fragments and writing to all disks simultaneously. Read and write operations are improved compared to a single-disk system, since the load is shared across many channels and done in parallel on the disks. On the other hand, no single disk contains the entire information for any piece of data committed. This means that if one of the disks fails, the entire RAID is rendered inoperable, with unrecoverable loss of data.
However since there is no redundancy, it doesn't provide fault tolerance. If even one drive fails, data across the drive array will be lost.
RAID 0 is suitable for non-critical operations that require good performance, like the system partition or the /tmp partition where lots of temporary data is constantly written. It is not suitable for data storage.

Suggested applications: video production and editing, image editing, etc.
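As a hedged illustration using the Linux mdadm tool (the device names /dev/sdb1 and /dev/sdc1 are assumptions; substitute your own disks or partitions), a two-member RAID 0 array could be created like this:

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mkfs.ext3 /dev/md0
# cat /proc/mdstat

mkfs puts a filesystem on the new striped device, and /proc/mdstat confirms the array is active.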

RAID 1 (Mirroring)

This level is achieved by grouping two or more hard disks into a single unit with a total size equaling that of the smallest disk used. This is because RAID 1 keeps every bit of data replicated on each of its devices in exactly the same fashion, creating identical clones. Hence the name, mirroring. Practical example: two disks, each 80GB in size, can be used in an 80GB RAID 1 configuration.
Because of its configuration, RAID 1 has reduced write performance, as every chunk of data
has to be written n times, once on each of the paired devices. The read performance is identical to that of single disks. Redundancy is improved, as normal operation of the system can be maintained as long as any one disk is functional. Of course, the disks should be of equal size; if one disk is larger than another, your RAID device will be the size of the smallest disk.
If spare disks are available and the system survives the crash, reconstruction of the mirror will begin immediately on one of the spare disks after the drive fault is detected.
RAID 1 is suitable for data storage, especially with non-intensive I/O tasks.
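To illustrate the spare-disk behaviour described above, here is a hedged mdadm sketch (device names are assumptions): a two-disk mirror with one hot spare that the md driver will rebuild onto automatically if a member fails.

# mdadm --create /dev/md1 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdd1 /dev/sde1 /dev/sdf1
# mdadm --detail /dev/md1

mdadm --detail shows the member states and, after a real or simulated failure, the rebuild progress onto the spare.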

RAID 4 (Striping with dedicated parity)
This RAID level is not used very often. It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive and writes data to the other disks in a RAID 0-like way. If one of the data drives in the array fails, the parity information can be used to reconstruct all the data. However, if more than one disk fails, all of the data is lost.
The reason this level is not used more frequently is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to; thus, the parity disk becomes a bottleneck unless it is a lot faster than the other disks. However, if you just happen to have a lot of slow disks and one very fast one, this RAID level can be very useful.

RAID 5 (Striping with distributed parity)

RAID 5 supports both striping and redundancy in the form of parity. RAID 5 improves on RAID 4 by striping the parity data across all the disks in the RAID set. RAID 5 can be used on three or more disks, with zero or more spare disks.

This is a more complex solution, with a minimum of three devices used. If one of the devices malfunctions, the array will continue operating, reconstructing the missing data from the remaining disks and the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, all data is lost. Like RAID 4, RAID 5 can survive one disk failure, but not two or more.

RAID 5 improves write performance over RAID 4 as well as redundancy, and is useful in mission-critical scenarios where both good throughput and data integrity are important.


In a four-disk configuration, for example, 25% of the combined disk space is used to store the parity information and around 75% of the total disk capacity is available for use.
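A minimal mdadm sketch of such a four-member RAID 5 set (the devices /dev/sdb1 through /dev/sde1 are assumptions); with four members, one disk's worth of capacity holds parity and roughly three disks' worth remains usable:

# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# mdadm --detail /dev/md5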

Linear RAID
This is a less common level, although fully usable. Linear is similar to RAID 0, except that data is written sequentially rather than in parallel. Linear RAID is a simple grouping of several devices into a larger volume, the total size of which is the sum of all members. For instance, three disks of 40, 60, and 250GB can be grouped into a linear RAID with a total size of 350GB.
Linear RAID provides no read/write performance gain, nor does it provide redundancy; the loss of any member will render the entire array unusable. It merely increases size. It is very similar to LVM. Linear RAID is suitable when data larger than the size of any individual disk or partition must be stored.

RAID 6 (Striping with dual distributed parity)
RAID 6 is an extension of RAID 5 that provides additional fault tolerance by using a dual distributed parity scheme. Dual parity helps the array survive two simultaneous disk failures without data loss. It extends RAID 5 by adding an additional parity block, so it uses block-level striping with two parity blocks distributed across all member disks. The read performance is the same as RAID 5; however, its write performance is poorer than RAID 5 due to the overhead of the additional parity calculations. It does better than RAID 5 for data protection because RAID 6 protects against double disk failures and against a failure while a single disk is rebuilding, which RAID 5 cannot do.






RAID 10 
RAID 10 or RAID 1+0 - Combination of RAID 0 (data striping) and RAID 1 (mirroring). 

  • RAID 10 is also called RAID 1+0
  • It is also known as a "stripe of mirrors"
  • It requires a minimum of 4 disks
  • To understand this better, group the disks in pairs of two (for mirroring). For example, if you have a total of 6 disks in RAID 10, there will be three groups: Group 1, Group 2, and Group 3.
  • Within a group, the data is mirrored. In this example, Disk 1 and Disk 2 belong to Group 1, so the data on Disk 1 is exactly the same as the data on Disk 2: block A written to Disk 1 is mirrored on Disk 2, and block B written to Disk 3 is mirrored on Disk 4.
  • Across the groups, the data is striped, i.e., block A is written to Group 1, block B to Group 2, and block C to Group 3.
  • This is why it is called a "stripe of mirrors": the disks within a group are mirrored, but the groups themselves are striped.
RAID 01
  • RAID 01 is also called RAID 0+1
  • It is also known as a "mirror of stripes"
  • It requires a minimum of 3 disks, but in most cases it will be implemented with a minimum of 4 disks.
  • To understand this better, create two groups. For example, if you have a total of 6 disks, create two groups with 3 disks each: Group 1 has 3 disks and Group 2 has 3 disks.
  • Within a group, the data is striped, i.e., in Group 1, which contains three disks, the 1st block is written to the 1st disk, the 2nd block to the 2nd disk, and the 3rd block to the 3rd disk. So block A is written to Disk 1, block B to Disk 2, and block C to Disk 3.
  • Across the groups, the data is mirrored, i.e., Group 1 and Group 2 look exactly the same: Disk 1 is mirrored to Disk 4, Disk 2 to Disk 5, and Disk 3 to Disk 6.
  • This is why it is called a "mirror of stripes": the disks within a group are striped, but the groups are mirrored.
Main difference between RAID 10 vs RAID 01
  • Performance of RAID 10 and RAID 01 is the same.
  • The storage capacity of both is the same.
  • The main difference is the fault tolerance level. In most implementations of RAID controllers, RAID 01 fault tolerance is lower: since there are only two groups of RAID 0, if two drives (one in each group) fail, the entire RAID 01 fails. In the RAID 01 example above, if Disk 1 and Disk 4 fail, both groups go down and the whole RAID 01 fails.
  • RAID 10 fault tolerance is higher. Since there are many groups (each group is only two disks), even if three disks fail (one in each group), the RAID 10 is still functional. In the RAID 10 example above, even if Disk 1, Disk 3, and Disk 5 fail, the RAID 10 is still functional.
  • So, given a choice between RAID 10 and RAID 01, always choose RAID 10. (A Linux mdadm sketch for RAID 10 follows below.)
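Tying this back to Linux software RAID, here is a hedged mdadm sketch of a six-disk RAID 10 array (device names are assumptions); the md driver mirrors the members in pairs and stripes across the pairs, matching the three-group example above:

# mdadm --create /dev/md10 --level=10 --raid-devices=6 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# mdadm --detail /dev/md10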
Before You start 

Specially built hardware-based RAID disk controllers are available for both IDE and SCSI drives. They usually have their own BIOS, so you can configure them right after your system's power-on self test (POST). Hardware-based RAID is transparent to your operating system; the hardware does all the work.
If hardware RAID isn't available, then you should be aware of these basic guidelines to follow when setting up software RAID. 

IDE Drives
To save costs, many small business systems will probably use IDE disks, but they do have some limitations.
  • The total length of an IDE cable can be only a few feet, which generally limits IDE drives to small home systems.
  • IDE drives cannot be hot-swapped: you cannot replace them while your system is running.
  • Only two devices can be attached per controller.
  • The performance of the IDE bus can be degraded by the presence of a second device on the cable.
  • The failure of one drive on an IDE bus often causes the malfunctioning of the second device. This can be fatal if you have two IDE drives of the same RAID set attached to the same cable.
For these reasons, I recommend you use only one IDE drive per controller when using RAID, especially in a corporate environment.

Serial ATA Drives
Serial ATA type drives are rapidly replacing IDE drives as the preferred entry level disk storage option because of a number of advantages:
  • The drive data cable can be as long as 1 meter in length versus IDE's 18 inches.
  • Serial ATA has better error checking than IDE.
  • There is only one drive per cable which makes hot swapping, or the capability to replace components while the system is still running, possible without the fear of affecting other devices on the data cable.
  • There are no jumpers to set on Serial ATA drives to make it a master or slave which makes them simpler to configure.
  • IDE drives have a 133Mbytes/s data rate whereas the Serial ATA specification starts at 150 Mbytes/sec with a goal of reaching 600 Mbytes/s over the expected ten year life of the specification.
If you can't afford more expensive and faster SCSI drives, Serial ATA would be the preferred device for software and hardware RAID.

SCSI Drives
SCSI hard disks have a number of features that make them more attractive for RAID use than either IDE or Serial ATA drives.
  • SCSI controllers are more tolerant of disk failures. The failure of a single drive is less likely to disrupt the remaining drives on the bus.
  • SCSI cables can be up to 25 meters long, making them suitable for data center applications.
  • Many more than two devices may be connected to a SCSI bus: it can accommodate 7 (single-ended SCSI) or 15 (all other SCSI types) devices.
  • Some models of SCSI devices support "hot swapping" which allows you to replace them while the system is running.
  • SCSI currently supports data rates of up to 640 Mbytes/s making them highly desirable for installations where rapid data access is imperative.

Thursday, March 28, 2013

Automatic Storage Management (ASM) lessons 1
Automatic Storage Management (ASM) provides a centralized way to manage Oracle Database disk storage. ASM is somewhat like a logical volume manager, allowing you to reduce the management of Oracle files into ASM disk groups. It also provides redundancy configurations, rebalancing operations, and, when installed on top of clusterware, the ability to share database-related files.

Here are some features of ASM:
  • Automatic software data striping (RAID-0)
  • Load balancing across physical disks 
  • Software RAID-1 data redundancy with double or triple mirrors 
  • Elimination of fragmentation 
  • Simplification of file management via support for Oracle Managed Files (OMF) 
  • Ease of maintenance

Creating the ASM Instance
1. Creating the ASM Instance with the DBCA:-
Before using the DBCA we need to configure the CSS (Cluster Synchronization Services) service; ASM cannot be used until the Oracle CSS service is started. On Windows, it can be configured with localconfig:
C:\Windows\system32>localconfig add
Step 1:  creating new OCR repository
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'admin', privgrp ''..
Operation successful.
Step 2:  creating new CSS service
successfully created local CSS service
successfully added CSS to home
C:\Windows\system32>

To create the ASM instance with the DBCA, do the following:
1. Start the Oracle DBCA.
2. The DBCA presents a list of options for you to choose from. Select Configure Automatic
Storage Management and click Next.
3. The DBCA then prompts you for the SYS password for the new ASM instance to be
created. Enter the password for the SYS account.
4. Oracle then creates the ASM instance. A new window appears giving you the option
to create new disk groups. You can choose to create disk groups (we will cover that
shortly) or you can click Finish to complete the ASM installation.
5. The name of the resulting instance will be +ASM. You can log into the ASM instance
from SQL*Plus, as shown in this example:

C:\Windows\system32>sqlplus
SQL*Plus: Release 10.2.0.4.0 - Production on Tue Mar 26 12:06:14 2013
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+asm

SQL>

In 11g we use sys as sysasm to connect to the ASM instance (the SYSASM privilege is intended for ASM administration).
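For example, a minimal 11g connection sketch on Windows, assuming the ASM instance name is +ASM and operating-system authentication is configured:

C:\> set ORACLE_SID=+ASM
C:\> sqlplus / as sysasm
SQL> select instance_name from v$instance;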

Creating the ASM Instance Manually
Manual creation of an ASM instance is fairly straightforward. If you have ever manually
created a database, then manually creating an ASM instance should be easy for you. To
manually create an ASM instance, you would follow these steps:
1. Create directories for the ASM instance.
2. Create the instance parameter file.
3. Perform any Microsoft Windows–specific configuration.
4. Start the ASM instance.
5. Create the ASM server parameter file (spfile).
Let’s look at each of these steps in a bit more detail

The following steps can be used to create a fully functional ASM instance named +ASM. The node I am using in this example also has a regular 10g database running named TESTDB. These steps should all be carried out by the oracle UNIX user account:
  1. Create Admin Directories. We start by creating the admin directories under ORACLE_BASE. The admin directories for the existing database on this node (TESTDB) are located at $ORACLE_BASE/admin/TESTDB. The new +ASM admin directories will be created alongside the TESTDB database:
    UNIX
    mkdir -p $ORACLE_BASE/admin/+ASM/bdump
    mkdir -p $ORACLE_BASE/admin/+ASM/cdump
    mkdir -p $ORACLE_BASE/admin/+ASM/hdump
    mkdir -p $ORACLE_BASE/admin/+ASM/pfile
    mkdir -p $ORACLE_BASE/admin/+ASM/udump
    Microsoft Windows
    mkdir %ORACLE_BASE%\admin\+ASM\bdump
    mkdir %ORACLE_BASE%\admin\+ASM\cdump
    mkdir %ORACLE_BASE%\admin\+ASM\hdump
    mkdir %ORACLE_BASE%\admin\+ASM\pfile
    mkdir %ORACLE_BASE%\admin\+ASM\udump
  2. Create Instance Parameter File. In this step, we will manually create an instance parameter file for the ASM instance. This is actually an easy task, as most of the parameters used for a normal instance are not used for an ASM instance. Note that you should be fine accepting the default sizes for the database buffer cache, shared pool, and many of the other SGA memory structures. The only exception is the large pool; I like to manually set this value to at least 12MB. In most cases, the SGA memory footprint is less than 100MB. Let's start by creating the file init.ora and placing it in $ORACLE_BASE/admin/+ASM/pfile. The initial parameters to use for the file are:
    UNIX
    $ORACLE_BASE/admin/+ASM/pfile/init.ora
    ###########################################
    # Automatic Storage Management
    ###########################################
    # _asm_allow_only_raw_disks=false
    # asm_diskgroups='TESTDB_DATA1'
    
    # Default asm_diskstring values for supported platforms:
    #     Solaris (32/64 bit)   /dev/rdsk/*
    #     Windows NT/XP         \\.\orcldisk*
    #     Linux (32/64 bit)     /dev/raw/*
    #     HPUX                  /dev/rdsk/*
    #     HPUX(Tru 64)          /dev/rdisk/*
    #     AIX                   /dev/rhdisk/*
    # asm_diskstring=''
    
    ###########################################
    # Diagnostics and Statistics
    ###########################################
    background_dump_dest=/u01/app/oracle/admin/+ASM/bdump
    core_dump_dest=/u01/app/oracle/admin/+ASM/cdump
    user_dump_dest=/u01/app/oracle/admin/+ASM/udump
    
    ###########################################
    # Miscellaneous
    ###########################################
    instance_type=asm
    compatible=10.1.0.4.0
    
    ###########################################
    # Pools
    ###########################################
    large_pool_size=12M
    
    ###########################################
    # Security and Auditing
    ###########################################
    remote_login_passwordfile=exclusive
    Microsoft Windows
    %ORACLE_BASE%\admin\+ASM\pfile\init.ora
    ###########################################
    # Automatic Storage Management
    ###########################################
    # _asm_allow_only_raw_disks=false
    # asm_diskgroups='TESTDB_DATA1'
    
    # Default asm_diskstring values for supported platforms:
    #     Solaris (32/64 bit)   /dev/rdsk/*
    #     Windows NT/XP         \\.\orcldisk*
    #     Linux (32/64 bit)     /dev/raw/*
    #     HPUX                  /dev/rdsk/*
    #     HPUX(Tru 64)          /dev/rdisk/*
    #     AIX                   /dev/rhdisk/*
    # asm_diskstring=''
    
    ###########################################
    # Diagnostics and Statistics
    ###########################################
    background_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\bdump
    core_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\cdump
    user_dump_dest=C:\oracle\product\10.1.0\admin\+ASM\udump
    
    ###########################################
    # Miscellaneous
    ###########################################
    instance_type=asm
    compatible=10.1.0.4.0
    
    ###########################################
    # Pools
    ###########################################
    large_pool_size=12M
    
    ###########################################
    # Security and Auditing
    ###########################################
    remote_login_passwordfile=exclusive

    After creating the $ORACLE_BASE/admin/+ASM/pfile/init.ora file, UNIX users should create the following symbolic link:
    $ ln -s $ORACLE_BASE/admin/+ASM/pfile/init.ora $ORACLE_HOME/dbs/init+ASM.ora


Identify RAW Devices
Before starting the ASM instance, we should identify the RAW device(s) (UNIX) or logical drives (Windows) that will be used as ASM disks. For the purpose of this article, I have four RAW devices set up on Linux:
# ls -l /dev/raw/raw[1234]
crw-rw----  1 oracle dba 162, 1 Jun  2 22:04 /dev/raw/raw1
crw-rw----  1 oracle dba 162, 2 Jun  2 22:04 /dev/raw/raw2
crw-rw----  1 oracle dba 162, 3 Jun  2 22:04 /dev/raw/raw3
crw-rw----  1 oracle dba 162, 4 Jun  2 22:04 /dev/raw/raw4

   Attention Linux Users!This article does not use Oracle's ASMLib I/O libraries. If you plan on using Oracle's ASMLib, you will need to install and configure ASMLib, as well as mark all disks using:
/etc/init.d/oracleasm createdisk <ASM_VOLUME_NAME> <LINUX_DEV_DEVICE>
. For more information on using Oracle ASMLib, see "Installing Oracle10g Release 1 (10.1.0) on Linux - (RHEL 4)".


   Attention Windows Users! A task that must be performed by Microsoft Windows users is to tag the logical drives that you want to use for ASM storage. This is done using a new utility included with Oracle Database 10g called asmtool. This tool can be run either before or after creating the ASM instance. asmtool is responsible for initializing drive headers and marking drives for use by ASM. This greatly reduces the risk of overwriting a usable drive that is being used for normal operating system files.


Starting the ASM Instance
Once the instance parameter file is in place, it is time to start the ASM instance. It is important to note that an ASM instance never mounts an actual database. The ASM instance is responsible for mounting and managing disk groups.
   Attention Windows Users!If you are running in Microsoft Windows, you will need to manually create a new Windows service to run the new instance. This is done using the ORADIM utility which allows you to create both the instance and the service in one command.

UNIX

# su - oracle
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> startup
ASM instance started
Total System Global Area   75497472 bytes
Fixed Size                   777852 bytes
Variable Size              74719620 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
ORA-15110: no diskgroups mounted

SQL> create spfile from pfile='/u01/app/oracle/admin/+ASM/pfile/init.ora';

SQL> shutdown
ASM instance shutdown

SQL> startup
ASM instance started
Microsoft Windows
C:\> oradim -new -asmsid +ASM -syspwd change_on_install 
    -pfile C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora -spfile 
    -startmode manual -shutmode immediate
Instance created.
C:\> oradim -edit -asmsid +ASM -startmode a
C:\> set oracle_sid=+ASM
C:\> sqlplus "/ as sysdba"

SQL> startup pfile='C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora';
ASM instance started
Total System Global Area    125829120 bytes
Fixed Size                    769268 bytes
Variable Size               125059852 bytes
Database Buffers                   0 bytes
Redo Buffers                       0 bytes
ORA-15110: no diskgroups mounted 

SQL> create spfile from pfile='C:\oracle\product\10.1.0\admin\+ASM\pfile\init.ora';
File created.
SQL> shutdown
ASM instance shutdown
SQL> startup
ASM instance started
You will notice when starting the ASM instance, we received the error:
ORA-15110: no diskgroups mounted
This error can be safely ignored. Notice also that we created a server parameter file (SPFILE) for the ASM instance. This allows Oracle to automatically record new disk group names in the asm_diskgroups instance parameter, so that those disk groups can be automatically mounted whenever the ASM instance is started.
Now that the ASM instance is started, all other Oracle database instances running on the same node will be able to find it.
Note the following Oracle parameters that are specific to ASM instances:
INSTANCE_TYPE  : Used only with an ASM instance, this parameter indicates to Oracle
that this is an ASM instance. The default value is RDBMS, which indicates the instance is
an Oracle database instance. This parameter is not dynamic and is the only mandatory
parameter in an ASM instance.
ASM_DISKSTRING  : This parameter indicates where Oracle should search for disk devices
to be used by ASM. We will discuss this parameter in more detail later in this section.
This parameter can be dynamically changed.
ASM_DISKGROUPS  : This parameter lists ASM disk groups that ASM should mount when
it is started. You can also use the alter diskgroup all mount command to cause
these disk groups to be mounted. This parameter can be dynamically changed.
ASM_POWER_LIMIT  : This parameter controls the rate at which ASM can rebalance disks by
increasing or decreasing the degree of parallelism used. Lower values will slow rebalancing
but will also result in less of an IO impact by those operations. Higher values may
speed up rebalancing by parallelizing the rebalance operation. The default is 1, and this
is typically sufficient. This parameter can be set dynamically.
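For instance, the rebalance power can be changed on the fly; a small sketch (the value 5 is just an illustrative assumption):

SQL> show parameter asm_power_limit
SQL> -- temporarily raise the rebalance power, then set it back when the rebalance finishes
SQL> alter system set asm_power_limit = 5;
SQL> alter system set asm_power_limit = 1;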

Automatic Storage Management (ASM) lessons 2

ASM Processes
After you start your ASM instance, you will find that several of the Oracle processes you
are acquainted with will be running, such as PMON and DBWR. Additional ASM processes
will be started too. In 11g, these processes include the following (a query to view them follows the list):
  • The ARBn process, used to perform disk group rebalance operations. There may be one or more of these processes running.
  • The ASMB process manages ASM storage and provides statistics. 
  • The GMON process maintains disk membership in ASM disk groups. 
  • The KATE process performs proxy I/O to ASM metadata files when a disk is offlined. 
  • The MARK process is responsible for marking ASM allocation units as stale following a missed write to an offline disk.
  • The RBAL process runs in both database and ASM instances. RBAL is responsible for performing a global open of ASM disks in normal databases. RBAL coordinates rebalance activity for disk groups in ASM instances.
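One way to see which of these background processes are actually running in an instance is to query V$BGPROCESS; this is only a hedged sketch, since the exact process names differ between database and ASM instances and between versions:

SQL> select name, description from v$bgprocess
where paddr <> '00'
and (name in ('ASMB','GMON','RBAL','MARK') or name like 'ARB%')
order by name;
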
ASM Disk Discovery

ASM disk discovery is the first step to setting up an ASM disk group. In this section, we will cover configuring the ASM_DISKSTRING parameter, which helps with ASM disk discovery, and then we will discuss the topic of ASM disk discovery in general.

You may not need to set ASM_DISKSTRING. ASM_DISKSTRING has a number of different default values depending on the platform you are using. The listing below shows the platform-specific default values (these are used only if ASM_DISKSTRING is set to a NULL value).
#    Default asm_diskstring values for supported platforms:
#     Solaris (32/64 bit)   /dev/rdsk/*
#     Windows NT/XP         \\.\orcldisk* 
#     Linux (32/64 bit)     /dev/raw/*
#     HPUX                     /dev/rdsk/*
#     HPUX(Tru 64)          /dev/rdisk/*
#     AIX                   /dev/rhdisk/*

For example, on my Windows system I did not set ASM_DISKSTRING, so it took the default setting:
SQL> show parameter asm

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------
asm_diskgroups                       string      DATA, FLASH
asm_diskstring                       string
asm_power_limit                      integer     1
SQL>
SQL> select path from v$asm_disk;
PATH
-----------------------------------------------------------------------------
\\.\ORCLDISKDATA0
\\.\ORCLDISKFLASH0
SQL>
The ASM_DISKSTRING parameter can be dynamically altered, which is nice if your friendly system administrator adds some storage to your system that you want Oracle to be able to use.


The asterisk is required when defining the ASM_DISKSTRING parameter. Here are
some examples of setting the ASM_DISKSTRING parameter. In this first example, ASM will
look for disks under /devices when we create disk groups:
SQL> Alter system set ASM_DISKSTRING='/devices/*';
In the next example, we are pointing ASM_DISKSTRING to ORACLE_HOME/disks:
SQL> Alter system set ASM_DISKSTRING='?/disks/*';

ASM Disk Discovery on Instance Start:
When the ASM instance is started, it will use the paths listed in the ASM_DISKSTRING parameter and discover the disks that are available. Once discovery is complete and the ASM instance is open, you can review the discovered disks by looking at the V$ASM_DISK view, as shown here:
SQL> column path format a20
set lines 132
set pages 50
select path, group_number group_#, disk_number disk_#, mount_status,
header_status, state, total_mb, free_mb
from v$asm_disk
order by group_number;

PATH            GROUP_# DISK_# MOUNT_S HEADER_STATU STATE  TOTAL_MB FREE_MB
--------------- ------- ------ ------- ------------ ------ -------- -------
/dev/raw/raw4         0      1 CLOSED  FOREIGN      NORMAL       39       0
/dev/raw/raw5         0      0 CLOSED  FOREIGN      NORMAL       39       0
/dev/raw/raw3         0      2 CLOSED  FOREIGN      NORMAL       39       0
/dev/raw/raw6         0      2 CLOSED  CANDIDATE    NORMAL     2048    2048
ORCL:ASM01_004        1      3 CACHED  MEMBER       NORMAL    34212   30436
ORCL:ASM01_005        1      4 CACHED  MEMBER       NORMAL    34212   30408
ORCL:ASM01_006        1      5 CACHED  MEMBER       NORMAL    34212   30420

In this view, you see that there are three disks (/dev/raw/raw3, /dev/raw/raw4, /dev/raw/raw5) that are not assigned to any group (those with GROUP_# set to 0). These are unassigned disks that ASM has discovered but that have not been assigned to a disk group. Note the mount status of CLOSED on those three disks, which also indicates that the disk is not being accessed by ASM. The HEADER_STATUS of FOREIGN indicates that these disks already contain data and are owned by some process other than ASM (in this case, they are voting disks for a RAC). If the HEADER_STATUS says CANDIDATE, as with /dev/raw/raw6, then we can add that disk to an ASM disk group.

Notice that most of the disks have a MOUNT_STATUS of CACHED and a HEADER_STATUS of
MEMBER. This means that the disk is currently part of an ASM disk group (which we will
discuss more in the next section).

Redundancy, Striping, and Other ASM Topics
Redundancy:-
When configuring an ASM disk group, you can use one of three different ASM redundancy
setting options to protect the data in your disk group:
Normal : Typically employs two-way mirroring by default and thus requires allocation
of two failure groups.
High  : Typically employs three-way mirroring by default and thus requires allocation of
three failure groups.
External  : Does not employ any mirroring. This setting is typically used when the disk
group is assigned to external storage that already provides its own redundancy (relying on
hardware mirroring); such a disk group has only one failure group.

Each failure group represents a logical allocation of one or more disks to the ASM disk group and provides for mirroring within that disk group. Thus, when you create an ASM disk group, you might have one disk assigned to failure group 1 and one disk assigned to failure group 2; this way your data is protected from failure. When you are using ASM mirroring, ASM will allocate an extent on a disk in one failure group as the primary copy and then allocate mirrored copies of that extent in the other failure groups.

When you define the redundancy setting for a disk group, you are defining things such
as what kind of striping occurs and whether the data will be mirrored. These attributes are defined based on which template you have assigned to the ASM disk group. By default, when you create a disk group, Oracle will assign it the default template setting. You can optionally assign another ASM template to a given disk group. Each Oracle file type has its own default template.

Here is a list of the "default" (built-in) templates and their attributes:
Template name     Striping   Mirroring (normal      Mirroring (high        Mirroring (external
                             redundancy group)      redundancy group)      redundancy group)
Controlfile       Fine       3-way mirroring        3-way mirroring        No mirroring
Datafile          Coarse     2-way mirroring        3-way mirroring        No mirroring
Onlinelog         Fine       2-way mirroring        3-way mirroring        No mirroring
Archivelog        Coarse     2-way mirroring        3-way mirroring        No mirroring
Tempfile          Coarse     2-way mirroring        3-way mirroring        No mirroring
Backupset         Coarse     2-way mirroring        3-way mirroring        No mirroring
Parameterfile     Coarse     2-way mirroring        3-way mirroring        No mirroring
Dataguardconfig   Coarse     2-way mirroring        3-way mirroring        No mirroring
Flashback         Fine       2-way mirroring        3-way mirroring        No mirroring
Changetracking    Coarse     2-way mirroring        3-way mirroring        No mirroring
Dumpset           Coarse     2-way mirroring        3-way mirroring        No mirroring
Xtransport        Coarse     2-way mirroring        3-way mirroring        No mirroring
Autobackup        Coarse     2-way mirroring        3-way mirroring        No mirroring
In addition to these "named" templates you can also create user-defined templates.  These user-defined templates appear in the "name" column of the v$asm_template view.

Default ASM Template Redundancy Settings
So, if you create a disk group with normal redundancy using the default template and
you put datafiles on it, the datafile template would be used by default. In this case, a
datafile would use two-way mirroring and coarse striping (see the section “Striping”).
This means you would have to allocate at least two disks to an ASM disk group when
it is created, each assigned to a different failure group. The DBA can specify whether the files created via a template should be two-way or three-way mirrored and coarse or fine striped.

Template attributes

Striping attribute value   Description
FINE                       Striping in 128 KB chunks
COARSE                     Striping in 1 MB chunks


Redundancy attribute value   Mirroring (normal          Mirroring (high            Mirroring (external
                             redundancy group)          redundancy group)          redundancy group)
MIRROR                       Two-way mirroring          Three-way mirroring        (Not allowed)
HIGH                         Three-way mirroring        Three-way mirroring        (Not allowed)
UNPROTECTED                  No mirroring               (Not allowed)              No mirroring

Example of adding a template:
ALTER DISKGROUP disk_group_name ADD TEMPLATE template_name 
  ATTRIBUTES ([{MIRROR|HIGH|UNPROTECTED}] [{FINE|COARSE}]);

The following statement creates a new template named reliable for the normal redundancy disk group dgroup2:
ALTER DISKGROUP dgroup2 ADD TEMPLATE reliable ATTRIBUTES (HIGH FINE);

When you add a template to a disk group, the template cannot be retroactively
applied to files already in that disk group. As a result, you will need to use RMAN
to back up and then restore files that already exist in the disk group in order for them to
take on the attributes of the new template.
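Here is a hedged RMAN sketch of that back-up-and-switch approach. The datafile number 5, the disk group DGROUP2, and the template name reliable are assumptions for illustration; the '+diskgroup(template)' form of the ASM file name asks ASM to create the new copy using that template:

RMAN> sql 'alter database datafile 5 offline';
RMAN> backup as copy datafile 5 format '+DGROUP2(reliable)';
RMAN> switch datafile 5 to copy;
RMAN> recover datafile 5;
RMAN> sql 'alter database datafile 5 online';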

You can see the templates associated with a given disk group by querying the V$ASM_
TEMPLATE view, as shown in this example:
SQL> select * from v$asm_template where group_number=1;

Existing templates can be modified with the alter diskgroup command. Notice in this example that we are actually changing one of the attributes of a default template. You cannot drop the default templates, but you can modify them, as shown here:

SQL> ALTER DISKGROUP sp_dgroup2 ALTER TEMPLATE datafile ATTRIBUTES (coarse);

ASM Disk Group Attributes
Oracle Database 11g also allows you to define specific disk group attributes. Disk group attributes are set using the attribute clause of the create diskgroup and alter
diskgroup commands. The following attributes can be set on a specific ASM disk group:

Au_size-This is the disk group allocation unit (AU) size. The value defaults to 1MB and
can be set only when the disk group is created. You must modify the AU size of the disk
group if you want the disk group to be able to hold larger amounts of data. A disk group
with the default AU size will be able to grow to 35TB (normal redundancy). Increasing the
AU size will significantly increase the maximum size of the disk group. The maximum AU
size is 64MB.

Compatible.rdbms- Indicates the database version that the disk group is compatible with
at a minimum (default is 10.1). This value should be equal to or greater than the compatibility parameter of the database(s) accessing the ASM disk group. This value cannot be rolled back once set.

Compatible.asm- Indicates the ASM instance version that the disk group is compatible with at a minimum (default is 10.1). Compatible.asm must always be set to a value equal to or greater than compatible.rdbms. Once compatible.asm is set for a disk group, it cannot be rolled back to an earlier value.

Disk_repair_time- Indicates the length of time that the disk resync process should maintain
change tracking before dropping an offline disk. The default for this parameter is 3.6 hours. Disk group attributes can be viewed using the V$ASM_ATTRIBUTE view. You can see some examples of setting compatibility here:

SQL> Create diskgroup robert01 external redundancy
Disk '/oracle/asm/ASM_DISKGROUP_robert01.asm'
Attribute 'compatible.asm'='11.1.0';
SQL> Alter diskgroup robert01 set attribute 'disk_repair_time'='1200M';
SQL> Alter diskgroup robert01 set attribute 'compatible.asm'='11.1.0';
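To verify such settings, the attributes can be read back from the V$ASM_ATTRIBUTE view; a small sketch (the group number 1 is an assumption):

SQL> select name, value from v$asm_attribute where group_number = 1;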

ASM Fast Disk Resync
Disk loss in ASM can result from a number of reasons, such as loss of controller cards, cable failures, or power-supply errors. In many cases, the disk itself is still intact. To allow for sufficient time to recover from disk failures that do not involve the actual failure of a disk, ASM provides the ASM fast disk resync feature.
By default, when a disk in an ASM disk group fails, the disk is taken offline automatically and will be dropped some 3.6 hours later. As a result, you have only 3.6 hours by default to respond to a disk outage. If you correct the problem and the physical disk media is not corrupted, then ASM fast disk resync will quickly resynchronize the disk when it comes back online, correcting the problem very quickly. You can change the amount of time that Oracle will wait before automatically dropping the disk by setting the disk_repair_time attribute for the individual disk group using the alter diskgroup command, as shown in this example, where we set the disk_repair_time attribute to 18 hours:
SQL> Alter diskgroup dgroup1 set attribute 'disk_repair_time'='18h';

ASM Preferred Mirror Read

Your ASM configuration may involve remote mirroring to disks that are a fair distance away. When some of your disk mirrors are far away then those disks may not be the best set of disks for a given instance to read from. For example, you might have a Real Application Cluster database with local and remote mirrored disks. In this case, you want to have the RAC instances primarily read from the local disks to ensure the best performance. ASM preferred mirror read is designed to indicate to Oracle which disk failgroup is the preferred read disk group.
ASM preferred mirror read is intended for RAC environments and is generally used with clustered ASM instances, although this is not a strict requirement. To take advantage of ASM preferred mirror read, you should configure each disk failure group with a specific, geographically located set of disks. Use the Oracle 11g parameter asm_preferred_read_failure_groups to configure a database instance with a list of preferred disk failure group names to use when that instance accesses ASM disks. The format of the values of the asm_preferred_read_failure_groups parameter is diskgroupname.failuregroupname, where diskgroupname is the name of the disk group that the failure group belongs to and failuregroupname is the preferred failure group's name. Include multiple diskgroup/failgroup names by separating each preferred read group with a comma, as seen in this example:
asm_preferred_read_failure_groups=dgroup1.fdisk2, dgroup2.fdisk2

In the event ASM cannot read from the preferred disk failure group, the non-preferred failure groups will be read instead. To determine whether a given disk belongs to a preferred read failure group, you can use the PREFERRED_READ column of the V$ASM_DISK view.
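For example, a quick check of which discovered disks the instance treats as preferred-read (a minimal sketch against V$ASM_DISK):

SQL> select name, failgroup, preferred_read from v$asm_disk order by failgroup;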


Adding an ASM Disk Group

You use the create diskgroup command to create an ASM disk group,

SQL> CREATE DISKGROUP DATA_AREA NORMAL REDUNDANCY
failgroup diskcontrol1 DISK
'/devices/diska1','/devices/diska2','/devices/diska3','/devices/diska4'
failgroup diskcontrol2 DISK
'/devices/diskb1','/devices/diskb2','/devices/diskb3','/devices/diskb4';

We created a disk group named DATA_AREA with normal redundancy having two failure groups, each containing four disks. ASM mirrors each extent between the two failure groups.




If we used high redundancy, we would need to add a third failure group to the command,
as shown here:
SQL> CREATE DISKGROUP DATA_AREA HIGH REDUNDANCY
failgroup diskcontrol1 DISK
'/devices/diska1','/devices/diska2','/devices/diska3','/devices/diska4'
failgroup diskcontrol2 DISK
'/devices/diskb1','/devices/diskb2','/devices/diskb3','/devices/diskb4'
failgroup diskcontrol3 DISK
'/devices/diskc1','/devices/diskc2','/devices/diskc3','/devices/diskc4';


You can also name the disks being assigned to the ASM disk group using the name
clause of the create diskgroup command. For example, we could use
'/devices/diska1' name diska1, '/devices/diska2' name diska2, '/devices/diska3' name diska3, '/devices/diska4' name diska4 instead of '/devices/diska1','/devices/diska2','/devices/diska3','/devices/diska4' in the above command. Failure to use the name clause will result in each disk receiving its own system-default assigned name.

Dropping an ASM Disk Group
To remove an ASM disk group, you use the drop diskgroup command. By default, if any
files exist in the disk group, ASM will not allow you to drop it unless you use the including
contents clause.
Here is an example of removing an ASM disk group with the drop diskgroup command:
SQL> Drop diskgroup sp_dgroup2;
If the ASM disk group has files in it, use this version:
SQL> Drop diskgroup sp_dgroup2 including contents;

Adding Disks to an ASM Disk Group
As databases grow, you need to add disk space. The alter diskgroup command allows
you to add disks to a given disk group to increase the amount of space available. Adding
a disk to an existing disk group is easy with the alter diskgroup command,
SQL> alter diskgroup cooked_dgroup1 add disk 'c:\oracle\asm_disk\_file_disk3'
name new_disk;
In the preceding example, we did not assign the disk to a specific failure group. As a result, each such disk is assigned to its own failure group when it is added. For example, when we added the disk to the cooked_dgroup1 disk group, a new failure group called cooked_dgroup1_0002 was created, as shown in this output:
SQL> select disk_number, group_number, failgroup from v$asm_disk;
DISK_NUMBER GROUP_NUMBER FAILGROUP
----------- ------------ ------------------------------
          1            0
          0            1 DISKCONTROL1
          1            1 DISKCONTROL2
          2            1 COOKED_DGROUP1_0002

We can add a disk to an existing failure group by using the failgroup parameter, as
shown in this example:
SQL> alter diskgroup cooked_dgroup1 add failgroup DISKCONTROL1
disk 'c:\oracle\asm_disk\_file_disk4' name new_disk;

Removing Disks from an ASM Disk Group
The alter diskgroup command allows you to remove disks from an ASM disk group using
the drop disk parameter. ASM will first rebalance the data on the disks to be dropped,
assuming enough space is available. If insufficient space is available to move the data from
the disk to be dropped to another disk, then an error will be raised. Here is an example
of dropping a disk from a disk group:
SQL> alter diskgroup cooked_dgroup1 drop disk new_disk;

The alter diskgroup command also gives you the option to drop all of the disks assigned to a
given failure group. Use the in failgroup keyword and then indicate the name of the failure group, as shown in this example:
SQL> alter diskgroup cooked_dgroup1 drop disks in failgroup diskcontrol1;

When you drop a disk from a disk group, the operation is asynchronous. Therefore,
when the SQL prompt returns, this does not indicate that the operation has completed. To
determine if the operation has completed, you will need to review the V$ASM_DISK view.
When the disk drop is complete, the HEADER_STATUS column will take on the value of FORMER, as shown in this example:
SQL> select disk_number, header_status from v$asm_disk;
DISK_NUMBER HEADER_STATU
----------- ------------
0 FORMER
1 FORMER
1 MEMBER
2 MEMBER
If the drop is not complete (the V$ASM_DISK column STATE will read dropping), you can
check the V$ASM_OPERATION view and it will give you an idea of how long the operation is expected to take before it is complete. Here is an example query that will provide you with this information:
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;

Undropping Disks from an ASM Disk Group
If you have accidentally dropped a disk, simply use the alter diskgroup command with the undrop disks parameter, as shown here:

SQL> alter diskgroup sp_dgroup2 undrop disks;

To undrop a disk, the drop operation must still be in progress (the STATE column in V$ASM_DISK reads DROPPING); you cannot undrop a disk once the drop has completed.

Resizing Disks in an ASM Disk Group
Suppose an ASM disk group is full and we want to add space to it. We have two options here:
1. Add new disks, depending on the type of disk group / number of failgroups
2. Increase the size of the existing LUN / LV / disk partition

The first approach is pretty straightforward. For example, if we have a NORMAL redundancy ASM disk group DATADG with two failgroups, then we can add one disk to each failure group as follows:
SQL> alter diskgroup DATADG add 
FAILGROUP DATADG_FG1 disk '/dev/raw/raw3' 
FAILGROUP DATADG_FG2 disk '/dev/raw/raw4' ;

This approach does not need any downtime, i.e., disks can be added while ASM and the database are running (the rebalance operation has some overhead, so we should do this when the system is not heavily used).
Now for the second approach:

First we need to extend the LUN, LV, or disk partition so that the OS can see the revised size. But here is the twist: if we are using Oracle 10g, we'll have to bounce the ASM instance (and the database) so that ASM can read the new disk size. This is due to Bug 4110313, so if you try to increase the disk group size using the following command, the command will not have any effect (although the OS can see the revised disk size):

SQL> select name, total_mb, usable_file_mb from v$asm_diskgroup;
NAME TOTAL_MB USABLE_FILE_MB
------------------------------ ---------- --------------
DATADG 69688 3460

SQL> alter diskgroup DATADG resize all;
Diskgroup altered.

SQL> select name, total_mb, usable_file_mb from v$asm_diskgroup;
NAME TOTAL_MB USABLE_FILE_MB
------------------------------ ---------- --------------
DATADG 69688 3460
This problem is fixed in 11g, so we don't have to recycle the ASM instance:
SQL> select name, total_mb, usable_file_mb from v$asm_diskgroup;
NAME TOTAL_MB USABLE_FILE_MB
------------------------------ ---------- --------------
DATADG 69688 3460

SQL> alter diskgroup DATADG resize all;
Diskgroup altered.

SQL> select name, total_mb, usable_file_mb from v$asm_diskgroup;
NAME TOTAL_MB USABLE_FILE_MB
------------------------------ ---------- --------------
DATADG 139320 38276