Tuesday, August 14, 2012

Manual Upgrade From Oracle 9i to 10g

An upgrade is the process of replacing our existing software with a newer version of the same product, for example replacing an Oracle 9i release with an Oracle 10g release. Upgrading our applications usually does not require special tools; our existing reports should look and behave the same in both products, although minor changes may sometimes be seen. An upgrade is done at the software level.

I received a mail from a reader regarding upgrading his database from 9i to 10g. Here, I would like to advise that it is better to upgrade from 9i to 11g rather than from 9i to 10g, because Oracle extended support for 10gR2 ends on 31-Jul-2013 and there are more features available in Oracle 11g. We can upgrade directly to Oracle 11g if our current database is 9.2.0.4 or newer; direct upgrades to 11g are supported from versions 9.2.0.4 (and later 9.2 patchsets), 10.1 and 10.2. Older releases can be upgraded along the following paths:
  • 7.3.3 -> 7.3.4 -> 9.2.0.8 -> 11.1
  • 8.0.5 -> 8.0.6 -> 9.2.0.8 -> 11.1
  • 8.1.7 -> 8.1.7.4 -> 9.2.0.8 -> 11.1
  • 9.0.1.3-> 9.0.1.4 -> 9.2.0.8 -> 11.1
  • 9.2.0.3 (or lower) -> 9.2.0.8 -> 11.1
The Oracle 11g client can access Oracle databases of versions 8i, 9i and 10g.

There are generally four methods to upgrade an Oracle database:
1.) Manual upgrade
2.) Upgrade using the DBUA (Database Upgrade Assistant)
3.) Export/Import
4.) Data copying

Let's have a look at the manual upgrade.


Manual Upgrade :  A manual upgrade consists of running SQL scripts and utilities from a command line to upgrade a database to the new Oracle Database 10g release. While a manual upgrade gives us finer control over the upgrade process, it is more susceptible to error if any of the upgrade or pre-upgrade steps are not followed or are performed out of order. Below are the steps.

1.) Install the Oracle 10g software : Invoke setup.exe (on Windows) or runInstaller (on Unix/Linux) and select "Install software only" to install the Oracle software into a new Oracle home.

2.) Take a full database backup : Take a full backup of the database that is to be upgraded.

3.) Check the invalid objects : Recompile the invalid objects by running the utlrp.sql script as
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
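To record how many objects are still invalid (so the count can be compared again in step 8), a simple query such as the one below can be used :
SQL> select count(*) from dba_objects where status = 'INVALID' ;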

4.) Log in to the 9i home and run utlu102i.sql (the Pre-Upgrade Information Tool) : This script ships with the Oracle 10g home.
SQL> spool pre_upgrd.sql
SQL> @<ORACLE_10G_HOME>/rdbms/admin/utlu102i.sql
SQL> spool off

The above script checks a number of areas to make sure the instance is suitable for upgrade, including
  • Database version
  • Log file sizes 
  • Tablespace sizes 
  • Server options
  • Initialization parameters (updated, deprecated and obsolete)
  • Database components
  • Miscellaneous Warnings 
  • SYSAUX tablespace present
  • Cluster information
The issues indicated by this script should be resolved before a manual upgrade is attempted. Once we have resolved the above warnings, re-run the script once more to cross-check.

5.) Check for the TIMESTAMP WITH TIME ZONE datatype : The time zone files that are supplied with Oracle Database 10g have been updated from version 1 to version 2 to reflect changes in transition rules for some time zone regions. The changes may affect existing data of the TIMESTAMP WITH TIME ZONE datatype. To preserve this TIMESTAMP data for updating according to the new time zone transition rules, we must run the utltzuv2.sql script on the database before upgrading. This script analyzes our database for TIMESTAMP WITH TIME ZONE columns that are affected by the updated time zone transition rules.
SQL> @<ORACLE_10G_HOME>/rdbms/admin/utltzuv2.sql
SQL> select * from sys.sys_tzuv2_temptab;

If the utltzuv2.sql script identifies columns with time zone data affected by a database upgrade, then back up the data in character format before we upgrade the database. After the upgrade, we must update the tables to ensure that the data is stored based on the new rules. If we export the tables before upgrading and import them after the upgrade, the conversion happens automatically during the import.

6.) Shut down the database :
Shut down the database and copy the spfile (or pfile) and the password file from the 9i home to the 10g home.

7.) Upgrade the database : Set the following environment variables for 10g and log in as the "SYS" user. The upgrade takes roughly half an hour to complete. Spool the output to a file so that it can be reviewed afterward.
ORACLE_SID=<sid>
ORACLE_HOME=<10g home>
PATH=<10g path>
sqlplus / as sysdba

SQL> startup upgrade
SQL> spool upgrd_log.sql
SQL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
SQL> spool off

8.) Recompile any invalid objects : Run utlrp.sql again and compare the number of invalid objects with the number noted in step 3; it should be the same or lower.

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

9.) Check the status of the upgrade :
SQL> @$ORACLE_HOME/rdbms/admin/utlu102s.sql
The above script queries DBA_SERVER_REGISTRY to determine the upgrade status and provides information about invalid or incorrectly upgraded components. It also provides the names of scripts to rerun to fix the errors.

10.) Edit the spfile : Create a pfile from the spfile as
SQL>create pfile from spfile ;

Open the pfile and set the COMPATIBLE parameter to 10.2.0.0.0. Then shut down the database and create the new, modified spfile.
SQL>shut immediate
SQL> create spfile from pfile ;
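For illustration only, after the edit the pfile would contain an entry like the one below (all other parameters are left untouched). Keep in mind that once the database has been opened with a higher COMPATIBLE value, it cannot simply be lowered again.
*.compatible='10.2.0.0.0'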

11.) Start the database normally
SQL> startup
and finally configure Oracle Net for the new home and remove the old Oracle 9i software using the OUI.

Reference :: http://docs.oracle.com/cd/B19306_01/server.102/b14238/upgrade.htm


Enjoy    :-)


Wednesday, July 11, 2012

Oracle Pro on OTN

Today, I got the status of Oracle Pro on the OTN site. We always feel good when we achieve an award or some appreciation for good work, and this time it was the same for me.

Working with Oracle is always exciting, challenging and great fun. I am passionate about Oracle and try to learn as much as I can.








Enjoy     :-) 


Control File Parallel Read & Write wait Event


A control file contains information about the associated database that is required for access by an instance, both at startup and during normal operation. It is the Oracle control file(s) that records information about the consistency of a database's physical structures and operational statuses. The database state changes through activities such as adding data files, altering the size or location of datafiles, redo being generated, archive logs being created, backups being taken, SCN numbers changing, or checkpoints being taken.

Why do control file waits occur ?
Control file waits occur for the following reasons :
1.) A server process is updating all copies of the controlfile, i.e. the session is writing physical blocks to all control files at the same time.
2.) The session commits a transaction to a controlfile.
3.) A generic entry in the controlfile is being changed and the new value is written to all controlfiles.
4.) The controlfile resides on a disk that is heavily used, i.e. facing lots of I/Os.


We can check the waits experienced by a session using the v$session_wait view as
SQL> select event, wait_time, p1, p2, p3 from v$session_wait where event like '%control%' ;
Here wait_time is the elapsed time for reads or writes.
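To get an instance-wide picture rather than a single session, the cumulative statistics in v$system_event can also be checked. A minimal sketch (time_waited is in centiseconds; the *_micro columns can be used on 10g and above) :
SQL> select event, total_waits, time_waited from v$system_event where event like 'control file%' order by time_waited desc ;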


Possible steps to reduce this wait :
1.) Reduce the number of controlfile copies to the minimum that still ensures that not all copies can be lost at the same time.
2.) Move the controlfile copies to less saturated storage locations.
3.) Reduce frequent log switches. To find the optimal time and size for log switches, check this post.

In my experience, control file access is governed by activities such as redo logfile switching and checkpointing, so it can only be influenced indirectly by tuning. It rarely shows up in my reports, and it usually resolves itself once I address the other wait metrics, especially "log file sync" and related waits.



Enjoy    :-) 




Friday, June 29, 2012

Configure OEM in Oracle 11gR2 on Windows


Oracle Enterprise Manager (OEM) is the standard monitoring tool for Oracle databases. OEM allows the DBA to easily manage every aspect of the database, even non-traditional tasks like applying patches and scheduling jobs. Sometimes OEM does not work due to some issue; I faced such an issue while re-configuring it and followed the method below. Let's have a look at the issue. All the steps are the same for Windows and Linux/Unix platforms (just change paths and variables where required).

C:\>set ORACLE_SID=orcl
C:\>emca  -deconfig  dbcontrol  db  -repos  drop
STARTED EMCA at Jun 28, 2012 10:52:27 AM
EM Configuration Assistant, Version 11.2.0.0.2 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Enter the following information:

Database SID: orcl
Listener port number: 1521
Password for SYS user:
Password for SYS user: xxxx
Do you wish to continue? [yes(Y)/no(N)]: y
Jun 28, 2012 10:52:53 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at C:\app\Neerajs\cfgtoollogs\emca\orcl\emca_2012_06_28_10_52_24.log.
Jun 28, 2012 10:52:55 AM oracle.sysman.emcp.EMDBPreConfig performDeconfiguration
WARNING: EM is not configured for this database. No EM-specific actions can be performed.
Jun 28, 2012 10:52:56 AM oracle.sysman.emcp.ParamsManager checkListenerStatusForDBControl
WARNING: Error initializing SQL connection. SQL operations cannot be performed
Jun 28, 2012 10:52:56 AM oracle.sysman.emcp.EMReposConfig invoke
INFO: Dropping the EM repository (this may take a while) ...
Jun 28, 2012 11:03:43 AM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully dropped
Enterprise Manager configuration completed successfully
FINISHED EMCA at Jun 28, 2012 11:03:44 AM

Now, before re-creating OEM, clean up any leftover repository objects.

C:\>sqlplus sys/xxxx@orcl as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu Jun 28 11:07:03 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>  drop  user  SYSMAN   cascade ; 
 drop  user  SYSMAN   cascade
             *
ERROR at line 1:
ORA-01918: user 'SYSMAN' does not exist

SQL> drop  user  MGMT_VIEW ;
drop  user  MGMT_VIEW
            *
ERROR at line 1:
ORA-01918: user 'MGMT_VIEW' does not exist

SQL>  drop  role MGMT_USER ;
 drop  role MGMT_USER
            *
ERROR at line 1:
ORA-01919: role 'MGMT_USER' does not exist


Drop all the dependent synonyms of  "sysman" 

SQL> spool  drop_synonym.sql 
SQL> SELECT  'drop public synonym '  || synonym_name || ';'   FROM dba_synonyms  WHERE    table_owner = 'SYSMAN';
no rows selected

If the query returned any rows, run the spooled script.

SQL> @drop_synonym.sql 
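As a quick sanity check (a sketch only), we can verify that no repository objects or synonyms are left behind before re-creating the repository :
SQL> select count(*) from dba_objects where owner = 'SYSMAN' ;
SQL> select count(*) from dba_synonyms where table_owner = 'SYSMAN' ;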


Let's  Configure EM

C:\>set ORACLE_SID=orcl
C:\>set ORACLE_HOME=C:\app\Neerajs\product\11.2.0\dbhome_1
C:\>emca -config dbcontrol db -repos create

STARTED EMCA at Jun 28, 2012 11:08:59 AM
EM Configuration Assistant, Version 11.2.0.0.2 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Enter the following information:
Database SID:  orcl 
Exception in thread "main" oracle.sysman.emcp.exception.DatabaseUnavailableException: Database instance unavailable.
        at oracle.sysman.emcp.DatabaseChecks.throwDBUnavailableException(DatabaseChecks.java:151)
        at oracle.sysman.emcp.DatabaseChecks.checkDbAvailabilityImpl(DatabaseChecks.java:144)
        at oracle.sysman.emcp.DatabaseChecks.checkDbAvailability(DatabaseChecks.java:163)
        at oracle.sysman.emcp.DatabaseChecks.getDbServiceName(DatabaseChecks.java:582)
        at oracle.sysman.emcp.EMConfigAssistant.performConfiguration(EMConfigAssistant.java:1272)
        at oracle.sysman.emcp.EMConfigAssistant.statusMain(EMConfigAssistant.java:574)
        at oracle.sysman.emcp.EMConfigAssistant.main(EMConfigAssistant.java:522)

and the emca log contains the below details : 

oracle.sysman.emcp.util.GeneralUtil initSQLEngineLoacly
CONFIG: ORA-01031: insufficient privileges
oracle.sysman.assistants.util.sqlEngine.SQLFatalErrorException: ORA-01031: insufficient privileges
at oracle.sysman.assistants.util.sqlEngine.SQLEngine.executeImpl(SQLEngine.java:1655)
at oracle.sysman.assistants.util.sqlEngine.SQLEngine.connect(SQLEngine.java:978)
at oracle.sysman.emcp.util.GeneralUtil.initSQLEngineLoacly(GeneralUtil.java:445)
at oracle.sysman.emcp.util.GeneralUtil.initSQLEngine(GeneralUtil.java:637)
at oracle.sysman.emcp.DatabaseChecks.checkDbAvailabilityImpl(DatabaseChecks.java:118)

From the above logs we get two hints: the database instance appears to be unavailable, and ORA-01031 (insufficient privileges). Searching Metalink (My Oracle Support) turns up some useful information, so the possible reasons may be :

1.) The database may not be up.

2.) The database was started with the ORACLE_HOME environment variable set with a trailing '/'. Reset ORACLE_HOME and bounce the database.

3.) Confirm that the database is up. Connect to the database via SQL*Plus and run :-

C:\>sqlplus sys/xxxx@orcl as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu Jun 28 17:25:00 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select instance_name,status,version from v$instance;
INSTANCE_NAME    STATUS       VERSION
---------------- ------------ -----------------
orcl             OPEN         11.2.0.1.0

4.) Confirm whether the ORACLE_HOME environment variable is correct. This involves comparing the current ORACLE_HOME setting with the one that was set the last time the database was started.
a.) To check the current ORACLE_HOME (on Windows) :-
set ORACLE_HOME

b.) To check the ORACLE_HOME which was set at the time the database was last started, connect to SQL*Plus as SYS as SYSDBA and issue the following :-

SQL> var OHM varchar2(100);
SQL> EXEC dbms_system.get_env('ORACLE_HOME', :OHM) ;
PL/SQL procedure successfully completed.
SQL> PRINT OHM
OHM
<Oracle_Home which was set at the time that the database was last started will be returned here>

c.) If the ORACLE_HOME was set wrongly at the time that the database was started (as determined by above steps), then the database will need to be shut down and re-started. To shut the database down, it will be necessary to set the 'wrong' ORACLE_HOME (the one which is returned in the OHM output). After the database has been successfully shut down, then the correct ORACLE_HOME should be set, and the database can be re-started.

5.) Confirm whether it is possible to connect to the database as SYSDBA remotely, using SQL*Plus. As part of the repository creation/drop process, the SYS user needs to be able to connect as SYSDBA. The ORA-01031: insufficient privileges error is returned when the SYS user is unable to connect as SYSDBA. To test this outside of dbconsole, test a remote SQL*Plus connection (one which connects via a tnsnames.ora alias) as SYSDBA, e.g. :-
$ORACLE_HOME/bin>sqlplus sys/<password>@<alias> as sysdba

If the above returns the ORA-01031 error, check the following :-

a.) Check whether the REMOTE_LOGIN_PASSWORDFILE initialization parameter is set correctly in the database.  For the purposes of configuring dbconsole, it must be set to 'EXCLUSIVE'.  This value can be checked by logging into the database via sql*plus and typing "show parameter remote".

SQL> show parameter remote 
NAME                           TYPE      VALUE
------------------------------ --------- ----------
remote_login_passwordfile      string    EXCLUSIVE

If this parameter is not set to 'EXCLUSIVE', the following command can be run (the database must be restarted for the change to take effect, since it uses scope=spfile).
SQL>alter system set remote_login_passwordfile=EXCLUSIVE scope=spfile;

b.) For dbconsole 11.2.0.1 and 11.2.0.2, check whether %ORACLE_HOME%/network/admin/sqlnet.ora (or %TNS_ADMIN%/sqlnet.ora) contains the entry
SQLNET.AUTHENTICATION_SERVICES = (NTS)

In my case the value of SQLNET.AUTHENTICATION_SERVICES was NONE, so I changed it to NTS and was finally able to configure OEM. The rest of the run is as follows :

C:\>set ORACLE_SID=orcl
C:\>set ORACLE_HOME=C:\app\Neerajs\product\11.2.0\dbhome_1
C:\>emca  -config dbcontrol db -repos create -SID  ORCL   -PORT  1521 -ORACLE_HOME  C:\app\Neerajs\product\11.2.0\dbhome_1

STARTED EMCA at Jun 28, 2012 11:51:08 AM
EM Configuration Assistant, Version 11.2.0.0.2 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Enter the following information:
Listener ORACLE_HOME [ C:\app\Neerajs\product\11.2.0\dbhome_1 ]: C:\app\Neerajs\product\11.2.0\dbhome_1
Password for SYS user:
Password for DBSNMP user:
Password for SYSMAN user:
Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):
-----------------------------------------------------------------
You have specified the following settings
Database ORACLE_HOME ................ C:\app\Neerajs\product\11.2.0\dbhome_1
Local hostname ................ ramtech-199
Listener ORACLE_HOME ................ C:\app\Neerajs\product\11.2.0\dbhome_1
Listener port number ................ 1521
Database SID ................ ORCL
Email address for notifications ...............
Outgoing Mail (SMTP) server for notifications ...............
-----------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: y
Jun 28, 2012 11:51:48 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at C:\app\Neerajs\cfgtoollogs\emca\orcl\emca_2012_06_28_11_51_07.log.
Jun 28, 2012 11:51:51 AM oracle.sysman.emcp.EMReposConfig createRepository
INFO: Creating the EM repository (this may take a while) ...
Jun 28, 2012 12:55:16 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully created
Jun 28, 2012 12:55:53 PM oracle.sysman.emcp.EMReposConfig uploadConfigDataToRepository
INFO: Uploading configuration data to EM repository (this may take a while) ...
Jun 28, 2012 1:07:37 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Uploaded configuration data successfully
Jun 28, 2012 1:07:58 PM oracle.sysman.emcp.util.DBControlUtil configureSoftwareLib
INFO: Software library configured successfully.
Jun 28, 2012 1:07:58 PM oracle.sysman.emcp.EMDBPostConfig configureSoftwareLibrary
INFO: Deploying Provisioning archives ...
Jun 28, 2012 1:10:46 PM oracle.sysman.emcp.EMDBPostConfig configureSoftwareLibrary
INFO: Provisioning archives deployed successfully.
Jun 28, 2012 1:10:47 PM oracle.sysman.emcp.util.DBControlUtil secureDBConsole
INFO: Securing Database Control (this may take a while) ...
Jun 28, 2012 1:11:26 PM oracle.sysman.emcp.util.DBControlUtil secureDBConsole
INFO: Database Control secured successfully.
Jun 28, 2012 1:11:27 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Jun 28, 2012 1:14:55 PM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: Database Control started successfully
Jun 28, 2012 1:15:05 PM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: >>>>>>>>>>> The Database Control URL is https://ramtech-199:5500/em <<<<<<<<<<<
Jun 28, 2012 1:15:48 PM oracle.sysman.emcp.EMDBPostConfig invoke
WARNING:
************************  WARNING  ************************
Management Repository has been placed in secure mode wherein Enterprise Manager data will be encrypted.  The encryption key has been placed in the file: C:/app/neerajs/product/11.2.0/dbhome_1/ramtech-199_orcl/sysman/config
/emkey.ora.   Please ensure this file is backed up as the encrypted data will become unusable if this file is lost.
***********************************************************
Enterprise Manager configuration completed successfully
FINISHED EMCA at Jun 28, 2012 1:15:48 PM


Enjoy    :-) 


Tuesday, May 29, 2012

Interested Transaction List ( ITL ) in Oracle

A few days back, I received an e-mail from a reader who had a doubt about the locking mechanism, i.e. related to the ITL. This post covers some good concepts about the ITL. Let's start from the beginning, i.e. from the transaction.

Transaction :  A transaction starts when an update to data such as insert, update or delete occurs (or the intention to do so, e.g. SELECT FOR UPDATE) and ends when the session issues a commit or rollback. Like everything else, a specific transaction should have a name or an identifier to differentiate it from another one of the same type. Each transaction is given a transaction ID.

Locks :  In multi-user systems, many users may update the same information at the same time. Locking allows only one user to update a particular piece of data at a time, so that another session cannot modify the same data. When a user starts a transaction, Oracle automatically locks the data being modified. Oracle uses a different locking method than most other databases; Oracle's locking policy consists of the following :
  • Oracle locks data at the row level on modification only. There is no lock escalation to a block or table level, ever.
  • Oracle never locks data just to read it. There are no locks placed on rows of data by simple reads.
  • A writer of data does not block a reader of data .
  • A writer of data is blocked only when another writer of data has already locked the row it was going after.

 What is Interested Transaction List (ITL) ? 
It is a simple data structure called the "Interested Transaction List" (ITL), a list that maintains information on transactions. The transaction identifier is stored as an entry in the ITL in the header of the data block. The ITL contains several placeholders (or slots) for transactions. When a row in the block is locked for the first time, the transaction places a lock in one of the slots. In other words, the transaction makes it known that it is interested in some rows (hence the term "Interested Transaction List").

When a different transaction locks another set of rows in the same block, that information is stored in another slot and so on. When a transaction ends after a commit or a rollback, the locks are released and the slot which was used to mark the row locks in the block is now considered free .

An ITL slot is required for every transaction; it contains the transaction ID (XID), which is a pointer to an entry in the transaction table of a rollback segment. We can still read the data, but other processes wanting to change the data must wait until the lock is released (commit or rollback). The ITL entry contains the XID, the undo block address (UBA), flags indicating the transaction status (Flag), a lock count (Lck) showing the number of rows locked by this transaction within the block, and the SCN at which the transaction was last updated.

How many slots are typically available in ITL ?
During table creation, the INITRANS parameter defines how many slots are initially created in the ITL. INITRANS is a block-level storage parameter which can be specified while creating an object (table). The INITRANS and MAXTRANS parameters are used to control concurrent access to the same block. When transactions exhaust all the available slots and a new transaction comes in to lock a row, the ITL grows to create another slot. The ITL can grow up to the number defined by the MAXTRANS parameter of the table, provided there is space in the block. Nevertheless, if there is no more room in the block, even if MAXTRANS is high enough, the ITL cannot grow. The maximum value for the MAXTRANS parameter is 255.

Each ITL entry in the block's transaction variable header takes 24 bytes. Though a block can have a maximum of 255 different ITL entries, the ITL allocation is limited by the block size. The database block size plays an important role when allocating the number of initial ITL entries for the block. The rule is that the total size allocated for initial ITLs should be less than 50% of the database block size,
i.e.   sizeof(INITIAL ITLs) < ( 50% of the DATABASE BLOCK SIZE )

BLOCK SIZE (bytes)      NO OF ITL entries allocated in block header
==================      ===========================================
2048                     41
4096                     83
8192                    169

What happens when a transaction does not find a free slot to place its lock information? 
This can occur because either
1.)  the block is so packed that the ITL cannot grow to create a free slot, or 
2.)  the MAXTRANS limit has already been reached. In this case, the transaction that needs to lock a row has to wait until a slot becomes available. This wait is termed an ITL wait and can be seen in the view v$session_wait, where the session waits on an event named "enqueue". For example, if INITRANS is one, there is only one slot in the ITL to start with, and if the block has no free space the ITL cannot grow beyond it.

How to Diagnose ITL Waits :  How do we know that a segment is experiencing ITL waits?
To check for ITL waits, set up the STATISTICS_LEVEL to TYPICAL in init.ora or via ALTER SYSTEM, then examine the segment statistics for the waits.
SQL>select owner ,object_name  from v$segment_statistics where statistic_name='ITL waits' and value> 0 ;

When we suspect that a database is suffering from these waits, query the view v$session_wait. If the event on which the session is waiting is "enqueue", then the session might be experiencing ITL waits. However, enqueue is a very broad event that encompasses many types of locks, so it does not accurately identify ITL waits. When the wait event is experienced, issue the following query against (say) a table TAB1 :

SQL> select s.sid SID, s.serial# Serial#, l.type type, ' ' object_name, lmode held, request request
       from v$lock l, v$session s, v$process p
      where s.sid = l.sid and s.username <> ' ' and s.paddr = p.addr
        and l.type <> 'TM'
        and (l.type <> 'TX' or l.type = 'TX' and l.lmode <> 6)
     union
     select s.sid SID, s.serial# Serial#, l.type type, object_name object_name, lmode held, request request
       from v$lock l, v$session s, v$process p, sys.dba_objects o
      where s.sid = l.sid and o.object_id = l.id1 and l.type = 'TM'
        and s.username <> ' ' and s.paddr = p.addr
     union
     select s.sid SID, s.serial# Serial#, l.type type, '(Rollback='||rtrim(r.name)||')' object_name, lmode held, request request
       from v$lock l, v$session s, v$process p, v$rollname r
      where s.sid = l.sid and l.type = 'TX' and l.lmode = 6
        and trunc(l.id1/65536) = r.usn
        and s.username <> ' ' and s.paddr = p.addr
      order by 5, 6 ;
The output of the query will look something like this:
SID    SERIAL#   TY   OBJECT_NAME          HELD   REQUEST
-----  --------  ---  -------------------  -----  --------
36     8428      TX                        0      4
36     8428      TM   TAB1                 3      0
52     29592     TM   TAB1                 3      0
52     29592     TX   (Rollback=RBS1_6)    6      0

Note how sessions 36 and 52 both have a TM (DML) lock on the table TAB1 in mode 3 (Row Exclusive), but session 52 also holds a TX (Transaction) lock on the rollback segment in mode 6 (Exclusive), and session 36 is waiting for a mode 4 (Share) lock. If this combination of locking occurs, we can be sure that session 36 is waiting for an ITL slot on the table TAB1. Beware of a similar but different diagnosis when two sessions try to insert the same key value (a real lock caused by a primary key violation). In that case, we would also see an additional TX lock on a rollback segment from the session that is waiting; for ITL waits, this TX lock on the rollback segment would not be seen. To know more about the working of the ITL, click here.
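If a segment does turn out to be suffering from ITL waits, a common remedy is to raise INITRANS and rebuild the segment so that existing blocks pick up the new setting. A minimal sketch, assuming the affected table is HR.TAB1 (a hypothetical name) :
SQL> alter table hr.tab1 initrans 10 ;
SQL> alter table hr.tab1 move ;
Note that ALTER TABLE ... INITRANS only affects newly formatted blocks, which is why the MOVE (or an export/import) is needed, and that any indexes on the table become UNUSABLE after the move and must be rebuilt.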

Examining ITL allocation :
1.)  Create a table with INITRANS 4.
SQL> create table test1 (I  number)  INITRANS 4 ; 
2.) Insert a few records for testing purposes. A block dump can be taken later.
SQL> insert into test1 values (10) ; 
SQL> commit ;
Now update the table from other sessions too; those transactions are left in a waiting (hung) state, which keeps their entries active in the ITL.

3.) Find out the block number and the file id for dumping the block :
SQL> select dbms_rowid.rowid_relative_fno(rowid), dbms_rowid.rowid_block_number(rowid) from test1 ;
or use the command below
SQL> select header_file, header_block from dba_segments where owner='HR' and segment_name='TEST1' ;

4.) Dump the data block:
SQL> alter system dump datafile 4 block min 636  block max 636 ;

5.) Open the dump trace file located in USER_DUMP_DEST directory and check the following:
Block dump from disk:
buffer tsn: 4 rdba: 0x0100027c (4/636)
scn: 0x0000.002a1814 seq: 0x02 flg: 0x04 tail: 0x18140602
frmt: 0x02 chkval: 0x1561 type: 0x06=trans data
Hex dump of block: st=0, typ_found=1
Block header dump:  0x0100027c
Object id on Block? Y
seg/obj: 0x1277b  csc: 0x00.2a1372  itc: 10  flg: E  typ: 1 - DATA
 brn: 0  bdba: 0x1000278 ver: 0x01 opc: 0
     inc: 0  exflg: 0
 Itl           Xid                                     Uba                   Flag     Lck          Scn/Fsc   
0x01   0x0001.001.00000380  0x00c01117.0204.07   --U-     2         fsc 0x0000.002a1376
0x02   0x0000.000.00000000  0x00000000.0000.00   ----     0          fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00   ----     0          fsc 0x0000.00000000
0x04   0x0000.000.00000000  0x00000000.0000.00   ----     0           fsc 0x0000.00000000
bdba: 0x0100027c
data_block_dump,data header at 0xe511924

itl          = interested transaction list (list of transactions on the block that have changes on that block)
xid          = transaction id (a set of three numbers – undo segment#, undo slot# and undo record# – separated by periods)
uba          = undo block address (pointer to the transaction's list of undo blocks)
scn/fsc      = system change/commit number of the current transaction
tsn          = tablespace number
rdba         = relative data block address. Its value is "0100027c" in hexadecimal, i.e. 16777852 in decimal
seg/obj      = the object_id. 0x1277b = 75643 in decimal
itc: 10      = number of interested transaction list (ITL) entries in the block
typ: 1       = data –> indicates this is a data block
Flag         = state of the current transaction (C – committed; U – indicates that the particular ITL slot was used)
Lck          = number of locks held by the transaction (rows locked in this block)
csc: 0x00.2a1372 = the cleanout SCN of the block, i.e. the SCN at which block cleanout was last performed

If we use locally managed tablespaces with Automatic Segment Space Management (ASSM), we can more or less forget about these parameters, as Oracle takes care of them for us. In Oracle 10g and above this is much less of a concern; in the past these parameters were far more important than they are now.


Note :

  • The "Lck" column shows a value of 2, meaning two rows in the block are locked by a transaction. But which transaction? To get that answer, note the value under the "Xid" column. It shows the transaction ID – 0x0001.001.00000380. These numbers are in hexadecimal (as indicated by the 0x at the beginning of the number).
  • A transaction is identified by a transaction ID (XID), which is a set of three numbers – undo segment#, undo slot# and undo record# – separated by periods.
  • We can also check all the active transactions in the view v$transaction, where the columns XIDUSN, XIDSLOT and XIDSQN denote the undo segment#, undo slot# and undo record# – the values that make up the transaction ID.
  • The v$transaction view also contains the session address in the SES_ADDR column, which can be joined with the SADDR column of the v$session view to get the session details, as sketched below.
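A minimal sketch of that join (assuming the session has access to these v$ views) :
SQL> select s.sid, s.serial#, s.username, t.xidusn, t.xidslot, t.xidsqn
     from v$transaction t, v$session s
     where s.saddr = t.ses_addr ;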


 References:- http://www.rampant-books.com/


Enjoy     :-) 



Saturday, May 26, 2012

Interpreting Raw Sql Trace File



SQL_TRACE is the main method for collecting SQL execution information in Oracle. It records a wide range of information and statistics that can be used to tune SQL operations. The SQL trace file contains a great deal of information, and each cursor that is opened after tracing has been enabled is recorded in the trace file.
The raw trace file mostly refers to cursor numbers, e.g. PARSING IN CURSOR #3. EXECutes, FETCHes and WAITs are recorded against a cursor, and the information applies to the most recently parsed statement within that cursor. Firstly, let's have a look at "Wait Events".

WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682 blocks=1 obj#=221 tim=506028963546

WAIT  = An event that we waited for.
nam   = What was being waited for. The wait events here are the same as those seen in the view V$SESSION_WAIT.
ela   = Elapsed time for the operation (microseconds).
p1    = P1 for the given wait event.
p2    = P2 for the given wait event.
p3    = P3 for the given wait event.

Example No. 1 : WAIT #6: nam='db file sequential read' ela= 8458 file#=110 block#=63682 blocks=1 obj#=221 tim=506028963546 
The above line can be translated as : completed waiting under cursor no. 6 for "db file sequential read". We waited 8458 microseconds, i.e. approx. 8.5 milliseconds, for a read of file 110, starting at block 63682, for 1 Oracle block of object number 221. The timestamp was 506028963546.
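To see which segment that file#/block# pair belongs to, a lookup such as the one below can be used (a sketch only; the file and block numbers are taken from the wait line above, and scanning dba_extents can be slow on large databases) :
SQL> select owner, segment_name, segment_type from dba_extents where file_id = 110 and 63682 between block_id and block_id + blocks - 1 ;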

Example no.2 : WAIT #1: nam='library cache: mutex X' ela= 814 idn=3606132107 value=3302829850624 where=4 obj#=-1 tim=995364327604
The above line can be translated as : completed waiting under cursor no. 1 for "library cache: mutex X". We waited 814 microseconds, i.e. approx. 0.8 milliseconds, to get an eXclusive library cache mutex with identifier 3606132107, value 3302829850624, location 4. It was not associated with any particular object (obj#=-1). The timestamp was 995364327604.

The trace file also shows the processing of SQL statements. Oracle processes SQL statements as follows :
Stage 1: Create a Cursor
Stage 2: Parse the Statement
Stage 3: Describe Results
Stage 4: Defining Output
Stage 5: Bind Any Variables
Stage 6: Execute the Statement
Stage 7: Parallelize the Statement
Stage 8: Fetch Rows of a Query Result
Stage 9: Close the Cursor
Now let's move to another important term, PARSING IN CURSOR #n. EXECutes, FETCHes and WAITs are recorded against a cursor, and the information applies to the most recently parsed statement within that cursor.

PARSING IN CURSOR# : 
Cursor :  In order for Oracle to process an SQL statement, it needs to create an area of memory known as the context area; this holds the information needed to process the statement. This information includes the number of rows processed by the statement and a pointer to the parsed representation of the statement (parsing an SQL statement is the process whereby information is transferred to the server, at which point the SQL statement is evaluated as being valid).

A cursor is a handle, or pointer, to the context area. Through the cursor, a PL/SQL program can control the context area and what happens to it as the statement is processed. Two important features of a cursor are
1.)  Cursors allow you to fetch and process rows returned by a SELECT statement, one row at a time.
2.)  A cursor is named so that it can be referenced.

Parsing : Parsing is the first step in the processing of any database statement. The PARSE record is accompanied by the cursor number. Let's have a look at the "PARSING IN CURSOR" section of a particular trace file.


PARSING IN CURSOR #2 len=92 dep=0 uid=0 oct=3 lid=0 tim=277930332201 hv=1039576264 ad='15d51e60' sqlid='dsz47ssyzdb68'
select p.PID,p.SPID,s.SID from v$process p,v$session s where s.paddr = p.addr and s.sid = 12
END OF STMT
PARSE#2:c=31250,e=19173,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=836746634,tim=27930332198
EXEC #2:c=0,e=86,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=836746634,tim=77930335666
WAIT #2: nam='SQL*Net message to client' ela= 10 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77930335778
FETCH #2:c=0,e=805,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=836746634,tim=77930336684
WAIT #2: nam='SQL*Net message from client' ela= 363 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77930337227
FETCH #2:c=0,e=31,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=836746634,tim=77930337421
STAT #2 id=1 cnt=1 pid=0 pos=1 obj=0 op='NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=152 card=1)'
STAT #2 id=2 cnt=27 pid=1 pos=1 obj=0 op='MERGE JOIN CARTESIAN (cr=0 pr=0 pw=0 time=156 us cost=0 size=96 card=1)'
STAT #2 id=3 cnt=1 pid=2 pos=1 obj=0 op='NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us cost=0 size=39 card=1)'
STAT #2 id=4 cnt=1 pid=3 pos=1 obj=0 op='FIXED TABLE FIXED INDEX X$KSLWT (ind:1) (cr=0 pr=0 pw=0 time=0 us cost=0 size=26 card=1)'
STAT #2 id=5 cnt=1 pid=3 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=0 us cost=0 size=13 card=1)'
STAT #2 id=6 cnt=27 pid=2 pos=2 obj=0 op='BUFFER SORT (cr=0 pr=0 pw=0 time=78 us cost=0 size=57 card=1)'
STAT #2 id=7 cnt=27 pid=6 pos=1 obj=0 op='FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=130 us cost=0 size=57 card=1)'
STAT #2 id=8 cnt=1 pid=1 pos=2 obj=0 op='FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=0 us cost=0 size=56 card=1)'
WAIT #2: nam='SQL*Net message to client' ela= 7 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77930338248
*** 2012-05-19 15:07:22.843
WAIT #2: nam='SQL*Net message from client' ela= 38291082 driver id=1413697536 #bytes=1 p3=0 obj#=116 tim=77968629417
CLOSE #2:c=0,e=30,dep=0,type=0,tim=77968629737


len   = the number of characters in the SQL statement
dep   = the application/trigger depth at which the SQL statement was executed. dep=0 indicates that it was executed by the client application; dep=1 indicates that the SQL statement was executed by a trigger, the Oracle optimizer, or a space management call; dep=2 indicates that the SQL statement was called from a trigger; dep=3 indicates that the SQL statement was called from a trigger that was itself called from a trigger.
uid   = Schema id under which the SQL was parsed.
oct   = Oracle command type.
lid   = Privilege user id.
tim   = Timestamp.
hv    = Hash id.
ad    = SQLTEXT address.
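The header also records a sqlid, which is the statement's SQL_ID; while the statement is still cached in the shared pool, the full text can be pulled back with a quick sketch like the one below (using the sqlid from the header above) :
SQL> select sql_text from v$sql where sql_id = 'dsz47ssyzdb68' ;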


PARSE #3:  c=15625,  e=177782,  p=2,  cr=3,  cu=0,  mis=1,  r=0,  dep=0,  og=1,  plh=272002086,
tim=276565143470


c     = CPU time (microseconds, rounded to centisecond granularity on 9i and above)
e     = Elapsed time (centiseconds prior to 9i, microseconds thereafter)
p     = Number of physical reads.
cr    = Number of buffers retrieved for CR (consistent) reads.
cu    = Number of buffers retrieved in current mode.
mis   = Cursor missed in the cache.
r     = Number of rows processed.
dep   = Recursive call depth (0 = user SQL, >0 = recursive).
og    = Optimizer goal: 1=All_Rows, 2=First_Rows, 3=Rule, 4=Choose


From the above PARSE line it is clear that the total time taken to parse the statement was roughly 0.18 seconds (e=177782 microseconds) and that 2 physical reads were done.


Bind Variables :  If the SQL statement references bind variables, then the BINDS section associated with the cursor lists a block of text for each bind variable. For each bind variable a number of attributes are listed; the ones we are interested in here are :


mxl      =  the maximum length, i.e. the maximum number of bytes occupied by the variable. E.g. dty=2 and mxl=22 denotes a NUMBER(22) column.
scl       = the scale (for NUMBER columns)
pre      = the precision (for NUMBER columns)
value  = the value of the bind variable
dty      = the datatype.  Typical values are:
1       VARCHAR2 or NVARCHAR2
2       NUMBER
8       LONG
11     ROWID
12     DATE
23     RAW
24     LONG RAW
96     CHAR
112   CLOB or NCLOB
113   BLOB
114   BFILE


EXEC :  Execute a pre-parsed statement. At this point Oracle has all the necessary information and resources, so the statement is executed. For example
EXEC #2:c=0,e=225,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=3684871272,tim=282618239403


Fetch : Fetch rows from a cursor. For example
FETCH #4:c=0,e=8864,p=1,cr=26,cu=0,mis=0,r=1,dep=0,og=1,plh=3564694750,tim=282618267037


STAT :  These lines report explain plan statistics for the numbered cursor. They let us know the 'run time' execution plan. For example
STAT #1 id=1 cnt=7 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=0 pr=0 pw=0 time=0 us cost=2 size=2128 card=1)'

id       = Line of the explain plan which the row count applies to (starts at line 1). This is effectively the row source row count for all row sources in the execution tree.
cnt      = Number of rows for this row source.
pid      = Parent id of this row source.
pos      = Position in explain plan.
obj      = Object id of row source (if this is a base object).
op='...' = The row source access operation.


XCTEND : A transaction end marker. For example  XCTEND rlbk=0, rd_only=1, tim=282636050491
rlbk     = 1 if a rollback was performed, 0 if no rollback (commit).
rd_only  = 1 if the transaction was read only, 0 if changes occurred.


CLOSE : The cursor is closed. For example  CLOSE #4:c=0,e=32,dep=0,type=0,tim=282636050688
c        = CPU time (microseconds, rounded to centisecond granularity on 9i and above)
e        = Elapsed time (centiseconds prior to 9i, microseconds thereafter)
dep      = Recursive depth of the cursor
type     = Type of close operation

Note : Timestamps can be used to determine the time between any two operations.
Reference : Metalink [ID 39817.1]



Enjoy    :-) 



Saturday, May 19, 2012

Tracing using Event 10046


Event 10046 is a well-known feature within the Oracle database that generates detailed information on how a statement is parsed, what the values of bind variables are, and what wait events occurred during a particular session. The 10046 event allows us to track, at a very fine level, the execution of a given set of SQL statements. The 10046 event can be set at both the session level (for tracing of a specific session) and at the database level. It can be set via a parameter file setting, or it can be set dynamically for the database, for our session or even for another session.

Event 10046 is quite helpful for a DBA to determine exactly why the database is performing in a certain way, by documenting how a statement is parsed and noting what wait events occurred during the statement's execution.

Most of us are familiar with Oracle's sql_trace facility, which emits performance information about Oracle parse, execute, fetch, commit, and rollback database calls. Using sql_trace is actually the equivalent of using event 10046 set at level 1.

The event 10046 is internally described as:

SQL> set serveroutput on 
SQL> begin  
   dbms_output.put_line (SQLERRM (-10046)) ; 
end ; 
/

Event 10046 levels
There are five different levels, as follows :
1.) Level 1  is the default. This level traces all activity until the trace session is stopped. It enables the standard SQL_TRACE facility (same as SQL_TRACE=TRUE).

2.) Level 4  provides level 1 tracing and also records the value supplied for each bind variable. Bind variables are the values that the user enters; the code displays these bind variables as :b1, etc. When level 4 is activated, the substituted value for each bind variable is written to the trace file.

3.) Level 8  provides level 1 tracing and also records all database wait events. Database wait events show the reasons when the elapsed time is greater than the CPU time in the tkprof report.

4.) Level 12  provides level 1 tracing in addition to both bind variable values and database wait events.

5.) Level 16  was added in 11g to generate STAT line dumps for each execution. STAT dumping has been amended in 11g so that the lines are not aggregated across all executions but are dumped after each execution.

How to enable event 10046
The following steps are required to enable event 10046 :

1.) Set timed statistics to TRUE
 SQL> alter session set timed_statistics=true ; 
Session altered.

2.) Set the statistics level
SQL> alter session set statistics_level=all ; 
Session altered.

3.) Set max_dump_file_size to UNLIMITED 
SQL> alter session set max_dump_file_size=unlimited ; 
Session altered.

4.) Enable the event 
SQL> alter session set events '10046 trace name context forever, level 12' ; 

We can check the generated trace file in the directory specified by the parameter user_dump_dest
SQL> show parameter user_dump_dest  
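On 11g, an alternative sketch to locate the exact trace file for the current session is to query v$diag_info :
SQL> select value from v$diag_info where name = 'Default Trace File' ;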

Stop Tracing   
If the session has not exited, the trace can be disabled using :
SQL> alter session set events '10046 trace name context off' ; 

Enabling tracing in other sessions
We can enable tracing in another session using the oradebug utility. Here are the steps to enable tracing in another session.

1.) Find the SPID of the session which we want to trace 
SQL> select p.PID,p.SPID,s.SID   from v$process p,v$session s
where s.paddr = p.addr  and s.sid = &SESSION_ID ;   
or 
SQL> select spid from v$process 
where addr=(select paddr from v$session where sid=<enter the SID>); 
We can identify the session by querying the v$session view on attributes such as sid, serial#, username, osuser, machine, server, terminal and program.

2.) Once the OS process id for the process has been determined, log in to SQL*Plus as a DBA and execute the following
c:\> sqlplus / as sysdba
SQL> oradebug setospid <SPID> ;    or
SQL> oradebug setorapid <PID> ; 
SQL> oradebug unlimit ; 
SQL> oradebug event 10046 trace name context forever, level 8 ; 
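While still attached to the process in the same oradebug session, the name of the trace file being written can be displayed with
SQL> oradebug tracefile_name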


Turn off tracing 
SQL> oradebug event 10046 trace name context off ; 

There are various ways to enable and disable this trace and to vary its contents. Here are some ways to enable and disable tracing for another session using DBMS_SYSTEM (si = SID, se = serial#) :
SQL> EXEC DBMS_SYSTEM.set_ev(si=>123, se=>1234, ev=>10046, le=>12, nm=>' ');   <--enable
SQL> EXEC DBMS_SYSTEM.set_ev(si=>123, se=>1234, ev=>10046, le=>0, nm=>' ');   <--disable
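From 10g onwards, the supported interface for the same binds-plus-waits tracing is DBMS_MONITOR; a minimal sketch (the SID and serial# values are placeholders) :
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 1234, waits => TRUE, binds => TRUE);   <--enable
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 1234);   <--disable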

In the next post, I will try to cover the topic "Interpreting Trace Files".


Enjoy         :-)