Wednesday, November 30, 2011

ORA-39082 ".... created with compilation warnings" while Importing



ORA-39082 generally occurs during import. The error message states that the object in the SQL statement following this error was created with compilation errors. If this error occurs for a view, it is possible that a base table of the view is missing. Here is a scenario of the ORA-39082 error:

C:\>impdp  system/xxxx@xe  remap_schema=shaik9sep:shaik9sep11g dumpfile=SHAIK9SEP10G.DMP logfile=shaik9sep10g_import.log 

Import: Release 10.2.0.1.0 - Production on Wednesday, 30 November, 2011 14:40:06
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_04":  system/********@xe  remap_schema=shaik9sep:shaik9sep11g dumpfile=SHAIK9SEP10G.DMP logfile=shaik9sep10g_import.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SHAIK9SEP11G" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SHAIK9SEP11G"."AD_ARCHIVE"                       188.3 MB    1785 rows
. . imported "SHAIK9SEP11G"."T_REPORTSTATEMENT"          54.13 MB    952472 rows
. . imported "SHAIK9SEP11G"."FACT_ACCT"                        23.79 MB   115203 rows
. . imported "SHAIK9SEP11G"."AD_QUERYLOG"                     20.35 MB   78411 rows
. . imported "SHAIK9SEP11G"."T_TRIALBALANCE"                 12.54 MB   55310 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39082: Object type ALTER_FUNCTION:"SHAIK9SEP11G"."DSI_FUNC_PRODUCTREP" created with compilation warnings
ORA-39082: Object type ALTER_FUNCTION:"SHAIK9SEP11G"."INVOICEOPEN" created with compilation warnings
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
ORA-39082: Object type ALTER_PROCEDURE:"SHAIK9SEP11G"."T_INVENTYVALUE_CREATE" created with compilation warnings
ORA-39082: Object type ALTER_PROCEDURE:"SHAIK9SEP11G"."AD_SYNCHRONIZE" created with compilation warnings
Processing object type SCHEMA_EXPORT/VIEW/VIEW
ORA-39082: Object type VIEW:"SHAIK9SEP11G"."M_STORAGE_V" created with compilation warning

All the ORA-39082 errors above are warnings. This error occurs because of the order in which Data Pump imports objects and the dependencies among them. For example, in the case above Data Pump import creates procedures before views; if a procedure depends on a view, we get the ORA-39082 compilation warnings. There are various ways to resolve this issue.

1.) Run utlrp.sql to recompile all invalid objects in the database after the import is complete. This script is in the $ORACLE_HOME/rdbms/admin directory; alternatively, we can use the built-in UTL_RECOMP package or DBMS_UTILITY.COMPILE_SCHEMA. This will usually clean up all the invalid objects, since utlrp.sql compiles objects across all schemas. When objects have been remapped from one schema to another, however, utlrp.sql may not be able to compile them.
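For example, a minimal sketch of option 1, run as SYSDBA after the import (the "?" shorthand expands to ORACLE_HOME in SQL*Plus):

SQL> @?/rdbms/admin/utlrp.sql
SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID';

The second query should return no rows once the recompilation has cleaned everything up.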

2.) After the import is completed, recompile each failing object individually. This is useful when there are only a few errors. The following commands are used to recompile the objects:

SQL> ALTER PACKAGE <SchemaName>.<PackageName> COMPILE;
SQL> ALTER PACKAGE <SchemaName>.<PackageName> COMPILE BODY;
SQL> ALTER PROCEDURE my_procedure COMPILE;
SQL> ALTER FUNCTION my_function COMPILE;
SQL> ALTER TRIGGER my_trigger COMPILE;
SQL> ALTER VIEW my_view COMPILE;
SQL> EXEC DBMS_UTILITY.compile_schema(schema => 'SHAIK9SEP11G');
SQL> EXEC UTL_RECOMP.recomp_serial('SHAIK9SEP11G');

In the case of a synonym, we need to recreate the synonym.

3.) Use the SQLFILE option of impdp to generate the DDL from the export dump, replace the schema name globally, then edit and execute the script from SQL*Plus. This should resolve most of the errors; if some remain, proceed with utlrp.sql. This is one of the better options for dealing with this type of error.
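For instance, a hedged sketch of the SQLFILE approach with the dump file used above (the SQLFILE name here is illustrative; impdp only writes the DDL to the script, it does not execute it):

C:\>impdp system/xxxx@xe dumpfile=SHAIK9SEP10G.DMP sqlfile=shaik9sep10g_ddl.sql

The schema references inside shaik9sep10g_ddl.sql can then be replaced globally and the edited script run from SQL*Plus against the target schema.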

4.) There are bugs that return a similar error during import (i.e., impdp). Check the MOS note "ORA-39082 When Importing Wrapped Procedures [ID 460267.1]".

Check the link below for more information and examples about this error:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:286816015990



Enjoy   :-) 


RMAN Tablespace Point-in-Time Recovery (TSPITR) in Oracle 11gR2


Recovery Manager (RMAN) Automatic TSPITR enables quick recovery of one or more tablespaces in a database to an earlier time without affecting the rest of the tablespaces and objects in the database.

RMAN TSPITR is most useful for the following situations:

  • We want to recover a logical database to a point different from the rest of the physical database, when multiple logical databases exist in separate tablespaces of one physical database. For example, we maintain logical databases in the orders and personnel tablespaces. An incorrect batch job or DML statement corrupts the data in only one of the tablespaces.
  • We want to recover data lost after DDL operations that change the structure of tables. We cannot use Flashback Table to rewind a table to before the point of a structural change such as a truncate table operation.
  • We want to recover a table after it has been dropped with the PURGE option.
  • We want to recover from the logical corruption of a table.
  • We want to recover dropped tablespaces. In fact, RMAN can perform TSPITR on dropped tablespaces even when a recovery catalog is not used.
  • We can also use Flashback Database to rewind data, but we must rewind the entire database rather than just a subset. Also, unlike TSPITR, the Flashback Database feature necessitates the overhead of maintaining flashback logs. The point in time to which you can flash back the database is more limited than the TSPITR window, which extends back to our earliest recoverable backup.

TSPITR was available in earlier releases but had some limitations; for example, we could not recover a dropped tablespace. Oracle 11gR2 performs a fully automated, managed TSPITR. It automatically creates and starts the auxiliary instance and restores the datafiles it requires, including the files belonging to the dropped tablespace. It first performs a recovery of the tablespace on the auxiliary instance and then uses Data Pump and Transportable Tablespace technology to extract and import the tablespace metadata into the original source database.
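The whole operation is driven by a single RMAN command of the following general form (the placeholders are illustrative; the exact command used in this example appears in Step 5 below):

RMAN> recover tablespace <tablespace_name> until scn <scn> auxiliary destination '<work_area>';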

Here we will illustrate the concept of TSPITR with an example.

We will create a user, say "TSPIR", assign it the default tablespace "tspir", and create tables in this tablespace. We then take a full backup of the database and drop the tablespace "tspir". Before dropping it we note the SCN and later use this SCN to perform TSPITR. Below are the steps.

Step 1 :  Clean up any previous failed TSPITR attempt

SQL> exec dbms_backup_restore.manageauxinstance ('TSPITR',1) ;
PL/SQL procedure successfully completed.

Step 2 :  Create tablespace and Users and Create tables

SQL> create tablespace tspir datafile 'C:\app\neerajs\oradata\orcl\tspir.dbf' size 150m autoextend on;
Tablespace created.

SQL> create user tspir identified by tspir
  2  default tablespace tspir
  3  quota unlimited on tspir;
User created.

SQL> grant resource,connect to tspir;
Grant succeeded.

SQL> connect tspir/tspir@orcl
Connected.

SQL> create table test(id number);
Table created.

SQL> insert into test values(12);
1 row created.

SQL> insert into test values(121);
1 row created.

SQL> commit;
Commit complete.

SQL> select * from test;
        ID
----------
        12
       121

SQL> create table emp as select * from user_objects;
Table created.

SQL> select count(*) from emp;
  COUNT(*)
----------
         2

SQL> conn / as sysdba
Connected.

Step  3  :   Take the fresh backup of database 

SQL> host   rman   target   sys@orcl
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Nov 30 14:35:44 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
target database Password:
connected to target database: ORCL (DBID=1296005542)

RMAN> backup database plus archivelog;
Starting backup at 30-NOV-11
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=141 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=1 STAMP=768238310
input archived log thread=1 sequence=4 RECID=2 STAMP=768238310
input archived log thread=1 sequence=5 RECID=3 STAMP=768238311
input archived log thread=1 sequence=6 RECID=4 STAMP=768238314
input archived log thread=1 sequence=7 RECID=5 STAMP=768239453
input archived log thread=1 sequence=8 RECID=6 STAMP=768239455
input archived log thread=1 sequence=9 RECID=7 STAMP=768305386
input archived log thread=1 sequence=10 RECID=8 STAMP=768334227
input archived log thread=1 sequence=11 RECID=9 STAMP=768393025
input archived log thread=1 sequence=12 RECID=10 STAMP=768454251
input archived log thread=1 sequence=13 RECID=11 STAMP=768521484
input archived log thread=1 sequence=14 RECID=12 STAMP=768580566
channel ORA_DISK_1: starting piece 1 at 30-NOV-11
channel ORA_DISK_1: finished piece 1 at 30-NOV-11
piece handle=F:\RMAN_BKP\01MSV6UP_1_1 tag=TAG20111130T143608 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
Finished backup at 30-NOV-11

Starting backup at 30-NOV-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=C:\APP\NEERAJS\ORADATA\ORCL\SYSTEM01.DBF
input datafile file number=00002 name=C:\APP\NEERAJS\ORADATA\ORCL\SYSAUX01.DBF
input datafile file number=00006 name=C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF
input datafile file number=00005 name=C:\APP\NEERAJS\ORADATA\ORCL\EXAMPLE01.DBF
input datafile file number=00003 name=C:\APP\NEERAJS\ORADATA\ORCL\UNDOTBS01.DBF
input datafile file number=00004 name=C:\APP\NEERAJS\ORADATA\ORCL\USERS01.DBF
channel ORA_DISK_1: starting piece 1 at 30-NOV-11
channel ORA_DISK_1: finished piece 1 at 30-NOV-11
piece handle=F:\RMAN_BKP\02MSV70H_1_1 tag=TAG20111130T143705 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:55
Finished backup at 30-NOV-11

Starting backup at 30-NOV-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=15 RECID=13 STAMP=768580741
channel ORA_DISK_1: starting piece 1 at 30-NOV-11
channel ORA_DISK_1: finished piece 1 at 30-NOV-11
piece handle=F:\RMAN_BKP\04MSV746_1_1 tag=TAG20111130T143901 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 30-NOV-11

Starting Control File and SPFILE Autobackup at 30-NOV-11
piece handle=F:\RMAN_BKP\CF\C-1296005542-20111130-01 comment=NONE
Finished Control File and SPFILE Autobackup at 30-NOV-11

Step 4  :  Note the SCN and drop the tablespace 

SQL> select current_scn from v$database;
CURRENT_SCN
-----------------
    5659022

SQL> drop tablespace tspir including contents and datafiles;
Tablespace dropped.
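As a side note, if the SCN had not been captured before the drop, an approximate SCN can be derived from a timestamp (a sketch; the timestamp shown is hypothetical, and the SCN/time mapping is only retained for a limited period):

SQL> select timestamp_to_scn(to_timestamp('30-NOV-11 14:45:00','DD-MON-RR HH24:MI:SS')) as scn from dual;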

Step  5  :  Connect with RMAN and perform TSPITR
Here we use the AUXILIARY DESTINATION clause with the RECOVER TABLESPACE command. The auxiliary destination is an optional disk location that RMAN uses to temporarily store the auxiliary set files. It is used only with an RMAN-managed auxiliary instance; specifying an auxiliary destination with a user-managed auxiliary instance results in an error.

C:\>rman target sys/xxxx@orcl
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Nov 30 14:58:11 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: ORCL (DBID=1296005542)

RMAN>  recover tablespace tspir  until scn 5659022  auxiliary  destination 'F:\';
Starting recover at 30-NOV-11
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='nume'
initialization parameters used for automatic instance:
db_name=ORCL
db_unique_name=nume_tspitr_ORCL
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=F:\
log_archive_dest_1='location=F:\'
#No auxiliary parameter file used
starting up automatic instance ORCL
Oracle instance started
Total System Global Area     292933632 bytes
Fixed Size                     1374164 bytes
Variable Size                100665388 bytes
Database Buffers             184549376 bytes
Redo Buffers                   6344704 bytes
Automatic instance created
List of tablespaces that have been dropped from the target database:
Tablespace tspir
contents of Memory Script:
{
# set requested point in time
set until  scn 5659022;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script
executing command: SET until clause
Starting restore at 30-NOV-11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=59 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece F:\RMAN_BKP\CF\C-1296005542-20111130-01
channel ORA_AUX_DISK_1: piece handle=F:\RMAN_BKP\CF\C-1296005542-20111130-01 tag=TAG20111130T143903
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
output file name=F:\ORCL\CONTROLFILE\O1_MF_7FD0QK8S_.CTL
Finished restore at 30-NOV-11
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
contents of Memory Script:
{
# set requested point in time
set until  scn 5659022;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
set newname for datafile  6 to
 "C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2, 6;
switch clone datafile all;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to F:\ORCL\DATAFILE\O1_MF_TEMP_%U_.TMP in control file
Starting restore at 30-NOV-11
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to F:\ORCL\DATAFILE\O1_MF_SYSTEM_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003 to F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002 to F:\ORCL\DATAFILE\O1_MF_SYSAUX_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00006 to C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF
channel ORA_AUX_DISK_1: reading from backup piece F:\RMAN_BKP\02MSV70H_1_1
channel ORA_AUX_DISK_1: piece handle=F:\RMAN_BKP\02MSV70H_1_1 tag=TAG20111130T143705
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:15
Finished restore at 30-NOV-11
datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=768585055 file name=F:\ORCL\DATAFILE\O1_MF_SYSTEM_7FD0QYNZ_.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=768585056 file name=F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_7FD0QYRF_.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=7 STAMP=768585056 file name=F:\ORCL\DATAFILE\O1_MF_SYSAUX_7FD0QYPG_.DBF
contents of Memory Script:
{
# set requested point in time
set until  scn 5659022;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
sql clone "alter database datafile  6 online";
# recover and open resetlogs
recover clone database tablespace  "TSPIR", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile  1 online
sql statement: alter database datafile  3 online
sql statement: alter database datafile  2 online
sql statement: alter database datafile  6 online
Starting recover at 30-NOV-11
using channel ORA_AUX_DISK_1
starting media recovery
archived log for thread 1 with sequence 15 is already on disk as file D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000015_0768224813.0001
archived log for thread 1 with sequence 16 is already on disk as file D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000016_0768224813.0001
archived log file name=D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000015_0768224813.0001 thread=1 sequence=15
archived log file name=D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000016_0768224813.0001 thread=1 sequence=16
media recovery complete, elapsed time: 00:00:04
Finished recover at 30-NOV-11
database opened
contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace  TSPIR read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
F:\''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
F:\''";
}
executing Memory Script
sql statement: alter tablespace  TSPIR read only
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''
Performing export of metadata...
   EXPDP> Starting "SYS"."TSPITR_EXP_nume":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TSPITR_EXP_nume" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_nume is:
   EXPDP>   F:\TSPITR_NUME_43731.DMP
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TSPIR:
   EXPDP>   C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF
   EXPDP> Job "SYS"."TSPITR_EXP_nume" successfully completed at 16:20:28
Export completed
contents of Memory Script:
{
# shutdown clone before import
shutdown clone immediate
}
executing Memory Script
database closed
database dismounted
Oracle instance shut down
Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_nume" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_nume":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_nume" successfully completed at 16:21:48
Import completed
contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace  TSPIR read write';
sql 'alter tablespace  TSPIR offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
}
executing Memory Script
sql statement: alter tablespace  TSPIR read write
sql statement: alter tablespace  TSPIR offline
sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;
Removing automatic instance
Automatic instance removed
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_TEMP_7FD0Y3PY_.TMP deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_4_7FD0XROZ_.LOG deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_3_7FD0XK9R_.LOG deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_2_7FD0X9RF_.LOG deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_1_7FD0X2LK_.LOG deleted
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_SYSAUX_7FD0QYPG_.DBF deleted
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_7FD0QYRF_.DBF deleted
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_SYSTEM_7FD0QYNZ_.DBF deleted
auxiliary instance file F:\ORCL\CONTROLFILE\O1_MF_7FD0QK8S_.CTL deleted
Finished recover at 30-NOV-11
RMAN>

Step  6  : Check the tablespace status and existence

SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
USERS                          ONLINE
EXAMPLE                        ONLINE
TSPIR                          OFFLINE

We find that the tablespace "TSPIR" is OFFLINE, so we bring the tablespace and its datafile back online.

SQL> alter tablespace tspir online;
Tablespace altered.

SQL> alter database datafile 'C:\app\neerajs\oradata\orcl\tspir.dbf' online;
Database altered.

SQL> select table_name from dba_tables where tablespace_name='TSPIR';
TABLE_NAME
-----------------
TEST
EMP

SQL> select * from tspir.test;
        ID
----------
        12
       121

SQL> select count(*) from tspir.emp;
  COUNT(*)
----------
         2

Hence, we find that both tables have been recovered.



Enjoy     :-) 


ORA-19852: Error creating services for auxiliary instance


This error generally occurs during TSPITR (tablespace point-in-time recovery). When we issue the "recover tablespace ..." command, the error appears as follows:

RMAN> recover tablespace tspir  until scn 5659022 auxiliary destination 'F:\';
Starting recover at 30-NOV-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=64 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='tAbD'
RMAN-00571: ========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===========
RMAN-00571: ========================================================
RMAN-03002:failure of recover command at 11/30/2011 14:56:13
ORA-19852:error creating services for auxiliary instance tAbD (error 0)
ORA-27302: failure occurred at:
ORA-27303:additional information: failed to start instance


This error occurs because a previous failed TSPITR attempt was not cleaned up properly. Therefore, to solve this issue, we have to clean up the failed TSPITR attempt. Run the following procedure to clean up the previous failed auxiliary instance creation:

SQL> exec dbms_backup_restore.manageauxinstance ('TSPITR',1) ;
PL/SQL procedure successfully completed.



Enjoy      :-) 


Monday, November 28, 2011

ORA-7445 Internal Error


An ORA-7445 is a generic error, and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces. Whenever an ORA-7445 error is raised a core file is generated. There may be a trace file generated with the error as well.

Prior to 11g, the core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data. Diagnostic files are written into a root directory for all diagnostic data called the ADR home. Core files in 11g go to the ADR_HOME/cdump directory.
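As a quick check, the diagnostic locations in use can be listed from v$diag_info (a sketch, 11g onward):

SQL> select name, value from v$diag_info where name in ('ADR Home', 'Diag Trace');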
For further investigation, check the following:

1. Check the Alert Log  :    The alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, ORA-4030 errors. The ORA-7445 error can be side effects of the other problems and we should review the first error and associated core file or trace file and work down the list of errors. 

If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If we see a message at the end of the file, "MAX DUMP FILE SIZE EXCEEDED", then the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. Vital diagnostic information could be missing from the file, making it very difficult to discover the root issue. Set MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information.
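A minimal sketch of raising the limit before reproducing the error (adjust the value and scope as needed for the environment):

SQL> alter system set max_dump_file_size = 'UNLIMITED' scope=both;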

2. Search the 600/7445 Lookup Tool  :  Visit My Oracle Support to access the ORA-600/ORA-7445 Lookup tool (Note 7445.1). The lookup tool may lead to applicable content in My Oracle Support on the problem and can be used to investigate the problem with argument data from the error message, or we can pull out key stack pointers from the associated trace file to match against known bugs.

3. “Fine tune” searches in Knowledge Base  :   As the ORA-7445 error indicates an unhandled exception in the Oracle source code, our search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file.

Keep in mind that searches on generic argument data will bring back a large result set . The more we can learn about the environment and code leading to the errors, the easier it will be to narrow the hit list to match our problem.

4. If assistance is required from Oracle  :  Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum, the following:

  • Alert log
  • Associated tracefile(s) or incident package at 11g
  • Patch level information
  • Core file(s)
  • Information about changes in configuration and/or application prior to issues
  • If error is reproducible, a self-contained reproducible testcase: Note.232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors.
  • RDA report or Oracle Configuration Manager information



Enjoy     :-) 




Saturday, November 26, 2011

Estimate the Optimal UNDO Tablespace size in Oracle


We can size the undo tablespace appropriately either by using automatic extension of the undo tablespace or by using the Undo Advisor for a fixed-size tablespace. If we have decided on a fixed-size undo tablespace, the Undo Advisor can help us estimate the needed capacity. The Undo Advisor relies for its analysis on data collected in the Automatic Workload Repository (AWR). It is therefore important that the AWR has adequate workload statistics available so that the Undo Advisor can make accurate recommendations.

Below is a script which estimates the size of the undo tablespace required and also reports the number of "ORA-01555 (snapshot too old)" errors encountered since the last startup of the instance.

SQL> SELECT d.undo_size/(1024*1024)                       "ACTUAL UNDO SIZE [MByte]",
            SUBSTR(e.value,1,25)                          "UNDO RETENTION [Sec]",
            (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
             g.undo_block_per_sec) / (1024*1024)          "NEEDED UNDO SIZE [MByte]"
     FROM   (SELECT SUM(a.bytes) undo_size
             FROM   v$datafile a, v$tablespace b, dba_tablespaces c
             WHERE  c.contents = 'UNDO'
             AND    c.status = 'ONLINE'
             AND    b.name = c.tablespace_name
             AND    a.ts# = b.ts#) d,
            v$parameter e,
            v$parameter f,
            (SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec
             FROM   v$undostat) g
     WHERE  e.name = 'undo_retention'
     AND    f.name = 'db_block_size'
     /
set serveroutput on
set feedback off
set heading off
set lines 132
declare
  cursor get_undo_stat is
     select d.undo_size/(1024*1024)                              "C1",
            substr(e.value,1,25)                                 "C2",
            (to_number(e.value) * to_number(f.value) *
             g.undo_block_per_sec) / (1024*1024)                 "C3",
            round((d.undo_size / (to_number(f.value) *
             g.undo_block_per_sec)))                             "C4"
     from   (select sum(a.bytes) undo_size
             from   v$datafile a, v$tablespace b, dba_tablespaces c
             where  c.contents = 'UNDO'
             and    c.status = 'ONLINE'
             and    b.name = c.tablespace_name
             and    a.ts# = b.ts#) d,
            v$parameter e,
            v$parameter f,
            (select max(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec
             from   v$undostat) g
     where  e.name = 'undo_retention'
     and    f.name = 'db_block_size';
begin
  dbms_output.put_line(chr(10)||chr(10)||chr(10)||chr(10) ||
                       'To optimize UNDO we have two choices :');
  dbms_output.put_line('==========================================' || chr(10));
  for rec1 in get_undo_stat loop
      dbms_output.put_line('A) Adjust UNDO tablespace size according to UNDO_RETENTION :' || chr(10));
      dbms_output.put_line(rpad('ACTUAL UNDO SIZE ',65,'.') || ' : ' ||
                           to_char(rec1.c1,'999999') || ' MEGS');
      dbms_output.put_line(rpad('OPTIMAL UNDO SIZE WITH ACTUAL UNDO_RETENTION (' ||
                           ltrim(to_char(rec1.c2/60,'999999')) || ' MINUTES) ',65,'.') ||
                           ' : ' || to_char(rec1.c3,'999999') || ' MEGS');
      dbms_output.put_line(chr(10));
      dbms_output.put_line('B) Adjust UNDO_RETENTION according to UNDO tablespace size :' || chr(10));
      dbms_output.put_line(rpad('ACTUAL UNDO RETENTION ',65,'.') || ' : ' ||
                           to_char(rec1.c2/60,'999999') || ' MINUTES');
      dbms_output.put_line(rpad('OPTIMAL UNDO RETENTION WITH ACTUAL UNDO SIZE (' ||
                           ltrim(to_char(rec1.c1,'999999')) || ' MEGS) ',65,'.') || ' : ' ||
                           to_char(rec1.c4/60,'999999') || ' MINUTES');
  end loop;
  dbms_output.put_line(chr(10)||chr(10));
end;
/
select 'Number of "ORA-01555 (Snapshot too old)" encountered since the last startup of the instance : ' || sum(ssolderrcnt)
from   v$undostat;


Enjoy   :-) 



How to resize redolog file in oracle


Once, I received an e-mail regarding resizing the redo log files. The sender wanted the easiest way to resize a redo log file, something like 'alter database logfile group 1 '?\redo01.log' resize 100m', or some other trick.

We cannot resize redo log files. We must drop the redo log files and recreate them; this is the only method to resize them. A database requires at least two groups of redo log files, regardless of the number of members. We cannot drop a redo log file if its status is CURRENT or ACTIVE; only once its status has changed to INACTIVE can we drop it.

When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to remove the member from the database structure. After dropping a redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file. In my case I have four redo log files, each 50 MB in size, and I will resize them to 100 MB. Below are the steps to resize the redo log files.

Step 1 : Check the Status of Redo Logfile 
SQL>  select group#,sequence#,bytes,archived,status from v$log;
    GROUP#  SEQUENCE#       BYTES  ARC  STATUS
----------  ---------  ----------  ---  ----------------
         1          5    52428800  YES  INACTIVE
         2          6    52428800  YES  ACTIVE
         3          7    52428800  NO   CURRENT
         4          4    52428800  YES  INACTIVE

Here, we cannot drop the CURRENT and ACTIVE redo log files.

Step  2 :  Forcing a Checkpoint  :
The SQL statement ALTER SYSTEM CHECKPOINT explicitly forces Oracle to perform a checkpoint for either the current instance or all instances. Forcing a checkpoint ensures that all changes to the database buffers are written to the datafiles on disk. A global checkpoint is not finished until all instances that require recovery have been recovered.

SQL> alter system checkpoint global ;
system altered.

SQL> select group#,sequence#,bytes,archived,status from v$log;

    GROUP#  SEQUENCE#       BYTES  ARC  STATUS
----------  ---------  ----------  ---  ----------------
         1          5    52428800  YES  INACTIVE
         2          6    52428800  YES  INACTIVE
         3          7    52428800  NO   CURRENT
         4          4    52428800  YES  INACTIVE
Since the status of groups 1, 2 and 4 is INACTIVE, we will drop the group 1 and group 2 redo log files.

Step  3  :  Drop Redo Log File : 
SQL> alter database drop logfile group 1;
Database altered.

SQL> alter database drop logfile group 2;
Database altered.

SQL>  select group#,sequence#,bytes,archived,status from v$log;
    GROUP#  SEQUENCE#       BYTES  ARC  STATUS
----------  ---------  ----------  ---  ----------------
         3          7    52428800  NO   CURRENT
         4          4    52428800  YES  INACTIVE

Step  4  : Create new redo log files
If we do not delete the old redo log file with an OS command, then creating a new log file with the same name fails with the error below. To solve it, delete the file at the OS level first.

SQL> alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m;
alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m
*
ERROR at line 1:
ORA-00301: error in adding log file 'C:\app\neerajs\oradata\orcl\redo01.log' - file cannot be created
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
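To clear this, remove the leftover file at the OS level before re-issuing the command (a Windows sketch using the path from the error message above):

C:\>del C:\app\neerajs\oradata\orcl\redo01.log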

SQL> alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m;
Database altered.

SQL> alter database add logfile group 2 'C:\app\neerajs\oradata\orcl\redo02.log' size 100m;
Database altered.

SQL>  select group#,sequence#,bytes,archived,status from v$log;
    GROUP#  SEQUENCE#       BYTES  ARC  STATUS
----------  ---------  ----------  ---  ----------------
         1          0   104857600  YES  UNUSED
         2          0   104857600  YES  UNUSED
         3          7    52428800  NO   CURRENT
         4          4    52428800  YES  INACTIVE

Step 5 :  Now drop the remaining two old redo log files
SQL> alter system switch logfile ;
System altered.

SQL> alter system switch logfile ;
System altered.

SQL>  select group#,sequence#,bytes,archived,status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          8  104857600     YES     ACTIVE
         2          9  104857600     NO      CURRENT
         3          7   52428800     YES     ACTIVE
         4          4   52428800     YES     INACTIVE

SQL> alter system checkpoint global;
System altered.

SQL>  select group#,sequence#,bytes,archived,status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          8    104857600     YES     INACTIVE
         2          9    104857600     NO     CURRENT
         3          7     52428800     YES     INACTIVE
         4          4     52428800    YES      INACTIVE

SQL> alter database drop logfile group 3;
Database altered.

SQL> alter database drop logfile group 4;
Database altered.

SQL>  select group#,sequence#,bytes,archived,status from v$log;
    GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- --- ----------------
         1          8  104857600      YES      INACTIVE
         2          9  104857600      NO       CURRENT

Step 6 : Create the remaining new redo log files
SQL> alter database add logfile group 3 'C:\app\neerajs\oradata\orcl\redo03.log' size 100m;
Database altered.

SQL> alter database add logfile group 4 'C:\app\neerajs\oradata\orcl\redo04.log' size 100m;
Database altered.

SQL>  select group#,sequence#,bytes,archived,status from v$log;
    GROUP#  SEQUENCE#       BYTES  ARC  STATUS
----------  ---------  ----------  ---  ----------------
         1          8   104857600  YES  INACTIVE
         2          9   104857600  NO   CURRENT
         3          0   104857600  YES  UNUSED
         4          0   104857600  YES  UNUSED


Enjoy    :-) 


Friday, November 25, 2011

How Often Redo Log file should switch ?


Redo log file switching has a significant impact on database performance. Frequent log switches may slow the database down, while if the log file switches only after a long time there is a greater chance of losing data should the current redo log file become corrupt. Oracle documentation suggests sizing the redo log files so that log switches happen roughly every 15-30 minutes (depending on the architecture and recovery requirements).
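As a quick check, the current switch rate can be derived from the log history (a sketch grouped per hour):

SQL> select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
     from   v$log_history
     group by to_char(first_time,'YYYY-MM-DD HH24')
     order by 1;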

But what happens when there is a bulk load? We cannot resize the redo log files every time, as that would be silly. Generally we do not load data in bulk on a regular basis; it is more often twice or thrice a week. So what should the correct size be?

Here is a very good explanation of this question by "howardjr".

One of my databases has very large logs which are not intended to fill up under normal operation. They are actually big enough to cope with a peak load we get every week. Previously, we had two or three log switches recorded under the one alert log timestamp! Now, they switch every 10 minutes or so, even under the heaviest load.
So big logs are good for slowing things down under load. But I don't want to sit there with 5 hours of redo sitting in my current log during non-peak normal running. Therefore, I set archive_lag_target to 1800 (seconds = 30 minutes), and I know that in the worst possible case, I will only lose 30 minutes of redo.
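A sketch of that setting (value in seconds; a log switch is then forced roughly every 30 minutes even when the current log is nowhere near full):

SQL> alter system set archive_lag_target = 1800 scope=both;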

I see LOADS of advantages for using archive_lag_target even for standalone instances. Actually especially  for standalone instances.  I want logs big enough not to cause rapid log switching. But I have bulk loads. Therefore, I have to have enormous logs to prevent rapid log switching during those times. In fact, on one database I am connected to right now, I have 2GB redo logs which nevertheless manage to switch every 8 minutes on a Friday night. We can imagine the frequency of log switches we had when those logs were originally created at 5MB each! And the number of redo allocation retries.

I'd like 8GB logs to get it down to a log switch every 30 minutes or so on a Friday night, but with multiple members and groups, that's just getting silly. But now I have an enormous log that will take forever and a day to fill up and switch when I'm NOT doing bulk loads. Ordinarily, without a forced log switch, my 2GB log takes 3 days to fill up.

How does FAST_START_MTTR_TARGET affect the redo log file in case of recovery?

If I were to have a catastrophic hardware failure, I could lose my current redo log. Fast_start_mttr_target can't do anything to ameliorate that loss: flushing the dirty buffers to disk regularly doesn't protect my data, actually. In fact, there is no way to recover transactions that are sitting in the current redo log if that log is lost. Therefore, having an enormous log full of hours and hours (in my case, about 72 hours' worth) of redo is a massive data loss risk, and not one I'm prepared to take. Forcing log switches is a good thing for everyone to be able to do, when appropriate, even if they're not using Data Guard and standby databases.

So huge log files are necessary, and a forced log switch is essential thereafter for data security. We can certainly try to minimise the risk: that's what redo log multiplexing is all about. But if we lose all copies of the current log, then we have lost the only copy of that redo, and that means we have lost data.

Frequent checkpoints can help minimise the amount of redo that is vulnerable to loss, but they do nothing to minimise the risk of that loss occurring. Redundant disks (mirroring), redundant controllers, multiplexing: those are the only things that can help protect the current redo log and thus actually reduce the risk of failure occurring in the first place. Frequent checkpointing simply reduces the damage that the loss of all currrent logs would inevitably cause, but it doesn't (and cannot) reduce it to zero. It's therefore not a protection mechanism at all, in the slightest.

Checkpoints set a limit on potential data loss from redo log damage, absolutely they do. But no matter how frequently we checkpoint, we cannot reduce potential data loss to zero, and reducing the potential cost of a disaster should it strike doesn't count as reducing the risk of the disaster happening. Buying car insurance doesn't reduce our risk of having a car accident: it simply means we can pay the bills when the accident eventually happens. Therefore, checkpoints cannot reasonably be called a "current redo logfile protection mechanism". Mirroring, multiplexing and redundant hardware are the only ways to actually protect the current redo log. Safety and performance always have to be traded off against each other, and we cannot realistically propose going for just one or the other without appreciating the impact on the other.


Enjoy    :-) 


"log file sync" while performing COMMIT and ROLLBACK


When the performance of the database slows down, we run ADDM, look at the symptoms, and immediately start changing the system to fix those symptoms. In my case, I found that "commit and rollback" had a major impact on the slowness of the database. The finding said:
Waits on event "log file sync" while performing COMMIT and ROLLBACK operations were consuming significant database time.

The recommended action suggested is:

Investigate application logic for possible reduction in the number of COMMIT operations by increasing the size of transactions. 

Since the application is committing too frequently, we have to reduce the frequent commits. Log file sync waits occur when sessions wait for redo data to be written to disk. Typically this is caused by slow writes or by committing too frequently in the application. Checking the "user commits" statistic in the AWR report can reveal whether the issue is related to frequent committing.
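For a quick look outside AWR, the cumulative wait and commit figures can also be pulled from the dynamic views (a sketch; time_waited in v$system_event is in centiseconds):

SQL> select event, total_waits, time_waited from v$system_event where event = 'log file sync';
SQL> select name, value from v$sysstat where name = 'user commits';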

The following  tips may help to reduce log file sync when writes are slow :

  • Tune LGWR to get good throughput to disk, e.g. do not put redo logs on RAID 5.
  • If there are lots of short-duration transactions, see if it is possible to BATCH transactions together so there are fewer distinct COMMIT operations. Each commit must confirm that the relevant REDO is on disk before it can complete. Although commits can be "piggybacked" by Oracle, reducing the overall number of commits by batching transactions can have a very beneficial effect.
  • Check the size of the log buffer.
  • Check the estimated size of the redo log file, and also check how fast the redo log file is switching.
  • Check if any of the processing can use the COMMIT WRITE NOWAIT option.

In 11g the COMMIT_WRITE parameter is deprecated. It is retained for backward compatibility only and is replaced by the COMMIT_LOGGING and COMMIT_WAIT parameters. For more details check the MOS note [857576.1].

In my case I performed the following steps and found that the performance of the database improved and the impact of "COMMIT and ROLLBACK" became low. Here are the steps.

1.) Estimate the size of the redo log file :  The value of OPTIMAL_LOGFILE_SIZE is expressed in MB. This value changes frequently based on the DML load on the database. Use the query below to calculate the estimated size of the redo log file.

SQL> select target_mttr          "trgt_mttr",
            estimated_mttr       "est_mttr",
            writes_mttr          "wrt_mttr",
            writes_logfile_size  "wrt_lg_size",
            optimal_logfile_size "opt_lg_size"
     from   v$instance_recovery;

 trgt_mttr   est_mttr   wrt_mttr  wrt_lg_size  opt_lg_size
----------  ---------  ---------  -----------  -----------
         0        227          0           52          300


After running this, I found that the redo log files were switching every 3-4 minutes. Hence, I increased the size of the redo log files to 300 MB and checked the log switches again: the redo log file now switches approximately every 20 minutes.

Since I have set sga_target, I cannot change the log buffer size; in my case the log buffer is approximately 5.5 MB. Some experts also suggest increasing the log buffer to 10 MB or more to reduce the impact of "commit and rollback". I have not tested this, though, and will come back to it later.

This is the workaround I performed to reduce the "commit and rollback" impact.


Enjoy    :-) 


Saturday, November 19, 2011

Configuration of Snapshot Standby Database in Oracle 11g


Snapshot standby is a new feature introduced in Oracle 11g. A snapshot standby database is a type of updatable standby database that provides full data protection for a primary database. A snapshot standby database receives and archives, but does not apply, redo data from its primary database. Redo data received from the primary database is applied when the snapshot standby database is converted back into a physical standby database, after discarding all local updates made to the snapshot standby database.

The main benefit of a snapshot standby database is that we can convert the physical standby database into a read-write, real-time clone of the production database and temporarily use this environment for carrying out any kind of development testing or QA work, or to test the impact of a proposed production change on an application.

The best part of this feature is that the snapshot standby database uses Flashback Database technology to create a guaranteed restore point to which the database can later be flashed back. All the features of flashback are inherent in the snapshot standby.

Here we will configure the snapshot standby database

Step 1 : Create the physical standby database 
Create the physical standby database .

Step 2 :  Enable the Flashback parameters

SQL>  alter system set db_recovery_file_dest_size=4G  scope=both  ; 
System altered.

SQL> alter system set db_recovery_file_dest='D:\standby\fra\'  scope=both ; 
System altered.

SQL> show  parameter  db_recovery
NAME                                            TYPE                   VALUE
-----------------------------      -----------      -------------------------
db_recovery_file_dest                string                  D:\standby\fra\
db_recovery_file_dest_size         big integer              4G

Step 3  :  Stop the media recovery process 
SQL> alter database recover managed standby database cancel;
Database altered.

Step 4 : Ensure that the database is mounted, but not open.
SQL> shut immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup mount

Step 5  :  Create guaranteed restore point
SQL> create restore point snapshot_rspt guarantee flashback database;
Restore point created.

Step 6  :  Perform the conversion to snapshot standby database
SQL> alter database convert to snapshot standby ;
Database altered.

SQL> select name,open_mode,database_role from v$database;
NAME           OPEN_MODE            DATABASE_ROLE
---------        --------------        --------------------------
NOIDA           MOUNTED              SNAPSHOT STANDBY

SQL> alter database open;
Database altered.

SQL> select name,db_unique_name ,open_mode,database_role from v$database;
NAME      DB_UNIQUE_NAME    OPEN_MODE          DATABASE_ROLE
-------   ----------------------    ---------------      ------------------------
NOIDA         gurgoan                  READ WRITE        SNAPSHOT STANDBY

Since the database is now in read-write mode, we can make changes (for example, alter some tuning parameters), check their effect, and later flash back all of these changes when converting back to a physical standby.

SQL> select name,guarantee_flashback_database  from  v$restore_point;
NAME                                                                                            GUA
-----------------------------------------------------------------      -------
SNAPSHOT_STANDBY_REQUIRED_11/18/2011 20:41:01            YES
SNAPSHOT_RSPT                                                                 YES

While the original physical standby database has now been converted to a snapshot standby, changes continue to happen on the primary database, and those changes are shipped to the standby site but not yet applied. They accumulate on the standby site and are applied after the snapshot standby database is converted back to a physical standby database.
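A rough way to see the redo piling up on the snapshot standby is to compare received and applied log sequences (a sketch run on the standby):

SQL> select max(sequence#) as last_received,
            max(decode(applied,'YES',sequence#)) as last_applied
     from   v$archived_log;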

Step  7  :  Convert snapshot standby to physical standby 
SQL> shut immediate 
SQL> startup mount

SQL> alter database convert to physical standby ;
Database altered.

SQL>shut immediate
SQL> startup mount

SQL> alter database recover managed standby database disconnect from session;
Database altered.

SQL> alter database recover managed standby database cancel;
Database altered.

SQL> alter database open;
Database altered.

SQL> select name,open_mode,db_unique_name,database_role from v$database;
NAME      OPEN_MODE        DB_UNIQUE_NAME      DATABASE_ROLE
-------    --------------     -----------------------    ----------------------
NOIDA    READ ONLY          gurgoan                     PHYSICAL STANDBY

SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.

Hence, we are finally back to a physical standby database.


Enjoy    :-) 


Thursday, November 17, 2011

ORA-16606: unable to find property


This error occurs because the apply service state is inconsistent or the named property does not exist. In my case, I had configured the Data Guard broker and it was working fine, but whenever I issued the command below, the error occurred:

DGMGRL> show configuration verbose noida
ORA-16606: unable to find property "noida"

While the command below works fine:

DGMGRL> show configuration verbose

Configuration            - dgnoida
Protection Mode       : MaxPerformance
Databases                 : noida - Primary database
                                : delhi - Physical standby database
Fast-Start Failover   : DISABLED
Configuration Status : SUCCESS


To solve this problem, I cross-checked the archive destination and its state on the primary database and found that log_archive_dest_state_2 was deferred. I enabled the destination state, then disabled and re-enabled the broker configuration. Now it works fine.
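A sketch of re-enabling the destination on the primary (destination number as shown above):

SQL> alter system set log_archive_dest_state_2 = enable scope=both;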

DGMGRL> disable configuration
Disabled.
DGMGRL> enable configuration
Enabled.

DGMGRL> show database verbose noida

Database - noida
Role:                        PRIMARY
Intended State          TRANSPORT-ON
Instance(s)               noida
Properties:
    DGConnectIdentifier                      = 'noida'
    ObserverConnectIdentifier             = ''
    LogXptMode                                = 'ASYNC'
    DelayMins                                    = '0'
    Binding                                         = 'optional'
    MaxFailure                                   = '0'
    MaxConnections                          = '1'
    ReopenSecs                                = '300'
    NetTimeout                                 = '30'
    RedoCompression                      = 'DISABLE'
    LogShipping                               = 'ON'
    PreferredApplyInstance              = ''
    ApplyInstanceTimeout                = '0'
    ApplyParallel                              = 'AUTO'
    StandbyFileManagement            = 'AUTO'
    ArchiveLagTarget                      = '0'
    LogArchiveMaxProcesses         = '4'
    LogArchiveMinSucceedDest     = '1'
    DbFileNameConvert                 = ''
    LogFileNameConvert                = ''
    FastStartFailoverTarget             = ''
    StatusReport                             = '(monitor)'
    InconsistentProperties                = '(monitor)'
    InconsistentLogXptProps          = '(monitor)'
    SendQEntries                           = '(monitor)'
    LogXptStatus                           = '(monitor)'
    RecvQEntries                          = '(monitor)'
    HostName                              = 'TECH-199'
    SidName                                = 'noida'
    StaticConnectIdentifier            = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=TECH-199)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=noida_DGMGRL)(INSTANCE_NAME=noida)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'D:\archive\'
    AlternateLocation                    = ''
    LogArchiveTrace                    = '0'
    LogArchiveFormat                 = 'ARC%S_%R.%T'
    TopWaitEvents                      = '(monitor)'

Database Status                        :  SUCCESS


Enjoy    :-) 


How to Drop Data Guard Configuration in oracle 11g


Once, while configuring the Data Guard broker, I faced ORA-16625 and ORA-16501. This error occurs because the broker rejects an operation requested by the client when the database required to execute that operation is not reachable from the database where the request was made. If the request modifies the configuration, the request must be processed by the copy of the broker running on an instance of the primary database.

A few days ago I configured the standby database "RED" and the broker, and later dropped it. The next time I tried configuring the broker, the error occurred as follows:

DGMGRL> create configuration 'dgnoida'
> as primary database is 'noida'
> connect identifier is 'noida';
Error: ORA-16501: the Data Guard broker operation failed
Error: ORA-16625: cannot reach database "red"
Failed.

To solve this issue, I removed the Data Guard broker configuration and then recreated it. The steps to drop the configuration are as follows:

Step 1 : Stop the standby data guard broker process  
( On Standby )

SQL>show parameter dg_broker
NAME                               TYPE                          VALUE
------------------------   -----------     ----------------------------------------------
dg_broker_config_file1        string             C:\APP\NEERAJS\PRODUCT\11.2.0\
                                                                DBHOME_1\DATABASE\DR1NOIDA.DAT
dg_broker_config_file2       string              C:\APP\NEERAJS\PRODUCT\11.2.0\
                                                                DBHOME_1\DATABASE\DR2NOIDA.DAT
dg_broker_start                  boolean                        True

SQL> alter system set dg_broker_start=false;
System altered.

Step 2 : Defer the archive log destination state
(On Primary )
SQL> select dest_id,destination,status from v$archive_dest where target='STANDBY';
DEST_ID      DESTINATION     STATUS
--------       ---------------     ----------
   2                delhi                 VALID

SQL> alter system set log_archive_dest_state_2=defer ;
System altered.

SQL> select dest_id,destination,status from v$archive_dest where target='STANDBY';

DEST_ID      DESTINATION     STATUS
--------       ---------------     ----------
   2                delhi                 DEFERRED


Step 3 : On both systems, rename or drop the broker metadata files

SQL> show parameter dg_broker

NAME                          TYPE        VALUE
----------------------------  ----------  ------------------------------
dg_broker_config_file1        string      C:\APP\NEERAJS\PRODUCT\11.2.0\
                                          DBHOME_1\DATABASE\DR1NOIDA.DAT
dg_broker_config_file2        string      C:\APP\NEERAJS\PRODUCT\11.2.0\
                                          DBHOME_1\DATABASE\DR2NOIDA.DAT
dg_broker_start               boolean     FALSE


Delete or rename the files DR1NOIDA.DAT and DR2NOIDA.DAT (the names and locations reported by the dg_broker_config_file parameters) on both the primary and the standby.
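
On Windows this can be done from the command prompt. Here is a minimal sketch, assuming the file names and path shown by dg_broker_config_file1/2 above; the .old suffix is just an illustrative choice, and the standby host may use different file names, so check show parameter dg_broker on both sides first :

rem Run on the primary host; repeat on the standby for its own broker metadata files
cd C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE
ren DR1NOIDA.DAT DR1NOIDA.DAT.old
ren DR2NOIDA.DAT DR2NOIDA.DAT.old

Alternatively, when the broker is still running and both databases are reachable, the configuration can usually be dropped straight from DGMGRL with the REMOVE CONFIGURATION command instead of deleting the files by hand.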


Enjoy       :-) 


Step-By-Step Configuration Of Data Guard Broker in Oracle 11g

As we have already discussed the Data Guard broker and its benefits in an earlier post, here we will configure the Data Guard broker. The steps are as follows :

Primary Database  =  Noida
Standby Database  =  Delhi

Step 1 :  Check the Data Guard Broker process 
SQL> sho parameter dg_broker
NAME                                   TYPE             VALUE
-----------------                    ----------          ----------
dg_broker_start                boolean          FALSE

Step 2  : Start the Data Guard Broker Process on Primary database   
SQL>alter system set dg_broker_start=true scope=both;
System altered.

Step 3 : Check DG_BROKER on standby database and start it 
SQL> sho parameter dg_broker
NAME                                    TYPE             VALUE
-----------------                      ----------         ----------
dg_broker_start                  boolean         FALSE

SQL>alter system set dg_broker_start=true scope=both ;
System altered.

Step 4 :  Edit the listener.ora file
Edit the listener.ora file to add a static service entry whose GLOBAL_DBNAME is db_unique_name_DGMGRL.db_domain, on both the primary and the standby databases. To determine this value, let's first check the db_domain parameter.

SQL> show parameter db_domain
NAME                              TYPE               VALUE
--------------                  -----------         --------------
db_domain                      string

Since the value of db_domain is null, GLOBAL_DBNAME = noida_DGMGRL for the primary database and GLOBAL_DBNAME = delhi_DGMGRL for the standby. The primary listener.ora entry is as follows :

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = noida_DGMGRL)
      (ORACLE_HOME = C:\app\neerajs\product\11.2.0\dbhome_1)
      (SID_NAME = noida)
    )
  )
Similarly, edit the listener.ora file on the standby database; a sketch is given below.
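
For reference, here is a sketch of the corresponding static entry on the standby side. GLOBAL_DBNAME and SID_NAME follow from the configuration above, while the ORACLE_HOME path is only an assumed example and must be replaced with the standby's actual Oracle home :

# Static broker entry on the standby host (ORACLE_HOME path is an assumed example)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = delhi_DGMGRL)
      (ORACLE_HOME = D:\app\stand\product\11.2.0\dbhome_1)
      (SID_NAME = delhi)
    )
  )

After editing listener.ora on either host, reload or restart the listener (for example with lsnrctl reload) so that the new static service entry takes effect.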

Step 5 : Create the Data Guard broker configuration
C:\> dgmgrl
DGMGRL for 32-bit Windows: Version 11.2.0.1.0 - Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.

DGMGRL> connect sys/xxxx@noida
Connected.

DGMGRL> create configuration 'dgnoida'
> as primary database is 'noida'
> connect identifier is noida ;
Configuration "dgnoida" created with primary database "noida" .

Once the configuration is created, check its status :

DGMGRL> show configuration
Configuration            - dgnoida
Protection Mode       : MaxPerformance
Databases                : noida - Primary database
Fast-Start Failover    : DISABLED
Configuration Status : DISABLED

Step 6 :  Add the standby database to the broker configuration
DGMGRL>  add database 'delhi' as
> connect identifier is delhi
> maintained as physical ;
Database "delhi" added

DGMGRL> show configuration
Configuration             -  dgnoida
Protection Mode         :  MaxPerformance
Databases                   :  noida - Primary database
                                    : delhi - Physical standby database
Fast-Start Failover      :  DISABLED
Configuration Status    :  DISABLED

Step 7 : Enable the configuration
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration
Configuration              - dgnoida
Protection Mode          : MaxPerformance
Databases                   : noida - Primary database
                                   : delhi - Physical standby database
Fast-Start Failover       : DISABLED
Configuration Status    : SUCCESS

Step 8 : View the Primary and Standby database properties 

DGMGRL> show database verbose noida
Database                 - noida
Role                        : PRIMARY
Intended State         : TRANSPORT-ON
Instance(s)              :  noida
Properties:
DGConnectIdentifier                = 'noida'
ObserverConnectIdentifier       = ''
LogXptMode                         = 'ASYNC'
DelayMins                              = '0'
Binding                                   = 'optional'
MaxFailure                             = '0'
MaxConnections                    = '1'
ReopenSecs                           = '300'
NetTimeout                            = '30'
RedoCompression                  = 'DISABLE'
LogShipping                           = 'ON'
PreferredApplyInstance          = ''
ApplyInstanceTimeout            = '0'
ApplyParallel                          = 'AUTO'
StandbyFileManagement         = 'AUTO'
ArchiveLagTarget                   = '0'
LogArchiveMaxProcesses      = '4'
LogArchiveMinSucceedDest  = '1'
DbFileNameConvert              = ''
LogFileNameConvert             = ''
FastStartFailoverTarget          = ''
StatusReport                          = '(monitor)'
InconsistentProperties            = '(monitor)'
InconsistentLogXptProps       = '(monitor)'
SendQEntries                        = '(monitor)'
LogXptStatus                        = '(monitor)'
RecvQEntries                        = '(monitor)'
HostName                            = 'TECH-199'
SidName                              = 'noida'
StaticConnectIdentifier          = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=TECH-199)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=noida_DGMGRL)(INSTANCE_NAME=noida)(SERVER=DEDICATED)))'
StandbyArchiveLocation       = 'D:\archive\'
AlternateLocation                 = ''
LogArchiveTrace                 = '0'
LogArchiveFormat               = 'ARC%S_%R.%T'
TopWaitEvents                    = '(monitor)'
Database Status                   = SUCCESS

DGMGRL> show database verbose delhi

Database                    - delhi
Role                         :  PHYSICAL STANDBY
Intended State            :  APPLY-ON
  Transport Lag          :   0 seconds
  Apply Lag               :   0 seconds
  Real Time Query     :   ON
  Instance(s)              :   delhi
  Properties:
    DGConnectIdentifier                = 'delhi'
    ObserverConnectIdentifier       = ''
    LogXptMode                          = 'SYNC'
    DelayMins                              = '0'
    Binding                                   = 'OPTIONAL'
    MaxFailure                            = '0'
    MaxConnections                    = '1'
    ReopenSecs                          = '300'
    NetTimeout                           = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                           = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                          = 'AUTO'
    StandbyFileManagement        = 'AUTO'
    ArchiveLagTarget                  = '0'
    LogArchiveMaxProcesses     = '4'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert             = 'C:\app\neerajs\oradata\noida\, D:\app\stand\oradata\, E:\oracle\, D:\app\stand\oradata\'
    LogFileNameConvert           = 'C:\app\neerajs\oradata\noida\, D:\app\stand\oradata\'
    FastStartFailoverTarget        = ''
    StatusReport                        = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps    = '(monitor)'
    SendQEntries                      = '(monitor)'
    LogXptStatus                      = '(monitor)'
    RecvQEntries                     = '(monitor)'
    HostName                          = 'TECH-284'
    SidName                            = 'delhi'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=TECH-284)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=delhi_DGMGRL)(INSTANCE_NAME=delhi)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'D:\app\stand\archive\'
    AlternateLocation                    = ''
    LogArchiveTrace                    = '0'
    LogArchiveFormat                  = 'ARC%S_%R.%T'
    TopWaitEvents                       = '(monitor)'
Database Status                       :   SUCCESS

DGMGRL>


Enjoy        :-)