
Troubleshooting Oracle Apps DBA Scripts

Query based export/import using datapump
#########################################
QUERY BASED EXPORT/IMPORT USING DATAPUMP
#########################################
Here is the scenario/requirement: the output of the query below has to be exported and then imported into a different schema in a non-production database.
select *
from inv.mtl_mat_trans
where transaction_date >= to_date('29-JUN-2013','DD-MON-YYYY')
and transaction_type_id in (13,26,24,41,32,18,14,29,31,99);
Steps to be performed:
[oracle@host]$ cat expdp_q.par
DIRECTORY = EXPDIR
DUMPFILE  = expdp%U.dmp
FILESIZE  = 5G
LOGFILE   = expdp_query.log
SCHEMAS   = INV
INCLUDE   = TABLE:"IN ('MTL_MAT_TRANS')"
QUERY     = INV.MTL_MAT_TRANS:"WHERE transaction_date >= TO_DATE('29-JUN-2013','DD-MON-YYYY') and transaction_type_id in (13,26,24,41,32,18,14,29,31,99)"
[oracle@host]$ cat impdp_q.par
DIRECTORY        = EXPDIR
DUMPFILE         = expdp%U.dmp
LOGFILE          = impdp_q.log
REMAP_SCHEMA     = INV:SUPPORT
REMAP_TABLESPACE = APPS_TS_TX_DATA:SUPPORT_DATA
CONTENT          = DATA_ONLY
Note: with CONTENT=DATA_ONLY the target table must already exist in the SUPPORT schema.
Example:
expdp system/***** parfile=expdp_q.par
impdp system/***** parfile=impdp_q.par
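A quick sanity check after the import is to compare row counts on both sides; a minimal sketch, assuming the target table SUPPORT.MTL_MAT_TRANS was empty before the load:
-- On the source database: count the rows matched by the export QUERY clause.
select count(*)
from inv.mtl_mat_trans
where transaction_date >= to_date('29-JUN-2013','DD-MON-YYYY')
and transaction_type_id in (13,26,24,41,32,18,14,29,31,99);
-- On the target database: the table lands in SUPPORT because of REMAP_SCHEMA;
-- the two counts should match.
select count(*) from support.mtl_mat_trans;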
TOP SEGMENTS OF AN ORACLE DATABASE
####################################################
TOP 5 LARGE SEGMENTS (OBJECT SIZE)
####################################################
Use the query below to identify the top 5 segments in an Oracle database by size.
col SEGMENT_NAME for a30
col owner for a20
col SEGMENT_TYPE for a20
SELECT * FROM
(
select
   owner,
   SEGMENT_NAME,
   SEGMENT_TYPE,
   BYTES/1024/1024/1024 GB,
   TABLESPACE_NAME
from
   dba_segments order by 4 desc
) WHERE
ROWNUM <= 5;
OWNER    SEGMENT_NAME              SEGMENT_TYPE              GB TABLESPACE_NAME
-------- ------------------------- ------------ --------------- ---------------
ABC      POI_ATNL_KIT_DATA_KT      TABLE        634.56091308594 USERS
ABC      POS_ATNL_DATA_BKP_KIT     TABLE              477.84375 USERS
ABC      POS_ADDTNL_DATA_ADD_KT_A  TABLE         140.8427734375 USERS
XYZ      AT_DW_SALES_RPT           TABLE              114.28125 USERS
YXAD     P_DATA_UDTC_BKLD          TABLE        103.59356689453 USERS
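If a per-owner view is more useful, the same dba_segments source can be ranked with an analytic function; a sketch showing the top 3 segments per owner:
select owner, segment_name, segment_type, gb, tablespace_name
from (
  select owner, segment_name, segment_type,
         bytes/1024/1024/1024 gb, tablespace_name,
         row_number() over (partition by owner order by bytes desc) rn
  from dba_segments
)
where rn <= 3
order by owner, gb desc;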
Determine the process which is locking an Oracle account with incorrect password
Here is the procedure to determine which process is locking an Oracle account through repeated wrong passwords: search the audit trail (DBA_AUDIT_SESSION / DBA_AUDIT_TRAIL) for records with RETURNCODE 1017, which indicates a failed logon.
column username format a15
column userhost format a25
column terminal format a15
column timestamp format a15
column action_name format a15
select username, userhost, terminal, timestamp, action_name from sys.dba_audit_trail where returncode = 1017;
USERNAME        USERHOST                  TERMINAL        TIMESTAMP       ACTION_NAME
--------------- ------------------------- --------------- --------------- ---------------
appsread        <HOSTNAME>                                02-APR-14       LOGON
appsread        <HOSTNAME>                                02-APR-14       LOGON
appsread        <HOSTNAME>                                02-APR-14       LOGON
appsread        <HOSTNAME>                                02-APR-14       LOGON
appsread        <HOSTNAME>                                02-APR-14       LOGON
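To pin down the offending client process, the audit trail also records the OS user and OS process id (the OS_PROCESS column of 11g DBA_AUDIT_TRAIL); a sketch using the same failed-logon filter, with APPSREAD as the example account:
select os_username, userhost, terminal, os_process, timestamp
from sys.dba_audit_trail
where returncode = 1017
and upper(username) = 'APPSREAD'
order by timestamp;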
##############################
HIGH WATER MARK USAGE (HWM)
##############################
To reclaim space by resizing a datafile, first check its high water mark (HWM); the file can only be shrunk down to that mark. We had a critical space issue on a data warehouse environment: we identified candidate datafiles with the query below and resized them to reclaim space.
Note: this is a temporary fix; plan for additional storage.
set verify off
column file_name format a60 word_wrapped
column smallest format 9999,990 heading "Smallest|Size|Poss."
column currsize format 9999,990 heading "Current|Size"
column savings  format 9999,990 heading "Poss.|Savings"
set pages 100
break on report
compute sum of savings on report
column value new_val blksize
select value from v$parameter where name = 'db_block_size';
SELECT FILE_NAME,  CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SMALLEST,
CEIL( BLOCKS*&&BLKSIZE/1024/1024) CURRSIZE,
CEIL( BLOCKS*&&BLKSIZE/1024/1024) - CEIL((NVL(HWM,1)*&&BLKSIZE)/1024/1024 ) SAVINGS
FROM DBA_DATA_FILES DBADF, ( SELECT FILE_ID, MAX(BLOCK_ID+BLOCKS-1) HWM FROM DBA_EXTENTS
where tablespace_name ='USERS' GROUP BY FILE_ID ) DBAFS
WHERE DBADF.TABLESPACE_NAME='USERS' and DBADF.FILE_ID = DBAFS.FILE_ID(+) and
(CEIL( BLOCKS*&&BLKSIZE/1024/1024) - CEIL( (NVL(HWM,1)*&&BLKSIZE)/1024/1024 )) > 300;
                                                               Smallest
                                                                   Size   Current     Poss.
FILE_NAME                                                         Poss.      Size   Savings
------------------------------------------------------------ --------- --------- ---------
+DATA_PRD/PROD/datafile/users_data.363.825564829                 31,396    31,744       348
+DATA_PRD/PROD/datafile/users_data.1042.866689707                16,076    16,512       436
                                                                                  ---------
sum                                                                                     784
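The same HWM calculation can generate the resize statements directly; a sketch for the USERS tablespace (run it after the db_block_size query above so &&BLKSIZE is set, review each generated statement, and keep some headroom above the HWM):
SELECT 'alter database datafile '''||DBADF.FILE_NAME||''' resize '||
       CEIL((NVL(HWM,1)*&&BLKSIZE)/1024/1024)||'M;' resize_cmd
FROM DBA_DATA_FILES DBADF,
     (SELECT FILE_ID, MAX(BLOCK_ID+BLOCKS-1) HWM
        FROM DBA_EXTENTS
       WHERE TABLESPACE_NAME='USERS'
       GROUP BY FILE_ID) DBAFS
WHERE DBADF.TABLESPACE_NAME='USERS'
  AND DBADF.FILE_ID = DBAFS.FILE_ID(+);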
How to check whether a patch was applied via hotpatch or not in R12.2.3
########################################################
TO CHECK WHETHER A PATCH WAS APPLIED VIA HOTPATCH OR NOT
########################################################
set pagesize 200;
set linesize 160;
column adop_session_id format 999999999999;
column bug_number format a15;
column status format a15;
column applied_file_system_base format a23;
column patch_file_system_base format a23;
column adpatch_options format a15;
column node_name format a15;
column end_date format a15;
column clone_status format a15;
select ADOP_SESSION_ID, BUG_NUMBER, STATUS, APPLIED_FILE_SYSTEM_BASE, PATCH_FILE_SYSTEM_BASE, ADPATCH_OPTIONS, NODE_NAME, END_DATE, CLONE_STATUS
from ad_adop_session_patches where BUG_NUMBER='19677937' order by end_date desc;

ADOP_SESSION_ID BUG_NUMBER      STATUS          APPLIED_FILE_SYSTEM_BAS PATCH_FILE_SYSTEM_BASE  ADPATCH_OPTIONS NODE_NAME       END_DATE        CLONE_STATUS
--------------- --------------- --------------- ----------------------- ----------------------- --------------- --------------- --------------- ---------------
             32 19677937        Y               /u01/app/oracle/prod/fs                         hotpatch        <HOSTNAME>      27-MAR-14

The output above shows that patch 19677937 was applied in adop session 32, in hotpatch mode, on the date shown under END_DATE.
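Since ADPATCH_OPTIONS records the mode, the same table can also list every patch applied in hotpatch mode; a sketch:
select adop_session_id, bug_number, node_name, end_date
from ad_adop_session_patches
where adpatch_options like '%hotpatch%'
order by end_date desc;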
Logfile Locations in R12.2.3
Please find below the log file locations for the Admin server, oacore, oafm, forms and forms-c4ws managed servers.
Admin server - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/AdminServer/logs/AdminServer.log
oacore logfile - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/oacore_server1/logs/oacore_server1.log
oacore out file - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/oacore_server1/logs/oacore_server1.out
oacore diagnostic log - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/oacore_server1/logs/oacore_server1-diagnostic.log
oafm logfile - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/oafm_server1/logs/oafm_server1.log
oafm outfile - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/oafm_server1/logs/oafm_server1.out
oafm diagnostic log - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/oafm_server1/logs/oafm_server1-diagnostic.log
forms server log - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/forms_server1/logs/forms_server1.log
forms server access log - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/forms_server1/logs/access.log
forms server out file - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/forms_server1/logs/forms_server1.out
forms server diagnostic log - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/forms_server1/logs/forms_server1-diagnostic.log
forms-c4ws_server1 log - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/forms-c4ws_server1/logs/forms-c4ws_server1.log
forms-c4ws_server1 out file - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/forms-c4ws_server1/logs/forms-c4ws_server1.out
forms-c4ws_server1 diagnostic log - $FMW_HOME/user_projects/domains/EBS_domain_$TWO_TASK/servers/forms-c4ws_server1/logs/forms-c4ws_server1-diagnostic.log
How to find database growth month by month
##################################
DATABASE GROWTH MONTH BY MONTH
##################################
The query below groups datafile space by creation month, which approximates growth (it reflects when datafiles were added).
select to_char(CREATION_TIME,'RRRR') year, to_char(CREATION_TIME,'MM') month, round(sum(bytes)/1024/1024/1024) GB
from   v$datafile
group by  to_char(CREATION_TIME,'RRRR'),   to_char(CREATION_TIME,'MM')
order by   1, 2;
YEAR MO              GB
---- -- ---------------
2011 09              74
2012 06            4579
2012 07             334
2012 08             523
2012 09             652
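Wrapping the same query in an analytic SUM gives the cumulative database size over time; a sketch (as above, this tracks when datafiles were created, so it only approximates real growth):
select year, month, gb,
       sum(gb) over (order by year, month) cumulative_gb
from (
  select to_char(creation_time,'RRRR') year,
         to_char(creation_time,'MM') month,
         round(sum(bytes)/1024/1024/1024) gb
  from v$datafile
  group by to_char(creation_time,'RRRR'), to_char(creation_time,'MM')
)
order by year, month;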
#################################################################
ORA-29702: ERROR OCCURRED IN CLUSTER GROUP SERVICE OPERATION
#################################################################
This error came up while starting a test instance in NOMOUNT. The test database was cloned from a production RAC (cluster) database, so the test Oracle Home still had the RAC option linked in; relinking the binaries with rac_off resolves it.
-bash-3.2$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Sun Feb 9 20:03:36 2014
Connected to an idle instance.
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
SQL> shut immediate;
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
SQL> exit
-bash-3.2$ cd $ORACLE_HOME/rdbms/lib
-bash-3.2$ pwd
/u01/app/oracle/testdb/rdbms/lib
-bash-3.2$ make -f ins_rdbms.mk rac_off
rm -f /u01/app/oracle/testdb/lib/libskgxp11.so
cp /u01/app/oracle/testdb/lib//libskgxpg.so /u01/app/oracle/testdb/lib/libskgxp11.so
rm -f /u01/app/oracle/testdb/lib/libskgxn2.so
cp /u01/app/oracle/testdb/lib//libskgxns.so \
              /u01/app/oracle/testdb/lib/libskgxn2.so
/usr/bin/ar d /u01/app/oracle/testdb/rdbms/lib/libknlopt.a kcsm.o
/usr/bin/ar cr /u01/app/oracle/testdb/rdbms/lib/libknlopt.a /u01/app/oracle/testdb/rdbms/lib/ksnkcs.o
-bash-3.2$ make -f ins_rdbms.mk ioracle
chmod 755 /u01/app/oracle/testdb/bin
 - Linking Oracle
rm -f /u01/app/oracle/testdb/rdbms/lib/oracle
gcc -o /u01/app/oracle/testdb/rdbms/lib/oracle -m64 -L/u01/app/oracle/testdb/rdbms/lib/ \
    -L/u01/app/oracle/testdb/lib/ -L/u01/app/oracle/testdb/lib/stubs/ -Wl,-E \
    /u01/app/oracle/testdb/rdbms/lib/opimai.o ... (remainder of the long linker command line trimmed)
test ! -f /u01/app/oracle/testdb/bin/oracle ||\
           mv -f /u01/app/oracle/testdb/bin/oracle /u01/app/oracle/testdb/bin/oracleO
mv /u01/app/oracle/testdb/rdbms/lib/oracle /u01/app/oracle/testdb/bin/oracle
chmod 6751 /u01/app/oracle/testdb/bin/oracle
-bash-3.2$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.3.0 Production on Sun Feb 9 20:10:48 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 2137886720 bytes
Fixed Size                  2230072 bytes
Variable Size             452987080 bytes
Database Buffers         1660944384 bytes
Redo Buffers               21725184 bytes
SQL> exit
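Once the instance is up, v$option can confirm that the RAC option is really disabled in the relinked home; the expected VALUE is FALSE after relinking with rac_off:
SQL> select parameter, value from v$option where parameter = 'Real Application Clusters';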
ORA-01450: maximum key length (3215) exceeded
This error occurred while rebuilding an index online:
06:19:09 [SYS][TEST]>> ALTER index PRS.UK_HEADER rebuild online;
ALTER index PRS.UK_HEADER rebuild online
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3215) exceeded
Solution:
The issue was fixed by rebuilding the index without the ONLINE clause; an online rebuild maintains a journal table that adds overhead on top of the index key, which is what pushed it past the 3215-byte limit here.
06:24:30 [SYS][TEST]>> ALTER index PRS.UK_HEADER rebuild;
Index altered.
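Before attempting a rebuild, you can check how close the index key is to the limit by summing the column lengths from dba_ind_columns; a sketch, using the owner and index name from the example above:
select index_name, sum(column_length) key_bytes
from dba_ind_columns
where index_owner = 'PRS'
and index_name = 'UK_HEADER'
group by index_name;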
RMAN-08137 on Primary Database although Archive Destination to Standby is deferred
RMAN does not delete archived logs on the primary database when the state of the archive destination to the standby database is set to 'defer'.
When trying to delete the archived logs, this error is raised:
RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
If an archive destination to a standby database is deferred, the primary database still considers the standby as existing but temporarily unavailable, e.g. for maintenance. This can happen if you stop log transport services from the Data Guard broker or manually defer the state of the archive destination.
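To see which archive destinations are currently deferred (and therefore still holding archived logs back), a quick check against v$archive_dest:
SQL> select dest_id, dest_name, status, destination
     from v$archive_dest
     where status <> 'INACTIVE';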
SOLUTION:
------------------
As long as the archive destination (log_archive_dest_n) is still set, the primary database considers the standby as still existing and preserves the archived logs so it can perform gap resolution when the destination becomes valid again.
There are situations where this is not wanted, e.g. the standby database was activated or removed but you keep the archive destination because you want to rebuild the standby later. In this case you can set the hidden parameter "_deferred_log_dest_is_valid" to FALSE (default TRUE), which makes deferred archive destinations count as completely unavailable, so archived logs are no longer preserved for them. It is a dynamic parameter and can be set this way:
SQL> alter system set "_deferred_log_dest_is_valid" = FALSE scope=both;
NOTE: this parameter was introduced in Oracle Database 11.2.0.x. In earlier versions you have to unset the log_archive_dest_n parameter pointing to the remote standby database to make the primary consider it completely unavailable. There is also a patch on top of 11.1.0.7 for some platforms that backports this parameter: patch 8468117.
Ref.: RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process (MOS Doc ID 1380368.1)

Resolve minimum archive gap between PRIMARY and STANDBY

A physical standby database stays in sync with the primary through continuous apply of archived logs shipped from the primary database.
When the number of logs missing on the standby is small (10-15 or so), you can simply copy the missing log files from the primary to the standby using SCP/FTP and then register them on the standby to resolve the gap.
Please use below query to find out archive gap on Standby:
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
(ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#, FIRST_TIME) IN
  (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#, FIRST_TIME) IN
  (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;
 
Thread  Last Sequence Received  Last Sequence Applied  Difference
------  ----------------------  ---------------------  ----------
     1                    9545                   9535          10

SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES';
MAX(SEQUENCE#)
--------------
          9535
COPY THE ARCHIVELOG FILES FROM PRIMARY TO STANDBY (e.g. using SCP).
REGISTER THE LOGFILES AT STANDBY:
SQL> alter database register logfile '/log_location/log_file_n.arc';
Repeat the same process for all the log files which are missing at standby.
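Once registered, media recovery should pick the logs up automatically; a quick verification sketch on the standby:
SQL> select max(sequence#) from v$archived_log where applied = 'YES';
SQL> select process, status, sequence# from v$managed_standby;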


Resolve huge archive gap between PRIMARY and STANDBY

A physical standby database stays in sync with the primary through continuous apply of archived logs from the primary database.
When the gap on the standby is huge (say more than 500 logs), rebuilding the standby database from scratch is one option.
Instead of rebuilding the standby database, as an enhancement from 10g onward, an incremental backup created with BACKUP INCREMENTAL ... FROM SCN can be used to refresh the standby database.
Please use below query to find out archive gap on Standby:
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
(ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#, FIRST_TIME) IN
  (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#, FIRST_TIME) IN
  (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;
Thread  Last Sequence Received  Last Sequence Applied  Difference
------  ----------------------  ---------------------  ----------
     1                    8254                   7954         300
Find the SCN on the PRIMARY:
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
  242671761
Find the SCN on the STANDBY:
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
  223771173
You can clearly see the standby is far behind the primary.
Stop and shutdown the managed standby apply process:
SQL> alter database recover managed standby database cancel;
Database altered.
Shutdown the standby database
SQL> shut immediate
On the primary, take an incremental backup from the standby's current SCN (223771173):
RMAN> run {
  allocate channel c1 type disk format '/backup/%U.bkp';
  backup incremental from scn 223771173 database;
}
On the primary, create a new standby controlfile:
SQL> alter database create standby controlfile as '/backup/for_standby.ctl';
Database altered.
Copy the standby controlfile to the STANDBY server, place it at the locations given by the control_files parameter, and bring the standby instance up with it:
SQL> startup nomount
SQL> alter database mount standby database;
Connect to RMAN on the STANDBY and catalog the copied backup pieces:
$ rman target /
RMAN> catalog start with '/backup';
PERFORM RECOVER:
RMAN> recover database;
Note: if RMAN stops at the end with an error about a missing archived log, that is expected here; the changes come from the incremental backup (RECOVER DATABASE NOREDO can also be used).
Start managed recovery process:
SQL> alter database recover managed standby database disconnect from session;
Database altered.

Check the SCNs on the primary and standby; they should now be close to each other.

To check log lag

On the Data Guard (standby) server:
ps -ef|grep -i MRP
ps -ef | grep pmon
sqlplus '/as sysdba'
select name,open_mode,database_role from v$database;

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied",
(ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference" FROM (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME )
IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY
WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL WHERE ARCH.THREAD# = APPL.THREAD# ORDER BY 1;

archive log list;
exit;

cd /oracle/PROD/db/apps_st/archive
Confirm the archived log is present in the directory:
ls -ltr *17985*    (if missing, copy it from Primary to DR)
1_17985_785817078.dbf 1_17986_785817078.dbf 1_18000_785817078.dbf 1_18001_785817078.dbf 1_18002_785817078.dbf 1_18003_785817078.dbf 1_18004_785817078.dbf

Alternatively, restore the archived logs from an RMAN backup:
RUN {
allocate channel d1 type disk;
restore archivelog from logseq = 17985 until logseq = 17986;
restore archivelog from logseq = 18000 until logseq = 18004;
release channel d1;
}

DATABASE SWITCHOVER ACTIVITIES

In the Primary database [APDBA/APDBABPRD], perform 3-4 log switches at 01:00 hrs EST and check the level 0 backup status.
Check that the level 0 backup and its validation have completed in Primary APDBA and APDBAB, and check for block corruption.
Before validation, check for block corruption using v$database_block_corruption and v$backup_corruption.
To recover a block with RMAN:
RMAN> blockrecover datafile <file#> block <block#>;
Alter system switch logfile;
Alter system switch logfile;
Alter system switch logfile;
Alter system switch logfile;
In the Primary database, make a note of the current log sequence and the oldest archived sequence:
archive log list;
Archive destination            /ora/APDBAB/arclogs/arch
Oldest online log sequence     185110
Next log sequence to archive   0
Current log sequence           185112
In the corresponding Standby node, check the log lag and the archive sequence applied:
a.       select * from v$archive_gap;  or
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#
ORDER BY 1
/
b.       Archive log list;
If all the archived logs are in sync between the Primary and Standby databases, disable archive shipping on the Primary.
Primary - Standby DR break step:
alter system set log_archive_dest_state_2='DEFER' scope=both;
Confirm the change by executing:
show parameter log_archive_dest_state_2
The above step disables log shipping from Primary to Standby. Double-check that no shipping occurs by doing a few log switches on the PRIMARY.
Comment out the PRIMARY (Production) node entries in /etc/hosts.
On the DR DB server, comment out the entries below in /etc/hosts:
192.168.233.56   apdba.global.internal apdba
192.168.223.80   apdba1.global.internal apdba1
Shutdown the standby database and take a cold backup of the database and the Oracle Home.
Ensure to take a backup of the control files and redo log files for safety. Back up configuration files such as the pfile, tnsnames.ora and listener.ora under $HOME.
All backups should be placed under /APDBA/rman and /APDBAB/rman only.
cd /ora/APDBA/home/product
APDBA:- nohup tar -cvpf - ./11202 2> /ora/drtemp/APDBA/RDBMS_HOME/OH.log  | gzip -c > /ora/drtemp/APDBA/RDBMS_HOME/11202.tar.gz &
Note: this step needs to be done after Functional Testing, so do not run it now before switchover; it is done after switching back to primary. Refer to page 13.
cd /ora/APDBAB/home/product
APDBAB:- nohup tar -cvpf - ./11.2.0 2> /ora/drtemp/APDBAB/RDBMS_HOME/OH.log  | gzip -c > /ora/drtemp/APDBAB/RDBMS_HOME/11202.tar.gz &
Cold Backup Script:-
APDBA:- /ora/drtemp/APDBA/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBA.rman
APDBAB:- /ora/drtemp/APDBAB/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBAB.rman
In the secondary database, adjust the archive destination status.
N/A - this step is not required at this point in time as it is a partial role transition exercise.
In the standby database, perform recovery operations with the available archived redo log files:
1.       startup nomount;
2.       alter database mount standby database;
3.       alter database recover managed standby database cancel;
4.       alter database recover managed standby database finish force;
Enable flashback database and create restore points in the secondary DB before converting it to primary. This step ensures the DB can be flashed back after the testing. Also, log_archive_dest is commented out so that the archives generated are located in db_recovery_file_dest.
1.  Ensure the database is in archive log mode.
2.  Set the parameters in the pfile for the respective environments:
Comment out log_archive_dest_1, log_archive_dest, FAL_SERVER and FAL_CLIENT in both environments (new APDBA & APDBAB).
db_recovery_file_dest_size=100G
For APDBADR:
db_recovery_file_dest='/ora/drtemp/bov_fb'
For APDBABDR:
db_recovery_file_dest='/ora/drtemp/pic_fb'
db_flashback_retention_target=2880
alter system set db_recovery_file_dest='/ora/drtemp/<SID>' scope=spfile;
alter system set db_recovery_file_dest_size=100G scope=spfile;
3.  Enable Flashback Database
Shutdown immediate;
Startup mount;
alter database flashback on;
alter database open;
alter system set db_flashback_retention_target=2880;
To check whether Flashback Database is enabled:
select flashback_on from v$database;
Make a note of the SCN after enabling flashback for both environments:
select to_char(standby_became_primary_scn) from v$database;
Create a guaranteed restore point to flash back to when required, and validate it using item 2 below:
1.       create restore point <restore point name>  guarantee flashback database;
2.       select name, scn, time, database_incarnation#,guarantee_flashback_database,storage_size   from v$restore_point;
3.       alter session set nls_date_format='dd/mm/yy hh24:mi:ss';
4.       select oldest_flashback_scn,oldest_flashback_time from v$flashback_database_log;
Once the above statements complete successfully, convert the physical standby to the primary database role.
Database Switchover Steps (only Standby to Primary):-
1.       alter database commit to switchover to primary;
2.       shutdown immediate;
3.       startup (ensure the pfile is intact; the spfile is used)
Note: - Make sure not to change the DBNAME and DBID after the switchover, otherwise the restore point won't be visible.
Update the TNS parameters in the new primary database for the new standby database location.
N/A - this step is not required at this point in time as it is a partial role transition exercise.
In the new primary database, complete the database configuration steps as mentioned.
As the APPS user, execute the below:
1.       exec fnd_net_services.remove_system('<SID>');   (use APDBAB or APDBA as applicable)
2.       commit;
3.       exec fnd_conc_clone.setup_clean;
As the DB owner of the new primary database, validate the context file and then run autoconfig to complete the configuration for primary operations.
If the Production environments are kept online, execute Rapid Clone with port pool 30 to avoid connections to the Production environments at any point.
Otherwise, execute autoconfig as mentioned below:
$ORACLE_HOME/appsutil/scripts/<context>/adautocfg.sh
Restart the listener for the new primary database:
lsnrctl stop <LISTENER_NAME>
lsnrctl start <LISTENER_NAME>
Drop all the DB links (for this iteration only).
Use the attached script to drop all the DB links, and validate that no DB links exist afterwards. The drop statements can be generated as below (note that a private link must be dropped while connected as its owning schema):
select 'drop database link '||db_link||';' from dba_db_links where owner <> 'PUBLIC';
select 'drop public database link '||db_link||';' from dba_db_links where owner = 'PUBLIC';
APPLICATIONS FAILOVER ACTIVITIES
The application binaries are already in sync on the middle tiers; re-run rsync from the Secondary Application nodes (apdba1 and apdba2) to bring the files in line with the Primary (rsync has already been run on both DR Application nodes).
Before running rsync, make sure passwordless SSH authentication is enabled.
APDBAB:
Before running rsync, enable passwordless SSH authentication between the primary and standby Application servers:
SSH Password Less Authentication
=================================
1)[apprd@apdbab1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/apprd/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/apprd/.ssh/id_rsa.
Your public key has been saved in /home/apprd/.ssh/id_rsa.pub.
The key fingerprint is:
a6:16:81:1b:ac:d0:db:ed:2b:e2:31:0e:b0:d9:47:fc apprd@apdbab1.global.internal
2) ssh-copy-id to the remote machine: apprd@apdba2.global.internal
If that does not work, create an authorized_keys file under $HOME/.ssh on the remote machine and paste the content of id_rsa.pub into it:
[abprd@apdbab2 ~]$ ssh-copy-id abprd@apdba3.global.internal
abprd@apdba3.global.internal's password:
Now try logging into the machine, with "ssh 'abprd@apdba3.global.internal'", and check in:
.ssh/authorized_keys
The rsync command is as follows:
nohup rsync -rav -e ssh abprd@apdba3.global.internal:/ora/APDBAB/appl /ora/APDBAB &
APDBA:
nohup rsync -rav -e ssh apprd@apdba2.global.internal:/ora/APDBA/appl /ora/APDBA &
Ensure the Secondary Application node entries in /etc/hosts are commented out.
Comment out the entries below:
DR APDBA Middle Tier DR :
122.198.202.13  apdba2.global.internal      apdba2
122.198.233.56   apdba.global.internal      apdba
DR APDBAB Middle Tier DR:
122.198.202.14  apdba3.global.internal      apdba3
122.198.223.80   apdba1.global.internal      apdba1
Take a backup of tnsnames.ora, listener.ora, $CONTEXT_FILE, the DBC file and adcfgclone.pl, to be on the safer side.
An "env_backup_Sep2015" directory has been created under each "ap" user's $HOME directory. The files are already backed up except adcfgclone.pl.
Modify adcfgclone.pl to skip starting the services as part of Rapid Clone. Change the line
my $isSkipStartingServices = "false";
to
my $isSkipStartingServices = "true";
Execute adcfgclone.pl appsTier and input the right values.
SID/TWO_TASK is to be set as APDBADR (8 chars) or APDBABDR (8 chars). Ensure port pools 30 and 40 are used for APDBA and APDBAB respectively.
Update the host names in the fnd_concurrent_requests table:
APDBAB:
update apps.fnd_concurrent_requests set logfile_node_name ='apdbab2.global.internal',outfile_node_name ='apdbab2.global.internal' where logfile_node_name ='apdba3.global.internal' and outfile_node_name ='apdba3.global.internal';
APDBA:
update apps.fnd_concurrent_requests set logfile_node_name ='apdbab1.global.internal',outfile_node_name ='apdbab1.global.internal' where logfile_node_name ='apdba2.global.internal' and outfile_node_name ='apdba2.global.internal';
Perform the post-clone steps.
Check that APPLCSF, APPLTMP and APPLPTMP are set correctly. Validate the instance-specific profiles if necessary.
Post-clone printer setup:
Check the list of printers that need to be configured on the server.
Submit an Active Users/Responsibilities request with the print option to validate.
Refer to 1.2.4.
Post-clone concurrent requests setup: place all the requests on hold and release the ones mentioned in the attached notepad.
Command to place the requests on hold:
update applsys.fnd_concurrent_requests set hold_flag='Y' where phase_code in ('R','P','I');
The requests will be released by the Functional Team.
Workflow Setup in DR Servers (new Primary DB)
Set the mailer outbound thread count to 0 and update the override address for the workflow mailer to apdba.blogspot@apdba.com from the back end using the queries below:
select parameter_value from fnd_svc_comp_param_vals where parameter_id = (select parameter_id from fnd_svc_comp_params_tl where display_name = 'Test Address');
update fnd_svc_comp_param_vals
set parameter_value = 'apdba.blogspot@apdba.com'
where parameter_id = (select parameter_id
from fnd_svc_comp_params_tl
where display_name = 'Test Address');
commit;
select parameter_value from fnd_svc_comp_param_vals where parameter_id = (select parameter_id from fnd_svc_comp_params_tl where display_name = 'Test Address');
Bounce all the services and release the environments.
Release them to the functional team for testing, along with the latest tnsnames.ora files from both new primary environments.
Shutdown the standby database and take a cold backup of the database and the Oracle Home.
Note: only after testing is confirmed by the Functional Team and the customer side.
Ensure to take a backup of the control files and redo log files for safety. Back up configuration files such as the pfile, tnsnames.ora and listener.ora under $HOME.
All backups should be placed under /APDBA/rman and /APDBAB/rman only.
APDBA:- nohup tar -cvpf - ./11202 2> /ora/drtemp/APDBA/RDBMS_HOME/OH.log  | gzip -c > /ora/drtemp/APDBA/RDBMS_HOME/11202.tar.gz &
APDBAB:- nohup tar -cvpf - ./11.2.0 2> /ora/drtemp/APDBAB/RDBMS_HOME/OH.log  | gzip -c > /ora/drtemp/APDBAB/RDBMS_HOME/11202.tar.gz &
Cold Backup Script:-
APDBA:- /ora/drtemp/APDBA/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBA.rman
APDBAB:- /ora/drtemp/APDBAB/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBAB.rman
Issues and Resolutions during Testing:-
ISSUE:
ORA-01578: ORACLE data block corrupted (file # 675, block # 249019)
ORA-01110: data file 675:
'/ora/APDBA/data09/data/APPS_TS_TX_IDX10.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
ORA-06512: at "APPS.ARP_PROCESS_APPLICATION", line 3822
RESOLUTION:
SQL> select owner, segment_name, tablespace_name from dba_extents where FILE_ID=675 and 249019 between BLOCK_ID and BLOCK_ID+BLOCKS;
OWNER   SEGMENT_NAME               TABLESPACE_NAME
AR      AR_PAYMENT_SCHEDULES_N17   APPS_TS_TX_IDX
SQL> select OWNER,SEGMENT_TYPE,HEADER_BLOCK from dba_segments where SEGMENT_NAME='AR_PAYMENT_SCHEDULES_N17';
OWNER                          SEGMENT_TYPE       HEADER_BLOCK
------------------------------ ------------------ ------------
AR                             INDEX                    242770
SQL> alter index AR.AR_PAYMENT_SCHEDULES_N17 rebuild online;
Index altered.
SQL> exit
REFERENCE:-
https://chenguangblog.wordpress.com/tag/nologging/
Rebuild Data Guard
Since the Data Guard environments are dirtied, follow the steps below to rebuild the standby database for the respective environments. Ensure the scripts/commands are executed in the right environments.
Restore the ORACLE_HOME if needed, or only the xml and .ora files with their previous values.
The tar backup is located under /ora/drtemp/<SID>, and the config file backups under the $HOME of the respective db user.
Convert the new primary database back to standby by performing the steps below. Before that, roll back to the restore point; identify it first:
select name, scn, time, database_incarnation#, guarantee_flashback_database, storage_size from v$restore_point;
1.       Shutdown immediate
2.       Startup mount
3.       flashback database to restore point <restore point name>;
4.       alter database convert to physical standby;
5.       shutdown immediate;
6.       Uncomment the parameters that were commented out earlier (log_archive_dest_1, log_archive_dest, FAL_SERVER, FAL_CLIENT)
7.       startup nomount;
8.       alter database mount standby database;
In the actual Primary database, ENABLE log_archive_dest_state_2 to resume log shipping:
alter system set log_archive_dest_state_1='ENABLE' scope=both;
alter system set log_archive_dest_state_2='ENABLE' scope=both;
On the standby, keep the remote destination deferred:
alter system set log_archive_dest_state_2='DEFER' scope=both;
In the respective standby database, start the recovery process (MRP):
alter database recover managed standby database disconnect from session;   (starts the MRP process / managed recovery mode)
Check the log file shipping and log lag in the respective standby databases:
select * from v$archive_gap;
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#
ORDER BY 1
/
Once everything is confirmed, drop the restore point and disable flashback:
1. drop restore point pre_open;
2. alter database flashback off;
Useful reference links / documents
Oracle Documentation: Business Continuity for Oracle E-Business Release 11i Using Oracle 11g Physical Standby Database - Single Instance and Oracle RAC [1068913.1]