DATABASE SWITCHOVER ACTIVITIES
In the Primary Database [APDBA/APDBABPRD], perform 3-4 log switches at 01:00hrs EST and check the Level 0 backup status.
In the Primary database, make a note of the current sequence and the old archived sequence
In the corresponding Standby node, check the log lag and the archive sequence applied
If all the archive logs are in sync between the Primary and Standby databases, disable the archive shipping in the Primary database.
The above step will disable the log shipping from the Primary to the Standby database. Double-check that no shipping occurs by performing a few log switches on the PRIMARY.
Comment out the PRIMARY (Production) node entries in /etc/hosts.
Check that the Level 0 backup as well as the validation has been completed in Primary APDBA and APDBAB, and check for block corruption.
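One way to confirm the Level 0 backup completed is via the RMAN views (a sketch only; verify against however the Level 0 job is actually scheduled in this environment):
RMAN> list backup of database summary;
or, from SQL*Plus:
select session_key, input_type, status, start_time, end_time
from v$rman_backup_job_details
order by start_time desc;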
Before validation, check for block corruption by using v$database_block_corruption as well as v$backup_corruption.
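For example (any rows returned indicate corrupt blocks that must be dealt with before proceeding):
select * from v$database_block_corruption;
select * from v$backup_corruption;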
To recover a block from RMAN:
RMAN> recover datafile <datafile no> block <block id>;
Alter system switch logfile;
Alter system switch logfile;
Alter system switch logfile;
Alter system switch logfile;
In the Primary database, make a note of the current sequence and the old archived sequence.
archive log list;
Archive destination            /ora/APDBAB/arclogs/arch
Oldest online log sequence     185110
Next log sequence to archive   0
Current log sequence           185112
In the corresponding Standby node, check the log lag and the archive sequence applied.
a. select * from v$archive_gap;
or
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
  (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
   WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
  (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
   WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1
/
b. archive log list;
If all the archive logs are in sync between the Primary and Standby databases, disable the archive shipping in the Primary database.
Primary – Standby DR Break step:-
alter system set log_archive_dest_state_2=DEFER;
Confirm the same by executing:
show parameter log_archive_dest_state_2
The above step will disable the log shipping from the Primary to the Standby database. Double-check that no shipping occurs by performing a few log switches on the PRIMARY.
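As an extra check, the destination state can also be read from the dictionary (a small sketch; dest_id 2 is assumed to be the standby destination, as implied by log_archive_dest_state_2):
select dest_id, dest_name, status, error from v$archive_dest_status where dest_id = 2;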
Comment out the PRIMARY (Production) node entries in /etc/hosts.
In the DR DB server, comment out the below entries in /etc/hosts as each user:
192.168.233.56   apdba.global.internal    apdba
192.168.223.80   apdba1.global.internal   apdba1
Shutdown the standby database and take a cold backup of the database and the Oracle Home.
Ensure to take a backup of the control files and redo log files for safety purposes. Backup the configuration files like pfile/tnsnames.ora/listener.ora under $HOME.
All backups should be placed under /APDBA/rman and /APDBAB/rman only.
cd /ora/APDBA/home/product
APDBA:- nohup tar -cvpf - ./11202 2> /ora/drtemp/APDBA/RDBMS_HOME/OH.log | gzip -c > /ora/drtemp/APDBA/RDBMS_HOME/11202.tar.gz &
Note: This step needs to be done after Functional Testing, so do not run it at this point before the switchover; it needs to be done after the switchover to primary. Refer Page No - 13.
cd /ora/APDBAB/home/product
APDBAB:- nohup tar -cvpf - ./11.2.0 2> /ora/drtemp/APDBAB/RDBMS_HOME/OH.log | gzip -c > /ora/drtemp/APDBAB/RDBMS_HOME/11202.tar.gz &
Cold Backup Script:-
APDBA:- /ora/drtemp/APDBA/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBA.rman
APDBAB:- /ora/drtemp/APDBAB/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBAB.rman
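The contents of those scripts are not reproduced here; as a rough sketch of what a cold backup of this kind looks like (the format, tag and compression below are illustrative assumptions, not the actual script contents):
rman target / <<EOF
shutdown immediate;
startup mount;
backup as compressed backupset database format '/APDBA/rman/cold_%d_%U' tag 'COLD_PRE_SWITCHOVER';
backup current controlfile format '/APDBA/rman/ctl_%d_%U';
EOF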
In the secondary database, adjust the archive destination status.
N/A
This step is not required at this point in time as it is a partial role transition exercise.
In the standby database, perform recovery operations with the available archive redo log files.
1. startup nomount;
2. alter database mount standby database;
3. alter database recover managed standby database cancel;
4. alter database recover managed standby database finish force;
Enable flashback database and create restore points in the secondary DB before converting it to Primary. This step is to ensure that the DB can be flashed back after the testing. Also, log_archive_dest is commented so as to ensure the archives generated will get located in the db_recovery_file_dest.
1. Ensure the database is in archive log mode.
2. Set the parameters in the pfile, respective to the envs:
Comment log_archive_dest_1, log_archive_dest, FAL_SERVER, FAL_CLIENT in both the envs (new APDBA & APDBAB).
db_recovery_file_dest_size=100G
For APDBADR:
db_recovery_file_dest='/ora/drtemp/bov_fb'
For APDBABDR:
db_recovery_file_dest='/ora/drtemp/pic_fb'
db_flashback_retention_target=2880
alter system set db_recovery_file_dest='/ora/drtemp/<SID>' SCOPE=spfile;
alter system set db_recovery_file_dest_size=100G SCOPE=spfile;
3. Enable Flashback Database
Shutdown immediate;
Startup mount;
alter database flashback on;
alter database open;
alter system set db_flashback_retention_target=2880;
How to check if Database Flashback is enabled or not:
select flashback_on from v$database;
Make a note of the SCN after enabling the flashback for both the envs:
select to_char(standby_became_primary_scn) from v$database;
Create a guaranteed restore point to flash back to when required, and validate using point #2:
1. create restore point <restore point name> guarantee flashback database;
2. select name, scn, time, database_incarnation#, guarantee_flashback_database, storage_size from v$restore_point;
3. alter session set nls_date_format='dd/mm/yy hh24:mi:ss';
4. select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;
Once the above statements have completed successfully, convert the physical standby to the primary database role.
Database Switchover Steps (Standby to Primary only):-
1. alter database commit to switchover to primary;
2. shutdown immediate;
3. startup (ensure the pfile is intact) – spfile is used
Note:- Make sure not to change the DBNAME and DBID after the switchover, otherwise the restore point won't be visible.
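After the startup, it is worth confirming that the role transition took effect (a simple check against v$database; nothing environment-specific assumed):
select name, db_unique_name, database_role, open_mode from v$database;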
In the new-primary database, update the TNS parameters for the new standby database location.
N/A
This step is not required at this point in time as it is a partial role transition exercise.
In the new-primary database, complete the database configuration steps as mentioned.
As APPS user, execute the below:
1. exec fnd_net_services.remove_system('<APDBAB or APDBA, as applicable>');
2. commit;
3. exec fnd_conc_clone.setup_clean;
As the db owner of the new primary database, validate the context file and then run autoconfig to complete the configuration for primary operations.
If the Production environments are kept online, execute rapidclone with port pool 30 to avoid connections to the Production environments at any point.
Else execute autoconfig as mentioned below:
$ORACLE_HOME/appsutil/scripts/<context>/adautocfg.sh
Restart the listener service for the new primary database:
lsnrctl stop <LISTENER_NAME>
lsnrctl start <LISTENER_NAME>
Drop all the DB links (for this iteration only).
Use the attached script to drop all the DB links.
Validate that no db links exist after running the script.
select 'drop database link '||DB_LINK||' ;' from dba_db_links;
select 'drop public database link '||DB_LINK||' ;' from dba_db_links;
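A quick validation after running the generated statements (expect no rows; note that private links owned by other schemas generally have to be dropped from within those schemas):
select owner, db_link, host from dba_db_links;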
APPLICATIONS FAILOVER ACTIVITIES
The application binaries are already in sync with the middle tier; re-run rsync from the Secondary Application nodes (apdba1 and apdba2) to have the files intact with Primary. (rsync has already been run in both the DR Application nodes.)
Before running rsync, make sure to enable SSH password-less authentication.
APDBAB:
Before running rsync, enable SSH password-less authentication between the primary and standby Application servers:-
SSH Password-Less Authentication
=================================
1) [apprd@apdbab1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/apprd/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/apprd/.ssh/id_rsa.
Your public key has been saved in /home/apprd/.ssh/id_rsa.pub.
The key fingerprint is:
a6:16:81:1b:ac:d0:db:ed:2b:e2:31:0e:b0:d9:47:fc apprd@apdbab1.global.internal
2) ssh-copy-id to the remote machine:
ssh-copy-id apprd@apdba2.global.internal
If this does not work, then create:
a) an authorized_keys file on the remote machine, and insert the content of id_rsa.pub into the authorized_keys file under $HOME/.ssh using vi.
[abprd@apdbab2 ~]$ ssh-copy-id abprd@apdba3.global.internal
abprd@apdba3.global.internal's password:
Now try logging into the machine, with "ssh 'abprd@apdba3.global.internal'", and check in:
.ssh/authorized_keys
The rsync command is as follows:
nohup rsync -rave ssh abprd@apdba3.global.internal:/ora/APDBAB/appl /ora/APDBAB
APDBA:
nohup rsync -rave ssh apprd@apdba2.global.internal:/ora/APDBA/appl /ora/APDBA
Ensure the Secondary Application node entries in /etc/hosts are commented.
Comment out the entries below:
DR APDBA Middle Tier DR:
122.198.202.13   apdba2.global.internal   apdba2
122.198.233.56   apdba.global.internal    apdba
DR APDBAB Middle Tier DR:
122.198.202.14   apdba3.global.internal   apdba3
122.198.223.80   apdba1.global.internal   apdba1
Take a backup of the tnsnames.ora, listener.ora, $CONTEXT_FILE, DBC file and adcfgclone.pl to be on the safer side.
An "env_backup_Sep2015" directory has been created under each "ap" user's $HOME directory. The files are already backed up, except adcfgclone.pl.
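A sketch of the copy commands, assuming the standard EBS environment variables ($TNS_ADMIN, $CONTEXT_FILE, $FND_SECURE, $COMMON_TOP) are set by the APPS environment file:
mkdir -p $HOME/env_backup_Sep2015
cp $TNS_ADMIN/tnsnames.ora $TNS_ADMIN/listener.ora $HOME/env_backup_Sep2015/
cp $CONTEXT_FILE $FND_SECURE/*.dbc $HOME/env_backup_Sep2015/
cp $COMMON_TOP/clone/bin/adcfgclone.pl $HOME/env_backup_Sep2015/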
Modify the file adcfgclone.pl to skip starting the services as a part of rapidclone.
Modify the value
$isSkipStartingServices = "false";
to
my $isSkipStartingServices = "true";
Execute adcfgclone.pl appsTier and input the right values.
SID/TWO_TASK to be set as APDBADR (8 char) or APDBABDR (8 char). Ensure to use port pool 30 and 40 respectively for APDBA and APDBAB.
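A minimal invocation sketch (the standard clone staging location under $COMMON_TOP is assumed; the SID, port pool 30/40 and the remaining context values are supplied interactively at the prompts):
cd $COMMON_TOP/clone/bin
perl adcfgclone.pl appsTier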
Update the host name in the fnd_concurrent_requests table.
APDBAB:
update apps.fnd_concurrent_requests
set logfile_node_name = 'apdbab2.global.internal', outfile_node_name = 'apdbab2.global.internal'
where logfile_node_name = 'apdba3.global.internal' and outfile_node_name = 'apdba3.global.internal';
APDBA:
update apps.fnd_concurrent_requests
set logfile_node_name = 'apdbab1.global.internal', outfile_node_name = 'apdbab1.global.internal'
where logfile_node_name = 'apdba2.global.internal' and outfile_node_name = 'apdba2.global.internal';
Perform post clone steps.
Check if APPLCSF, APPLTMP and APPLPTMP are set correctly, as checked below. Validate the instance-specific profiles if necessary.
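A quick way to eyeball these after sourcing the APPS environment file (a sketch; APPLPTMP should normally match one of the utl_file_dir directories):
On the application node:
echo "APPLCSF=$APPLCSF  APPLTMP=$APPLTMP  APPLPTMP=$APPLPTMP"
On the DB node, as SYSDBA:
show parameter utl_file_dir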
Post clone Printer Setup
Check the list of printers that will be configured on the server.
Submit an Active Users/Responsibilities request with the print option to validate.
Refer 1.2.4
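To see what is registered on the applications side, a simple check (the OS print queues should be verified separately, e.g. with lpstat -a):
select printer_name, printer_type from apps.fnd_printer;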
Post clone concurrent requests setup. Place all the requests on hold and release the ones mentioned in the attached notepad.
Command to place the requests on hold:
update applsys.fnd_concurrent_requests set hold_flag='Y' where phase_code in ('R','P','I');
Requests will be released by the Functional Team.
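A quick sanity check after the hold update (remember to commit it); the pending/running rows should now show hold_flag = 'Y':
select phase_code, hold_flag, count(*)
from applsys.fnd_concurrent_requests
where phase_code in ('R','P','I')
group by phase_code, hold_flag;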
Workflow Setup in DR Servers (new Primary DB)
Modify the outbound thread to 0 and update the override address for the workflow mailer to apdba.blogspot@apdba.com from the backend using the queries below.
select parameter_value from fnd_svc_comp_param_vals
where parameter_id = (select parameter_id from fnd_svc_comp_params_tl where display_name = 'Test Address');

update fnd_svc_comp_param_vals
set parameter_value = 'apdba.blogspot@apdba.com'
where parameter_id = (select parameter_id from fnd_svc_comp_params_tl where display_name = 'Test Address');

select parameter_value from fnd_svc_comp_param_vals
where parameter_id = (select parameter_id from fnd_svc_comp_params_tl where display_name = 'Test Address');
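The outbound thread change mentioned above is not shown in the original queries; a sketch along the same lines, assuming the mailer parameter's display name is 'Outbound Thread Count' (verify the display_name in fnd_svc_comp_params_tl before running, and commit afterwards):
update fnd_svc_comp_param_vals
set parameter_value = '0'
where parameter_id = (select parameter_id from fnd_svc_comp_params_tl
where display_name = 'Outbound Thread Count');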
Bounce all the services and release the envs.
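A sketch of the bounce using the standard AD control scripts (for 11i these live under $COMMON_TOP/admin/scripts/<CONTEXT_NAME>; adjust the path and script names to this environment):
cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
./adstpall.sh apps/<apps_password>
./adstrtal.sh apps/<apps_password>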
Release it to the functional team for testing with the latest tnsnames.ora file from both the new-primary environments.
Shutdown the standby database and take a cold backup of the database and the Oracle Home:-
Note:- To be done after testing is confirmed from the Functional Team and Customer side.
Ensure to take a backup of the control files and redo log files for safety purposes. Backup the configuration files like pfile/tnsnames.ora/listener.ora under $HOME.
All backups should be placed under /APDBA/rman and /APDBAB/rman only.
APDBA:- nohup tar -cvpf - ./11202 2> /ora/drtemp/APDBA/RDBMS_HOME/OH.log | gzip -c > /ora/drtemp/APDBA/RDBMS_HOME/11202.tar.gz &
APDBAB:- nohup tar -cvpf - ./11.2.0 2> /ora/drtemp/APDBAB/RDBMS_HOME/OH.log | gzip -c > /ora/drtemp/APDBAB/RDBMS_HOME/11202.tar.gz &
Cold Backup Script:-
APDBA:- /ora/drtemp/APDBA/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBA.rman
APDBAB:- /ora/drtemp/APDBAB/RMAN/COLD_BACKUP/RMAN_cold_backup_APDBAB.rman
Issues and Resolutions during Testing:-
ISSUE:-
ORA-01578: ORACLE data block corrupted (file # 675, block # 249019)
ORA-01110: data file 675: '/ora/APDBA/data09/data/APPS_TS_TX_IDX10.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
ORA-06512: at "APPS.ARP_PROCESS_APPLICATION", line 3822
RESOLUTION:-
SQL> select owner, segment_name, tablespace_name from dba_extents
     where FILE_ID=670 and 299649 between BLOCK_ID and BLOCK_ID+BLOCKS;

OWNER   SEGMENT_NAME               TABLESPACE_NAME
AR      AR_PAYMENT_SCHEDULES_N17   APPS_TS_TX_IDX

SQL> select OWNER, SEGMENT_TYPE, HEADER_BLOCK from dba_segments
     where SEGMENT_NAME='AR_PAYMENT_SCHEDULES_N17';

OWNER    SEGMENT_TYPE   HEADER_BLOCK
-------  -------------  ------------
AR       INDEX          242770

SQL> alter index AR.AR_PAYMENT_SCHEDULES_N17 rebuild online;
Index altered.

SQL> exit
REFERENCE:-
https://chenguangblog.wordpress.com/tag/nologging/
Since the Data Guard environments are dirtied, follow the below steps to rebuild the standby database for the respective environments. Ensure the scripts/commands are executed correctly in the respective environments.
Restore the ORACLE_HOME if needed, or only the xml and .ora files with previous values.
The tar backup is located under /ora/drtemp/<SID> for each environment.
The config files backup is located under $HOME of the respective db user.
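A restore sketch for the Oracle Home tar, mirroring the backup command taken earlier (shown for APDBA; the APDBAB paths are analogous):
cd /ora/APDBA/home/product
gunzip -c /ora/drtemp/APDBA/RDBMS_HOME/11202.tar.gz | tar -xvpf -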
Convert the new-primary database back to standby by performing the steps mentioned below. Before that, roll back to the restore point. Identify the restore point first:-
select name, scn, time, database_incarnation#, guarantee_flashback_database, storage_size from v$restore_point;
1. shutdown immediate;
2. startup mount;
3. flashback database to restore point <restore point name>;
4. alter database convert to physical standby;
5. shutdown immediate;
6. Uncomment the parameters log_archive_config, FAL_SERVER, FAL_CLIENT
7. startup nomount;
8. alter database mount standby database;
In the actual Primary database, ENABLE the log_archive_dest_state_2 to enable log shipping.
alter system set log_archive_dest_state_1=ENABLE scope=BOTH;
alter system set log_archive_dest_state_2=DEFER scope=BOTH; - In standby
alter system set log_archive_dest_state_2=ENABLE scope=BOTH;
In the respective standby database, start the recovery process (MRP):
alter database recover managed standby database disconnect from session; - to start the MRP process / managed recovery mode
Check the log file shipping and log lag in the respective standby databases.
select * from v$archive_gap;
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#
ORDER BY 1
/
Drop the restore point after all confirmations and disable the flashback.
1. drop restore point pre_open;
2. alter database flashback off;
Useful reference links / documents
Oracle Documentation: Business Continuity for Oracle E-Business Release 11i Using Oracle 11g Physical Standby Database - Single Instance and Oracle RAC [1068913.1]