
Block Change Tracking

Block change tracking improves the performance of incremental backups by recording changed blocks in the block change tracking file. During an incremental backup, instead of scanning all data blocks to identify which blocks have changed, RMAN uses this file to identify only the changed blocks that need to be backed up.
You can enable block change tracking when the database is either open or mounted. This section assumes that you intend to create the block change tracking file as an Oracle Managed File in the database area, which is where the database maintains active database files such as data files, control files, and online redo log files.
To determine if block change tracking is enabled, check the STATUS and FILENAME columns in the V$BLOCK_CHANGE_TRACKING view, using the following statement from the SQL or RMAN prompt:
SELECT status, filename FROM V$BLOCK_CHANGE_TRACKING;
To enable block change tracking:
Connect RMAN to the target database as described in "Connecting to the Target Database Using RMAN."
Determine the current location of the database data files by submitting the following query:
RMAN> SELECT NAME FROM V$DATAFILE;
NAME
-----------------------------------------------
/u01/app/oracle/oradata/orcl/system01.dbf
/u01/app/oracle/oradata/orcl/example01.dbf
/u01/app/oracle/oradata/orcl/sysaux01.dbf
/u01/app/oracle/oradata/orcl/undotbs01.dbf
/u01/app/oracle/oradata/orcl/users01.dbf
In this example, the query results show that data files are stored in the file system in the directory /u01/app/oracle/oradata/orcl. Data files might also be stored in an Oracle Automatic Storage Management disk group.
Set the DB_CREATE_FILE_DEST initialization parameter to specify the location where new database files, including the block change tracking file, must be stored. You can specify the directory shown in the query results from the previous step, with the final portion of the path (the database SID) stripped, as shown in the following example, or designate a new directory. Any directory that you specify must be writable by the Oracle software owner.
The following command specifies that new database files must be stored in the directory /u01/app/oracle/oradata:
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata';
Enable block change tracking for the database using the following command:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
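After enabling, you can run the V$BLOCK_CHANGE_TRACKING query again to confirm the change took effect; the following is a minimal sketch of the checks you might run (the DISABLE command is shown only for completeness, in case you ever need to back the change out):
-- Confirm block change tracking is now ENABLED and see where the file was created
SELECT status, filename FROM V$BLOCK_CHANGE_TRACKING;
-- To back the change out later, if required
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;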

R12.2 Adpreclone Fails When Creating WLS Config Archive : Exit Code 255

ERROR: Script failed, exit code 255
CAUSE
The issue is caused by remaining active sessions in WLS (WebLogic Server).
SOLUTION
To implement the solution, please execute the following steps:
1. Stop all services from patch and run editions.
    You can use EBSapps.env to ensure that you are on the correct file edition.
2. Restart the environment.
3. Rerun adpreclone.
NOTE:
If the issue persists, please ensure that the FND_NODES table reflects the correct setup of your environment, check that the hosts file is correctly formatted on all nodes, and confirm that AutoConfig completed successfully.

Fatal error in node manager server.

SOLUTION
The issue could be specific to a particular instance; if the action below does not help, please investigate further with Oracle Support.
The /tmp directory did not have the correct permissions; the issue was resolved by setting the permissions to 777.
1- As root
chmod 777 /tmp
2- Restart Weblogic

R12.2 Log Files from Cloning.

The cloning process in R12.2 creates log files in several locations:
1. OraInventory logs:
/oraInventory/logs
2. PRECLONE logs from the source server
adpreclone logs database tier:
$ORACLE_HOME/appsutil/log/<SID_hostname>/StageDBTier_<timestamp>.log
adpreclone logs Applications tier:
$INST_TOP/admin/log/clone/StageAppsTier_<timestamp>.log
3. ADCFGCLONE logs from target server
adcfgclone logs database tier:
$ORACLE_HOME/appsutil/clone/bin/CloneContext_<timestamp>.log
This log shows the entries selected during execution of adcfgclone command
$ORACLE_HOME/appsutil/log/<SID_hostname>/ApplyDBTier_<timestamp>.log
This log shows the results for the execution of the clone
adcfgclone logs Applications tier:
$COMMON_TOP/clone/bin/CloneContext_<timestamp>.log
This log shows the entries selected during execution of adcfgclone command
$COMMON_TOP/clone/FMW/logs/prereqcheck.log
This log shows the pre-requisites for FMW installation.
$INST_TOP/admin/log/clone_<timestamp>/ApplyAppsTier_<timestamp>.log
This log shows the results for the execution of the clone.
There are some other logs under $INST_TOP/admin/log/clone, so when working with Oracle Support it is useful to upload a zip of this whole directory for review.
When providing logs to Support, also compress and upload the directory $APPLRGF/TXK/ohsCloneLog (this captures cloning failures for OHS).

R12.2 How To Create, Update or Rebuild The Central Inventory

Check that oraInst.loc exists in the correct directory for your platform.
Linux and IBM AIX on Power Systems - /etc
Oracle Solaris - /var/opt/oracle
The contents of the oraInst.loc file should look like this:
inventory_loc=/oracle/oraInventory
where /oracle/oraInventory points to the directory where the central inventory is to be located. This location must be writeable by the user account that is to run Rapid Install.
If the oraInst.loc file does not exist, create it in the correct directory with contents as shown above.
How to verify the contents of the Central Inventory
UNIX:
cd <ORACLE_HOME>/oui/bin
./runInstaller
The contents of the Central Inventory can also be checked by viewing file .../oraInventory/ContentsXML/inventory.xml
REMOVED="T" means the ORACLE_HOME has been removed.
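As a quick alternative to launching the OUI, a simple grep of inventory.xml lists the registered Oracle Homes (a minimal sketch; the inventory path is the example location used above):
grep "HOME NAME" /oracle/oraInventory/ContentsXML/inventory.xml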
Do not modify any oraInventory files manually.
UNIX:
Attach the Oracle Home using the OUI (the backslash at the end of each line is being used to divide each command across several screen lines for readability):
cd $ORACLE_HOME/oui/bin
./runInstaller -silent -attachHome -invPtrLoc <Inventory-Pointer-Location-File> \
ORACLE_HOME="<OracleHome-Directory>" \
ORACLE_HOME_NAME="<OracleHome-Name>"
For example:
cd /u01/oracle/PROD/fs1/EBSapps/10.1.2/oui/bin
./runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc \
ORACLE_HOME="/u01/oracle/PROD/fs1/EBSapps/10.1.2" \
ORACLE_HOME_NAME="PROD_TOOLS__u01_oracle_PROD_fs1_EBSapps_10_1_2"
How to build a new Central Inventory from scratch
Create a new directory to hold the new Central Inventory, or empty the existing one after backing up its contents, and set its permissions.
Example:
mkdir /u01/oracle/oraInventory
chmod 777 /u01/oracle/oraInventory
Verify or modify the oraInst.loc file (see the platform-specific locations listed above) so that 'inventory_loc' points to the directory created in step 1.
Example:
inventory_loc=/u01/oracle/oraInventory
Perform Procedure 1 (Attaching an Oracle Home using the Oracle Universal Installer) for all Oracle Homes:
Note: Make sure to wait until each command completes before issuing the next command. This prevents locking issues, since simultaneous updates to the Central Inventory are not allowed.
Example:
./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/11.2.0" \
ORACLE_HOME_NAME="OraDb11g_home1"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs1/EBSapps/10.1.2" \
ORACLE_HOME_NAME="PROD_TOOLS__u01_oracle_PROD_fs1_EBSapps_10_1_2"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs2/EBSapps/10.1.2" \
ORACLE_HOME_NAME="PROD_TOOLS__u01_oracle_PROD_fs2_EBSapps_10_1_2"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs1/FMW_Home/Oracle_EBS-app1" \
ORACLE_HOME_NAME="u01_oracle_PROD_fs1_Oracle_EBS_app"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs2/FMW_Home/Oracle_EBS-app1" \
ORACLE_HOME_NAME="u01_oracle_PROD_fs2_Oracle_EBS_app"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs1/FMW_Home/webtier" \
ORACLE_HOME_NAME="OH664489085"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs2/FMW_Home/webtier" \
ORACLE_HOME_NAME="OH332583268"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs1/FMW_Home/oracle_common" \
ORACLE_HOME_NAME="OH1770305581"

./runInstaller -silent -attachHome \
ORACLE_HOME="/u01/oracle/PROD/fs2/FMW_Home/oracle_common" \
ORACLE_HOME_NAME="OH1837312334"
Verify the oraInventory using the verification procedure described above (launching the OUI or viewing inventory.xml).

How to check the status/stop/start Workflow Notification Mailer from Backend

First, how to check the status of the Notification Mailer
a) Check workflow mailer service current status
sqlplus apps/
b) select running_processes
from fnd_concurrent_queues
where concurrent_queue_name = 'WFMLRSVC';
Number of running processes should be greater than 0
c) Find current mailer status
sqlplus apps/
select component_status
from fnd_svc_components
where component_id =
(select component_id
from fnd_svc_components
where component_name = 'Workflow Notification Mailer');
Possible values:
RUNNING
STARTING
STOPPED_ERROR
DEACTIVATED_USER
Now how to stop Notification Mailer from Backend
a) Stop notification mailer
sqlplus apps/
b) declare
p_retcode number;
p_errbuf varchar2(100);
m_mailerid fnd_svc_components.component_id%TYPE;
begin
-- Find mailer Id
-----------------
select component_id
into m_mailerid
from fnd_svc_components
where component_name = 'Workflow Notification Mailer';
--------------
-- Stop Mailer
--------------
fnd_svc_component.stop_component(m_mailerid, p_retcode, p_errbuf);
commit;
end;
/
Now how to start Notification Mailer from Backend
a). Start notification mailer
sqlplus apps/
b) declare
p_retcode number;
p_errbuf varchar2(100);
m_mailerid fnd_svc_components.component_id%TYPE;
begin
-- Find mailer Id
-----------------
select component_id
into m_mailerid
from fnd_svc_components
where component_name = 'Workflow Notification Mailer';
---------------
-- Start Mailer
---------------
fnd_svc_component.start_component(m_mailerid, p_retcode, p_errbuf);
commit;
end;
/

Login flow in R12.2

When an HTTP request is made for EBS, the request is received by the Oracle HTTP Server (OHS).
When the request is for a resource that needs to be processed by Java, such as logging into EBS, the OHS configuration redirects the request to the WebLogic Server (WLS) Java process (OACore in this case).
WLS determines the J2EE application that should deal with the request, which is called "oacore".
This J2EE application needs to be deployed and available for processing requests in order for the request to succeed. The J2EE application needs to access a database, and does this via a datasource configured within WLS.
Here is the processing in terms of URLs (the login HTTP flow).
When the EBS login works correctly, the browser is redirected to several different URLs in order for the login page to be displayed. The page flow below shows the URLs that are called to display the login page:
/OA_HTML/AppsLogin
EBS Login URL
/OA_HTML/AppsLocalLogin.jsp
Redirects to local login page
/OA_HTML/RF.jsp?function_id=1032925&resp_id=-1&resp_appl_id=-1&security_group_id=0&lang_code=US&oas=3TQG_dtTW1oYy7P5_6r9ag..&params=5LEnOA6Dde-bxji7iwlQUg
Renders the login page
The URLs after the user enters username and password, then clicks the "Login" button, are shown below:
/OA_HTML/OA.jsp?page=/oracle/apps/fnd/sso/login/webui/MainLoginPG&_ri=0&_ti=640290175&language_code=US&requestUrl=&oapc=2&oas=4hoZpUbqVSrv9IE0iJdY1g..
/OA_HTML/OA.jsp?OAFunc=OANEWHOMEPAGE
/OA_HTML/RF.jsp?function_id=MAINMENUREST&security_group_id=0
Renders user home page
Once the user's home page is displayed, the logout flow also redirects to several different URLs before returning to the login page:
/OA_HTML/OALogout.jsp?menu=Y
Logout icon has been clicked
/OA_HTML/AppsLogout
/OA_HTML/AppsLocalLogin.jsp?langCode=US&_logoutRedirect=y
Redirects to the login page
/OA_HTML/RF.jsp?function_id=1032925&resp_id=-1&resp_appl_id=-1&security_group_id=0&lang_code=US&oas=r6JPtR7-a4n5U2H3–ytEg..&params=1JU-PCsoyAO7NMAeJQ.9N6auZoBnO8UYYXjUgSPLHdpzU3015KGHA668whNgEIQ4
Renders login page again
Basic Troubleshooting for Login in R12.2
1)  OHS (apache) failure
If OHS is not running or not responding, one would see a message as below. If OHS is not running then there will not be any messages in any EBS log file for this request.
Firefox: “The connection was reset”
Steps to take
Check OHS has started OK
adapcctl.sh status
adapcctl.sh stop
adapcctl.sh start
2) OACore JVM process not available
If the OACore JVM is not running or not reachable, then one will likely see the following message in the browser:
Failure of server APACHE bridge:
No backend server available for connection: timed out after 10 seconds or idempotent set to OFF or method not idempotent.
There could be two reasons.
Steps to take
a) Make sure the OACore JVM has started correctly
admanagedsrvctl.sh start oacore
b) Check that the mod_wl_ohs.conf file is configured correctly (a minimal example is sketched below)
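For reference, the oacore mapping in mod_wl_ohs.conf typically contains a Location block along these lines (a sketch only; the host, domain, and port 7201 below are made-up example values, and the real file generated by AutoConfig contains many more entries):
<Location /OA_HTML>
    SetHandler weblogic-handler
    WebLogicCluster myserver.example.com:7201
</Location>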
3) oacore J2EE application not available
There may be cases where the OACore JVM is running and reachable but the oacore application is not available.
The browser will report the error:
Error 404–Not Found
From RFC 2068 Hypertext Transfer Protocol — HTTP/1.1:
10.4.5 404 Not Found
The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.
Access_log will show 404 error:
"GET /OA_HTML/AppsLogin HTTP/1.1" 404
Steps to take
In the FMW Console check the “deployments” to confirm the “oacore” application is at status “Active” and Health is “OK”.
4) Datasource failure
The oacore logs will show this type of error
<Error> <ServletContext-/OA_HTML> <BEA-000000> <Logging call failed exception::
java.lang.NullPointerException
at oracle.apps.fnd.sso.AppsLoginRedirect.logSafe(AppsLoginRedirect.java:639)
at oracle.apps.fnd.sso.AppsLoginRedirect.doGet(AppsLoginRedirect.java:1314)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
The browser will show:
The system has encountered an error while processing your request. Please contact your system administrator.
Steps to Take
Review the EBS datasource and make sure it is targeted to the oacore_cluster1 managed server. Also use the "Test Datasource" option to confirm that a database connection can be made.
If you make any changes, you will need to restart the managed server, even though the FMW Console may say that no restart is necessary.

Steps to change the Admin Password of an EBS WebLogic Domain in R12.2 when it is lost or forgotten

In this article we will see the steps required to change the Admin password of an EBS WebLogic domain in R12.2 when it has been lost or forgotten.
The EBS WebLogic domain uses Node Manager to control startup of the AdminServer and Managed Servers. For the EBS WebLogic domain, the Node Manager and WebLogic AdminServer passwords must be the same. If the passwords are different, the AD control scripts will not work properly. If the AdminServer password has been lost or forgotten, it can be reset by carrying out the following steps on the run file system. As described in the final step, an fs_clone operation should then be performed to synchronize the run and patch file systems.
1. Shut down all running services. Since the AdminServer password is not known, the servers cannot be stopped from the console and so must be killed as follows.
Connect to the Oracle E-Business Suite instance and source the application tier
environment file.
Identify the PIDs of Node Manager, AdminServer, and all running Managed Servers:
$ ps -ef | grep "NodeManager"
$ ps -ef | grep "weblogic.Name=AdminServer"
$ ps -ef | grep "weblogic.Name=forms-c4ws_server"
$ ps -ef | grep "weblogic.Name=forms_server"
$ ps -ef | grep "weblogic.Name=oafm_server"
$ ps -ef | grep "weblogic.Name=oacore_server"
Kill all these processes, starting with Node Manager and followed by the Managed Servers.
2. Back up these folders, and then delete them:
<EBS_DOMAIN_HOME>/security/DefaultAuthenticatorInit.ldift
<EBS_DOMAIN_HOME>/servers/<server_name>/data/ldap
<EBS_DOMAIN_HOME>/servers/<server_name>/security/boot.properties
<EBS_DOMAIN_HOME>/servers/<server_name>/data/nodemanager/boot.properties
Where:
<EBS_DOMAIN_HOME> is the absolute path of the EBS WebLogic domain
<server_name> is the name of the server directory under <EBS_DOMAIN_HOME>.
If the password is not reset correctly, the backed up files and folders can be restored.
3. Set up a new environment to change the WLS AdminServer password.
a) Start a new session and connect to the Oracle E-Business Suite instance.
b) Do not source the application tier environment file.
c) Run the following command to source the WebLogic Server domain environment:
$ cd <EBS_DOMAIN_HOME>/bin
$ source setDomainEnv.sh
d) Run the following commands:
$ cd <EBS_DOMAIN_HOME>/security
$ java weblogic.security.utils.AdminAccount <wls_adminuser> <wls_admin_new_password> .
Where:
<wls_adminuser> is the same as the value of context variable s_wls_admin_user
<wls_admin_new_password> is the new WLS AdminServer password you wish to set.
Note: Do not omit the trailing period ('.') in the above command: it is needed to specify the current domain directory.
4. Start AdminServer from the command line. You will be prompted for the WebLogic Server username and password, so that the AdminServer boot.properties file
can be generated.
a) Go to the EBS Domain Home:
$ cd <EBS_DOMAIN_HOME>
b) Start AdminServer:
$ java <s_nm_jvm_startup_properties> -Dweblogic.system.StoreBootIdentity=true   -Dweblogic.Name=AdminServer weblogic.Server
Where:
<s_nm_jvm_startup_properties> is the same as the value of context variable s_nm_jvm_startup_properties
The above command prompts for the WebLogic Server username and  password:
Enter username to boot WebLogic server:
Enter password to boot WebLogic server:
Provide the same credentials as you provided in Step 3.
5. Change Node Manager password
a) Log in to the WebLogic Administration console.
b) Click the 'Lock & Edit' button.
c) In the left panel, click on the EBS Domain link.
d) Select the 'Security' tab.
e) Click on the 'Advanced' link.
f) Edit the 'Node Manager Password' field and set it to the new WebLogic Server password. The password should be the same as set in Step 3.
g) Edit the 'Confirm Node Manager Password' field and set it to the new WebLogic Server password. The password should be the same as set in Step 3.
h) Save and activate the changes.
6. The first time, AdminServer has to be stopped from the Admin console. Follow  these steps:
a) Log in to the WebLogic Administration console.
b) Shut down AdminServer.
7. Set up your environment to start AdminServer again. AdminServer should now be started using the normal AD script, which will also start Node Manager using the
new password.
a) Launch a new session and connect to the Oracle E-Business Suite instance.
b) Source the application tier environment file.
c) Start AdminServer with the following command:
$ $ADMIN_SCRIPTS_HOME/adadminsrvctl.sh start
8. Start the Managed Servers. For the first time, all Managed Servers should be started  from the WebLogic Server Admin console. This step will create boot.properties
files for the respective Managed Servers. Follow these steps:
a) Log in to the WebLogic Server Administration Console
b) Start all Managed Servers, one at a time
9. Shut down all the Managed Servers. This is so the new credentials will be picked up at the next startup. Follow these steps:
a) Log in to the WebLogic AdminServer console.
b) Shut down all Managed Servers.
c)  Shut down AdminServer.
10. Shut down Node Manager using the normal AD script.
$ $ADMIN_SCRIPTS_HOME/adnodemgrctl.sh stop
11. Copy the boot.properties file for each Managed Server. WebLogic Server native scripts use the boot.properties file. The above steps have created the boot.properties file under <EBS_DOMAIN_HOME>/servers/<Managed Server name>/data/nodemanager, which is used by Node Manager. For each Managed Server, copy the newly-generated boot.properties file from <EBS_DOMAIN_HOME>/servers/<Managed Server name>/data/nodemanager to <EBS_DOMAIN_HOME>/servers/<Managed Server name>/security.
The EBS WebLogic Server domain password has now been changed, and all servers can now be started using the normal AD scripts.
To start AdminServer:
$ADMIN_SCRIPTS_HOME/adadminsrvctl.sh start
To start the Managed Servers:
$ $ADMIN_SCRIPTS_HOME/admanagedsrvctl.sh start <managed_server_name>
12. The above steps have changed the Oracle WebLogic AdminServer password on the run file system. You now need to perform an fs_clone operation, to change the
WebLogic EBS Domain password on the patch file system:
a) Launch a new session and connect to the Oracle E-Business Suite instance.
b) Source the application tier environment file.
c) Run the command:
$ adop phase=fs_clone

Downtime and Hotpatch apply mode in adop R12.2

Downtime Mode adop R12.2
When applying patches in this mode, adop will first confirm that the application tier services are down, and will then proceed to apply the patch to the run edition of the Oracle E-Business Suite database and file system. Downtime mode patching does not use an online patching cycle. The process of applying a patch in downtime mode completes more quickly than in online mode, but at the cost of increased system downtime.
To run adop in downtime mode, you use the following command line options. In this example, patch 1111111 is applied in downtime mode:
$ adop phase=apply patches=1111111 apply_mode=downtime
It does not require the prepare phase.
Important points to note with respect to downtime mode:
1) Release 12.2 patches are not normally tested in downtime mode, so this mode should be used in emergency situations only.
2) Downtime mode is only supported for production use where explicitly documented, or when directed by Oracle Support or Development.
3) This mode was introduced to optimize the process of upgrading to Oracle E-Business Suite Release 12.2.
4) No patching cycle can be in progress. Even though adop will prevent you from applying patches in downtime mode while a patching cycle is under way, it is important to keep this in mind for planning purposes.
5) To prevent massive invalidation in the database, code objects deployed to the database should not include parent objects that could cause extensive invalidation on dependent objects. If this does happen, a significant amount of time will be required for code recompilation (a query sketch for checking invalid objects follows this list).
6) Directories that contain code deployed to the application tier during emergency patch deployment must be registered with the custom synchronization driver, to ensure successful automatic file system synchronization by the adop synchronization process.
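To gauge the recompilation effort mentioned in point 5, the number of invalid objects can be checked before and after the patch; the following is a minimal sketch (UTL_RECOMP is the standard recompilation package, to be run as SYS only if recompilation is actually needed, and the 4 threads are an arbitrary example value):
-- Count invalid objects per schema and type
SELECT owner, object_type, COUNT(*)
FROM dba_objects
WHERE status = 'INVALID'
GROUP BY owner, object_type
ORDER BY owner, object_type;
-- Recompile invalid objects in parallel
EXEC UTL_RECOMP.RECOMP_PARALLEL(4);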
Hotpatch mode
In hotpatch mode, adop will apply the patch to the run edition of the system while application services are still running. Patches that can be safely applied in hotpatch mode (such as NLS and Online Help patches) will document this in the patch readme. Hotpatch mode cannot be used if there is an online patching cycle in progress.
To run adop in hotpatch mode, you use the following command line options. In this example, patch 1111111 is applied in hotpatch mode:
$ adop phase=apply patches=1111111 apply_mode=hotpatch
It does not require the prepare phase.

How to monitor the progress of refresh of Materialized views

What is a materialized view: A materialized view in Oracle is a database object that contains the results of a query. Materialized views are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.
We have already explained how to create a materialized view and a materialized view log.
Suppose one is already created in the database and you want to query its definition.
The SQL below will help with that:
select query from dba_mviews where mview_name='TEST_PARTY_ID_EMP_V';
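Alternatively, the complete DDL can be extracted with DBMS_METADATA; the following is a minimal sketch, where the owner APPS and the view name are just the example names used above:
SET LONG 100000
SELECT DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW', 'TEST_PARTY_ID_EMP_V', 'APPS') FROM dual;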
How to monitor the progress of refresh of Materialized views:
Many times it happens that a materialized view is not refreshing from its master table(s), or that the refresh is simply not able to keep up with the changes occurring on the master table(s).
In these cases, we should look at the following things:
1)The job that is scheduled to run the materialized view.
2) The materialized view log in case of fast refresh
3) The Source table
4) The target materialized view
First we need to check the job that is scheduled to refresh the materialized view.
It could be scheduled through a refresh group.
It could be refreshed manually using a cron job or some other scheduler.
For a refresh group, the queries below give information about the group:
select * from dba_refresh;
select * from dba_refresh_children;
select * from sys.v_$mvrefresh;
Then below query to find the status of job.
SELECT /*+ RULE */
A.JOB JOB#,
SCHEMA_USER MVIEW_OWNER,
DECODE(SUBSTR(WHAT,INSTR(WHAT,'.',1,2)+2,INSTR(WHAT,'"',1,4)-4-INSTR(WHAT,'.',1,2)+2),NULL,SUBSTR(WHAT,1,40), SUBSTR(WHAT,INSTR(WHAT,'.',1,2)+2,INSTR(WHAT,'"',1,4)-4-INSTR(WHAT,'.',1,2)+2)) MVIEW_NAME,
LAST_DATE LAST_RUN_DATE,
NEXT_DATE NEXT_SCHED_RUN_DATE,
DECODE(BROKEN,'Y','YES','N','NO',' ') IS_BROKEN,
FAILURES,
RUNNING IS_RUNNING,
B.SID SID
FROM DBA_JOBS A
LEFT OUTER JOIN (SELECT /*+ RULE */
JOB,'YES' RUNNING,SID
FROM DBA_JOBS_RUNNING ) B
ON A.JOB = B.JOB
ORDER BY SCHEMA_USER, MVIEW_NAME;
From this we can find out whether the job is broken and how long the last refresh took.
We also have to check that the JOB_QUEUE_PROCESSES parameter is adequately set (see the check below).
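A quick way to check the current setting from SQL*Plus (a minimal sketch; DBMS_JOB-based refresh jobs will not run at all if this parameter is 0):
SHOW PARAMETER job_queue_processes
-- or
SELECT value FROM v$parameter WHERE name = 'job_queue_processes';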
The next thing to check is the materialized view log (MLOG$) table in the source database. We need to check how many changes are happening every hour:
select
(select count(*) from <owner>.MLOG$_<MASTER TABLE NAME> WHERE DMLTYPE$$ = 'U') UPDATES,
(select count(*) from <owner>.MLOG$_<MASTER TABLE NAME> WHERE DMLTYPE$$ = 'I') INSERTS,
(select count(*) from <owner>.MLOG$_<MASTER TABLE NAME> WHERE DMLTYPE$$ = 'D') DELETES,
(select count(*) from <owner>.MLOG$_<MASTER TABLE NAME>) TOTAL
from dual;
If the changes are high, the refresh will take time. It may be necessary to increase the frequency of the refresh so that there are fewer changes per refresh.
The other thing to check is the master table. If many changes are happening and many queries are running on the master table at the same time as the refresh, this will also slow down the materialized view refresh.
The performance of the source and target databases and the network utilization should also be checked.
What is Oracle materialized view
A materialized view is a database object that contains the results of a query. The FROM clause of the query can name tables, views, and other materialized views. Collectively these objects are called master tables (a replication term) or detail tables (a data warehousing term). This reference uses "master tables" for consistency. The databases containing the master tables are called the master databases. When you create a materialized view, Oracle Database creates one internal table and at least one index, and may create one view, all in the schema of the materialized view. Oracle Database uses these objects to maintain the materialized view data. You must have the privileges necessary to create these objects.
What is Oracle Materialized View Log
When DML changes are made to master table data, Oracle Database stores rows describing those changes in the materialized view log and then uses the materialized view log to refresh materialized views based on the master table. This process is called incremental or fast refresh. Without a materialized view log, Oracle Database must re-execute the materialized view query to refresh the materialized view. This process is called a complete refresh. Usually, a fast refresh takes less time than a complete refresh.
A materialized view log is located in the master database in the same schema as the master table. A master table can have only one materialized view log defined on it. Oracle Database can use this materialized view log to perform fast refreshes for all fast-refreshable materialized views based on the master table.
To fast refresh a materialized join view, you must create a materialized view log for each of the tables referenced by the materialized view.
Refresh Group
A refresh group is a collection of one or more materialized views that Oracle refreshes in an atomic transaction, guaranteeing that relationships among the master tables are preserved.
Examples:
CREATE MATERIALIZED VIEW mv_test
TABLESPACE test_data
BUILD IMMEDIATE
REFRESH complete ON DEMAND AS
SELECT a.name,b.salary FROM emp@DB_LINK a, compensation@DB_LINK b where a.emp_id=b.emp_id;
CREATE MATERIALIZED VIEW mv_test
TABLESPACE test_data
BUILD IMMEDIATE
REFRESH force ON DEMAND AS
SELECT a.name,b.salary FROM emp@DB_LINK a, compensation@DB_LINK b where a.emp_id=b.emp_id;
CREATE MATERIALIZED VIEW LOG ON emp;
exec DBMS_REFRESH.MAKE(name => 'test_grp', -
list => 'mv_test', -
next_date => sysdate, -
interval => null);
exec DBMS_REFRESH.REFRESH('test_grp');
Privileges required to create materialized view
The user must have the CREATE MATERIALIZED VIEW privilege to create a materialized view.
General Syntax
CREATE MATERIALIZED VIEW <View Name>
BUILD [IMMEDIATE | DEFERRED]
REFRESH [FAST | COMPLETE | FORCE ]
ON [COMMIT | DEMAND ]
[[ENABLE | DISABLE] QUERY REWRITE]
[ON PREBUILT TABLE]
AS
<Select statement > ;
Explanation of each term
BUILD [IMMEDIATE | DEFERRED]
We can specify IMMEDIATE to populate the materialized view immediately, or DEFERRED to populate it on the first requested refresh.
REFRESH [FAST | COMPLETE | FORCE ]
There are three options here, each explained below:
1) FAST: A fast refresh is attempted. If materialized view logs are not present against the source tables in advance, the creation fails.
2) COMPLETE: The table segment supporting the materialized view is truncated and repopulated completely using the associated query.
3) FORCE: A fast refresh is attempted. If one is not possible, a complete refresh is performed.
ON [COMMIT | DEMAND]: We can specify ON COMMIT so that the refresh is triggered by a committed data change in one of the dependent tables, or ON DEMAND so that the refresh is initiated by a manual request or a scheduled task.
[[ENABLE | DISABLE] QUERY REWRITE]: Whether the view is eligible for query rewrite.
Difference between Oracle view and Oracle materialized view
As explained above, materialized views are disk based and are updated periodically based on the query definition. In a materialized view, the result set is stored in the materialized view's table.
Views are virtual only and run the query definition each time they are accessed. A view stores no result set and accesses the underlying tables each time it is queried.

What is FND_OAM_CONTEXT_FILES used for

Context files are also stored in the database. The table that stores the context files (including their history) is FND_OAM_CONTEXT_FILES.
Context files in this table are stored in a CLOB column named TEXT.
In the TEXT column the context file is stored as-is, that is, in XML format.
So by querying the table and using the appropriate XML functions, the values in the context file can be read.
Desc fnd_oam_context_files
 Name                  Null?    Type
 --------------------- -------- ---------------
 NAME                  NOT NULL VARCHAR2(512)
 VERSION               NOT NULL VARCHAR2(30)
 PATH                  NOT NULL VARCHAR2(1024)
 LAST_SYNCHRONIZED     NOT NULL DATE
 TEXT                  NOT NULL CLOB
 LAST_UPDATE_DATE      NOT NULL DATE
 LAST_UPDATED_BY       NOT NULL NUMBER(15)
 CREATION_DATE         NOT NULL DATE
 CREATED_BY            NOT NULL NUMBER(15)
 LAST_UPDATE_LOGIN     NOT NULL NUMBER(15)
 NODE_NAME             NOT NULL VARCHAR2(30)
 STATUS                         VARCHAR2(1)
 SERIAL_NUMBER                  NUMBER
 EDIT_COMMENTS                  VARCHAR2(2000)
 CTX_TYPE              NOT NULL VARCHAR2(1)
For example, to gather the web port information of the E-Business Suite system, the following query can be used:
SELECT extractvalue(xmltype(text),'//web_port') FROM fnd_oam_context_files where status='S';
How AutoConfig works with it
Each context file version has a serial number, held in a context file parameter.
When AutoConfig is run, this serial number is written to the FND_OAM_CONTEXT_FILES table against that version of the context file.
For example, for a context file containing the following information:
apps_contextfile
67
The table FND_OAM_CONTEXT_FILES should contain the following entry:
NAME               VERSION   LAST_SYNC   LAST_UPDATE   CREATION_DATE   SERIAL_NUMBER
----------------   -------   ---------   -----------   -------------   -------------
apps_contextfile   115.377   21-NOV-13   21-NOV-13     27-SEP-05       67
However, it is possible for the application tier context file serial number to become unsynchronised with the serial number information in the table.
If the serial number on the file system is lower than the one shown in the table for the same context file version, then each time AutoConfig is run it will see that the table holds the higher value and will replace the context file on the file system with the parameter values from the context file associated with the serial number in the table.
This results in any changes made to the file system context file being ignored.
The adconfig log file will show the following:
----------------------------------------------------------------
File system Context file :/u11/app//appl/admin/apps_contextfile.xml
Checking the Context file for possible updates from the Database
Comparing the Context file stored in database with the Context file in the file system
Result                  : File system Context is below par with respect to the data base Context
Action to be taken      : Copy the Data Base Context onto the file system
Result                  : Context file successfully copied
What happens when we change any context variable through OAM
1) OAM displays all the context variables by parsing the context file stored in the FND_OAM_CONTEXT_FILES table (AutoConfig uploads the context file into this table at each run).
2) When we update the context file, OAM first updates the status flag from 'S' to 'H' (History) for our context file record, and then inserts another row for the same context file with status 'S'. (OAM inserts another row instead of updating the existing row so as to maintain a change history for each context file.)
3) It then requests the specific node's FNDFS listener to update the file on the file system (AutoConfig uses the file on the file system). So we have to make sure that the FNDFS listeners are running on all the nodes before using OAM to update the context file. A query sketch for viewing this history is shown below.
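To see the current and historical rows that this process creates, a query along the following lines can be used (a minimal sketch; as described above, status 'S' marks the current copy and 'H' the history rows):
SELECT name, version, node_name, serial_number, status, last_update_date
FROM fnd_oam_context_files
ORDER BY name, last_update_date DESC;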

adopmon and adopreports utility R12.2.5

adopmon utility in R12.2.5
R12.2.5 provides a utility named adopmon to monitor the progress of an adop session. The adopmon utility is useful both for following the overall progress of a patching cycle and for identifying the various individual actions that are being taken.
The following example shows how adopmon is used, with some sample output from the finalize phase.
$ adopmon
Running script. Press Ctrl-C to quit.
Enter the APPS password:
Validating credentials…
2016/01/04 1:29:02 test          EVENT        Checking for existing adop sessions.
2016/01/04 1:29:02 test          EVENT        Checking for pending hotpatch session.
2016/01/04 1:29:02 test          EVENT        Checking for pending cleanup session.
………………………………………………………………..
2016/01/04 2:01:56 testsystem          EVENT        The finalize phase completed successfully.
adopreports utility in R12.2.5
The Online Patching Diagnostic Reports utility, $AD_TOP/bin/adopreports, can be used to help diagnose issues or simply to gather information about the status of your system.
The adopreports utility is invoked by entering the command:
$ adopreports <APPS username> <APPS password>
This displays the adopreports Main Menu:
Online Patching Diagnostic Reports Main Menu
--------------------------------------------
1.  Run edition reports
2.  Patch edition reports
3.  Other generic reports
4.  Exit
Choosing option 1 from the Main Menu displays the Run Edition Reports Sub Menu:
Run Edition Reports Sub Menu
----------------------------
1.  All
2.  Count of uncovered objects per edition
3.  List of uncovered objects per edition
4.  Cleanup status - summary of covered objects per edition, etc.
5.  Show covered objects count per edition.
6.  Show list of covered objects per edition.
7.  Back to main menu
Choosing option 2 from the Main Menu displays the Patch Edition Reports Sub Menu:
Patch Edition Reports Sub Menu
------------------------------
1.  All
2.  Patch status - new/changed objects
3.  Objects patched in the current edition
4.  Table manager status
5.  Back to main menu
Choosing option 3 from the Main Menu displays the Other Generic Reports Sub Menu:
Other Generic Reports Sub Menu
------------------------------
1.  Editions summary
2.  Editioned objects summary
3.  Free space in important tablespaces
4.  Status of critical AD_ZD objects
5.  Actual objects in current edition
6.  Objects dependencies
7.  Objects dependency tree
8.  Editioning views column mappings
9.  Index details for a table
10.  Inherited objects in the current edition
11.  All log messages
12.  Materialized view details
13.  Database sessions by edition
14.  Table details (Synonyms, EV, etc.)
15.  Count and status of DDL execution by phase
16.  Back to main menu

How to delete a managed server in R12.2

Steps on how to delete a managed server in R12.2
Deletion of managed servers needs to be done on the run file system when there is no active adop cycle. During the next adop prepare, the Configuration Change Detector identifies that the change has been made and the managed servers are automatically synced up from the run file system to the patch file system. The synchronization also gets done when fs_clone is executed.
2. If the managed server to be deleted is running, shut it down as follows:
On Unix:
$ sh <ADMIN_SCRIPTS_HOME>/admanagedsrvctl.sh stop <MANAGED SERVER NAME>
For example, before deleting a managed server 'oacore_server2', execute the following command to shut it down:
$ sh <ADMIN_SCRIPTS_HOME>/admanagedsrvctl.sh stop oacore_server2
3. Run the command below on the application tier node where the managed server resides. This will delete the managed server, and also update the respective context variables that contain references to the deleted managed server.
$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl \
ebs-delete-managedserver \
-contextfile=<CONTEXT_FILE> -managedsrvname=<MANAGED_SERVER_NAME> \
-servicetype=<SERVICE_TYPE> -logfile=<LOGFILE>
For example, for deleting a managed server 'oacore_server2' of type 'oacore', execute the following command:
$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl \
ebs-delete-managedserver \
-contextfile=<CONTEXT_FILE> -managedsrvname=oacore_server2 \
-servicetype=oacore -logfile=<APPLRGF>/TXK/delMS_oacoreserver2.log
4. Perform the following steps on all application tier nodes participating in the same cluster as the deleted managed server:
a. Source the run file system.
b. If the deleted managed server is part of the cluster configuration defined on the current node, execute the following command to delete details of the managed server from the OHS configuration files mod_wl_ohs.conf and apps.conf on the current node:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
-contextfile=<CONTEXT_FILE> \
-configoption=removeMS \
-oacore=<host>.<domain>:<port> \
-oafm=<host>.<domain>:<port> \
-forms=<host>.<domain>:<port> \
-formsc4ws=<host>.<domain>:<port> \
-ekanban=<host>.<domain>:<port> \
-accessgate=<host>.<domain>:<port> \
-yms=<host>.<domain>:<port>
where:
The argument contextfile accepts the complete path to the context file.
The arguments oacore, oafm, forms, formsc4ws, ekanban, accessgate and yms accept a comma-separated list of managed server details in the following format:
<host>.<domain>:<port>
host, domain and port are the hostname, domain and port of the managed server whose reference is to be deleted.
For example, to remove references to the deleted managed server oacore_server2 with port 9705 on host 'myserver' and domain 'go.com', the following command should be executed:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl -contextfile=<CONTEXT_FILE> \
-configoption=removeMS -oacore=myserver.go.com:9705
c. If Oracle HTTP Server is enabled on the node, restart it as follows:
On UNIX:
$ sh <ADMIN_SCRIPTS_HOME>/adapcctl.sh stop
$ sh <ADMIN_SCRIPTS_HOME>/adapcctl.sh start

How to add a managed server in R12.2

Steps on how to add a managed server in R12.2
Addition of managed servers needs to be done on the run file system when there is no active ADOP cycle. During the next adop prepare, the Configuration Change Detector identifies that the addition has been made and the managed servers are automatically synced up from the run file system to the patch file system. The synchronization also gets done when fs_clone is executed.
Execute the following command to add a new managed server. This will create a managed server and add a new entry to the context file for starting and stopping the new managed server via the adstrtal and adstpall scripts:
$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl \
ebs-create-managedserver -contextfile=<CONTEXT_FILE> \
-managedsrvname=<MANAGED_SERVER_NAME> -servicetype=<SERVICE_TYPE> \
-managedsrvport=<MANAGED_SERVER_PORT> -logfile=<LOGFILE>
For example, to add a managed server 'oacore_server2' of type 'oacore' with port 9705, run the following command:
$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl \
ebs-create-managedserver -contextfile=<CONTEXT_FILE> \
-managedsrvname=oacore_server2 -servicetype=oacore \
-managedsrvport=9705 -logfile=<APPLRGF>/TXK/addMS_oacoreserver2.log
3) Start the newly created managed server
sh <ADMIN_SCRIPTS_HOME>/admanagedsrvctl.sh start <MANAGED SERVER NAME>
4) Perform the following steps on all application tier nodes participating in the same cluster where this managed server is added:
Add the entry in OHS configuration file
a) .Source the run file system.
b)Execute the following command to add details of the newly added managed servers into the OHS configuration files mod_wl_ohs.conf and apps.conf on the current node:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \
-contextfile=<CONTEXT_FILE> \
-configoption=addMS \
-oacore=<host>.<domain>:<port> \
-oafm=<host>.<domain>:<port> \
-forms=<host>.<domain>:<port> \
-formsc4ws=<host>.<domain>:<port>
where:
The argument contextfile accepts the complete path to the context file.
The arguments oacore, oafm, forms, formsc4ws accept a comma-separated list of managed server details in the following format:
<host>.<domain>:<port>
host and domain are the hostname and domain name of the newly added node
port is the port of the new managed server whose reference is to be added
For example, if the managed server oacore_server2 has been added on host 'myserver' and domain 'go.com' with port 9705, the following command should be executed:
$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl -contextfile=<CONTEXT_FILE> \
-configoption=addMS -oacore=myserver.go.com:9705
c) Restart the HTTP server if it is present on that node:
$ sh <ADMIN_SCRIPTS_HOME>/adapcctl.sh stop
$ sh <ADMIN_SCRIPTS_HOME>/adapcctl.sh start
As mentioned earlier, the steps in R12.2.X are very different from R12.1.X. I hope the steps on how to add a managed server in R12.2 are clear.

What Happens in cutover phase of adop in R12.2

The cutover phase of adop is the downtime phase of the online patching cycle. We should ensure there is no user activity during this period, as the application processes are restarted.
Before running the cutover command, ensure you are ready to commit to the application of the selected patches. Once cutover is complete, it is not possible to revert to the previous edition.
The cutover phase of adop has the following steps:
1) Shut down internal concurrent manager: The adop utility signals the internal concurrent manager to shut down, but will wait for any existing concurrent requests to finish before it proceeds with cutover actions. The system is still available to users during this waiting period.
If you do not wish to wait indefinitely for concurrent requests to finish, specify the option cm_wait=<maximum_minutes_to_wait> with a number of minutes that reflects your operational needs (an example is shown after this step).
On production systems, do not specify cm_wait, but monitor the progress of concurrent requests and take manual action on them if needed. It is good to schedule cutover at a time when the fewest jobs are running.
On non-production systems, we can specify cm_wait to limit the waiting time before cutover proceeds, since on a development system we can tolerate aborting the Concurrent Manager.
Note: Cutover will take longer if it has to wait for long-running concurrent processes to complete. In such a case, you can expect to see an informational message of the form:
[STATEMENT] [END ] Waiting for ICM to go down
If you do not want to wait for in-progress concurrent requests to finish normally, you can terminate the internal concurrent manager by executing the adcmctl.sh abort command from a different shell.
This task is performed by $FND_TOP/bin/txkADOPCutOverPhaseCtrlScript.pl script
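For example (a sketch only; 20 minutes is an arbitrary value, choose one that suits your maintenance window):
$ adop phase=cutover cm_wait=20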
2) Shut down application tier services: All application tier services are brought down. During this period, the system is unavailable to users.
This task is performed by $FND_TOP/bin/txkADOPCutOverPhaseCtrlScript.pl script
3) Cutover database: Promote patch database edition to become the new run database edition, using adzdpmgr.pl script.
This task is performed by $FND_TOP/bin/txkADOPCutOverPhaseCtrlScript.pl script
4) Cutover file system: Promote patch file system to become the new run file system, switching the $FILE_EDITION values in the patch and run enviroments. The current patch APPL_TOP becomes the new run APPL_TOP, and the current run APPL_TOP becomes the new patch APPL_TOP.
This task is completed by Autoconfig
5) Terminate old database sessions: Terminate any database connections to the old run edition of the database.
This task is performed by the $FND_TOP/bin/txkADOPCutOverPhaseCtrlScript.pl script.
6) Start application tier services: Application tier services are restarted, on the new run edition. The system is now available again to users.
This task is performed by the $FND_TOP/bin/txkADOPCutOverPhaseCtrlScript.pl script.
Example:
$ adop phase=cutover
This will promote the patch edition to be the new run edition, as well as switching the patch and run labels on the file systems (and thereby, as noted above, changing the patch file system to be the new run file system and the run file system to be the new patch file system).
Deferring Application Tier Restart at Cutover
Many times you may need to perform additional manual steps after cutover but before restarting the application tier services. If this is the case, you can supply an additional parameter to the cutover command that causes the application services to remain shut down:
$ adop phase=cutover mtrestart=no
With this parameter, cutover will complete without restarting the application tier services. You can perform any additional steps that require the services to be shut down, and then start the application tier services manually using the adstrtal.sh script.
JAR Files and Cutover
In an online patching cycle, the requisite JAR files are initially stored in the $APPL_TOP/admin/<SID>/out directory, and then uploaded into the database during the cutover phase. Therefore, the out directory must not be deleted at least until cutover is complete.

adopscanlog Oracle Apps R12.2

The adop log files are located on the non-editioned file system (fs_ne), under:
fs_ne_base/EBSapps/log/adop/<adop_session_id>/<phase>_<date>_<time>/<context_name>/log
For example, if s_ne_base was /u00/app/TECHGO/fs_ne, the session ID was 10, and the <CONTEXT_NAME> was techgo_server, the path to the adop log files from 1st May 2016 would resemble this:
/u00/app/TECHGO/fs_ne/EBSapps/log/adop/10/apply_20160501_101121/techgo_server
Each cycle of adop creates a subdirectory corresponding to the patch session ID, e.g.
/u00/app/TECHGO/fs_ne/EBSapps/log/adop/1
/u00/app/TECHGO/fs_ne/EBSapps/log/adop/2
and so on.
When running adop the on-screen terminal output will mention which adop session ID is in use.
e.g. /u00/app/TECHGO/fs_ne/EBSapps/log/adop/9/apply_xyz
The AD and TXK C Delta 4 patches introduced a new Online Patching Log Analyzer utility, adopscanlog.
This utility makes the work easier: it analyzes adop log directories for errors and warnings, and displays messages to help the user quickly identify any problems that may have occurred during an adop run. It thereby offers an alternative to reviewing log files manually.
The Log Analyzer utility can be run without options, to scan all log directories of the latest adop session for errors:
$ adopscanlog
The utility can also be run with various options, as shown in the following list.
To scan log directories relating to the latest run of adop in the latest session:
$ adopscanlog -latest=yes
To scan log directories relating to the latest run of the specified phase, in the latest session:
$ adopscanlog -latest=yes -phase=<phase_name>
To scan all log directories of a given session (represented by a session_id) for errors:
$ adopscanlog -session_id=<number>
To scan a specific log directory for errors:
$ adopscanlog -scan_dir=<full path of directory to scan>
To see a complete list of supported parameters:
$ adopscanlog -help

Unable to Start Output Post Processor and workflow related Concurrent Manager

Error in logfile :
$ cat FNDOPP10587566924.txt
Unable to initialize state monitor.
oracle.apps.fnd.cp.gsm.GenCartCommException: ORA-01403: no data found
ORA-06512: at "APPS.FND_CP_GSM_IPC", line 539
ORA-06512: at line 1
at oracle.apps.fnd.cp.gsm.GenCartComm.initService(GenCartComm.java:233)
at oracle.apps.fnd.cp.gsm.GenCartComm.(GenCartComm.java:80)
at oracle.apps.fnd.cp.gsf.GSMStateMonitor.init(GSMStateMonitor.java:74)
at oracle.apps.fnd.cp.gsf.GSMStateMonitor.(GSMStateMonitor.java:62)
at oracle.apps.fnd.cp.gsf.GSMServiceController.init(GSMServiceController.java:117)
at oracle.apps.fnd.cp.gsf.GSMServiceController.(GSMServiceController.java:72)
at oracle.apps.fnd.cp.gsf.GSMServiceController.main(GSMServiceController.java:446)
0.0128720 secs]
$
Similar errors are present in the workflow concurrent manager logs.
These errors generally occur if the Service Manager (FNDSM) is not up, or if it is up but in a hung state.
This can happen during Concurrent Manager failover or failback in a Parallel Concurrent Processing (PCP) environment.
If FNDSM is not up, we can deactivate and then activate the Internal Monitor; this will automatically bring up FNDSM.
If FNDSM is hung, then kill the process as follows:
ps -ef|grep FNDSM
kill -9 <pid>
Then deactivate and activate the Internal Monitor again; it will automatically bring up FNDSM.
We use the Internal Monitor here because it is safe to restart that process at any time without stopping any critical business process.
Once FNDSM is up, the workflow-related managers and the Output Post Processor will come up automatically (a quick status query is sketched below). Hopefully this resolves the "Unable to Start Output Post Processor" issue in your environment as well.
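To confirm from the database that the Service Manager and the related managers are running, a query along these lines can be used (a minimal sketch against the standard FND tables; FNDCPOPP, WFMLRSVC and WFALSNRSVC are the commonly used queue names for the Output Post Processor, Workflow Mailer and Workflow Agent Listener services, adjust if your setup differs):
SELECT concurrent_queue_name, target_node, running_processes, max_processes
FROM fnd_concurrent_queues
WHERE concurrent_queue_name LIKE 'FNDSM%'
   OR concurrent_queue_name IN ('FNDCPOPP', 'WFMLRSVC', 'WFALSNRSVC');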
Concurrent Manager concepts are covered in other articles on this blog.

Workflow Notification Mailer Troubleshooting.

Outbound Notification Processing
- Creation of a Workflow notification raises the oracle.apps.wf.notification.send event.
- This event is deferred immediately and placed on the WF_DEFERRED agent.
- The Workflow Agent Listener on the WF_DEFERRED queue processes the deferred subscription.
- The subscription to this event calls the generate function (WF_XML.Generate) to produce an XML representation of the notification.
- The event is then placed on the WF_NOTIFICATION_OUT agent.
- The Workflow Notification Mailer dequeues the message from WF_NOTIFICATION_OUT.
- Before the e-mail notification is sent, the Workflow Notification Mailer performs the following actions:
1. Resolves the notification recipient role to a single e-mail address, which itself can be a mail list.
2. Switches its database session to the recipient role's preferred language and territory.
3. Generates the message and any optional attachments using the appropriate message template.
4. Sends the message by the SMTP protocol.
Inbound Notification Processing
- This flow applies only if the inbound processing checkbox is enabled in the Workflow mailer setup.
- E-mail is received into the INBOX on the IMAP server.
- The inbound Workflow Notification Mailer thread queries its IMAP e-mail account for incoming messages.
- A Java-based e-mail parser performs high-level verification checks on the message.
- If the message is a valid notification response, an XML message is created and placed on WF_NOTIFICATION_IN.
- The Workflow Agent Listener running on WF_NOTIFICATION_IN dequeues the message.
- Workflow executes the appropriate response function to record the response and complete the notification.
Some important points about Workflow Notification Mailer issues
- For the workflow mailer to work, the variables AF_CLASSPATH and AFJVAPRG should be set correctly in adovars.env.
- The override address feature can be used to test the mailer. When this address is set, the mailer sends all mail to this address only.
- $FND_TOP/sql/Wfver.sql can be used to find the version of Workflow. There are more scripts in $FND_TOP/sql that can be used to debug workflow items.
Some useful queries for Workflow Notification Mailer issues
select count(*), MSG_PRIORITY, corr_id, msg_state from applsys.aq$wf_deferred
where corr_id like '%oracle.apps.wf.notification%' group by MSG_PRIORITY, corr_id, msg_state;
select count(*), msg_state from applsys.aq$wf_notification_out group by msg_state;
select status, mail_status, count(*) from wf_notifications group by status, mail_status;
select count(*), MSG_PRIORITY, corr_id, msg_state from applsys.aq$wf_error where corr_id like 'APPS:oracle.apps.wf.notification%' group by MSG_PRIORITY, corr_id, msg_state;
Log file location for Workflow Notification Mailer
Log files are located in $APPLCSF/$APPLLOG on the concurrent tier
Log file names start with FNDCPGSC. There are typically three files: one belongs to the Notification Mailer, while the other two belong to the Workflow Agent Listeners.
ls -ltr FNDCPGSC*.txt
Troubleshooting Workflow Notification Mailer issues
First of all, here are the sanity checks for inbound and outbound processing.
Verify Outbound Processing
1. Verify the Agent Listeners are running
2. Go to the "View Details" screen for the Mailer you are configuring
3. Verify that the Mailer is running
4. Click on the "Test Email" button
5. Enter a role using the LOV
6. The test address will override that role's email address
7. Click Send, write down the Notification Id just in case
8. Verify your email account receives this email
Verify Inbound Processing
1. Send an e-mail to the Workflow Mailer reply-to address
2. Subject: "Hello World"
3. Body: "Hello World"
4. The mailer should process this email and send back an FYI "unsolicited email" message to your email account
Now let's look at some particular Workflow Notification Mailer issues.
A) A particular user is not receiving mail from the Workflow mailer
We need to check the user's preferences:
declare
recipient_role VARCHAR2(100);
display_name VARCHAR2(200);
email VARCHAR2(1000);
notification_pref VARCHAR2(100);
installed VARCHAR2(1);
language VARCHAR2(100);
territory VARCHAR2(100);
orig_system VARCHAR2(100);
orig_system_id number;
begin
recipient_role := '&username';
WF_DIRECTORY.GetRoleInfoMail(recipient_role, display_name, email,
notification_pref,
language, territory,
orig_system, orig_system_id, installed);
dbms_output.put_line('display_name='|| display_name);
dbms_output.put_line('email='|| email);
dbms_output.put_line('language='|| language);
dbms_output.put_line('territory='|| territory);
dbms_output.put_line('notification_pref='|| notification_pref);
end;
/
The notification preference determines whether the notification will be e-mailed or not. When the notification preference is set to "Do not send me mail", notifications are not e-mailed to that user.
The script above also returns the notification preference value; QUERY means "Do not send me mail".
How to change the user's notification preference to a value other than "Do not send me mail"?
Navigation path:
a. Log into the application as the user with this problem
b. Preferences (at the TOP of the main page on your right hand side).
c. Notifications Section> Email Style field
B) Outbound processing not working, no mails are being sent from the Notification Mailer
i) Make sure the Java mailer and the WF Agent Listeners are up and running:
Navigation path:
a. Workflow Administrator web Applications Responsibility
b. Workflow Manager
c. Click on the Service Components icon
The following components should be up and running:
Workflow Notification Mailer
Workflow Deferred Agent Listener
ii) Are messages in WF_DEFERRED and WF_NOTIFICATION_OUT processed?
a. Execute the following queries:
select substr(wfd.corrid,1,40) corrid,
decode(wfd.state,
0, '0 = Ready',
1, '1 = Delayed',
2, '2 = Retained',
3, '3 = Exception',
to_char(substr(wfd.state,1,12))) State,
count(*) COUNT
from applsys.wf_deferred wfd
group by wfd.corrid, wfd.state;
b.
select substr(wfd.corrid,1,40) corrid,
decode(wfd.state,
0, '0 = Ready',
1, '1 = Delayed',
2, '2 = Retained',
3, '3 = Exception',
to_char(substr(wfd.state,1,12))) State,
count(*) COUNT
from applsys.wf_notification_out wfd
group by wfd.corrid, wfd.state;
c. Re-execute the same queries 15-20 minutes after the first execution and compare the number of messages in the READY state. The number of messages in the READY state should decrease.
iii) Check the SMTP server
A sample telnet session to verify the SMTP server is shown below:
telnet <SMTP SERVER> 25 Enter
Trying 192.avc.ass….
Connected to <SMTP SERVER>.
Escape character is '^]'.
220 <SMTP SERVER> ESMTP Sendmail 8.13.8/8.13.8; Fri, 19 Mar 2016 15:40:07 -0400
EHLO <SMTP SERVER> Enter
250-<SMTP SERVER> Hello <SMTP SERVER> [192.127.240.8], pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-EXPN
250-VERB
250-8BITMIME
250-SIZE
250-ETRN
250-AUTH DIGEST-MD5 CRAM-MD5
250-DELIVERBY
250 HELP
MAIL FROM:workflow-<ORACLE_SID>@<SMTP SERVER> Enter From address
250 2.1.0 workflow-<ORACLE_SID>@<SMTP SERVER>… Sender ok
RCPT TO:abc@xyz.com Enter To address
250 2.1.5 abc@xyz.com… Recipient ok
DATA Enter
354 Enter mail, end with "." on a line by itself
Test message from <SMTP server> Enter the message
. enter a . by itself on a new line
250 2.0.0 o2JJe7Ip016348 Message accepted for delivery
Quit Enter
221 2.0.0 <SMTP SERVER> closing connection
Connection closed by foreign host.
iv) Run $FND_TOP/sql/wfmlrdbg.sql for a notification id and check the status of the message in the WF_DEFERRED and WF_NOTIFICATION_OUT queues.
PROCESSED in WF_DEFERRED – The message is enqueued to WF_NOTIFICATION_OUT
PROCESSED in WF_NOTIFICATION_OUT – The message is sent as e-mail
READY in WF_DEFERRED – Check if Deferred Agent Listener is running
READY in WF_NOTIFICATION_OUT – Check if Notification Mailer is running
v)  Try rebuilding the WF_NOTIFICATION_OUT queue if necessary
The wfntfqup.sql script rebuilds the WF_NOTIFICATION_OUT queue by dropping and recreating
that queue, removing pending notification messages from the WF_DEFERRED queue, and
repopulating the WF_NOTIFICATION_OUT queue from the Oracle Workflow Notification System
tables.
Stop the Notification Mailer, rebuild the Mailer queue using $FND_TOP/patch/115/sql/wfntfqup.sql (a sample invocation is sketched below), and then restart the Mailer.
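A hedged sketch of the rebuild; the argument order shown (APPS user, APPS password, APPLSYS user) is the commonly documented one, so verify it against the header of wfntfqup.sql for your release and take a backup before running:
# stop the Workflow Notification Mailer from OAM first, then as the applications owner:
sqlplus apps/<apps_password> @$FND_TOP/patch/115/sql/wfntfqup.sql APPS <apps_password> APPLSYS
# restart the Notification Mailer from OAM once the script completes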
C) Inbound Processing not working
i)  Check the IMAP server
$AFJVAPRG -classpath $AF_CLASSPATH -Dprotocol=imap -Dserver=<IMAP Server> -Dport=143 \
-Ddebug=Y \
-Daccount=workflow-<ORACLE_SID> -Dpassword=<workflow password> -Dconnect_timeout=120 \
-Ddbcfile=$FND_SECURE/<ORACLE_SID>.dbc -Dfolder=Inbox oracle.apps.fnd.wf.mailer.Mailer
ii) Check whether the expunge-inbox option is enabled for the Mailer
iii) Log in to the IMAP mailbox and clean up any spam email if present. It is recommended to regularly clean up the PROCESS and DISCARD folders
iv) Create rules/tags for auto-reply and delivery-failure notifications to avoid the mails being continuously resent
v) Ensure that Mailer inbound processing has been enabled by setting the mailer parameter Inbound Thread Count = 1, and ensure that the Notification Mailer is up and running.

Unix commands useful for Oracle DBAs

No comments :
How to kill all similar processes with a single command (in this case, java)
ps -ef | grep java | grep -v grep | awk '{print $2}' | xargs -i kill -9 {}
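On most modern Linux systems the same thing can be done more simply with pkill, which matches against the full command line when given -f:
pkill -9 -f java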
Locating Files under a particular directory
find . -print |grep -i xyz.sql
Clean up any unwanted trace files more than seven days old
find . -name "*.trc" -mtime +7 -exec rm {} \;
To extract a specific column of output from a UNIX command, for example to determine the UNIX process IDs for all Oracle processes on the server (second column)
ps -ef | grep -i oracle | awk '{ print $2 }'
Changing the standard prompt for Oracle Users
Edit the .profile for the oracle user
PS1="`hostname`*$ORACLE_SID:"'$PWD> '
Display top 10 CPU consumers using the ps command
/usr/ucb/ps auxgw | head -11
Show number of active Oracle dedicated connection users for a particular ORACLE_SID
ps -ef | grep $ORACLE_SID|grep -v grep|grep -v ora_|wc -l
Display the number of CPU’s in Solaris
psrinfo -v | grep "Status of processor" | wc -l
How to schedule a Job in Unix
Use cronjob
crontab -l ( list current jobs in cron)
crontab -e ( edit current jobs in cron )
_1_ _2_ _3_ _4_ _5_ $Job_Name
1 - Minutes (0-59)
2 - Hours (0-23)
3 - Day of month (1-31)
4 - Month (1-12)
5 - Day of week (0-6): 0 = Sunday, 1 = Monday, ...
e.g. 0 0 1 * 5 means run the job at midnight on the 1st of the month and at midnight every Friday
Crontab examples
To submit a task every Tuesday (day 2) at 2:45PM
45 14 * * 2 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every 15 minutes on weekdays (days 1-5)
0,15,30,45 * * * 1-5 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
To submit a task to run every hour at 15 minutes past the hour on weekends (days 6 and 0)
15 * * * 0,6 /opt/oracle/scripts/tr_listener.sh > /dev/null 2>&1
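Combining the format above with the trace-file cleanup command from earlier, a sketch of a nightly purge entry (the path is a placeholder; adjust it to your environment):
# purge *.trc files older than 7 days, every night at 01:30
30 1 * * * find /u01/app/oracle/admin -name "*.trc" -mtime +7 -exec rm {} \; > /dev/null 2>&1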
Show all links
find . -type l -exec ls -ld {} \;
Show all directories
find . -type d -exec ls -ld {} \;
How to find the release in Linux
cat /etc/*-release
To print the top 10 largest files and directories:
$ du -a . | sort -nr | head
To print the top 10 largest files (not directories) in a particular directory and its subdirectories:
$ find . -type f -printf '%s %p\n' | sort -nr | head
To restrict the search to the present directory, use "-maxdepth 1" with find:
$ find . -maxdepth 1 -type f -printf '%s %p\n' | sort -nr | head
How to check if a Port is listening for any Service
netstat -an | grep <port no>
How to find the symbolic links that point to the old path in your oracle_home and appl_top.
ls -al `find . -type l` | grep $OLD_PATH
To find CPU in Solaris
uname -X
To find files modified in the last n minutes:
find . -mmin -n
To find files modified more than n minutes ago:
find . -mmin +n
How To Check Environment Variables for a Running Process
In Linux,
strings -a /proc/<pid_of_the_process>/environ
In Solaris,
pargs -e <pid_of_the_process>
In AIX,
pargs or ps eww <pid_of_the_process>
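On Linux, an alternative that prints one variable per line (the environ file is NUL-separated, which tr converts to newlines):
tr '\0' '\n' < /proc/<pid_of_the_process>/environ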
Display n lines after a match
grep -A <n> "string" FILENAME
Display n lines before a match
grep -B <n> "string" FILENAME
Search in all files recursively using grep -r
grep -r "tom" *
Eliminate empty and commented lines
sed -e 's/#.*//;/^$/d' filename.txt
Eliminate comments using sed
sed -e 's/#.*//' filename.txt
Display One File Per Line Using ls -1
ls -1
Display File Size in Human Readable Format Using ls -lh
$ ls -lh

Oracle RAC interview questions and answers

No comments :
Q What is SCAN?
Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access an Oracle Database running in a cluster. The benefit is that clients using SCAN do not need to change if you add or remove nodes in the cluster.

Q What is the difference between the RAC public IP, private IP and virtual IP (VIP)?
Answer: In Oracle RAC clusters, we see three types of IP addresses:
Public IP: The public IP address is for the server. This is the same as any server IP address, a unique address that exists in /etc/hosts.
Private IP: Oracle RAC requires "private IP" addresses to manage the CRS, the clusterware heartbeat process and the cache fusion layer.
Virtual IP: Oracle uses a Virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address. The VIP is used for RAC failover (TAF).

Q What is the Global Cache Service (GCS)?
The Global Cache Service (GCS) is the heart of the Cache Fusion concept. It is through GCS that data integrity in RAC is maintained when more than one instance needs a particular data block. Instances look to the GCS to fulfil their data block needs.
 GCS is responsible for:
Tracking the data block
 Accepting the data block requests from instances
Informing the holding instance to release the lock on the data block or ship a CR image
Coordinating the shipping of data blocks as needed between the instance through the interconnect
Informing the instances to keep or discard PIs  
Q What is dynamic remastering? When does dynamic remastering happen?
Dynamic remastering is the ability to move the ownership of a resource from one instance to another in RAC. Dynamic resource remastering is used to implement resource affinity for increased performance. Resource affinity optimizes the system in situations where update transactions are being executed mostly on one instance; when the activity shifts to another instance, the resource affinity correspondingly moves to that instance. If activity is not localized, resource ownership is distributed across the instances by hashing.
In 10g, dynamic remastering happens at the file/object level. The criteria for remastering are fairly stringent: one instance should touch the object more than 50 times as often as the other instances in a particular period (say 10 minutes). This touch ratio and time window can be tuned with the _gc_affinity_limit and _gc_affinity_time parameters.
 Q Why are we required to maintain an odd number of voting disks?
An odd number of voting disks helps avoid split-brain. When the nodes in a cluster cannot talk to each other, they race to lock the voting disks, and whichever node locks the majority of the disks survives. If the number of disks were even, a node might lock exactly 50% of them (for example 2 out of 4), and there would be no way to decide which node to evict.
With an odd number, one side always holds more disks than the other, so the cluster can evict the node (or sub-cluster) holding fewer.
 Q How do you check the health of your RAC database?
 The 'crsctl' command can be run as the root or oracle user to check clusterware health, but for starting or stopping the clusterware we have to use root or another privileged user.
[oracle@TEST_NODE1 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
 Q How do you check the services on a RAC node?
 We can check or start services with the 'srvctl' command. The example below brings a load-balanced/TAF service named RAC online.
[oracle@TEST_NODE1 ~]$ srvctl start service -d orcl -s RAC
[oracle@TEST_NODE1 ~]$ crsstat
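To confirm the service is up afterwards (database name orcl and service name RAC are carried over from the example above):
[oracle@TEST_NODE1 ~]$ srvctl status service -d orcl -s RAC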
Q If there is some issue with the virtual IP, how will you troubleshoot it? How will you change the virtual IP?
 To change the VIP (virtual IP) on a RAC node, use the command
[oracle@testnode oracle]$ srvctl modify nodeapps -A new_address
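A fuller sketch of the procedure on one node; the node name, address, netmask and interface below are illustrative placeholders, and the instance and listener on that node should be stopped first:
srvctl stop nodeapps -n testnode1
srvctl modify nodeapps -n testnode1 -A 10.1.1.50/255.255.255.0/eth0
srvctl start nodeapps -n testnode1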
 Q How will you back up your RAC database?
 Backup strategy for a RAC database. A RAC database consists of:
1) OCR
2) Voting disk
3) Database files, control files, redo log files and archive log files
 Q Do you have any idea of load balancing in applications? How is load balancing done?
http://practicalappsdba.wordpress.com/category/for-master-apps-dbas/
 Q What is RAC?
RAC stands for Real Application Clusters. It is a clustering solution from Oracle Corporation that ensures high availability of databases by providing instance failover and media failover features.
 Q What is RAC and how is it different from non RAC databases?
RAC stands for Real Application Clusters: you have n instances running on their own separate nodes, all based on shared storage. The cluster is the key component and is a collection of servers operating as one unit. RAC is a strong solution for high performance and high availability. A non-RAC database is a single point of failure in case of hardware failure or server crash.
 Q Give the usage of srvctl ?
srvctl start instance -d db_name -i "inst_name_list" [-o start_options]
srvctl stop instance -d name -i "inst_name_list" [-o stop_options]
srvctl stop instance -d orcl -i "orcl3,orcl4" -o immediate
srvctl start database -d name [-o start_options]
srvctl stop database -d name [-o stop_options]
srvctl start database -d orcl -o mount
 Q Mention the Oracle RAC software components ?
Oracle RAC is composed of two or more database instances. They are composed of memory structures and background processes, the same as a single-instance database. Oracle RAC instances use two services, GES (Global Enqueue Service) and GCS (Global Cache Service), that enable Cache Fusion. Oracle RAC instances also have the following additional background processes:
ACMS—Atomic Controlfile to Memory Service (ACMS)
GTX0-j—Global Transaction Process
LMON—Global Enqueue Service Monitor
LMD—Global Enqueue Service Daemon
LMS—Global Cache Service Process
LCK0—Instance Enqueue Process
RMSn—Oracle RAC Management Processes (RMSn)
RSMN—Remote Slave Monitor
 Q What is GRD?
GRD stands for Global Resource Directory. The GES and GCS maintain records of the status of each datafile and each cached block using the Global Resource Directory. This process is referred to as Cache Fusion and helps maintain data integrity.
 Q What are the different network components are in 10g RAC?
public, private, and vip components
Private interfaces are for inter-node communication. The VIP is all about application availability: when a node fails, its VIP component fails over to another node. This is why all applications should be based on the VIP, meaning TNS entries should have the VIP entry in the host list.
 Q Give Details on ACMS:
ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures distributed SGA memory updates, i.e. SGA updates are globally committed on success or globally aborted in the event of a failure.
 Q What are the major RAC wait events?
In a RAC environment the buffer cache is global across all instances in the cluster, and hence the processing differs. The most common wait events related to this are gc cr request and gc buffer busy.
GC CR request: the time it takes to retrieve the data from the remote cache.
Reason: RAC traffic using a slow connection, or inefficient queries (poorly tuned queries increase the number of data blocks requested by an Oracle session; the more blocks requested, the more often a block will need to be read from a remote instance via the interconnect).
GC BUFFER BUSY: the time the remote instance spends locally accessing the requested data block.
 Q Give details on GTX0-j 
This process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.
 Q Give details on LMON
This process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations. It is called the Global Enqueue Service Monitor.
 Q Give details on LMD
This process is called as global enqueue service daemon. This process manages incoming remote resource requests within each instance.
 Q Give details on LMS
This process is called the Global Cache Service process. It maintains the status of datafiles and of each cached block by recording information in the Global Resource Directory (GRD). It also controls the flow of messages to remote instances, manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.
 Q Give details on LCK0
This process is called the Instance Enqueue process. It manages non-Cache Fusion resource requests such as library cache and row cache requests.
 Q Give details on RMSn
These processes are called the Oracle RAC management processes. They perform manageability tasks for Oracle RAC, including the creation of resources related to Oracle RAC when new instances are added to the cluster.
Q How to export and import crs resources while migrating Oracle RAC to new server.
The script below generates srvctl add commands for the databases, instances, services and 11g listeners registered in the OCR of the current RAC.
Save the output of the script and run it on the new RAC.
for DBNAME in $(srvctl config database)
do
# Generate DB resource
srvctl config database -d $DBNAME -a | awk -v dbname="$DBNAME" \
'BEGIN { FS=":" }
$1~/Oracle home/ || $1~/ORACLE_HOME/ {dbhome = "-o" $2}
$1~/Spfile/ || $1~/SPFILE/ {spfile = "-p" $2}
$1~/Disk Groups/ {dg = "-a" $2}
END { if (avail == "-a ") {avail = ""}; printf "%s %s %s %s %s\n", "srvctl add database -d ", dbname, dbhome, spfile, dg }'
# Generate Instance resource
srvctl status database -d $DBNAME | awk -v dbname="$DBNAME" \
'$4~/running/ { printf "%s %s %s %s %s %s\n", "srvctl add instance -d ",dbname, " -i ", $2 ," -n ", $7 }
$5~/running/ { printf "%s %s %s %s %s %s \n", "srvctl add instance -d ",dbname, " -i ", $2 ," -n ", $8 }'
# Modify instance for 10G - ASM dependency
if [ $(echo $ORACLE_HOME | grep "1020" | wc -l ) -eq 1 ]
then
srvctl status database -d $DBNAME | awk -v dbname="$DBNAME" \
'$2~/1$/ { printf "%s %s %s %s %s \n", "srvctl modify instance -d ",dbname, " -i ", $2 ," -s +ASM1" }
$2~/2$/ { printf "%s %s %s %s %s \n", "srvctl modify instance -d ",dbname, " -i ", $2 ," -s +ASM2" }
$2~/3$/ { printf "%s %s %s %s %s \n", "srvctl modify instance -d ",dbname, " -i ", $2 ," -s +ASM3" }
$2~/4$/ { printf "%s %s %s %s %s \n", "srvctl modify instance -d ",dbname, " -i ", $2 ," -s +ASM4" }'
fi
echo "srvctl start database -d $DBNAME"
# Generate Service resource
snamelist=$(srvctl status service -d $DBNAME | awk '{print $2}')
for sname in $snamelist
do
srvctl config service -d $DBNAME -s $sname| awk -v dbname="$DBNAME" -v sname=$sname \
'BEGIN { FS=":"}
$1~/Preferred instances/ {pref = "-r" $2}
$1~/PREF/ {pref = "-r" $2; sub(/AVAIL/, "", pref) }
$1~/Available instances/ {avail = "-a" $2}
$2~/AVAIL/ {avail = "-a" $3}
$1~/Failover type/ {ft = "-e" $2}
$1~/Failover method/ {fm = "-m" $2}
$1~/Runtime Load Balancing Goal/ {g = "-B" $2}
END { if (avail == "-a ") {avail = ""}; printf "%s %s %s %s %s %s %s %s %s %s\n", "srvctl add service -d ",dbname, "-s ", sname, pref, avail ,ft, fm,g, "-P BASIC"}'
echo "srvctl start service -d $DBNAME -s $sname"
done
done
# Listener at 11G Home. 10G listeners can't be added with srvctl.
srvctl config listener | awk \
'BEGIN { FS=":"; state = 0; }
$1~/Name/ {lname = "-l" $2; state=1};
$1~/Home/ && state == 1 {ohome = "-o" $2; state=2;}
$1~/End points/ && state == 2 {lport = "-p " $3; state=3;}
state == 3 {if (ohome != "-o ") {printf "%s %s %s %s\n", "srvctl add listener ", lname, ohome, lport;} state=0;}'
 Q Give details on RSMN
This process is called the Remote Slave Monitor. It manages background slave process creation and communication on remote instances. It is a background slave process that performs tasks on behalf of a coordinating process running in another instance.
 Q What components in RAC must reside in shared storage?
All datafiles, control files, SPFILEs and redo log files must reside on cluster-aware shared storage.
 Q What is the significance of using cluster-aware shared storage in an Oracle RAC environment?
All instances of an Oracle RAC database can access the datafiles, control files, SPFILE and redo log files when these files are hosted on cluster-aware shared storage, which is a group of shared disks.
 Q Give few examples for solutions that support cluster storage
ASM (Automatic Storage Management), raw disk devices, network file system (NFS), OCFS2 and OCFS (Oracle Cluster File System).
 Q What is an interconnect network?
An interconnect network is a private network that connects all of the servers in a cluster. The interconnect network uses a switch/multiple switches that only the nodes in the cluster can access.
 Q How can we configure the cluster interconnect?
Configure the User Datagram Protocol (UDP) over Gigabit Ethernet for the cluster interconnect. On UNIX and Linux systems, UDP and RDS (Reliable Datagram Sockets) are the protocols used; Windows clusters use the TCP protocol.
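To see which interfaces are registered as public and cluster_interconnect, oifcfg (run from the Clusterware home) can be used; the sample output below is illustrative only:
$ oifcfg getif
eth0  10.1.1.0  global  public
eth1  192.168.10.0  global  cluster_interconnect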
 Q Can we use crossover cables with Oracle Clusterware interconnects?
No, crossover cables are not supported with Oracle Clusterware interconnects.
 Q What is the use of cluster interconnect?
Cluster interconnect is used by the Cache fusion for inter instance communication.
 Q How do users connect to database in an Oracle RAC environment?
Users can access a RAC database using a client/server configuration or through one or more middle tiers, with or without connection pooling. Users can use the Oracle services feature to connect to the database.
 Q What is the use of a service in Oracle RAC environment?
Applications should use the services feature to connect to the Oracle database. Services enable us to define rules and characteristics to control how users and applications connect to database instances.
 Q What are the characteristics controlled by Oracle services feature?
The characteristics include a unique name, workload balancing and failover options, and high availability characteristics.
 Q What enables the load balancing of applications in RAC?
Oracle Net Services enable the load balancing of application connections across all of the instances in an Oracle RAC database.
 Q What is a virtual IP address or VIP?
A virtual IP address or VIP is an alternate IP address that client connections use instead of the standard public IP address. To configure a VIP address, we need to reserve a spare IP address for each node, and the IP addresses must use the same subnet as the public network.
 Q What is the use of VIP?
If a node fails, then the node's VIP address fails over to another node on which the VIP address can accept TCP connections but it cannot accept Oracle connections.
 Q Give situations under which VIP address failover happens
VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.
 Q What is the significance of VIP address failover?
When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid connection refused error. They do not have to wait for TCP connection timeout messages.
 Q What are the administrative tools used for Oracle RAC environments?
An Oracle RAC cluster can be administered as a single image using OEM (Enterprise Manager), SQL*Plus, Server Control (SRVCTL), the Cluster Verification Utility (CVU), DBCA and NETCA.
 Q How do we verify that RAC instances are running?
Issue the following query from any one node connecting through SQL*PLUS.
$connect sys/sys as sysdba
SQL>select * from V$ACTIVE_INSTANCES;
The query gives the instance number under the INST_NUMBER column and host_name:instance_name under the INST_NAME column.
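Another common check is to query GV$INSTANCE, which returns one row per running instance:
SQL>select inst_id, instance_name, host_name, status from gv$instance;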
 Q What is FAN?
Fast Application Notification (FAN) relates to events concerning instances, services and nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service-level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.
 Q Where can we apply FAN UP and DOWN events?
FAN UP and FAN DOWN events can be applied to instances,services and nodes.
 Q State the use of FAN events in case of a cluster configuration change.
During times of cluster configuration changes, the Oracle RAC high availability framework publishes a FAN event immediately when a state change occurs in the cluster, so applications can receive FAN events and react immediately. This prevents applications from polling the database and detecting a problem only after such a state change.
 Q Why should we have a separate home for the ASM instance?
It is a good practice to have the ASM home separate from the database home (ORACLE_HOME). This helps in upgrading and patching ASM and the Oracle database software independently of each other. Also, we can deinstall the Oracle database software independently of the ASM instance.
Q What is the advantage of using ASM?
ASM is the Oracle-recommended storage option for RAC databases, as ASM maximizes performance by managing the storage configuration across the disks. ASM does this by distributing the database files across all of the available storage within our cluster database environment.
 Q What is rolling upgrade?
It is a new ASM feature from Database 11g. ASM instances in Oracle Database 11g (from 11.1) can be upgraded or patched using the rolling upgrade feature. This enables us to patch or upgrade ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain a functional cluster while one or more of the nodes in the cluster are running different software versions.
 Q Can rolling upgrade be used to upgrade from 10g to 11g database?
No, it can be used only for Oracle Database 11g releases (from 11.1).
 Q State the initialization parameters that must have same value for every instance in an Oracle RAC database
Some initialization parameters are critical at database creation time and must have the same value for every instance. Their values must be specified in the SPFILE or PFILE for every instance. The parameters that must be identical on every instance are listed below:
ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
COMPATIBLE
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCES
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
INSTANCE_TYPE (RDBMS or ASM)
PARALLEL_MAX_SERVERS
REMOTE_LOGIN_PASSWORDFILE
UNDO_MANAGEMENT
 Q What is ORA-00603: ORACLE server session terminated by fatal error or ORA-29702: error occurred in Cluster Group Service operation?
RAC node name was listed in the loopback address...
 Q Can the DML_LOCKS and RESULT_CACHE_MAX_SIZE be identical on all instances?
These parameters can be identical on all instances only if these parameter values are set to zero.
 Q What two parameters must be set at the time of starting up an ASM instance in a RAC environment?
The parameters CLUSTER_DATABASE and INSTANCE_TYPE must be set.
 Q Mention the components of Oracle clusterware
Oracle clusterware is made up of components like voting disk and Oracle Cluster Registry(OCR).
 Q What is a CRS resource?
Oracle Clusterware is used to manage high-availability operations in a cluster. Anything that Oracle Clusterware manages is known as a CRS resource. Some examples of CRS resources are a database, an instance, a service, a listener, a VIP address, an application process, etc.
 Q What is the use of OCR?
Oracle clusterware manages CRS resources based on the configuration information of CRS resources stored in OCR(Oracle Cluster Registry).
 Q How does a Oracle Clusterware manage CRS resources?
Oracle clusterware manages CRS resources based on the configuration information of CRS resources stored in OCR(Oracle Cluster Registry).
 Q Name some Oracle clusterware tools and their uses?
OIFCFG - allocating and deallocating network interfaces, and identifying the interconnect being used
OCRCONFIG - command-line tool for managing the Oracle Cluster Registry
OCRDUMP - dumps the contents of the OCR for inspection
CVU - Cluster Verification Utility, used to verify cluster setup and pre/post-installation requirements
 Q What are the modes of deleting instances from Oracle Real Application Clusters databases?
We can delete instances using silent mode or interactive mode using DBCA(Database Configuration Assistant).
 Q How do we remove ASM from a Oracle RAC environment?
We need to stop and delete the instance on the node first, in interactive or silent mode. After that, ASM can be removed using the srvctl tool as follows:
srvctl stop asm -n node_name
srvctl remove asm -n node_name
We can verify if ASM has been removed by issuing the following command:
srvctl config asm -n node_name
 Q How do we verify that an instance has been removed from OCR after deleting an instance?
Issue the following srvctl command:
srvctl config database -d database_name
cd $CRS_HOME/bin
./crs_stat
 Q How do we verify an existing current backup of OCR?
We can verify the current backup of OCR using the following command : ocrconfig -showbackup
 Q What are the performance views in an Oracle RAC environment?
We have V$ views that are instance-specific. In addition, we have GV$ views, called global views, that have an INST_ID column of numeric data type. GV$ views obtain information from the individual V$ views.
 Q What are the types of connection load-balancing?
There are two types of connection load-balancing: server-side load balancing and client-side load balancing.
 Q What is the difference between server-side and client-side connection load balancing?
Client-side load balancing happens on the client side, where the client chooses at random from the list of addresses in the TNS entry. With server-side load balancing, the listener uses the load balancing advisory to redirect connections to the instance providing the best service.
 Q What are the three greatest benefits that RAC provides?
The three main benefits are availability, scalability, and the ability to use low cost commodity hardware. RAC allows an application to scale vertically, by adding CPU, disk and memory resources to an individual server. But RAC also provides horizontal scalability, which is achieved by adding new nodes into the cluster. RAC also allows an organization to bring these resources online as they are needed. This can save a small or midsize organization a lot of money in the early stages of a project.
In a RAC environment, if a node in the cluster fails, the application continues to run on the surviving nodes contained in the cluster. If your application is configured correctly, most users won't even know that the node they were running on became unavailable.
 Q What are the different network components in Oracle 10g RAC?
We have public, private, and VIP components. Private interfaces is for intra node communication. VIP is all about availability of application. When a node fails then the VIP component will fail over to some other node, this is the reason that all applications should be based on VIP components.  This means that tns entries should have VIP entry in the host list.
 Q Tune the following RAC DATABASE (DBNAME=PROD) which is 3 node RAC.
PROD1: 8 CPUs, 32 GB RAM
PROD2: 15 CPUs, 12 GB RAM
PROD3: 8 CPUs, 16 GB RAM
What are you looking for here? What tuning information do you expect?
It is a 3-node cluster with different hardware configurations running RAC.
I would put 20% of the memory for Oracle in each node, so the SGA would be different in each of the nodes.
Also, since the CPU counts are different, PROD2 can have a higher maximum number of processes than the rest of them.
But as I said this is just configuration, this is not tuning. Question is not clear.
 Q Write a sample script for RMAN recovery if all the instances are down. (First explain the procedure for how you will restore.)
Bring all nodes down.
Start one Node
Restore all datafiles and archive logs.
Recover 1 Node.
Open the database.
bring other nodes up.
Confirm that all nodes are operational.
 Q Clients are performing some operations and suddenly one of the datafiles experiences a problem; what do you do? The cluster is a two-node one.
Take the datafile offline, recover it, and bring it back online.
 Q How can you connect to a specific node in a RAC environment?
In tnsnames.ora, ensure that you have INSTANCE_NAME specified in the CONNECT_DATA section (see the sample entry below).
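A sketch of such a tnsnames.ora entry (host, port, service and instance names are placeholders):
ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (INSTANCE_NAME = orcl1)
    )
  )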
 Q How to move OCR and Voting disk to new storage device?
Moving OCR
==========
You must be logged in as the root user, because root owns the OCR files. Also an ocrmirror must be in place before trying to replace the OCR device.
Make sure there is a recent backup of the OCR file before making any changes:
ocrconfig -showbackup
If there is not a recent backup copy of the OCR file, an export can be taken for the current OCR file. Use the following command to generate an export of the online OCR file:
In 10.2
# ocrconfig -export <file_name> -s online
In 11g
# ocrconfig -manualbackup
The new OCR disk must be owned by root, must be in the oinstall group, and must have permissions set to 640. Provide at least 100 MB disk space for the OCR.
On one node as root run:
# ocrconfig -replace ocr <new_ocr_location>
# ocrconfig -replace ocrmirror <new_ocrmirror_location>
Now run ocrcheck to verify if the OCR is pointing to the new file
Moving Voting Disk
==================
Note: crsctl votedisk commands must be run as root
Shutdown the Oracle Clusterware (crsctl stop crs as root) on all nodes before making any modification to the voting disk. Determine the current voting disk location using:
crsctl query css votedisk
Take a backup of all voting disk:
dd if=voting_disk_name of=backup_file_name
To move a Voting Disk, provide the full path including file name:
crsctl delete css votedisk <old_voting_disk_path> -force
crsctl add css votedisk <new_voting_disk_path> -force
After modifying the voting disk, start the Oracle Clusterware stack on all nodes
# crsctl start crs
Verify the voting disk location using
crsctl query css votedisk
 Q What is runfixup.sh script in Oracle Clusterware 11g release 2 installation
With Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met, and creates shell scripts, called fixup scripts, to finish incomplete system configuration steps. If OUI detects an incomplete task, then it generates fixup scripts (runfixup.sh). You can run the fixup script after you click the Fix and Check Again Button.
The Fixup script does the following:
- If necessary, sets kernel parameters to values required for successful installation, including:
  - shared memory parameters
  - open file descriptor and UDP send/receive parameters
- Sets permissions on the Oracle Inventory (central inventory) directory.
- Reconfigures primary and secondary group memberships for the installation owner, if necessary, for the Oracle Inventory directory and the operating system privileges groups.
- Sets shell limits if necessary to required values.
 Q When exactly during the installation process are clusterware components created?
After fulfilling the pre-installation requirements, the basic installation steps to follow are:
1. Invoke the Oracle Universal Installer (OUI)
2. Enter the different information for some components like:
- name of the cluster
- public and private node names
- location for OCR and Voting Disks
- network interfaces used for RAC instances
-etc.
3. After the Summary screen, OUI will start copying under the $CRS_HOME (this is the $ORACLE_HOME for Oracle Clusterware) in the local node the libraries and executables.
- here we will have the daemons and scripts init.* created and configured properly.
Oracle Clusterware is formed of several daemons, each of which has a special function inside the stack. Daemons are executed via the init.* scripts (init.cssd, init.crsd and init.evmd).
- note that for CRS only some client libraries are recreated, but not all the executables (as for the RDBMS).
4. Later the software is propagated to the rest of the nodes in the cluster and the oraInventory is updated.
5. The installer will ask to execute root.sh on each node. Until this step the software for Oracle Clusterware is inside the $CRS_HOME.
Running root.sh will create several components outside the $CRS_HOME:
- OCR and VD will be formatted.
- control files (or SCLS_SRC files ) will be created with the correct contents to start Oracle Clusterware.
These files are used to control some aspects of Oracle Clusterware like:
- enable/disable processes from the CSSD family (Eg. oprocd, oslsvmon)
- stop the daemons (ocssd.bin, crsd.bin, etc).
- prevent Oracle Clusterware from being started when the machine boots.
- etc.
- /etc/inittab will be updated and the init process is notified.
In order to start the Oracle Clusterware daemons, the init.* scripts first need to be run. These scripts are executed by the daemon init. To accomplish this some entries must be created in the file /etc/inittab.
- the different processes init.* (init.cssd, init.crsd, etc) will start the daemons (ocssd.bin, crsd.bin, etc). When all the daemons are running then we can say that the installation was successful
- On 10.2 and later, running root.sh on the last node in the cluster also will create the nodeapps (VIP, GSD and ONS). On 10.1, VIPCA is executed as part of the RAC installation.
6. After running root.sh on each node, we need to continue with the OUI session. After pressing the 'OK' button OUI will include the information for the public and cluster_interconnect interfaces. Also CVU (Cluster Verification Utility) will be executed.
 Q What are Oracle Clusterware processes for 10g on Unix and Linux
Cluster Synchronization Services (ocssd) — Manages cluster node membership and runs as the oracle user; failure of this process results in cluster restart.
Cluster Ready Services (crsd) — The crs process manages cluster resources (which could be a database, an instance, a service, a Listener, a virtual IP (VIP) address, an application process, and so on) based on the resource's configuration information that is stored in the OCR. This includes start, stop, monitor and failover operations. This process runs as the root user
Event manager daemon (evmd) —A background process that publishes events that crs creates.
Process Monitor Daemon (OPROCD) —This process monitors the cluster and provides I/O fencing. OPROCD performs its check and stops running, and if the wake-up is beyond the expected time, OPROCD resets the processor and reboots the node. An OPROCD failure results in Oracle Clusterware restarting the node. OPROCD uses the hangcheck timer on Linux platforms.
RACG (racgmain, racgimon) —Extends clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur.
 Q What are Oracle database background processes specific to RAC
•LMS—Global Cache Service Process
•LMD—Global Enqueue Service Daemon
•LMON—Global Enqueue Service Monitor
•LCK0—Instance Enqueue Process
To ensure that each Oracle RAC database instance obtains the block that it needs to satisfy a query or transaction, Oracle RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances.
Q What are Oracle Clusterware Components
Voting Disk — Oracle RAC uses the voting disk to manage cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The voting disk must reside on shared disk.
Oracle Cluster Registry (OCR) — Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster
 Q How do you troubleshoot node reboot 
Please check metalink ...
Note 265769.1 Troubleshooting CRS Reboots
Note.559365.1 Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions.
 Q How do you backup the OCR
There is an automatic backup mechanism for the OCR. The default location is: $ORA_CRS_HOME/cdata/<cluster_name>/
To display backups:
#ocrconfig -showbackup
To restore a backup:
#ocrconfig -restore <backup_file_name>
With Oracle RAC 10g Release 2 or later, you can also use the export command:
#ocrconfig -export <file_name> -s online, and use the -import option to restore the contents back.
With Oracle RAC 11g Release 1, you can do a manual backup of the OCR with the command:
# ocrconfig -manualbackup
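To restore from an export taken with -export, the -import option is used. A sketch (the file name is a placeholder; Oracle Clusterware should be stopped on all nodes before importing):
# ocrconfig -import /backup/ocr_export.dmp
# then restart Oracle Clusterware and verify with ocrcheck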
 Q How do you backup voting disk
#dd if=voting_disk_name of=backup_file_name
 Q How do I identify the voting disk location 
#crsctl query css votedisk
 Q How do I identify the OCR file location 
check /var/opt/oracle/ocr.loc or /etc/ocr.loc ( depends upon platform)
or
#ocrcheck
 Q Is ssh required for normal Oracle RAC operation ?
"ssh" are not required for normal Oracle RAC operation. However "ssh" should be enabled for Oracle RAC and patchset installation.
 Q What is the purpose of Private Interconnect ?
Clusterware uses the private interconnect for cluster synchronization (network heartbeat) and daemon communication between the clustered nodes. This communication is based on the TCP protocol.
RAC uses the interconnect for cache fusion (UDP) and inter-process communication (TCP). Cache Fusion is the remote memory mapping of Oracle buffers, shared between the caches of participating nodes in the cluster. 
 Q Why do we have a Virtual IP (VIP) in Oracle RAC?
Without using VIPs or FAN, clients connected to a node that died will often wait for a TCP timeout period (which can be up to 10 min) before getting an error. As a result, you don't really have a good HA solution without using VIPs.
When a node fails, the VIP associated with it is automatically failed over to some other node and new node re-arps the world indicating a new MAC address for the IP. Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately
 Q What do you do if you see GC CR BLOCK LOST in top 5 Timed Events in AWR Report? 
This is most likely due to a fault in the interconnect network.
Check netstat -s
If you see "fragments dropped" or "packet reassemblies failed" increasing, work with your system administrator to find and fix the fault in the network.
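A quick way to pull out just the relevant counters on Linux (run it on each node and compare the values over time):
netstat -s | grep -iE 'fragment|reassembl'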
 Q How many nodes are supported in a RAC Database?
10g Release 2 supports 100 nodes in a cluster using Oracle Clusterware, and 100 instances in a RAC database.
 Q srvctl cannot start an instance; I get the errors PRKP-1001 and CRS-0215, but SQL*Plus can start it on both nodes. How do you identify the problem?
Set the environment variable SRVM_TRACE to true and start the instance with srvctl. You will then get a detailed error stack, for example:
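(Database name orcl and instance name orcl1 below are placeholders.)
export SRVM_TRACE=true
srvctl start instance -d orcl -i orcl1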
 Q what is the purpose of the ONS daemon?
The Oracle Notification Service (ONS) daemon is a daemon started by the CRS clusterware as part of the nodeapps. There is one ONS daemon started per clustered node.
The Oracle Notification Service daemon receives a subset of published clusterware events via the local evmd and racgimon clusterware daemons and forwards those events to application subscribers and to the local listeners.
This is in order to facilitate:
a. the FAN or Fast Application Notification feature, allowing applications to respond to database state changes.
b. the 10gR2 Load Balancing Advisory, the feature that permits load balancing across the different RAC nodes depending on the load on each node. The RDBMS MMON process creates an advisory for the distribution of work every 30 seconds and forwards it via racgimon and ONS to the listeners and applications.
 Q What is a voting disk?
A voting disk is a file that manages information about node membership.
 Q What are the administrative tasks involved with voting disk?
The following administrative tasks are performed with the voting disk:
1) Backing up voting disks
2) Recovering Voting disks
3) Adding voting disks
4) Deleting voting disks
5) Moving voting disks
 Q How do we backup voting disks?
1) Oracle recommends that you back up your voting disk after the initial cluster creation and after we complete any node addition or deletion procedures.
2) First, as root user, stop Oracle Clusterware (with the crsctl stop crs command) on all nodes. Then, determine the current voting disk by issuing the following command:
crsctl query css votedisk
3) Then, issue the dd or ocopy command to back up a voting disk, as appropriate.
Give the syntax of backing up voting disks:-
On Linux or UNIX systems:
dd if=voting_disk_name of=backup_file_name
where,
voting_disk_name is the name of the active voting disk
backup_file_name is the name of the file to which we want to back up the voting disk contents
On Windows systems, use the ocopy command:
ocopy voting_disk_name backup_file_name
 Q What is the Oracle Recommendation for backing up voting disk?
Oracle recommends us to use the dd command to backup the voting disk with a minimum block size of 4KB.
 Q How do you restore a voting disk?
To restore the backup of your voting disk, issue the dd command on Linux and UNIX systems or the ocopy command on Windows systems.
On Linux or UNIX systems:
dd if=backup_file_name of=voting_disk_name
On Windows systems, use the ocopy command:
ocopy backup_file_name voting_disk_name
where,
backup_file_name is the name of the voting disk backup file
voting_disk_name is the name of the active voting disk
 Q How can we add and remove multiple voting disks?
If we have multiple voting disks, then we can remove the voting disks and add them back into our environment using the following commands, where path is the complete path of the location where the voting disk resides:
crsctl delete css votedisk path
crsctl add css votedisk path
 Q How do we stop Oracle Clusterware? When do we stop it?
Before making any modification to the voting disk, as root user, stop Oracle Clusterware using the crsctl stop crs command on all nodes.
 Q How do we add voting disk?
To add a voting disk, issue the following command as the root user, replacing the path variable with the fully qualified path name for the voting disk we want to add:
crsctl add css votedisk path -force
 Q How do we move voting disks?
To move a voting disk, issue the following commands as the root user, replacing the path variable with the fully qualified path name for the voting disk we want to move:
crsctl delete css votedisk path -force
crsctl add css votedisk path -force
 Q How do we remove voting disks?
To remove a voting disk, issue the following command as the root user, replacing the path variable with the fully qualified path name for the voting disk we want to remove:
crsctl delete css votedisk path -force
 Q What should we do after modifying voting disks?
After modifying the voting disk, restart Oracle Clusterware using the crsctl start crs command on all nodes, and verify the voting disk location using the following command:
crsctl query css votedisk
 Q When can we use -force option?
If our cluster is down, then we can include the -force option to modify the voting disk configuration, without interacting with active Oracle Clusterware daemons. However, using the -force option while any cluster node is active may corrupt our configuration