PCP


PCP

satish
Hello Erman,

Why does Oracle recommend a shared filesystem for PCP? Any ideas?

Thank you.

Re: PCP

satish
If we don't use a shared filesystem, do we face any problem with viewing log and output files? If yes, please can you give some idea of what problem we will face.

Re: PCP

ErmanArslansOracleBlog
Administrator
In a shared filesystem architecture, all changes made to the shared file system are immediately accessible to all application tier nodes.
I don't think a shared filesystem is a "must" for PCP, but it is recommended. Viewing log and out files may be the main reason for that recommendation. You can, however, find a workaround for that too.
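For context, each request row in FND_CONCURRENT_REQUESTS records both the file name and the node that produced it, so the viewer has to reach that node's filesystem. A quick way to see this for a given request (a sketch only; the request id is a placeholder and APPS_PWD is assumed to hold the APPS password):

# Show which node owns a request's log and out files
sqlplus -s apps/"$APPS_PWD" <<'SQL'
SELECT request_id,
       logfile_node_name, logfile_name,
       outfile_node_name, outfile_name
  FROM fnd_concurrent_requests
 WHERE request_id = 123456;
SQL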

Re: PCP

satish
Thank you.

I don't understand where the problem in viewing log files comes from.

Re: PCP

satish
If the filesystem is not shared, I think output generated by node1 will not be readable by node2.

How can we overcome this problem?
 

Re: PCP

ErmanArslansOracleBlog
Administrator
So you are asking about having a non-shared APPS filesystem.
You make the directories which store the concurrent request out and log files accessible via an NFS mount to all concurrent processing tiers.
You don't have to share the whole APPS filesystem, only those out and log directories.
You can implement such a configuration using several mechanisms.
One of those mechanisms may be mounting those directories from app node2 to app node1 using NFS, or vice versa.
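For illustration only, a minimal sketch of that NFS mechanism. The paths, hostnames and export options below are examples, not your real values; the real directories would be the ones your concurrent managers write their log and out files to (typically under $APPLCSF):

# On node2 (the NFS server in this sketch): export the directories that hold
# the concurrent log and out files.
echo "/u02/oracle/conc/log  node1(rw,sync)" >> /etc/exports
echo "/u02/oracle/conc/out  node1(rw,sync)" >> /etc/exports
exportfs -ra                                   # re-read /etc/exports

# On node1 (the NFS client): mount node2's directories at the same paths so
# node1 can read and write the same log/out files.
mkdir -p /u02/oracle/conc/log /u02/oracle/conc/out
mount -t nfs node2:/u02/oracle/conc/log /u02/oracle/conc/log
mount -t nfs node2:/u02/oracle/conc/out /u02/oracle/conc/out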

Re: PCP

Satish
Thank you for the update.

If I create directories and mount them from node2 to node1 using NFS, and node1 then fails, will I be able to view the logs/output of the requests that executed on node1?

My concern is: as node1 is down, if I want to view the log/output files of node1, do we need to update the OUTFILE_NODE_NAME and LOGFILE_NODE_NAME columns of the FND_CONCURRENT_REQUESTS table with the surviving shared node name, i.e., node2?

Re: PCP

ErmanArslansOracleBlog
Administrator
If I create directories and mount them from node2 to node1 using NFS, and node1 then fails, will I be able to view the logs/output of the requests that executed on node1?

*Yes (as long as your NFS host is up; in this case, as long as node2 is up).
*Alternatively, you can have an external NFS share that is mounted on both of your nodes; this way is actually better.

My concern is: as node1 is down, if I want to view the log/output files of node1, do we need to update the OUTFILE_NODE_NAME and LOGFILE_NODE_NAME columns of the FND_CONCURRENT_REQUESTS table with the surviving shared node name, i.e., node2?

*Yes.

Ref: Unable to View Output and Log Files From Failed PCP Node (Doc ID 1342773.1)
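For illustration, a minimal sketch of that kind of update, along the lines of the note above. The node names ('NODE1' failed, 'NODE2' surviving) and APPS_PWD are placeholders, and you should back up and test before touching FND_CONCURRENT_REQUESTS on a real system:

# Point the affected requests' log/out file node names at the surviving node
sqlplus -s apps/"$APPS_PWD" <<'SQL'
UPDATE fnd_concurrent_requests
   SET logfile_node_name = 'NODE2',
       outfile_node_name = 'NODE2'
 WHERE logfile_node_name = 'NODE1';

COMMIT;
SQL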