logminer issue

logminer issue

Roshan
Hi Erman,

OGG is running on one Oracle database server (v 12.1.0.2) to replicate tables to an MSSQL server. I am connecting StreamSets to this server (Oracle CDC origin). The mechanism StreamSets uses to capture changes from the Oracle server is the same as GoldenGate's: it mines the redo logs using LogMiner and replicates the changes to another target (e.g. MongoDB).

I noticed that sometimes the GoldenGate Extract process abends, with the following error in the alert log:

Wed Aug 26 14:30:03 2020
Errors in file /u01/ora12c/diag/rdbms/bi/BI/trace/BI_j000_4941.trc:
ORA-12012: error on auto execute of job "APEX_040200"."ORACLE_APEX_WS_NOTIFICATIONS"
ORA-04063: package body "APEX_040200.WWV_FLOW_WORKSHEET_API" has errors
Wed Aug 26 14:30:44 2020
Thread 1 advanced to log sequence 96839 (LGWR switch)
  Current log# 4 seq# 96839 mem# 0: /data1/oradata/BI/onlinelog/redo04a.rdo
  Current log# 4 seq# 96839 mem# 1: /data1/oradata/BI/onlinelog/redo04b.rdo
Wed Aug 26 14:30:52 2020
Archived Log entry 95582 added for thread 1 sequence 96838 ID 0x45acdab7 dest 1:
Wed Aug 26 14:31:11 2020
krvxenq: Failed to acquire logminer dictionary lock (1). pid=100 OS id=4545. Retrying...
Wed Aug 26 14:35:01 2020

There is no error in ggserr.log. Kindly advise whether there could be a conflict between StreamSets and GoldenGate.

Thanks,

Roshan

Re: logminer issue

ErmanArslansOracleBlog
Administrator
Please check this MOS note -> krvxenq: Failed to acquire logminer dictionary lock (Doc ID 2059318.1)
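
While you are at it, you can also check what is holding things up on the mining side. A minimal diagnostic sketch, assuming the "OS id=4545" from your alert log is still current (adjust the SPID for your case) and that you are on 12c where v$logmnr_session is available:

-- Map the OS process id from the alert log to a database session
SELECT s.sid, s.serial#, s.username, s.program, s.event, s.blocking_session
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 WHERE p.spid = '4545';

-- List active LogMiner sessions; both integrated Extract and an external
-- LogMiner client (such as the StreamSets Oracle CDC origin) show up here
SELECT session_id, session_name FROM v$logmnr_session;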

Re: logminer issue

Roshan
Thanks. I tried the workaround, but the issue still persists. Currently I have GoldenGate downstream replication. Is there any tool we can use to perform only the Transformation and Loading on the mining database?

Re: logminer issue

ErmanArslansOracleBlog
Administrator
Did you check the other notes about this subject?

Such as:

Registering a New Integrated Extract Hangs In INITIALIZING State (Doc ID 2414184.1)
Oracle Capture in state "WAITING FOR TRANSACTION", excessive time spent in LOGMNR_DICT_CACHE.SAVE_OBJ() (Doc ID 2030973.1)
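
A quick way to see the capture state those notes talk about, as a sketch (v$goldengate_capture is the view for integrated Extract on 12c):

-- Show each integrated capture process and its current state,
-- e.g. WAITING FOR TRANSACTION as in Doc ID 2030973.1
SELECT capture_name, state FROM v$goldengate_capture;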

As for your downstream-related question:

When we talk about downstream replication, we actually have the following:

The source database ships its redo logs to a downstream database, and Extract uses the logmining server at the downstream database to mine the redo logs.
So, actually, you use Data Guard redo transport services to ship the redo from the primary to the downstream database.
If you want to replace that shipment method, then we are talking about changing the complete picture.
Do I understand you right?

That is, we would put a tool in place to do that redo shipment. That tool may be a custom log-shipping script, or a sophisticated tool that uses LogMiner to get the changes from the redo or archived logs of the primary, transforms them (if necessary), and finally applies them to the target (the downstream database). A tool like Striim can do that. But the complexity of the flow will increase, so it may not be feasible.
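
For reference, the standard shipment in this picture is plain Data Guard redo transport configured at the source. A minimal sketch, where the names src and dwnmine are illustrative stand-ins for your actual DB_UNIQUE_NAMEs:

-- On the source database: ship redo to the downstream mining database
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(src,dwnmine)';
ALTER SYSTEM SET log_archive_dest_2='SERVICE=dwnmine ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dwnmine';
ALTER SYSTEM SET log_archive_dest_state_2=ENABLE;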

Re: logminer issue

Roshan
Hi,

I think I will use Oracle GoldenGate for Big Data.

http://www.oracle.com/us/products/middleware/data-integration/goldengate-for-big-data-ds-2415102.pdf

Regards,

Roshan

Re: logminer issue

ErmanArslansOracleBlog
Administrator
You asked -> Currently I have GoldenGate downstream replication. Is there any tool we can use to perform only the Transformation and Loading on the mining database?

So your primary DB is an Oracle RDBMS and your mining database is also an Oracle RDBMS.

Where is the Big Data in this picture?

Re: logminer issue

Roshan
As shown in the diagram on page 1 of that datasheet, there are many Big Data platforms like Kudu, Apache Hadoop, Kafka, Cloudera, MongoDB, and so on. We will replicate the tables from the mining database to these platforms using the Big Data adapters.
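
On the delivery side, each adapter is driven by a handler properties file. A sketch of a Kafka handler config, assuming property names as documented for GoldenGate for Big Data (the handler name kafkahandler and the file names are illustrative; verify against your version's documentation):

# dirprm/kafka.props -- illustrative sketch, not a verified config
gg.handlerlist=kafkahandler
gg.handler.kafkahandler.type=kafka
gg.handler.kafkahandler.kafkaProducerConfigFile=custom_kafka_producer.properties
gg.handler.kafkahandler.topicMappingTemplate=${tableName}
gg.handler.kafkahandler.format=json
gg.handler.kafkahandler.mode=op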