logminer read rate downstream DB

logminer read rate downstream DB

Roshan
Hello Erman,

I have a downstream database which runs Oracle GoldenGate. Both Extract and Replicat run on the same server. I would like to do CDC using Striim from the database on that server.

Performance is very slow: it takes hours to replicate changes.

I checked the LogMiner read rate, and most of the time is spent on the SQL client.

dware11_ora_8404_LOGMNR_TRACEnew.txt
The log read rate is 22 GB/h.
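
For reference, the raw read rate can be measured by timing a full LogMiner pass over one archive log. A minimal sketch (the log file path below is a placeholder, and DICT_FROM_ONLINE_CATALOG assumes a dictionary is available to the mining session):

SET TIMING ON

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/arch/placeholder_log.arc',   -- placeholder path
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- A full scan forces LogMiner to read the whole log; the elapsed time
-- against the log's size on disk gives an approximate read rate.
SELECT COUNT(*) FROM V$LOGMNR_CONTENTS;

EXEC DBMS_LOGMNR.END_LOGMNR;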


45 GB of archive logs are generated per hour.

Kindly advise.

Regards,

Roshan

Re: logminer read rate downstream DB

ErmanArslansOracleBlog
Administrator
1) Did you check this blog post -> https://ermanarslan.blogspot.com/2020/03/rdbms-cdc-log-miner-read-performance.html ? -- it covers the most likely reasons for this kind of performance degradation...

2) Is 45 GB per hour normal for this database? There may be unnecessary archive being generated; did you check that as well? Did you check which DML activities cause that archive?
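
One rough way to see which DML drives the volume is to mine one of the busy hour's archive logs and group the changes by segment. A sketch, assuming an active LogMiner session as above (FETCH FIRST needs 12c or later):

-- Run while a LogMiner session is active over a busy hour's logs:
-- the top segments here are the DML hot spots generating the redo.
SELECT SEG_OWNER,
       SEG_NAME,
       OPERATION,
       COUNT(*) AS CHANGES
FROM   V$LOGMNR_CONTENTS
WHERE  OPERATION IN ('INSERT','UPDATE','DELETE')
GROUP  BY SEG_OWNER, SEG_NAME, OPERATION
ORDER  BY CHANGES DESC
FETCH  FIRST 20 ROWS ONLY;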

Re: logminer read rate downstream DB

Roshan
Hi,

I guess I used the wrong query.

I used this one:

SET PAGESIZE 120
SET LINESIZE 200
COL DAY FOR A12

SELECT
  TO_CHAR(FIRST_TIME,'YYYY-MM-DD') DAY,
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'00',1,0)),'99') "00",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'01',1,0)),'99') "01",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'02',1,0)),'99') "02",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'03',1,0)),'99') "03",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'04',1,0)),'99') "04",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'05',1,0)),'99') "05",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'06',1,0)),'99') "06",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'07',1,0)),'99') "07",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'08',1,0)),'99') "08",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'09',1,0)),'99') "09",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'10',1,0)),'99') "10",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'11',1,0)),'99') "11",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'12',1,0)),'99') "12",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'13',1,0)),'99') "13",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'14',1,0)),'99') "14",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'15',1,0)),'99') "15",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'16',1,0)),'99') "16",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'17',1,0)),'99') "17",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'18',1,0)),'99') "18",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'19',1,0)),'99') "19",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'20',1,0)),'99') "20",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'21',1,0)),'99') "21",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'22',1,0)),'99') "22",
  TO_CHAR(SUM(DECODE(TO_CHAR(FIRST_TIME,'HH24'),'23',1,0)),'99') "23"
FROM
  V$LOG_HISTORY
WHERE
  TO_CHAR(FIRST_TIME,'YYYY-MM-DD') > TO_CHAR(SYSDATE-20,'YYYY-MM-DD')
GROUP BY
  TO_CHAR(FIRST_TIME,'YYYY-MM-DD')
ORDER BY
  TO_CHAR(FIRST_TIME,'YYYY-MM-DD');

archive.xlsx

I see an average of 24 GB per hour.
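
Note that the query above counts log switches per hour rather than volume. A volume-based sketch over V$ARCHIVED_LOG might look like this (the DEST_ID filter is an assumption to avoid double-counting multiplexed destinations):

-- Daily archive volume computed from actual log sizes, last 20 days.
SELECT TRUNC(FIRST_TIME) DAY,
       ROUND(SUM(BLOCKS * BLOCK_SIZE)/1024/1024/1024, 1)    GB_PER_DAY,
       ROUND(SUM(BLOCKS * BLOCK_SIZE)/1024/1024/1024/24, 1) AVG_GB_PER_HOUR
FROM   V$ARCHIVED_LOG
WHERE  FIRST_TIME > SYSDATE - 20
AND    DEST_ID = 1              -- assumption: the primary local destination
GROUP  BY TRUNC(FIRST_TIME)
ORDER  BY DAY;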

I will gather the stats and check.
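
Assuming "the stats" here means the dictionary and fixed-object statistics that LogMiner's internal queries depend on, a minimal sketch:

-- Refresh optimizer statistics on the data dictionary and the X$ fixed
-- objects; stale statistics here can contribute to slow LogMiner queries.
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;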


Also, the archive logs are on SAS disks, while the datafiles and redo logs are on SSD.

Do you think I should move the archive logs to SSD?


Re: logminer read rate downstream DB

ErmanArslansOracleBlog
Administrator
If you generate 45 GB of archive log per hour but can only read 24 GB per hour, the backlog grows by roughly 21 GB every hour and CDC can never catch up, so you need to do something...
Putting the archive logs on SSD drives may increase the LogMiner read performance, but the 45 GB per hour generation rate should also be investigated.

Check ->
https://ermanarslan.blogspot.com/2020/03/rdbms-cdc-log-miner-read-performance.html
https://ermanarslan.blogspot.com/2019/11/rdbms-oracle-log-miner-read-rate.html