Hi Erman,
I have checked your blog on LogMiner performance: https://ermanarslan.blogspot.com/2019/11/rdbms-oracle-log-miner-read-rate.html

In fact, I will configure LogMiner using the online redo logs only, not the archivelogs. I have added the redo logs as follows:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/app/PMTG/db/apps_st/data/log01a.dbf', -
       OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/app/PMTG/db/apps_st/data/log02a.dbf', -
       OPTIONS => DBMS_LOGMNR.ADDFILE);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
       LOGFILENAME => '/app/PMTG/db/apps_st/data/log03a.dbf', -
       OPTIONS => DBMS_LOGMNR.ADDFILE);

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
       STARTSCN => 315614665840, -
       ENDSCN => 315615487985, -
       OPTIONS => DBMS_LOGMNR.SKIP_CORRUPTION + DBMS_LOGMNR.NO_SQL_DELIMITER + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

PL/SQL procedure successfully completed.

STARTSCN and ENDSCN are taken from v$log.

SQL> set arraysize 1000
SQL> set timing on
SQL> SELECT * FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'TABLENOTEXIT';

no rows selected
Elapsed: 00:02:49.90

Read rate = log_count x log_size / duration = 3 x 1.074 GB / (2.8/60 h) ≈ 69 GB/h.

The amount of archive log generated per hour is low (almost 1 GB/h). archive12Apr

From the AWR report (awrrpt_1_4354_4355.html), the relevant wait event is:

Event                     Waits   Total Wait Time (s)   Avg Wait (ms)   % DB time   Wait Class
log file sequential read  6,958   1.1                   0.15            .0          System I/O

Does this mean our system/DB is well tuned? I see from Finding 1 that this LogMiner query (SQL_ID 12aqm6fhn7z7c) is taking time. How can we tune it?

Thanks,
Roshan
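PS: For reference, the SCN boundaries were taken from v$log with a query like the one below (just a sketch; the start SCN is the lowest FIRST_CHANGE# of the mined logs, the end SCN of one log is the FIRST_CHANGE# of the next sequence, and on recent releases v$log also exposes NEXT_CHANGE# directly):

SQL> SELECT group#, sequence#, status, first_change#
       FROM v$log
      ORDER BY first_change#;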
I have attached the logminer trace file for info.
I see 'Disk file operations I/O' and 'db file sequential read' waits. Kindly advise if the disk needs to be tuned. PMTG2_ora_50135104_SQL10046.rar
Administrator
12aqm6fhn7z7c is your query (your tool's query). That query is based on the LogMiner view, V$LOGMNR_CONTENTS.
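You can confirm what that cursor looks like in the shared pool with a standard v$sql lookup, something like this (a sketch; note that elapsed_time is reported in microseconds):

SQL> SELECT sql_id, executions, ROUND(elapsed_time / 1e6) elapsed_secs, sql_text
       FROM v$sql
      WHERE sql_id = '12aqm6fhn7z7c';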
Please do the following and send me the tkprof output produced from the trace.

SQL> spool on
SQL> spool <full_path>/lgmnr.log
SQL> set timing on
SQL> alter session set tracefile_identifier='LOGMNR_TRACE';
SQL> alter session set MAX_DUMP_FILE_SIZE = unlimited;
SQL> alter session set events '10046 trace name context forever, level 12';

-- Initiate the view containing the LogMiner data:

SQL> exec sys.dbms_logmnr.add_logfile ( logfilename => '<full-path-name-of-archived-log-file-name>', options => sys.dbms_logmnr.new);

<add the rest of the archive logs that need to be mined>

Start LogMiner with:

SQL> exec sys.dbms_logmnr.start_logmnr(<options>);

SQL> <here execute the LogMiner query used to extract the data>

-- For hang issues, let the LogMiner command/query run for at least 1 hour. For slow-performance issues, wait for LogMiner to finish.

Stop tracing with:

SQL> alter session set events '10046 trace name context off';
SQL> exec dbms_logmnr.end_logmnr;

-- Identify the generated trace file and issue a command like the following to create a tkprof version of the trace file:

> tkprof <filename.trc> <output_filename>
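For example, an invocation like the one below sorts the statements by elapsed execute/fetch time so the expensive ones appear at the top (the file names are illustrative; sort= is a standard tkprof option):

> tkprof PMTG2_ora_12345_LOGMNR_TRACE.trc lgmnr_tkprof.prf sort=exeela,fchela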
Hello Erman,
Thanks for the update. Please find the tkprof output attached: PMTG2_ora_28901588_LOGMNR_TRACENEW.prf

Regards,
Roshan
I have also executed the scripts from Doc ID 822527.1.
Below is the output:

SQL> COLUMN category HEADING "Category"
SQL> COLUMN name HEADING "Name"
SQL> COLUMN heap_name HEADING "Heap name"
SQL> COLUMN q1 HEADING "Memory 1st" FORMAT 999,999,999,999
SQL> COLUMN q2 HEADING "Memory 2nd" FORMAT 999,999,999,999
SQL> COLUMN diff HEADING "Difference" FORMAT S999,999,999,999
SQL> SET LINES 150
SQL> SELECT tab2.category, tab2.name, tab2.heap_name,
            tab1.bytes q1, tab2.bytes q2, tab2.bytes - tab1.bytes diff
       FROM tab1, tab2
      WHERE tab1.category = tab2.category
        AND tab1.name = tab2.name
        AND tab1.heap_name = tab2.heap_name
        AND tab1.bytes <> tab2.bytes
      ORDER BY 6 DESC;

Category        Name                       Heap name             Memory 1st       Memory 2nd       Difference
--------------- -------------------------- --------------- ---------------- ---------------- ----------------
Other           free memory                pga heap                 240,912          245,416           +4,504
Other           krvuinl                    Logminer LCR c            67,152           69,928           +2,776
Other           krvtadc                    Logminer LCR c         2,258,880        2,260,344           +1,464
Other           krvtgfcrs                  Logminer LCR c            19,152           19,880             +728
Other           krvrd-extendedCA_krvxca    krvxiphd: priv             6,440            6,560             +120
Other           kgsc:kxsc                  session heap             623,248          623,352             +104
Other           krvrd-szFQColName_krvxXca  krvxiphd: priv             3,152            3,256             +104
Other           krvurtlas                  Logminer LCR c            27,808           27,904              +96
Other           krvrd-szTypeName_krvxXca   krvxiphd: priv               488              528              +40
Other           krvrd_gtloHtElemt          krvxiphd: priv            42,944           42,968              +24
Other           krvrd-szTypeSchemaName_kr  krvxiphd: priv               472              480               +8
Other           krvrd-pCachedObj           krvxiphd: priv           196,680          196,688               +8
Other           krvrd_gtcsHtElemt          krvxiphd: priv            14,336           14,320              -16
Other           kgiobdtb                   session heap               7,832            7,800              -32
Other           kgiob                      session heap               9,416            9,376              -40
Other           krvrd-pCachedTcs           krvxiphd: priv           563,992          563,928              -64
Other           free memory                session heap             345,648          345,376             -272
Other           krvugcdc_1                 Logminer LCR c           110,520          109,696             -824
Other           free memory                krvxdups: priv         2,471,352        2,469,968           -1,384
Other           free memory                krvxiphd: priv            19,960           16,856           -3,104
Other           free memory                Logminer LCR c           616,352          610,568           -5,784

21 rows selected.
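PS: The deltas above are per-heap PGA growth between the two snapshots. The overall PGA footprint of the mining session can also be cross-checked with a standard query like this one (a sketch, not part of the MOS note; it assumes the mining session queries itself):

SQL> SELECT p.spid, p.pga_used_mem, p.pga_alloc_mem, p.pga_max_mem
       FROM v$process p, v$session s
      WHERE s.paddr = p.addr
        AND s.sid = SYS_CONTEXT('USERENV', 'SID');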
Administrator
Based on your previous updates, you read 3 logs, each 1 GB in size, in 2 min 49 secs, right?

Actually it takes a little less than that based on your trace file, but let's say it is 3 minutes. So you read 1 GB per minute, which makes 60 GB per hour.

LGWR performance depends on lots of things, including the performance of your I/O subsystem, and it depends on the system activity as well. 60 GB/h looks okay for a traditional system. You say you generate 1 GB of redo per hour, so I don't see a problem here.

In your trace, you only spent time on the following SQL, and that is normal:

SELECT * FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'TABLENOTEXIT'

As for the LogMiner memory analysis, you can use the following query:

SQL> SELECT sysdate, session_state,
            available_txn, delivered_txn,
            available_txn - delivered_txn difference,
            max_memory_size, used_memory_size
       FROM v$logmnr_session;

A description of the columns of v$logmnr_session is here: https://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_1161.htm

Check the following blog post of mine as well: https://ermanarslan.blogspot.com/2020/03/rdbms-cdc-log-miner-read-performance.html
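To verify the redo generation figure on your side, you can sum the archived log sizes per hour — a minimal sketch using the standard v$archived_log view (adjust the DEST_ID filter if you archive to multiple destinations):

SQL> SELECT TRUNC(first_time, 'HH24') hr,
            ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 2) gb_per_hour
       FROM v$archived_log
      WHERE first_time > SYSDATE - 1
        AND dest_id = 1
      GROUP BY TRUNC(first_time, 'HH24')
      ORDER BY 1;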
Hello Erman,
I have one downstream database which runs Oracle GoldenGate. Both the Extract and the Replicat run on the same server. I would like to do CDC with Striim from a database on that server, but I see the performance is very slow: it takes hours to replicate the changes. I checked the LogMiner read rate, and most of the time is spent on the SQL client. dware11_ora_8404_LOGMNR_TRACEnew.txt

The log read rate is 22 GB/h, while 45 GB of archive logs are generated per hour. Kindly advise.

Regards,
Roshan
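PS: A back-of-the-envelope check of those two figures: the miner falls behind by roughly 45 - 22 = 23 GB every hour, i.e. each hour of change volume needs about 45 / 22 ≈ 2 hours of mining, so at these rates the lag can only grow.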
Administrator
Hi Roshan, please create a new issue for this.