Oracle Database 12.1.0.2 (RAC 2 nodes)
ASM, Solaris 11.4

Hi,

I have a backup scheduled on Commvault (one full backup, incremental on the other days). The database is currently 2.5 TB and is expected to grow to around 10 TB. The problem is that whenever the backup runs, it slows down the DB. Could you please advise a mechanism to back up the DB with no or minimal impact on DB performance?

Regards,
Roshan
Administrator
Okay, so you use Commvault's database agent, which probably runs RMAN under the hood.

Here is a quick list for you:

- First of all, check your schedule policy. It should not have unnecessary backup iterations.
- Execute your backup tasks in the maintenance (backup) windows.
- Consider using Resource Manager to manage the resource consumption of RMAN.
- Decrease the parallelism; decrease the number of channels allocated by your RMAN backup tasks.
- If the host is an Exadata, consider using IORM.
- Consider using RMAN's backup optimization (see the related documentation to understand it).
- If you have a good/certified/supported/reliable technology, you may also consider snapshot-based backups: while your DB is in backup mode (begin backup/end backup), you can take storage or filesystem snapshots. These are much quicker, but they still need to be supplemented with RMAN full backups. However, they may let you decrease your RMAN full backup frequency (the total count of RMAN full backups taken in a certain amount of time).
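To illustrate the Resource Manager suggestion above: a minimal sketch that throttles backup sessions so they only get CPU left over by the application workload. All names here (BACKUP_PLAN, BACKUP_GRP) and the percentages are hypothetical examples, not values from this environment:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'BACKUP_GRP',
    comment        => 'RMAN backup sessions');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'BACKUP_PLAN',
    comment => 'Throttle backups under load');

  -- Backups get CPU only at priority level 2, capped at ~25%.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'BACKUP_PLAN',
    group_or_subplan => 'BACKUP_GRP',
    comment          => 'Backup throttle',
    mgmt_p2          => 25);

  -- Every plan needs a directive for OTHER_GROUPS.
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'BACKUP_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'Everything else, at priority level 1',
    mgmt_p1          => 100);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Then activate the plan during the backup window (for example by setting
-- RESOURCE_MANAGER_PLAN = 'BACKUP_PLAN', or by attaching it to a Scheduler
-- window) and map the RMAN sessions to BACKUP_GRP with a mapping rule.
```

The idea of the two priority levels is that backup sessions never compete with the application at level 1; they only consume what level 1 leaves idle.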
Thanks.
I have tuned my backup. I noticed 3 archivelog policies were scheduled, so I removed them. Can you please check my current backup policies: #******DATA/CONTROL FILE/SPFILE BACKUP SCRIPT******# CONFIGURE CONTROLFILE AUTOBACKUP ON; run { allocate channel ch1 type 'sbt_tape' connect sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch1)" TRACE 0; allocate channel ch2 type 'sbt_tape' connect sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch2)" TRACE 0; allocate channel ch3 type 'sbt_tape' connect sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch3)" TRACE 0; allocate channel ch4 type 'sbt_tape' connect sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch4)" TRACE 0; ## send "BACKUP -jm 16809997 -a 2:428 -cl 187 -ins 80 -at 80 -j 46820 -bal 0 -t 2 -ms 2 -data -PREVIEW -mhn dware1.telecom.mu*dware1*8400*8402"; setlimit channel ch1 maxopenfiles 2; setlimit channel ch2 maxopenfiles 2; setlimit channel ch3 maxopenfiles 2; setlimit channel ch4 maxopenfiles 2; backup incremental level = 0 filesperset = 8 format='<CVJOBID>_%d_%U' database plus archivelog not backed up; delete archivelog until time 'sysdate-1'; } exit; Incremental Backup: #******DATA/CONTROL FILE/SPFILE BACKUP SCRIPT******# CONFIGURE CONTROLFILE AUTOBACKUP ON; run { allocate channel ch1 type 'sbt_tape' connect sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch1)" TRACE 0; allocate channel ch2 type 'sbt_tape' connect 
sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch2)" TRACE 0; allocate channel ch3 type 'sbt_tape' connect sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch3)" TRACE 0; allocate channel ch4 type 'sbt_tape' connect sys/351d1103a129722c0a1ef64b82e76018f0364682b65183faf@dware1 PARMS="SBT_LIBRARY=/opt/commvault/Base64/libobk.so, BLKSIZE=1048576 ENV=(CV_mmsApiVsn=2,CV_channelPar=ch4)" TRACE 0; ## send "BACKUP -jm 16809997 -a 2:428 -cl 187 -ins 80 -at 80 -j 46820 -bal 0 -t 2 -ms 2 -data -PREVIEW -mhn dware1.telecom.mu*dware1*8400*8402"; setlimit channel ch1 maxopenfiles 2; setlimit channel ch2 maxopenfiles 2; setlimit channel ch3 maxopenfiles 2; setlimit channel ch4 maxopenfiles 2; backup incremental level = 1 filesperset = 8 format='<CVJOBID>_%d_%U' database plus archivelog not backed up; delete archivelog until time 'sysdate-1'; } exit; Have you ever used block change tracking? Is I enable it, when DB grows up to 10 TB, what will be the size of the file? I will check with the storage admin of ZFS storage and see if snapshot backup is possible. Thanks, Roshan |
Administrator
Hi Roshan,
Your RMAN scripts look okay (provided your schedule policy is aligned with your retention policy).

Yes, I have used BCT (block change tracking) several times. It speeds up incremental backups considerably, because RMAN reads only the blocks flagged as changed instead of scanning every datafile. Note that the tracking file itself is a bitmap, and per Oracle's documentation its size is roughly proportional to the database size (about 1/30,000 of it), not to the volume of changes, so it stays small even on a busy database. BCT also adds a little overhead to the database. Keep these points in mind.
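To make the size question concrete: at the documented ratio of about 1/30,000 of the database size (with a minimum of around 10 MB), a 10 TB database should need a tracking file on the order of a few hundred MB. A hedged sketch of enabling and checking it; '+DATA' is an example ASM disk group, substitute your own:

```sql
-- Enable block change tracking; the file location is an example.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA';

-- Verify status and the current file size in bytes.
SELECT status, filename, bytes FROM v$block_change_tracking;
```

After enabling, the first level-0 backup establishes the baseline; only incrementals taken after that baseline benefit from the tracking file.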
Hi Erman,
I have used the script in the document below to purge foreign archivelogs: MOS note 2011174.1.

On target:

show parameter log_archive_dest_1

log_archive_dest_1  string  LOCATION=+RECO VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)
log_archive_dest_4  string  LOCATION=+RECO/foreign VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)

On source:

log_archive_dest_4  string  SERVICE=DWARE1 ASYNC OPTIONAL NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) REOPEN=10 DB_UNIQU

+RECO/foreign entries: [screenshot: pastedImage_1.png]

Can you please advise: if I run the script to purge archivelogs which have already been applied before the backup starts, will recovery of the DB be successful after a restore?

Regards,
R
Administrator
You asked: "Can you please advise if I run the script to purge archivelogs which have already been applied before backup starts, whether recovery of DB will be successful after restore?"

Applied to what? To where?

In general, you need all the archivelogs that were generated during your RMAN-based backup (including DUPLICATE) in order to restore that backup and open the restored database properly. If you want to recover/roll forward the restored database to a point in time after the backup completed, you will also need the archivelogs generated after the backup completed. So there is no requirement for the presence of archivelogs generated before your backup started. I think you understand what I mean.
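On a related note, the backup scripts in this thread delete archivelogs purely by age (`delete archivelog until time 'sysdate-1'`). A more defensive variant deletes only logs that are already on tape, which makes an accidental gap in recoverability less likely. A hedged sketch; the counts and ages are example values, not a recommendation for this environment:

```sql
-- RMAN: delete only archivelogs backed up at least once to tape
-- AND older than one day.
DELETE ARCHIVELOG ALL
  BACKED UP 1 TIMES TO DEVICE TYPE sbt
  COMPLETED BEFORE 'sysdate - 1';

-- Alternatively, make it a persistent policy so that plain deletes
-- (and FRA space pressure) respect the same rule:
CONFIGURE ARCHIVELOG DELETION POLICY
  TO BACKED UP 1 TIMES TO DEVICE TYPE sbt;
```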
In reply to this post by ErmanArslansOracleBlog
Hi,
If the change tracking file gets lost, what will happen? Will the DB still be operational? Will the DB open on the next restart, or should I disable BCT first?
Administrator
I haven't tested it myself. But as far as I can see, when the BCT file is removed, or if there is an I/O error on it, block change tracking is silently disabled by the database.
You will see something like "Block change tracking service stopping" in your alert log file. The rationale is probably that the database should be allowed to continue even if BCT fails. So your DB probably won't be stopped, but any RMAN-related processes running at that time may be affected.
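A quick way to confirm whether BCT was silently disabled, and to re-create the file, is sketched below ('+DATA' is again an example location). A caveat worth stating as an assumption: after re-enabling, the tracking bitmap is empty, so the first incremental typically falls back to full datafile scans until a new level-0 baseline exists.

```sql
-- If the tracking file was lost, STATUS should show DISABLED.
SELECT status, filename FROM v$block_change_tracking;

-- Re-create the tracking file; take a level-0 backup afterwards so that
-- subsequent incrementals can use the bitmap again.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DATA';
```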