Backup Method | Type | Version Available | Requirements
Recovery Manager (RMAN) | Physical | Oracle version 8.0 and higher | Third-party media manager (only if backing up to tape)
Operating System | Physical | All versions | Operating system backup utility (for example, UNIX cp)
Export | Logical | All versions | N/A

 

Feature | Recovery Manager | Operating System | Export
Closed database backups | Supported. Requires instance to be mounted. | Supported. | Not supported
Open database backups | Do not use BEGIN/END BACKUP statements. | Use BEGIN/END BACKUP statements. | Requires rollback or undo segments to generate consistent backups.
Incremental backups | Supported. | Not supported | Not supported
Corrupt block detection | Supported. Identifies corrupt blocks and writes them to V$DATABASE_BLOCK_CORRUPTION. | Not supported | Supported. Identifies corrupt blocks in the export log.
Automatic backup | Supported. Establishes the names and locations of all files to be backed up (whole database, tablespace, datafile, or control file backup). | Not supported. Files to be backed up must be specified manually. | Supported. Performs either full, user, or table backups.
Backup catalogs | Supported. Backups are recorded in the recovery catalog and in the control file, or exclusively in the target control file. | Not supported | Not supported
Backups to media manager | Supported. Interfaces with a media manager. RMAN also supports proxy copy, a feature that allows the media manager to manage the transfer of data. | Supported. Backup to tape is manual or controlled by a media manager. | Not supported
Backs up initialization parameter files and password files | Not supported | Supported | Not supported
Operating system independent language | Supported (uses PL/SQL interface). | Not supported | Supported

 

There are three terms that have to be understood by any Oracle DBA before proceeding with backup and recovery strategies: backup, restore, and recover.

Before getting into any discussion on backup and recovery of Oracle databases, it is necessary to understand how the file structures are interrelated and
how consistency is maintained by Oracle. To start an instance, Oracle depends on the Oracle background processes and the System Global Area (SGA), apart from the Oracle kernel. The SGA is a group of memory structures that contain data and control information for one Oracle instance. Oracle automatically allocates the SGA when an instance is started. When the STARTUP command is issued for a particular instance, pointing to a parameter file (the init<sid>.ora file), the background processes associated with the parameters and the SGA are initialized. There are several stages in starting up an instance.

Command | What happens then...?
Startup nomount | Reads init.ora, identifies the control files, creates and initializes the SGA, and starts the background processes (this stage is used to create a database or to re-create control files after the loss of the current control files)
Startup mount | Opens the control files, mounts the database, and acquires an instance lock
Startup open | Opens and locks the data files. If this is the first instance, acquires the startup lock and opens the online redo log files. If it is the first instance, also performs crash recovery if necessary to open the database in a consistent state
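For example, the stages can be stepped through explicitly from a SQL*Plus or Server Manager session connected with administrative privileges (a minimal sketch; the pfile name is illustrative):

startup nomount pfile=init<sid>.ora   -- reads init.ora, builds the SGA, starts the background processes
alter database mount;                 -- opens the control files and mounts the database
alter database open;                  -- opens and locks the data files, performing crash recovery if required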

From the above it is understood that Oracle has an internal mechanism to check consistency based on the data files, control files and online redo log files. If any of these three structures is not consistent, it has to be restored and/or recovered. Likewise, what happens during shutdown is also important for understanding how consistency is maintained (or not) and when recovery is required.

Shutdown normal | Shutdown immediate | Shutdown abort
1. Stop access to database | 1. Stop access to database | 1. Stop access to database
2. Wait until users exit | ----- | -----
----- | 2. Cancel active transactions and roll back | -----
3. Flush DB buffer and redo log caches | 3. Flush DB buffer and redo log caches | -----
4. Drop file locks | 4. Drop file locks | 2. Drop file locks
5. Complete ongoing transactions | 5. Complete ongoing transactions | -----
6. Update file headers | 6. Update file headers | -----
7. Close threads | 7. Close threads | -----
8. Drop DB instance lock | 8. Drop DB instance lock | 3. Drop DB instance lock
9. Synchronize control files and database files | 9. Synchronize control files and database files | -----
NO RECOVERY REQUIRED | NO RECOVERY REQUIRED | CRASH RECOVERY REQUIRED DURING NEXT STARTUP
BACKUPS ARE CONSISTENT | BACKUPS ARE CONSISTENT | BACKUPS ARE NOT CONSISTENT

 

Backup

A backup is a copy of the data maintained for later use. In case the database crashes, the data that has been backed up to another device and/or destination protects against data loss. It is the basic material for reconstructing a database that has been lost or has crashed for some reason.

Restore

It is the replacement of a lost or damaged file with a backup. The files can be restored using RMAN or using OS commands, such as cp on UNIX machines and copy on Windows machines.

Recover

Recovery is the process of updating the restored data files and/or control files by applying the redo records saved in the archived redo log files generated by Oracle. This process is also called 'rolling forward'.

What Oracle Does For You?

Oracle does crash recovery and instance recovery.

Crash
Recovery:

Crash recovery is the process of applying the online redo log files to the database files and control files to bring the database to a consistent state, and then closing all the threads that were open at the time of the crash. After every startup of the machine, and after every startup of an instance that was not closed with the NORMAL or IMMEDIATE option, crash recovery starts by applying the threads that were left open at the time of the last closure; after all the open threads are applied, SMON enables crash and transaction recovery and the database is opened for use.

In crash recovery, an instance automatically recovers the database before opening it. In the case of Oracle Real Application Clusters, it is in general the first instance to open the database after a crash or SHUTDOWN ABORT that automatically performs crash recovery.

Instance Recovery:

Crash recovery and instance recovery are considered synonymous as long as the database has only one Oracle instance to recover. In Oracle Parallel Server (OPS) or Oracle Real Application Clusters (RAC), the database is accessed by multiple instances.

In an OPS/RAC configuration, instance recovery is the application of redo data to an open database by an instance that discovers another instance has crashed. A surviving instance automatically uses the redo log to recover the data that was in the failed instance's buffer cache. Oracle undoes any uncommitted transactions that were in progress on the failed instance when it crashed, and then clears any locks held by that instance after recovery is complete.

If one or more instances die or crash, they are recovered by the surviving instances, following the crash recovery methodology of applying the online redo logs and closing the open redo threads to synchronize the database with the data that was in the cache, in redo, and in rollback. This is done by Oracle itself.

If all the instances die, the database has to be restarted by starting all the instances. While opening the database, the Oracle instances perform crash recovery and then, if necessary, instance recovery to bring the other dead instances back to life.

Media Recovery:

Media failure is the result of a physical problem that arises when Oracle fails in its attempt to write or read a file that is required to operate the database. A common example is a disk head crash that causes the loss of all data on the disk drive. Disk failure can affect a variety of files, including data files, redo log files and control files. Because the database instance cannot continue to function properly, it cannot write the data in the buffer cache of the SGA to the data files.

Media recovery is then necessary. Media recovery means the application of redo or incremental backups to a restored backup datafile or individual data block to bring it to a specified time. Datafile media recovery always begins at the lowest SCN recorded in the datafile header.

When doing the media recovery:

(1) entire database or

(2) a tablespace or

(3) a set of blocks within datafile

can be recovered.

Media recovery is of two types: (1) complete recovery and (2) incomplete recovery. There is a choice between complete and incomplete recovery only when the database is operated in ARCHIVELOG mode. In NOARCHIVELOG mode there is no incomplete recovery at all, because no archived log files are available in that mode.

If all the redo data is applied, it is complete recovery. If only part of the redo is applied, it is incomplete recovery.
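As a sketch only (assuming the damaged files have already been restored and the database is mounted; the timestamp is purely an example), the two cases differ mainly in the RECOVER statement used and in how the database is opened afterwards:

rem complete recovery: apply all redo, then open normally
recover database;
alter database open;

rem incomplete recovery: stop at a chosen point in time, then open with RESETLOGS
recover database until time '2003-01-15:14:30:00';
alter database open resetlogs;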

Now that the important terms in backup and recovery, and what they mean when dealing with Oracle databases, are known, the discussion shifts to
backup strategy.

 

Backup Strategy:

(1)
Physical Backups

In the physical backups the datafiles associated with the database are backed up.

The backup strategy is dependent on the data loss tolerance level of the database owners and / or the clients using the database.

If the amount of data saved to the disks per unit of time is high, with only some lean periods, the data loss tolerance level is always low and the allowable shutdown time is zero. This means the database is 24x7 by nature. When the data loss tolerance level is zero, the database has to be put into media recovery mode, also called 'archive log mode'. In this mode, the online redo log files, once filled, are copied to the archive log destination set in the parameter file for the database. The background process ARCH is enabled by setting the archive parameter log_archive_start = true in the parameter file. Additional parameters are set for the format of the archive log file name and for the archive log destination on disk, for example:

log_archive_dest_1 = "location=H:\Oracle\oradata\srinivas\archive"

log_archive_format = %%ORACLE_SID%%T%TS%S.ARC
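For example (a sketch, assuming the archive parameters above are already in the init.ora file), archive log mode is switched on at the database level as follows:

shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list   -- verify that the database log mode now shows Archive Mode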

The archive log mode is enabled at two levels: the database level and the system level. If the database cannot be stopped, an ALTER SYSTEM command is issued, which is valid only until the next startup of the database; when such a command is issued, the database parameter file is also modified, so that at the next startup the modified parameter becomes effective. If the database is very large, with many associated data files, more time is needed to back it up, and such databases usually have no down time either. The backup strategy has to be evolved to meet such contingencies, so that the database can be recovered within the lowest possible time and up to the last committed transaction. If there is down time for the database, a consistent backup can be taken. If there is no down time, the only alternative is to back up the database in inconsistent mode and also back up the archive logs, which can then be used in case of recovery.

A consistent backup, called a 'cold' or 'off-line' backup, is taken after the database has been shut down with the NORMAL or IMMEDIATE option, but not the ABORT option. The database is not open and no user is able to access it; the instance is shut down, the database is closed normally, and the backup takes place at the operating system level. The following files are backed up in this type of backup:

1.All datafiles

2. All controlfiles

3. All online redo log files

4. The init.ora file and config.ora file

5. Password file

Inconsistent backup

A backup in which some of the files contain changes made after those files were checkpointed. This type of backup needs recovery before it can be made consistent. Taking online database backups usually creates an inconsistent backup: the data files are backed up while they are open and in use. An inconsistent backup can also be produced while the database is closed. This is possible in the following circumstances:

(1) A backup taken immediately after the instance crashed (or after all instances crashed in a multi-instance environment such as Oracle Parallel Server, now called Oracle Real Application Clusters).

(2) A backup taken after the instance was shut down with the SHUTDOWN ABORT option.

 

Utilities for the Backup

Oracle utility ‘ocopy’ for the Windows platform:

(Quote from Doc ID 139327.1 on Metalink)

The Differences between Windows NT COPY and Oracle OCOPY When Doing Backups:

When doing an online backup, should you use the Windows NT COPY command or the Oracle OCOPY command?

While doing online backups you should use OCOPY, Oracle7 EBU, or Oracle8/8.1 (and later) RMAN. With the OCOPY command you can copy to a backup directory on the hard drive, but you cannot use OCOPY to copy a file to tape. The other option, if you do not want to use OCOPY to perform your backup (as it does require a lot of disk space), is EBU/RMAN, which comes with Oracle. Depending on your Oracle version, the distribution includes a utility called EBU (Oracle7) or RMAN (Oracle8 and later) that can be used for online recovery as well. You will need to use a media management product to move the data from RMAN to tape. Legato Storage Manager is provided; however, there are other products that are supported for use with this tool.

To back up, you will need to use the utility delivered by Oracle, the OCOPY command. Utilities like the NT commands COPY and XCOPY CANNOT be used to back up open files. The Windows NT feature to be aware of is that NT Backup does not allow files in use to be copied, so you must use the OCOPY utility that Oracle provides to copy the open database files to another disk location. Since OCOPY cannot copy files directly to tape, you will then need to use NT Backup, COPY, or a similar utility to copy the files to tape as required.

OCOPY allows writing to continue while the backup is running. The NT COPY is a closed copy, and the files may be marked either as "fuzzy" or "corrupt". OCOPY opens the file using CreateFile() with the FILE_SHARE_READ and FILE_SHARE_WRITE flags, which allows writing to continue while the backup is taken.

Applying archived redo during recovery repairs any inconsistencies in the backup. The COPY command from NT does not use these flags, since it is designed to prevent writes to the file while the copy is taking place.
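A minimal hot-backup sequence using OCOPY from SQL*Plus might look like the following (the tablespace name, datafile path, and backup directory are illustrative only):

alter tablespace users begin backup;
host ocopy H:\ORACLE\ORADATA\SRINIVAS\USERS01.DBF c:\oracle\bkup
alter tablespace users end backup;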

RMAN

Recovery Manager is a utility provided by Oracle for backup and recovery procedures.

The Oracle Recovery Manager User Manual says that Recovery Manager is able to:

Use server sessions to back up and copy the database, tablespaces, datafiles, control files, and archived redo logs.
Compress backups of datafiles so that only those data blocks that have been written to are included in a backup.
Store frequently executed backup and recovery operations in scripts.
Perform incremental backups, which back up only those data blocks that have changed since a previous backup.
Recover changes to objects created with the NOLOGGING option by applying incremental backups (recovery with redo logs does not apply these changes).
Create a duplicate of your production database for testing purposes.
Use third-party media management software.
Generate a printable message log of all backup and recovery operations.
Use the recovery catalog to automate both restore and recovery operations.
Perform automatic parallelization of backup and restore operations.
Restore a backup using a backup control file and automatically adjust the control file to reflect the structure of the restored datafiles.
Find datafiles that require a backup based on user-specified limits on the amount of redo that must be applied for recovery.
Perform crosschecks to determine whether archived materials in the media management catalog are still available.
Test whether specified backups can be restored.

(2)
Logical Backups

These backups can be performed only when the database is open and available. The Export utility provided by Oracle enables them. Export is done in (1) Full mode, (2) User mode, and (3) Table mode, and the types of exports are (1) complete, (2) cumulative, and (3) incremental. These types can be used only when full=y is set as a parameter; then inctype=complete, cumulative, or incremental can be set. When exports are performed using the inctype parameter, the database is recovered by:


a. Import the latest full export dump file

b. Import the latest cumulative export dump file

c. Then import all the incremental export files successively.

The database is recovered to the point in time of the last incremental export; the data entered after that point in time is lost, because archive log files are not used when recovering the database this way and so become useless.
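As a sketch (user names, passwords, and file names are illustrative; inctype applies to the older exp/imp utilities discussed here), the export side of such a strategy might look like the following, with the imports then following steps a, b, and c above against the same dump files:

exp system/manager full=y inctype=complete file=exp_full.dmp log=exp_full.log
exp system/manager full=y inctype=cumulative file=exp_cum.dmp log=exp_cum.log
exp system/manager full=y inctype=incremental file=exp_inc1.dmp log=exp_inc1.log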

Upside of Exports and Imports:

a. Use exports to clone a schema or a database.

b. Reorganization of tablespaces (until 8.1 introduced alternatives).

c. Ability to export to multiple files of a prescribed size.

d. Ability to export sub-sets of data from the tables of choice.

e. Transportable tablespaces. They are like plug-in tablespaces, a feature introduced in 8.1. With them, the time taken to export a tablespace is just the time needed to copy its datafiles physically to another location. This facility has some limitations, such as:


1. The source and target databases should be running on the same hardware
platforms.

2. The source and target databases should be using the same character
set.

3. The target database must not already have a tablespace with the same name.

4. The source and target databases should have the same data block size.

5. Must transport self-contained set of objects.

6. Snapshots, materialized views, function based indexes, domain indexes,
scoped refs, advanced queues with more than one recipient cannot be transported.

7. The source database must keep the tablespace in READ ONLY mode for a short period of time, i.e., until the metadata of the tablespace is exported and the data files are copied.

8. SYS owned objects couldn’t be transported.

When a tablespace is intended to be transported, it is to be routinely checked for self-containment using the built in package provided by Oracle.

Exec sys.dbms_tts.transport_set_check('tablespace_name', true);

After the procedure is successfully executed

Select * from sys.transport_set_violations;

If no rows are selected, that is fine. Check the same way for the other tablespaces intended for export. If any rows are returned for a tablespace, you have to pass multiple tablespace names to the package until zero rows are returned, in order to find the sets of tablespaces that can be transported together. This has to be done because a table may be created in one tablespace while its index is created in another tablespace; all the tablespaces so connected have to be transported together, otherwise the export is not usable. For multiple tablespaces the following example may help:

Exec sys.dbms_tts.transport_set_check('tablespace_1, tablespace_2, tablespace_3', true);

Example
of exporting transportable tablespaces t1, t2, t3

Step 01.

Exec sys.dbms_tts.transport_set_check('t1, t2, t3', true);

Step 02.

Select * from sys.transport_set_violations;

No rows selected.

Step 03.

Alter tablespace t1 read only;

Alter tablespace t2 read only;

Alter tablespace t3 read only;

Step 04.

At the command prompt:

Exp userid="""sys/password as sysdba""" transport_tablespace=y tablespaces=(t1, t2, t3) file=tr_tbsp.dmp

After
the export is terminated successfully without warnings,

Step 05.

At the command prompt ‘copy’ or ‘xcopy’ the datafiles, associated with those tablespaces exported, to the destination within the framework of hardware compatibility and software compatibility and data block size compatibility etc discussed above.

Step 06.

After they are successfully copied verify the sizes with the original files and at Server Manager or SQL* Plus issue the following commands at the source database where from the tablespaces are exported to enable all DML transactions.

Alter tablespace t1 read write;

Alter tablespace t2 read write;

Alter tablespace t3 read write;

Step 07.

Now
import the tablespaces into target database:

Imp file=tr_tbsp.dmp userid="""sys/password as sysdba""" transport_tablespace=y "datafiles=(filename1, filename2, filename3)"

Step 08.

After the import is terminated successfully, the last step is to change the mode of the tablespaces as they are imported in read only mode.

Alter tablespace t1 read write;

Alter tablespace t2 read write;

Alter tablespace t3 read write;

Downside of Exports and Imports:

a. It is still not possible to export to a device that is not recognized as a file in a Windows environment, whereas it is possible in a UNIX environment because pipes can be used to export onto a tape.

b. The database suffers loss of data when exports and imports are used to recover it, because media recovery cannot be used to apply the archived redo log files.

c. Reorganization of a tablespace is possible only when logically contiguous chunks of space are available in the tablespace. This is because, when the compress=y parameter is used to reorganize the tablespace, each table is exported as a single large extent that combines all the extents the table spanned before the export. When the tables and indexes in a particular tablespace are to be imported into more than one tablespace, the DDL has to be edited carefully: pre-create the tablespaces, then the tables, set the parameter ignore=y, and then import the data. Alternatively, 'alter table <table_name> move <tablespace_name>;' takes less time and is more efficient, but there are some issues with this MOVE command: all the indexes on that table become unusable and have to be re-created
or rebuilt (see the example after this list).

d. Named versus default-named constraints can cause some issues, producing IMP-00003 together with ORA-02264 or ORA-02261 and similar errors that indicate an object with the same name already exists when the import is attempted.

e. National Language Support (NLS) issues have to be taken care of. If the character sets differ there can be problems: if the NLS character set of the export client differs from that of the database being exported, or the import client's character set differs from the export client's, or the import client's character set differs from that of the database being imported into, the issues have to be handled carefully by setting the local environment variables, such as NLS_LANG on UNIX boxes or the corresponding Registry entry in a Windows environment.

f. The limitations of the transportable tablespace concept, listed above.
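As an example of the point made in (c) above (table, index, and tablespace names are illustrative), a table can be moved and its unusable indexes identified and rebuilt as follows:

alter table emp move tablespace users_new;
select index_name from user_indexes where status = 'UNUSABLE';
alter index emp_pk rebuild;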

To improve the performance of Exports and Imports the following may be considered:

1. Set the DIRECT parameter in the export parameter file; this bypasses the SQL command-processing layer.

2. Set the RECORDLENGTH parameter for exports in multiples of db_block_size or of the OS block size; the maximum is 64 kilobytes.

3. While importing, set the BUFFER parameter somewhat high to get array inserts, and also use COMMIT=Y as an additional parameter. Too high a BUFFER value can result in paging or swapping, which negatively affects performance.

4. Generate an index file so the indexes can be created separately after the data is imported with no indexes on the tables: set INDEXES=N and INDEXFILE=<name and path of the index file>. A sample parameter file follows.
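The following pair of parameter files is a sketch reflecting points 1, 2, 3, and 4 above (user names, sizes, and file names are only examples; run them with exp parfile=exp_full.par and imp parfile=imp_data.par):

exp_full.par:
userid=system/manager
full=y
direct=y
recordlength=65535
file=full_db.dmp
log=exp_full.log

imp_data.par:
userid=system/manager
full=y
buffer=1048576
commit=y
indexes=n
file=full_db.dmp
log=imp_data.log

A separate imp run with indexfile=full_db_indexes.sql can then be used to extract the CREATE INDEX statements, which are executed after the data load as described in point 4.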

 

Scripts for performing backups:

Hot Backup: – Data files – windows platform

set pagesize 0

set feedback off

spool m:\hot_bkup.sql

select 'alter tablespace '|| tablespace_name ||' begin backup;'||chr(10)||

'host ocopy '||file_name ||' m:\oracle\bkup '||chr(10)||

'alter tablespace '|| tablespace_name ||' end backup;'

from dba_data_files

where status <> 'INVALID'

/

spool off

spool m:\hot_bkup_rslt

@ m:\hot_bkup.sql

spool off

This can be scheduled in the scheduled tasks or using ‘at’ command at the command prompt.
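For example (the time, path, and driver-script name are illustrative), the generating script could be scheduled nightly on Windows with the at command:

at 23:00 /every:M,T,W,Th,F "h:\oracle\ora81\bin\sqlplus -s system/manager @m:\run_hot_bkup.sql"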

The output is seen as under:

alter tablespace TTS_EX2 begin backup;

host ocopy H:\ORACLE\ORADATA\SRINIVAS\TTS_EX2.DBF c:\oracle\bkup

alter tablespace TTS_EX2 end backup;

(This example is for Windows, and the limitation here is that you cannot put the files directly onto tape. From this directory you may have to copy them to tape using the NT Backup utility.)

Hot Backup: – Datafiles – UNIX platform directly on to tape

set pagesize 0

set feedback off

spool hot_bkup.sql

rem rewind the tape

select '!mt -f /dev/rmt/0hc rew' from dual;

rem begin hot backup

select 'alter tablespace '|| tablespace_name ||' begin backup;'||chr(10)||

'!tar -rvf /dev/rmt/0hc '||file_name ||chr(10)||

'alter tablespace '|| tablespace_name ||' end backup;'

from dba_data_files

where status <> 'INVALID'

/

spool off

Now schedule hot_bkup.sql as a cron job in UNIX, to be run at the sqlplus or svrmgrl prompt.
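For example (paths are illustrative), an entry in the oracle user's crontab could run a small wrapper script every night at 11 pm, where run_hot_bkup.sh is a hypothetical shell script that connects with sqlplus or svrmgrl, regenerates hot_bkup.sql, and runs it:

0 23 * * * /home/oracle/scripts/run_hot_bkup.sh > /home/oracle/logs/hot_bkup.log 2>&1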

Cold backup – windows platform

Create run.cmd file

set oracle_sid=srinivas

h:\oracle\ora81\bin\svrmgrl "@shut.sql"

create shut.sql file

connect internal/oracle@srinivas

@c:\hbkup.sql

shutdown immediate

!hbkup.cmd

exit

create hbkup.sql file

set feedback off

set heading off

set pagesize 0

set echo off

spool c:\hbkup.cmd

select 'ocopy '||file_name ||' c:\oracle\bkup' from dba_data_files;

select 'ocopy '||name ||' c:\oracle\bkup' from v$controlfile;

select 'ocopy h:\oracle\admin\srinivas\pfile\init.ora c:\oracle\bkup' from dual;

select 'ocopy '||member ||' c:\oracle\bkup' from v$logfile;

spool off

Set run.cmd in the Scheduled Tasks or use the 'at' command to set up the task.

Cold backup – UNIX platform (the variables and directory structures used here may be changed to suit)

BACKUP_DIR=/home/oracle/bkup; export BACKUP_DIR

BKUPTIME=`date '+%m%d%H'`

BKUPFILE=$BACKUP_DIR/bkup$ORACLE_SID$BKUPTIME.tar

sqlplus -s sys/change_on_install > file_name.dat <<!

Set heading off ;

Set pagesize 0;

Set linesize 2000;

Set feedback off;

Select name from v$datafile;

Select name from v$controlfile;

Select member from v$logfile;

Exit;

!

svrmgrl <<!

connect internal as sysdba;

shutdown immediate;

exit;

!

tar cvf $BKUPFILE $ORACLE_HOME/dbs/init$ORACLE_SID.ora

for I in `cat file_name.dat`

do

echo "Backing up datafile $I"

tar -rvf $BKUPFILE $I

done

svrmgrl <<!

Connect internal as sysdba;

Startup pfile=$ORACLE_HOME/dbs/init$ORACLE_SID.ora;

Exit;

!

exit;

(The above are simple sample scripts, which are to be tested and satisfied before use).

The above scripts are not compatible with Oracle 9i and above, as Server Manager is no longer available. The same scripts have to be run in SQL*Plus, substituting sqlplus for svrmgrl, and connecting as sys ... as sysdba instead of internal, since internal is de-supported.

Setting up RMAN

RMAN setup in 8.0 is different from 8.1.

RMAN in Oracle 8 and 8.1 – with no catalog

Step 01 Create a rman backup command file for the datafiles (rman_bu.rcv)

run
{

allocate channel t1 type 'SBT_TAPE';

setlimit channel t1 kbytes 2097150 maxopenfiles 32 readrate 200;


# Backup the database

backup

full

tag db_full_backup

filesperset 6

format 'db_full_%d_t%t_s%s_p%p'

(database);


# Release the tape device

release channel t1;

}

# The following commands are run outside of the "run {}"

# command.


# Save control file to trace file.

sql 'ALTER DATABASE BACKUP CONTROLFILE TO TRACE';

Step
02 create a rman backup command file for the archive log files (rman_bu_log.rcv)

run
{

allocate channel t1 type 'SBT_TAPE';

setlimit channel t1 kbytes 2097150 maxopenfiles 32 readrate 200;


# Archive the current online log

sql 'alter system archive log current';


# save archive logs…

backup

filesperset 20

format 'al_%d_t%t_s%s_p%p'

(archivelog all delete input);


# Release the tape device

release channel t1;

}

These two files can be set up in a batch file to be run at chosen times. The backups can be monitored if Legato, Veritas, or any other compatible storage manager is installed.

The following batch file is used in conjunction with Legato Storage Manager
(Oracle 8.1.x)

echo #################### Begin ################## >> d:\orant\rdbms80\trace\orcl_db_bu.log

date /t >> d:\orant\rdbms80\trace\orcl_db_bu.log

time /t >> d:\orant\rdbms80\trace\orcl_db_bu.log

rman target internal/oracle@orcl nocatalog cmdfile d:\orant\dba_util\rman_bu.rcv 1>>d:\orant\rdbms80\trace\orcl_db_bu.log

date /t >> d:\orant\rdbms80\trace\orcl_db_bu.log

time /t >> d:\orant\rdbms80\trace\orcl_db_bu.log

c:\win32app\nsr\bin\savegrp -O -l full -c %computername%

print d:\orant\rdbms80\trace\orcl_db_bu.log

echo #################### End #################### >> d:\orant\rdbms80\trace\orcl_db_bu.log

echo

echo

Please note that for Oracle 8.X the RMAN executable is rman80, for Oracle 8.1.X the RMAN executable is rman, and for Oracle 9.X it is rman again. The following is a combined command file for the data files as well as the archive log files:

run
{

allocate channel t1 type 'SBT_TAPE';

setlimit channel t1 kbytes 2097150 maxopenfiles 32 readrate 200;


# Backup the database

backup

full

tag db_full_backup

filesperset 6

format 'db_full_%d_t%t_s%s_p%p'

(database);


# Release the tape device

release channel t1;

}

# The following commands are run outside of the "run {}"

# command.


# Save control file to trace file.

sql 'ALTER DATABASE BACKUP CONTROLFILE TO TRACE';

run
{

allocate channel t1 type 'SBT_TAPE';

setlimit channel t1 kbytes 2097150 maxopenfiles 32 readrate 200;

 

# Archive the current online log

sql 'alter system archive log current';


# save archive logs…

backup

filesperset 20

format 'al_%d_t%t_s%s_p%p'

(archivelog all delete input);


# Release the tape device

release channel t1;

}

list backupset of database;

list backupset of archivelog all;

RMAN
in Oracle 8.1 – with catalog

Step 01 create the tablespace for the RMAN schema and user

create tablespace rcvcat datafile '<path>\rcvcat.ora' size 100M default storage (initial 1M next 1M minextents 1 maxextents unlimited pctincrease 0);

Step 02 set auto extension on

alter database datafile '<path>\rcvcat.ora' autoextend on;

Step
03 create user for owning rman schema

create
user rman_<sid_name> identified by rman_<sid_name>

temporary tablespace temp01 default tablespace rcvcat

quota unlimited on rcvcat;

Step
04 make grants

grant
create session to rman_<sid_name>;

grant recovery_catalog_owner to rman_<sid_name>;

Step
05 at the command prompt in a single line

Rman
target internal/oracle@connect_string rcvcat rman_<sid_name>/rman_<sid_name>@connecting_string

Step
06 now create catalog by issuing the following command at RMAN prompt

create
catalog;

Step
07 after the catalog is created register the database

register
database;

that
is it you are done

A
piece of advice:

Create the RMAN catalog as a separate database. For this, create a small database of about 500 MB, create the RMAN user, and create the catalog for that user in that database. That database can then be put on cold backup daily, after the RMAN backup of the target database is completed. This protects the backups in case of disaster, because the RMAN catalog database is immediately restorable and all the
tapes or devices onto which the backups were made can then be used for recovery.

Media Recovery Scenarios

Gather the information

Before the recovery strategies are drawn it is necessary to create record for the tablespaces, associated data files, listing control files and on-line redo log files.

The following SQL statement helps gather records for recovery:

SELECT NAME FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE;

Recording the Locations of Archived Redo Logs

SELECT NAME, VALUE
FROM V$PARAMETER
WHERE NAME LIKE 'log_archive_dest%'
AND VALUE IS NOT NULL
/

Determine the format for archived logs by running SHOW as follows:

SHOW PARAMETER LOG_ARCHIVE_FORMAT

To see a list of all the archived logs recorded in the control file, issue this query:

SELECT NAME FROM V$ARCHIVED_LOG;

Recording the Locations of Backup Files

It is not enough to merely record the location of backup files: you must correlate the backups with the original files. If possible, name the backups with the same relative filename as the primary file. Whatever naming system you use, keep a table containing the relevant information.

Determining Which Data files Require Recovery

SELECT * FROM V$RECOVER_FILE;

With the output of the above query, query V$DATAFILE and V$TABLESPACE to obtain filenames and tablespace names for data files requiring recovery.

SELECT d.NAME, t.NAME AS tablespace_name
FROM V$DATAFILE d, V$TABLESPACE t
WHERE t.TS# = d.TS#
AND d.FILE# IN (select file# from v$recover_file);


Recovery Strategies and Procedures in Media Recovery Scenarios:

(1) Using Non-RMAN methods or User Managed Backups

Recovery under NOARCHIVELOG mode

01. After losing all the database files, restore the complete set of database files from the latest offline (cold) backup.
02. Start the database using startup open pfile=<init.ora file with path>;

In this type of recovery you lose the data logged into the database after the last backup.

Recovering Lost Data Files in NOARCHIVELOG mode

01. Mount the database:
startup mount pfile=<init.ora file with path>;
02. alter database open;
The database complains about the missing data file.
Taking that data file offline is not permitted, because the database is in NOARCHIVELOG mode.
Even trying to switch the database to ARCHIVELOG mode is not permitted at this point.

So

03. alter database datafile '<data file complained about above, with path>' offline drop;
04. alter database open;
05. drop tablespace <tablespace_name> including contents;
06. create tablespace <tablespace_name> datafile '<data file name with path>' size 100M;
07. Recreate the objects in that tablespace, if any. In this context export dumps are useful for recreating the objects, with or without data.

To take a data file offline, the database has to be in archive log mode. If the database is in noarchivelog mode you can only offline drop the data file, which is equivalent to taking it offline; the datafile is not actually dropped or deleted, and in this scenario it is already missing or dropped anyway.

Recovering Lost Data Files in archivelog mode

(1) Identify the data files that are damaged or lost. This becomes known when a table residing in the tablespace associated with the missing or damaged datafile is accessed, or when the instance is started and, while opening the database, Oracle complains about the physical structures of the database (data files, control files and redo log files).
The recovery can be done in two ways. Shut down the database, restore the datafile from the latest backup along with all the archived redo log files generated since that backup, and then mount the database; it cannot be opened, for the simple reason that the datafile is not yet available.

SVRMGR> connect internal as sysdba
Connected.
SVRMGR> startup
ORACLE instance started.
Total System Global Area 137020380 bytes
Fixed Size 70620 bytes
Variable Size 77750272 bytes
Database Buffers 59121664 bytes
Redo Buffers 77824 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 3 – see DBWR trace file
ORA-01110: data file 3: 'H:\ORACLE\ORADATA\SRINIVAS\USERS01.DBF'
SVRMGR> shutdown immediate;


At the OS level restore the data file reported not able to identify. Then connect to server manager and mount database and then recover database or the data file.

Recovering the Database

SVRMGR> connect internal/oracle@srinivas
Connected.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area 137020380 bytes
Fixed Size 70620 bytes
Variable Size 77750272 bytes
Database Buffers 59121664 bytes
Redo Buffers 77824 bytes
Database mounted.
SVRMGR> recover database;
Media recovery complete.
SVRMGR> alter database open;
Statement processed.
SVRMGR>

Recovering the Datafile


SVRMGR> connect internal/oracle@srinivas
Connected.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area 137020380 bytes
Fixed Size 70620 bytes
Variable Size 77750272 bytes
Database Buffers 59121664 bytes
Redo Buffers 77824 bytes
Database mounted.
SVRMGR>
SVRMGR> alter database datafile 'H:\ORACLE\ORADATA\SRINIVAS\USERS01.DBF' offline;
Statement processed.
SVRMGR> alter database open;
Statement processed.
SVRMGR> alter tablespace users offline;
alter tablespace users offline
*
ORA-01191: file 3 is already offline – cannot do a normal offline
ORA-01110: data file 3: 'H:\ORACLE\ORADATA\SRINIVAS\USERS01.DBF'
SVRMGR> alter tablespace users offline immediate;
Statement processed.
SVRMGR> recover tablespace users;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required
SVRMGR> alter tablespace users online;
Statement processed.
SVRMGR> connect scott/tiger@srinivas
Connected.
SVRMGR> select * from case_1;
COL1
———-
1
2
3
3 rows selected.
SVRMGR>

There are some differences between the Windows and UNIX platforms. On Windows, the data files cannot be deleted or moved while the database is open; the operating system complains of a sharing violation. This is not the case on UNIX platforms, where the data files can be deleted and moved while the database is open.

SVRMGR> create tablespace test_tbsp
2> datafile 'H:\Oracle\oradata\srinivas\test_tbsp_01.dbf' size 5m,
3> 'H:\Oracle\oradata\srinivas\test_tbsp_02.dbf' size 5m ;
Statement processed.
SVRMGR> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SVRMGR> startup mount
ORACLE instance started.
Total System Global Area 137020380 bytes
Fixed Size 70620 bytes
Variable Size 77750272 bytes
Database Buffers 59121664 bytes
Redo Buffers 77824 bytes
Database mounted.
SVRMGR> alter database datafile 'H:\Oracle\oradata\srinivas\test_tbsp_01.dbf' offline;
Statement processed.
SVRMGR> alter database open;
Statement processed.
SVRMGR> alter tablespace test_tbsp offline temporary;
Statement processed.
SVRMGR> recover datafile 'H:\Oracle\oradata\srinivas\test_tbsp_01.dbf';
Media recovery complete.
SVRMGR> alter tablespace test_tbsp online;
alter tablespace test_tbsp online
*
ORA-01113: file 18 needs media recovery
ORA-01110: data file 18: 'H:\ORACLE\ORADATA\SRINIVAS\TEST_TBSP_02.DBF'
SVRMGR> recover tablespace test_tbsp;
Media recovery complete.
SVRMGR> alter tablespace test_tbsp online;
Statement processed.
SVRMGR>

In case of UNIX platforms when the database reports the non-availability of the data files the following steps are required to be followed.

(1) take the tablespace offline temporarily

alter tablespace users offline temporary;

(2) restore the datafile from the latest backup, as well as the archived redo log files in case they were moved or deleted after being backed up.

(3) issue the following statement for recovering the datafile

recover automatic datafile 'H:\ORACLE\ORADATA\SRINIVAS\TEST_TBSP_02.DBF';

(4) after the media recovery is complete bring the tablespace online

alter tablespace test_tbsp online;

Restoring and recreating control files

There are three situations in which control files have to be restored or recreated.

(1) Losing a member of the multiplexed control files
(2) Losing all the members of the control files when a backup of the control files is available
(3) Losing all current and backup control files


Losing a member of the multiplexed control files

In this case it is not a big problem to restore the member control file. The following steps are required to recreate the lost member control file.

There are again two scenarios in this case.

a. Restoring the control file to the default destination

There are control_1.ctl, control_2.ctl and control_3.ctl files and control_2.ctl is lost. Then,

01. If the instance is still running

shutdown abort;

02. If there is any hardware problem correct that problem and then

for Windows Platform

copy control_3.ctl control_2.ctl

for UNIX platforms

cp control_3.ctl control_2.ctl

03. start the new instance

startup

b. Restoring the control file to the non-default destination

The default destination directory for control_2.ctl is %ORACLE_HOME%\dbs, and the control file has to be created/restored in a non-default destination such as %ORACLE_HOME%\database.

01. If the instance is still running

shutdown abort;

02. If there is any hardware problem correct that problem and then

for Windows Platform

copy control_3.ctl f:\oracle\database\control_2.ctl

for UNIX platforms

cp control_3.ctl /oracle/database/control_2.ctl

03. Edit the init.ora file for the instance

old parameter:
Windows:
control_files=('d:\oracle\dbs\control_1.ctl','e:\oracle\dbs\control_2.ctl','f:\oracle\dbs\control_3.ctl')
UNIX:
control_files=('/oracle/dbs/control_1.ctl','/oracle/dbs/control_2.ctl','/oracle/dbs/control_3.ctl')

new parameter:
Windows:
control_files=('d:\oracle\dbs\control_1.ctl','e:\oracle\database\control_2.ctl','f:\oracle\dbs\control_3.ctl')
UNIX:
control_files=('/oracle/dbs/control_1.ctl','/oracle/database/control_2.ctl','/oracle/dbs/control_3.ctl')


04. start the new instance

startup


Losing all the members of the control files when a backup of the control files is available

When a control file is inaccessible, you can start the instance but not mount the database. If you attempt to mount the database when the control file is unavailable, you see this error message:
ORA-00205: error in identifying controlfile, check alert log for more info

If the control files are restored from a backup, the database cannot be opened without the RESETLOGS option.

There are 4 scenarios on which the recovery procedure is dependent.

Status of Online Logs | Status of Datafiles | Response
Available | Current | If the online logs contain redo necessary for recovery, then restore a backup control file and apply the logs during recovery. You must specify the filenames of the online logs containing the changes in order to open the database. After recovery, open RESETLOGS.
Unavailable | Current | If the online logs contain redo necessary for recovery, then you must re-create the control file. Because the logs are inaccessible, open RESETLOGS.
Available | Backup | Restore a backup control file, perform complete recovery, and then open RESETLOGS.
Unavailable | Backup | Restore a backup control file, perform incomplete recovery, and then open RESETLOGS.

Source – Oracle Docs

If the control files are being restored to the default locations the initialization parameter file need not be edited.

Procedure:

01. Shutdown abort; — if the instance is still running.

02. Correct if there are any hardware problems

03. Restore the backup control files to the respective destinations. Use cp for UNIX platforms and copy for the windows platforms.

04. Startup mount;

05. Issue the following command to recover the database
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
(Select AUTO while applying the redo log files; when prompted that no more redo log files can be found for application, type CANCEL to cancel the recovery process. At the end of the process 'Media recovery complete.' is displayed.)

06. Then issue the following statement at SQL or svrmgrl prompt:
ALTER DATABASE OPEN RESETLOGS;

07. Backup the database afresh and keep that copy safe.

If the control files are being restored to non-default locations, the initialization parameter file has to be edited accordingly.

Procedure:

01. Shutdown abort; — if the instance is still running.

02. Correct if there are any hardware problems

03. Restore the backup control files to new locations/destinations. Use cp for UNIX platforms and copy for the windows platforms.

04. Edit the initialization parameter file suitably to reflect the new locations for the ‘controlfiles’ parameter

05. Startup mount;

06. Issue the following command to recover the database
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
(Select AUTO while applying the redo log files; when prompted that no more redo log files can be found for application, type CANCEL to cancel the recovery process. At the end of the process 'Media recovery complete.' is displayed.)

07. Then issue the following statement at SQL or svrmgrl prompt:
ALTER DATABASE OPEN RESETLOGS;

08. Backup the database afresh and keep that copy safe.

Losing All Current and Backup Control Files

If you... | Then...
Executed ALTER DATABASE BACKUP CONTROLFILE TO TRACE NORESETLOGS after you made the last structural change to the database, and you have saved the SQL command trace output | Use the CREATE CONTROLFILE statement from the trace output as-is.
Performed your most recent execution of ALTER DATABASE BACKUP CONTROLFILE TO TRACE before you made a structural change to the database | Edit the output of ALTER DATABASE BACKUP CONTROLFILE TO TRACE to reflect the change. For example, if you recently added a datafile to the database, then add this datafile to the DATAFILE clause of the CREATE CONTROLFILE statement.
Backed up the control file with the ALTER DATABASE BACKUP CONTROLFILE TO filename statement (not the TO TRACE option) | Use the control file copy to obtain SQL output: copy the backup control file into place, execute STARTUP MOUNT, and then ALTER DATABASE BACKUP CONTROLFILE TO TRACE NORESETLOGS. If the control file copy predated a recent structural change, then edit the trace output to reflect the change.
Do not have a control file backup in either TO TRACE format or TO filename format | Create the CREATE CONTROLFILE statement manually.

Source – Oracle Docs

Note

If your character set is not the default US7ASCII, then you must specify the character set as an argument to the CREATE CONTROLFILE statement. The database character set is written to the alert log at startup. The character set information is also recorded in the BACKUP CONTROLFILE TO TRACE output.

Recover Procedure:

01. STARTUP NOMOUNT

02. issue the create control file statement manually:

CREATE CONTROLFILE REUSE DATABASE “SRINIVAS” NORESETLOGS ARCHIVELOG
MAXLOGFILES 32
MAXLOGMEMBERS 2
MAXDATAFILES 32
MAXINSTANCES 16
MAXLOGHISTORY 1815
LOGFILE
GROUP 1 (
'H:\ORACLE\ORADATA\SRINIVAS\REDO03.LOG',
'H:\ORACLE\ORADATA\SRINIVAS\REDO_3.LOG'
) SIZE 1M,
GROUP 2 (
'H:\ORACLE\ORADATA\SRINIVAS\REDO02.LOG',
'H:\ORACLE\ORADATA\SRINIVAS\REDO_2'
) SIZE 1M
DATAFILE
'H:\ORACLE\ORADATA\SRINIVAS\SYSTEM01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\RBS01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\USERS01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEMP01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TOOLS01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\INDX01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\DR01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\USERS02.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEST_TBSP_01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TTS_EX2.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\OEM_REPOSITORY.ORA',
'H:\ORACLE\ORADATA\SRINIVAS\PERFSTAT.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\P1.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\P2.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\USER01.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEST.DBF',
'H:\ORACLE\ORADATA\SRINIVAS\TEST_TBSP_02.DBF'
CHARACTER SET WE8ISO8859P1
;

03.After control file is created Oracle MOUNTS the database. Then issue the following statement.

RECOVER DATABASE

04.Open the database when it is recovered.

Alter database open; — note that reset logs option is not necessary.

05. after the database is open backup control file to trace and another destination with reuse clause.

alter database backup controlfile to trace;

for UNIX platforms
alter database backup controlfile to ‘/oracle/database/bkup_control.ctl’ reuse;
for Windows platforms
alter database backup controlfile to 'd:\oracle\database\bkup_control.ctl' reuse;

Preparing for Closed Database Recovery
In this stage, you shut down the instance and inspect the media device that is causing the problem.

To prepare for closed database recovery:

If the database is open, then shut it down with the ABORT option:
SHUTDOWN ABORT
If recovering from a media error, then correct it if possible. If the hardware problem that caused the media failure was temporary, and if the data was undamaged (for example, a disk or controller power failure), then no media recovery is required: simply start the database and resume normal operations. If you cannot repair the problem, then proceed to the next step.

Restoring Backups of the Damaged or Missing Files
In this stage, restore all necessary backups.
To restore the necessary files:
Determine which datafiles to recover by querying the v$recover_file.
If the files are permanently damaged, then identify the most recent backups for the damaged files. Restore only the datafiles damaged by the media failure: do not restore any undamaged datafiles or any online redo log files.
For example, if /oracle/dbs/tbs_10.f is the only damaged file, then you may consult your records and determine that /oracle/backup/tbs_10.backup is the most recent backup of this file. If you do not have a backup of a specific datafile, then you may be able to create an empty replacement file that can be recovered.
Use an operating system utility to restore the files to their default location or to a new location. For example, a UNIX user restoring/oracle/dbs/tbs_10.f to its default location might enter:
% cp /oracle/backup/tbs_10.backup /oracle/dbs/tbs_10.f
Follow these guidelines when determining where to restore datafile backups:

If... | Then...
The hardware problem is repaired and you can restore the datafiles to their default locations | Restore the datafiles to their default locations and begin media recovery.
The hardware problem persists and you cannot restore the datafiles to their original locations | Restore the datafiles to an alternative storage device, rename them in the control file, and begin media recovery.
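If, as noted above, no backup of a particular datafile exists, an empty replacement can be created and then recovered from redo, provided all archived logs generated since the file was created are available (file names follow the example above):

ALTER DATABASE CREATE DATAFILE '/oracle/dbs/tbs_10.f' AS '/oracle/dbs/tbs_10.f';
RECOVER DATAFILE '/oracle/dbs/tbs_10.f';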

Recovering the Database

In the final stage, you recover the datafiles that you have restored.
To recover the restored datafiles:
Connect to the database with administrator privileges, then start a new instance and mount, but do not open, the database. For example, enter:
STARTUP MOUNT
Obtain the datafile names and statuses of all datafiles by checking the list of datafiles that normally accompanies the current control file or querying the V$DATAFILE view. For example, enter:
SELECT NAME,STATUS FROM V$DATAFILE;
Ensure that all datafiles of the database are online. All datafiles of the database requiring recovery must be online unless an offline tablespace was taken offline normally or is part of a read-only tablespace. For example, to guarantee that a datafile named /oracle/dbs/tbs_10.dbf is online, enter the following:
ALTER DATABASE DATAFILE '/oracle/dbs/tbs_10.dbf' ONLINE;   -- UNIX
ALTER DATABASE DATAFILE 'd:\oracle\oradata\tbs_10.dbf' ONLINE;   -- Windows
If a specified datafile is already online, then Oracle ignores the statement. If you prefer, create a script to bring all datafiles online at once as in the following:

SPOOL onlineall.sql

SELECT 'ALTER DATABASE DATAFILE '''||name||''' ONLINE;' FROM V$DATAFILE;

SPOOL OFF

SQL> @onlineall

Issue the statement to recover the database, tablespace, or datafile. For example, enter one of the following RECOVER commands:

RECOVER DATABASE # recovers whole database
RECOVER TABLESPACE users # recovers specific tablespace
RECOVER DATAFILE ‘/oracle/dbs/tbs_10’; # recovers specific datafile

Follow these guidelines when deciding which statement to execute:

To... | Then...
Recover all damaged files in one step | Execute RECOVER DATABASE
Recover an individual tablespace | Execute RECOVER TABLESPACE
Recover an individual damaged datafile | Execute RECOVER DATAFILE

 

If you choose not to automate the application of archived logs, then you must accept or reject each required redo log that Oracle prompts you for. If you automate recovery, then Oracle applies the necessary logs automatically. Oracle continues until all required archived and online redo log files have been applied to the restored datafiles.
Oracle notifies you when media recovery is complete:
Media recovery complete
If no archived redo log files are required for complete media recovery, then Oracle applies all necessary online redo log files and terminates recovery.
After recovery terminates, then open the database for use:
ALTER DATABASE OPEN;
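To automate the application of the prompted logs (a small sketch of the same closed-database recovery session), autorecovery can be switched on in SQL*Plus before issuing the RECOVER command:

SET AUTORECOVERY ON
RECOVER DATABASE;
ALTER DATABASE OPEN;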

Preparing for Open Database Recovery

In this stage, you take affected tablespaces offline and inspect the media device that is causing the problem.

To prepare for datafile recovery when the database is open:

If the database is open when you discover that recovery is required, take all tablespaces containing damaged datafiles offline. For example, if tablespace
users contains damaged datafiles, enter:

ALTER TABLESPACE users OFFLINE TEMPORARY;

Correct the hardware problem that caused the media failure. If the hardware problem cannot be repaired quickly, proceed with database recovery by restoring damaged files to an alternative storage device.

Restoring Backups of the Damaged or Missing Files

In this stage, restore all necessary backups in the offline tablespaces.

To restore datafiles in an open database:

If files are permanently damaged, then restore the most recent backup files of only the datafiles damaged by the media failure. Do not restore undamaged datafiles, online redo log files, or control files.

If the hardware problem has been repaired and the datafiles can be restored to their original locations, then do so. If the hardware problem persists, then restore the datafiles to an alternative storage device.

If damaged datafiles are restored to alternative locations, rename the datafiles in the control file of the database.

For example, to change the filename of the datafile in tablespace users you might enter:

for UNIX platforms:
ALTER DATABASE RENAME FILE ‘/d1/oracle/dbs/tbs1.dbf’ TO ‘/d3/oracle/dbs/tbs1.dbf’;

for Windows Platforms:
ALTER DATABASE RENAME FILE 'd:\oracle\oradata\srinivas\tbs1.dbf' TO 'e:\oracle\oradata\srinivas\tbs1.dbf';

Recovering Offline Tablespaces in an Open Database


In the final stage, you recover the datafiles in the offline tablespaces.


To recover offline tablespaces in an open database:

Connect to the database with administrator privileges. For example, connect as SYSTEM or SYS to database:
sqlplus SYS/PASSWORD@DBNAME_ALIAS AS SYSDBA

Start offline tablespace recovery of all damaged datafiles in one or more offline tablespaces using one step. For example, recover the users and sales tablespaces as follows:
RECOVER TABLESPACE users, sales; # begins recovery on datafiles in users and sales

If the damaged datafiles are associated with one tablespace only then:

RECOVER TABLESPACE users;
Oracle begins the roll-forward phase of media recovery by applying the necessary redo log files (archived and online) to reconstruct the restored datafiles. Unless the application of files is automated with RECOVER AUTOMATIC or SET AUTORECOVERY ON, Oracle prompts for each required redo log file.
Oracle continues until all required archived redo log files have been applied to the restored datafiles. The online redo log files are then automatically applied to the restored datafiles to complete media recovery.
If no archived redo log files are required for complete media recovery, then Oracle does not prompt for any. Instead, all necessary online redo log files are applied, and media recovery is complete.
When the damaged tablespaces are recovered up to the moment that the media failure occurred, bring the offline tablespaces online. For example, to bring tablespaces users and sales online, issue the following statements:
ALTER TABLESPACE users ONLINE;
ALTER TABLESPACE sales ONLINE;

Issue the statement separately for each tablespace that is to be brought online. It is not possible to bring multiple tablespaces online with a single statement, as can be done with the RECOVER statement.

 

Loss of Online Redo Files

When a media failure has affected the online redo log files, an appropriate recovery procedure has to be adopted. The choice of procedure depends on:

(1) How the online redo log files are configured. That means whether the online redo log files are mirrored or not.
(2) What type of media failure is there? Temporary or permanent.
(3) Types of the online redo log file that are affected by the media failure.

To identify the configuration of the redo log files the following query is to be issued at SQL prompt logging in as a user with administrative privileges.

select group#, members, status from v$log;

If each group has more than one member online redo log files are mirrored.

The media failure is to be identified and fixed.

After the media failure is identified and fixed, the status of the online redo log files has to be examined by querying V$LOG. The table below describes the possible statuses of an online redo log file and their meanings.

Status | Description
UNUSED | The online redo log has never been written to.
CURRENT | The log is active, that is, needed for instance recovery, and it is the log to which Oracle is currently writing. The redo log can be open or closed.
ACTIVE | The log is active, that is, needed for instance recovery, but is not the log to which Oracle is currently writing. It may be in use for block recovery, and may or may not be archived.
CLEARING | The log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.
CLEARING_CURRENT | The current log is being cleared of a closed thread. The log can stay in this status if there is some failure in the switch, such as an I/O error writing the new log header.
INACTIVE | The log is no longer needed for instance recovery. It may be in use for media recovery, and may or may not be archived.


The status of the online redo log files in V$LOGFILE should not be confused with the status of the online redo log files in V$LOG.

Inactive: It is not needed for crash recovery.
Active: It is needed for crash recovery.
Current: It is the log that Oracle is currently writing to.

 

When one member of a multiplexed online redo log group is lost and at least one member of the group remains available, then:

(1) The log group is not affected by the media failure and Oracle allows the database to function as normal.
(2) Oracle writes error messages to the LGWR trace file and to the alert log of the database.

If the hardware problem is temporary, correct it; LGWR then accesses the previously unavailable online redo log files as if the problem never existed.
If the hardware problem is permanent, the affected redo log group member must be dropped and re-created.
The procedure for that is:
(1) Locate the file name from V$LOGFILE with the status 'INVALID':
select group#, status, member from v$logfile where status = 'INVALID';
(2) Drop the damaged member:
ALTER DATABASE DROP LOGFILE MEMBER '<redo log file name with path>';

(3) Add new member to that group

ALTER DATABASE ADD LOGFILE MEMBER '<new redo log file name with path>' TO GROUP <group number>;

The group number is the one obtained from the query in step (1).

or

(3) Add an existing file which is of the same size as the other member(s) in the group, reusing it as a member:

ALTER DATABASE ADD LOGFILE MEMBER '<existing redo log file name>' REUSE TO GROUP <group number>;
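As a minimal worked sketch of steps (2) and (3), assuming the member of group 2 at the hypothetical path /u01/oradata/prod/redo02b.log has been reported INVALID and the underlying disk has been repaired or replaced:

ALTER DATABASE DROP LOGFILE MEMBER '/u01/oradata/prod/redo02b.log';
ALTER DATABASE ADD LOGFILE MEMBER '/u01/oradata/prod/redo02b.log' TO GROUP 2;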

When all the members of a group are lost there are three scenarios:

(1) When an INACTIVE group is lost.
(2) When an ACTIVE group is lost.
(3) When a CURRENT group is lost.

(1) When an INACTIVE group is lost:

If the media failure is temporary and is fixed, LGWR can reuse that group and nothing further needs to be done at this juncture.

If the media failure is permanent, then after it is fixed it must be investigated whether the INACTIVE group has been archived, and also whether any datafile that has gone offline requires the redo in that group. To answer these questions, follow the procedure laid down below.

(a) select * from v$log where status='INACTIVE' and archived='NO';

If any rows are returned, the inactive redo log group has not been archived. At this point it is still not known whether the data in that inactive redo log group is required by any offline datafile; this must be ascertained as follows:

(b) select file#, name, status from v$datafile where status <> 'ONLINE';

If this query returns rows, then there are datafiles that may require the redo log information. The procedure is then:

(1) Mount database

startup mount;

(2) (i) if query (a) has rows returned and query (b) has no rows returned:

Clear the log using the UNARCHIVED keyword. For example, to clear log group 2, issue

ALTER DATABASE CLEAR LOGFILE UNARCHIVED GROUP 2;

(2) (ii) if query (a) and query (b) return rows:

ALTER DATABASE CLEAR LOGFILE UNARCHIVED GROUP 2 UNRECOVERABLE DATAFILE;

(3) Back up the entire database, and also back up the control file to trace or to a different destination. If the control file is backed up to trace, a CREATE CONTROLFILE script is generated; when it is backed up to a different destination, the entire control file is copied and its size can be verified against the existing control files.
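A minimal sketch of the two control file backup forms mentioned above (the destination path is hypothetical and should be adapted to the site):

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/control_bkp.ctl';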

The CLEAR LOGFILE operation can itself fail with an I/O error. This happens when it is not possible to:

(a) Relocate the redo log file onto alternative media by re-creating it under the currently configured redo log filename, or
(b) Reuse the currently configured log filename to re-create the redo log file, because the name itself is invalid or unusable (for example, due to media failure).

In that case the affected log group has to be dropped and re-created at a usable location.
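A minimal sketch of re-creating the group at a new location, assuming group 2 is the damaged group, the new path is hypothetical, and the size matches the remaining groups. Note that this only works if group 2 is not the current group and the database retains at least two other log groups:

ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE ADD LOGFILE GROUP 2 ('/u02/oradata/prod/redo02a.log') SIZE 50M;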
(2) When an ACTIVE group is lost, the procedures differ between NOARCHIVELOG and ARCHIVELOG modes.

(2a) When an ACTIVE group is lost in NOARCHIVELOG mode:
(There is loss of data in this kind of scenario.)

(1) If the media failure is temporary and is corrected, then when the instance is restarted Oracle reuses the group as if nothing had happened.

(2) If the media failure is permanent and is corrected, the database must be restored from the latest available backup (in NOARCHIVELOG mode only consistent backups are possible; no online backup can be taken because media recovery is not enabled). If the online redo log files were also backed up, restore them to their default locations and start the database; there is no necessity to recover the database. If the online redo log files were not backed up, then:

(3) mount the database
startup mount;

(4) Mimic an incomplete recovery by performing cancel-based recovery:
RECOVER DATABASE UNTIL CANCEL
and issue CANCEL to cancel the recovery.

(5) Open the database using reset logs option
ALTER DATABASE OPEN RESETLOGS;

(6) Back up the database completely, including the online redo log files, after shutting down the database with the NORMAL or IMMEDIATE option, because the database is in NOARCHIVELOG mode.

Note: The restored database contains data only up to the backup used to restore it. All data entered after that point must be re-entered.

(2b) When an ACTIVE group is lost in ARCHIVELOG mode:

If the media failure is temporary, Oracle will start writing to the redo log files when the instance is restarted.

But, in the case of permanent failure of media the following procedure is to be adopted.

(1) Perform incomplete media recovery, canceling the recovery at the point where the lost online redo log files are needed. This results in the loss of the data that those redo log files contain.

(2) Ensure that the current name of the lost redo log can be used for a newly created file. If not, then rename the members of the damaged online redo log group to a new location. The procedures to add an online redo log group and member are discussed above. To rename a redo log file, issue the following statement:

ALTER DATABASE RENAME FILE '<old redo log file name with path>' TO '<new redo log file name with path>';

This statement is to be issued to each member in that group that has been lost.

(3) Open the database with reset logs option

ALTER DATABASE OPEN RESETLOGS;

All updates executed from the endpoint of the incomplete recovery to the present must be re-executed; otherwise that data is lost.

(3) When a CURRENT group is lost, there will be data loss whether the database is in ARCHIVELOG or NOARCHIVELOG mode. This is an unavoidable situation.

In NOARCHIVELOG mode, if the online redo log files were also backed up and the database is restored including those online redo log files, no further data can be recovered and all data entered after the backup must be re-entered.

In ARCHIVELOG mode, the database is restored from the latest backup and the archived redo logs are rolled forward, but the recovery is incomplete because the current redo log files are not available; the recovery must therefore be done on an UNTIL CANCEL basis. The data loss is limited to the data contained in the lost current online redo log file.
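A minimal sketch of that cancel-based recovery, assuming the backup datafiles have already been restored to their original locations:

STARTUP MOUNT;
RECOVER DATABASE UNTIL CANCEL;
-- apply the available archived logs when prompted, then enter CANCEL
ALTER DATABASE OPEN RESETLOGS;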

Recovering After the Loss of Archived Redo Log Files:

If the database is operating in ARCHIVELOG mode, and if the only copy of an archived redo log file is damaged, then the damaged file does not affect the present operation of the database. The following situations can arise, however, depending on when the redo log was written and when you backed up the datafile.

If you backed up all datafiles after the filled online redo log group (which is now archived) was written: the archived version of that filled online redo log group is not required for a complete media recovery operation.

If you backed up a specific datafile before the filled online redo log group was written: if the corresponding datafile is damaged by a permanent media failure, use the most recent backup of the damaged datafile and perform incomplete recovery up to the damaged log.

Caution:

If you know that an archived redo log group has been damaged, immediately back up all data files so that you will have a whole database backup that does not require the damaged archived redo log.

This is very handy material. You may be doing recovery, but have you ever made mistakes while doing so, such as forgetting commands or the sequence in which they must be issued? Here is material worth keeping for reference.
Your feedback is most welcome.

BACKUP PROCEDURES :

Backup is a very important aspect of any database and should be planned carefully, because recovery depends upon the backup strategy being followed. The backup strategy depends upon the mode of the database: different methods are adopted for a database running in ARCHIVELOG mode and a database running in NOARCHIVELOG mode.

Criteria :

[1] When database is running in archive-log mode.

Cold backup :

In init.ora, search for the parameter control_files to find the names of the control files for that database. Query the V$DATAFILE and V$LOGFILE views to find the names of the datafiles and redo log files associated with the database. Use the operating system command to take the backup of these files. *


In init.ora, search for the parameter log_archive_dest to find the location of the archived log files.

Use the operating system command to take the backup of these files.

Frequency: This backup has to be taken weekly.
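A minimal sketch of the queries used to list the files to be backed up, run from SQL*Plus as a privileged user. As an alternative to reading init.ora, the control file names can also be obtained from the V$CONTROLFILE view:

select name from v$controlfile;
select name from v$datafile;
select member from v$logfile;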

Only Oracle database related files should be backed up. This solves three problems:
[a] Database downtime will be less.
[b] Retrieval from cartridge will take less time.
[c] Fewer cartridges will be required.

File System Backup: Generally its frequency should be low. It acts as a backup for all the files (Oracle + O.S. + other) and will be needed if all the disks crash. If important files are being created on the server, then its frequency should be increased as decided by the site in-charge.

Hot backup:

In init.ora, search for the parameter control_files to find the names of the control files for that database. Query the V$DATAFILE view to find the names of the datafiles associated with the database.

Use the operating system command to take the backup of these files.*

In init.ora, search for the parameter log_archive_dest to find the location of the archived log files.

Use the operating system command to take the backup of these files. *

Frequency: This backup has to be taken daily.
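For an open (hot) backup, each datafile must be copied while its tablespace is in backup mode, using the BEGIN/END BACKUP statements mentioned earlier. The following is a minimal sketch for one tablespace, assuming a tablespace named users, a single hypothetical datafile path, and a UNIX cp copy issued from SQL*Plus; it is repeated for each tablespace:

ALTER TABLESPACE users BEGIN BACKUP;
HOST cp /u01/oradata/prod/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;
-- force a log switch so the redo generated during the backup gets archived
ALTER SYSTEM SWITCH LOGFILE;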

Logical Backup:

[1] Ideally a complete database export should be taken daily.
It is also called the base backup.

[2] Take an incremental export daily except on weekends.

On weekends, a cumulative database export should be taken. When the cumulative export is taken, the incremental exports should be removed to save space on disk.

At month end, take a complete database export and remove the previously stored cumulative export logical backups.

[3] Take exports of important users (user-level exports) daily.

Any of the above options can be implemented at the site, but the order of preference should be: first try [1]; if that is not possible, use [2]; the last option should be [3].
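A minimal sketch of the corresponding commands using the legacy exp utility. The username/password, dump file names and the exported user are hypothetical and should be adapted to the site:

exp system/<password> full=y inctype=complete file=full_expdat.dmp log=full_exp.log
exp system/<password> full=y inctype=incremental file=inc_expdat.dmp log=inc_exp.log
exp system/<password> full=y inctype=cumulative file=cum_expdat.dmp log=cum_exp.log
exp system/<password> owner=<important user> file=user_expdat.dmp log=user_exp.log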


Cartridges Strategy:

If you are taking a complete database export, then use three different sets of cartridges (the grandfather, father and son concept) on three different days, and rotate these cartridges again.

For incremental backups, use six different sets of cartridges on six different days, and rotate these cartridges again after successful completion of the cumulative database export backup.

For cumulative backups, use a different cartridge every week and rotate those cartridges in the next month after successful completion of the complete database export backup.


Recovery :

In day-to-day operation the most common types of failure are a dropped table, partial data loss in a table, or instance failure. Using the exported dump file (expdat.dmp), the first two types of problem can be recovered.

For instance failure, simply restart the database; Oracle will automatically recover the database (instance recovery).

For more complicated types of problem, such as a media crash (loss of a datafile, etc.), please refer to Annexure-1.
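For the dropped-table case, a minimal sketch of recovering a single table from the export dump with the legacy imp utility; the password, schema and table names are hypothetical:

imp system/<password> file=expdat.dmp fromuser=SCOTT touser=SCOTT tables=EMP ignore=y log=imp_emp.log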

[2] When database is running in no archive-log mode.


Cold backup :
In init.ora, search for the parameter control_files to find the names of the control files for that database. Query the V$DATAFILE and V$LOGFILE views to find the names of the datafiles and redo log files associated with the database. Use the operating system command to take the backup of these files. Ideally this backup should be taken daily.

Logical Backup:

[1] Ideally a complete database export should be taken daily.
It is also called the base backup.

[2] Take an incremental export daily except on weekends.

On weekends, a cumulative database export should be taken. When the cumulative export is taken, the incremental exports should be removed to save space on disk.

At month end, take a complete database export and remove the previously stored cumulative export logical backups.

[3] Take exports of important users (user-level exports) daily.

Any of the above options can be implemented at the site, but the order of preference should be: first try [1]; if that is not possible, use [2]; the last option should be [3].

Cartridges Strategy:

If you are taking a complete database export, then use three different sets of cartridges (the grandfather, father and son concept) on three different days, and rotate these cartridges again.

For incremental backups, use six different sets of cartridges on six different days, and rotate these cartridges again after successful completion of the cumulative database export backup.

For cumulative backups, use a different cartridge every week and rotate those cartridges in the next month after successful completion of the complete database export backup.


Recovery :

Recovery is a very important process and should be done very carefully. In day-to-day operation the most common types of failure are a dropped table, partial data loss in a table, or instance failure. Using the exported dump file (expdat.dmp), the first two types of problem can be recovered. For instance failure, simply restart the database; Oracle will automatically recover the database (instance recovery). For more complicated types of problem, such as a media crash (loss of a datafile, etc.), please refer to Annexure-1.

* Commands to be used in copying file(s) to backup device :

In Unix :

[a] cpio -ocBv < [file containing the list of file names] > [/dev/rmt0.1|/dev/rmt0]
or
find / -name '[pattern]' -depth -print | cpio -ocBv > [/dev/rmt0.1|/dev/rmt0]

[b] tar -cvf [/dev/rmt0.1|/dev/rmt0] [name of file(s)]

or
tar -cvf [/dev/rmt0.1|/dev/rmt0] /

Complete file system backup :

In Unix :

[a] find / -depth -print | cpio -ocBv > [/dev/rmt0.1|/dev/rmt0]

or

[b] To copy all files of UNIX to the backup device:

tar -cvf [/dev/rmt0.1|/dev/rmt0] /

In Windows NT:

Use backup utility to copy the necessary files.

Commands to be used in restoring file(s) from backup device :

In Unix :
[a] cpio -icBv < [/dev/rmt0.1|/dev/rmt0]
or
cpio -icBv "[pattern]" < [/dev/rmt0.1|/dev/rmt0]

[b] tar -xvf [/dev/rmt0.1|/dev/rmt0]

In Windows NT:

Use restore utility to copy the necessary files from backup device.


Annexure – 1


1. LOSS OF NON-ESSENTIAL DATAFILE WHEN DATABASE IS DOWN
( DATABASE CAN BE IN ARCHIVELOG MODE OR NO ARCHIVELOG
MODE )

SCENARIO
[1] Database startup fails with errors:
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing.

REQUIREMENT
[1] The script which will re-create the objects in the datafile, for example the script which will create the indexes.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN (5 MIN+ TIME TAKEN TO CREATE INDEXES)

NON-ESSENTIAL DATAFILES
DATAFILE OF INDEX TABLESPACE, TEMPORARY TABLESPACE.

SOLUTION
Shutdown the database.(shutdown immediate).
Take complete backup of current database.
Startup mount
Query the v$recover_file view joined with v$datafile on file# and note down the name of the file;
say it is /prodebs/test/ind.dbf.
Alter database datafile '/prodebs/test/ind.dbf' offline;
(if the database is in NOARCHIVELOG mode the command will be
Alter database datafile '/prodebs/test/ind.dbf' offline drop; )
Alter database open;
Drop tablespace user_index including contents;
Create tablespace user_index
datafile '/prodebs/test/ind.dbf' size 1M;
Run the script which will rebuild the indexes*.
Shutdown the database and take backup if necessary.
Startup.

* NB : For temporary tablespace skip this step.
———————————————————————————————–
2. MISSING MIRRORED ONLINE REDO LOG FILES (DATABASE IS UP/DOWN)

SCENARIO

The database opens normally but error messages are logged in the alert log:
(error from LGWR; the error is also written in the LGWR trace file)
ORA-313 open failed for members ..
ORA-312 name of the missing redo log member
ORA-7360 OS error
ORA-321 cannot update logfile header

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN 5 MIN.

SOLUTION

Shutdown the database.(shutdown).
Startup mount
Query v$logfile view and find which member has become invalid.
Query v$log view and find which group is current and size of group members (say it is b).
If a member of the current log group (say group 1) is corrupted, issue the following command:
Alter system switch logfile;
If you can add one more member to the corrupted log group (i.e. MAXLOGMEMBERS has not been reached), add one more
member to that group:
Alter database add logfile member '<filespec>' to group 1;
Shutdown the database
Startup the database
If you cannot add one more member to the corrupted log group,
create one more log group with members equal in number and size to the non-corrupted log groups:
Alter database add logfile group 3 ('<filespec>','<filespec>') size b;
Drop the corrupted log group:
alter database drop logfile group 1;
Manually remove the other members of the corrupted log group at the OS level (e.g. rm in UNIX).
Shut down the database
Startup the database


———————————————————————————————–
3. RECOVER A LOST DATAFILE WITH NO BACKUP AND ALL ARCHIVED
LOG FILES

SCENARIO
Database startup fails with errors:
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing.

REQUIREMENT
For full recovery, database should be in archivelog mode.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 10 mins.

SOLUTION

Shutdown the database.(shutdown).
Startup mount;
Query v$recover_file
Query v$datafile and find the name of the datafile which is missing (say it is '/prodebs/test/user_odc.dbf').
Now issue the following commands in the given order :
alter database datafile '/prodebs/test/user_odc.dbf' offline;
alter database create datafile '/prodebs/test/user_odc.dbf' as '/prodebs/test/user_odc1.dbf';
(removed file) (new file)
alter database datafile '/prodebs/test/user_odc1.dbf' online;
alter database recover datafile '/prodebs/test/user_odc1.dbf'; (or: recover database)
This will generally recover the database if you have all the archived log files with you.
alter database open;
Shutdown the database and take necessary backup if required.
Start the database.
———————————————————————————————–
4. RECOVER A LOST DATAFILE WITH BACKUP AND ALL ARCHIVED
LOG FILES

SCENARIO
Database startup fails with errors:
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing.

REQUIREMENT
For full recovery, database should be in archivelog mode.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN 5 mins.

SOLUTION

Shutdown the database.(shutdown).
Startup mount;
Query v$recover_file
Query v$datafile and find the name of the datafile which is missing (say it is '/prodebs/test/user_odc.dbf').
Restore the lost datafile from the backup (the old copy) and give the following commands:
recover database;
alter database open;
Shutdown the database and take necessary backup if required.
Start the database.

———————————————————————————————–
5. RECOVER A LOST DATAFILE WITH BACKUP AND MISSING ARCHIVED
LOG FILES.

SCENARIO
Database startup fails with errors:
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing.

REQUIREMENT
For recovery, database should be in archivelog mode.

CONDITION
Recovery will be incomplete.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 10 mins.

SOLUTION

Shutdown the database.(shutdown).
Copy all your datafiles from backup except control file.
Startup mount;
Copy whatever archived redo log files are available from the backup and give the following commands:
recover database until cancel
alter database open resetlogs;
Shutdown the database and take necessary backup if required.
Start the database.

———————————————————————————————–
6. LOSS OF DATAFILE WHEN THE DATABASE IS IN NOARCHIVELOG MODE WITH NO LOGICAL BACKUP AND RECOVERY.

SCENARIO
Database startup fails with errors: (a) on Monday morning (b) on Thursday
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a user datafile.
There is no export/import backup (logical backup).
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
Here the datafile(s) associated with the user tablespace is (are) lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION when the problem has occurred on (a) Monday morning

Shutdown the database (shutdown).
Take a complete backup of the current database.
Delete all the database files, redo log files and control files.
Copy all the database files, redo log files and control files from the backup.
Start the database. There is no data loss.

SOLUTION when the problem has occurred on (b) Thursday morning. Here data loss will occur.

Shutdown the database (shutdown).
Take a complete backup of the current database.
Delete all the database files, redo log files and control files.
Copy all the database files, redo log files and control files from the backup.
Start the database. There is data loss for Monday, Tuesday and Wednesday.
Ask users to re-enter the data.

7. LOSS OF DATAFILE WHEN THE DATABASE IS IN NOARCHIVELOG MODE WITH LOGICAL BACKUP AND RECOVERY.

SCENARIO
Database startup fails with errors on Thursday morning.
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a user datafile.
There is an export/import backup (logical backup).
Strategy: along with the cold backup, a complete database export is also taken, and then either
(i) an incremental export is taken every day, or
(ii) a complete database export is taken every day.
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
Here the datafile(s) associated with the user tablespace is (are) lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

[i] SOLUTION when the problem has occurred on Thursday morning and incremental exports are taken.

Shutdown the database (shutdown).
Take a complete backup of the current database.
Delete all the database files, redo log files and control files.
Copy all the database files, redo log files and control files from the backup.
Start the database.
Apply the incremental exports using import, starting from Monday through Wednesday. There is no data loss.

[ii] SOLUTION when the problem has occurred on Thursday morning and a complete database logical
backup is taken daily.

Shutdown the database (shutdown).
Take a complete backup of the current database.
Delete all the database files, redo log files and control files.
Copy all the database files, redo log files and control files from the backup.
Start the database.
Apply Wednesday's complete export backup. There is no data loss.

8. LOSS OF SYSTEM DATAFILE WHEN THE DATABASE IS IN ARCHIVE
LOG MODE AND RECOVERY.

SCENARIO
Database startup fails with errors on Thursday morning.
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a SYSTEM datafile.
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
Here the datafile(s) associated with the SYSTEM tablespace is (are) lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION

Shutdown the database.(shutdown).
Take complete backup of current database.
Restore the missing SYSTEM datafile(s) from the backup.
Startup mount exclusive;
recover database;
alter database open;
Database is ready for use.


9. LOSS OF NON SYSTEM DATAFILE WITHOUT ROLLBACK SEGMENTS
WHEN THE DATABASE IS IN ARCHIVE LOG MODE AND RECOVERY.

SCENARIO
Database startup fails with errors on Thursday morning.
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a user datafile.
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
Here the datafile(s) associated with the user tablespace is (are) lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION -1 (DATABASE RECOVERY)

Shutdown the database.(shutdown).
Take complete backup of current database.
Copy the missing database file(s).
Startup mount exclusive;
recover database;
alter database open;
Database is ready for use.

SOLUTION -2 (DATAFILE RECOVERY)*

Shutdown the database.(shutdown).
Take complete backup of current database.
Copy the missing database file(s).
Startup mount exclusive;
Alter database datafile '<name of datafile>' offline;
Alter database open;
recover datafile '<name of datafile>';
Alter database datafile '<name of datafile>' online;
Database is ready for use.

* If multiple datafiles are lost, use the parallel recovery method (from multiple terminals use the same method for different files).

SOLUTION -3 (TABLESPACE RECOVERY)

Shutdown the database.(shutdown).
Take complete backup of current database.
Copy the missing database file(s).
Startup mount exclusive;
Alter database datafile '<name of datafile>' offline;
Alter database open;
Alter tablespace <tablespace name> offline temporary;
recover tablespace <tablespace name>;
Alter tablespace <tablespace name> online;
Database is ready for use.


10. LOSS OF NON SYSTEM DATAFILE WITH ROLLBACK SEGMENTS WHEN
THE DATABASE IS IN ARCHIVE LOG MODE AND RECOVERY.

SCENARIO
Database startup fails with errors on Thursday morning.
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a rollback segment datafile.
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
The datafile(s) associated with the rollback segment tablespace is (are) lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION –

Shutdown the database.(shutdown).
Take complete backup of current database.
Comment out the ROLLBACK_SEGMENTS parameter, or set ROLLBACK_SEGMENTS=(SYSTEM),
in the init.ora file before startup.
Copy the missing rollback segment database file(s).
Startup mount exclusive;
alter database datafile '<name of datafile>' offline;
alter database open;
Alter tablespace <rollback tablespace name> offline temporary;
recover tablespace <rollback tablespace name>;
Alter tablespace <rollback tablespace name> online;
Query dba_rollback_segs (columns segment_name, status) and note down the names of the segments
having a status of NEEDS RECOVERY (say they are r01, r02, r03).
alter rollback segment r01 online;
alter rollback segment r02 online;
alter rollback segment r03 online;
shutdown the database.
Remove the comment from the ROLLBACK_SEGMENTS parameter (or remove the SYSTEM value) and give the names of the rollback
segments which you want to be online when the database starts.
Start the database.
Database is ready for use.


11. LOSS OF UNARCHIVED ONLINE LOG FILES WHEN THEY ARE NOT
MIRRORED WHEN THE DATABASE IS IN ARCHIVE LOG MODE AND
RECOVERY.

SCENARIO
Database startup fails on Thursday morning with errors indicating missing online redo log files (e.g. ORA-313 / ORA-312, as in scenario 2).
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
All the online redo log files are lost.
All the datafiles and the current control files are intact.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION –

Shutdown the database (shutdown).
Take a complete backup of the current database.
Copy all the database files from the latest offline or online backup.
Startup mount exclusive;
recover database until cancel;
alter database open resetlogs;
shutdown the database.
Take a cold backup. It is strongly advised.
Start the database.
Database is ready for use.


12. DATABASE CRASH DURING HOT BACKUP WHEN THE DATABASE IS IN
ARCHIVE LOG MODE AND RECOVERY.

SCENARIO
While taking hot backup, database crashes.
(a) When Oracle Version is 7.2 or more.
(b) When Oracle Version is 7.1.

TIME TAKEN IN RECOVERY
[a] DATABASE WILL BE READY FOR USE IN MIN 2 mins.
[b] DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION – (Oracle Version is 7.2 or more)

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
Query the view v$backup and get the file# having status ACTIVE. Then from v$datafile get the name of
the file which is active in v$backup.
alter database datafile '<name of datafile>' end backup;
alter database open ;
shutdown the database.
Start the database.
Database is ready for use.

SOLUTION – (Oracle Version is 7.1 )

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
recover database ;
( This may take significant amount of time if a large number of archived logs are to be applied)
alter database open ;
shutdown the database.
Start the database.
Database is ready for use.


13. LOSS OF CONTROL FILE AND WHEN THE DATABASE IS IN ARCHIVE
LOG MODE AND RECOVERY.

SCENARIO
[a] Loss of a control file when it is mirrored.
[b] Loss of the control file when there is a backup, it is not mirrored, and the backup is not from before the last RESETLOGS.
[c] Loss of the control file when there is no backup and it is not mirrored (total loss).

NB: Loss of the control file when there is a backup, it is not mirrored, and the backup is from before the last RESETLOGS
(suppose the database was opened on day x with ALTER DATABASE OPEN RESETLOGS; the backup control file is then from
before day x).

On startup / recover database using backup controlfile the Oracle errors are:
ORA-1190: control file or data file 1 is from before the last RESETLOGS
ORA-1110: name of the SYSTEM datafile
[b] Loss of the control file when there is a backup, it is not mirrored, and the backup is not from before the last RESETLOGS
(suppose the database was opened on day x with ALTER DATABASE OPEN RESETLOGS; the backup control file is then from
after day x). The errors are:
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '/prodebs/test/sys_odc.dbf'
ORA-01207: file is more recent than control file - old control file

TIME TAKEN IN RECOVERY
[a] DATABASE WILL BE READY FOR USE IN MIN 2 mins.
[b] DATABASE WILL BE READY FOR USE IN MIN 30 mins.
[c] DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION – (Loss of control file when it is mirrored)

Shutdown the database.(shutdown).
Copy the surviving (mirrored) control file to this disk.
Rename the copy to the name of the lost control file.
Start the database.
Database is ready for use.

SOLUTION – (Loss of all the control file(s) )

Shutdown the database.(shutdown).
Use the backup of the control file which you should have taken earlier with the following command:
alter database backup controlfile to trace;
That command writes a script to a trace file. After editing, the script looks like this:
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE “” NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 100
LOGFILE
GROUP 1 (
'/prodebs/test/redo_odc11.dbf',
'/prodebs/test/redo_odc12.dbf'
) SIZE 50K,
GROUP 2 (
'/prodebs/test/redo_odc21.dbf',
'/prodebs/test/redo_odc22.dbf'
) SIZE 50K
DATAFILE
'/prodebs/test/sys_odc.dbf',
'/prodebs/test/sys_odc1.dbf',
'/prodebs/test/user_odc.dbf',
'/prodebs/test/temp_odc.dbf',
'/prodebs/test/rbs_odc.dbf',
'/prodebs/test/ind.dbf'
;
RECOVER DATABASE
ALTER SYSTEM ARCHIVE LOG ALL;
ALTER DATABASE OPEN;
shutdown the database.
Start the database.
Database is ready for use.

SOLUTION – (Loss of control file when there is a backup and it is not mirrored)

Shutdown the database.(shutdown).
Copy the old control file to this disk.
startup mount exclusive;
If you have any tablespace which is read only, take all the datafiles related to that tablespace offline.
recover database using backup controlfile;
Bring the offline datafiles back online (alter database datafile '<name of datafile>' online;)
(for read-only tablespaces).
alter database open resetlogs;
Shutdown the database.
Take a cold backup. It is strongly advised.
Start the database.
Database is ready for use.


14. DATABASE SPACE MANAGEMENT WHEN THE DATABASE IS IN
ARCHIVE LOG MODE AND RECOVERY (RESIZING DATAFILE).

SCENARIO
Space management when :
(a) Oracle Version is 7.2 or more.
(b) Oracle Version is 7.1.

Oracle Error is : ora 00376 file # can not be read at this time.
ora 01110 name of datafile.

TIME TAKEN IN RECOVERY
[a] DATABASE WILL BE READY FOR USE IN MIN 5 mins.
[b] DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION – (Oracle Version is 7.2 or more)

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup open;
Query the view v$datafile to get the name of the file which you want to resize.
alter database datafile '<name of datafile>' resize <n>[M|K];
shutdown the database.
Take backup if necessary.
Start the database.
Database is ready for use.

SOLUTION – (Oracle Version is 7.1 )

[a] Restore the datafile and apply recovery. Resizing is not possible.

Shutdown the database.(shutdown).
Take complete backup of current database.
Copy the deleted datafile
Startup mount exclusive;
recover database ;
( This may take significant amount of time if a large number of archived logs are to be applied)
alter database open ;
shutdown the database.
Start the database.
Database is ready for use.

[b] If deleted datafile is not available in backup .

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
alter database create datafile '<original datafile name>' as '<new datafile name>';
recover database ;
( This may take significant amount of time if a large number of archived logs are to be applied)
alter database open ;
shutdown the database.
Start the database.
Database is ready for use.

[c] Rebuild tablespace
Requirement : Logical backup is there.

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
alter database datafile '<name of datafile>' offline;
alter database open;
alter tablespace <tablespace name> offline;
drop tablespace <tablespace name>;
create tablespace <tablespace name> datafile '<name of datafile>' size <n>[M|K];
alter tablespace <tablespace name> online;
Use the export/import method to recover the lost data.
shutdown the database.
Take backup if necessary.
Start the database.
Database is ready for use.


15. RECOVERY THROUGH RESETLOGS .

CONDITIONS :

[A] When online redo logs files are deleted.
[B] Loss of all control files.
[C] When recovery is done through old control files.

Events
[1]
a- Cold backup is taken.
b- Loss of redo log file.and media recovery. At this moment, a backup is taken.
c- Loss of data file
[2]
a- Cold backup is taken.
b- Loss of redo log file.and media recovery. At this moment, a backup is not taken.
c- Loss of data file .

SOLUTION – [1]

Shutdown the database.(shutdown).
Take complete backup of current database.
Copy the most recent cold backup of datafiles.
Startup mount exclusive;
Recover database;
alter database open;
shutdown the database.
Take a cold backup.
Start the database.
Database is ready for use.

Advantage : All the data will be recovered.

SOLUTION – [2]-i

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
alter database datafile '<name of datafile>' offline;
alter database open;
Export all the data from the tablespace (all objects in missing file will be inaccessible),
Drop and recreate the tablespace.
Import all the data taken from tablespace.
recover datafile '<name of datafile>';
alter database open;
shutdown the database.
Take a cold backup.
Start the database.
Database is ready for use.

Disadvantage: All the data that was entered into the missing datafile will be lost.

SOLUTION – [2]-ii

Shutdown the database.(shutdown).
Take complete backup of current database.
Copy the most recent cold backup of datafile and control file only. Do not copy redo log file.
Startup mount exclusive;
Recover database;
alter database open resetlogs;
shutdown the database.
Take a cold backup.
Start the database.
Database is ready for use.

Disadvantage: All the data entered after event b will be lost.


16. LOSS OF DATA FILE WHEN THE DATABASE IS IN ARCHIVE LOG MODE AND RECOVERY.

SCENARIO
Database startup fails with errors on Thursday morning.
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a user datafile.
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
The datafile(s) associated with the user tablespace is (are) permanently lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION –

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
alter database create datafile '<name of datafile>';
recover datafile '<name of datafile>';
alter database open;
shutdown the database.
Start the database.
Database is ready for use.


17. SYSTEM CLOCK CHANGE AND POINT-IN-TIME RECOVERY.

SCENARIO
Database startup fails with errors on Thursday morning.
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a user datafile.
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
The datafile(s) associated with the user tablespace is (are) permanently lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION –

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
recover database until time '<YYYY-MM-DD:HH24:MI:SS>';
alter database open resetlogs;
shutdown the database.
Take a cold backup of database;
Start the database.
Database is ready for use.


18. OFFLINE TABLESPACES AND RECOVERY.

SCENARIO
Database startup fails with errors on Thursday morning.
ORA-1157: cannot identify datafile
ORA-1110 gives the name of the datafile which is missing, and it is found to be a user datafile.
A cold backup is taken once every week. Here the backup is taken every Sunday and it is the last activity
on Sunday.
The datafile(s) associated with the user tablespace is (are) permanently lost.

TIME TAKEN IN RECOVERY
DATABASE WILL BE READY FOR USE IN MIN 30 mins.

SOLUTION – 1

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
recover database;
alter database open;
recover tablespace <tablespace name>;
alter tablespace <tablespace name> online;
Shutdown the database.
Start the database.
Database is ready for use.

SOLUTION – 2 (better)

Shutdown the database.(shutdown).
Take complete backup of current database.
Startup mount exclusive;
Query the view v$datafile and note which datafile is offline; say it is '<name of datafile>'.
alter database datafile '<name of datafile>' online;
recover database;
alter database open ;
shutdown the database
Start the database.
Database is ready for use.