The Export utility enables you to take a logical snapshot of a database, user schema, tablespace, or table, with the flexibility the user chooses, and generates either a single dump file or multiple files of a user-specified size.
Because an export is a logical snapshot of the database, user schema, tablespace, or table, neither archived log files nor online redo log files can be applied to recover the exported object; that is possible only when an online or offline backup of the database is available. Otherwise, all you can do is restore the database, tablespace, user schema, or table to the point in time when the export took place. Once restored from an export, the object cannot be recovered forward, and any data loss after that point must be tolerated.
Exporting Data for Added Protection and Flexibility: Export as Another Tool for Backing Up Data
Because the Oracle Export utility can selectively export specific objects, consider exporting portions or all of a database for supplemental protection and flexibility in a database's backup strategy. This strategy is especially useful for logical backups of the RMAN recovery catalog, because you can quickly reimport this data into any database and rebuild the catalog if the recovery catalog database is lost.
Database exports are not a substitute for whole database backups and cannot provide the same complete recovery advantages that the built-in functionality of Oracle offers. For example, you cannot apply archived logs to logical backups in order to update lost changes. An export provides a snapshot of the logical data (tables, stored procedures, and so forth) in a database when the export was made.
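As a sketch of the RMAN catalog strategy described above, the following commands take a user-mode export of the catalog schema and reload it elsewhere. The schema name rman, the password, and the file names are assumptions; substitute your site's values.

```shell
# User-mode export of the recovery catalog owner's schema.
exp rman/rman OWNER=rman FILE=rman_catalog.dmp LOG=rman_catalog_exp.log

# If the catalog database is lost, recreate the rman user in another
# database and reimport the catalog data from the dump file.
imp rman/rman FILE=rman_catalog.dmp FROMUSER=rman TOUSER=rman LOG=rman_catalog_imp.log
```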
Using Different Releases of Export and Import ( Source Oracle Documentation)
Export from    Import to    Use Export Release    Use Import Release
7.3.3          8.1.6        7.3.3                 8.1.6
8.1.6          8.1.6        8.1.6                 8.1.6
8.1.5          8.0.6        8.0.6                 8.0.6
8.1.7          8.1.6        8.1.6                 8.1.6
8.1.7          7.3.4        7.3.4                 7.3.4
9.0.1          8.1.6        8.1.6                 8.1.6
9.0.1          9.0.2        9.0.1                 9.0.2
Restrictions When Using Different Releases and Versions of Export and Import
The following restrictions apply when you are using different releases of Export and Import:
Export dump files can be read only by the Import utility because they are stored in a special binary format.
Any export dump file can be imported into a higher release of the Oracle database server.
The Import utility can read export dump files created by Export release 5.1.22 and higher.
The Import utility cannot read export dump files created by the Export utility of a higher maintenance release or version. For example, a release 8.1 export dump file cannot be imported by a release 8.0 Import utility, and a version 8 export dump file cannot be imported by a version 7 Import utility.
The Oracle version 6 (or earlier) Export utility cannot be used against an Oracle8 or higher database.
Whenever a lower version of the Export utility runs with a higher version of the Oracle database server, categories of database objects that did not exist in the lower version are excluded from the export. For example, partitioned tables did not exist in the Oracle database server version 7. So, if you need to move a version 8 partitioned table to a version 7 database, you must first reorganize the table into a nonpartitioned table.
Export files generated by Oracle9i Export, either direct path or conventional path, are incompatible with earlier releases of Import and can be imported only with Oracle9i Import. When backward compatibility is an issue, use the earlier release or version of the Export utility against the Oracle9i database.
Considerations When Exporting Database Objects
The following sections describe points you should consider when you export particular database objects.
Exporting Sequences
If transactions continue to access sequence numbers during an export, sequence numbers can be skipped. The best way to ensure that sequence numbers are not skipped is to ensure that the sequences are not accessed during the export.
Sequence numbers can be skipped only when cached sequence numbers are in use. When a cache of sequence numbers has been allocated, they are available for use in the current database. The exported value is the next sequence number (after the cached values). Sequence numbers that are cached, but unused, are lost when the sequence is imported.
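Since only cached sequence numbers are at risk, it can help to identify which sequences in the schema being exported use caching. A minimal sketch, assuming the owner is SCOTT and using a placeholder system password:

```shell
# List the sequences that cache numbers; cached, unused values are
# lost when the sequence is imported.
sqlplus -s system/password <<'EOF'
SELECT sequence_name, cache_size, last_number
FROM   dba_sequences
WHERE  sequence_owner = 'SCOTT'
AND    cache_size > 0;
EOF
```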
Exporting LONG and LOB Datatypes
On export, LONG datatypes are fetched in sections. However, enough memory must be available to hold all of the contents of each row, including the LONG data.
LONG columns can be up to 2 gigabytes in length.
Unlike LONG data, all of the data in a LOB column does not need to be held in memory at the same time; LOB data is loaded and unloaded in sections.
Exporting Foreign Function Libraries
The contents of foreign function libraries are not included in the export file. Instead, only the library specification (name, location) is included in full database and user mode export. You must move the library's executable files and update the library specification if the database is moved to a new location.
Exporting Offline Bitmapped Tablespaces
If the data you are exporting contains offline bitmapped tablespaces, Export will not be able to export the complete tablespace definition and will display an error message. You can still import the data; however, you must first create the offline bitmapped tablespaces before importing to prevent DDL commands that may reference the missing tablespaces from failing.
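Pre-creating the missing tablespace on the target keeps DDL that references it from failing during import. The tablespace name, datafile path, and size below are assumptions for illustration:

```shell
# Create the tablespace on the target database before running imp,
# so CREATE TABLE statements that reference it succeed.
sqlplus -s system/password <<'EOF'
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL;
EOF
```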
Exporting Directory Aliases
Directory alias definitions are included only in a full database mode Export. To move a database to a new location, the database administrator must update the directory aliases to point to the new location.
Directory aliases are not included in user or table mode Export. Therefore, you must ensure that the directory alias has been created on the target system before the directory alias is used.
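Creating the directory alias on the target system before it is used might look like the following sketch; the alias name, operating system path, and grantee are assumptions:

```shell
# Recreate the directory alias on the target database and grant read
# access to the schema that will reference it.
sqlplus -s system/password <<'EOF'
CREATE OR REPLACE DIRECTORY bfile_dir AS '/u01/app/external_files';
GRANT READ ON DIRECTORY bfile_dir TO scott;
EOF
```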
Exporting BFILE Columns and Attributes
The export file does not hold the contents of external files referenced by BFILE columns or attributes. Instead, only the names and directory aliases for files are copied on Export and restored on Import. If you move the database to a location where the old directories cannot be used to access the included files, the database administrator (DBA) must move the directories containing the specified files to a new location where they can be accessed.
Exporting External Tables
The contents of external tables are not included in the export file. Instead, only the table specification (name, location) is included in full database and user mode export. You must manually move the external data and update the table specification if the database is moved to a new location.
Exporting Object Type Definitions
In all Export modes, the Export utility includes information about object type definitions used by the tables being exported. The information, including object name, object identifier, and object geometry, is needed to verify that the object type on the target system is consistent with the object instances contained in the export file. This ensures that the object types needed by a table are created with the same object identifier at import time.
Note, however, that in table, user, and tablespace mode, the export file does not include a full object type definition needed by a table if the user running Export does not have execute access to the object type. In this case, only enough information is written to verify that the type exists, with the same object identifier and the same geometry, on the import target system.
The user must ensure that the proper type definitions exist on the target system, either by working with the DBA to create them, or by importing them from full database or user mode exports performed by the DBA.
It is important to perform a full database mode export regularly to preserve all object type definitions. Alternatively, if object type definitions from different schemas are used, the DBA should perform a user mode export of the appropriate set of users. For example, if table1 belonging to user scott contains a column on blake's type type1, the DBA should perform a user mode export of both blake and scott to preserve the type definitions needed by the table.
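Following the scott/blake example above, a DBA can preserve the cross-schema type definition by exporting both users in one user-mode export. The DBA credentials and file names are placeholders:

```shell
# User-mode export of both schemas: blake's type1 definition and
# scott's table that depends on it are captured together.
exp system/password OWNER=(scott,blake) FILE=scott_blake.dmp LOG=scott_blake.log
```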
Exporting Nested Tables
Inner nested table data is exported whenever the outer containing table is exported. Although inner nested tables can be named, they cannot be exported individually.
Exporting Advanced Queue (AQ) Tables
Queues are implemented on tables. The export and import of queues constitutes the export and import of the underlying queue tables and related dictionary tables. You can export and import queues only at queue table granularity.
When you export a queue table, both the table definition information and queue data are exported. Because the queue table data is exported as well as the table definition, the user is responsible for maintaining application-level data integrity when queue table data is imported.
Exporting Synonyms
You should be cautious when exporting compiled objects that reference a name used both as a synonym and as another object. Exporting and importing these objects will force a recompilation that could result in changes to the object definitions.
The following example helps to illustrate this problem:
CREATE PUBLIC SYNONYM emp FOR scott.emp;

CONNECT blake/password
CREATE TRIGGER t_emp BEFORE INSERT ON emp BEGIN NULL; END;
CREATE VIEW emp AS SELECT * FROM dual;
If the database in the preceding example were exported, the reference to emp in the trigger would refer to blake's view rather than to scott's table. This would cause an error when Import tried to reestablish the t_emp trigger.
Possible Export Errors Related to Java Synonyms
If an export operation attempts to export a synonym named DBMS_JAVA when there is no corresponding DBMS_JAVA package or when Java is either not loaded or loaded incorrectly, the export will terminate unsuccessfully. The error messages that are generated include, but are not limited to, the following: EXP-00008, ORA-00904, and ORA-29516.
If Java is enabled, make sure that both the DBMS_JAVA synonym and DBMS_JAVA package are created and valid before rerunning the export.
If Java is not enabled, remove Java-related objects before rerunning the export.
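Before rerunning the export, the state of the DBMS_JAVA objects can be checked from the data dictionary. A minimal sketch, with a placeholder system password:

```shell
# Verify that the DBMS_JAVA synonym and package exist and are VALID.
sqlplus -s system/password <<'EOF'
SELECT owner, object_name, object_type, status
FROM   dba_objects
WHERE  object_name = 'DBMS_JAVA';
EOF
```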
Support for Fine-Grained Access Control
You can export tables with fine-grained access control policies enabled. When doing so, consider the following:
The user who imports from an export file containing such tables must have the appropriate privileges (specifically, the EXECUTE privilege on the DBMS_RLS package so that the tables' security policies can be reinstated). If a user without the correct privileges attempts to export a table with fine-grained access policies enabled, only those rows that the exporter is privileged to read will be exported.
If fine-grained access control is enabled on a SELECT statement, then conventional path Export may not export the entire table because fine-grained access may rewrite the query.
Only user SYS, or a user with the EXPORT_FULL_DATABASE role enabled or who has been granted EXEMPT ACCESS POLICY, can perform direct path Exports on tables having fine-grained access control.
Transportable Tablespaces
The transportable tablespace feature enables you to move a set of tablespaces from one Oracle database to another.
To move or copy a set of tablespaces, you must make the tablespaces read-only, copy the datafiles of these tablespaces, and use Export and Import to move the database information (metadata) stored in the data dictionary. Both the datafiles and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.
After copying the datafiles and exporting the metadata, you can optionally put the tablespaces in read/write mode.
Export provides the TRANSPORT_TABLESPACE and TABLESPACES parameters to enable export of transportable tablespace metadata.
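The steps above can be sketched end to end as follows. The tablespace name, datafile paths, target host, and passwords are all assumptions; also note that in this release the metadata export and import are typically run connected AS SYSDBA:

```shell
# 1. Make the tablespace read-only on the source database.
sqlplus -s system/password <<'EOF'
ALTER TABLESPACE sales_ts READ ONLY;
EOF

# 2. Export only the metadata for the tablespace set.
exp "'sys/password AS SYSDBA'" TRANSPORT_TABLESPACE=y TABLESPACES=sales_ts FILE=sales_ts.dmp

# 3. Copy the datafiles and the metadata dump to the target host in binary mode.
scp /u01/oradata/src/sales01.dbf sales_ts.dmp target:/u01/oradata/tgt/

# 4. Plug the tablespace into the target database.
imp "'sys/password AS SYSDBA'" TRANSPORT_TABLESPACE=y DATAFILES=/u01/oradata/tgt/sales01.dbf FILE=sales_ts.dmp

# 5. Optionally return the tablespace to read/write mode on the target.
sqlplus -s system/password <<'EOF'
ALTER TABLESPACE sales_ts READ WRITE;
EOF
```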
Exporting from a Read-Only Database
To extract metadata from a source database, Export uses queries that contain ordering clauses (sort operations). For these queries to succeed, the user performing the export must be able to allocate on-disk sort segments. For these sort segments to be allocated in a read-only database, the user's temporary tablespace should be set to point at a temporary, locally managed tablespace.
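Setting up the exporting user's temporary tablespace might look like the sketch below. The tablespace name, tempfile path, and user are assumptions, and the assignment must be made while the database is still open read/write, before it is opened read-only:

```shell
# Create a locally managed temporary tablespace and assign it to the
# user who will run the export against the read-only database.
sqlplus -s "sys/password AS SYSDBA" <<'EOF'
CREATE TEMPORARY TABLESPACE temp_lm
  TEMPFILE '/u01/oradata/temp_lm01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
ALTER USER exp_user TEMPORARY TABLESPACE temp_lm;
EOF
```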
Using Export and Import to Partition a Database Migration
When you use the Export and Import utilities to migrate a large database, it may be more efficient to partition the migration into multiple export and import jobs. If you decide to partition the migration, be aware of the following advantages and disadvantages.
Advantages of Partitioning a Migration
Partitioning a migration has the following advantages:
Time required for the migration may be reduced because many of the subjobs can be run in parallel.
The import can start as soon as the first export subjob completes, rather than waiting for the entire export to complete.
Disadvantages of Partitioning a Migration
Partitioning a migration has the following disadvantages:
The export and import processes become more complex.
Support of cross-schema references for certain types of objects may be compromised. For example, if a schema contains a table with a foreign key constraint against a table in a different schema, you may not have the required parent records when you import the table into the dependent schema.
How to Use Export and Import to Partition a Database Migration
To perform a database migration in a partitioned manner, take the following steps:
For all top-level metadata in the database, issue the following commands:
exp dba/password FILE=full FULL=y CONSTRAINTS=n TRIGGERS=n ROWS=n INDEXES=n
imp dba/password FILE=full FULL=y
For each schema (schema1 through scheman) in the database, issue the following commands:
exp dba/password OWNER=scheman FILE=scheman
imp dba/password FILE=scheman FROMUSER=scheman TOUSER=scheman IGNORE=y
All exports can be done in parallel. When the import of full.dmp completes, all remaining imports can also be done in parallel.
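Running the schema-level subjobs in parallel can be as simple as backgrounding each exp invocation; the schema names below are examples:

```shell
# Launch the per-schema exports concurrently, then wait for all to finish.
exp dba/password OWNER=hr    FILE=hr.dmp    LOG=hr.log    &
exp dba/password OWNER=sales FILE=sales.dmp LOG=sales.log &
wait
# Once full.dmp has been imported, the per-schema imports can be
# backgrounded the same way.
```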
Character Set Errors During Direct Path Export
When a direct path export is done, you may encounter errors such as:
EXP-00041: Export done in server's WE8ISO8859P1, different from user's character set US7ASCII
EXP-00000: Export terminated unsuccessfully
or
EXP-00041: Export done in server's US7ASCII, different from user's character set WE8ISO8859P1
EXP-00000: Export terminated unsuccessfully
or a similar message for any session character set that differs from the database character set. Direct path export does not perform character set conversion, so at the command prompt set the NLS_LANG environment variable to match the database character set, then invoke exp with your parameter file or command-line parameters and the export will complete successfully.
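A minimal sketch of the fix: look up the database character set, set NLS_LANG to match it, and rerun the export. The character set value shown and the scott/tiger credentials are examples only:

```shell
# Find the database character set.
sqlplus -s system/password <<'EOF'
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';
EOF

# Match the session to the database character set (Unix/Linux shells;
# on Windows use: set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1).
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

# Rerun the direct path export.
exp scott/tiger FILE=scott.dmp DIRECT=y LOG=scott.log
```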
Good for reading: Oracle Metalink Note 155477.1, "Parameter DIRECT: Conventional Path Export Versus Direct Path Export".
The Import utility enables you to import the data that was exported. The exported definitions are executed against the target database and the various logical objects are created.
Import: Release 9.2.0 - Production on Sun Nov 28 16:35:07 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
You can let Import prompt you for parameters by entering the IMP
command followed by your username/password:
Example: IMP SCOTT/TIGER
Or, you can control how Import runs by entering the IMP command followed
by various arguments. To specify parameters, you use keywords:
Format: IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.
FULL import entire file (N)
BUFFER size of data buffer
FROMUSER list of owner usernames
FILE input files (EXPDAT.DMP)
TOUSER list of usernames
SHOW just list file contents (N)
TABLES list of table names
IGNORE ignore create errors (N)
RECORDLENGTH length of IO record
GRANTS import grants (Y)
INCTYPE incremental import type
INDEXES import indexes (Y)
COMMIT commit array insert (N)
ROWS import data rows (Y)
PARFILE parameter filename
LOG log file of screen output
CONSTRAINTS import constraints (Y)
DESTROY overwrite tablespace data file (N)
INDEXFILE write table/index info to specified file
SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)
FEEDBACK display progress every x rows(0)
TOID_NOVALIDATE skip validation of specified type ids
FILESIZE maximum size of each dump file
STATISTICS import precomputed statistics (always)
RESUMABLE suspend when a space related error is encountered(N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
COMPILE compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION import streams general metadata (Y)
STREAMS_INSTANTIATION import streams instantiation metadata (N)
The following keywords only apply to transportable tablespaces
TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES tablespaces to be transported into database
DATAFILES datafiles to be transported into database
TTS_OWNERS users that own data in the transportable tablespace set
Order of Import
Table objects are imported as they are read from the export file. The export file contains objects in the following order:
Table definitions
Table data
Table indexes
Integrity constraints, views, procedures, and triggers
Bitmap, functional, and domain indexes
First, new tables are created. Then, data is imported and indexes are built. Then triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, functional, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. This sequence also prevents redundant triggers from firing twice on the same data (once when it is originally inserted and again during the import).
For example, if the emp table has a referential integrity constraint on the dept table and the emp table is imported first, all emp rows that reference departments that have not yet been imported into dept would be rejected if the constraints were enabled.
When data is imported into existing tables, however, the order of import can still produce referential integrity failures. In the situation just given, if the emp table already existed and referential integrity constraints were in force, many rows could be rejected.
A similar situation occurs when a referential integrity constraint on a table references itself. For example, if scott's manager in the emp table is drake, and drake's row has not yet been loaded, scott's row will fail, even though it would be valid at the end of the import.
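One way to work around these referential failures when importing into existing tables is to disable the constraint, import the data, and re-enable it afterward. The constraint name emp_mgr_fk and the scott/tiger credentials below are assumptions:

```shell
# Disable the self-referential constraint before the import.
sqlplus -s scott/tiger <<'EOF'
ALTER TABLE emp DISABLE CONSTRAINT emp_mgr_fk;
EOF

# Import the table data; IGNORE=y lets imp load into the existing table.
imp scott/tiger FILE=expdat.dmp TABLES=(emp) IGNORE=y

# Re-enable the constraint once all rows are present.
sqlplus -s scott/tiger <<'EOF'
ALTER TABLE emp ENABLE CONSTRAINT emp_mgr_fk;
EOF
```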
Table 2-1 Privileges Required to Import Objects into Your Own Schema
Object Required Privilege (Privilege Type, If Applicable)
Clusters CREATE CLUSTER (System) and Tablespace Quota,
or UNLIMITED TABLESPACE (System)
Database links CREATE DATABASE LINK (System) and
CREATE SESSION (System) on the remote database
Triggers on tables CREATE TRIGGER (System)
Triggers on schemas CREATE ANY TRIGGER (System)
Indexes CREATE INDEX (System) and Tablespace Quota,
or UNLIMITED TABLESPACE (System)
Integrity constraints ALTER TABLE (Object)
Libraries CREATE ANY LIBRARY (System)
Packages CREATE PROCEDURE (System)
Private synonyms CREATE SYNONYM (System)
Sequences CREATE SEQUENCE (System)
Srinivas Maddali is with a leading software solution development and information technology consulting company that provides services to many businesses around the United States, led by certified professionals with expertise in the research, design, development, and delivery of high-end technology solutions.