What Can Streams Do?

Capture Changes to Database

To capture the changes made to tables, schemas, or an entire database, the user can configure a background capture process. The capture process reads the redo log files and formats each captured change into a Logical Change Record (LCR).
The database where the changes are generated in the redo log is called the source database.

Can Enqueue Events

The enqueued events are LCRs and user messages. The capture process enqueues the LCR events specified by the user. The queue can then be shared within the same database or with other databases. Apart from LCRs, user messages can also be enqueued.

Can Propagate Events

Events can be propagated from one queue to another. The queues may be in the same database or in different databases.

Can Dequeue Events

Another background process, the apply process, can dequeue the events. Users can also dequeue the events explicitly in an application.

Can Apply Events To Database

The apply process can apply user-specified events to the destination database, or it can apply all of the change events. In some configurations the source database and the destination database can be one and the same.

Can Manage Directed Networks

A directed network is one in which propagated events may pass through one or more intermediate databases before arriving at a destination database. Streams can manage such directed networks.

Can Detect and Resolve Conflicts Automatically

Conflicts are detected by the apply process. Generally these conflicts result from the same row being modified at the source database and at the destination database at approximately the same time. Oracle Streams has built-in conflict resolution methods. If these prebuilt conflict resolution methods do not satisfy a particular business rule, the user is allowed to build custom conflict resolution methods. Errors are queued for possible re-execution and resolution; manual resolution is also possible.

Can Perform Rule-Based Transformations

A rule-based transformation is any modification to an event that results when a rule evaluates to TRUE. For example, a user may want to change the data type of a particular column in a table for every event that reaches a particular database.
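As a hedged sketch, a transformation is implemented as a PL/SQL function that accepts a SYS.AnyData event and returns the (possibly modified) event. The example below renames a table in row LCRs; all names (strmadmin, EMP_SRC, EMP_DEST) are assumptions for illustration, not part of any standard configuration.

CREATE OR REPLACE FUNCTION strmadmin.rename_emp(in_any IN SYS.AnyData)
RETURN SYS.AnyData
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  NUMBER;
BEGIN
  -- Act only on row LCRs; pass all other events through unchanged.
  IF in_any.GETTYPENAME() = 'SYS.LCR$_ROW_RECORD' THEN
    rc := in_any.GETOBJECT(lcr);
    -- Hypothetical rename: EMP_SRC at the source becomes EMP_DEST downstream.
    IF lcr.GET_OBJECT_NAME() = 'EMP_SRC' THEN
      lcr.SET_OBJECT_NAME('EMP_DEST');
      RETURN SYS.AnyData.ConvertObject(lcr);
    END IF;
  END IF;
  RETURN in_any;
END;
/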

Can Handle Heterogeneous Information Sharing

In addition to information sharing between Oracle databases, Streams supports information sharing between Oracle databases and non-Oracle databases using gateways.

Initialization Parameters Relevant to Oracle Streams


For each parameter, the default value and valid range are shown, followed by its relevance to Streams.

ARCHIVE_LAG_TARGET
Default: 0. Range: 0, or any integer in [60, 7200].
Limits the amount of data that can be lost and effectively increases the availability of the standby database by forcing a log switch after a user-specified time period elapses. If you are using Streams in a Real Application Clusters environment, then set this parameter to a value greater than zero to switch the log files automatically.

AQ_TM_PROCESSES
Default: 0. Range: 0 to 10.
Establishes queue monitor processes. Setting the parameter to 1 or more starts the specified number of queue monitor processes. These queue monitor processes are responsible for managing time-based operations of messages, such as delay and expiration, cleaning up retained messages after the specified retention time, and cleaning up consumed messages if the retention time is zero. If you want to enqueue user events into a Streams queue, then set this parameter to 1 or higher. User events are those created by users and applications, not by a Streams capture process.

COMPATIBLE
Default: 8.1.0. Range: 8.1.0 to the current release number.
Specifies the release with which the Oracle server must maintain compatibility. Oracle servers with different compatibility levels can interoperate. To use Streams, this parameter must be set to 9.2.0 or higher.

GLOBAL_NAMES
Default: false. Range: true or false.
Specifies whether a database link is required to have the same name as the database to which it connects. If you want to use Streams to share information between databases, then set this parameter to true at each database that is participating in your Streams environment.

JOB_QUEUE_PROCESSES
Default: 0. Range: 0 to 1000.
Specifies the number of Jnnn job queue processes for each instance (J000 … J999). Job queue processes handle requests created by DBMS_JOB. You can change the setting for JOB_QUEUE_PROCESSES dynamically by using the ALTER SYSTEM statement. This parameter must be set to at least 2 at each database that is propagating events in your Streams environment, and should be set to the same value as the maximum number of jobs that can run simultaneously plus two.

LOG_PARALLELISM
Default: 1. Range: 1 to 255.
Specifies the level of concurrency for redo allocation within Oracle. If you plan to run one or more capture processes on a database, then this parameter must be set to 1. Setting this parameter to 1 does not affect the parallelism of capture; you can set the parallelism for a capture process using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package.

LOGMNR_MAX_PERSISTENT_SESSIONS
Default: 1. Range: 1 to LICENSE_MAX_SESSIONS.
Specifies the maximum number of persistent LogMiner mining sessions that are concurrently active when all sessions are mining redo logs generated by instances. If you plan to run multiple Streams capture processes on a single database, then set this parameter equal to or higher than the number of planned capture processes.

OPEN_LINKS
Default: 4. Range: 0 to 255.
Specifies the maximum number of concurrent open connections to remote databases in one session. These connections include database links, as well as external procedures and cartridges, each of which uses a separate process. In a Streams environment, make sure this parameter is set to the default value of 4 or higher.

PARALLEL_MAX_SERVERS
Default: derived from the values of CPU_COUNT, PARALLEL_AUTOMATIC_TUNING, and PARALLEL_ADAPTIVE_MULTI_USER. Range: 0 to 3599.
Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle increases the number of processes from the number created at instance startup up to this value. In a Streams environment, each capture process and apply process may use multiple parallel execution servers. Set this initialization parameter to an appropriate value to ensure that there are enough parallel execution servers.

PROCESSES
Default: derived from PARALLEL_MAX_SERVERS. Range: 6 to an operating system dependent limit.
Specifies the maximum number of operating system user processes that can simultaneously connect to Oracle. Make sure the value of this parameter allows for all background processes, such as locks, job queue processes, and parallel execution processes. In Streams, capture processes and apply processes use background processes and parallel execution processes, and propagation jobs use job queue processes.

SHARED_POOL_SIZE
Default: 8 MB on 32-bit platforms, 64 MB on 64-bit platforms, rounded up to the nearest granule size. Range: the granule size to an operating system dependent limit.
Specifies (in bytes) the size of the shared pool. The shared pool contains shared cursors, stored procedures, control structures, and other structures. You should increase the size of the shared pool by 10 MB for each capture process on a database.

SGA_MAX_SIZE
Default: the initial size of the SGA at startup. Range: 0 to an operating system dependent limit.
Specifies the maximum size of the SGA for the lifetime of a database instance. If you plan to run multiple capture processes on a single database, then you may need to increase the size of this parameter.
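For example, the dynamic parameters among these can be adjusted with ALTER SYSTEM; the values below are illustrative only, not recommendations:

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 4;
ALTER SYSTEM SET AQ_TM_PROCESSES = 1;
ALTER SYSTEM SET GLOBAL_NAMES = TRUE;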

What is Essentially Required for Configuring Streams in a Database?

The database should be in ARCHIVELOG mode, which means that media recovery is enabled and the redo log files are archived and available for LogMiner to analyze. The archived redo log files are the real source of the events that have taken place against the source database.
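A minimal sketch of putting a database in ARCHIVELOG mode from SQL*Plus (this assumes a clean shutdown is acceptable; archive destination configuration is not shown):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST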

What Is Advised While Configuring a Database Where Streams Is Proposed to Be Used?

By default, the LogMiner tables are in the SYSTEM tablespace, but the SYSTEM tablespace may not have enough space for these tables once a capture process starts to capture changes. Therefore, you must create an alternate tablespace for the LogMiner tables.

The following example creates a tablespace named logmnrts for use by LogMiner:

Connect as an administrative user who has privileges to create tablespaces and execute subprograms in the DBMS_LOGMNR_D package.

Either create an alternate tablespace for the LogMiner tables or use an existing tablespace.

For example, the following statement creates an alternate tablespace for the LogMiner tables:

CREATE TABLESPACE logmnrts
DATAFILE '/usr/oracle/dbs/logmnrts.dbf'
SIZE 25M
REUSE
AUTOEXTEND ON
MAXSIZE UNLIMITED;

Run the SET_TABLESPACE procedure in the DBMS_LOGMNR_D package to set the alternate tablespace for LogMiner.

For example, to specify a tablespace named logmnrts, run the following procedure:

EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts');

How do you Network the Databases for Oracle Streams?

For Oracle databases, configure your network and Oracle Net so that the databases can communicate with each other.

For non-Oracle databases, configure an Oracle gateway for communication between the Oracle database and the non-Oracle database.

If you plan to propagate events from a source queue at a database to a destination queue at another database, then create a private database link between the database containing the source queue and the database containing the destination queue. Each database link should use a CONNECT TO clause for the user propagating events between databases.

For example, to create a database link to a database named dbs2.net connecting as a Streams
Administrator named strmadmin, run the following statement:

CREATE DATABASE LINK dbs2.net
CONNECT TO strmadmin
IDENTIFIED BY strmadmin_password
USING 'dbs2.net';
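To verify that the new link works, you can, for example, query the remote database through it:

SELECT * FROM dual@dbs2.net;
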
Who Is to Administer Streams and What Privileges Are Required?

To administer Streams, a separate user should be created. To create this user, connect as an administrative user who can create users, grant privileges, create tablespaces, and alter users.

Create a new user to act as the Streams administrator or use an existing user. For example, to create a new user named strmadmin, run the following statement:

CREATE USER strmadmin IDENTIFIED BY strmadmin_password;

Grant the Streams administrator at least the following privileges:

GRANT CONNECT, RESOURCE TO strmadmin;
GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/

If necessary, grant the Streams administrator the following privileges. A combined example of these grants appears after the list.

EXECUTE privilege on the DBMS_APPLY_ADM package if the Streams administrator
will manage one or more apply processes on the database

EXECUTE privilege on the DBMS_CAPTURE_ADM package if the Streams administrator
will manage one or more capture processes on the database

EXECUTE privilege on the DBMS_PROPAGATION_ADM package if the Streams administrator
will manage one or more propagation jobs on the database

EXECUTE privilege on the DBMS_FLASHBACK package if the Streams administrator
will need to obtain the current SCN for a database.

Typically, the Streams administrator must determine the current SCN to set an
instantiation SCN using the SET_TABLE_INSTANTIATION_SCN, SET_SCHEMA_INSTANTIATION_SCN,
or SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package.

SELECT_CATALOG_ROLE if you want to enable the Streams administrator to monitor the
environment easily

SELECT ANY DICTIONARY privilege if you plan to use the Streams tool in Oracle Enterprise
Manager

SELECT privilege on the DBA_APPLY_ERROR data dictionary view if you want the Streams
administrator to be able to select from this view within a PL/SQL subprogram.

If no apply user is specified for an apply process, then the Streams administrator needs the
privileges necessary to perform DML and DDL changes on the apply objects owned by another user. If an apply user is
specified, then the apply user must have these privileges.

If no apply user is specified for an apply process, then the Streams administrator needs EXECUTE
privilege on any PL/SQL procedure owned by another user that is executed by a Streams apply process.

These procedures may be used in apply handlers. If an apply user is specified, then the apply user must have these privileges.

EXECUTE privilege on any PL/SQL function owned by another user that is specified in a rule-based transformation for a rule used by a Streams process or job. For an apply process,
if an apply user is specified, then the apply user must have these privileges.

If the Streams administrator does not own a queue used by a Streams process or job, and is not specified as the queue user for the queue when the queue is created, then the Streams administrator must be configured as a secure queue user of the queue. The Streams administrator may also need ENQUEUE or DEQUEUE privileges on the queue, or both.
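A hedged, combined example of the optional grants listed above (run as an appropriately privileged user, and grant only what your configuration actually needs):

GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
GRANT SELECT_CATALOG_ROLE TO strmadmin;
GRANT SELECT ANY DICTIONARY TO strmadmin;
GRANT SELECT ON DBA_APPLY_ERROR TO strmadmin;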

Either create a tablespace for the Streams administrator or use an existing tablespace.

For example, the following statement creates a new tablespace for the Streams administrator:

CREATE TABLESPACE streams_tbs
DATAFILE '/usr/oracle/dbs/streams_tbs.dbf'
SIZE 25M
REUSE
AUTOEXTEND ON
MAXSIZE UNLIMITED;


Specify the tablespace for the Streams administrator:

ALTER USER strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

Repeat all of the previous steps at each database in the environment that will use Streams.

How to Create a New Single Source Streams Environment?

This section lists the general steps to perform when creating a new single source Streams environment. A single source environment is one in which there is only one source database for any shared data. There may be more than one source database in a single source environment, but no two source databases capture any of the same data.

Before starting capture processes and configuring propagation jobs in a new Streams environment, make sure any propagation jobs or apply processes that will receive events are configured to handle these events. That is, the propagation jobs or apply processes should exist, and each one should be associated with a rule set that handles the events appropriately. If these propagation jobs and apply processes are not configured properly to handle these events, then events may be lost.

In general, if you are configuring a new Streams environment in which changes for shared objects are captured at one database and then propagated and applied at remote databases, then you should configure the environment in the following order:

1. Create any necessary Streams queues that do not already exist. When you create a capture process or apply process, you associate the process with a specific Streams queue. When you create a propagation job, you associate it with a specific source queue and destination queue.
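For example, a minimal sketch using DBMS_STREAMS_ADM.SET_UP_QUEUE, which creates the queue table, creates the queue, and starts it (all names are assumed for illustration):

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue',
queue_user => 'strmadmin');
END;
/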

2. Specify supplemental logging at each source database for any shared object.
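For example, a hedged sketch of an unconditional supplemental log group on a hypothetical shared table (table, column, and log group names are assumptions):

ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG GROUP log_group_emp (employee_id) ALWAYS;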

3. At each database, create the required capture processes, propagation jobs, and apply processes for your environment. You can create these processes and jobs in any order.

a. Create one or more capture processes at each database that will capture changes. Make sure each capture process uses a rule set that is appropriate for capturing changes. Do not start the capture processes you create.

When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION,
or PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package for the specified table, specified schema, or entire database, respectively.

You must run the appropriate procedure to prepare for instantiation manually if any of the following conditions is true:

You use the DBMS_RULE_ADM package to add or modify capture rules.

You use an existing capture process and do not add capture rules for any shared object.
If you must prepare for instantiation manually, then see “Preparing Database Objects for Instantiation at a Source Database” for instructions.
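To illustrate Step 3a, here is a hedged sketch that creates a capture process by adding table-level capture rules; all names (hr.employees, capture_dbs1, strmadmin.streams_queue) are assumed. This call also prepares the table for instantiation automatically, as described above. Creating an apply process with streams_type => 'apply' is analogous.

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.employees',
streams_type => 'capture',
streams_name => 'capture_dbs1',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false);
END;
/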

b. Create all propagation jobs that propagate the captured events from a source queue to a destination queue. Make sure each propagation job uses a rule set that is appropriate for propagating changes.
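A hedged sketch of such a propagation job with table-level rules, with all queue, link, and table names assumed:

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'hr.employees',
streams_name => 'dbs1_to_dbs2',
source_queue_name => 'strmadmin.streams_queue',
destination_queue_name => 'strmadmin.streams_queue@dbs2.net',
include_dml => true,
include_ddl => false,
source_database => 'dbs1.net');
END;
/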

c. Create one or more apply processes at each database that will apply changes. Make sure each apply process uses a rule set that is appropriate for applying changes. Do not start the apply processes you create.

4. Either instantiate, or set the instantiation SCN for, each database object for which changes are applied by an apply process. If a database object does not exist at a destination database, then instantiate it using Export/Import. If a database object already exists at a destination database, then set the instantiation SCN for it manually.

To instantiate database objects using Export/Import, first export them at the source database with the OBJECT_CONSISTENT export parameter set to Y, or use a more stringent degree of consistency. Then, import them at the destination database with the STREAMS_INSTANTIATION import parameter set to Y.
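For example, assuming a shared table hr.employees and a destination dbs2.net (connect strings and file names are illustrative only), the commands at the operating system prompt might look like this:

exp userid=hr/hr_password TABLES=employees FILE=employees.dmp OBJECT_CONSISTENT=Y

imp userid=hr/hr_password@dbs2.net TABLES=employees FILE=employees.dmp STREAMS_INSTANTIATION=Y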

To set the instantiation SCN for a table, schema, or database manually, run the appropriate procedure or procedures in the DBMS_APPLY_ADM package at the destination database:

SET_TABLE_INSTANTIATION_SCN
SET_SCHEMA_INSTANTIATION_SCN
SET_GLOBAL_INSTANTIATION_SCN

When you run one of these procedures, you must ensure that the shared objects at the destination database are consistent with the source database as of the instantiation SCN.
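A hedged sketch of the manual approach for a single table, run at the source database and assuming the names used earlier (hr.employees, source dbs1.net, destination link dbs2.net): the current source SCN is obtained with DBMS_FLASHBACK and recorded at the destination over a database link.

DECLARE
iscn NUMBER; -- instantiation SCN captured at the source
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@dbs2.net(
source_object_name => 'hr.employees',
source_database_name => 'dbs1.net',
instantiation_scn => iscn);
END;
/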

If you run SET_GLOBAL_INSTANTIATION_SCN at a destination database, then you must also run SET_SCHEMA_INSTANTIATION_SCN for each existing schema in the source database whose DDL changes you are applying, and you must run SET_TABLE_INSTANTIATION_SCN for each existing table in the source database whose DML or DDL changes you are applying.

If you run SET_SCHEMA_INSTANTIATION_SCN at a destination database, then you must also run SET_TABLE_INSTANTIATION_SCN for each existing source database table in the schema whose DML or DDL changes you are applying.

Alternatively, you can perform a metadata export/import to set the instantiation SCNs for existing database objects. If you choose this option, then make sure no rows are imported. Also, make sure the shared objects at all of the destination databases are consistent with the source database that performed the export at the time of the export. If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

5. Start each apply process you created in Step 3c.

6. Start each capture process you created in Step 3a.
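For example (process names assumed from the earlier sketches):

BEGIN
DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_from_dbs1');
END;
/

BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'capture_dbs1');
END;
/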

When you are configuring the environment, remember that capture processes and apply processes are stopped when they are created, but propagation jobs are scheduled to propagate events immediately when they are created. The capture process must be created before the relevant objects are instantiated at a remote destination database. You must create the propagation jobs and apply processes before starting the capture process, and you must instantiate the objects before running the whole stream.

How Can I Add Shared Objects to an Existing Single Source Environment?

You add existing database objects to an existing single source environment by adding the necessary rules to the appropriate capture processes, propagation jobs, and apply processes. Before creating or altering capture or propagation rules in a running Streams environment, make sure any propagation jobs or apply processes that will receive events as a result of the new or altered rules are configured to handle these events. That is, the propagation jobs or apply processes should exist, and each one should be associated with a rule set that handles the events appropriately. If these propagation jobs and apply processes are not configured properly to handle these events, then events may be lost.

For example, suppose you want to add a table to a Streams environment that already captures, propagates, and applies changes to other tables. Assume only one capture process will capture changes to this table, and only one apply process will apply changes to this table. In this case, you must add one or more table-level rules to the following rule sets:

The rule set for the capture process that will capture changes to the table

The rule set for each propagation job that will propagate changes to the table

The rule set for the apply process that will apply changes to the table

If you perform administrative steps in the wrong order, you may lose events. For example, if you add the rule to the capture rule set first, without stopping the capture process, then the propagation job will not propagate the changes if it does not have a rule that instructs it to do so, and the changes may be lost.

To avoid losing events, you should complete the configuration in the following order:

1. Either stop the capture process, disable one of the propagation jobs, or stop the apply process.
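For example, one option is to stop the capture process (name assumed); DBMS_APPLY_ADM.STOP_APPLY and DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE cover the other two options:

BEGIN
DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'capture_dbs1');
END;
/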

2. Add the relevant rules to the rule sets for the propagation jobs and the apply processes.

3. Add the relevant rules to the rule set used by the capture process.

When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the

PREPARE_TABLE_INSTANTIATION,
PREPARE_SCHEMA_INSTANTIATION, or
PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package

for the specified table, specified schema, or entire database, respectively.

You must run the appropriate procedure to prepare for instantiation manually if any of the following conditions is true:

You use DBMS_RULE_ADM to create or modify rules in the capture process rule set.

You do not add rules for the added objects to the capture process rule set, because the capture process already captures changes to these objects. In this case, rules for the objects may be added to propagation jobs and apply processes in the environment, but not to the capture process.

4. At each destination database, either instantiate, or set the instantiation SCN for, each database object you are adding to the Streams environment. If a database object does not exist at a destination database, then instantiate it using Export/Import. If a database object exists at a destination database, then set the instantiation SCN for it.

To instantiate database objects using Export/Import, first export them at the source database with the OBJECT_CONSISTENT export parameter set to Y, or use a more stringent degree of consistency. Then, import them at the destination database with the STREAMS_INSTANTIATION import parameter set to Y.

To set the instantiation SCN for a table, schema, or database manually, run the appropriate procedure or procedures in the DBMS_APPLY_ADM package at a destination database:

SET_TABLE_INSTANTIATION_SCN
SET_SCHEMA_INSTANTIATION_SCN
SET_GLOBAL_INSTANTIATION_SCN

When you run one of these procedures at a destination database, you must ensure that every added object at the destination database is consistent with the source database as of the instantiation SCN.

If you run SET_GLOBAL_INSTANTIATION_SCN at a destination database, then you must also run SET_SCHEMA_INSTANTIATION_SCN for each existing source database schema whose changes you are applying and SET_TABLE_INSTANTIATION_SCN for each existing source database table whose changes you are applying.

If you run SET_SCHEMA_INSTANTIATION_SCN at a destination database, then you must also run SET_TABLE_INSTANTIATION_SCN for each existing source database table in the schema whose DML or DDL changes you are applying.

Alternatively, you can perform a metadata export/import to set the instantiation SCNs for existing database objects. If you choose this option, then make sure no rows are imported. Also, make sure every added object at the importing destination database is consistent with the source database that performed the export at the time of the export. If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply. See “Setting Instantiation SCNs Using Export/Import” for more information about performing a metadata export/import.

5. Start any process you stopped in Step 1 or enable any job you disabled in Step 1.

You must stop the capture process, disable one of the propagation jobs, or stop the apply process in Step 1 to ensure that the table or schema is instantiated before the first LCR resulting from the added rule(s) reaches the apply process. Otherwise, events could be lost or could result in apply errors, depending on whether the apply rule(s) have been added.

If you are certain that the added table is not being modified at the source database during this procedure, and that there are no LCRs for the table already in the stream or waiting to be captured, then you can perform Step 5 before Step 4 to reduce the amount of time that a process or job is stopped.

How To Add a New Destination Database to an Existing Single Source Environment?

You add a destination database to an existing single source environment by creating one or more new apply processes at the new destination database and, if necessary, configuring one or more propagation jobs to propagate changes to the new destination database. You may also need to add rules to existing propagation jobs in the stream that propagates to the new destination database.

As in the example that describes “Adding Shared Objects to an Existing Single Source Environment”, before creating or altering propagation rules in a running Streams environment, make sure any propagation jobs or apply processes that will receive events as a result of the new or altered rules are configured to handle these events. Otherwise, events may be lost.

To avoid losing events, you should complete the configuration in the following order:

1. Complete the necessary tasks described previously in this chapter to prepare the new destination database for Streams:

Configuring a Streams Administrator
Setting Initialization Parameters Relevant to Streams
Configuring Network Connectivity and Database Links

Some of these tasks may not be required at the new database.

2. Create any necessary Streams queues that do not already exist at the destination database. When you create an apply process, you associate the apply process with a specific Streams queue.

3. Create one or more apply processes at the new destination database to apply the changes from its source databases. Make sure each apply process uses a rule set that is appropriate for applying changes. Do not start any apply process at the new database.

Keeping the apply processes stopped prevents changes made at the source databases from being applied before the instantiation of the new database is completed, which would otherwise lead to incorrect data and errors.

4. Configure any necessary propagation jobs to propagate changes from the source databases to the new destination database. Make sure each propagation job uses a rule set that is appropriate for propagating changes.

5. At the source database, prepare for instantiation each database object for which changes will be applied by an apply process at the new destination database. Run either the PREPARE_TABLE_INSTANTIATION, PREPARE_SCHEMA_INSTANTIATION, or PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package for the specified table, specified schema, or entire database, respectively.

6. At the new destination database, either instantiate, or set the instantiation SCNs for, each database object for which changes will be applied by an apply process. If the database objects do not already exist at the new destination database, then instantiate them using Export/Import. If the database objects exist at the new destination database, then set the instantiation SCNs for them.

To instantiate database objects using Export/Import, first export them at the source database with the OBJECT_CONSISTENT export parameter set to Y, or use a more stringent degree of consistency. Then, import them at the new destination database with the STREAMS_INSTANTIATION import parameter set to Y.

To set the instantiation SCN for a table, schema, or database manually, run the appropriate procedure or procedures in the DBMS_APPLY_ADM package at the new destination database:

SET_TABLE_INSTANTIATION_SCN
SET_SCHEMA_INSTANTIATION_SCN
SET_GLOBAL_INSTANTIATION_SCN

When you run one of these procedures, you must ensure that the shared objects at the new destination database are consistent with the source database as of the instantiation SCN.

If you run SET_GLOBAL_INSTANTIATION_SCN at a destination database, then you must also run SET_SCHEMA_INSTANTIATION_SCN for each existing schema in the source database whose DDL changes you are applying, and you must run SET_TABLE_INSTANTIATION_SCN for each existing table in the source database whose DML or DDL changes you are applying.

If you run SET_SCHEMA_INSTANTIATION_SCN at a destination database, then you must also run SET_TABLE_INSTANTIATION_SCN for each existing source database table in the schema whose DML or DDL changes you are applying.

Alternatively, you can perform a metadata export/import to set the instantiation SCNs for existing database objects. If you choose this option, then make sure no rows are imported. Also, make sure the shared objects at the importing destination database are consistent with the source database that performed the export at the time of the export. If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

7. Start the apply processes you created in Step 3.

How to Create a New Multiple Source Environment?

A multiple source environment is one in which there is more than one source database for any of the shared data.

This example uses the following terms:

Populated database:

A database that already contains the shared database objects before you create the new multiple source environment. You must have at least one populated database to create the new Streams environment.

Export database:

A populated database on which you perform an export of the shared database objects. This export is used to instantiate the shared database objects at the import databases. You may not have an export database if all of the databases in the environment are populated databases.

Import database:

A database that does not contain the shared database objects before you create the new multiple source environment. You instantiate the shared database objects at an import database using the export dump file from the export database. You may not have any import databases if all of the databases in the environment are populated databases.


Here are the steps to create a new multiple source environment:

1. Make sure no changes are made to the objects being shared at any database you are adding to the Streams environment until the instantiation at that database is complete. Otherwise, inconsistencies can result.

2. Complete the necessary tasks described previously in this chapter to prepare each database for Streams:

Configuring a Streams Administrator
Setting Initialization Parameters Relevant to Streams
Configuring a Database to Run a Streams Capture Process
Configuring Network Connectivity and Database Links

Some of these tasks may not be required at certain databases.

3. At each populated database, specify any necessary supplemental logging for the shared objects.

4. Create any necessary Streams queues that do not already exist. When you create a capture process or apply process, you associate the process with a specific Streams queue. When you create a propagation job, you associate it with a specific source queue and destination queue.

5. At each database, create the required capture processes, propagation jobs, and apply processes for your environment. You can create these processes and jobs in any order.

a. Create one or more capture processes at each database that will capture changes. Make sure each capture process uses a rule set that is appropriate for capturing changes. Do not start the capture processes you create.

When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the PREPARE_TABLE_INSTANTIATION,
PREPARE_SCHEMA_INSTANTIATION, or
PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package

for the specified table, specified schema, or entire database, respectively.

You must run the appropriate procedure to prepare for instantiation manually if any of the following conditions is true:

You use the DBMS_RULE_ADM package to add or modify capture rules.

You use an existing capture process and do not add capture rules for any shared object.

b. Create all propagation jobs that propagate the captured events from a source queue to a destination queue. Make sure each propagation job uses a rule set that is appropriate for propagating changes.

c. Create one or more apply processes at each database that will apply changes. Make sure each apply process uses a rule set that is appropriate for applying changes. Do not start the apply processes you create.

After completing these steps, complete the steps in each of the following sections that apply to your environment. You may need to complete the steps in only one of these sections or in both of these sections:

For each populated database, complete the steps in Configuring Populated Databases.

For each import database, complete the steps in Adding Shared Objects to Import Databases.

Configuring Populated Databases

After completing the steps in Creating a New Multiple Source Environment, complete the following steps for the populated databases:

For each populated database, set the instantiation SCN at each of the other populated databases in the environment. These instantiation SCNs must be set, and only the changes made at a particular populated database that are committed after the corresponding SCN for that database will be applied at another populated database.

For each populated database, you can set these instantiation SCNs in one of the following ways:

Perform a metadata only export of the shared objects at the populated database and import the metadata at each of the other populated databases. Such an import sets the required instantiation SCNs for the populated database at the other populated databases. Make sure no rows are imported. Also, make sure the shared objects at each populated database performing a metadata import are consistent with the populated database that performed the metadata export at the time of the export.

If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

Set the instantiation SCNs manually at each of the other populated databases. Do this for each of the shared objects. Make sure the shared objects at each populated database are consistent with the instantiation SCNs you set at that database.

How Can One Add Shared Objects to Import Databases?

After completing the steps in “Creating a New Multiple Source Environment”, complete the following steps for the import databases:

1. Pick the populated database that you will use as the export database. Do not perform the instantiations yet.

2. For each import database, set the instantiation SCNs at all of the other databases in the environment, including all populated databases and all of the other import databases.

If one or more schemas will be created at an import database during instantiation or by a subsequent shared DDL change, then run the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for this import database at all of the other databases in the environment.

If a schema exists at an import database, and one or more tables will be created in the schema during instantiation or by a subsequent shared DDL change, then run the SET_SCHEMA_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for the schema at all of the other databases in the environment for the import database. Do this for each such schema.

Because you are running these procedures before any tables are instantiated at the import databases, and because the local capture processes are configured already for these import databases, you will not need to run the SET_TABLE_INSTANTIATION_SCN for each table created during the instantiation.

3. At the export database you chose in Step 1, perform an export of the shared data with the OBJECT_CONSISTENT export parameter set to Y, or use a more stringent degree of consistency. Then, perform an import with the STREAMS_INSTANTIATION import parameter set to Y at each import database.

4. For each populated database, except for the export database, set the instantiation SCNs at each import database. These instantiation SCNs must be set, and only the changes made at a populated database that are committed after the corresponding SCN for that database will be applied at an import database.

You can set these instantiation SCNs in one of the following ways:

Perform a metadata only export at each populated database and import the metadata at each import database. Each import sets the required instantiation SCNs for the populated database at the import database. In this case, ensure that the shared objects at the import database are consistent with the populated database at the time of the export.

If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

For each populated database, set the instantiation SCN manually for each shared object at each import database. Make sure the shared objects at each import database are consistent with the populated database as of the corresponding instantiation SCN.

5. At each import database, specify any necessary supplemental logging:

a. Set the session tag to an appropriate non-NULL value. This step ensures that the supplemental logging specifications are not applied at the other databases.

b. Specify any necessary supplemental logging.

c. Set the session tag back to NULL.

Make sure no changes are made to the relevant objects until after you have specified supplemental logging.
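A minimal sketch of this tag handling, assuming a hypothetical table hr.jobs and an arbitrary non-NULL tag value; DBMS_STREAMS.SET_TAG affects the current session only:

BEGIN
DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
END;
/

ALTER TABLE hr.jobs ADD SUPPLEMENTAL LOG GROUP log_group_jobs (job_id) ALWAYS;

BEGIN
DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/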

Complete the Multiple Source Environment Configuration

Before completing the steps in this section, you should have completed the following tasks:

Creating a New Multiple Source Environment

Configuring Populated Databases, if your environment has more than one populated database

Adding Shared Objects to Import Databases, if your environment has one or more import databases

When all of the previous configuration steps are finished, complete the following steps:

Start each apply process in the environment.

Start each capture process in the environment.

How to Add Shared Objects to an Existing Multiple Source Environment?

You add existing database objects to an existing multiple source environment by adding the necessary rules to the appropriate capture processes, propagation jobs, and apply processes.

This example uses the following terms:

Populated database:

A database that already contains the shared database objects being added to the multiple source environment. You must have at least one populated database to add the objects to the environment.

Export database:

A populated database on which you perform an export of the database objects you are adding to the environment. This export is used to instantiate the added database objects at the import databases.

You may not have an export database if all of the databases in the environment are populated databases.

Import database:

A database that does not contain the shared database objects before they are added to the multiple source environment. You instantiate the added database objects at an import database using the export dump file from the export database.

You may not have any import databases if all of the databases in the environment are populated databases.

Before creating or altering capture or propagation rules in a running Streams environment, make sure any propagation jobs or apply processes that will receive events as a result of the new or altered rules are configured to handle these events. That is, the propagation jobs or apply processes should exist, and each one should be associated with a rule set that handles the events appropriately. If these propagation jobs and apply processes are not configured properly to handle these events, then events may be lost.

For example, suppose you want to add a new table to a Streams environment that already captures, propagates, and applies changes to other tables. Assume multiple capture processes in the environment will capture changes to this table, and multiple apply processes will apply changes to this table. In this case, you must add one or more table-level rules to the following rule sets:

The rule set for each capture process that will capture changes to the table
The rule set for each propagation job that will propagate changes to the table
The rule set for each apply process that will apply changes to the table.

If you perform administrative steps in the wrong order, you may lose events.

For example, if you add the rule to the capture rule set first, without stopping the capture process, then the propagation job will not propagate the changes if it does not have a rule that instructs it to do so, and the changes may be lost.

To avoid losing events, you should complete the configuration in the following order:

1. At each populated database, specify any necessary supplemental logging for the objects being added to the environment.

2. Either stop all of the capture processes that will capture changes to the added objects, disable all of the propagation jobs that will propagate changes to the added objects, or stop all of the apply processes that will apply changes to the added objects.

3. Add the relevant rules to the rule sets for the propagation jobs and the apply processes that will propagate or apply changes to the added objects.

4. Add the relevant rules to the rule set used by each capture process that will capture changes to the added objects.

When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the

PREPARE_TABLE_INSTANTIATION,
PREPARE_SCHEMA_INSTANTIATION, or
PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package

for the specified table, specified schema, or entire database, respectively.

You must run the appropriate procedure to prepare for instantiation manually if any of the following conditions is true:

You use DBMS_RULE_ADM to create or modify rules in a capture process rule set.

You do not add rules for the added objects to a capture process rule set, because the capture process already captures changes to these objects.

In this case, rules for the objects may be added to propagation jobs and apply processes in the
environment, but not to the capture process.

After completing these steps, complete the steps in each of the following sections that apply to your environment. You may need to complete the steps in only one of these sections or in both of these sections:

For each populated database, complete the steps in Configuring Populated Databases.

For each import database, complete the steps in Adding Shared Objects to Import Databases.

How Can One Configure Populated Databases In Oracle Streams?

After completing the steps in “Adding Shared Objects to an Existing Multiple Source Environment”, complete the following steps for each populated database:

For each populated database, set the instantiation SCN at the other populated databases in the environment. These instantiation SCNs must be set, and only the changes made at a particular populated database that are committed after the corresponding SCN for that database will be applied at another populated database.

For each populated database, you can set these instantiation SCNs in one of the following ways:

Perform a metadata only export of the added objects at the populated database and import the metadata at each of the other populated databases. Such an import sets the required instantiation SCNs for the database at the other databases. Make sure no rows are imported. Also, make sure the shared objects at each of the other populated databases are consistent with the populated database that performed the export at the time of the export.

If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

Set the instantiation SCNs manually for the added objects at each of the other populated databases. Make sure every added object at each populated database is consistent with the instantiation SCNs you set at that database.

How to Add Shared Objects to Import Databases In Oracle Streams?

After completing the steps in “Adding Shared Objects to an Existing Multiple Source Environment”, complete the following steps for the import databases:

1. Pick the populated database that you will use as the export database. Do not perform the instantiations yet.

2. For each import database, set the instantiation SCNs at all of the other databases in the environment, including all of the populated databases and all of the other import databases.

If one or more schemas will be created at an import database during instantiation or by a subsequent shared DDL change, then run the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for this import database at all of the other databases in the environment.

If a schema exists at an import database, and one or more tables will be created in the schema during instantiation or by a subsequent shared DDL change, then run the SET_SCHEMA_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for the schema for this import database at each of the other databases in the environment. Do this for each such schema.

Because you are running these procedures before any tables are instantiated at the import databases, and because the local capture processes are configured already for these import databases, you will not need to run the SET_TABLE_INSTANTIATION_SCN for each table created during instantiation.

3. At the export database you chose in Step 1, perform an export of the added objects with the OBJECT_CONSISTENT export parameter set to Y, or use a more stringent degree of consistency. Then, perform an import of the added objects at each import database with the STREAMS_INSTANTIATION import parameter set to Y.

4. For each populated database, except for the export database, set the instantiation SCNs at each import database. These instantiation SCNs must be set, and only the changes made at a populated database that are committed after the corresponding SCN for that database will be applied at an import database.

For each populated database, you can set these instantiation SCNs in one of the following ways:

Perform a metadata only export of the added objects at the populated database and import the metadata at each import database. Each import sets the required instantiation SCNs for the populated database at the import database. In this case, ensure that every added object at the import database is consistent with the populated database at the time of the export.

If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

Set the instantiation SCNs manually for the added objects at each import database. Make sure every added object at each import database is consistent with the populated database as of the corresponding instantiation SCN. See “Setting Instantiation SCNs Using a DBMS_APPLY_ADM Package Procedure” for instructions.

5. At each import database, specify any necessary supplemental logging:

a. Set the session tag to an appropriate non-NULL value. This step ensures that the supplemental logging specifications are not applied at the other databases.

b. Specify any necessary supplemental logging.

c. Set the session tag back to NULL.

Make sure no changes are made to the relevant objects until after you have specified supplemental logging.

Complete the Adding Objects to a Multiple Source Environment Configuration

Before completing the configuration, you should have completed the following tasks:

Creating a New Multiple Source Environment

Configuring Populated Databases, if your environment had populated databases

Adding Shared Objects to Import Databases, if your environment had import databases

When all of the previous configuration steps are finished, start each process you stopped and enable each propagation job you disabled.

How to Add a New Database to an Existing Multiple Source Environment?

Complete the following steps to add a new database to an existing multiple source Streams environment:

Note that you must make sure no changes are made to the objects being shared at the database you are adding to the Streams environment until the instantiation at the database is complete.

1. Complete the necessary tasks described previously in this chapter to prepare the new database for Streams:

Configuring a Streams Administrator

Setting Initialization Parameters Relevant to Streams

Configuring a Database to Run a Streams Capture Process

Configuring Network Connectivity and Database Links

Some of these tasks may not be required at the new database.

2. Create any necessary Streams queues that do not already exist.

When you create a capture process or apply process, you associate the process with a specific Streams queue. When you create a propagation job, you associate it with a specific source queue and destination queue.

3. Create one or more apply processes at the new database to apply the changes from its source databases. Make sure each apply process uses a rule set that is appropriate for applying changes. Do not start any apply process at the new database.

Keeping the apply processes stopped prevents changes made at the source databases from being applied before the instantiation of the new database is completed, which would otherwise lead to incorrect data and errors.

4. If the new database will be a source database, then, for all databases that will be destination databases for the changes made at the new database, create one or more apply processes to apply changes from the new database. Make sure each apply process uses a rule set that is appropriate for applying changes. Do not start any of these new apply processes.

5. Configure propagation jobs at the databases that will be source databases of the new database to send changes to the new database. Make sure each propagation job uses a rule set that is appropriate for propagating changes.

6. If the new database will be a source database, then configure propagation jobs at the new database to send changes from the new database to each of its destination databases. Make sure each propagation job uses a rule set that is appropriate for propagating changes. See "Creating a Propagation Job".

7. If the new database will be a source database, and the shared objects already exist at the new database, then specify any necessary supplemental logging for the shared objects at the new database.

8. At each source database for the new database, prepare for instantiation each database object for which changes will be applied by an apply process at the new database.

Run either the
PREPARE_TABLE_INSTANTIATION,
PREPARE_SCHEMA_INSTANTIATION, or
PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package

for the specified table, specified schema, or entire database, respectively.

9. If the new database will be a source database, then create one or more capture processes to capture the relevant changes.

When you use the DBMS_STREAMS_ADM package to add the capture rules, it automatically runs the

PREPARE_TABLE_INSTANTIATION,
PREPARE_SCHEMA_INSTANTIATION, or
PREPARE_GLOBAL_INSTANTIATION procedure in the DBMS_CAPTURE_ADM package

for the specified table, specified schema, or entire database, respectively.

You must run the appropriate procedure to prepare for instantiation manually if any of the following conditions is true:

You use the DBMS_RULE_ADM package to add or modify capture rules.

You use an existing capture process and do not add capture rules for any shared object.

10. If the new database will be a source database, then start any capture processes you created.

After completing these steps, complete the steps in the appropriate section:

If the objects that are to be shared with the new database already exist at the new database, then complete the steps to Configure Databases If the Shared Objects Already Exist at the New Database.

If the objects that are to be shared with the new database do not already exist at the new database, complete the steps to Add Shared Objects to a New Database.

Configuring Databases If the Shared Objects Already Exist at the New Database

After completing the steps to add a new database to an existing multiple source environment, complete the following steps if the objects that are to be shared with the new database already exist at the new database:

1. For each source database of the new database, set the instantiation SCNs at the new database. These instantiation SCNs must be set, and only the changes made at a source database that are committed after the corresponding SCN for that database will be applied at the new database.

For each source database of the new database, you can set these instantiation SCNs in one of the following ways:

Perform a metadata only export of the shared objects at the source database and import the metadata at the new database. The import sets the required instantiation SCNs for the source database at the new database. Make sure no rows are imported. In this case, ensure that the shared objects at the new database are consistent with the source database at the time of the export.

If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

Set the instantiation SCNs manually at the new database for the shared objects. Make sure the shared objects at the new database are consistent with the source database as of the corresponding instantiation SCN.

2. For the new database, set the instantiation SCNs at each destination database of the new database. These instantiation SCNs must be set, and only the changes made at the new source database that are committed after the corresponding SCN will be applied at a destination database. If the new database is not a source database, then do not complete this step.

You can set these instantiation SCNs for the new database in one of the following ways:

Perform a metadata only export at the new database and import the metadata at each destination database. Make sure no rows are imported. The import sets the required instantiation SCNs for the new database at each destination database. In this case, ensure that the shared objects at each destination database are consistent with the new database at the time of the export.

If you are sharing DML changes only, then table level export/import is sufficient. If you are sharing DDL changes also, then additional considerations apply.

Set the instantiation SCNs manually at each destination database for the shared objects. Make sure the shared objects at each destination database are consistent with the new database as of the corresponding instantiation SCN.

3. Start the apply processes that you created at the new database in Step 3 of "How to Add a New Database to an Existing Multiple Source Environment?". See "Starting an Apply Process" for instructions.

4. Start the apply processes that you created at each of the other destination databases in Step 4 of that procedure. If the new database is not a source database, then do not complete this step.

How to Add Shared Objects to a New Database?

After completing the steps in “How to Add a New Database to an Existing Multiple Source Environment?”, complete the following steps if the objects that are to be shared with the new database do not already exist at the new database:

1. If the new database is a source database for other databases, then, at each destination database of the new source database, set the instantiation SCNs for the new database.

If one or more schemas will be created at the new database during instantiation or by a subsequent shared DDL change, then run the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for the new database at each destination database of the new database.

If a schema exists at the new database, and one or more tables will be created in the schema during instantiation or by a subsequent shared DDL change, then run the SET_SCHEMA_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package for the schema at each destination database of the new database. Do this for each such schema.

Because you are running these procedures before any tables are instantiated at the new database, and because the local capture processes are already configured at the new database, you will not need to run SET_TABLE_INSTANTIATION_SCN for each table created during instantiation. A sketch of the schema-level call follows this step.

If the new database will not be a source database, then do not complete this step, and continue with the next step.
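For illustration, here is a minimal sketch of the schema-level call, run at a destination database. The database link NEWDB and the schema scott are assumptions; SET_GLOBAL_INSTANTIATION_SCN follows the same pattern without a schema name.

DECLARE
  iscn NUMBER; -- instantiation SCN to record for the schema
BEGIN
  -- Read the current SCN at the new database over a database link
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@NEWDB;

  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'scott',
    source_database_name => 'NEWDB',
    instantiation_scn    => iscn);
END;
/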

Pick one source database from which to instantiate the shared objects at the new database using Export/Import.

First, perform the export at the source database with the OBJECT_CONSISTENT export parameter set to Y, or use a more stringent degree of consistency. Then, perform the import at the new database with the STREAMS_INSTANTIATION import parameter set to Y.

If the new database is a source database for other databases, then specify any necessary supplemental logging at the new database (a sketch follows):

Set the session tag to an appropriate non-NULL value. This step ensures that the supplemental logging specifications are not applied at the other databases.
Specify any necessary supplemental logging.
Set the session tag back to NULL.
Make sure no changes are made to the relevant objects until after you have specified supplemental logging.
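A minimal sketch of this tag-wrapped supplemental logging, assuming the table scott.emp and an arbitrary tag value of '1D':

-- Set a non-NULL session tag so the redo generated by the next statement
-- is not applied at the other databases
EXECUTE DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));

-- Specify the supplemental logging (the log group and columns are examples)
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP log_group_emp_pk (empno) ALWAYS;

-- Restore the default NULL tag for the session
EXECUTE DBMS_STREAMS.SET_TAG(tag => NULL);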

For each source database of the new database, except for the source database that performed the export for instantiation in Step 2, set the instantiation SCNs at the new database. These instantiation SCNs must be set because only those changes committed at a source database after the corresponding SCN for that database will be applied at the new database.

For each source database, you can set these instantiation SCNs in one of the following ways:

Perform a metadata-only export at the source database and import the metadata at the new database. The import sets the required instantiation SCNs for the source database at the new database. In this case, ensure that the shared objects at the new database are consistent with the source database at the time of the export.

If you are sharing DML changes only, then table-level export/import is sufficient. If you are also sharing DDL changes, then additional considerations apply.

Set the instantiation SCNs manually at the new database for the shared objects. Make sure the shared objects at the new database are consistent with the source database as of the corresponding instantiation SCN.

Start the apply processes at the new database that you created in Step 3. See “Starting an Apply Process” for instructions.
Start the apply processes at each of the other destination databases that you created in Step 4.

If the new database is not a source database, then do not complete this step.

What are the new parameters in the Export and Import utilities that are relevant to Oracle Streams?

Export Utility Parameters Relevant to Streams

The following Export utility parameter is relevant to Streams.

The OBJECT_CONSISTENT Export Utility Parameter and Streams

The OBJECT_CONSISTENT Export utility parameter specifies whether or not the Export utility repeatedly uses the SET TRANSACTION READ ONLY statement to ensure that the exported data and the exported procedural actions for each object are consistent to a single point in time. If OBJECT_CONSISTENT is set to y, then each object is exported in its own read-only transaction, even if it is partitioned. In contrast, if you use the CONSISTENT Export utility parameter, then there is only one read-only transaction.

When you perform an instantiation in a Streams environment, some degree of consistency is required in the export dump file. The OBJECT_CONSISTENT Export utility parameter is sufficient to ensure this consistency for Streams instantiations. If you are using an export dump file for other purposes in addition to a Streams instantiation, and these other purposes have more stringent consistency requirements than that provided by OBJECT_CONSISTENT, then you can use Export utility parameters CONSISTENT, FLASHBACK_SCN, or FLASHBACK_TIME for Streams instantiations.

By default, the OBJECT_CONSISTENT Export utility parameter is set to n. Specify y when an export is performed as part of a Streams instantiation and no more stringent Export utility parameter is needed.
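For example, a table-level instantiation export might look like this (the connect string, tables, and file name are placeholders):

exp USERID=scott@source TABLES=(emp,dept) FILE=insttabs.dmp OBJECT_CONSISTENT=Y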

Important to Note

(1) During an export for a Streams instantiation, make sure no DDL changes are made to objects being exported.

(2) When you export a database or schema that contains rules with non-NULL action contexts, the database or the default tablespace of the schema that owns the rules must be writable. If the database or tablespace is read-only, then export errors result.

Import Utility Parameters Relevant to Streams

The following Import utility parameters are relevant to Streams.

The STREAMS_INSTANTIATION Import Utility Parameter and Streams

The STREAMS_INSTANTIATION Import utility parameter specifies whether to import Streams instantiation metadata that may be present in the export dump file. When this parameter is set to y, the import session sets its Streams tag to the hexadecimal equivalent of '00' to avoid cycling the changes made by the import. Redo entries resulting from the import have this tag value. By default, the STREAMS_INSTANTIATION Import utility parameter is set to n. Specify y when an import is performed as part of a Streams instantiation.

The STREAMS_CONFIGURATION Import Utility Parameter and Streams

The STREAMS_CONFIGURATION Import utility parameter specifies whether to import any general Streams metadata that may be present in the export dump file. This import parameter is relevant only if you are performing a full database import. By default the STREAMS_CONFIGURATION Import utility parameter is set to y. Typically, specify y if an import is part of a backup or restore operation.
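For instance, a full import that deliberately leaves the Streams configuration behind might look like this (a sketch; the connect string and file name are placeholders):

imp USERID=system@clonedb FULL=Y FILE=fulldb.dmp IGNORE=Y STREAMS_CONFIGURATION=N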

The following objects are imported regardless of the STREAMS_CONFIGURATION setting:

Streams queues and their queue tables (if STREAMS_CONFIGURATION is set to n, then these queues are not started when they are imported)

Queue subscribers

Advanced queuing agents

Job queue processes related to Streams propagations

Rules, including their rule sets and evaluation contexts. All rules are imported, including Streams rules and non-Streams rules. Streams rules are rules generated by the system when certain procedures in the DBMS_STREAMS_ADM package are run, while non-Streams rules are rules created using the DBMS_RULE_ADM package.

If the STREAMS_CONFIGURATION parameter is set to n, then information about Streams rules is not imported into the following data dictionary views: ALL_STREAMS_GLOBAL_RULES, ALL_STREAMS_SCHEMA_RULES, ALL_STREAMS_TABLE_RULES, DBA_STREAMS_GLOBAL_RULES, DBA_STREAMS_SCHEMA_RULES, and DBA_STREAMS_TABLE_RULES.

However, regardless of the STREAMS_CONFIGURATION parameter setting, information about these rules is imported into the ALL_RULES, ALL_RULE_SETS, ALL_RULE_SET_RULES, DBA_RULES, DBA_RULE_SETS, DBA_RULE_SET_RULES, USER_RULES, USER_RULE_SETS, and USER_RULE_SET_RULES data dictionary views.

When the STREAMS_CONFIGURATION Import utility parameter is set to y, the import includes the following information; when it is set to n, the import does not:

Capture processes, including the following information for each capture process:

Name of the capture process
State of the capture process
Capture process parameter settings
Queue owner and queue name of the queue used by the capture process
Rule set owner and rule set name of the rule set used by the capture process

If any tables have been prepared for instantiation at the export database, then these tables are prepared for instantiation at the import database.

If any schemas have been prepared for instantiation at the export database, then these schemas are prepared for instantiation at the import database.

If the export database has been prepared for instantiation, then the import database is prepared for instantiation.

The state of each Streams queue, either started or stopped (Streams queues themselves are imported regardless of the parameter setting)

Propagations, including the following information for each propagation:

Name of the propagation
Queue owner and queue name of the source queue
Queue owner and queue name of the destination queue
Destination database link
Rule set owner and rule set name of the rule set used by the propagation

Apply processes, including the following information for each apply process:
Name of the apply process
State of the apply process
Apply process parameter settings
Queue owner and queue name of the queue used by the apply process
Rule set owner and rule set name of the rule set used by the apply process
Whether the apply process applies captured or user-enqueued events
Apply user for the apply process, if one exists
Message handler used by the apply process, if one exists
DDL handler used by the apply process, if one exists
Tag generated in the redo log for changes made by the apply process
Apply database link, if one exists
Source database for the apply process
The information about apply progress in the DBA_APPLY_PROGRESS data dictionary view, including applied message number, oldest message number, apply time, and applied message create time
Apply errors
DML handlers
Error handlers
Update conflict handlers
Substitute key columns for apply tables
Instantiation SCN for each apply object
Ignore SCN for each apply object
Some data dictionary information about Streams rules.

The rules themselves are imported regardless of the setting for the STREAMS_CONFIGURATION parameter.

(Part II discusses Oracle Streams implementation in practice rather than in theory.)

How to Implement Oracle Streams?

The following steps enable you to implement Oracle Streams.

Step 01: Make sure that the database is in ARCHIVELOG mode

To verify, issue:

select archiver from v$instance;

The output should be STARTED.

or

select log_mode from v$database;

The output should be ARCHIVELOG.

or, at the SQL prompt as SYSDBA, issue:

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /archlogs/SAGE_DEV
Oldest online log sequence 4731
Next log sequence to archive 4733
Current log sequence 4733
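If the output instead shows NOARCHIVELOG, the database can be switched roughly as follows (a sketch, run as SYSDBA; on 9i also set LOG_ARCHIVE_START=TRUE in the initialization parameters for automatic archiving):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST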

Step 02: Make sure that LogMiner uses a separate tablespace

Create a separate tablespace for LogMiner and make it the LogMiner default tablespace:

CREATE TABLESPACE logmnr_tbsp
DATAFILE '/usr/oracle/dbs/logmnr_tbsp_01.dbf'
SIZE 50M
REUSE
AUTOEXTEND ON
MAXSIZE UNLIMITED;

EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnr_tbsp');

Step 03: Create the Capture Process

- Using the DBMS_STREAMS_ADM package

You can create a capture process using the DBMS_STREAMS_ADM package or the DBMS_CAPTURE_ADM package. Using the DBMS_STREAMS_ADM package to create a capture process is simpler because defaults are used automatically for some configuration options.

In addition, when you use the DBMS_STREAMS_ADM package, a rule set is created for the capture process and rules are added to the rule set automatically. The DBMS_STREAMS_ADM package was designed for use in replication environments.

Alternatively, using the DBMS_CAPTURE_ADM package to create a capture process is more flexible, and you create a rule set and rules for the capture process either before or after it is created. You can use the procedures in the DBMS_STREAMS_ADM package or the DBMS_RULE_ADM package to add rules to the rule set for the capture process.

When a capture process is created by a procedure in the DBMS_STREAMS_ADM package, a corresponding procedure in the DBMS_CAPTURE_ADM package is run automatically on the objects whose changes will be captured: ADD_TABLE_RULES and ADD_SUBSET_RULES run PREPARE_TABLE_INSTANTIATION, ADD_SCHEMA_RULES runs PREPARE_SCHEMA_INSTANTIATION, and ADD_GLOBAL_RULES runs PREPARE_GLOBAL_INSTANTIATION.

The following call creates a capture process:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'schema_owner.table_name', -- table name with owner
    streams_type       => 'capture',                 -- process to create: capture or apply
    streams_name       => 'strm01_capture',          -- name of the capture process
    queue_name         => 'strm01_queue',            -- name of the queue
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false);
END;
/

What does this procedure do?

Creates a capture process named strm01_capture. The capture process is created only if it does not already exist. If a new capture process is created, then this procedure also sets the start SCN to the point in time of creation.

Associates the capture process with an existing queue named strm01_queue.

Creates a rule set and associates it with the capture process, if the capture process does not have a rule set. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context, and the rule set name is specified by the system.

Creates two rules. One rule specifies that the capture process captures DML changes to the schema_owner.table_name table, and the other rule specifies that it captures DDL changes to the same table. The rule names are specified by the system.

Adds the two rules to the rule set associated with the capture process.

Specifies that the capture process captures a change in the redo log only if the change has a NULL tag, because the include_tagged_lcr parameter is set to false. This behavior is accomplished through the system-created rules for the capture process.

- Using the DBMS_CAPTURE_ADM package

The following is an example that runs the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create a capture process:

BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name    => 'strm01_queue',
    capture_name  => 'strm02_capture',
    rule_set_name => 'strmadmin.strm01_rule_set',
    start_scn     => 829381993); -- the SCN from which the capture process starts
                                 -- capturing changes to the table, schema, or database,
                                 -- depending on the rules; the default is NULL, and an
                                 -- invalid SCN causes the procedure to return an error
END;
/

What does this procedure do?

Creates a capture process named strm02_capture. A capture process with the same name must not already exist.

Associates the capture process with an existing queue named strm01_queue.

Associates the capture process with an existing rule set named strm01_rule_set.

Specifies 829381993 as the start SCN for the capture process.

Step 04: Starting the Capture Process

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'strm01_capture');
END;
/

Step 05: Specifying the Rule Set for a Capture Process

BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name  => 'strm01_capture',
    rule_set_name => 'strmadmin.strm02_rule_set');
END;
/

Before configuring Streams, the following conditions must be met:

(1) The participating databases must be version 9.2.x.
(2) The participating databases must have media recovery enabled; that is, they must be in ARCHIVELOG mode.
(3) The tables that participate in Streams must have a primary key defined and enabled. (A quick check is sketched below.)
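Condition (3) can be verified with a dictionary query such as the following (the schema name is an example):

SELECT table_name, constraint_name, status
FROM   dba_constraints
WHERE  owner = 'SCOTT'
AND    constraint_type = 'P';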

In the given example:

Participating databases: SRIDEVI (9.2.0.1), ORCL (9.2.0.1), and OEMREP (9.2.0.1)
LOG_ARCHIVE_START = TRUE for all the databases
The SCOTT schema participates in the streams
The participating tables EMP and DEPT have primary keys defined on them

OEMREP database setup


REM
REM The following sections set up streams at the destination database
REM
ACCEPT dest_dba_passwd PROMPT 'Please enter the password for user SYS at the destination database SRIDEVI : ' HIDE

ACCEPT dest_strmadmin_passwd PROMPT 'Please enter the password for user STREAMS at the destination database SRIDEVI : ' HIDE

connect SYS/&dest_dba_passwd@SRIDEVI as SYSDBA


REM
REM Create Streams Administrator
REM
create user "STREAMS" identified by "streams";


REM
REM The following section grants the privileges to Streams administrator
REM
grant CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to "STREAMS";

GRANT SELECT ANY DICTIONARY TO STREAMS;

GRANT EXECUTE ON sys.dbms_aq TO STREAMS;

GRANT EXECUTE ON sys.dbms_aqadm TO STREAMS;

BEGIN
  DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => 'ENQUEUE_ANY',
    grantee      => 'STREAMS',
    admin_option => FALSE);
END;
/

BEGIN
  DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => 'DEQUEUE_ANY',
    grantee      => 'STREAMS',
    admin_option => FALSE);
END;
/

BEGIN
  DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => 'MANAGE_ANY',
    grantee      => 'STREAMS',
    admin_option => TRUE);
END;
/

BEGIN
  DBMS_AQADM.GRANT_TYPE_ACCESS(
    user_name => 'STREAMS');
END;
/


BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'STREAMS',
    grant_option => TRUE);
END;
/


BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee      => 'STREAMS',
    grant_option => TRUE);
END;
/


BEGIN
  DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ON_EVALUATION_CONTEXT,
    object_name  => 'SYS.STREAMS$_EVALUATION_CONTEXT',
    grantee      => 'STREAMS',
    grant_option => FALSE);
END;
/


GRANT EXECUTE ON sys.dbms_capture_adm TO STREAMS;

GRANT EXECUTE ON sys.dbms_apply_adm TO STREAMS;

GRANT EXECUTE ON sys.dbms_rule_adm TO STREAMS;

GRANT SELECT_CATALOG_ROLE TO STREAMS;

BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee      => 'STREAMS',
    grant_option => TRUE);
END;
/

GRANT EXECUTE ON SYS.dbms_streams_adm TO STREAMS;

connect STREAMS/&dest_strmadmin_passwd@SRIDEVI

REM
REM Create streams queue
REM
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_user => 'STREAMS');
END;
/

REM
REM Add apply rules for tables at the destination database
REM
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => '"OEMREP"."EPC_CLI_COLLECT_BY_EVENTID"',
    streams_type    => 'APPLY',
    streams_name    => 'STREAMS_OEMREP_US_ORACLE_',
    queue_name      => '"STREAMS"."STREAMS_QUEUE"',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'OEMREP.US.ORACLE.COM');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => '"OEMREP"."EPC_CLI_COLLECTION"',
    streams_type    => 'APPLY',
    streams_name    => 'STREAMS_OEMREP_US_ORACLE_',
    queue_name      => '"STREAMS"."STREAMS_QUEUE"',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'OEMREP.US.ORACLE.COM');
END;
/

REM
REM The following sections set up streams at the source database
REM

ACCEPT source_dba_passwd PROMPT 'Please enter the password for user SYS at the source database OEMREP : ' HIDE

ACCEPT source_strmadmin_passwd PROMPT 'Please enter the password for user STREAMS at the source database OEMREP : ' HIDE

connect SYS/&source_dba_passwd@OEMREP as SYSDBA


REM
REM Turn on supplemental logging
REM

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

REM
REM Switch log file
REM

ALTER SYSTEM SWITCH LOGFILE;

REM
REM Create logminer tablespace
REM

CREATE TABLESPACE LOGMNRTS
DATAFILE 'logmnrts_OEMREP.dbf'
SIZE 25M
REUSE
AUTOEXTEND ON
MAXSIZE UNLIMITED;

REM
REM Set logminer tablespace
REM
BEGIN
  DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
END;
/

REM
REM Create Streams Administrator
REM
create user "STREAMS" identified by "STReams";

REM
REM The following section grants the privileges to Streams administrator
REM
grant CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to "STREAMS";

GRANT SELECT ANY DICTIONARY TO STREAMS;

GRANT EXECUTE ON sys.dbms_aq TO STREAMS;

GRANT EXECUTE ON sys.dbms_aqadm TO STREAMS;

BEGIN
  DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => 'ENQUEUE_ANY',
    grantee      => 'STREAMS',
    admin_option => FALSE);
END;
/

BEGIN
  DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => 'DEQUEUE_ANY',
    grantee      => 'STREAMS',
    admin_option => FALSE);
END;
/

BEGIN
  DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => 'MANAGE_ANY',
    grantee      => 'STREAMS',
    admin_option => TRUE);
END;
/

BEGIN
  DBMS_AQADM.GRANT_TYPE_ACCESS(
    user_name => 'STREAMS');
END;
/

BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee      => 'STREAMS',
    grant_option => TRUE);
END;
/

BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee      => 'STREAMS',
    grant_option => TRUE);
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee      => 'STREAMS',
    grant_option => TRUE);
END;
/

BEGIN
  DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ON_EVALUATION_CONTEXT,
    object_name  => 'SYS.STREAMS$_EVALUATION_CONTEXT',
    grantee      => 'STREAMS',
    grant_option => FALSE);
END;
/

GRANT EXECUTE ON sys.dbms_capture_adm TO STREAMS;

GRANT EXECUTE ON sys.dbms_apply_adm TO STREAMS;

GRANT EXECUTE ON sys.dbms_rule_adm TO STREAMS;

GRANT SELECT_CATALOG_ROLE TO STREAMS;

BEGIN
  DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege    => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee      => 'STREAMS',
    grant_option => TRUE);
END;
/

GRANT EXECUTE ON SYS.dbms_streams_adm TO STREAMS;

connect STREAMS/&source_strmadmin_passwd@OEMREP

REM
REM Create a database link
REM

CREATE DATABASE LINK "SRIDEVI.US.ORACLE.COM"
connect to "STREAMS"
identified by "streams"
using '(DESCRIPTION=
         (ADDRESS_LIST=
           (ADDRESS=(PROTOCOL=TCP)(HOST=sridevi)(PORT=1521)))
         (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=Sridevi)))';

REM
REM Create streams queue
REM
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_user => 'STREAMS');
END;
/

REM
REM Add capture rules for tables at the source database
REM

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => '"OEMREP"."EPC_CLI_COLLECT_BY_EVENTID"',
    streams_type    => 'CAPTURE',
    streams_name    => 'STREAMS_CAPTURE',
    queue_name      => '"STREAMS"."STREAMS_QUEUE"',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'OEMREP.US.ORACLE.COM');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => '"OEMREP"."EPC_CLI_COLLECTION"',
    streams_type    => 'CAPTURE',
    streams_name    => 'STREAMS_CAPTURE',
    queue_name      => '"STREAMS"."STREAMS_QUEUE"',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'OEMREP.US.ORACLE.COM');
END;
/

REM
REM Add propagation rules for tables at the source database
REM
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => '"OEMREP"."EPC_CLI_COLLECT_BY_EVENTID"',
    streams_name           => 'STREAMS_PROPAGATE',
    source_queue_name      => '"STREAMS"."STREAMS_QUEUE"',
    destination_queue_name => '"STREAMS"."STREAMS_QUEUE"@SRIDEVI.US.ORACLE.COM',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'OEMREP.US.ORACLE.COM');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => '"OEMREP"."EPC_CLI_COLLECTION"',
    streams_name           => 'STREAMS_PROPAGATE',
    source_queue_name      => '"STREAMS"."STREAMS_QUEUE"',
    destination_queue_name => '"STREAMS"."STREAMS_QUEUE"@SRIDEVI.US.ORACLE.COM',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'OEMREP.US.ORACLE.COM');
END;
/

Destination database SRIDEVI setup

ACCEPT dest_strmadmin_passwd PROMPT 'Please enter the password for user STREAMS at the destination database SRIDEVI : ' HIDE
connect STREAMS/&dest_strmadmin_passwd@SRIDEVI

REM
REM Start apply process at the destination database
REM
DECLARE
  v_started NUMBER;
BEGIN
  SELECT DECODE(status, 'ENABLED', 1, 0)
  INTO   v_started
  FROM   DBA_APPLY
  WHERE  APPLY_NAME = 'STREAMS_OEMREP_US_ORACLE_';

  IF (v_started = 0) THEN
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STREAMS_OEMREP_US_ORACLE_');
  END IF;
END;
/

ACCEPT source_strmadmin_passwd PROMPT 'Please enter the password for user STREAMS at the source database OEMREP : ' HIDE
connect STREAMS/&source_strmadmin_passwd@OEMREP

DECLARE
  v_started NUMBER;
BEGIN
  SELECT DECODE(status, 'ENABLED', 1, 0)
  INTO   v_started
  FROM   DBA_CAPTURE
  WHERE  CAPTURE_NAME = 'STREAMS_CAPTURE'; -- the capture process created earlier

  IF (v_started = 0) THEN
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAMS_CAPTURE');
  END IF;
END;
/
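After both scripts complete, the state of the Streams components can be verified with dictionary queries such as the following:

SELECT capture_name, status FROM dba_capture;
SELECT apply_name, status FROM dba_apply;
SELECT propagation_name FROM dba_propagation;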

ORCL database setup

REM
REM The following sections set up streams at the destination database
REM
ACCEPT dest_strmadmin_passwd PROMPT 'Please enter the password for user STREAMS_ADMIN at the destination database Sridevi : ' HIDE
connect STREAMS_ADMIN/&dest_strmadmin_passwd@Sridevi

REM
REM Add apply rules for schemas at the destination database
REM
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => '"SCOTT"',
    streams_type    => 'APPLY',
    streams_name    => 'STREAMS_ADMIN_ORCL_US_ORA',
    queue_name      => '"STREAMS_ADMIN"."STREAMS_QUEUE"',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'ORCL.US.ORACLE.COM');
END;
/

REM
REM The following sections set up streams at the source database
REM
ACCEPT source_strmadmin_passwd PROMPT 'Please enter the password for user STREAMS_ADMIN at the source database orcl : ' HIDE
connect STREAMS_ADMIN/&source_strmadmin_passwd@orcl

REM
REM Add propagation rules for schemas at the source database
REM
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => '"SCOTT"',
    streams_name           => 'STREAMS_ADMIN_PROPAGATE',
    source_queue_name      => '"STREAMS_ADMIN"."STREAMS_QUEUE"',
    destination_queue_name => '"STREAMS_ADMIN"."STREAMS_QUEUE"@SRIDEVI.US.ORACLE.COM',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'ORCL.US.ORACLE.COM');
END;
/

REM
REM Add capture rules for schemas at the source database
REM
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => '"SCOTT"',
    streams_type    => 'CAPTURE',
    streams_name    => 'STREAMS_ADMIN_CAPTURE',
    queue_name      => '"STREAMS_ADMIN"."STREAMS_QUEUE"',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'ORCL.US.ORACLE.COM');
END;
/

exp
USERID="STREAMS_ADMIN"@orcl
OWNER="SCOTT"
FILE=schemas.dmp
GRANTS=Y
ROWS=Y
LOG=exportSchemas.log
OBJECT_CONSISTENT=Y


imp
USERID="STREAMS_ADMIN"@Sridevi
FULL=Y CONSTRAINTS=Y
FILE=schemas.dmp
IGNORE=Y
GRANTS=Y
ROWS=Y
COMMIT=Y
LOG=importSchemas.log
STREAMS_CONFIGURATION=Y
STREAMS_INSTANTIATION=Y