Oracle® Streams Concepts and Administration
10g Release 2 (10.2)

Part Number B14229-04

3 Streams Staging and Propagation

This chapter explains the concepts relating to staging messages in a queue and propagating messages from one queue to another.

This chapter contains these topics:

See Also:

Chapter 12, "Managing Staging and Propagation"

Introduction to Message Staging and Propagation

Streams uses queues to stage messages. A queue of ANYDATA type can stage messages of almost any type and is called an ANYDATA queue. A typed queue can store messages of a specific type. Streams clients always use ANYDATA queues.

In Streams, two types of messages can be encapsulated into an ANYDATA object and staged in an ANYDATA queue: logical change records (LCRs) and user messages. An LCR is an object that contains information about a change to a database object. A user message is a message of a user-defined type created by users or applications. Both types of messages can be used for information sharing within a single database or between databases.

In a messaging environment, both ANYDATA queues and typed queues can be used to stage messages of a specific type. Publishing applications can enqueue messages into a single queue, and subscribing applications can dequeue these messages.

Staged messages can be consumed or propagated, or both. Staged messages can be consumed by an apply process, by a messaging client, or by a user application. A running apply process implicitly dequeues messages, but messaging clients and user applications explicitly dequeue messages. Even after a message is consumed, it can remain in the queue if you also have configured a Streams propagation to propagate, or send, the message to one or more other queues or if message retention is specified for user-enqueued messages. Message retention does not apply to LCRs captured by a capture process.

The queues to which messages are propagated can reside in the same database or in different databases than the queue from which the messages are propagated. In either case, the queue from which the messages are propagated is called the source queue, and the queue that receives the messages is called the destination queue. There can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.

Figure 3-1 shows propagation from a source queue to a destination queue.

Figure 3-1 Propagation from a Source Queue to a Destination Queue


You can create, alter, and drop a propagation, and you can define propagation rules that control which messages are propagated. The user who owns the source queue is the user who propagates messages, and this user must have the necessary privileges to propagate messages. These privileges include the following:

If the propagation propagates messages to a destination queue in a remote database, then the owner of the source queue must be able to use the database link used by the propagation, and the user to which the database link connects at the remote database must have enqueue privilege on the destination queue.

Note:

Connection qualifiers cannot be specified in the database links that are used by Streams propagations.

Captured and User-Enqueued Messages in an ANYDATA Queue

Messages can be enqueued into an ANYDATA queue in two ways:

  • Implicitly, by a capture process, which enqueues captured messages

  • Explicitly, by users or applications, which enqueue user-enqueued messages

So, each captured message contains an LCR, but a user-enqueued message might or might not contain an LCR. Propagating a captured message or a user-enqueued message enqueues the message into the destination queue.

Messages can be dequeued from an ANYDATA queue in two ways:

  • Implicitly, by an apply process, which can dequeue both captured messages and user-enqueued messages

  • Explicitly, by a messaging client or a user application, which can dequeue user-enqueued messages only

The dequeued messages might have originated at the same database where they are dequeued, or they might have originated at a different database.

Message Propagation Between Queues

You can use Streams to configure message propagation between two queues, which can reside in different databases. Streams uses job queues to propagate messages.

A propagation is always between a source queue and a destination queue. Although propagation is always between two queues, a single queue can participate in many propagations. That is, a single source queue can propagate messages to multiple destination queues, and a single destination queue can receive messages from multiple source queues. However, only one propagation is allowed between a particular source queue and a particular destination queue. Also, a single queue can be a destination queue for some propagations and a source queue for other propagations.

A propagation can propagate all of the messages in a source queue to a destination queue, or a propagation can propagate only a subset of the messages. Also, a single propagation can propagate both captured messages and user-enqueued messages. You can use rules to control which messages in the source queue are propagated to the destination queue and which messages are discarded.

Depending on how you set up your Streams environment, changes could be sent back to the site where they originated. You need to ensure that your environment is configured to avoid cycling a change in an endless loop. You can use Streams tags to avoid such a change cycling loop.

Note:

Propagations can propagate user-enqueued ANYDATA messages that encapsulate payloads of object types, varrays, or nested tables between databases only if the databases use the same character set.

Propagation Rules

A propagation either propagates or discards messages based on rules that you define. For LCRs, each rule specifies the database objects and types of changes for which the rule evaluates to TRUE. You can place these rules in a positive rule set or a negative rule set used by the propagation.

If a rule evaluates to TRUE for a message, and the rule is in the positive rule set for a propagation, then the propagation propagates the change. If a rule evaluates to TRUE for a message, and the rule is in the negative rule set for a propagation, then the propagation discards the change. If a propagation has both a positive and a negative rule set, then the negative rule set is always evaluated first.

You can specify propagation rules for LCRs at the following levels:

  • A table rule propagates or discards either row changes resulting from DML changes or DDL changes to a particular table. Subset rules are table rules that include a subset of the row changes to a particular table.

  • A schema rule propagates or discards either row changes resulting from DML changes or DDL changes to the database objects in a particular schema.

  • A global rule propagates or discards either all row changes resulting from DML changes or all DDL changes in the source queue.

For non-LCR messages, you can create your own rules to control propagation.

A queue subscriber that specifies a condition causes the system to generate a rule. The rule sets for all subscribers to a queue are combined into a single system-generated rule set to make subscription more efficient.
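For illustration, the following is a minimal sketch of adding a table-level rule to the positive rule set of a propagation with the ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package. The propagation, queue, database, and table names are placeholders, not objects created elsewhere in this chapter.

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.employees',
    streams_name           => 'strm01_propagation',
    source_queue_name      => 'strmadmin.strm01_queue',
    destination_queue_name => 'strmadmin.strm02_queue@dbs2.example.com',
    include_dml            => TRUE,   -- propagate row changes resulting from DML
    include_ddl            => FALSE,  -- do not propagate DDL changes
    source_database        => 'dbs1.example.com',
    inclusion_rule         => TRUE);  -- add the rule to the positive rule set
END;
/

Setting inclusion_rule to false would add the rule to the negative rule set instead, so that matching changes are discarded by the propagation.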

Queue-to-Queue Propagations

A propagation can be queue-to-queue or queue-to-database link (queue-to-dblink). A queue-to-queue propagation always has its own exclusive propagation job to propagate messages from the source queue to the destination queue. Because each propagation job has its own propagation schedule, the propagation schedule of each queue-to-queue propagation can be managed separately. Even when multiple queue-to-queue propagations use the same database link, you can enable, disable, or set the propagation schedule for each queue-to-queue propagation separately. Propagation jobs are described in detail later in this chapter.

A single database link can be used by multiple queue-to-queue propagations. The database link must be created with the service name specified as the global name of the database that contains the destination queue.

In contrast, a queue-to-dblink propagation shares a propagation job with other queue-to-dblink propagations from the same source queue that use the same database link. Therefore, these propagations share the same propagation schedule, and any change to the propagation schedule affects all of the queue-to-dblink propagations from the same source queue that use the database link.

Queue-to-queue propagation connects to the destination queue service when one exists. Currently, a queue service is created when the database is a Real Application Clusters (RAC) database and the queue is a buffered queue. Because the queue service always runs on the owner instance of the queue, transparent failover can occur when RAC instances fail. When multiple queue-to-queue propagations use a single database link, the connect description for each queue-to-queue propagation changes automatically to propagate messages to the correct destination queue. In contrast, queue-to-dblink propagations require you to repoint your database links if the owner instance in a RAC database that contains the destination queue for the propagation fails.

Note:

To use queue-to-queue propagation, the compatibility level must be 10.2.0 or higher for each database that contains a queue involved in the propagation.
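As a sketch, and assuming placeholder queue names and a database link named dbs2.example.com, a queue-to-queue propagation can be created by setting the queue_to_queue parameter in the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package:

BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'strm01_propagation',
    source_queue       => 'strmadmin.strm01_queue',
    destination_queue  => 'strmadmin.strm02_queue',
    destination_dblink => 'dbs2.example.com',
    queue_to_queue     => TRUE);  -- give this propagation its own propagation job and schedule
END;
/

Because no rule sets are specified here, the propagation propagates all messages in the source queue. Setting queue_to_queue to false would create a queue-to-dblink propagation that shares its propagation job with other propagations from the same source queue that use the same database link.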

Ensured Message Delivery

A user-enqueued message is propagated successfully to a destination queue when the enqueue into the destination queue is committed. A captured message is propagated successfully to a destination queue when both of the following actions are completed:

  • The message is processed by all relevant apply processes associated with the destination queue.

  • The message is propagated successfully from the destination queue to all of its relevant destination queues.

When a message is successfully propagated between two ANYDATA queues, the destination queue acknowledges successful propagation of the message. If the source queue is configured to propagate a message to multiple destination queues, then the message remains in the source queue until each destination queue has sent confirmation of message propagation to the source queue. When each destination queue acknowledges successful propagation of the message, and all local consumers in the source queue database have consumed the message, the source queue can drop the message.

This confirmation system ensures that messages are always propagated from the source queue to the destination queue, but, in some configurations, the source queue can grow larger than an optimal size. When a source queue grows, it uses more SGA memory and might use more disk space.

There are two common reasons for source-queue growth:

  • If a message cannot be propagated to a specified destination queue for some reason (such as a network problem), then the message will remain in the source queue until the destination queue becomes available. This situation could cause the source queue to grow large. So, you should monitor your queues regularly to detect problems early.

  • Suppose a source queue is propagating captured messages to multiple destination queues, and one or more destination databases acknowledge successful propagation of messages much more slowly than the other queues. In this case, the source queue can grow because the slower destination databases create a backlog of messages that have already been acknowledged by the faster destination databases. In such an environment, consider creating more than one capture process to capture changes at the source database. Doing so lets you use one source queue for the slower destination databases and another source queue for the faster destination databases.

Directed Networks

A directed network is one in which propagated messages pass through one or more intermediate databases before arriving at a destination database. A message might or might not be processed by an apply process at an intermediate database. Using Streams, you can choose which messages are propagated to each destination database, and you can specify the route that messages will traverse on their way to a destination database. Figure 3-2 shows an example of a directed networks environment.

Figure 3-2 Example Directed Networks Environment


The advantage of using a directed network is that a source database does not need to have a physical network connection with a destination database. So, if you want messages to propagate from one database to another, but there is no direct network connection between the computers running these databases, then you can still propagate the messages without reconfiguring your network, as long as one or more intermediate databases connect the source database to the destination database.

If you use directed networks, and an intermediate site goes down for an extended period of time or is removed, then you might need to reconfigure the network and the Streams environment.

Queue Forwarding and Apply Forwarding

An intermediate database in a directed network can propagate messages using either queue forwarding or apply forwarding. Queue forwarding means that the messages being forwarded at an intermediate database are the messages received by the intermediate database. The source database for a message is the database where the message originated.

Apply forwarding means that the messages being forwarded at an intermediate database are first processed by an apply process. These messages are then recaptured by a capture process at the intermediate database and forwarded. When you use apply forwarding, the intermediate database becomes the new source database for the messages, because the messages are recaptured from the redo log generated there.

Consider the following differences between queue forwarding and apply forwarding when you plan your Streams environment:

  • With queue forwarding, a message is propagated through the directed network without being changed, assuming there are no capture or propagation transformations. With apply forwarding, messages are applied and recaptured at intermediate databases and can be changed by conflict resolution, apply handlers, or apply transformations.

  • With queue forwarding, a destination database must have a separate apply process to apply messages from each source database. With apply forwarding, fewer apply processes might be required at a destination database because recapturing of messages at intermediate databases can result in fewer source databases when changes reach a destination database.

  • With queue forwarding, one or more intermediate databases are in place between a source database and a destination database. With apply forwarding, because messages are recaptured at intermediate databases, the source database for a message can be the same as the intermediate database connected directly with the destination database.

A single Streams environment can use a combination of queue forwarding and apply forwarding.

Advantages of Queue Forwarding

Queue forwarding has the following advantages compared with apply forwarding:

  • Performance might be improved because a message is captured only once.

  • Less time might be required to propagate a message from the database where the message originated to the destination database, because the messages are not applied and recaptured at one or more intermediate databases. In other words, latency might be lower with queue forwarding.

  • The origin of a message can be determined easily by running the GET_SOURCE_DATABASE_NAME member function on the LCR contained in the message. If you use apply forwarding, then determining the origin of a message requires the use of Streams tags and apply handlers.

  • Parallel apply might scale better and provide more throughput when separate apply processes are used because there are fewer dependencies, and because there are multiple apply coordinators and apply reader processes to perform the work.

  • If one intermediate database goes down, then you can reroute the queues and reset the start SCN at the capture site to reconfigure end-to-end capture, propagation, and apply.

    If you use apply forwarding, then substantially more work might be required to reconfigure end-to-end capture, propagation, and apply of messages, because the destination database(s) downstream from the unavailable intermediate database were using the SCN information of this intermediate database. Without this SCN information, the destination databases cannot apply the changes properly.

Advantages of Apply Forwarding

Apply forwarding has the following advantages compared with queue forwarding:

  • A Streams environment might be easier to configure because each database can apply changes only from databases directly connected to it, rather than from multiple remote source databases.

  • In a large Streams environment where intermediate databases apply changes, the environment might be easier to monitor and manage because fewer apply processes might be required. An intermediate database that applies changes must have one apply process for each source database from which it receives changes. In an apply forwarding environment, the source databases of an intermediate database are only the databases to which it is directly connected. In a queue forwarding environment, the source databases of an intermediate database are all of the other source databases in the environment, whether they are directly connected to the intermediate database or not.

Binary File Propagation

You can propagate a binary file between databases by using Streams. To do so, you put one or more BFILE attributes in a message payload and then propagate the message to a remote queue. Each BFILE referenced in the payload is transferred to the remote database after the message is propagated, but before the message propagation is committed. The directory object and filename of each propagated BFILE are preserved, but you can map the directory object to different directories on the source and destination databases. The message payload can be a BFILE wrapped in an ANYDATA payload, or the message payload can be one or more BFILE attributes of an object wrapped in an ANYDATA payload.

The following are not supported in a message payload:

  • One or more BFILE attributes in a varray

  • A user-defined type object with an ANYDATA attribute that contains one or more BFILE attributes

Propagating a BFILE in Streams has the same restrictions as the procedure DBMS_FILE_TRANSFER.PUT_FILE.

See Also:

Oracle Database Concepts, Oracle Database Administrator's Guide, and Oracle Database PL/SQL Packages and Types Reference for more information about transferring files with the DBMS_FILE_TRANSFER package

Messaging Clients

A messaging client dequeues user-enqueued messages when it is invoked by an application or a user. You use rules to specify which user-enqueued messages in the queue are dequeued by a messaging client. These user-enqueued messages can be user-enqueued LCRs or user-enqueued non-LCR messages.

You can create a messaging client by specifying dequeue for the streams_type parameter when you run one of the following procedures in the DBMS_STREAMS_ADM package:

When you create a messaging client, you specify the name of the messaging client and the ANYDATA queue from which the messaging client dequeues messages. These procedures can also add rules to the positive rule set or negative rule set of a messaging client. You specify the message type for each rule, and a single messaging client can dequeue messages of different types.

The user who creates a messaging client is granted the privileges to dequeue from the queue using the messaging client. This user is the messaging client user. The messaging client user can dequeue messages that satisfy the messaging client rule sets. A messaging client can be associated with only one user, but one user can be associated with many messaging clients.

Figure 3-3 shows a messaging client dequeuing user-enqueued messages.

Figure 3-3 Messaging Client


ANYDATA Queues and User Messages

Streams enables messaging with queues of type ANYDATA. These queues can stage user messages whose payloads are of ANYDATA type. An ANYDATA payload can be a wrapper for payloads of different datatypes.

By using ANYDATA wrappers for message payloads, publishing applications can enqueue messages of different types into a single queue, and subscribing applications can dequeue these messages, either explicitly using a messaging client or an application, or implicitly using an apply process. If the subscribing application is remote, then the messages can be propagated to the remote site, and the subscribing application can dequeue the messages from a local queue in the remote database. Alternatively, a remote subscribing application can dequeue messages directly from the source queue using a variety of standard protocols, such as PL/SQL and OCI.

Streams includes the features of Advanced Queuing (AQ), which supports all the standard features of message queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, internet propagation, transformations, and gateways to other messaging subsystems.

You can wrap almost any type of payload in an ANYDATA payload. To do this, you use the Convert<data_type> static functions of the ANYDATA type, where data_type is the type of object to wrap (for example, ConvertVarchar2 or ConvertObject). These functions take the object as input and return an ANYDATA object.
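For example, the following anonymous block (a minimal sketch, with arbitrary string and number values as payloads) wraps payloads in ANYDATA objects using two of these static functions:

DECLARE
  str_payload SYS.ANYDATA;
  num_payload SYS.ANYDATA;
BEGIN
  -- Wrap a VARCHAR2 value in an ANYDATA object
  str_payload := SYS.ANYDATA.ConvertVarchar2('New order received');
  -- Wrap a NUMBER value in an ANYDATA object
  num_payload := SYS.ANYDATA.ConvertNumber(2401);
END;
/

The resulting ANYDATA objects can be used as the payload in an enqueue operation on an ANYDATA queue. Instances of user-defined types can be wrapped similarly with ConvertObject.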

You cannot enqueue ANYDATA payloads that contain payloads of the following types into an ANYDATA queue:

Note:

  • Payloads of ROWID datatype cannot be wrapped in an ANYDATA wrapper. This restriction does not apply to payloads of UROWID datatype.

  • A queue that can stage messages of only one particular type is called a typed queue.

Buffered Messaging and Streams Clients

Buffered messaging enables users and applications to enqueue messages into and dequeue messages from a buffered queue. Propagations can propagate buffered messages from one buffered queue to another. Buffered messaging can improve the performance of a messaging environment by storing messages in memory instead of persistently on disk in a queue table. The following sections discuss how buffered messages interact with Streams clients:

  • Buffered Messages and Capture Processes

  • Buffered Messages and Propagations

  • Buffered Messages and Apply Processes

  • Buffered Messages and Messaging Clients

Note:

To use buffered messaging, the compatibility level of the Oracle database must be 10.2.0 or higher.

Buffered Messages and Capture Processes

Messages enqueued into a buffered queue by a capture process can be dequeued only by an apply process. Captured messages cannot be dequeued by users or applications.

Buffered Messages and Propagations

A propagation will propagate any messages in its source queue that satisfy its rule sets. These messages can be stored in a buffered queue or stored persistently in a queue table. A propagation can propagate both types of messages if the messages satisfy the rule sets used by the propagation.

Buffered Messages and Apply Processes

Apply processes can dequeue and process messages in a buffered queue. To dequeue messages in a buffered queue that were enqueued by a capture process, the apply process must be configured with the apply_captured parameter set to true. To dequeue messages in a buffered queue that were enqueued by a user or application, the apply process must be configured with the apply_captured parameter set to false. An apply process sends user-enqueued messages to its message handler for processing.
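As a sketch, the following call to the CREATE_APPLY procedure in the DBMS_APPLY_ADM package creates an apply process that dequeues messages enqueued by a capture process; the apply process and queue names are placeholders:

BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.strm01_queue',
    apply_name     => 'strm01_apply',
    apply_captured => TRUE);  -- dequeue captured messages from the buffered queue
END;
/

An apply process created with apply_captured set to false would instead dequeue messages that were enqueued by users or applications.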

Buffered Messages and Messaging Clients

Currently, messaging clients cannot dequeue buffered messages. In addition, the DBMS_STREAMS_MESSAGING package cannot be used to enqueue messages into or dequeue messages from a buffered queue.

Note:

The DBMS_AQ and DBMS_AQADM packages support buffered messaging.

See Also:

Oracle Streams Advanced Queuing User's Guide and Reference for more information about using the DBMS_AQ and DBMS_AQADM packages

Queues and Oracle Real Application Clusters

You can configure a queue to stage captured messages and user-enqueued messages in an Oracle Real Application Clusters (RAC) environment, and propagations can propagate these messages from one queue to another. In a RAC environment, only the owner instance can have a buffer for a queue, but different instances can have buffers for different queues. A buffered queue is System Global Area (SGA) memory associated with a queue. Buffered queues are discussed in more detail later in this chapter.

Streams processes and jobs support primary instance and secondary instance specifications for queue tables. If you use these specifications, then the secondary instance assumes ownership of a queue table when the primary instance becomes unavailable, and ownership is transferred back to the primary instance when it becomes available again. If both the primary and secondary instance for a queue table containing a destination queue become unavailable, then queue ownership is transferred automatically to another instance in the cluster. In this case, if the primary or secondary instance becomes available again, then ownership is transferred back to one of them accordingly. You can set primary and secondary instance specifications using the ALTER_QUEUE_TABLE procedure in the DBMS_AQADM package.
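For example, the following call (a sketch with a placeholder queue table name and instance numbers) sets the primary and secondary instance for a queue table using the ALTER_QUEUE_TABLE procedure in the DBMS_AQADM package:

BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'strmadmin.strm01_queue_table',
    primary_instance   => 1,   -- instance that normally owns the queue table
    secondary_instance => 2);  -- instance that takes ownership if instance 1 is unavailable
END;
/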

Each capture process and apply process is started on the owner instance for its queue, even if the start procedure is run on a different instance. For propagations, if the owner instance for a queue table containing a destination queue becomes unavailable, then queue ownership is transferred automatically to another instance in the cluster. A queue-to-queue propagation to a buffered destination queue uses a service to provide transparent failover in a RAC environment. That is, a propagation job for a queue-to-queue propagation automatically connects to the instance that owns the destination queue.

The service used by a queue-to-queue propagation always runs on the owner instance of the destination queue. This service is created only for buffered queues in a RAC database. If you plan to use buffered messaging with a RAC database, then messages can be enqueued into a buffered queue on any instance. If messages are enqueued on an instance that does not own the queue, then the messages are sent to the correct instance, but it is more efficient to enqueue messages on the instance that owns the queue. The service can be used to connect to the owner instance of the queue before enqueuing messages into a buffered queue.

Queue-to-dblink propagations do not use services. To make the propagation job connect to the correct instance on the destination database, manually reconfigure the database link from the source database to connect to the instance that owns the destination queue.

The NAME column in the DBA_SERVICES data dictionary view contains the service name for a queue. The NETWORK_NAME column in the DBA_QUEUES data dictionary view contains the network name for a queue. Do not manage the services for queue-to-queue propagations in any way. Oracle manages them automatically. For queue-to-dblink propagations, use the network name as the service name in the connect string of the database link to connect to the correct instance.

The DBA_QUEUE_TABLES data dictionary view contains information about the owner instance for a queue table. A queue table can contain multiple queues. In this case, each queue in a queue table has the same owner instance as the queue table.
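The following queries (a sketch; they assume a strmadmin queue owner and a user with access to these DBA views) show how to find the service name, network name, and owner instance information described above:

-- Service names in the database (queue services appear here for buffered queues in RAC)
SELECT name FROM dba_services;

-- Network name for each queue owned by strmadmin
SELECT owner, name, network_name
  FROM dba_queues
 WHERE owner = 'STRMADMIN';

-- Primary, secondary, and owner instance for each queue table owned by strmadmin
SELECT owner, queue_table, primary_instance, secondary_instance, owner_instance
  FROM dba_queue_tables
 WHERE owner = 'STRMADMIN';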

Note:

If a queue contains or will contain captured messages in a RAC environment, then queue-to-queue propagations should be used to propagate messages to a RAC destination database. If a queue-to-dblink propagation propagates captured messages to a RAC destination database, then this propagation must use an instance-specific database link that refers to the owner instance of the destination queue. If such a propagation connects to any other instance, then the propagation will raise an error.

Commit-Time Queues

You can control the order in which user-enqueued messages in a queue are browsed or dequeued. Message ordering in a queue is determined by its queue table, and you can specify message ordering for a queue table during queue table creation. Specifically, the sort_list parameter in the DBMS_AQADM.CREATE_QUEUE_TABLE procedure determines how user-enqueued messages are ordered. Oracle Database 10g Release 2 introduces commit-time queues. Each message in a commit-time queue is ordered by an approximate commit system change number (approximate CSCN), which is obtained when the transaction that enqueued the message commits.

Commit-time ordering is specified for a queue table, and queues that use the queue table are called commit-time queues. When commit_time is specified for the sort_list parameter in the DBMS_AQADM.CREATE_QUEUE_TABLE procedure, the resulting queue table uses commit-time ordering.

For Oracle Database 10g Release 2, the default sort_list setting for queue tables created by the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package is commit_time. For releases prior to Oracle Database 10g Release 2, the default is enq_time, which is described in the section that follows. When the queue_table parameter in the SET_UP_QUEUE procedure specifies an existing queue table, message ordering in the queue created by SET_UP_QUEUE is determined by the existing queue table.
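As a sketch with placeholder names, the following call creates a commit-time queue table of ANYDATA type by specifying commit_time for the sort_list parameter:

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.commit_time_queue_table',
    queue_payload_type => 'SYS.ANYDATA',
    sort_list          => 'COMMIT_TIME',  -- order messages by approximate CSCN
    multiple_consumers => TRUE,
    compatible         => '10.2');
END;
/

Any queue created in this queue table is a commit-time queue.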

When to Use Commit-Time Queues

A user or application can share information by enqueuing messages into a queue in an Oracle database. The enqueued messages can be shared within a single database or propagated to other databases, and the messages can be LCRs or user messages. For example, messages can be enqueued when an application-specific message occurs or when a trigger is fired for a database change. Also, in a heterogeneous environment, an application can enqueue messages that originated at a non-Oracle database into a queue in an Oracle database.

Other than commit_time, the settings for the sort_list parameter in the CREATE_QUEUE_TABLE procedure are priority and enq_time. The priority setting orders messages by the priority specified during enqueue, highest priority to lowest priority. The enq_time setting orders messages by the time when they were enqueued, oldest to newest.

Commit-time queues are useful when an environment must support either of the following requirements for concurrent enqueues of user-enqueued messages:

  • Transactional dependency ordering during dequeue

  • Consistent browse of the messages in a queue

Commit-time queues support these requirements. Neither priority nor enqueue time ordering supports these requirements because both allow transactional dependency violations and nonconsistent browses. Both settings allow transactional dependency violations, because messages are dequeued independent of the original dependencies. Also, both settings allow nonconsistent browses of the messages in a queue, because multiple browses performed without any dequeue operations between them can result in different sets of messages.

Transactional Dependency Ordering During Dequeue

A transactional dependency occurs when one database transaction requires that another database transaction commits before it can commit successfully. Messages that contain information about database transactions can be enqueued into a queue. For example, a database trigger can fire to enqueue messages. Figure 3-4 shows how enqueue time ordering does not support transactional dependency ordering during dequeue of such messages.

Figure 3-4 Transactional Dependency Violation During Dequeue


Figure 3-4 shows how transactional dependency ordering can be violated with enqueue time ordering. The transaction that enqueued message e2 was committed before the transaction that enqueued messages e1 and e3 was committed, and the update in message e3 depends on the insert in message e2. So, the correct dequeue order that supports transactional dependencies is e2, e1, e3. However, with enqueue time ordering, e3 can be dequeued before e2. Therefore, when e3 is dequeued, an error results when an application attempts to apply the change in e3 to the hr.employees table. Also, after all three messages are dequeued, a row in the hr.employees table contains the wrong information because the change in e3 was not executed.

Consistent Browse of Messages in a Queue

Figure 3-5 shows how enqueue time ordering does not support consistent browse of messages in a queue.

Figure 3-5 Inconsistent Browse of Messages in a Queue


Figure 3-5 shows that a client browsing messages in a queue is not guaranteed a definite order with enqueue time ordering. Sessions 1 and 2 are concurrent sessions that are enqueuing messages. Session 3 shows two sets of client browses that return the three enqueued messages in different orders. If the client requires deterministic ordering of messages, then the client might fail. For example, the client might perform a browse to initiate a program state, and a subsequent dequeue might return messages in a different order than expected.

How Commit-Time Queues Work

The commit system change number (CSCN) for a message that is enqueued into a queue is not known until the redo record for the commit of the transaction that includes the message is written to the redo log. The CSCN cannot be recorded when the message is enqueued. Commit-time queues use the current SCN of the database when a transaction is committed as the approximate CSCN for all of the messages in the transaction. The order of messages in a commit-time queue is based on the approximate CSCN of the transaction that enqueued the messages.

In a commit-time queue, messages in a transaction are not visible to dequeue and browse operations until a deterministic order for the messages can be established using the approximate CSCN. When multiple transactions are enqueuing messages concurrently into the same commit-time queue, two or more transactions can commit at nearly the same time, and the commit intervals for these transactions can overlap. In this case, the messages in these transactions are not visible until all of the transactions have committed. At that time, the order of the messages can be determined using the approximate CSCN of each transaction. Dependencies are maintained by using the approximate CSCN for messages rather than the enqueue time. Read consistency for browses is maintained by ensuring that only messages with a fully determined order are visible.

A commit-time queue always maintains transactional dependency ordering for messages that are based on database transactions. However, applications and users can enqueue messages that are not based on database transactions. For these messages, if dependencies exist between transactions, then the application or user must ensure that transactions are committed in the correct order and that the commit intervals of the dependent transactions do not overlap.

The approximate CSCNs of transactions recorded by a commit-time queue might not reflect the actual commit order of these transactions. For example, transaction 1 and transaction 2 can commit at nearly the same time after enqueuing their messages. The approximate CSCN for transaction 1 can be lower than the approximate CSCN for transaction 2, but transaction 1 can take more time to complete the commit than transaction 2. In this case, the actual CSCN for transaction 2 is lower than the actual CSCN for transaction 1.

Note:

The sort_list parameter in CREATE_QUEUE_TABLE can be set to the following:
priority, commit_time

In this case, ordering is done by priority first and commit time second. Therefore, this setting does not ensure transactional dependency ordering and browse read consistency for messages with different priorities. However, transactional dependency ordering and browse read consistency are ensured for messages with the same priority.

See Also:

"Creating an ANYDATA Queue" for information about creating a commit-time queue

Streams Staging and Propagation Architecture

This section describes buffered queues, propagation jobs, and secure queues, and how they are used in Streams. In addition, this section discusses how transactional queues handle captured messages and user-enqueued messages, as well as the need for a Streams data dictionary at databases that propagate captured messages.

This section contains the following topics:

Streams Pool

The Streams pool is a portion of memory in the System Global Area (SGA) that is used by Streams. The Streams pool stores buffered queue messages in memory, and it provides memory for capture processes and apply processes. The Streams pool always stores LCRs captured by a capture process, and it stores LCRs and messages that are enqueued into a buffered queue by applications or users.

The Streams pool is initialized the first time any one of the following actions occurs in a database:

  • A message is enqueued into a buffered queue. Data Pump export and import operations initialize the Streams pool because these operations use buffered queues.

  • A capture process is started.

  • An apply process is started.

The size of the Streams pool is determined in one of the following ways:

  • Automatically, by Automatic Shared Memory Management, when the SGA_TARGET initialization parameter is set to a nonzero value

  • Manually, by the STREAMS_POOL_SIZE initialization parameter, when SGA_TARGET is set to 0 (zero)

  • By default, when both the SGA_TARGET and STREAMS_POOL_SIZE initialization parameters are set to 0 (zero)

Note:

If the Streams pool cannot be initialized, then an ORA-00832 error is returned. If this happens, then first ensure that there is enough space in the SGA for the Streams pool. If necessary, reset the SGA_MAX_SIZE initialization parameter to increase the SGA size. Next, either set the SGA_TARGET or the STREAMS_POOL_SIZE initialization parameter (or both).

Streams Pool Size Set by Automatic Shared Memory Management

The Automatic Shared Memory Management feature manages the size of the Streams pool when the SGA_TARGET initialization parameter is set to a nonzero value. If the STREAMS_POOL_SIZE initialization parameter also is set to a nonzero value, then Automatic Shared Memory Management uses this value as a minimum for the Streams pool. You can set a minimum size if your environment needs a minimum amount of memory in the Streams pool to function properly.

See Also:

Oracle Database Administrator's Guide and Oracle Database Reference for more information about Automatic Shared Memory Management and the SGA_TARGET initialization parameter

Streams Pool Size Set Manually by a Database Administrator

If the STREAMS_POOL_SIZE initialization parameter is set to a nonzero value, and the SGA_TARGET parameter is set to 0 (zero), then the Streams pool size is the value specified by the STREAMS_POOL_SIZE parameter, in bytes. If you plan to set the Streams pool size manually, then you can use the V$STREAMS_POOL_ADVICE dynamic performance view to determine an appropriate setting for the STREAMS_POOL_SIZE initialization parameter.
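For example, the following statements (a sketch; the 256M value is arbitrary, and SCOPE = BOTH assumes the database uses a server parameter file) query the advice view and then set the Streams pool size manually:

-- Estimated spill activity for candidate Streams pool sizes
SELECT streams_pool_size_for_estimate, estd_spill_count, estd_spill_time
  FROM v$streams_pool_advice;

-- Set the Streams pool size when SGA_TARGET is 0
ALTER SYSTEM SET STREAMS_POOL_SIZE = 256M SCOPE = BOTH;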

Streams Pool Size Set by Default

If both the STREAMS_POOL_SIZE and the SGA_TARGET initialization parameters are set to 0 (zero), then, by default, the first use of Streams in a database transfers an amount of memory equal to 10% of the shared pool from the buffer cache to the Streams pool. The buffer cache is set by the DB_CACHE_SIZE initialization parameter, and the shared pool size is set by the SHARED_POOL_SIZE initialization parameter.

For example, consider the following configuration in a database before Streams is used for the first time:

  • DB_CACHE_SIZE is set to 100 MB.

  • SHARED_POOL_SIZE is set to 80 MB.

  • STREAMS_POOL_SIZE is set to zero.

  • SGA_TARGET is set to zero.

Given this configuration, the amount of memory allocated after Streams is used for the first time is the following:

  • The buffer cache has 92 MB.

  • The shared pool has 80 MB.

  • The Streams pool has 8 MB.

The first use of Streams in a database is the first attempt to allocate memory from the Streams pool. Memory is allocated from the Streams pool in the following ways:

  • A message is enqueued into a buffered queue. The message can be an LCR captured by a capture process, or it can be a user-enqueued LCR or message.

  • A capture process is started.

  • An apply process is started.

Buffered Queues

A buffered queue includes the following storage areas:

  • Streams pool memory associated with a queue that contains messages that were captured by a capture process or enqueued by applications or users

  • Part of a queue table that stores messages that have spilled from memory to disk

Queue tables are stored on disk. Buffered queues enable Oracle to optimize messages by buffering them in the SGA instead of always storing them in a queue table.

If the size of the Streams pool is not managed automatically, then you should increase the size of the Streams pool by 10 MB for each buffered queue in a database. Buffered queues improve performance, but some of the information in a buffered queue can be lost if the instance containing the buffered queue shuts down normally or abnormally. Streams automatically recovers from these cases, assuming full database recovery is performed on the instance.

Messages in a buffered queue can spill from memory into the queue table if they have been staged in the buffered queue for a period of time without being dequeued, or if there is not enough space in memory to hold all of the messages. Messages that spill from memory are stored in the appropriate AQ$_queue_table_name_p table, where queue_table_name is the name of the queue table for the queue. Also, for each spilled message, information is stored in the AQ$_queue_table_name_d table about any propagations and apply processes that are eligible for processing the message.

Captured messages are always stored in a buffered queue, but user-enqueued LCRs and user-enqueued non-LCR messages might or might not be stored in a buffered queue. For a user-enqueued message, the enqueue operation specifies whether the enqueued message is stored in the buffered queue or in the persistent queue. A persistent queue stores messages only on hard disk in a queue table, not in memory. The delivery_mode attribute in the enqueue_options parameter of the DBMS_AQ.ENQUEUE procedure determines whether a message is stored in the buffered queue or the persistent queue. Specifically, if the delivery_mode attribute is the default PERSISTENT, then the message is enqueued into the persistent queue. If it is set to BUFFERED, then the message is enqueued into the buffered queue. When a transaction is moved to the error queue, all messages in the transaction are always stored in a queue table, not in a buffered queue.
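For example, the following block (a minimal sketch with a placeholder queue name and payload) enqueues a message into the buffered portion of a queue by setting the delivery_mode attribute; buffered enqueues also require IMMEDIATE visibility:

DECLARE
  enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
  message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  message_handle     RAW(16);
BEGIN
  enqueue_options.delivery_mode := DBMS_AQ.BUFFERED;   -- store the message in memory, not in the queue table
  enqueue_options.visibility    := DBMS_AQ.IMMEDIATE;  -- buffered enqueues are not part of the transaction
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.strm01_queue',
    enqueue_options    => enqueue_options,
    message_properties => message_properties,
    payload            => SYS.ANYDATA.ConvertVarchar2('buffered test message'),
    msgid              => message_handle);
END;
/

Leaving delivery_mode at its default of DBMS_AQ.PERSISTENT would enqueue the message into the persistent queue instead.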

Note:

  • Using triggers on queue tables is not recommended because it can have a negative impact on performance. Also, the use of triggers on index-organized queue tables is not supported.

  • Although buffered and persistent messages can be stored in the same queue, it is sometimes more convenient to think of a queue having a buffered portion and a persistent portion, referred to here as "buffered queue" and "persistent queue".

Propagation Jobs

A Streams propagation is configured internally using the DBMS_JOB package. A propagation job is the job that a propagation uses to propagate messages from a source queue to a destination queue. Like other jobs configured using the DBMS_JOB package, propagation jobs have an owner, and they use job queue processes (Jnnn) as needed to execute jobs.

The following procedures can create a propagation job when they create a propagation:

  • The ADD_GLOBAL_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package

  • The ADD_SCHEMA_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package

  • The ADD_TABLE_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package

  • The ADD_SUBSET_PROPAGATION_RULE procedure in the DBMS_STREAMS_ADM package

  • The CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package

When one of these procedures creates a propagation, a new propagation job is created in the following cases:

  • If the queue_to_queue parameter is set to true, then a new propagation job always is created for the propagation. Each queue-to-queue propagation has its own propagation job. However, a job queue process can be used by multiple propagation jobs.

  • If the queue_to_queue parameter is set to false, then a propagation job is created when no propagation job exists for the source queue and database link specified. If a propagation job already exists for the specified source queue and database link, then the new propagation uses the existing propagation job and shares this propagation job with all of the other queue-to-dblink propagations that use the same database link.

A propagation job for a queue-to-dblink propagation can be used by more than one propagation. All destination queues at a database receive messages from a single source queue through a single propagation job. By using a single propagation job for multiple destination queues, Streams ensures that a message is sent to a destination database only once, even if the same message is received by multiple destination queues in the same database. Communication resources are conserved because messages are not sent more than once to the same database.

Note:

The source queue owner performs the propagation, but the propagation job is owned by the user who creates it. These two users might or might not be the same.

Propagation Scheduling and Streams Propagations

A propagation schedule specifies how often a propagation job propagates messages from a source queue to a destination queue. Each queue-to-queue propagation has its own propagation job and propagation schedule, but queue-to-dblink propagations that use the same propagation job have the same propagation schedule.

A default propagation schedule is established when a new propagation job is created by a procedure in the DBMS_STREAMS_ADM or DBMS_PROPAGATION_ADM package.

The default schedule has the following properties:

  • The start time is SYSDATE().

  • The duration is NULL, which means infinite.

  • The next time is NULL, which means that propagation restarts as soon as it finishes the current duration.

  • The latency is three seconds, which is the wait time after a queue becomes empty to resubmit the propagation job. Therefore, the latency is the maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued.

You can alter the schedule for a propagation job using the ALTER_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package. Changes made to a propagation job affect all propagations that use the propagation job.
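For example, the following call (a sketch with placeholder queue names and a placeholder database link; the destination_queue parameter identifies a queue-to-queue propagation) reduces the latency of a propagation schedule to five seconds:

BEGIN
  DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
    queue_name        => 'strmadmin.strm01_queue',
    destination       => 'dbs2.example.com',
    destination_queue => 'strmadmin.strm02_queue',
    latency           => 5);  -- wait at most 5 seconds after the queue becomes empty
END;
/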

Propagation Jobs and RESTRICTED SESSION

When the restricted session is enabled during system startup by issuing a STARTUP RESTRICT statement, propagation jobs with enabled propagation schedules do not propagate messages. When the restricted session is disabled, each propagation schedule that is enabled and ready to run will run when there is an available job queue process.

When the restricted session is enabled in a running database by the SQL statement ALTER SYSTEM ENABLE RESTRICTED SESSION, any running propagation job continues to run to completion. However, any new propagation job submitted for a propagation schedule is not started. Therefore, propagation for an enabled schedule can eventually come to a halt.

Secure Queues

Secure queues are queues for which AQ agents must be associated explicitly with one or more database users who can perform queue operations, such as enqueue and dequeue. The owner of a secure queue can perform all queue operations on the queue, but other users cannot perform queue operations on a secure queue, unless they are configured as secure queue users. In Streams, secure queues can be used to ensure that only the appropriate users and Streams clients enqueue messages into a queue and dequeue messages from a queue.

Secure Queues and the SET_UP_QUEUE Procedure

All ANYDATA queues created using the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package are secure queues. When you use the SET_UP_QUEUE procedure to create a queue, any user specified by the queue_user parameter is configured as a secure queue user of the queue automatically, if possible. The queue user is also granted ENQUEUE and DEQUEUE privileges on the queue. To enqueue messages into and dequeue messages from a queue, a queue user must also have EXECUTE privilege on the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package. The SET_UP_QUEUE procedure does not grant either of these privileges. Also, a message cannot be enqueued into a queue unless a subscriber who can dequeue the message is configured.

To configure a queue user as a secure queue user, the SET_UP_QUEUE procedure creates an AQ agent with the same name as the user name, if one does not already exist. The user must use this agent to perform queue operations on the queue. If an agent with this name already exists and is associated with the queue user only, then the existing agent is used. SET_UP_QUEUE then runs the ENABLE_DB_ACCESS procedure in the DBMS_AQADM package, specifying the agent and the user.

If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create a secure queue, and you want a user who is not the queue owner and who was not specified by the queue_user parameter to perform operations on the queue, then you can configure the user as a secure queue user of the queue manually. Alternatively, you can run the SET_UP_QUEUE procedure again and specify a different queue_user for the queue. In this case, SET_UP_QUEUE skips queue creation, but it configures the user specified by queue_user as a secure queue user of the queue.
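For example, the following call (a sketch; the queue table, queue name, and hr queue user are placeholders) creates a secure ANYDATA queue and configures hr as a secure queue user:

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.strm01_queue_table',
    queue_name  => 'strmadmin.strm01_queue',
    queue_user  => 'hr');  -- configured as a secure queue user with enqueue and dequeue privileges
END;
/

The hr user still needs EXECUTE privilege on the DBMS_STREAMS_MESSAGING package or the DBMS_AQ package before it can enqueue messages into or dequeue messages from the queue.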

If you create an ANYDATA queue using the DBMS_AQADM package, then you use the secure parameter when you run the CREATE_QUEUE_TABLE procedure to specify whether the queue is secure or not. The queue is secure if you specify true for the secure parameter when you run this procedure. When you use the DBMS_AQADM package to create a secure queue, and you want to allow users to perform queue operations on the secure queue, then you must configure these secure queue users manually.

Secure Queues and Streams Clients

When you create a capture process or an apply process, an AQ agent of the secure queue associated with the Streams process is configured automatically, and the user who runs the Streams process is specified as a secure queue user for this queue automatically. Therefore, a capture process is configured to enqueue into its secure queue automatically, and an apply process is configured to dequeue from its secure queue automatically. In either case, the AQ agent has the same name as the Streams client.

For a capture process, the user specified as the capture_user is the user who runs the capture process. For an apply process, the user specified as the apply_user is the user who runs the apply process. If no capture_user or apply_user is specified, then the user who invokes the procedure that creates the Streams process is the user who runs the Streams process.

Also, if you change the capture_user for a capture process or the apply_user for an apply process, then the specified capture_user or apply_user is configured as a secure queue user of the queue used by the Streams process. However, the old capture user or apply user remains configured as a secure queue user of the queue. To remove the old user, run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package, specifying the old user and the relevant AQ agent. You might also want to drop the agent if it is no longer needed. You can view the AQ agents and their associated users by querying the DBA_AQ_AGENT_PRIVS data dictionary view.
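The following sketch (with a hypothetical agent named STRM01_APPLY and a hypothetical old apply user hr) shows how to list agent-to-user associations and remove an old user's access:

-- List AQ agents and their associated database users
SELECT agent_name, db_username FROM dba_aq_agent_privs;

BEGIN
  -- Remove the old apply user's association with the agent
  DBMS_AQADM.DISABLE_DB_ACCESS(
    agent_name  => 'STRM01_APPLY',
    db_username => 'HR');
  -- Drop the agent only if it is no longer needed by any Streams client
  DBMS_AQADM.DROP_AQ_AGENT(agent_name => 'STRM01_APPLY');
END;
/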

When you create a messaging client, an AQ agent of the secure queue with the same name as the messaging client is associated with the user who runs the procedure that creates the messaging client. This messaging client user is specified as a secure queue user for this queue automatically. Therefore, this user can use the messaging client to dequeue messages from the queue.

A capture process, an apply process, or a messaging client can be associated with only one user. However, one user can be associated with multiple Streams clients, including multiple capture processes, apply processes, and messaging clients. For example, an apply process cannot have both hr and oe as apply users, but hr can be the apply user for multiple apply processes.

If you drop a capture process, apply process, or messaging client, then the users who were configured as secure queue users for these Streams clients remain secure queue users of the queue. To remove these users as secure queue users, run the DISABLE_DB_ACCESS procedure in the DBMS_AQADM package for each user. You might also want to drop the agent if it is no longer needed.

Note:

No configuration is necessary for propagations and secure queues. Therefore, when a propagation is dropped, no additional steps are necessary to remove secure queue users from the propagation's queues.

Transactional and Nontransactional Queues

A transactional queue is a queue in which user-enqueued messages can be grouped into a set that is applied as one transaction. That is, an apply process performs a COMMIT after it applies all the user-enqueued messages in a group. The SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package always creates a transactional queue.

A nontransactional queue is one in which each user-enqueued message is its own transaction. That is, an apply process performs a COMMIT after each user-enqueued message it applies. In either case, the user-enqueued messages might or might not contain user-created LCRs.

The difference between transactional and nontransactional queues is important only for user-enqueued messages. An apply process always applies captured messages in transactions that preserve the transactions executed at the source database. Table 3-1 shows apply process behavior for each type of message and each type of queue.

Table 3-1 Apply Process Behavior for Transactional and Nontransactional Queues

Message Type             Transactional Queue                                 Nontransactional Queue
-----------------------  --------------------------------------------------  --------------------------------------------------
Captured Messages        Apply process preserves the original transaction    Apply process preserves the original transaction
User-Enqueued Messages   Apply a user-specified group of user-enqueued       Apply each user-enqueued message in its own
                         messages as one transaction                         transaction


Streams Data Dictionary for Propagations

When a database object is prepared for instantiation at a source database, a Streams data dictionary is populated automatically at the database where changes to the object are captured by a capture process. The Streams data dictionary is a multiversioned copy of some of the information in the primary data dictionary at a source database. The Streams data dictionary maps object numbers, object version information, and internal column numbers from the source database into table names, column names, and column datatypes. This mapping keeps each captured message as small as possible, because the message can store numbers rather than names internally.

The mapping information in the Streams data dictionary at the source database is needed to evaluate rules at any database that propagates the captured messages from the source database. To make this mapping information available to a propagation, Oracle automatically populates a multiversioned Streams data dictionary at each database that has a Streams propagation. Oracle automatically sends internal messages that contain relevant information from the Streams data dictionary at the source database to all other databases that receive captured messages from the source database.

The Streams data dictionary information contained in these internal messages in a queue might or might not be propagated by a propagation. Which Streams data dictionary information to propagate depends on the rule sets for the propagation. When a propagation encounters Streams data dictionary information for a table, the propagation rule sets are evaluated with partial information that includes the source database name, table name, and table owner. If the partial rule evaluation of these rule sets determines that there might be relevant LCRs for the given table from the specified database, then the Streams data dictionary information for the table is propagated.

When Streams data dictionary information is propagated to a destination queue, it is incorporated into the Streams data dictionary at the database that contains the destination queue, in addition to being enqueued into the destination queue. Therefore, a propagation reading the destination queue in a directed networks configuration can forward LCRs immediately without waiting for the Streams data dictionary to be populated. In this way, the Streams data dictionary for a source database always reflects the correct state of the relevant database objects for the LCRs relating to these database objects.