Friday, October 23, 2009

Oracle Streams -11g features

Oracle Database 11g contains numerous Streams performance improvements, such as:

1. The reader process in the capture component mines from the in-memory redo log buffer whenever possible to minimize disk I/O and reduce capture latency.

2. The capture process communicates directly with the apply component to improve LCR transaction throughput and reduce end-to-end replication latency. This optimization, known as Combined Capture and Apply, is automatically enabled for single capture/single apply configurations (a quick check is sketched below).

3. The apply process employs an internal mechanism to execute change records and thus reduces CPU consumption. In addition, extensive caching minimizes latch contention and other wait events.
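
If you want to confirm whether Combined Capture and Apply is in effect, the query below is a minimal sketch; it assumes the OPTIMIZATION column that 11g added to V$STREAMS_CAPTURE, where a value greater than zero indicates the optimization is active for that capture process.

-- Run on the capture database
select capture_name, state, optimization
from v$streams_capture;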

Sunday, September 20, 2009

Adding a Partitioned Table to Streams

Steps for adding a partitioned table to an existing single-source Streams replication environment.

Source database (S): ORCL.ORACLEDB1.KRISH.COM
Target database (T): MYDEV.ORACLEDB2.KRISH.COM
Source & Target Schema: SCOTT
Table: SALES_PART

Note: Ensure that Streams replication has already been set up and a Streams administrator has been created before executing the following steps.

Steps for adding a Partitioned table:

1. Stop the capture process(S)
2. Setup logging & supplemental logging for the table (S)
3. Add capture rules on the table (S)
4. Add propagation rules on the table(S)
5. Set Instantiation of table from Source(S)
6. Export, Import of table(Data) from Source(S) to Target (T)
7. Add apply rules to the table(T)
8. Start the capture process(S)

1 Stop the capture process(S)

connect strmadmin/strmORCL@ORCL

SQL> BEGIN
DBMS_CAPTURE_ADM.STOP_CAPTURE(
capture_name => 'STREAMS_CAPTURE');
END;
/
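
To confirm the capture process is stopped before changing any rules, check its status (it should show DISABLED):

SQL> select capture_name, status from dba_capture;
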
2 Configure logging & supplemental logging for the table (S)

connect strmadmin/strmORCL@ORCL

-- Create the partitioned table if it does not exist on the source database.

SQL> create table SCOTT.SALES_PART
(INVOICE_NO NUMBER,
SALE_YEAR INT NOT NULL,
SALE_MONTH INT NOT NULL,
SALE_DAY INT NOT NULL)
partition by range (SALE_YEAR,SALE_MONTH,SALE_DAY)
(PARTITION SALES_Q1 VALUES LESS THAN (1994, 04, 01) TABLESPACE TBS_USERDATA,
PARTITION SALES_Q2 VALUES LESS THAN (1994,07,01) TABLESPACE TBS_USERDATA)
/

alter table SCOTT.SALES_PART logging;
alter table SCOTT.SALES_PART add supplemental log data (primary key,unique,foreign key,all) columns;

select partition_name, logging from dba_tab_partitions where table_name='SALES_PART' and TABLE_OWNER='SCOTT';
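
The supplemental log groups created for the table can be verified from DBA_LOG_GROUPS (a general dictionary check, not specific to Streams):

select log_group_name, log_group_type, always
from dba_log_groups
where owner='SCOTT' and table_name='SALES_PART';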

3 Add capture rules on the table (S)
Source(ORCL):

-- This script needs to be run as "strmadmin"

connect strmadmin/strmORCL@ORCL


BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.SALES_PART',
streams_type => 'capture',
streams_name => 'STREAMS_CAPTURE',
queue_name => 'strmadmin.STREAMS_QUEUE_ORCL_MYDEV',
include_dml => true,
include_ddl => true,
inclusion_rule => true);
END;
/

4 Add propagation rules on the table(S)

Source(ORCL):

-- This script needs to be run as "strmadmin"

connect strmadmin/strmORCL@ORCL

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.SALES_PART',
streams_name => 'STREAMS_PROPAGATE',
source_queue_name => 'strmadmin.STREAMS_QUEUE_ORCL_MYDEV',
destination_queue_name => 'strmadmin.STREAMS_QUEUE_ORCL_MYDEV@MYDEV',
include_dml => true,
include_ddl => true,
source_database => 'ORCL',
inclusion_rule => true);
END;
/
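
At this point the capture and propagation rules created for the table can be verified on the source from DBA_STREAMS_TABLE_RULES, for example:

select streams_name, streams_type, rule_type, rule_name
from dba_streams_table_rules
where table_owner='SCOTT' and table_name='SALES_PART';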

5 Set Instantiation of table from Source (S)

--This script needs to be run as "strmadmin".

Source(ORCL):

connect strmadmin/strmORCL@ORCL

BEGIN
DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION('SCOTT.SALES_PART');
END;
/
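
To confirm the table is prepared for instantiation, check DBA_CAPTURE_PREPARED_TABLES:

select table_owner, table_name, scn
from dba_capture_prepared_tables
where table_owner='SCOTT' and table_name='SALES_PART';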

6 Export, import of table (Data) from Source(S) to Target (T)

Source(ORCL):


$exp parfile=exp_sales_part.par

#exp_sales_part.par
USERID=system@ORCL
FILE=exp_SALES_PART.dmp
LOG=exp_SALES_PART.log
ROWS=Y
GRANTS=Y
INDEXES=Y
TRIGGERS=N
CONSTRAINTS=Y
CONSISTENT=Y
STATISTICS=NONE
TABLES= SCOTT.SALES_PART

FTP the exp_SALES_PART.dmp file to the target site.

Target(MYDEV):

$imp parfile=imp_MYDEV_sales_part.par

#imp_MYDEV_sales_part.par
USERID=system@MYDEV
FILE=exp_SALES_PART.dmp
LOG=imp_SALES_PART.log
ROWS=Y
IGNORE=Y
resumable=y
resumable_timeout=43200
commit=y
statistics=NONE
buffer=50000000
RECORDLENGTH=65535
STREAMS_INSTANTIATION=Y
FULL=Y

Make sure that the target table is instantiated:

select SOURCE_DATABASE,SOURCE_OBJECT_OWNER,SOURCE_OBJECT_NAME,
INSTANTIATION_SCN from dba_apply_instantiated_objects where SOURCE_OBJECT_NAME='SALES_PART';

7 Add apply rules to the table(T)

Target(MYDEV):

connect strmadmin/strmMYDEV@MYDEV


BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.SALES_PART',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name =>
'strmadmin.STREAMS_QUEUE_ORCL_MYDEV',
include_dml => true,
include_ddl => true,
source_database => 'ORCL',
inclusion_rule => true);
END;
/
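
The apply rules for the table can be confirmed on the target with the same dictionary view used on the source:

select streams_name, streams_type, rule_type, rule_name
from dba_streams_table_rules
where table_owner='SCOTT' and table_name='SALES_PART';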

-- grant all permissions on SALES_PART to streams administrator
conn / as sysdba

GRANT ALL ON SCOTT.SALES_PART to strmadmin;


8 Start the capture process(S)

connect strmadmin/strmORCL@ORCL

SQL> BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'STREAMS_CAPTURE');
END;
/
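
Once restarted, the capture process state and message counts can be watched from V$STREAMS_CAPTURE while the verification steps below are performed:

select capture_name, state, total_messages_captured
from v$streams_capture;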

Verification:

1. Add a partition

SQL> alter table SCOTT.SALES_PART add partition SALES_Q3
VALUES LESS THAN(1994,10,01) TABLESPACE TBS_USERDATA;

Check on the source and target databases.
-----------------------------------------
select TABLE_NAME,partition_name from dba_tab_partitions where table_name='SALES_PART' and TABLE_OWNER='SCOTT';

2. Drop partition
Alter table SCOTT.SALES_PART drop partition SALES_Q3;

3. Move partition to different tablespace
Alter table SCOTT.SALES_PART MOVE PARTITION SALES_Q2 TABLESPACE STREAMS01;

select TABLE_NAME,partition_name,tablespace_name from dba_tab_partitions where table_name='SALES_PART' and TABLE_OWNER='SCOTT';

4. Perform DML:
Insert into SCOTT.SALES_PART values (1,1994,02,05);
Insert into SCOTT.SALES_PART values (2,1994,05,10);
Commit;

DELETE FROM SCOTT.SALES_PART;
COMMIT;
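
If any of these DML changes fail to apply on the target, the details show up in DBA_APPLY_ERROR on MYDEV:

select apply_name, local_transaction_id, error_message
from dba_apply_error;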


5. Truncate table partitions:

Alter table SCOTT.SALES_PART TRUNCATE PARTITION SALES_Q1;
Alter table SCOTT.SALES_PART TRUNCATE PARTITION SALES_Q2;

6. SPLIT Table Partitions

Alter table SCOTT.SALES_PART SPLIT PARTITION SALES_Q2 AT(1994,06,01)
INTO (PARTITION SALES_Q2, PARTITION SALES_Q3);

select TABLE_NAME,partition_name,tablespace_name from dba_tab_partitions where table_name='SALES_PART' and TABLE_OWNER='SCOTT';

TABLE_NAME PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------ --------------------
SALES_PART SALES_Q1 TBS_USERDATA
SALES_PART SALES_Q2 STREAMS01
SALES_PART SALES_Q3 STREAMS01

7. Merge partition

Alter table SCOTT.SALES_PART MERGE PARTITIONS SALES_Q2, SALES_Q3 INTO PARTITION SALES_Q23;

TABLE_NAME PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------ -------------------
SALES_PART SALES_Q1 TBS_USERDATA
SALES_PART SALES_Q23 TBS_USERDATA

8. Rename Partition

Alter table SCOTT.SALES_PART RENAME PARTITION SALES_Q23 TO SALES_Q2;

TABLE_NAME PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------ --------------------
SALES_PART SALES_Q1 TBS_USERDATA
SALES_PART SALES_Q2 TBS_USERDATA

9. Create LOCAL index on SALES_PART

CREATE INDEX SCOTT.SALES_PART_IDX ON SCOTT.SALES_PART(INVOICE_NO,SALE_MONTH)
LOCAL
(PARTITION SALES_Q1 TABLESPACE TBS_USERIDX,
PARTITION SALES_Q2 TABLESPACE TBS_USERIDX);
select TABLE_NAME,INDEX_NAME,PARTITIONING_TYPE,LOCALITY,ALIGNMENT from dba_part_indexes where TABLE_NAME like 'SALES_PART';

TABLE_NAME INDEX_NAME PARTITI LOCALI ALIGNMENT
---------- ------------------------------ ------- ------ ------------
SALES_PART SALES_PART_IDX RANGE LOCAL NON_PREFIXED

select INDEX_NAME,COMPOSITE,PARTITION_NAME,HIGH_VALUE,PARTITION_POSITION,STATUS,USER_STATS FROM DBA_IND_PARTITIONS WHERE INDEX_NAME LIKE 'SALES_PART_IDX';

INDEX_NAME COM PARTITION_NAME HIGH_VALUE PARTITION_POSITION STATUS USE
------------------------------ --- ------------------------------ --------------- ------------------ -------- ---
SALES_PART_IDX NO SALES_Q1 1994, 04, 01 1 USABLE NO
SALES_PART_IDX NO SALES_Q2 1994, 07, 01 2 USABLE NO

Friday, August 7, 2009

Streams Database Move

Streams source database Migration:
Below are the steps I followed to migrate a Streams source database. Streams was configured at the table level using local capture.
1. Stopped streams capture & propagation process prior to export on source db.
2. Checked for any errors on the streams target database in dba_apply_error.
3. Exported the full database using Data Pump (set the FLASHBACK_TIME or FLASHBACK_SCN parameter; a parfile sketch follows this list).
4. Created new source database at new destination & enabled Archived Log mode.
5. Imported full database.
6. Ran utlrp.sql.
7. Disabled the propagation process.
8. Configured listener and tnsnames for new source db.
9. Ran Instantiation for the replication objects.
10. Started propagation process then capture process on the new source db.
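
A minimal Data Pump export parfile sketch for step 3. The directory object, file names, and the SCN value are illustrative placeholders, not taken from the original setup:

# expdp_full.par
USERID=system/pwd@ORCL.ORACLEDB1.KRISH.COM
FULL=Y
DIRECTORY=dpump
DUMPFILE=expdp_full.dmp
LOGFILE=expdp_full.log
FLASHBACK_SCN=<SCN noted after stopping capture and propagation>

$ expdp parfile=expdp_full.par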

Wednesday, March 18, 2009

Adding a table to a single source Streams environment

Adding a table to an existing single source Streams replication environment while the table is active.

Source database (S): ORCL.ORACLEDB1.KRISH.COM
Target database (T): MYDEV.ORACLEDB2.KRISH.COM
Source & Target Schema: SCOTT
Adding Table: DEPT

Steps to add the table “DEPT”, which exists at ORCL.ORACLEDB1.KRISH.COM, to the Streams environment without stopping the existing apply/propagation/capture processes:

Note: Ensure that Streams replication has already been set up and a Streams administrator has been created before executing the following scripts.

1. Add apply rules on the table(T)
2. Add propagation rules on the table(S)
3. Configure supplemental logging for the table (S)
4. Add capture rules on the table (S)
5. Set Instantiation of table from Source(S)
6. Export, Import of table(Data) from Source(S) to Target (T)
7. Grant privileges to strmadmin user(T)

1 Add apply rules on the table(T)

Target(MYDEV.ORACLEDB2.KRISH.COM):

connect strmadmin/strmpwd@MYDEV.ORACLEDB2.KRISH.COM

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.DEPT',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'strmadmin.STREAMS_QUEUE_ORCL_MYDEV',
include_dml => true,
include_ddl => true,
source_database => 'ORCL.ORACLEDB1.KRISH.COM',
inclusion_rule => true);
END;
/

2 Add propagation rules on the table(S)

Source(ORCL.ORACLEDB1.KRISH.COM):


-- This script needs to be run as "strmadmin"

connect strmadmin/strmpwd@ORCL.ORACLEDB1.KRISH.COM

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.DEPT',
streams_name => 'STREAMS_PROPAGATE',
source_queue_name => 'strmadmin.STREAMS_QUEUE_ORCL_MYDEV',
destination_queue_name => 'strmadmin.STREAMS_QUEUE_ORCL_MYDEV@MYDEV.ORACLEDB2.KRISH.COM',
include_dml => true,
include_ddl => true,
source_database => 'ORCL.ORACLEDB1.KRISH.COM',
inclusion_rule => true);
END;
/

3 Configure supplemental logging for the table (S)

connect strmadmin/strmpwd@ORCL.ORACLEDB1.KRISH.COM

alter table SCOTT.DEPT add supplemental log data (primary key, unique, foreign key, all) columns;

4 Add capture rules on the table (S)

Source(ORCL.ORACLEDB1.KRISH.COM):

-- This script needs to be run as "strmadmin"

connect strmadmin/strmpwd@ORCL.ORACLEDB1.KRISH.COM

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.DEPT',
streams_type => 'capture',
streams_name => 'STREAMS_CAPTURE',
queue_name => 'strmadmin.STREAMS_QUEUE_ORCL_MYDEV',
include_dml => true,
include_ddl => true,
inclusion_rule => true);
END;
/

5 Set Instantiation of the table from Source (S)

--This script needs to be run as "strmadmin".

Source(ORCL.ORACLEDB1.KRISH.COM):

connect strmadmin/strmpwd@ORCL.ORACLEDB1.KRISH.COM

BEGIN
DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION('SCOTT.DEPT');
END;
/

6 Export, import of table (Data) from Source(S) to Target (T)

Source(ORCL.ORACLEDB1.KRISH.COM):

$exp parfile=DEPT.par

DEPT.par
USERID=system/pwd@ORCL.ORACLEDB1.KRISH.COM
FILE=DEPT.dmp
LOG= DEPT.log
ROWS=Y
GRANTS=Y
INDEXES=Y
TRIGGERS=N
CONSTRAINTS=Y
CONSISTENT=Y
STATISTICS=NONE
TABLES= SCOTT.DEPT

ftp the DEPT.dmp file to target site

Target(MYDEV.ORACLEDB2.KRISH.COM):

$imp parfile=imp_DEPT.par

imp_DEPT.par
USERID=system/pwd@MYDEV.ORACLEDB2.KRISH.COM
FILE=DEPT.dmp
LOG= DEPT.log
ROWS=Y
IGNORE=Y
STREAMS_INSTANTIATION=Y
FULL=Y

7 Grant privileges to strmadmin user(T)
Target(MYDEV.ORACLEDB2.KRISH.COM):
-- grant all permissions on DEPT to streams administrator
conn SCOTT/tiger@MYDEV.ORACLEDB2.KRISH.COM

GRANT ALL ON SCOTT.DEPT to strmadmin;

Monday, March 16, 2009

Hub and Spoke Streams Replication Setup between 3 Databases

Hub-and-spoke replication typically has a single hub database to which all the spoke databases replicate. Changes (DML & DDL) originating on the hub are replicated to all the spoke databases.
I will publish the document with an example soon.

Saturday, February 7, 2009

Oracle Streams Overview

Oracle Streams enables information sharing. It can share database changes and other information in a stream, which can propagate events within a database or from one database to another, routing the specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing, managing, and sharing information with other databases and applications.

- A capture process is an Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects. It formats these changes into events called logical change records (LCRs) and enqueues them into a queue.

- A propagation sends events from one queue to another; these queues can be in the same database or in different databases.

- An apply process is an Oracle background process that dequeues events from a queue and applies each event directly to a database object or sends events to apply handlers for custom processing.
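
In a configured environment, each of these components can be listed from its dictionary view, for example:

select capture_name, status from dba_capture;
select propagation_name, destination_dblink from dba_propagation;
select apply_name, status from dba_apply;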

Saturday, January 10, 2009

Re-Synchronize/Refresh a Table in Streams Environment

Purpose:

The purpose of this article is to outline the steps to re-synchronize/refresh a single table that is out of sync with the source site in one-way replication between two Oracle databases in a Streams environment.

Example:
Schema:SCOTT
Table:EMP

Steps for Refreshing a Table

If one of the tables on the target site is out of sync with the source site, follow the steps below to re-synchronize them:

1. Stop Capture Process on Source(S)
2. Stop Propagation Process on Source(S)
3. Stop Apply Process on Target(T)
4. Delete the apply errors on Target(T)
5. Truncate the problem table on Target(T)
6. Re-instantiate the problem table on Source(S)
7. Synchronize the problem table through exp/imp
8. Start the apply Process on Target(T)
9. Start the propagation Process on the Source(S)
10.Start the Capture Process on the Source(S)

1. Stop Capture Process on Source(S)

Source(ORCL.ORACLEDB1.KRISH.COM):

connect strmadmin/strmadmin@ORCL.ORACLEDB1.KRISH.COM

SQL> BEGIN
DBMS_CAPTURE_ADM.STOP_CAPTURE(
capture_name => 'STREAMS_CAPTURE');
END;
/

2. Stop Propagation Process on Source(S)

Make sure all the changes have been propagated to the target site, then stop the propagation process on the source.
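
One way to check that the source buffered queue has drained is to query V$BUFFERED_QUEUES on the source; NUM_MSGS and SPILL_MSGS should be at or near zero:

select queue_schema, queue_name, num_msgs, spill_msgs
from v$buffered_queues;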

Source(ORCL.ORACLEDB1.KRISH.COM):

connect strmadmin/strmadmin@ORCL.ORACLEDB1.KRISH.COM

SQL>BEGIN
DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
'STREAMS_PROPAGATE',FORCE=>TRUE);
END;
/

3. Stop Apply Process on Target(T)

Make sure all the changes have been applied on the target, then stop the apply process on the target.
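
A rough check on the target is to compare the received and applied message counts for the apply process in V$STREAMS_APPLY_COORDINATOR:

select apply_name, state, total_received, total_applied
from v$streams_apply_coordinator;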

Target(MYDEV.ORACLEDB2.KRISH.COM):

connect strmadmin/strmadmin@MYDEV.ORACLEDB2.KRISH.COM

SQL> BEGIN
DBMS_APPLY_ADM.STOP_APPLY(
apply_name => 'STREAMS_APPLY');
END;
/

4. Delete the apply errors on Target

As we will re-synchronize, we can delete the errors in the error queue of the apply process. To do so, execute either DBMS_APPLY_ADM.DELETE_ERROR or DBMS_APPLY_ADM.DELETE_ALL_ERRORS at the apply site.

Make sure that you are not deleting any needed transactions, especially if you are not re-synchronizing all of the objects replicated for that apply process.

Target(MYDEV.ORACLEDB2.KRISH.COM):

Check the errors in the error queue:

SQL> select apply_name, LOCAL_TRANSACTION_ID, ERROR_MESSAGE from dba_apply_error;

To delete specific transaction:

SQL> exec DBMS_APPLY_ADM.DELETE_ERROR('transaction_id');

To delete all errors:

SQL> exec DBMS_APPLY_ADM.DELETE_ALL_ERRORS('apply_name');

5. Truncate the problem table on Target

Truncate the problem table if you are using traditional export/import for copying the data.

If you are using Data Pump expdp/impdp, skip step 5 and go to step 7 (use the parameter TABLE_EXISTS_ACTION=truncate).

Target(MYDEV.ORACLEDB2.KRISH.COM):

Connect strmadmin/strmadmin@MYDEV.ORACLEDB2.KRISH.COM

SQL> Truncate table SCOTT.EMP;

6. Re-instantiate the problem table on Source(S)

Source(ORCL.ORACLEDB1.KRISH.COM):

connect strmadmin/strmadmin@ORCL.ORACLEDB1.KRISH.COM

BEGIN
DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION('SCOTT.EMP');
END;
/

DECLARE
iscn NUMBER;
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@MYDEV.ORACLEDB2.KRISH.COM(
source_object_name => 'SCOTT.EMP',
source_database_name => 'ORCL.ORACLEDB1.KRISH.COM',
instantiation_scn => iscn);
END;
/
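
On the target, the new instantiation SCN can be confirmed from DBA_APPLY_INSTANTIATED_OBJECTS:

select source_database, source_object_owner, source_object_name, instantiation_scn
from dba_apply_instantiated_objects
where source_object_name='EMP';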

7. Synchronize the problem table through datapump exp/imp

Now synchronize the problem table on the source and target so they are identical. Here we copy the data from the source to the target using Data Pump export/import.

7a) Export the source table:

For example, since we are replicating the table SCOTT.EMP, we do the following:

Source(ORCL.ORACLEDB1.KRISH.COM):

expdp parfile=expdp.par

expdp.par
userid=system/pwd@ORCL.ORACLEDB1.KRISH.COM
tables=SCOTT.EMP
DUMPFILE=dpump:expdpScott.dmp
LOGFILE= dpump:expdpScott.log
JOB_NAME=exp_job
CONTENT=DATA_ONLY

7b) Import the table into the target:

Target(MYDEV.ORACLEDB2.KRISH.COM):

impdp parfile=impdp.par

impdp.par
userid=system/pwd@MYDEV.ORACLEDB2.KRISH.COM
tables=SCOTT.EMP
DUMPFILE=dpump:expdpScott.dmp
LOGFILE= dpump:impdpScott.log
JOB_NAME=imp_job
TABLE_EXISTS_ACTION=truncate

8. Start the Apply Process on Target(T)

Target(MYDEV.ORACLEDB2.KRISH.COM):

connect strmadmin/strmadmin@MYDEV.ORACLEDB2.KRISH.COM

SQL> BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'STREAMS_APPLY');
END;
/
Make sure the apply process is started:

SQL> select apply_name, status from dba_apply;

9. Start the propagation Process on the Source(S)

Source(ORCL.ORACLEDB1.KRISH.COM):

connect strmadmin/strmadmin@ORCL.ORACLEDB1.KRISH.COM

SQL> BEGIN
DBMS_PROPAGATION_ADM.START_PROPAGATION('STREAMS_PROPAGATE');
END;
/

10. Start the Capture Process on the Source(S)

Source(ORCL.ORACLEDB1.KRISH.COM):

connect strmadmin/strmadmin@ORCL.ORACLEDB1.KRISH.COM

SQL> BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'STREAMS_CAPTURE');
END;
/

Make sure the capture process is started:

SQL> select capture_name, status from dba_capture;

Conclusion:
The target database is not up to date until the refresh is completed.