Today we are going to learn the steps used for setting up table replication in Oracle GoldenGate. In this tutorial, we will set up one-way GoldenGate replication for the two tables below, from database SRCDB to TRGDB.

DBACLASS.EMP;
DBACLASS.DEPT;

             SOURCE                       TARGET
HOST         172.65.47.32                 172.20.21.56
DB_NAME      SRCDB                        TRGDB
GG PROCESS   EXT1 (extract)               REPR1 (replicat)
             EXT1P (extract pump)
TABLES       DBACLASS.EMP                 DBACLASS.EMP
             DBACLASS.DEPT                DBACLASS.DEPT

Once the installation is done, proceed with the steps below.

1. Enable supplemental logging for those tables.[SOURCE]

GGSCI > dblogin USERID ggate_user, PASSWORD ggate_user

Successfully logged into the database.

GGSCI > add trandata DBACLASS.EMP

Logging of supplemental redo data enabled for table DBACLASS.EMP.
TRANDATA for scheduling columns has been added on table ‘DBACLASS.EMP’.
TRANDATA for instantiation CSN has been added on table ‘DBACLASS.EMP’.
GGSCI >

GGSCI > add trandata DBACLASS.DEPT

Logging of supplemental redo data enabled for table DBACLASS.DEPT.
TRANDATA for scheduling columns has been added on table ‘DBACLASS.DEPT’.
TRANDATA for instantiation CSN has been added on table ‘DBACLASS.DEPT’.
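
As an optional sanity check, the supplemental logging status can be verified from the same GGSCI session with INFO TRANDATA:

```
GGSCI > info trandata DBACLASS.EMP
GGSCI > info trandata DBACLASS.DEPT
```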

2. Prepare the extract parameter file [SOURCE]

GGSCI> edit params EXT1

EXTRACT EXT1
SETENV (ORACLE_SID="SRCDB")
SETENV (ORACLE_HOME = "/home/oracle/app/oracle/product/12.1.0/dbhome_1")
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
USERID ggate_user, PASSWORD ggate_user
FETCHOPTIONS FETCHPKUPDATECOLS, USESNAPSHOT, USELATESTVERSION
DBOPTIONS ALLOWUNUSEDCOLUMN
DISCARDFILE /ggate/app/oracle/product/11.2.1.0.3/TST/EXT1.dsc, APPEND, MEGABYTES 100
EXTTRAIL /ggate/app/oracle/product/11.2.1.0.3/TST/E5
WARNLONGTRANS 6H, CHECKINTERVAL 1H
TABLE DBACLASS.EMP;
TABLE DBACLASS.DEPT;

Here, /ggate/app/oracle/product/11.2.1.0.3/TST is the directory on the LOCAL (source) host where the extract trail files will be created with the prefix E5*.

3. Prepare the extract pump parameter file [SOURCE]

GGSCI> EDIT PARAMS EXT1P

EXTRACT EXT1P
SETENV (ORACLE_HOME = "/home/oracle/app/oracle/product/12.1.0/dbhome_1")
SETENV (ORACLE_SID = "SRCDB")
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
USERID ggate_user, PASSWORD ggate_user
RMTHOST 172.20.21.56, MGRPORT 7809
RMTTRAIL /ggate/oradata/datacapture/TRG/P1
TABLE DBACLASS.EMP;
TABLE DBACLASS.DEPT;

RMTTRAIL – Here /ggate/oradata/datacapture/TRG is the directory on the TARGET host where the extract trails from the source will be pushed, with the prefix P1*.

4. Add extract process [SOURCE]

GGSCI > add extract EXT1, tranlog, begin now

EXTRACT added.

5. Add the extract trail [SOURCE]

GGSCI > add exttrail /ggate/app/oracle/product/11.2.1.0.3/TST/E5,extract EXT1
EXTTRAIL added.

6. Add the extract pump process [SOURCE]

GGSCI > add extract EXT1P, exttrailsource /ggate/app/oracle/product/11.2.1.0.3/TST/E5
EXTRACT added.

7. Add the remote trail [SOURCE]

GGSCI > add rmttrail /ggate/oradata/datacapture/TRG/P1, extract EXT1P

Now we will configure the replicat on the target server:

8. Prepare the replicat parameter file [ TARGET ]

Here we have to add the HANDLECOLLISIONS parameter to resolve conflicts for transactions applied during the initial load. We will remove this parameter once the lag is cleared after the initial load.

ggsci> edit params REPR1

REPLICAT REPR1
setenv (ORACLE_HOME="/home/oracle/app/oracle/product/12.1.0/dbhome_1")
setenv (ORACLE_SID="TRGDB")
USERID ggate_user, PASSWORD ggate_user
DBOPTIONS SUPPRESSTRIGGERS,DEFERREFCONST
ASSUMETARGETDEFS
HANDLECOLLISIONS
DISCARDFILE /ggate/oradata/datacapture/TRG/reprt.dsc,append,megabytes 1000
MAP DBACLASS.EMP,TARGET DBACLASS.EMP;
MAP DBACLASS.DEPT,TARGET DBACLASS.DEPT;

9. Add the replicat process [ TARGET ]

GGSCI > add replicat REPR1, exttrail /ggate/oradata/datacapture/TRG/P1
REPLICAT added.

The complete setup is now done. The next step is to start the extract process and perform the initial load.
There are multiple methods for initial loading, but here we will use the simple expdp/impdp method.
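
As a side note, a common variation of the expdp/impdp method is to pin the export to a known SCN, so that the replicat can later be started with AFTERCSN instead of relying on HANDLECOLLISIONS alone. A sketch, with <scn> as an illustrative placeholder:

```
-- On the source, capture the current SCN
SQL> select current_scn from v$database;

-- Export the tables as of that SCN
expdp dumpfile=initload.dmp logfile=initload.log directory=DIRT \
      tables=DBACLASS.EMP,DBACLASS.DEPT flashback_scn=<scn>

-- On the target, apply only the changes after that SCN
GGSCI > start replicat REPR1, aftercsn <scn>
```

In this tutorial, however, we stick with the simpler HANDLECOLLISIONS approach.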

10. Start the extract [ SOURCE ]

GGSCI > start EXT1

Sending START request to MANAGER …
EXTRACT EXT1 starting

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EXT1 00:00:00 00:00:03 —– >>> RUNNING
EXTRACT STOPPED EXT1P 00:00:00 00:02:34
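
Optionally, before proceeding with the initial load, you can confirm that the extract is writing to the local trail (same GGSCI session assumed):

```
GGSCI > info extract EXT1, detail
```

The detail output includes the trail file the extract is currently writing to.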

 

Now that the extract is running (i.e., it has started capturing transactions), we will proceed with the initial load using the Data Pump method.

11. INITIAL LOADING:

EXPORT FROM SOURCE:

expdp dumpfile=initload.dmp logfile=initload.log directory=DIRT tables=DBACLASS.EMP,DBACLASS.DEPT


With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting “SYS”.”SYS_EXPORT_TABLE_02″: /******** AS SYSDBA dumpfile=initload.dmp logfile=initload.log directory=DIRT tables=DBACLASS.EMP,DBACLASS.DEPT
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 16 MB
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported “DBACLASS”.”EMP” 3.473 MB 90323 rows
. . exported “DBACLASS”.”DEPT” 3.473 MB 90323 rows
Master table “SYS”.”SYS_EXPORT_TABLE_02″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_02 is:
/archive/dump/initload.dmp
Job “SYS”.”SYS_EXPORT_TABLE_02″ successfully completed at Sun Jul 23 11:56:50 2017 elapsed 0 00:00:13
COPY THE DUMP TO TARGET:

scp oracle@***********:/archive/dump/initload.dmp .

IMPORT IN TARGET DB:

impdp dumpfile=initload.dmp logfile=imp_initialload.log directory=FULL_REFRESH

Import: Release 12.1.0.2.0 – Production on Sun Jul 23 12:01:58 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics, and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_04″ successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in AL32UTF8 character set and UTF8 NCHAR character set
WARNING: possible data loss in character set conversions
Starting “SYS”.”SYS_IMPORT_FULL_04″: /******** AS SYSDBA dumpfile=initload.dmp logfile=imp_initialload.log directory=FULL_REFRESH
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “DBACLASS”.”EMP” 3.473 MB 90323 rows
. . imported “DBACLASS”.”DEPT” 3.473 MB 90323 rows
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Initial loading is completed.

12. Start the datapump extract [ SOURCE ]

GGSCI > start EXT1P

Sending START request to MANAGER …
EXTRACT EXT1P starting

GGSCI (dm-relay-dev-Test) 13> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EXT1 00:00:00 00:00:03
EXTRACT RUNNING EXT1P 00:00:00 00:00:01

13. Start the replicat [ TARGET ]

GGSCI (DBDvLBSS01) 17> start REPR1

Sending START request to MANAGER …
REPLICAT REPR1 starting

14. Remove HANDLECOLLISIONS after the lag is cleared and restart the replicat.

GGSCI > lag replicat REPR1

2017-07-24 15:33:12 INFO OGG-14054 Lag from heartbeat table requires DBLOGIN.

Sending GETLAG request to REPLICAT REPR1 …
Last record lag 6 seconds.

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING REPR1 00:43:54 00:00:06 — >>>> Lag at checkpoint is not zero yet.

GGSCI > !

2017-07-24 15:33:12 INFO OGG-14054 Lag from heartbeat table requires DBLOGIN.

Sending GETLAG request to REPLICAT REPR1 …
Last record lag 6 seconds.

At EOF, no more records to process. ——– >>>>>>>>> This indicates lag is zero.

GGSCI > info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING REPR1 00:00:00 00:00:06 — > Lag at checkpoint is zero, i.e lag is zero now.
As the lag is now zero, remove the HANDLECOLLISIONS parameter (edit params REPR1) and restart the replicat.

REPLICAT REPR1
setenv (ORACLE_HOME="/home/oracle/app/oracle/product/12.1.0/dbhome_1")
setenv (ORACLE_SID="TRGDB")
USERID ggate_user, PASSWORD ggate_user
DBOPTIONS SUPPRESSTRIGGERS,DEFERREFCONST
ASSUMETARGETDEFS
--HANDLECOLLISIONS
DISCARDFILE /ggate/oradata/datacapture/TRG/reprt.dsc,append,megabytes 1000
MAP DBACLASS.EMP,TARGET DBACLASS.EMP;
MAP DBACLASS.DEPT,TARGET DBACLASS.DEPT;

GGSCI > stop REPR1

GGSCI > start REPR1

Let's insert a record on the source and see whether it replicates to the target:

SQL> insert into "DBACLASS"."EMP" values ('TEST','DB','DB2',384302938);

1 row created.

SQL> COMMIT;

Commit complete.

GGSCI > stats EXT1

Sending STATS request to EXTRACT EXT1 …

Start of Statistics at 2017-07-23 12:07:45.

Output to /ggate/app/oracle/product/11.2.1.0.3/TST/E5:

Extracting from DBACLASS.EMP to DBACLASS.EMP:

*** Total statistics since 2017-07-23 12:04:16 ***
Total inserts 1.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00

 

Check the replication status:

GGSCI > stats REPR1

Sending STATS request to REPLICAT REPR1 …

Start of Statistics at 2017-07-23 12:06:58.

Replicating from DBACLASS.EMP to DBACLASS.EMP:

*** Total statistics since 2017-07-23 12:06:58 ***
Total inserts 1.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00

We can see the extract captured one insert and the replicat applied one insert. Let's do a row count on both databases.

ROW COUNT ON SOURCE:

SQL> select count(*) from "DBACLASS"."EMP";

COUNT(*)
———-
90324

ROW COUNT ON TARGET:

SQL> select count(*) from "DBACLASS"."EMP";

COUNT(*)
———-
90324

 

Both row counts match. Our table-level replication setup in GoldenGate is complete. It will now apply all transactions happening on the source to the target DB.
