Category Archives: GoldenGate

Oracle GoldenGate

Oracle GoldenGate Initial Load Techniques

In order to start using OGG replication, the source and target databases must be in sync. This means that the source data must be copied over to the target and remain in sync while changes are occurring in the source database.

In this article I’m going to discuss the options available to instantiate the target database with the data in sync, their pros and cons, and their prerequisites.

From the Oracle documentation and some MOS notes, the general recommendation is to use native tools, if available, to instantiate the data in the target database. This works well when the source and target databases are the same type, but when they are not, an initial load technique may be the best option.

So, why is a native tool recommended? Some Oracle native tools, such as RMAN and expdp/impdp, can be used to sync the source and target to a particular SCN. OGG replication is then started from that SCN, so there is no chance of duplicates and/or collisions. With the initial load techniques, collisions do occur, and as a result there are several steps that must be executed in order.
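As a rough sketch of the native-tool approach (the schema name, replicat group, and file names below are illustrative, not from this article), the export is made consistent to a known SCN and the replicat is started just after it:

```
-- On the source: capture the current SCN
SQL> SELECT current_scn FROM v$database;

-- Export the schema consistent to that SCN
$ expdp system schemas=HR flashback_scn=<scn> dumpfile=hr.dmp

-- After importing on the target, start the replicat from that SCN
GGSCI> START REPLICAT rhr, ATCSN <scn>
```

Because the replicat skips everything at or before the export SCN, no duplicate changes are applied and HANDLECOLLISIONS is not needed.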

General Steps for Performing an Initial-Load

1. The continuous standard extract for replication (the change synchronization extract) is configured and started; however, the corresponding replicat is not started yet.

2. Next the initial load and replicat are configured and executed.

3. Once the initial load extract and replicat are done, they will stop.

4. The replicat associated with the original change synchronization extract is now started with the HANDLECOLLISIONS parameter set.

5. When the replicat gets past the point where the initial load extract/replicat stopped, the HANDLECOLLISIONS parameter is turned off.

Here’s a quick time diagram of the order of events.

>-----|-----------|-----------|-----------|-----------|-------->  ( time )
	Time 1		Time 2		Time 3		Time 4		Time 5

	Change		Initial		Initial		Change		Turn
	Sync		Load		Load		Sync		Off
	Extract		Started		Done		Replicat	HandleCollisions
	Started								Started

OGG is now up and running. But did you notice in step 4 that the change synchronization replicat will be applying some of the same changes that the initial load replicat has already applied? This is because change synchronization was configured to start at a point in time before the initial load began. This is necessary in heterogeneous systems to ensure that no transactional data is lost between the time the initial load completes and the time change synchronization is started. The purpose of the HANDLECOLLISIONS parameter is to keep the target data in sync with the source while these duplicate changes are coming in.
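A minimal sketch of how HANDLECOLLISIONS appears in the change synchronization replicat parameter file (the group name rgg1 and the schema mapping are hypothetical):

```
REPLICAT rgg1
USERID ggate, PASSWORD ggate
ASSUMETARGETDEFS
HANDLECOLLISIONS
MAP hr.*, TARGET hr.*;
```

Once the replicat has passed the point where the initial load stopped, the parameter can be turned off without stopping the process:

```
GGSCI> SEND REPLICAT rgg1, NOHANDLECOLLISIONS
```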

There are several different initial load techniques, each with its own advantages and disadvantages.

Here’s a list of some of the different initial load techniques available:

1 File to Replicat

2 Trail to Replicat

3 File to Database Utility

4 Direct Load

5 Direct Bulk Load to SQL*Loader

Before performing an initial load, the following prerequisites must be met.

1 Disable DDL Processing

2 Prepare Target Tables

– Data : the target tables must be empty
– Constraints : disabled on the target
– Triggers : disabled on the target
– Indexes : dropped for performance ( they can be added back after the initial load is complete )

3 Configure and start the MGR Process on both Source and Target

4 Create Data-Definitions File

– this is only necessary if source and target tables differ in structure

5 Create Change-Sync Groups
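For prerequisite 4, the data-definitions file is generated on the source with the defgen utility; here is a sketch (the parameter values and file names are illustrative):

```
-- dirprm/defgen.prm on the source
DEFSFILE ./dirdef/source.def
USERID ggate, PASSWORD ggate
TABLE hr.*;
```

Run it with `./defgen paramfile dirprm/defgen.prm`, then copy the resulting source.def file to the target’s dirdef directory and reference it with the SOURCEDEFS parameter in the replicat.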

I have a series of blogs that compare the advantages and disadvantages of the different techniques and show how to perform them.

Visual Diagram of Initial-Load Techniques

Spreadsheet Comparing Initial-Load Techniques

Using GoldenGate on the Oracle Database File System ( DBFS )

The Database File System (DBFS) creates a shared file system which has its files stored in the database.

Because the data is stored in the database, the file system inherits all the HA/DR capabilities provided by the database.

When configuring Oracle GoldenGate to run on Exadata or any RAC database, the best practice is to store the trail files, checkpoint files, bounded recovery files, and configuration files in DBFS to provide the best performance, scalability, recoverability, and failover capabilities in the event of a system failure.

* the checkpoint and trail files remain available in the event of a node failure

* allows one of the surviving database instances to be the source of an Extract process or destination for the Replicat processes

* after a failure occurs, the Extract process can continue mining from the last known archived redo log file position

* replicat processes can resume applying from the same trail file position as before the failure occurred

In order to enable DBFS, you must have the FUSE RPMs installed. Filesystem in Userspace (FUSE) is a loadable kernel module for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code.

The following configuration was implemented on an Oracle Enterprise Linux 6.2 environment.

Let’s set it up and configure GoldenGate to use DBFS.

FUSE Setup

Required RPM Packages

Ensure that the following packages are installed.

$ rpm -qa --queryformat="%{name}-%{version}-%{release}.%{arch}\n" | grep -i fuse
fuse-2.8.3-4.el6.x86_64
fuse-libs-2.8.3-4.el6.x86_64
gvfs-fuse-1.4.3-12.el6.x86_64

Add the oracle user to the fuse group

$ usermod -a -G fuse oracle

Create the /etc/fuse.conf file with the user_allow_other option.

$ echo user_allow_other > /etc/fuse.conf

$ chmod 644 /etc/fuse.conf

Create an empty directory that will be used as the mount point for the DBFS filesystem.

$ mkdir -p /oracle/dbfs_direct

$ chown oracle:dba /oracle/dbfs_direct

To pick up the additional group (fuse) membership for the oracle user on Linux, Clusterware must be restarted.
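As root, the restart can be performed with crsctl (a sketch; on a single-node Oracle Restart installation the equivalent commands may be `crsctl stop has` / `crsctl start has`):

```
# crsctl stop crs
# crsctl start crs
```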

Database Configuration

As the RDBMS software owner, create the DBFS repository inside the database. To create the repository, create a new tablespace to hold the DBFS objects and a database user that will own the objects.

I have the following disk groups defined in ASM, and will be using the DBFS disk group.

GRP NAME      STATE       TYPE    FREE_GB  USABLE_GB   TOTAL_GB PERCENT_FREE COMPATIBILITY DATABASE_COMP
---- ------------ ----------- ------ ---------- ---------- ---------- ------------ ------------- -------------
1 DATA      MOUNTED     EXTERN       5.01       5.01       10        50 11.2.0.3.0     11.2.0.3.0
2 DBFS      MOUNTED     EXTERN       7.74       7.74        8        97 11.2.0.3.0     11.2.0.3.0
3 RECO      MOUNTED     EXTERN       8.01       8.01       15        53 11.2.0.3.0     11.2.0.3.0

DBFS Repository Creation

> sqlplus / as sysdba
SQL> create bigfile tablespace dbfs_ts
datafile '+DBFS' size 200M
autoextend on
next 100M
maxsize 3G
NOLOGGING
EXTENT MANAGEMENT LOCAL
AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

SQL> create user dbfs_user identified by dbfs_user
default tablespace dbfs_ts
quota unlimited on dbfs_ts;

SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;

Create the database objects that will hold DBFS

> cd $ORACLE_HOME/rdbms/admin

> sqlplus dbfs_user/dbfs_user

SQL> start dbfs_create_filesystem dbfs_ts FS1

DBFS Storage Options

The LOB segment used by DBFS should be configured with the storage options NOCACHE LOGGING, which is the default.

SQL> SELECT owner
, table_name
, segment_name
, logging
, cache
FROM dba_lobs
WHERE tablespace_name='DBFS_TS';

OWNER        TABLE_NAME               SEGMENT_NAME              LOGGING CACHE
----------- ---------------------- ---------------------- ------- -----
DBFS_USER    T_FS1                   LOB_SFS$_FST_12              YES    NO

If the LOB segment is not using NOCACHE LOGGING, alter it:

SQL> ALTER TABLE DBFS_USER.T_FS1 MODIFY LOB (FILEDATA) (NOCACHE LOGGING);

Prepare the FUSE library links

 $ echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf

$ cd /usr/local/lib

$ ln -s $ORACLE_HOME/lib/libclntsh.so.11.1

$ ln -s $ORACLE_HOME/lib/libnnz11.so

$ locate libfuse.so

/lib/libfuse.so.2
/lib/libfuse.so.2.8.3
/lib64/libfuse.so.2
/lib64/libfuse.so.2.8.3

$ ln -s /lib64/libfuse.so.2 libfuse.so

$ ldconfig

$ ln -s $ORACLE_HOME/bin/dbfs_client /sbin/mount.dbfs
 

Here’s what your links should look like when done.

 > ls -lhF
 total 0
 lrwxrwxrwx 1 root root 59 May 11 2012 libclntsh.so.11.1 -> /u01/app/oracle/product/11.2.0.3/db_1/lib/libclntsh.so.11.1*
 lrwxrwxrwx 1 root root 19 Feb 25 03:07 libfuse.so -> /lib64/libfuse.so.2*
 lrwxrwxrwx 1 root root 19 May 11 2012 libfuse.so.2 -> /lib64/libfuse.so.2*
 lrwxrwxrwx 1 root root 53 May 11 2012 libnnz11.so -> /u01/app/oracle/product/11.2.0.3/db_1/lib/libnnz11.so

> ls -lhF /sbin/mount.dbfs
 lrwxrwxrwx 1 root root 38 Feb 25 03:08 /sbin/mount.dbfs -> /u01/app/11.2.0.3/grid/bin/dbfs_client*
 

Register with Oracle Cluster Resources

Click here to get a copy of the mount-dbfs.sh script.

$ cp mount-dbfs.sh $GRID_HOME/crs/script/mount-dbfs.sh

$ chown oracle:dba $GRID_HOME/crs/script/mount-dbfs.sh

$ chmod 750 $GRID_HOME/crs/script/mount-dbfs.sh

Create the following file and execute it to create the resource.

> cat add-dbfs-resource.sh
#!/bin/bash

ACTION_SCRIPT=/u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=GG
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME

crsctl add resource $RESNAME \
-type cluster_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"

##### end script add-dbfs-resource.sh

exit 0

> ./add-dbfs-resource.sh

Start the database, then start the DBFS resource, which will mount the filesystem.

> srvctl start database -d GG

> crsctl start resource dbfs_mount

CRS-2672: Attempting to start 'dbfs_mount' on 'ol6u2'
CRS-2676: Start of 'dbfs_mount' on 'ol6u2' succeeded

> crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       ol6u2

You should now be able to see the mount using “df -h”

> df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_ol6u2-lv_root
26G   22G  2.6G  90% /
tmpfs                1004M  212M  792M  22% /dev/shm
/dev/sda1             485M  115M  346M  25% /boot
/dev/sdb1              20G  8.7G   11G  47% /oracle
dbfs-dbfs_user@GG:/   199M  400K  199M   1% /oracle/dbfs_direct

Now for the GoldenGate configuration

Install GoldenGate if you haven’t already done so; click here to see a previous blog post covering the installation.

Create the shared directories on the DBFS

> cd /oracle/dbfs_direct/FS1/golden_gate
> mkdir dirchk
> mkdir dirpcs
> mkdir dirprm
> mkdir dirdat
> mkdir dirrpt
> mkdir BR

Change to the $GG_HOME directory and make the following soft links:

> ls -lhF | egrep "drwx|lrwx"
lrwxrwxrwx 1 oracle oinstall   38 Mar  2 08:20 BR -> /oracle/dbfs_direct/FS1/golden_gate/BR/
lrwxrwxrwx 1 oracle oinstall   47 Mar 11 08:54 dirchk -> /oracle/dbfs_direct/FS1/golden_gate/dirchk/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:20 dirdat -> /oracle/dbfs_direct/FS1/golden_gate/dirdat/
lrwxrwxrwx 1 oracle oinstall   42 Mar 11 08:56 dirpcs -> /oracle/dbfs_direct/FS1/golden_gate/dirpcs/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirprm -> /oracle/dbfs_direct/FS1/golden_gate/dirprm/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirrpt -> /oracle/dbfs_direct/FS1/golden_gate/dirrpt/

GoldenGate is now ready to use.

You can follow the above procedure to configure the target environment as well, but with one exception: there should be two separate DBFS file systems, one for the dirchk directory that contains the frequently written checkpoint file, and the other for the remaining directories. This separates the I/O requirements, since the checkpoint file is continuously written over and over again. Due to the heavy write activity, performance is best when this file is stored in DBFS with the CACHE LOGGING storage option.

Create a new tablespace to hold the DBFS Checkpoint Directory

SQL> create tablespace dbfs_chkpt_ts
datafile '+DBFS' size 100M
autoextend on
next 100M
maxsize 500M
NOLOGGING
EXTENT MANAGEMENT LOCAL
AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

SQL> alter user dbfs_user quota unlimited on dbfs_chkpt_ts;

Create the database objects that will hold DBFS

> cd $ORACLE_HOME/rdbms/admin

> sqlplus dbfs_user/dbfs_user

SQL> start dbfs_create_filesystem dbfs_chkpt_ts FS_CHKPT

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_FS_CHKPT', tbl_name => 'T_FS_CHKPT', tbl_tbs => 'dbfs_chkpt_ts', lob_tbs => 'dbfs_chkpt_ts',
do_partition => false, partition_key => 1, do_compress => false, compression => '', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_FS_CHKPT', provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_FS_CHKPT', store_mount=>'FS_CHKPT'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/FS_CHKPT', 16895); end;
No errors.

Connect to the DBFS database and change the LOB storage option

> sqlplus / as sysdba

SQL> SELECT owner
, table_name
, segment_name
, logging
, cache
FROM dba_lobs
WHERE tablespace_name like 'DBFS_%';

OWNER           TABLE_NAME                     SEGMENT_NAME                   LOGGING CACHE
--------------- ------------------------------ ------------------------------ ------- ----------
DBFS_USER       T_FS_CHKPT                     LOB_SFS$_FST_528               NO      NO
DBFS_USER       T_FS1                          LOB_SFS$_FST_1                 YES     NO

SQL> ALTER TABLE DBFS_USER.T_FS_CHKPT MODIFY LOB (FILEDATA) (CACHE LOGGING);

SQL> SELECT owner
, table_name
, segment_name
, logging
, cache
FROM dba_lobs
WHERE tablespace_name like 'DBFS_%';

OWNER           TABLE_NAME                     SEGMENT_NAME                   LOGGING CACHE
--------------- ------------------------------ ------------------------------ ------- ----------
DBFS_USER       T_FS_CHKPT                     LOB_SFS$_FST_528               YES     YES
DBFS_USER       T_FS1                          LOB_SFS$_FST_1                 YES     NO

Make the same soft links here as before pointing to the DBFS mount, but with the dirchk directory pointing to the FS_CHKPT mount.

> ls -lhF | egrep "drwx|lrwx"
lrwxrwxrwx 1 oracle oinstall   38 Mar  2 08:20 BR -> /oracle/dbfs_direct/FS1/golden_gate/BR/
lrwxrwxrwx 1 oracle oinstall   47 Mar 11 08:54 dirchk -> /oracle/dbfs_direct/FS_CHKPT/golden_gate/dirchk/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:20 dirdat -> /oracle/dbfs_direct/FS1/golden_gate/dirdat/
lrwxrwxrwx 1 oracle oinstall   42 Mar 11 08:56 dirpcs -> /oracle/dbfs_direct/FS1/golden_gate/dirpcs/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirprm -> /oracle/dbfs_direct/FS1/golden_gate/dirprm/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirrpt -> /oracle/dbfs_direct/FS1/golden_gate/dirrpt/

Configuring GoldenGate Replication and Heartbeat Table to Measure Lag

In my previous posts, I showed how to install GoldenGate.  Now let’s set up replication while also presenting a reliable method of measuring the lag time between the source and target databases by using a “heartbeat table,” as described in the Oracle note mentioned below.

Oracle GoldenGate – Heartbeat process to monitor lag and performance in GoldenGate [ID 1299679.1]

I will also be using the Integrated Capture environment here, which will demonstrate how to register the capture process with the database.

The basic idea is this:

  • the heartbeat table on the source is updated at a predetermined interval
  • the extract capture process captures the updates to the table
  • the extract capture and pump processes each add their group name and date to the heartbeat record
  • the information in the record is then used to calculate the lag between the processes on the source and target servers
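The lag figures reported by the heartbeat scripts come down to simple timestamp arithmetic: lag = apply time minus update time. As a toy illustration (the timestamps below are made up, and GNU date is assumed for the -d option):

```shell
#!/bin/sh
# Toy illustration of the heartbeat lag arithmetic.
src_ts="2013-01-09 09:02:00"   # when the source heartbeat row was updated
tgt_ts="2013-01-09 09:02:07"   # when the replicat applied it on the target
src_epoch=$(date -d "$src_ts" +%s)   # convert to seconds since the epoch
tgt_epoch=$(date -d "$tgt_ts" +%s)
echo "end-to-end lag: $((tgt_epoch - src_epoch)) seconds"
```

The real heartbeat_lag.sql script performs the same subtraction in SQL, broken down per process (capture, pump, replicat) using the timestamps each one stamps into the record.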

Scripts

All the files and scripts mentioned in this article are in the zip file that can be downloaded from here.  Most of them were downloaded from Oracle’s note mentioned above, but they do have some modifications due to differences in environments.  There is no guarantee that they will work for you.

Install the Database Objects

Create the heartbeat table in the source database

sourcedb> cd source/scripts
sourcedb> sqlplus ggate/ggate
SQL> @heartbeat_table_source_TS_v9.sql

Create the heartbeat and heartbeat_history tables in the target database

targetdb> cd target/scripts
targetdb> sqlplus ggate/ggate
SQL> @heartbeat_table_target_TS_v9.sql

Setup Replication

Configuration on Source Server

sourcedb> cd $GG_HOME
sourcedb> ggsci
GGSCI> DBLOGIN USERID ggate, PASSWORD ggate
GGSCI> MININGDBLOGIN USERID ggate, PASSWORD ggate
GGSCI> REGISTER EXTRACT cgghb DATABASE
GGSCI> ADD EXTRACT cgghb, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL /u01/app/oracle/product/gg/dirdat/Hb, EXTRACT cgghb
GGSCI> ADD EXTRACT pgghb, EXTTRAILSOURCE /u01/app/oracle/product/gg/dirdat/Hb
GGSCI> ADD RMTTRAIL /u01/app/oracle/product/gg/dirdat/hb, EXTRACT pgghb
GGSCI> exit

Create the following files in the $GG_HOME/dirprm directory. These can also be found in the zip file downloaded earlier.  Keep in mind that you will need to modify the .oby files for your environment.

sourcedb> cd dirprm
sourcedb> ls -lhF
total 80
-rw-r--r-- 1 oracle oracle 272B Jan 9 09:02 cgghb.inc
-rw-r--r-- 1 oracle oracle 808B Jan 9 09:02 cgghb.prm
-rw-r--r-- 1 oracle oracle 29B Jan 9 09:02 dblogin.oby
-rw-r--r-- 1 oracle oracle 138B Jan 9 09:02 env.oby
-rw-r--r-- 1 oracle oracle 54B Jan 9 09:02 iclogin.oby
-rw-r--r-- 1 oracle oracle 82B Jan 9 09:02 maint.oby
-rw-r--r-- 1 oracle oracle 208B Jan 9 09:02 pgghb.inc
-rw-r--r-- 1 oracle oracle 327B Jan 9 09:02 pgghb.prm

Startup the extract processes

sourcedb> cd $GG_HOME
sourcedb> ggsci
GGSCI> start extract *
Sending START request to MANAGER ...
EXTRACT CGGHB starting
Sending START request to MANAGER ...
EXTRACT PGGHB starting
GGSCI> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING 
EXTRACT RUNNING CGGHB 00:00:03 00:00:06 
EXTRACT RUNNING PGGHB 00:00:00 00:00:01

Configuration on Target Server

targetdb> cd $GG_HOME
targetdb> ggsci
GGSCI> dblogin userid ggate, password ggate
GGSCI> add checkpointtable ggate.checkpoint
GGSCI> add replicat rgghb, exttrail /u01/app/oracle/product/gg/dirdat/hb, checkpointtable ggate.checkpoint
GGSCI> exit

Copy the target files over to the $GG_HOME/dirprm directory. These can also be found in the zip file downloaded earlier.

targetdb> cd dirprm
targetdb> ls -lhF
-rw-r--r-- 1 oracle oracle 29B Jan 9 09:02 dblogin.oby
-rw-r--r-- 1 oracle oracle 142B Jan 9 09:02 env.oby
-rw-r--r-- 1 oracle oracle 139B Jan 9 09:02 hb_setup.oby
-rw-r--r-- 1 oracle oracle 82B Jan 9 09:02 maint.oby
-rw-r--r-- 1 oracle oracle 1.0K Jan 9 09:02 rgghb.inc
-rw-r--r-- 1 oracle oracle 431B Jan 9 09:02 rgghb.prm

Startup the replicat process

targetdb> cd $GG_HOME
targetdb> ggsci
GGSCI> start replicat *
Sending START request to MANAGER ...
REPLICAT RGGHB starting
GGSCI> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING 
REPLICAT RUNNING RGGHB 00:00:00 00:00:03

Start the Heartbeat Timestamp Script

On the source database server, start the heartbeat_timestamp_TS_v9.sh script. This script will initially populate the table with a row containing information about the extract capture process. Afterwards, it will update the record with the current time at the specified interval. Both the capture name and the interval are specified within the script. This is the Oracle-supplied script from Note ID 1299679.1 mentioned earlier; however, it was designed to work with classic capture, not integrated capture, so I made some modifications to it, since the output of the “info showch” command differs between the two capture modes.

sourcedb> nohup heartbeat_timestamp_TS_v9.sh &

Monitoring the Lag Time

On the target database server, connect to Oracle and run the heartbeat_lag.sql script.
Note that the output from the SQL script was modified for brevity and formatting.

> sqlplus ggate/ggate

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

GGATE:RPT> @heartbeat_lag.sql


Oracle GoldenGate Installation

Introduction

This document describes how to install GoldenGate 11.2.1.0.1 for one-way replication with sequence and DDL support.

Environment

Source Database Server

Server Oracle Enterprise Linux Server release 5.5 (Carthage)
Database version is 11.2.0.2
Hostname is server1

Target Database Server

Server Solaris 5.11
Database version is 11.2.0.3
Hostname is server2

GoldenGate Software Downloads

Based on your environment, download the proper version of GoldenGate and copy it over to the source and target servers. Downloads can be found on Oracle’s website.

GoldenGate Installation on Source Server

Unzipping the Software

Login to the source server as the Oracle software owner, usually oracle, and copy the downloaded software to the golden_gate directory shown below.

> cd /u01/app/oracle/product/golden_gate
> unzip ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip

Archive: ogg112101_fbo_ggs_Linux_x64_ora11g_64bit.zip
inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar
inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf
inflating: Oracle GoldenGate 11.2.1.0.1 README.txt
inflating: Oracle GoldenGate 11.2.1.0.1 README.doc

> tar -xf fbo_ggs_Linux_x64_ora11g_64bit.tar

Environment Settings

Add the following lines to $HOME/.bash_profile, or whatever file is used to set oracle’s environment when logging into the server.

export GG_HOME=/u01/app/oracle/product/golden_gate
export PATH=$GG_HOME:$PATH
export LD_LIBRARY_PATH=$GG_HOME:$LD_LIBRARY_PATH

Source the file to set your current environment.

> . .bash_profile

Installation

After setting your environment, you can then log in to ggsci and create the GoldenGate working directories.

> cd /u01/app/oracle/product/golden_gate
> ggsci
GGSCI> create subdirs

Creating subdirectories under current directory /u01/app/oracle/product/golden_gate
Parameter files /u01/app/oracle/product/golden_gate/dirprm: already exists
Report files /u01/app/oracle/product/golden_gate/dirrpt: created
Checkpoint files /u01/app/oracle/product/golden_gate/dirchk: created
Process status files /u01/app/oracle/product/golden_gate/dirpcs: created
SQL script files /u01/app/oracle/product/golden_gate/dirsql: created
Database definitions files /u01/app/oracle/product/golden_gate/dirdef: created
Extract data files /u01/app/oracle/product/golden_gate/dirdat: created
Temporary files /u01/app/oracle/product/golden_gate/dirtmp: created
Stdout files /u01/app/oracle/product/golden_gate/dirout: created

GGSCI> exit

Create Database User in Source Database

> sqlplus / as sysdba

SQL> CREATE USER GGATE
IDENTIFIED BY ggate
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
PROFILE DEFAULT
ACCOUNT UNLOCK;

SQL> ALTER USER GGATE DEFAULT ROLE ALL;

SQL> GRANT UNLIMITED TABLESPACE TO GGATE;

Here are the privileges required for the GoldenGate Extract process.

SQL> grant create session to ggate;

SQL> grant alter session to ggate;

SQL> grant resource to ggate;

SQL> grant connect to ggate;

SQL> grant select any dictionary to ggate;

SQL> grant flashback any table to ggate;

SQL> grant select any table to ggate;

SQL> grant select on dba_clusters to ggate;

SQL> grant execute on dbms_flashback to ggate;

SQL> grant select any transaction to ggate;

Enabling Database-Level Supplemental Logging

GGSCI provides commands to configure the source database to log the appropriate key values whenever it logs a row change, so that they are available to Oracle GoldenGate in the redo record. By default, the Oracle database only logs column values that are changed. The appropriate command must be issued before you start Oracle GoldenGate processing. Because redo volume is increased as the result of logging key values, you might want to wait until just before you start Oracle GoldenGate processing to enable the logging. Otherwise, you can issue the commands any time before processing starts.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

SQL> ALTER SYSTEM SWITCH LOGFILE;

Verify that supplemental logging is enabled at the database level with this command:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

The output must be YES or IMPLICIT.

Create the GLOBALS file

> cd $GG_HOME

> ggsci

GGSCI> edit params ./GLOBALS
-- DDL support
GGSCHEMA ggate

Exit out of the editor when done.

Enabling Sequence Support

Oracle GoldenGate ensures that the target sequence values will always be higher than those of the source (or equal to them, if the cache is 0).

The following SQL script will prompt for a username; enter ggate at the prompt.

> sqlplus / as sysdba

SQL> @sequence

SQL> alter table sys.seq$ add supplemental log data (primary key) columns;

Enabling DDL Support

The following SQL scripts will prompt for a username; enter ggate at the prompt.

> sqlplus / as sysdba

SQL> grant execute on utl_file to ggate;

SQL> @marker_setup

SQL> @ddl_setup

SQL> @role_setup

SQL> GRANT GGS_GGSUSER_ROLE TO ggate;

SQL> @ddl_enable

Configure the MGR File

GGSCI> edit params mgr

port 7809
userid ggate@SID1, password ggate

Exit out of the editor when done. Then, start the manager process.

GGSCI> start mgr

GoldenGate Installation on Target Server

Unzipping the Software

Login to the target server as the Oracle software owner, usually oracle, and copy the downloaded software to the golden_gate directory shown below.

> cd /u01/app/oracle/product/golden_gate
> unzip ogg112101_fbo_ggs_Solaris_sparc_ora11g_64bit.zip

Archive: ogg112101_fbo_ggs_Solaris_sparc_ora11g_64bit.zip
inflating: fbo_ggs_Solaris_sparc_ora11g_64bit.tar
inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf
inflating: Oracle GoldenGate 11.2.1.0.1 README.txt
inflating: Oracle GoldenGate 11.2.1.0.1 README.doc

> tar -xf fbo_ggs_Solaris_sparc_ora11g_64bit.tar

Environment Settings

Add the following lines to $HOME/.bash_profile, or whatever file is used to set oracle’s environment when logging into the server.

export GG_HOME=/u01/app/oracle/product/golden_gate
export PATH=$GG_HOME:$PATH
export LD_LIBRARY_PATH=$GG_HOME:$LD_LIBRARY_PATH

Source the file to set your current environment.

> . .bash_profile

Installation

After setting your environment, you can then log in to ggsci and create the GoldenGate working directories.

> cd /u01/app/oracle/product/golden_gate
> ggsci
GGSCI> create subdirs

Creating subdirectories under current directory /u01/app/oracle/product/golden_gate
Parameter files /u01/app/oracle/product/golden_gate/dirprm: already exists
Report files /u01/app/oracle/product/golden_gate/dirrpt: created
Checkpoint files /u01/app/oracle/product/golden_gate/dirchk: created
Process status files /u01/app/oracle/product/golden_gate/dirpcs: created
SQL script files /u01/app/oracle/product/golden_gate/dirsql: created
Database definitions files /u01/app/oracle/product/golden_gate/dirdef: created
Extract data files /u01/app/oracle/product/golden_gate/dirdat: created
Temporary files /u01/app/oracle/product/golden_gate/dirtmp: created
Stdout files /u01/app/oracle/product/golden_gate/dirout: created

GGSCI> exit

Create Database User in Target Database

> sqlplus / as sysdba

SQL> CREATE USER GGATE
IDENTIFIED BY ggate
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
PROFILE DEFAULT
ACCOUNT UNLOCK;

SQL> ALTER USER GGATE DEFAULT ROLE ALL;

SQL> GRANT UNLIMITED TABLESPACE TO GGATE;

Here are the privileges required for the GoldenGate Replicat process.

SQL> grant create session to ggate;

SQL> grant alter session to ggate;

SQL> grant resource to ggate;

SQL> grant connect to ggate;

SQL> grant select any dictionary to ggate;

SQL> grant select any table to ggate;

-- ddl support

SQL> grant create table to ggate;

SQL> grant create any table to ggate;

SQL> grant lock any table to ggate;

SQL> grant execute on utl_file to ggate;

– the following can be done at a later time when the target tables are known

SQL> grant insert, update, delete on [ALL TARGET TABLES] to ggate;

Add Checkpoint Table

> cd $GG_HOME
> ggsci
GGSCI> dblogin userid ggate, password ggate
GGSCI> add checkpointtable ggate.checkpoint

Create the GLOBALS file

> cd $GG_HOME
> ggsci
GGSCI> edit params ./GLOBALS

-- DDL support
GGSCHEMA ggate
CHECKPOINTTABLE ggate.checkpoint

Exit out of the editor when done.

Configure the MGR File

GGSCI> edit params mgr

port 7809
userid ggate@SID2, password ggate

Exit out of the editor when done, then start the manager process.

GGSCI> start mgr

This completes the GoldenGate Software Installation.

The software is now ready to be used for replication.

GoldenGate Integrated Capture

What is Integrated Capture

Instead of reading from the redo/archive logs, the Oracle GoldenGate Extract process interacts directly with a database logmining server to receive data changes in the form of logical change records (LCR).

Benefits

Some of the benefits of Integrated Capture are :

  • supports more data and storage types as compared to classic capture
  • automatic integration with the RMAN log retention feature to ensure that required archive logs are not deleted
  • transparent support with RAC for the addition or removal of nodes
  • automatic support of Transparent Data Encryption

To support all Oracle data and storage types, the compatibility setting of the source database must be at least 11.2.0.3.0, with the 11.2.0.3 database-specific bundle patch for Integrated Extract 11.2.x applied (Doc ID 1411356.1).

Database User Privileges

In addition to the privileges required for Classic Capture, two more grants are required.

exec dbms_goldengate_auth.grant_admin_privilege('GGATE');
grant select on V_$DATABASE to ggate;

Replication

The GoldenGate installation for Integrated Capture is no different than if Classic Capture were being used.  In fact, you may run both types of extracts on the same server simultaneously.  The differences are in the login and in how the extract capture process is created.  For a complete installation, you can look at one of my previous posts here.  Let’s see how it works.

Adding the Capture Process

GGSCI> DBLOGIN USERID ggate, PASSWORD ggate
GGSCI> MININGDBLOGIN USERID ggate, PASSWORD ggate
GGSCI> REGISTER EXTRACT cname DATABASE
GGSCI> add extract cname, INTEGRATED TRANLOG, begin now
GGSCI> add exttrail /u01/app/oracle/product/gg/dirdat/lt, extract cname

The INTEGRATED TRANLOG clause in the commands above is what differs from Classic Capture.

In addition, you should put the miningdblogin in the capture parameter file as well.

TRANLOGOPTIONS MININGUSER ggate, MININGPASSWORD ggate

The capture process above was tested using GG 11.2.1.0.1 on an Oracle 11.2.0.3 database.