Monthly Archives: March 2013

Improving Log File Sync times with Exadata Smart Flash Logs

The Smart Flash Log feature of Exadata provides a high-performance temporary store for redo log writes. When enabled, each redo log write is issued to both disk and flash at the same time, and the first one to return an acknowledgment wins.

The requirements for Smart Flash Log are as follows:
Storage Server Version 11.2.2.4.0 or greater
Database Version 11.2.0.2 BP 11 or greater

I’ve been working with a client whose application has a high frequency of commits, running on an X2-2 1/2 Rack and generating around 600G of redo per day. The MOS note, Troubleshooting: “log file sync” Waits [ID 1376916.1], gives a formula for calculating the frequency of commits:

In the AWR or Statspack report, if the average user calls per commit/rollback calculated as “user calls/(user commits+user rollbacks)” is less than 30, then commits are happening too frequently.

With this particular client, this number turns out to be 13. The wait events in the AWR reports showed high times for both log file sync and log file parallel write.
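As a sketch, here is the arithmetic behind that ratio, using hypothetical activity totals (the real values come from the user calls, user commits, and user rollbacks statistics in your AWR report):

```shell
# Hypothetical AWR activity totals; per the MOS note's rule of thumb,
# a ratio under 30 means commits are happening too frequently.
user_calls=1300000
user_commits=98000
user_rollbacks=2000
echo $(( user_calls / (user_commits + user_rollbacks) ))   # prints 13
```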

Here is a graph of the log file sync and log file parallel write times before and after applying the Exadata Bundle Patch, which enabled the flash log feature, to the X2-2 1/2 Rack this past weekend. The drop shown in the graph is, of course, when the Bundle Patch was installed.

flashlog_and_lfs

Now, let’s take a look at the performance metrics from the cells. These show the number of redo writes made to disk and to flash, as well as the number of outliers prevented. The spreadsheet below summarizes selected metrics from the cell output; around 10.5% of redo writes are going to flash first.

flashlog_redo_writes

Also worth mentioning are the FL_EFFICIENCY_PERCENTAGE metrics listed below, all at 100%, which tells us that no redo writes have taken over 0.5 seconds.

# dcli -g cell_group -l root "cellcli -e list metriccurrent where objectType=\'FLASHLOG\'"

exadcel01: FL_ACTUAL_OUTLIERS                    FLASHLOG        0 IO requests
exadcel01: FL_BY_KEEP                            FLASHLOG        0
exadcel01: FL_DISK_FIRST                         FLASHLOG        58,255,655 IO requests
exadcel01: FL_DISK_IO_ERRS                       FLASHLOG        0 IO requests
exadcel01: FL_EFFICIENCY_PERCENTAGE              FLASHLOG        100 %
exadcel01: FL_EFFICIENCY_PERCENTAGE_HOUR         FLASHLOG        100 %
exadcel01: FL_FLASH_FIRST                        FLASHLOG        6,275,774 IO requests
exadcel01: FL_FLASH_IO_ERRS                      FLASHLOG        0 IO requests
exadcel01: FL_FLASH_ONLY_OUTLIERS                FLASHLOG        0 IO requests
exadcel01: FL_IO_DB_BY_W                         FLASHLOG        537,635 MB
exadcel01: FL_IO_DB_BY_W_SEC                     FLASHLOG        5.191 MB/sec
exadcel01: FL_IO_FL_BY_W                         FLASHLOG        677,543 MB
exadcel01: FL_IO_FL_BY_W_SEC                     FLASHLOG        7.189 MB/sec
exadcel01: FL_IO_W                               FLASHLOG        64,531,429 IO requests
exadcel01: FL_IO_W_SKIP_BUSY                     FLASHLOG        0 IO requests
exadcel01: FL_IO_W_SKIP_BUSY_MIN                 FLASHLOG        0.0 IO/sec
exadcel01: FL_IO_W_SKIP_LARGE                    FLASHLOG        0 IO requests
exadcel01: FL_PREVENTED_OUTLIERS                 FLASHLOG        163,136 IO requests
exadcel02: FL_ACTUAL_OUTLIERS                    FLASHLOG        0 IO requests
exadcel02: FL_BY_KEEP                            FLASHLOG        0
exadcel02: FL_DISK_FIRST                         FLASHLOG        53,770,032 IO requests
exadcel02: FL_DISK_IO_ERRS                       FLASHLOG        0 IO requests
exadcel02: FL_EFFICIENCY_PERCENTAGE              FLASHLOG        100 %
exadcel02: FL_EFFICIENCY_PERCENTAGE_HOUR         FLASHLOG        100 %
exadcel02: FL_FLASH_FIRST                        FLASHLOG        5,612,794 IO requests
exadcel02: FL_FLASH_IO_ERRS                      FLASHLOG        0 IO requests
exadcel02: FL_FLASH_ONLY_OUTLIERS                FLASHLOG        0 IO requests
exadcel02: FL_IO_DB_BY_W                         FLASHLOG        493,224 MB
exadcel02: FL_IO_DB_BY_W_SEC                     FLASHLOG        4.562 MB/sec
exadcel02: FL_IO_FL_BY_W                         FLASHLOG        621,982 MB
exadcel02: FL_IO_FL_BY_W_SEC                     FLASHLOG        6.283 MB/sec
exadcel02: FL_IO_W                               FLASHLOG        59,382,826 IO requests
exadcel02: FL_IO_W_SKIP_BUSY                     FLASHLOG        0 IO requests
exadcel02: FL_IO_W_SKIP_BUSY_MIN                 FLASHLOG        0.0 IO/sec
exadcel02: FL_IO_W_SKIP_LARGE                    FLASHLOG        0 IO requests
exadcel02: FL_PREVENTED_OUTLIERS                 FLASHLOG        173,809 IO requests
exadcel03: FL_ACTUAL_OUTLIERS                    FLASHLOG        0 IO requests
exadcel03: FL_BY_KEEP                            FLASHLOG        0
exadcel03: FL_DISK_FIRST                         FLASHLOG        57,699,713 IO requests
exadcel03: FL_DISK_IO_ERRS                       FLASHLOG        0 IO requests
exadcel03: FL_EFFICIENCY_PERCENTAGE              FLASHLOG        100 %
exadcel03: FL_EFFICIENCY_PERCENTAGE_HOUR         FLASHLOG        100 %
exadcel03: FL_FLASH_FIRST                        FLASHLOG        6,605,007 IO requests
exadcel03: FL_FLASH_IO_ERRS                      FLASHLOG        0 IO requests
exadcel03: FL_FLASH_ONLY_OUTLIERS                FLASHLOG        0 IO requests
exadcel03: FL_IO_DB_BY_W                         FLASHLOG        535,277 MB
exadcel03: FL_IO_DB_BY_W_SEC                     FLASHLOG        5.676 MB/sec
exadcel03: FL_IO_FL_BY_W                         FLASHLOG        674,600 MB
exadcel03: FL_IO_FL_BY_W_SEC                     FLASHLOG        7.894 MB/sec
exadcel03: FL_IO_W                               FLASHLOG        64,304,720 IO requests
exadcel03: FL_IO_W_SKIP_BUSY                     FLASHLOG        0 IO requests
exadcel03: FL_IO_W_SKIP_BUSY_MIN                 FLASHLOG        0.0 IO/sec
exadcel03: FL_IO_W_SKIP_LARGE                    FLASHLOG        0 IO requests
exadcel03: FL_PREVENTED_OUTLIERS                 FLASHLOG        282,466 IO requests
exadcel04: FL_ACTUAL_OUTLIERS                    FLASHLOG        0 IO requests
exadcel04: FL_BY_KEEP                            FLASHLOG        0
exadcel04: FL_DISK_FIRST                         FLASHLOG        48,862,092 IO requests
exadcel04: FL_DISK_IO_ERRS                       FLASHLOG        0 IO requests
exadcel04: FL_EFFICIENCY_PERCENTAGE              FLASHLOG        100 %
exadcel04: FL_EFFICIENCY_PERCENTAGE_HOUR         FLASHLOG        100 %
exadcel04: FL_FLASH_FIRST                        FLASHLOG        4,944,046 IO requests
exadcel04: FL_FLASH_IO_ERRS                      FLASHLOG        0 IO requests
exadcel04: FL_FLASH_ONLY_OUTLIERS                FLASHLOG        0 IO requests
exadcel04: FL_IO_DB_BY_W                         FLASHLOG        450,441 MB
exadcel04: FL_IO_DB_BY_W_SEC                     FLASHLOG        4.516 MB/sec
exadcel04: FL_IO_FL_BY_W                         FLASHLOG        566,957 MB
exadcel04: FL_IO_FL_BY_W_SEC                     FLASHLOG        6.287 MB/sec
exadcel04: FL_IO_W                               FLASHLOG        53,806,149 IO requests
exadcel04: FL_IO_W_SKIP_BUSY                     FLASHLOG        0 IO requests
exadcel04: FL_IO_W_SKIP_BUSY_MIN                 FLASHLOG        0.0 IO/sec
exadcel04: FL_IO_W_SKIP_LARGE                    FLASHLOG        0 IO requests
exadcel04: FL_PREVENTED_OUTLIERS                 FLASHLOG        156,286 IO requests
exadcel05: FL_ACTUAL_OUTLIERS                    FLASHLOG        0 IO requests
exadcel05: FL_BY_KEEP                            FLASHLOG        0
exadcel05: FL_DISK_FIRST                         FLASHLOG        23,384,160 IO requests
exadcel05: FL_DISK_IO_ERRS                       FLASHLOG        0 IO requests
exadcel05: FL_EFFICIENCY_PERCENTAGE              FLASHLOG        100 %
exadcel05: FL_EFFICIENCY_PERCENTAGE_HOUR         FLASHLOG        100 %
exadcel05: FL_FLASH_FIRST                        FLASHLOG        2,313,231 IO requests
exadcel05: FL_FLASH_IO_ERRS                      FLASHLOG        0 IO requests
exadcel05: FL_FLASH_ONLY_OUTLIERS                FLASHLOG        0 IO requests
exadcel05: FL_IO_DB_BY_W                         FLASHLOG        192,781 MB
exadcel05: FL_IO_DB_BY_W_SEC                     FLASHLOG        6.347 MB/sec
exadcel05: FL_IO_FL_BY_W                         FLASHLOG        249,018 MB
exadcel05: FL_IO_FL_BY_W_SEC                     FLASHLOG        8.762 MB/sec
exadcel05: FL_IO_W                               FLASHLOG        25,697,391 IO requests
exadcel05: FL_IO_W_SKIP_BUSY                     FLASHLOG        0 IO requests
exadcel05: FL_IO_W_SKIP_BUSY_MIN                 FLASHLOG        0.0 IO/sec
exadcel05: FL_IO_W_SKIP_LARGE                    FLASHLOG        0 IO requests
exadcel05: FL_PREVENTED_OUTLIERS                 FLASHLOG        38,088 IO requests
exadcel06: FL_ACTUAL_OUTLIERS                    FLASHLOG        0 IO requests
exadcel06: FL_BY_KEEP                            FLASHLOG        0
exadcel06: FL_DISK_FIRST                         FLASHLOG        58,983,319 IO requests
exadcel06: FL_DISK_IO_ERRS                       FLASHLOG        0 IO requests
exadcel06: FL_EFFICIENCY_PERCENTAGE              FLASHLOG        100 %
exadcel06: FL_EFFICIENCY_PERCENTAGE_HOUR         FLASHLOG        100 %
exadcel06: FL_FLASH_FIRST                        FLASHLOG        6,125,856 IO requests
exadcel06: FL_FLASH_IO_ERRS                      FLASHLOG        0 IO requests
exadcel06: FL_FLASH_ONLY_OUTLIERS                FLASHLOG        0 IO requests
exadcel06: FL_IO_DB_BY_W                         FLASHLOG        541,559 MB
exadcel06: FL_IO_DB_BY_W_SEC                     FLASHLOG        5.832 MB/sec
exadcel06: FL_IO_FL_BY_W                         FLASHLOG        682,942 MB
exadcel06: FL_IO_FL_BY_W_SEC                     FLASHLOG        8.171 MB/sec
exadcel06: FL_IO_W                               FLASHLOG        65,109,175 IO requests
exadcel06: FL_IO_W_SKIP_BUSY                     FLASHLOG        0 IO requests
exadcel06: FL_IO_W_SKIP_BUSY_MIN                 FLASHLOG        0.0 IO/sec
exadcel06: FL_IO_W_SKIP_LARGE                    FLASHLOG        0 IO requests
exadcel06: FL_PREVENTED_OUTLIERS                 FLASHLOG        192,679 IO requests
exadcel07: FL_ACTUAL_OUTLIERS                    FLASHLOG        0 IO requests
exadcel07: FL_BY_KEEP                            FLASHLOG        0
exadcel07: FL_DISK_FIRST                         FLASHLOG        49,116,562 IO requests
exadcel07: FL_DISK_IO_ERRS                       FLASHLOG        0 IO requests
exadcel07: FL_EFFICIENCY_PERCENTAGE              FLASHLOG        100 %
exadcel07: FL_EFFICIENCY_PERCENTAGE_HOUR         FLASHLOG        100 %
exadcel07: FL_FLASH_FIRST                        FLASHLOG        4,924,721 IO requests
exadcel07: FL_FLASH_IO_ERRS                      FLASHLOG        0 IO requests
exadcel07: FL_FLASH_ONLY_OUTLIERS                FLASHLOG        0 IO requests
exadcel07: FL_IO_DB_BY_W                         FLASHLOG        450,464 MB
exadcel07: FL_IO_DB_BY_W_SEC                     FLASHLOG        4.574 MB/sec
exadcel07: FL_IO_FL_BY_W                         FLASHLOG        567,645 MB
exadcel07: FL_IO_FL_BY_W_SEC                     FLASHLOG        6.334 MB/sec
exadcel07: FL_IO_W                               FLASHLOG        54,041,283 IO requests
exadcel07: FL_IO_W_SKIP_BUSY                     FLASHLOG        0 IO requests
exadcel07: FL_IO_W_SKIP_BUSY_MIN                 FLASHLOG        0.0 IO/sec
exadcel07: FL_IO_W_SKIP_LARGE                    FLASHLOG        0 IO requests
exadcel07: FL_PREVENTED_OUTLIERS                 FLASHLOG        258,842 IO requests
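The per-cell counts above can be totaled with a quick awk filter. A sketch: pipe the dcli output into this function, shown here against two sample lines in the same format:

```shell
# Sum the FL_PREVENTED_OUTLIERS counts across cells; strips the commas
# from the numbers before adding them up.
sum_prevented() {
  awk '/FL_PREVENTED_OUTLIERS/ { gsub(",", "", $(NF-2)); sum += $(NF-2) }
       END { print sum }'
}

printf '%s\n' \
  'exadcel01: FL_PREVENTED_OUTLIERS FLASHLOG 163,136 IO requests' \
  'exadcel02: FL_PREVENTED_OUTLIERS FLASHLOG 173,809 IO requests' |
  sum_prevented   # prints 336945
```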

Your mileage may vary with flash logs, as I have seen some posts from people who saw no performance gain after enabling them, but it definitely helped in this case on a write-heavy OLTP system.

Using GoldenGate on the Oracle Database File System (DBFS)

The Database File System (DBFS) creates a shared file system which has its files stored in the database.

Because the data is stored in the database, the file system inherits all the HA/DR capabilities provided by the database.

When configuring Oracle GoldenGate to run on Exadata or any RAC database, the best practice is to store the trail files, checkpoint files, bounded recovery files, and configuration files in DBFS to provide the best performance, scalability, recoverability, and failover capabilities in the event of a system failure.

* the checkpoint and trail files remain available in the event of a node failure

* allows one of the surviving database instances to become the source for the Extract process or the destination for the Replicat processes

* after a failure, the Extract process can continue mining from the last known archived redo log position

* Replicat processes can resume applying from the same trail file position as before the failure

In order to enable DBFS, you must have the FUSE RPMs installed. Filesystem in Userspace (FUSE) is a loadable kernel module for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code.

The following configuration was implemented on an Oracle Enterprise Linux 6.2 environment.

Let’s set it up and configure GoldenGate to use DBFS.

FUSE Setup

Required RPM Packages

Ensure that the following packages are installed.

$ rpm -qa --queryformat="%{name}-%{version}-%{release}.%{arch}\n" | grep -i fuse
fuse-2.8.3-4.el6.x86_64
fuse-libs-2.8.3-4.el6.x86_64
gvfs-fuse-1.4.3-12.el6.x86_64

Add the oracle user to the fuse group

$ usermod -a -G fuse oracle

Create the /etc/fuse.conf file with the user_allow_other option.

$ echo user_allow_other > /etc/fuse.conf

$ chmod 644 /etc/fuse.conf

Create an empty directory that will be used as the mount point for the DBFS filesystem.

$ mkdir -p /oracle/dbfs_direct

$ chown oracle:dba /oracle/dbfs_direct

To pick up the additional group (fuse) membership for the oracle user on Linux, Clusterware must be restarted.
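The restart itself can be sketched as follows (run as root on each node; the Grid home path matches the one used elsewhere in this setup but is an assumption for your environment):

```shell
# Bounce Clusterware so CRS-managed processes pick up the oracle user's
# new fuse group membership (GRID_HOME path is an assumption).
GRID_HOME=/u01/app/11.2.0.3/grid
$GRID_HOME/bin/crsctl stop crs
$GRID_HOME/bin/crsctl start crs

# Confirm the membership afterwards; the output should include "fuse".
id -nG oracle
```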

Database Configuration

As the RDBMS software owner, create the DBFS repository inside the database. To create the repository, create a new tablespace to hold the DBFS objects and a database user that will own the objects.

I have the following disk groups defined in ASM, and will be using the DBFS disk group.

GRP NAME STATE   TYPE   FREE_GB USABLE_GB TOTAL_GB PERCENT_FREE COMPATIBILITY DATABASE_COMP
--- ---- ------- ------ ------- --------- -------- ------------ ------------- -------------
  1 DATA MOUNTED EXTERN    5.01      5.01       10           50 11.2.0.3.0    11.2.0.3.0
  2 DBFS MOUNTED EXTERN    7.74      7.74        8           97 11.2.0.3.0    11.2.0.3.0
  3 RECO MOUNTED EXTERN    8.01      8.01       15           53 11.2.0.3.0    11.2.0.3.0

DBFS Repository Creation

> sqlplus / as sysdba
SQL> create bigfile tablespace dbfs_ts
datafile '+DBFS' size 200M
autoextend on
next 100M
maxsize 3G
NOLOGGING
EXTENT MANAGEMENT LOCAL
AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

SQL> create user dbfs_user identified by dbfs_user
default tablespace dbfs_ts
quota unlimited on dbfs_ts;

SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;

Create the database objects that will hold DBFS

> cd $ORACLE_HOME/rdbms/admin

> sqlplus dbfs_user/dbfs_user

SQL> start dbfs_create_filesystem dbfs_ts FS1

DBFS Storage Options

The LOB segment used by DBFS should be configured with the storage options NOCACHE LOGGING, which are the default.

SQL> SELECT owner
, table_name
, segment_name
, logging
, cache
FROM dba_lobs
WHERE tablespace_name='DBFS_TS';

OWNER      TABLE_NAME SEGMENT_NAME    LOGGING CACHE
---------- ---------- --------------- ------- -----
DBFS_USER  T_FS1      LOB_SFS$_FST_12 YES     NO

If the LOB segment is not using NOCACHE LOGGING, alter it:

SQL> ALTER TABLE DBFS_USER.T_FS1 MODIFY LOB (FILEDATA) (NOCACHE LOGGING);

Prepare the FUSE library links

 $ echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf

$ cd /usr/local/lib

$ ln -s $ORACLE_HOME/lib/libclntsh.so.11.1

$ ln -s $ORACLE_HOME/lib/libnnz11.so

$ locate libfuse.so

/lib/libfuse.so.2
/lib/libfuse.so.2.8.3
/lib64/libfuse.so.2
/lib64/libfuse.so.2.8.3

$ ln -s /lib64/libfuse.so.2 libfuse.so

$ ldconfig

$ ln -s $ORACLE_HOME/bin/dbfs_client /sbin/mount.dbfs
 

Here’s what your links should look like when done.

 > ls -lhF
 total 0
 lrwxrwxrwx 1 root root 59 May 11 2012 libclntsh.so.11.1 -> /u01/app/oracle/product/11.2.0.3/db_1/lib/libclntsh.so.11.1*
 lrwxrwxrwx 1 root root 19 Feb 25 03:07 libfuse.so -> /lib64/libfuse.so.2*
 lrwxrwxrwx 1 root root 19 May 11 2012 libfuse.so.2 -> /lib64/libfuse.so.2*
 lrwxrwxrwx 1 root root 53 May 11 2012 libnnz11.so -> /u01/app/oracle/product/11.2.0.3/db_1/lib/libnnz11.so

> ls -lhF /sbin/mount.dbfs
 lrwxrwxrwx 1 root root 38 Feb 25 03:08 /sbin/mount.dbfs -> /u01/app/11.2.0.3/grid/bin/dbfs_client*
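With the links in place, the file system can be sanity-mounted by hand with dbfs_client before any cluster resource is configured. A sketch, run as the oracle user; passwd.txt is a hypothetical file holding the dbfs_user password, which dbfs_client reads from stdin:

```shell
# Mount DBFS manually in the background; the connect string dbfs_user@GG
# matches the repository owner and database created above.
# passwd.txt is a hypothetical file containing the dbfs_user password.
nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@GG \
  -o allow_other,direct_io /oracle/dbfs_direct < passwd.txt &

# Unmount when finished testing:
fusermount -u /oracle/dbfs_direct
```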
 

Register with Oracle Cluster Resources

Click here to get a copy of the mount-dbfs.sh script.

$ cp mount-dbfs.sh $GRID_HOME/crs/script/mount-dbfs.sh

$ chown oracle:dba $GRID_HOME/crs/script/mount-dbfs.sh

$ chmod 750 $GRID_HOME/crs/script/mount-dbfs.sh

Create the following file and execute it to create the resource.

> cat add-dbfs-resource.sh
#!/bin/bash

ACTION_SCRIPT=/u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=GG
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME

crsctl add resource $RESNAME \
-type cluster_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"

##### end script add-dbfs-resource.sh

exit 0

> ./add-dbfs-resource.sh

Start the database and then start the dbfs_mount resource, which will mount the filesystem.

> srvctl start database -d GG

> crsctl start resource dbfs_mount

CRS-2672: Attempting to start 'dbfs_mount' on 'ol6u2'
CRS-2676: Start of 'dbfs_mount' on 'ol6u2' succeeded

> crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       ol6u2

You should now be able to see the mount using "df -h".

> df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_ol6u2-lv_root
26G   22G  2.6G  90% /
tmpfs                1004M  212M  792M  22% /dev/shm
/dev/sda1             485M  115M  346M  25% /boot
/dev/sdb1              20G  8.7G   11G  47% /oracle
dbfs-dbfs_user@GG:/   199M  400K  199M   1% /oracle/dbfs_direct

Now for the GoldenGate configuration

Install GoldenGate if you haven’t already done so; click here to see a previous blog post covering the installation.

Create the shared directories on DBFS:

> cd /oracle/dbfs_direct/FS1/golden_gate
> mkdir dirchk
> mkdir dirpcs
> mkdir dirprm
> mkdir dirdat
> mkdir dirrpt
> mkdir BR

Change to the $GG_HOME directory and create the following soft links:

> ls -lhF | egrep "drwx|lrwx"
lrwxrwxrwx 1 oracle oinstall   38 Mar  2 08:20 BR -> /oracle/dbfs_direct/FS1/golden_gate/BR/
lrwxrwxrwx 1 oracle oinstall   47 Mar 11 08:54 dirchk -> /oracle/dbfs_direct/FS1/golden_gate/dirchk/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:20 dirdat -> /oracle/dbfs_direct/FS1/golden_gate/dirdat/
lrwxrwxrwx 1 oracle oinstall   42 Mar 11 08:56 dirpcs -> /oracle/dbfs_direct/FS1/golden_gate/dirpcs/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirprm -> /oracle/dbfs_direct/FS1/golden_gate/dirprm/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirrpt -> /oracle/dbfs_direct/FS1/golden_gate/dirrpt/
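The links in a listing like this can be created with a small loop. A runnable sketch, using throwaway directories as stand-ins; in practice GG_HOME is the GoldenGate install directory and DBFS_DIR is /oracle/dbfs_direct/FS1/golden_gate:

```shell
GG_HOME=$(mktemp -d)   # stand-in for the GoldenGate home
DBFS_DIR=$(mktemp -d)  # stand-in for the DBFS directory
for d in dirchk dirpcs dirprm dirdat dirrpt BR; do
  mkdir -p "$DBFS_DIR/$d"                # DBFS-side directory
  ln -sfn "$DBFS_DIR/$d" "$GG_HOME/$d"   # soft link from the GG home
done
ls -l "$GG_HOME"
```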

GoldenGate is now ready to use.

You can follow the above procedure to configure the target environment as well, with one exception: there should be two separate DBFS file systems, one for the dirchk directory, which contains the frequently written checkpoint file, and one for the remaining directories. This separates the directory I/O requirements, since the checkpoint file is continuously rewritten. Due to this heavy write activity, performance is best when the checkpoint file is stored in DBFS with the CACHE LOGGING storage option.

Create a new tablespace to hold the DBFS Checkpoint Directory

SQL> create tablespace dbfs_chkpt_ts
datafile '+DBFS' size 100M
autoextend on
next 100M
maxsize 500M
NOLOGGING
EXTENT MANAGEMENT LOCAL
AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

SQL> alter user dbfs_user quota unlimited on dbfs_chkpt_ts;

Create the database objects that will hold DBFS

> cd $ORACLE_HOME/rdbms/admin

> sqlplus dbfs_user/dbfs_user

SQL> start dbfs_create_filesystem dbfs_chkpt_ts FS_CHKPT

No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_FS_CHKPT', tbl_name => 'T_FS_CHKPT', tbl_tbs => 'dbfs_chkpt_ts', lob_tbs => 'dbfs_chkpt_ts',
do_partition => false, partition_key => 1, do_compress => false, compression => '', do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_FS_CHKPT', provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_FS_CHKPT', store_mount=>'FS_CHKPT'); end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/FS_CHKPT', 16895); end;
No errors.

Connect to the DBFS database and change the LOB storage option

> sqlplus / as sysdba

SQL> SELECT owner
, table_name
, segment_name
, logging
, cache
FROM dba_lobs
WHERE tablespace_name like 'DBFS_%';

OWNER           TABLE_NAME                     SEGMENT_NAME                   LOGGING CACHE
--------------- ------------------------------ ------------------------------ ------- ----------
DBFS_USER       T_FS_CHKPT                     LOB_SFS$_FST_528               NO      NO
DBFS_USER       T_FS1                          LOB_SFS$_FST_1                 YES     NO

SQL> ALTER TABLE DBFS_USER.T_FS_CHKPT MODIFY LOB (FILEDATA) (CACHE LOGGING);

SQL> SELECT owner
, table_name
, segment_name
, logging
, cache
FROM dba_lobs
WHERE tablespace_name like 'DBFS_%';

OWNER           TABLE_NAME                     SEGMENT_NAME                   LOGGING CACHE
--------------- ------------------------------ ------------------------------ ------- ----------
DBFS_USER       T_FS_CHKPT                     LOB_SFS$_FST_528               YES     YES
DBFS_USER       T_FS1                          LOB_SFS$_FST_1                 YES     NO

Create the same soft links here as before, pointing to the DBFS mount, except that the dirchk directory will point to the FS_CHKPT file system.

> ls -lhF | egrep "drwx|lrwx"
lrwxrwxrwx 1 oracle oinstall   38 Mar  2 08:20 BR -> /oracle/dbfs_direct/FS1/golden_gate/BR/
lrwxrwxrwx 1 oracle oinstall   47 Mar 11 08:54 dirchk -> /oracle/dbfs_direct/FS_CHKPT/golden_gate/dirchk/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:20 dirdat -> /oracle/dbfs_direct/FS1/golden_gate/dirdat/
lrwxrwxrwx 1 oracle oinstall   42 Mar 11 08:56 dirpcs -> /oracle/dbfs_direct/FS1/golden_gate/dirpcs/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirprm -> /oracle/dbfs_direct/FS1/golden_gate/dirprm/
lrwxrwxrwx 1 oracle oinstall   42 Mar  2 08:19 dirrpt -> /oracle/dbfs_direct/FS1/golden_gate/dirrpt/