User exit properties include properties to control logging and general properties that control naming and handling of transactions.
Logging is controlled by the following properties.
Specifies the prefix to the log file name. This must be a valid ASCII string. The log file name has the current date appended to it, in yyyymmdd format, together with the .log extension.
The following example will create a log file named writer_20140803.log on August 3, 2014.
goldengate.log.logname=writer
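As an illustration only (not the adapter's implementation), the naming scheme can be sketched in Python, assuming the adapter joins the prefix and date with an underscore as the writer_20140803.log example suggests:

```python
from datetime import date

def log_file_name(prefix: str, current: date) -> str:
    # prefix + current date (yyyymmdd format) + .log extension,
    # joined with an underscore as in the writer_20140803.log example
    return f"{prefix}_{current.strftime('%Y%m%d')}.log"

print(log_file_name("writer", date(2014, 8, 3)))  # writer_20140803.log
```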
Specifies the overall log level for all modules. The syntax is:
goldengate.log.level=ERROR | WARN | INFO | DEBUG
The log levels are defined as follows:
ERROR – Only write messages if errors occur
WARN – Write error and warning messages
INFO – Write error, warning, and informational messages
DEBUG – Write all messages, including debug messages
The default logging level is INFO. The messages in this case are produced on startup, shutdown, and periodically during operation. For example, the following sets the global logging level to INFO:
goldengate.log.level=INFO
Note:
If the level is switched to DEBUG, large volumes of messages may occur, which could impact performance.
Controls whether or not log information is written to standard out. This setting is useful if the Extract process is running with a VAM started from the command line, or on an operating system where stdout is piped into the report file. However, Oracle GoldenGate processes generally run as background processes. The syntax is:
goldengate.log.tostdout={true | false}
The default is false.
General properties control file writer names, check pointing, handling of transactions, representation of timestamps, and the format used for column and object names.
Specifies the name of the writer that will run within the user exit. Enter multiple string values to enable multiple named writers to run within the same user exit. For example:
goldengate.flatfilewriter.writers=dsvwriter,diffswriter,binwriter
Ensure there are no spaces before or after the equal sign or the commas. All other properties in the file should be prefixed by one of the writer names.
Controls whether entire transactions are read before being output. When set to true, an entire transaction is read from the trail before being output. For example:
goldengate.userexit.buffertxs=true
The default is false. Setting this to true is useful only if the numops metadata column is used. Currently the only way to calculate the numops value is to buffer transactions and output one transaction at a time.
Specifies a string value as the prefix to be added to the checkpoint file name. When running multiple data pumps, the checkpoint prefix should be set to the name of the process. For example:
goldengate.userexit.chkptprefix=pump1_
Controls whether the need to roll files over is checked after every transaction or only when the Extract process checkpoints. If set to true, the adapter checks whether a file is due to be rolled over after it has processed a transaction. If due, the rollover is performed and the checkpoint file is updated. This is useful if tight control over the contents of output files is required. For example, if all data up to midnight should be written to files before rolling over at midnight, it is important that the check occurs on every transaction. For example:
goldengate.userexit.chkpt.ontxend=true
The default is false. If set to false, the adapter only checks for rollover when Extract checkpoints (every 10 seconds by default).
Controls whether or not a colon is written between the date and time. When set to false, date and time column values are written to the output files in the default format of the Oracle GoldenGate trail, YYYY-MM-DD:HH:MI:SS.FFFF. When set to true, the format is changed to YYYY-MM-DD HH:MI:SS.FFF, with no colon between the date and time. The default is false.
goldengate.userexit.datetime.removecolon=true
Controls whether the record timestamp is output as local time or Coordinated Universal Time (UTC). When this is not set to utc, the record timestamp is output as local time using the local time zone. The default is local time.
goldengate.userexit.timestamp=utc
Controls the maximum output length of a date time column. Setting this to an integer value truncates the column value to that length. Since the date and time format is YYYY-MM-DD:HH:MI:SS.F(9), the maximum length of a date and time column is 29 characters.
For example:
goldengate.userexit.datetime.maxlen=19
Setting goldengate.userexit.datetime.maxlen=19 truncates to date and time with no fractional seconds. Setting goldengate.userexit.datetime.maxlen=10 truncates to the date only. The default is to output the full date and time column value.
Controls whether column data and table, file, and column names are returned in the UTF8 character set. When this is set to false, all data will be in the character set of the operating system. The default is true.
The syntax is:
goldengate.userexit.utf8mode=true|false
File writer properties control the format of the output file and how the files are written.
The following properties set the delimiter types of the values and the grouping of columns.
Controls whether the output format is delimiter separated values (DSV) or length delimited values (LDV). In DSV mode, each record is output as a set of delimited values:
POSITION|OPCODE|TIMESTAMP|COLVALA|COLVALB|. . .
Note:
DSV is not limited to comma separated values (as CSV is).
Note:
In LDV mode, lengths can be ASCII or binary, some metadata columns can be fixed length (see Metadata Columns), and this format supports Unicode multibyte data.
For example:
writer.mode=dsv
writer2.mode=ldv
Note:
For backward compatibility, csv is accepted instead of dsv, and binary instead of ldv. There is no difference in the output formats when using the alternate options.
Controls whether or not the column names, before values and after values are grouped together.
The syntax is:
writer.groupcols=true|false
The default is false. This results in a set of name, before value, and after value listed together, as shown in this example for COL1 and COL2:
"COL1", COL1_B4, COL1, "COL2", COL2_B4, COL2
With the property set to true, the columns are grouped into sets of all names, all before values, and all after values:
"COL1", "COL2", COL1_B4, COL2_B4, COL1, COL2
The following properties control how files are written, where to, and what their extensions will be. This is independent of the writer mode and data contents.
Controls whether data is split over multiple rolling files (one per table in the input data) or all data is written to one rolling file. The default is true.
The syntax is:
writer.files.onepertable=true|false
In the following example, the writer file writer creates one file per table, and writer2 writes all data to one file.
writer.files.onepertable=true
writer2.files.onepertable=false
Controls whether or not data is split based on the insert, update, delete, or primary key operation codes.
For example, the following setting will create separate output files for inserts, updates, deletes, and primary key updates:
writer.files.oneperopcode=true
The default is false: all records are output to the same files independent of the type of operation.
In addition to this property, you must also modify the files.formatstring property to accept the %o placeholder. This indicates the position at which to write the operation code when the file name is created if the files.oneperopcode property is set. The default file name will also include the operation code if that property is set.
Specifies a value to be used as the prefix for data files and control files. This property only applies if the writer is not in one-per-table mode (files.onepertable=false). For data files, the prefix is ignored if the files.formatstring property is being used.
By default, the prefix is set to the string output, so a file named data1 becomes outputdata1 by default. With the following example, the file name becomes test_data1.
writer.files.prefix=test_
Specifies the location and extension of all data files. Before rolling over, the files have the tmpext extension; after rolling over, they have the ext extension. The extension does not have to be a simple .ext format; additional characters can be appended to the file name before the extension to differentiate the data output. Ensure the named output directory exists and that the user running the Oracle GoldenGate processes has the correct permissions to write to that directory. For example:
# specify the root directory for outputting data files
writer.files.data.rootdir=./out
# determine the extension for data files when rolled over
writer.files.data.ext=_data.dsv
# determine the extension for data files before rolling over
writer.files.data.tmpext=_data.dsv.temp
writer.files.control.use is a boolean true or false value that defaults to true. The others are ASCII values. These properties determine the use, location, and extension of control files. Control files share the same name prefix as the data files they are related to, but have the defined extension. By default, files.control.ext is .control. For example:
# specify whether or not to output a control file
writer.files.control.use=true
# specify the extension to use for control files
writer.files.control.ext=_data.control
# directory in which to place control files, defaults to data directory
writer.files.control.rootdir=./out
Specifies the value in characters or hexadecimal code to be used as the data delimiter or the end-of-line indicator. The default delimiter is a comma (,). The default new line trigger is the newline character that is valid for the platform.
For example, to override the comma as the data delimiter:
writer.files.control.delim.chars=#
For example, to set the new line indicator:
writer.files.control.eol.chars=\n
Specifies the filename format string to be used in creating the file names for data files. The format string overrides the files.prefix property. This filename format string is similar in syntax to standard C formatting, except that the following placeholders can be added to the file name:
%s = schema
%t = table
%n = seqno
%d = timestamp
%o = opcode
The format of the seqno can be specified. For example, %05n means five digits will be displayed, padded with zeros. The seqno starts at zero and is incremented by one each time a file rolls over. It is stored as a long int, so the maximum value is platform dependent; for example, on a 64 bit machine the largest value is 2^64-1.
These placeholders can be intermingled with user specified text in any order desired. For example:
writer.files.formatstring=myext_%d_%010n_%s_%
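The placeholder expansion can be approximated with a Python sketch. This is a simplified illustration (the adapter's actual parser may differ), using hypothetical values for schema, table, seqno, timestamp, and opcode:

```python
import re

def expand_format(fmt, schema, table, seqno, timestamp, opcode):
    # %s schema, %t table, %n seqno (optionally zero-padded, e.g. %05n),
    # %d timestamp, %o opcode
    def substitute(m):
        width, code = m.group(1), m.group(2)
        if code == "n":
            return str(seqno).zfill(int(width or "0"))
        return {"s": schema, "t": table, "d": timestamp, "o": opcode}[code]
    return re.sub(r"%(\d*)([stndo])", substitute, fmt)

print(expand_format("myext_%d_%010n_%s_%t", "HR", "EMP", 5, "20140803103051", "I"))
# myext_20140803103051_0000000005_HR_EMP
```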
Specifies a hexadecimal value as the byte order marker (BOM) to be written to the beginning of the file. The user is responsible for ensuring the BOM matches the data in the files. If no hexadecimal value is specified, the marker is not written.
The following example results in the UTF8 BOM efbbbf being written as the first bytes of all output files.
writer.files.data.bom.code=efbbbf
Controls whether or not the name of the Extract process is included as part of the file name. The default is false.
The syntax is:
writer.files.includeprocessname=true|false
Controls whether or not hidden files are created to identify the Extract process that owns the file. This can be used to avoid overwriting files from different Oracle GoldenGate installations. The default is false.
The syntax is:
writer.files.useownerfiles=true|false
The following properties determine the policies for rolling over files.
Specifies the maximum number of seconds of elapsed time that must pass from the first record written to the file before the file is rolled over. For example:
# number of seconds before rolling over
writer.files.data.rollover.time=10
Specifies the minimum number of kilobytes that must be written to the file before the file is rolled over.
This example sets the minimum to 10,000 KB:
# max file size in KB before rolling over
writer.files.data.rollover.size=10000
Specifies the maximum number of elapsed seconds since data was written to a file to wait before rolling over the file. The default is 120 seconds.
This example sets the timeout interval to 10 seconds:
# roll over in case no records for a period of time
writer.files.data.norecords.timeout=10
Controls the policy for roll over when the Extract process stops. If this value is false, all empty temporary files will be deleted, but any that have data will be left as temporary files. If this property is true, all non-empty temporary files will be rolled over to their rolled file name, a checkpoint written and empty temporary files deleted. For example:
# roll over non-empty and delete all empty files when Extract stops
writer.files.rolloveronshutdown=true
Note:
You can use time, size, or both. If you use both, whichever limit is reached first causes a rollover. The timeout interval ensures files are rolled over if they contain data, even if there are no new records to be processed. If neither time nor size is specified, files roll over after a default maximum size of 1 MB.
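The combined policy can be sketched as follows. This is an illustrative model of the rules above (times in seconds, sizes in KB), not the adapter's code; the 1 MB fallback applies only when neither limit is configured:

```python
def should_roll_over(age_s, size_kb, idle_s,
                     time_limit_s=None, size_limit_kb=None,
                     norecords_timeout_s=120):
    # Neither time nor size configured: default maximum size of 1 MB
    if time_limit_s is None and size_limit_kb is None:
        size_limit_kb = 1024
    # Whichever configured limit is reached first triggers a rollover
    if time_limit_s is not None and age_s >= time_limit_s:
        return True
    if size_limit_kb is not None and size_kb >= size_limit_kb:
        return True
    # Idle timeout: roll files that contain data even with no new records
    return size_kb > 0 and idle_s >= norecords_timeout_s

print(should_roll_over(11, 1, 0, time_limit_s=10))  # True (time limit reached)
print(should_roll_over(5, 1, 130))                  # True (norecords timeout)
```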
Controls whether to use the Julian commit timestamp rather than the system time to trigger file roll over. The syntax is:
writer.files.data.rollover.timetype=commit|system
The following example will use the commit timestamp of the source trail records to determine roll over:
writer.files.data.rollover.timetype=commit
The default is to use the system time to determine when to roll over files.
Controls whether or not all files will be rolled over simultaneously independent of when they first received records. Normally files are rolled over individually based on the time or size properties. The time is based on the roll over period, so it depends on the time records were first written to a particular file. In some cases, especially when outputting data with one file per table, you may want to roll over all currently open files at the same time, independent of when data was first written to that file.
The following example instructs the adapter to roll over all files simultaneously.
writer.files.data.rollover.multiple=true
The default value is false
.
Specifies a time for the adapter to roll over files. Enter the time in 24-hour format (HH:MM). Only one value entry is supported. The wildcard (*) is supported for hours. The syntax is:
writer.files.data.rollover.attime=time_specifier
The following example will roll over to a new file every hour on the hour:
writer.files.data.rollover.attime=*:00
The following example will roll over every hour at fifteen minutes after the hour:
writer.files.data.rollover.attime=*:15
Note that the writer.files.data.rollover.timetype property determines whether system or commit time is used.
The following properties determine the data that is written to the data files. These properties are independent of the format of the output data.
Controls whether character data retains its original binary form or is output as ASCII. The default is false. This property should be set if the input data contains Unicode multibyte data that should not be converted to ASCII. For example:
# whether to output characters as ASCII or binary (for Unicode data)
writer.rawchars=false
writer2.rawchars=true
Controls whether or not both the before and after images of data are included in the output for update operations. The default is false. This is only relevant if the before images are available in the original data and getupdatebefores is present in all Oracle GoldenGate parameter files in the processing chain. For example:
# whether to output update before images
writer.includebefores=true
This produces . . ."VAL_BEFORE_1","VAL_1","VAL_BEFORE_2","VAL_2". . .
Controls whether or not the after image is written before the before image when includebefores is set to true. For example:
writer.afterfirst=true
This true setting results in the after image being listed before the before image:
"VAL_1", "VAL_BEFORE_1", "VAL_2", "VAL_BEFORE_2"
The default is false; in this case, the after image is written after the before image.
Controls whether or not column names are output before the column values. The default is false. For example:
# whether to output column names
writer.includecolnames=true
This produces …"COL_1","VAL_1","COL_2","VAL_2"…
Controls whether or not column values are omitted in the output files. The default is false. For example:
# whether to output column values
writer.omitvalues=false
This produces …"COL_1","COL_2"…, if includecolnames is also set to true.
Controls whether all columns are output, or only those where the before image differs from the after image. The default is false. This only applies to updates and requires GETUPDATEBEFORES in all Oracle GoldenGate parameter files in the processing chain. This property is independent of the includebefores property. For example:
# whether to output only columns with differences between before and
# after images (deletes and inserts have all available columns)
writer.diffsonly=true
This produces . . ."VAL_1",,,"VAL_4",,,"VAL_7". . .
Controls whether delimiters/lengths are included in the output for missing columns. The default is false. This applies to updates and deletes where the COMPRESSUPDATES or COMPRESSDELETES flag was present in an Oracle GoldenGate parameter file in the processing chain. In this case, values may be missing. Also, if writer.diffsonly is true, values that are not different are treated as missing. For example:
# whether to skip record delimiters if columns are missing
writer.omitplaceholders=true
This changes . . ."VAL_1",,,"VAL_4",,,"VAL_7". . .
to . . ."VAL_1","VAL_4","VAL_7". . .
Valid metadata columns are:
position - A unique position indicator of records in a trail.
opcode - I, U, D, or K for Insert, Update, Delete, or Primary Key update records.
txind - The general record position in a transaction (0 - begin, 1 - middle, 2 - end, 3 - only).
txoppos - Position of record in a transaction, starting from 0.
schema - The schema (owner) name of the changed record.
table - The table name of the changed record.
schemaandtable - Both the schema and table name concatenated as schema.table
timestamp - The commit timestamp of the record.
@<token name> - A token value defined in the Extract parameter file.
$getenv - A GETENV value as documented in the Oracle GoldenGate Reference Guide; for example, $GGHEADER.OPCODE.
%COLNAME - The value of a data column.
numops - The number of operations in the current transaction. This value will always be 1 if goldengate.userexit.buffertxs is not true.
numcols - The number of columns to be output. This value is equal to the number of columns in the original record, minus the number of columns output as metadata columns up until the point this metadata column is used.
"<value>" - Any literal value.
Some things to consider when using metadata columns:
The ASCII values for opcode and txind can be overridden.
For LDV, metadata columns can be variable or fixed length.
The position can be written in hexadecimal or decimal.
Any metadata column can be the internal value or it can be read from a column of the original data.
A literal value is indicated by enclosing it in quotes. When a literal value is specified, that value will be output as a character string in the specified metadata column position using the appropriate quote policy.
A column value is indicated by %COLNAME
. When a column value is specified, that column value is output in the metadata section of the output record, rather than in the column values section. This may be used to ensure that the column is always output in the same position in the record, independent of the table being output.
The following properties apply to metadata columns.
Specifies the metadata columns to output in the order of output. Enter multiple names as ASCII values separated by commas. For example:
# which metacols to output and in which order writer.metacols=timestamp,opcode,txind,position,schema,table
Specifies an integer value to determine the length of data to write for the metadata column specified by metacol_name. If the actual data is longer than the fixed length, it is truncated; if it is shorter, the output is padded. For example:
# timestamp is fixed length
writer.metacols.timestamp.fixedlen=23
This truncates 2011-08-03 10:30:51.123456 to 2011-08-03 10:30:51.123.
Specifies an ASCII value to use as the column name of data values instead of using the metacol_name value for a metadata column. If set, this column name must exist in all tables processed by the user exit. There is currently no way to override this column name on a per-table basis. For example, to read the timestamp from a column instead of using the internal value:
# timestamp is read from a column
writer.metacols.timestamp.column=MY_TIMESTAMP_COL
Specifies the characters or hexadecimal code to be used when the value of token_name is not available. Use ASCII values for chars and hexadecimal values for code. The default value is NO VALUE. For example:
writer.metacols.TKN-SCN.novalue.chars=0
Controls whether the metacol_name column value is justified to the left or the right. By default, all metadata columns are justified to the left. For example, to justify a token to the right:
writer.metacols.TKN-SCN.fixedjustify=right
Specifies either a character or code value to be used for padding a metadata column. Use ASCII values for chars and hexadecimal values for code. The default character used for padding is a space (" "). For example:
writer.metacols.TKN-SCN.fixedpadchar.chars=0
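Taken together, fixedlen, fixedjustify, and fixedpadchar behave roughly like the following sketch (illustrative only, not the adapter's implementation):

```python
def fixed_metacol(value, length, justify="left", padchar=" "):
    # Longer values are truncated; shorter values are padded on the
    # side opposite the justification
    value = value[:length]
    if justify == "left":
        return value.ljust(length, padchar)
    return value.rjust(length, padchar)

print(fixed_metacol("2011-08-03 10:30:51.123456", 23))  # 2011-08-03 10:30:51.123
print(fixed_metacol("7", 4, justify="right", padchar="0"))  # 0007
```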
Specifies an override value for the default character I that identifies insert operations. Use ASCII values for chars and hexadecimal values for code.
The following example instructs the adapter to use INS for inserts:
writer.metacols.opcode.insert.chars=INS
Specifies an override value for the default character U that identifies update operations. Use ASCII values for chars and hexadecimal values for code.
The following example instructs the adapter to use UPD for updates:
writer.metacols.opcode.update.chars=UPD
Specifies an override value for the default character D that identifies delete operations. Use ASCII values for chars and hexadecimal values for code.
The following example instructs the adapter to use DEL for deletes:
writer.metacols.opcode.delete.chars=DEL
Specifies an override value for the default character K that identifies primary key update operations. Use ASCII values for chars and hexadecimal values for code.
The following example instructs the adapter to use PKU for primary key updates:
writer.metacols.opcode.updatepk.chars=PKU
Specifies the override values to use to identify the beginning, middle, or end of transactions, or an operation that is the whole transaction. Use ASCII values for chars and hexadecimal values for code. The default value is 0 for Begin.
The following example overrides the 0 with the letter B.
# tx indicator value is overridden
writer.metacols.txind.begin.chars=B
Specifies the override value to use to identify the middle of transactions. Use ASCII values for chars and hexadecimal values for code. The default value is 1 for Middle.
The following example overrides the 1 with the letter M.
# tx indicator value is overridden
writer.metacols.txind.middle.chars=M
Specifies the override value to use to identify the end of transactions. Use ASCII values for chars and hexadecimal values for code. The default value is 2 for End.
The following example overrides the 2 with the letter E.
# tx indicator value is overridden
writer.metacols.txind.end.chars=E
Specifies the override value to use to identify an operation that is the whole transaction. Use ASCII values for chars and hexadecimal values for code. The default value is 3 for Whole.
The following example overrides the 3 with the letter W.
# tx indicator value is overridden
writer.metacols.txind.whole.chars=W
Controls whether the output of the position metadata column is in decimal or hexadecimal format. If hexadecimal, this is typically a 16 character value; if decimal, the length varies. Currently this contains the sequence number and RBA of the Oracle GoldenGate trail that the Extract process is reading from. For example:
# position is in decimal format (seqno0000000rba)
writer.metacols.position.format=dec
This produces 120000012345 for seqno 12, rba 12345.
writer2.metacols.position.format=hex
This produces 0000000c00003039 for seqno 12, rba 12345.
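The two encodings can be reproduced in Python; the field widths here are inferred from the examples above, so treat this as an illustration rather than the adapter's exact algorithm:

```python
def position_value(seqno, rba, fmt):
    if fmt == "hex":
        # 16 characters: 8 hex digits for the seqno, 8 for the rba
        return f"{seqno:08x}{rba:08x}"
    # decimal: seqno followed by the rba zero-padded to 10 digits
    return f"{seqno}{rba:010d}"

print(position_value(12, 12345, "dec"))  # 120000012345
print(position_value(12, 12345, "hex"))  # 0000000c00003039
```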
Controls whether a metadata column can be used as metadata but not output.
The following example specifies that numcols can be used as metadata, but not output.
writer.metacols.numcols.omit=true
Specifies the metadata columns to use to mark the beginning and end of a transaction. These marker records are written (with end of line delimiters) to the output files before and after the operation records that make up the transaction.
The syntax is:
writer.begintx.metacols=metacols_list
The following example specifies marking the beginning of a transaction with the letter B and the number of operations in the transaction.
writer.begintx.metacols="B",numops
In the following example, the end of transaction marker will be the letter E.
writer.endtx.metacols="E"
Any of the existing metadata columns can be used in the transaction begin and end markers. If you specify a column value or a specific property of a record (such as the table name) for begintx.metacols, the value for the first record in the transaction is used. For endtx.metacols, the value for the last record is used.
For example, if the transaction has the following records:
rec=0,table=tabA,operation=insert,col1=val1,col2=val2
rec=1,table=tabA,operation=update,col1=val3,col2=val4
rec=2,table=tabA,operation=delete,col1=val5,col2=val6
rec=3,table=tabB,operation=update,col1=val7,col2=val8
And the properties are set as follows:
writer.begintx.metacols="B",table,%col2
writer.endtx.metacols="E",table,%col2
Then the begin transaction marker will be "B","tabA","val2" and the end marker will be "E","tabB","val8".
If numops is used to output the number of operations in a transaction for either the begin or end markers, you must also set:
goldengate.userexit.buffertxs=true
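The marker construction can be modeled in Python. This is a simplified illustration using dictionaries for records; literal values, record properties (such as table), and %column references are resolved as described above:

```python
def render_marker(metacols, record):
    parts = []
    for item in metacols:
        if item.startswith('"'):       # literal value, e.g. "B"
            parts.append(item)
        elif item.startswith("%"):     # %COLNAME: a data column value
            parts.append(f'"{record[item[1:]]}"')
        else:                          # a record property, e.g. table
            parts.append(f'"{record[item]}"')
    return ",".join(parts)

records = [
    {"table": "tabA", "col2": "val2"},
    {"table": "tabA", "col2": "val4"},
    {"table": "tabA", "col2": "val6"},
    {"table": "tabB", "col2": "val8"},
]
# the begin marker uses the first record, the end marker the last
print(render_marker(['"B"', "table", "%col2"], records[0]))   # "B","tabA","val2"
print(render_marker(['"E"', "table", "%col2"], records[-1]))  # "E","tabB","val8"
```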
Note:
When this property is set, the adapter buffers transactions in memory, so care should be taken to limit the number of operations in the transactions being handled by the system.
DSV files have the following record format:
{[METACOL][FD]}n{[COL][FD]}m[LD]
Where:
METACOL is any defined metadata column
COL is any data column
FD is the field delimiter
LD is the line delimiter
Column values may be quoted, for example: "2013-01-10 10:20:31","U","MY.TABLE",2000,"DAVE"
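A minimal sketch of the DSV record layout (quoting every value for simplicity; the actual quote policy is configurable, as described below):

```python
def dsv_record(metacols, cols, fd=",", ld="\n", quote='"'):
    # {[METACOL][FD]}n {[COL][FD]}m [LD], with every value quoted
    fields = [f"{quote}{value}{quote}" for value in metacols + cols]
    return fd.join(fields) + ld

print(dsv_record(["2013-01-10 10:20:31", "U", "MY.TABLE"], ["2000", "DAVE"]))
```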
Specifies the characters to use for NULL values in delimiter separated files. These values override the default NULL value of an empty string. Use ASCII values for chars and hexadecimal values for code. For example:
writer.dsv.nullindicator.chars=NULL
writer.dsv.nullindicator.code=0a0a0a0a
Specifies an override value for the field delimiter. The default is a comma (,). Use ASCII values for chars and hexadecimal values for code. For example:
# define the characters to use for field delimiters in DSV files
writer.dsv.fielddelim.chars=|
Specifies an override value for the line delimiter. The default is a new line character appropriate to the operating system. Use ASCII values for chars and hexadecimal values for code. For example:
# define the characters to use for line delimiters in DSV files
writer.dsv.linedelim.chars=\n
Specifies an override value for the quote character. The default is a double quote ("). Use ASCII values for chars and hexadecimal values for code. For example:
# define the characters to use for quotes in DSV files
writer.dsv.quotes.chars='
Controls the policy for applying quotes.
The syntax is:
writer.dsv.quotes.policy={default|none|always|datatypes}
Where:
default – Only dates and chars are quoted
none – No metadata column or column values are quoted
always – All metadata columns and column values are quoted
datatypes – Only specific data types are quoted
If this property is set, it overrides the dsv.quotealways property. Use the dsv.quotes.datatypes property to specify which data types should be quoted.
Controls whether integer, character, float, or datetime data types are quoted when dsv.quotes.policy is set to datatypes.
The syntax is:
writer.dsv.quotes.datatypes=[char][,integer][,float][,date]
For example, the following instructs the adapter to quote character and datetime values only:
writer.dsv.quotes.datatypes=char,date
If no data types are specified, the data types option defaults to all data types, which is equivalent to always.
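The four policies can be summarized in a small sketch (illustrative only; the data type names are the ones accepted by dsv.quotes.datatypes):

```python
def quote_value(value, datatype, policy,
                datatypes=("char", "integer", "float", "date")):
    # default: only dates and chars; none: nothing; always: everything;
    # datatypes: only the configured types (all types if none configured)
    quoted = {
        "default": datatype in ("char", "date"),
        "none": False,
        "always": True,
        "datatypes": datatype in datatypes,
    }[policy]
    return f'"{value}"' if quoted else value

print(quote_value("1234", "integer", "default"))                    # 1234
print(quote_value("Hello", "char", "datatypes", ("char", "date")))  # "Hello"
```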
Specifies the escaped value for a null indicator. If set, all values are checked for the null indicator value and replaced with the escaped value when output. Use ASCII values for chars and hexadecimal values for code. For example:
# (optionally) you can define the characters (or code) to use
# to escape these values if found in data values
writer.dsv.nullindicator.escaped.chars=NULL
This changes the null indicator to NULL.
Specifies the escaped value for a field delimiter. If set, all values are checked for the field delimiter value and replaced with the escaped value when output. Use ASCII values for chars and hexadecimal values for code. For example:
writer.dsv.fielddelim.escaped.chars=|
This changes the field delimiter to |.
Specifies the escaped value for a line delimiter. If set, all values are checked for the line delimiter value and replaced with the escaped value when output. Use ASCII values for chars and hexadecimal values for code. For example:
writer.dsv.linedelim.escaped.chars=\n
writer.dsv.linedelim.escaped.code=D
Both change the line delimiter to \n.
Specifies the escaped value for quotes. If set, all values are checked for the quote character and replaced with the escaped value when output. Use ASCII values for chars and hexadecimal values for code. For example:
writer.dsv.quotes.escaped.chars=""
This changes "some text" to ""some text"".
Controls whether or not each column value is forced onto a new line. Each line will also contain the metadata columns defined for this writer. The default is false. For example:
# Force each column onto a new line with its own meta cols
writer.dsv.onecolperline=true
This changes:
{metacols},val_1,val_2
to:
{metacols},val_1
{metacols},val_2
Controls whether or not each column is surrounded by quotes, even if it is a numeric value. The default is false.
Note:
This property has been superseded by dsv.quotes.policy and is supported only for backward compatibility. The value set for dsv.quotealways is ignored if dsv.quotes.policy is set.
For example:
writer.dsv.quotealways=true
This changes . . .,1234,"Hello",10
to . . .,"1234","Hello","10". . .
LDV files have the following record format:
[RECLEN][METACOLS]{[FLAG][LEN][VALUE]}n
Where:
RECLEN is the full record length in bytes
METACOLS are all selected metadata columns
FLAG can be (M)issing, (P)resent, or (N)ull
LEN is the column value length (0 for missing and null)
VALUE is the column value
For example:
01072007-01-10 10:20:31U302MY05TABLEP042000M00N00P04DAVE
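The [FLAG][LEN][VALUE] portion of the example can be reproduced with a sketch that uses two-digit ASCII lengths (an illustration only; flag characters and length widths are configurable, as described below):

```python
MISSING = object()  # sentinel for a column absent from the record

def ldv_field(value):
    # (P)resent: flag + 2-digit ASCII length + value;
    # (M)issing and (N)ull carry a zero length and no value
    if value is MISSING:
        return "M00"
    if value is None:
        return "N00"
    return "P" + str(len(value)).zfill(2) + value

print(ldv_field("2000") + ldv_field(MISSING) + ldv_field(None) + ldv_field("DAVE"))
# P042000M00N00P04DAVE
```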
Specifies override values for missing indicators. Use ASCII values for chars and hexadecimal values for code. For example:
writer.ldv.vals.missing.chars=MI
Specifies override values for present indicators. Use ASCII values for chars and hexadecimal values for code. For example:
writer.ldv.vals.present.chars=PR
Specifies override values for null indicators. Use ASCII values for chars and hexadecimal values for code. For example:
writer.ldv.vals.null.chars=NL
Controls the output mode of record and field lengths. The value can be either binary or ASCII. The default is binary.
If binary, the number written to the file is encoded in binary bytes. If ASCII, characters representing the decimal value of the length are used. For example:
writer.ldv.lengths.record.mode=binary
writer.ldv.lengths.field.mode=binary
Specifies the record and field lengths as integer values. If the mode is ASCII, this represents the fixed number of decimal digits to use. If binary, it represents the number of bytes.
In ASCII mode the lengths can be any value, but the exit will stop if a length exceeds the maximum. In binary mode, the lengths can be 2, 4, or 8 bytes, but the record length must be greater than the field length. For example:
# Lengths can be binary (2, 4, or 8 bytes) or ASCII (any length)
writer.ldv.lengths.record.length=4
writer.ldv.lengths.field.length=2
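In binary mode the lengths are written as fixed-width integers. A sketch using Python's struct module shows what 4-byte record lengths and 2-byte field lengths look like; this is an illustration only, and the big-endian byte order is an assumption, not something the adapter documents here:

```python
import struct

# Hypothetical illustration: encode lengths as fixed-width binary
# integers; big-endian byte order is an assumption.
record_len = struct.pack(">I", 107)  # 4-byte record length
field_len = struct.pack(">H", 4)     # 2-byte field length

print(record_len.hex(), field_len.hex())  # 0000006b 0004
```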
There are two ways that statistics regarding the data written to data files can be obtained:
As a report written to the Oracle GoldenGate report file
As a separate summary file associated with a data file on rollover
These two mechanisms can be used together or separately.
The data that can be obtained includes: (1) the total number of records processed, broken down into inserts, updates, and deletes; (2) the records processed per table, with the same breakdown; (3) the total rate and the rate per table; and (4) the delta for each of these since the last report. Reporting can be time based or synchronized to file rollover.
This data can be written to the report file or to a summary file linked to a data file on rollover. The reporting format is fixed. The summary file contains the same data in a delimited format, but related to the contents of a particular data file; it has the same name as the data file but a different extension, and can be used by a data integration product to cross-check processing.
Controls whether or not statistics are output to the Oracle GoldenGate report file. For example:
writer.statistics.toreportfile=true
Specifies the time period for statistics. The value can be either timebased or onrollover.
For example:
writer.statistics.period=onrollover
writer.statistics.period=timebased
If timebased, the time period should be set in statistics.time.
Note:
These values are valid only for outputting statistics to the report file. Statistics will be output to the summary file only on rollover.
Specifies a time interval in seconds after which statistics will be reported.
For example:
writer.statistics.time=5
Controls whether or not a summary file containing statistics for each data file will be created on rollover.
The following example creates the summary file.
writer.statistics.tosummaryfile=true
Controls the content of the summary files and the order in which the content is written. Specify multiple ASCII values as a comma-separated list.
Valid values are:
schema – schema or owner of the table that the statistics relate to
table – table that the statistics relate to
schemaandtable – schema and table in one column separated by a period '.'
gtotal – total number of records output for the specified table since the user exit was started
gtotaldetail – total number of inserts, updates and deletes separated by the delimiter since the user exit was started
gctimestamp – minimum and maximum commit timestamp for the specified table since user exit was started
ctimestamp – minimum and maximum commit timestamps for the specified table in the related data file.
total – total number of records output for the specified table in the related data file
totaldetail – total number of inserts, updates and deletes output for the specified table in the related data file
rate – average rate of output of data for the specified table in the related data file in records per second
ratedetail – average rate of inserts, updates and deletes for the specified table in the related data file in records per second
For example:
writer.statistics.summary.fileformat=schema,table,total,totaldetail,gctimestamp,ctimestamp
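To show how a summary row in a configured column order might be assembled, here is a hypothetical sketch; the write_summary_row helper and the statistics values are invented for illustration, and the default comma delimiter is assumed:

```python
def write_summary_row(stats, fileformat, delimiter=","):
    """Join the requested statistics columns in the configured order."""
    return delimiter.join(str(stats[col]) for col in fileformat.split(","))

# Hypothetical statistics for one table in one data file
stats = {
    "schema": "MY",
    "table": "TABLE",
    "total": 4,
    "totaldetail": "1,2,1",  # inserts, updates, deletes, separated by the delimiter
}
print(write_summary_row(stats, "schema,table,total,totaldetail"))
# MY,TABLE,4,1,2,1
```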
Controls whether or not an additional statistics row is written to the summary files. This row contains the overall (across all tables) statistics defined by the statistics.summary.fileformat property.
The following example will write this row.
writer.statistics.overall=true
Specifies override values for the field delimiter and end-of-line delimiter for the summary files. Use ASCII values for chars and hexadecimal values for code. The default is a comma (,) delimiter and a new line character. For example:
writer.statistics.summary.delimiter.chars=|
writer.statistics.summary.eol.code=0a0c