Oracle Database Utilities, Release 19
Table of Contents
List of Examples
List of Figures
List of Tables
Title and Copyright Information
Preface
Audience
Documentation Accessibility
Related Documentation
Syntax Diagrams
Conventions
Changes in This Release for Oracle Database Utilities
Changes in Oracle Database 19c
New Features
Desupported Features
Changes in Oracle Database 18c
Part I Oracle Data Pump
1
Overview of Oracle Data Pump
1.1
Oracle Data Pump Components
1.2
How Does Data Pump Move Data?
1.2.1
Using Data File Copying to Move Data
1.2.2
Using Direct Path to Move Data
1.2.3
Using External Tables to Move Data
1.2.4
Using Conventional Path to Move Data
1.2.5
Using Network Link Import to Move Data
1.3
Using Oracle Data Pump With CDBs
1.3.1
About Using Data Pump in a Multitenant Environment
1.3.2
Using Data Pump to Move Data Into a CDB
1.3.3
Using Data Pump to Move PDBs Within Or Between CDBs
1.4
Required Roles for Data Pump Export and Import Operations
1.5
What Happens During Execution of an Oracle Data Pump Job?
1.5.1
Coordination of a Job
1.5.2
Tracking Progress Within a Job
1.5.3
Filtering Data and Metadata During a Job
1.5.4
Transforming Metadata During a Job
1.5.5
Maximizing Job Performance
1.5.6
Loading and Unloading of Data
1.6
Monitoring Job Status
1.7
Monitoring the Progress of Executing Jobs
1.8
File Allocation
1.8.1
Understanding File Allocation in Data Pump
1.8.2
Specifying Files and Adding Additional Dump Files
1.8.3
Default Locations for Dump, Log, and SQL Files
1.8.3.1
Understanding Dump, Log, and SQL File Default Locations
1.8.3.2
Understanding How to Use Oracle Data Pump with Oracle RAC
1.8.3.3
Using Directory Objects When Oracle Automatic Storage Management Is Enabled
1.8.3.4
The DATA_PUMP_DIR Directory Object and Pluggable Databases
1.8.4
Using Substitution Variables
1.9
Exporting and Importing Between Different Oracle Database Releases
1.10
SecureFiles LOB Considerations
1.11
Oracle Data Pump Exit Codes
1.12
Auditing Data Pump Jobs
1.13
How Does Data Pump Handle Timestamp Data?
1.13.1
TIMESTAMP WITH TIMEZONE Restrictions
1.13.1.1
Understanding TIMESTAMP WITH TIME ZONE Restrictions
1.13.1.2
Data Pump Support for TIMESTAMP WITH TIME ZONE Data
1.13.1.3
Time Zone File Versions on the Source and Target
1.13.2
TIMESTAMP WITH LOCAL TIME ZONE Restrictions
1.14
Character Set and Globalization Support Considerations
1.14.1
Data Definition Language (DDL)
1.14.2
Single-Byte Character Sets and Export and Import
1.14.3
Multibyte Character Sets and Export and Import
1.15
Oracle Data Pump Behavior with Data-Bound Collation
2
Data Pump Export
2.1
What Is Data Pump Export?
2.2
Invoking Data Pump Export
2.2.1
Data Pump Export Interfaces
2.2.2
Data Pump Export Modes
2.2.2.1
Full Mode
2.2.2.2
Schema Mode
2.2.2.3
Table Mode
2.2.2.4
Tablespace Mode
2.2.2.5
Transportable Tablespace Mode
2.2.3
Network Considerations
2.3
Filtering During Export Operations
2.3.1
Data Filters
2.3.2
Metadata Filters
2.4
Parameters Available in Export's Command-Line Mode
2.4.1
About Data Pump Export Parameters
2.4.2
ABORT_STEP
2.4.3
ACCESS_METHOD
2.4.4
ATTACH
2.4.5
CLUSTER
2.4.6
COMPRESSION
2.4.7
COMPRESSION_ALGORITHM
2.4.8
CONTENT
2.4.9
DATA_OPTIONS
2.4.10
DIRECTORY
2.4.11
DUMPFILE
2.4.12
ENABLE_SECURE_ROLES
2.4.13
ENCRYPTION
2.4.14
ENCRYPTION_ALGORITHM
2.4.15
ENCRYPTION_MODE
2.4.16
ENCRYPTION_PASSWORD
2.4.17
ENCRYPTION_PWD_PROMPT
2.4.18
ESTIMATE
2.4.19
ESTIMATE_ONLY
2.4.20
EXCLUDE
2.4.21
FILESIZE
2.4.22
FLASHBACK_SCN
2.4.23
FLASHBACK_TIME
2.4.24
FULL
2.4.25
HELP
2.4.26
INCLUDE
2.4.27
JOB_NAME
2.4.28
KEEP_MASTER
2.4.29
LOGFILE
2.4.30
LOGTIME
2.4.31
METRICS
2.4.32
NETWORK_LINK
2.4.33
NOLOGFILE
2.4.34
PARALLEL
2.4.35
PARFILE
2.4.36
QUERY
2.4.37
REMAP_DATA
2.4.38
REUSE_DUMPFILES
2.4.39
SAMPLE
2.4.40
SCHEMAS
2.4.41
SERVICE_NAME
2.4.42
SOURCE_EDITION
2.4.43
STATUS
2.4.44
TABLES
2.4.45
TABLESPACES
2.4.46
TRANSPORT_FULL_CHECK
2.4.47
TRANSPORT_TABLESPACES
2.4.48
TRANSPORTABLE
2.4.49
TTS_CLOSURE_CHECK
2.4.50
VERSION
2.4.51
VIEWS_AS_TABLES
2.5
Commands Available in Data Pump Export Interactive-Command Mode
2.5.1
About Data Pump Export Interactive Command Mode
2.5.2
ADD_FILE
2.5.3
CONTINUE_CLIENT
2.5.4
EXIT_CLIENT
2.5.5
FILESIZE
2.5.6
HELP
2.5.7
KILL_JOB
2.5.8
PARALLEL
2.5.9
START_JOB
2.5.10
STATUS
2.5.11
STOP_JOB
2.6
Examples of Using Data Pump Export
2.6.1
Performing a Table-Mode Export
2.6.2
Data-Only Unload of Selected Tables and Rows
2.6.3
Estimating Disk Space Needed in a Table-Mode Export
2.6.4
Performing a Schema-Mode Export
2.6.5
Performing a Parallel Full Database Export
2.6.6
Using Interactive Mode to Stop and Reattach to a Job
2.7
Syntax Diagrams for Data Pump Export
3
Data Pump Import
3.1
What Is Data Pump Import?
3.2
Invoking Data Pump Import
3.2.1
Data Pump Import Interfaces
3.2.2
Data Pump Import Modes
3.2.2.1
About Data Pump Import Modes
3.2.2.2
Full Import Mode
3.2.2.3
Schema Mode
3.2.2.4
Table Mode
3.2.2.5
Tablespace Mode
3.2.2.6
Transportable Tablespace Mode
3.2.3
Network Considerations
3.3
Filtering During Import Operations
3.3.1
Data Filters
3.3.2
Metadata Filters
3.4
Parameters Available in Import's Command-Line Mode
3.4.1
About Import Command-Line Mode
3.4.2
ABORT_STEP
3.4.3
ACCESS_METHOD
3.4.4
ATTACH
3.4.5
CLUSTER
3.4.6
CONTENT
3.4.7
CREDENTIAL
3.4.8
DATA_OPTIONS
3.4.9
DIRECTORY
3.4.10
DUMPFILE
3.4.11
ENABLE_SECURE_ROLES
3.4.12
ENCRYPTION_PASSWORD
3.4.13
ENCRYPTION_PWD_PROMPT
3.4.14
ESTIMATE
3.4.15
EXCLUDE
3.4.16
FLASHBACK_SCN
3.4.17
FLASHBACK_TIME
3.4.18
FULL
3.4.19
HELP
3.4.20
INCLUDE
3.4.21
JOB_NAME
3.4.22
KEEP_MASTER
3.4.23
LOGFILE
3.4.24
LOGTIME
3.4.25
MASTER_ONLY
3.4.26
METRICS
3.4.27
NETWORK_LINK
3.4.28
NOLOGFILE
3.4.29
PARALLEL
3.4.30
PARFILE
3.4.31
PARTITION_OPTIONS
3.4.32
QUERY
3.4.33
REMAP_DATA
3.4.34
REMAP_DATAFILE
3.4.35
REMAP_DIRECTORY
3.4.36
REMAP_SCHEMA
3.4.37
REMAP_TABLE
3.4.38
REMAP_TABLESPACE
3.4.39
SCHEMAS
3.4.40
SERVICE_NAME
3.4.41
SKIP_UNUSABLE_INDEXES
3.4.42
SOURCE_EDITION
3.4.43
SQLFILE
3.4.44
STATUS
3.4.45
STREAMS_CONFIGURATION
3.4.46
TABLE_EXISTS_ACTION
3.4.47
REUSE_DATAFILES
3.4.48
TABLES
3.4.49
TABLESPACES
3.4.50
TARGET_EDITION
3.4.51
TRANSFORM
3.4.52
TRANSPORT_DATAFILES
3.4.53
TRANSPORT_FULL_CHECK
3.4.54
TRANSPORT_TABLESPACES
3.4.55
TRANSPORTABLE
3.4.56
VERSION
3.4.57
VIEWS_AS_TABLES (Network Import)
3.4.58
VIEWS_AS_TABLES (Non-Network Import)
3.5
Commands Available in Import's Interactive-Command Mode
3.5.1
About Data Pump Import Interactive Command Mode
3.5.2
CONTINUE_CLIENT
3.5.3
EXIT_CLIENT
3.5.4
HELP
3.5.5
KILL_JOB
3.5.6
PARALLEL
3.5.7
START_JOB
3.5.8
STATUS
3.5.9
STOP_JOB
3.6
Examples of Using Data Pump Import
3.6.1
Performing a Data-Only Table-Mode Import
3.6.2
Performing a Schema-Mode Import
3.6.3
Performing a Network-Mode Import
3.6.4
Using Wildcards in URL-Based Dumpfile Names
3.7
Syntax Diagrams for Data Pump Import
4
Data Pump Legacy Mode
4.1
Oracle Data Pump Legacy Mode Use Cases
4.2
Parameter Mappings
4.2.1
Using Original Export Parameters with Data Pump
4.2.2
Using Original Import Parameters with Data Pump
4.3
Management of File Locations in Data Pump Legacy Mode
4.4
Adjusting Existing Scripts for Data Pump Log Files and Errors
4.4.1
Log Files
4.4.2
Error Cases
4.4.3
Exit Status
5
Data Pump Performance
5.1
Data Performance Improvements for Data Pump Export and Import
5.2
Tuning Performance
5.2.1
Controlling Resource Consumption
5.2.2
Effect of Compression and Encryption on Performance
5.2.3
Memory Considerations When Exporting and Importing Statistics
5.3
Initialization Parameters That Affect Data Pump Performance
5.3.1
Setting the Size Of the Buffer Cache In a GoldenGate Replication Environment
5.3.2
Managing Resource Usage for Multiple User Data Pump Jobs
6
The Data Pump API
6.1
How Does the Client Interface to the Data Pump API Work?
6.1.1
Job States
6.2
What Are the Basic Steps in Using the Data Pump API?
6.3
Examples of Using the Data Pump API
Part II SQL*Loader
7
SQL*Loader Concepts
7.1
SQL*Loader Features
7.2
SQL*Loader Parameters
7.3
SQL*Loader Control File
7.4
Input Data and Data Files
7.4.1
Fixed Record Format
7.4.2
Variable Record Format
7.4.3
Stream Record Format
7.4.4
Logical Records
7.4.5
Data Fields
7.5
LOBFILEs and Secondary Data Files (SDFs)
7.6
Data Conversion and Data Type Specification
7.7
Discarded and Rejected Records
7.7.1
The Bad File
7.7.1.1
Records Rejected by SQL*Loader
7.7.1.2
Records Rejected by Oracle Database During a SQL*Loader Operation
7.7.2
The Discard File
7.8
Log File and Logging Information
7.9
Conventional Path Loads, Direct Path Loads, and External Table Loads
7.9.1
Conventional Path Loads
7.9.2
Direct Path Loads
7.9.2.1
Parallel Direct Path
7.9.3
External Table Loads
7.9.4
Choosing External Tables Versus SQL*Loader
7.9.5
Behavior Differences Between SQL*Loader and External Tables
7.9.5.1
Multiple Primary Input Data Files
7.9.5.2
Syntax and Data Types
7.9.5.3
Byte-Order Marks
7.9.5.4
Default Character Sets, Date Masks, and Decimal Separator
7.9.5.5
Use of the Backslash Escape Character
7.10
Loading Objects, Collections, and LOBs
7.10.1
Supported Object Types
7.10.1.1
column objects
7.10.1.2
row objects
7.10.2
Supported Collection Types
7.10.2.1
Nested Tables
7.10.2.2
VARRAYs
7.10.3
Supported LOB Data Types
7.11
Partitioned Object Support
7.12
Application Development: Direct Path Load API
7.13
SQL*Loader Case Studies
7.13.1
Case Study Files
7.13.2
Running the Case Studies
7.13.3
Case Study Log Files
7.13.4
Checking the Results of a Case Study
8
SQL*Loader Command-Line Reference
8.1
Invoking SQL*Loader
8.1.1
Specifying Parameters on the Command Line
8.1.2
Alternative Ways to Specify SQL*Loader Parameters
8.1.3
Using SQL*Loader to Load Data Across a Network
8.2
Command-Line Parameters for SQL*Loader
8.2.1
BAD
8.2.2
BINDSIZE
8.2.3
COLUMNARRAYROWS
8.2.4
CONTROL
8.2.5
DATA
8.2.6
DATE_CACHE
8.2.7
DEFAULTS
8.2.8
DEGREE_OF_PARALLELISM
8.2.9
DIRECT
8.2.10
DIRECT_PATH_LOCK_WAIT
8.2.11
DISCARD
8.2.12
DISCARDMAX
8.2.13
DNFS_ENABLE
8.2.14
DNFS_READBUFFERS
8.2.15
EMPTY_LOBS_ARE_NULL
8.2.16
ERRORS
8.2.17
EXTERNAL_TABLE
8.2.18
FILE
8.2.19
HELP
8.2.20
LOAD
8.2.21
LOG
8.2.22
MULTITHREADING
8.2.23
NO_INDEX_ERRORS
8.2.24
PARALLEL
8.2.25
PARFILE
8.2.26
PARTITION_MEMORY
8.2.27
READSIZE
8.2.28
RESUMABLE
8.2.29
RESUMABLE_NAME
8.2.30
RESUMABLE_TIMEOUT
8.2.31
ROWS
8.2.32
SDF_PREFIX
8.2.33
SILENT
8.2.34
SKIP
8.2.35
SKIP_INDEX_MAINTENANCE
8.2.36
SKIP_UNUSABLE_INDEXES
8.2.37
STREAMSIZE
8.2.38
TRIM
8.2.39
USERID
8.3
Exit Codes for Inspection and Display
9
SQL*Loader Control File Reference
9.1
Control File Contents
9.1.1
Comments in the Control File
9.2
Specifying Command-Line Parameters in the Control File
9.2.1
OPTIONS Clause
9.2.2
Specifying the Number of Default Expressions to Be Evaluated At One Time
9.3
Specifying File Names and Object Names
9.3.1
File Names That Conflict with SQL and SQL*Loader Reserved Words
9.3.2
Specifying SQL Strings
9.3.3
Operating System Considerations
9.3.3.1
Specifying a Complete Path
9.3.3.2
Backslash Escape Character
9.3.3.3
Nonportable Strings
9.3.3.4
Using the Backslash as an Escape Character
9.3.3.5
Escape Character Is Sometimes Disallowed
9.4
Identifying XMLType Tables
9.5
Specifying Field Order
9.6
Specifying Data Files
9.6.1
Examples of INFILE Syntax
9.6.2
Specifying Multiple Data Files
9.7
Specifying CSV Format Files
9.8
Identifying Data in the Control File with BEGINDATA
9.9
Specifying Data File Format and Buffering
9.10
Specifying the Bad File
9.10.1
Examples of Specifying a Bad File Name
9.10.2
How Bad Files Are Handled with LOBFILEs and SDFs
9.10.3
Criteria for Rejected Records
9.11
Specifying the Discard File
9.11.1
Specifying the Discard File in the Control File
9.11.1.1
Limiting the Number of Discard Records
9.11.2
Examples of Specifying a Discard File Name
9.11.3
Criteria for Discarded Records
9.11.4
How Discard Files Are Handled with LOBFILEs and SDFs
9.11.5
Specifying the Discard File from the Command Line
9.12
Specifying a NULLIF Clause At the Table Level
9.13
Specifying Datetime Formats At the Table Level
9.14
Handling Different Character Encoding Schemes
9.14.1
Multibyte (Asian) Character Sets
9.14.2
Unicode Character Sets
9.14.3
Database Character Sets
9.14.4
Data File Character Sets
9.14.5
Input Character Conversion
9.14.5.1
Considerations When Loading Data into VARRAYs or Primary-Key-Based REFs
9.14.5.2
CHARACTERSET Parameter
9.14.5.3
Control File Character Set
9.14.5.4
Character-Length Semantics
9.14.6
Shift-sensitive Character Data
9.15
Interrupted Loads
9.15.1
Discontinued Conventional Path Loads
9.15.2
Discontinued Direct Path Loads
9.15.2.1
Load Discontinued Because of Space Errors
9.15.2.2
Load Discontinued Because Maximum Number of Errors Exceeded
9.15.2.3
Load Discontinued Because of Fatal Errors
9.15.2.4
Load Discontinued Because a Ctrl+C Was Issued
9.15.3
Status of Tables and Indexes After an Interrupted Load
9.15.4
Using the Log File to Determine Load Status
9.15.5
Continuing Single-Table Loads
9.16
Assembling Logical Records from Physical Records
9.16.1
Using CONCATENATE to Assemble Logical Records
9.16.2
Using CONTINUEIF to Assemble Logical Records
9.17
Loading Logical Records into Tables
9.17.1
Specifying Table Names
9.17.1.1
INTO TABLE Clause
9.17.2
Table-Specific Loading Method
9.17.2.1
Loading Data into Empty Tables
9.17.2.1.1
INSERT
9.17.2.2
Loading Data into Nonempty Tables
9.17.2.2.1
APPEND
9.17.2.2.2
REPLACE
9.17.2.2.3
Updating Existing Rows
9.17.2.2.4
TRUNCATE
9.17.3
Table-Specific OPTIONS Parameter
9.17.4
Loading Records Based on a Condition
9.17.4.1
Using the WHEN Clause with LOBFILEs and SDFs
9.17.5
Specifying Default Data Delimiters
9.17.5.1
fields_spec
9.17.5.2
termination_spec
9.17.5.3
enclosure_spec
9.17.6
Handling Short Records with Missing Data
9.17.6.1
TRAILING NULLCOLS Clause
9.18
Index Options
9.18.1
SORTED INDEXES Clause
9.18.2
SINGLEROW Option
9.19
Benefits of Using Multiple INTO TABLE Clauses
9.19.1
Extracting Multiple Logical Records
9.19.1.1
Relative Positioning Based on Delimiters
9.19.2
Distinguishing Different Input Record Formats
9.19.2.1
Relative Positioning Based on the POSITION Parameter
9.19.3
Distinguishing Different Input Row Object Subtypes
9.19.4
Loading Data into Multiple Tables
9.19.5
Summary of Using Multiple INTO TABLE Clauses
9.20
Bind Arrays and Conventional Path Loads
9.20.1
Size Requirements for Bind Arrays
9.20.2
Performance Implications of Bind Arrays
9.20.3
Specifying Number of Rows Versus Size of Bind Array
9.20.4
Calculations to Determine Bind Array Size
9.20.4.1
Determining the Size of the Length Indicator
9.20.4.2
Calculating the Size of Field Buffers
9.20.5
Minimizing Memory Requirements for Bind Arrays
9.20.6
Calculating Bind Array Size for Multiple INTO TABLE Clauses
10
SQL*Loader Field List Reference
10.1
Field List Contents
10.2
Specifying the Position of a Data Field
10.2.1
Using POSITION with Data Containing Tabs
10.2.2
Using POSITION with Multiple Table Loads
10.2.3
Examples of Using POSITION
10.3
Specifying Columns and Fields
10.3.1
Specifying Filler Fields
10.3.2
Specifying the Data Type of a Data Field
10.4
SQL*Loader Data Types
10.4.1
Nonportable Data Types
10.4.1.1
INTEGER(n)
10.4.1.2
SMALLINT
10.4.1.3
FLOAT
10.4.1.4
DOUBLE
10.4.1.5
BYTEINT
10.4.1.6
ZONED
10.4.1.7
DECIMAL
10.4.1.8
VARGRAPHIC
10.4.1.9
VARCHAR
10.4.1.10
VARRAW
10.4.1.11
LONG VARRAW
10.4.2
Portable Data Types
10.4.2.1
CHAR
10.4.2.2
Datetime and Interval Data Types
10.4.2.2.1
DATE
10.4.2.2.2
TIME
10.4.2.2.3
TIME WITH TIME ZONE
10.4.2.2.4
TIMESTAMP
10.4.2.2.5
TIMESTAMP WITH TIME ZONE
10.4.2.2.6
TIMESTAMP WITH LOCAL TIME ZONE
10.4.2.2.7
INTERVAL YEAR TO MONTH
10.4.2.2.8
INTERVAL DAY TO SECOND
10.4.2.3
GRAPHIC
10.4.2.4
GRAPHIC EXTERNAL
10.4.2.5
Numeric EXTERNAL
10.4.2.6
RAW
10.4.2.7
VARCHARC
10.4.2.8
VARRAWC
10.4.2.9
Conflicting Native Data Type Field Lengths
10.4.2.10
Field Lengths for Length-Value Data Types
10.4.3
Data Type Conversions
10.4.4
Data Type Conversions for Datetime and Interval Data Types
10.4.5
Specifying Delimiters
10.4.5.1
Syntax for Termination and Enclosure Specification
10.4.5.2
Delimiter Marks in the Data
10.4.5.3
Maximum Length of Delimited Data
10.4.5.4
Loading Trailing Blanks with Delimiters
10.4.6
How Delimited Data Is Processed
10.4.6.1
Fields Using Only TERMINATED BY
10.4.6.2
Fields Using ENCLOSED BY Without TERMINATED BY
10.4.6.3
Fields Using ENCLOSED BY With TERMINATED BY
10.4.6.4
Fields Using OPTIONALLY ENCLOSED BY With TERMINATED BY
10.4.7
Conflicting Field Lengths for Character Data Types
10.4.7.1
Predetermined Size Fields
10.4.7.2
Delimited Fields
10.4.7.3
Date Field Masks
10.5
Specifying Field Conditions
10.5.1
Comparing Fields to BLANKS
10.5.2
Comparing Fields to Literals
10.6
Using the WHEN, NULLIF, and DEFAULTIF Clauses
10.7
Examples of Using the WHEN, NULLIF, and DEFAULTIF Clauses
10.8
Loading Data Across Different Platforms
10.9
Byte Ordering
10.9.1
Specifying Byte Order
10.9.2
Using Byte Order Marks (BOMs)
10.9.2.1
Suppressing Checks for BOMs
10.10
Loading All-Blank Fields
10.11
Trimming Whitespace
10.11.1
Data Types for Which Whitespace Can Be Trimmed
10.11.2
Specifying Field Length for Data Types for Which Whitespace Can Be Trimmed
10.11.2.1
Predetermined Size Fields
10.11.2.2
Delimited Fields
10.11.3
Relative Positioning of Fields
10.11.3.1
No Start Position Specified for a Field
10.11.3.2
Previous Field Terminated by a Delimiter
10.11.3.3
Previous Field Has Both Enclosure and Termination Delimiters
10.11.4
Leading Whitespace
10.11.4.1
Previous Field Terminated by Whitespace
10.11.4.2
Optional Enclosure Delimiters
10.11.5
Trimming Trailing Whitespace
10.11.6
Trimming Enclosed Fields
10.12
How the PRESERVE BLANKS Option Affects Whitespace Trimming
10.13
How [NO] PRESERVE BLANKS Works with Delimiter Clauses
10.14
Applying SQL Operators to Fields
10.14.1
Referencing Fields
10.14.2
Common Uses of SQL Operators in Field Specifications
10.14.3
Combinations of SQL Operators
10.14.4
Using SQL Strings with a Date Mask
10.14.5
Interpreting Formatted Fields
10.14.6
Using SQL Strings to Load the ANYDATA Database Type
10.15
Using SQL*Loader to Generate Data for Input
10.15.1
Loading Data Without Files
10.15.2
Setting a Column to a Constant Value
10.15.2.1
CONSTANT Parameter
10.15.3
Setting a Column to an Expression Value
10.15.3.1
EXPRESSION Parameter
10.15.4
Setting a Column to the Data File Record Number
10.15.4.1
RECNUM Parameter
10.15.5
Setting a Column to the Current Date
10.15.5.1
SYSDATE Parameter
10.15.6
Setting a Column to a Unique Sequence Number
10.15.6.1
SEQUENCE Parameter
10.15.7
Generating Sequence Numbers for Multiple Tables
10.15.7.1
Example: Generating Different Sequence Numbers for Each Insert
11
Loading Objects, LOBs, and Collections
11.1
Loading Column Objects
11.1.1
Loading Column Objects in Stream Record Format
11.1.2
Loading Column Objects in Variable Record Format
11.1.3
Loading Nested Column Objects
11.1.4
Loading Column Objects with a Derived Subtype
11.1.5
Specifying Null Values for Objects
11.1.5.1
Specifying Attribute Nulls
11.1.5.2
Specifying Atomic Nulls
11.1.6
Loading Column Objects with User-Defined Constructors
11.2
Loading Object Tables
11.2.1
Loading Object Tables with a Subtype
11.3
Loading REF Columns
11.3.1
Specifying Table Names in a REF Clause
11.3.2
System-Generated OID REF Columns
11.3.3
Primary Key REF Columns
11.3.4
Unscoped REF Columns That Allow Primary Keys
11.4
Loading LOBs
11.4.1
Loading LOB Data from a Primary Data File
11.4.1.1
LOB Data in Predetermined Size Fields
11.4.1.2
LOB Data in Delimited Fields
11.4.1.3
LOB Data in Length-Value Pair Fields
11.4.2
Loading LOB Data from LOBFILEs
11.4.2.1
Dynamic Versus Static LOBFILE Specifications
11.4.2.2
Examples of Loading LOB Data from LOBFILEs
11.4.2.2.1
One LOB per File
11.4.2.2.2
Predetermined Size LOBs
11.4.2.2.3
Delimited LOBs
11.4.2.2.4
Length-Value Pair Specified LOBs
11.4.2.3
Considerations When Loading LOBs from LOBFILEs
11.4.3
Loading Data Files that Contain LLS Fields
11.5
Loading BFILE Columns
11.6
Loading Collections (Nested Tables and VARRAYs)
11.6.1
Restrictions in Nested Tables and VARRAYs
11.6.2
Secondary Data Files (SDFs)
11.7
Dynamic Versus Static SDF Specifications
11.8
Loading a Parent Table Separately from Its Child Table
11.8.1
Memory Issues When Loading VARRAY Columns
12
Conventional and Direct Path Loads
12.1
Data Loading Methods
12.1.1
Loading ROWID Columns
12.2
Conventional Path Load
12.2.1
Conventional Path Load of a Single Partition
12.2.2
When to Use a Conventional Path Load
12.3
Direct Path Load
12.3.1
Data Conversion During Direct Path Loads
12.3.2
Direct Path Load of a Partitioned or Subpartitioned Table
12.3.3
Direct Path Load of a Single Partition or Subpartition
12.3.4
Advantages of a Direct Path Load
12.3.5
Restrictions on Using Direct Path Loads
12.3.6
Restrictions on a Direct Path Load of a Single Partition
12.3.7
When to Use a Direct Path Load
12.3.8
Integrity Constraints
12.3.9
Field Defaults on the Direct Path
12.3.10
Loading into Synonyms
12.4
Using Direct Path Load
12.4.1
Setting Up for Direct Path Loads
12.4.2
Specifying a Direct Path Load
12.4.3
Building Indexes
12.4.3.1
Improving Performance
12.4.3.2
Temporary Segment Storage Requirements
12.4.4
Indexes Left in an Unusable State
12.4.5
Using Data Saves to Protect Against Data Loss
12.4.5.1
Using the ROWS Parameter
12.4.5.2
Data Save Versus Commit
12.4.6
Data Recovery During Direct Path Loads
12.4.6.1
Media Recovery and Direct Path Loads
12.4.6.2
Instance Recovery and Direct Path Loads
12.4.7
Loading Long Data Fields
12.4.7.1
Loading Data As PIECED
12.4.8
Auditing SQL*Loader Operations That Use Direct Path Mode
12.5
Optimizing Performance of Direct Path Loads
12.5.1
Preallocating Storage for Faster Loading
12.5.2
Presorting Data for Faster Indexing
12.5.2.1
SORTED INDEXES Clause
12.5.2.2
Unsorted Data
12.5.2.3
Multiple-Column Indexes
12.5.2.4
Choosing the Best Sort Order
12.5.3
Infrequent Data Saves
12.5.4
Minimizing Use of the Redo Log
12.5.4.1
Disabling Archiving
12.5.4.2
Specifying the SQL*Loader UNRECOVERABLE Clause
12.5.4.3
Setting the SQL NOLOGGING Parameter
12.5.5
Specifying the Number of Column Array Rows and Size of Stream Buffers
12.5.6
Specifying a Value for DATE_CACHE
12.6
Optimizing Direct Path Loads on Multiple-CPU Systems
12.7
Avoiding Index Maintenance
12.8
Direct Path Loads, Integrity Constraints, and Triggers
12.8.1
Integrity Constraints
12.8.1.1
Enabled Constraints
12.8.1.2
Disabled Constraints
12.8.1.3
Reenable Constraints
12.8.2
Database Insert Triggers
12.8.2.1
Replacing Insert Triggers with Integrity Constraints
12.8.2.2
When Automatic Constraints Cannot Be Used
12.8.2.3
Preparation
12.8.2.4
Using an Update Trigger
12.8.2.5
Duplicating the Effects of Exception Conditions
12.8.2.6
Using a Stored Procedure
12.8.3
Permanently Disabled Triggers and Constraints
12.8.4
Increasing Performance with Concurrent Conventional Path Loads
12.9
Parallel Data Loading Models
12.9.1
Concurrent Conventional Path Loads
12.9.2
Intersegment Concurrency with Direct Path
12.9.3
Intrasegment Concurrency with Direct Path
12.9.4
Restrictions on Parallel Direct Path Loads
12.9.5
Initiating Multiple SQL*Loader Sessions
12.9.6
Parameters for Parallel Direct Path Loads
12.9.6.1
Using the FILE Parameter to Specify Temporary Segments
12.9.6.1.1
Using the FILE Parameter
12.9.6.1.2
Using the STORAGE Parameter
12.9.7
Enabling Constraints After a Parallel Direct Path Load
12.9.8
PRIMARY KEY and UNIQUE KEY Constraints
12.10
General Performance Improvement Hints
13
SQL*Loader Express
13.1
What is SQL*Loader Express Mode?
13.2
Using SQL*Loader Express Mode
13.2.1
Default Values Used by SQL*Loader Express Mode
13.2.1.1
How SQL*Loader Express Mode Handles Byte Order
13.3
SQL*Loader Express Mode Parameter Reference
13.3.1
BAD
13.3.2
CHARACTERSET
13.3.3
CSV
13.3.4
DATA
13.3.5
DATE_FORMAT
13.3.6
DEGREE_OF_PARALLELISM
13.3.7
DIRECT
13.3.8
DNFS_ENABLE
13.3.9
DNFS_READBUFFERS
13.3.10
ENCLOSED_BY
13.3.11
EXTERNAL_TABLE
13.3.12
FIELD_NAMES
13.3.13
LOAD
13.3.14
NULLIF
13.3.15
OPTIONALLY_ENCLOSED_BY
13.3.16
PARFILE
13.3.17
SILENT
13.3.18
TABLE
13.3.19
TERMINATED_BY
13.3.20
TIMESTAMP_FORMAT
13.3.21
TRIM
13.3.22
USERID
13.4
SQL*Loader Express Mode Syntax Diagrams
Part III External Tables
14
External Tables Concepts
14.1
How Are External Tables Created?
14.2
Location of Data Files and Output Files
14.3
Access Parameters for External Tables
14.4
Data Type Conversion During External Table Use
15
The ORACLE_LOADER Access Driver
15.1
access_parameters Clause
15.2
record_format_info Clause
15.2.1
FIXED length
15.2.2
VARIABLE size
15.2.3
DELIMITED BY
15.2.4
XMLTAG
15.2.5
CHARACTERSET
15.2.6
EXTERNAL VARIABLE DATA
15.2.7
PREPROCESSOR
15.2.7.1
Using Parallel Processing with the PREPROCESSOR Clause
15.2.7.2
Restrictions When Using the PREPROCESSOR Clause
15.2.8
LANGUAGE
15.2.9
TERRITORY
15.2.10
DATA IS...ENDIAN
15.2.11
BYTEORDERMARK (CHECK | NOCHECK)
15.2.12
STRING SIZES ARE IN
15.2.13
LOAD WHEN
15.2.14
BADFILE | NOBADFILE
15.2.15
DISCARDFILE | NODISCARDFILE
15.2.16
LOGFILE | NOLOGFILE
15.2.17
SKIP
15.2.18
FIELD NAMES
15.2.19
READSIZE
15.2.20
DISABLE_DIRECTORY_LINK_CHECK
15.2.21
DATE_CACHE
15.2.22
string
15.2.23
condition_spec
15.2.24
[directory object name:] [filename]
15.2.25
condition
15.2.25.1
range start : range end
15.2.26
IO_OPTIONS clause
15.2.27
DNFS_DISABLE | DNFS_ENABLE
15.2.28
DNFS_READBUFFERS
15.3
field_definitions Clause
15.3.1
delim_spec
15.3.1.1
Example: External Table with Terminating Delimiters
15.3.1.2
Example: External Table with Enclosure and Terminator Delimiters
15.3.1.3
Example: External Table with Optional Enclosure Delimiters
15.3.2
trim_spec
15.3.3
MISSING FIELD VALUES ARE NULL
15.3.4
field_list
15.3.5
pos_spec Clause
15.3.5.1
start
15.3.5.2
*
15.3.5.3
increment
15.3.5.4
end
15.3.5.5
length
15.3.6
datatype_spec Clause
15.3.6.1
[UNSIGNED] INTEGER [EXTERNAL] [(len)]
15.3.6.2
DECIMAL [EXTERNAL] and ZONED [EXTERNAL]
15.3.6.3
ORACLE_DATE
15.3.6.4
ORACLE_NUMBER
15.3.6.5
Floating-Point Numbers
15.3.6.6
DOUBLE
15.3.6.7
FLOAT [EXTERNAL]
15.3.6.8
BINARY_DOUBLE
15.3.6.9
BINARY_FLOAT
15.3.6.10
RAW
15.3.6.11
CHAR
15.3.6.12
date_format_spec
15.3.6.12.1
DATE
15.3.6.12.2
MASK
15.3.6.12.3
TIMESTAMP
15.3.6.12.4
INTERVAL
15.3.6.13
VARCHAR and VARRAW
15.3.6.14
VARCHARC and VARRAWC
15.3.7
init_spec Clause
15.3.8
LLS Clause
15.4
column_transforms Clause
15.4.1
transform
15.4.1.1
column_name FROM
15.4.1.2
NULL
15.4.1.3
CONSTANT
15.4.1.4
CONCAT
15.4.1.5
LOBFILE
15.4.1.6
lobfile_attr_list
15.4.1.7
STARTOF source_field (length)
15.5
Parallel Loading Considerations for the ORACLE_LOADER Access Driver
15.6
Performance Hints When Using the ORACLE_LOADER Access Driver
15.7
Restrictions When Using the ORACLE_LOADER Access Driver
15.8
Reserved Words for the ORACLE_LOADER Access Driver
16
The ORACLE_DATAPUMP Access Driver
16.1
Using the ORACLE_DATAPUMP Access Driver
16.2
access_parameters Clause
16.2.1
comments
16.2.2
COMPRESSION
16.2.3
ENCRYPTION
16.2.4
LOGFILE | NOLOGFILE
16.2.4.1
Log File Naming in Parallel Loads
16.2.5
VERSION Clause
16.2.6
Effects of Using the SQL ENCRYPT Clause
16.3
Unloading and Loading Data with the ORACLE_DATAPUMP Access Driver
16.3.1
Parallel Loading and Unloading
16.3.2
Combining Dump Files
16.4
Supported Data Types
16.5
Unsupported Data Types
16.5.1
Unloading and Loading BFILE Data Types
16.5.2
Unloading LONG and LONG RAW Data Types
16.5.3
Unloading and Loading Columns Containing Final Object Types
16.5.4
Tables of Final Object Types
16.6
Performance Hints When Using the ORACLE_DATAPUMP Access Driver
16.7
Restrictions When Using the ORACLE_DATAPUMP Access Driver
16.8
Reserved Words for the ORACLE_DATAPUMP Access Driver
17
ORACLE_HDFS and ORACLE_HIVE Access Drivers
17.1
Syntax Rules for Specifying Properties
17.2
ORACLE_HDFS Access Parameters
17.2.1
Default Parameter Settings for ORACLE_HDFS
17.2.2
Optional Parameter Settings for ORACLE_HDFS
17.3
ORACLE_HIVE Access Parameters
17.3.1
Default Parameter Settings for ORACLE_HIVE
17.3.2
Optional Parameter Settings for ORACLE_HIVE
17.4
Descriptions of com.oracle.bigdata Parameters
17.4.1
com.oracle.bigdata.colmap
17.4.2
com.oracle.bigdata.datamode
17.4.3
com.oracle.bigdata.erroropt
17.4.4
com.oracle.bigdata.fields
17.4.5
com.oracle.bigdata.fileformat
17.4.6
com.oracle.bigdata.log.exec
17.4.7
com.oracle.bigdata.log.qc
17.4.8
com.oracle.bigdata.overflow
17.4.9
com.oracle.bigdata.rowformat
17.4.10
com.oracle.bigdata.tablename
18
External Tables Examples
18.1
Using the ORACLE_LOADER Access Driver to Create Partitioned External Tables
18.2
Using the ORACLE_LOADER Access Driver to Create Partitioned Hybrid Tables
18.3
Using the ORACLE_DATAPUMP Access Driver to Create Partitioned External Tables
18.4
Using the ORACLE_HDFS Access Driver to Create Partitioned External Tables
18.5
Using the ORACLE_HIVE Access Driver to Create Partitioned External Tables
18.6
Loading LOBs From External Tables
18.7
Loading CSV Files From External Tables
Part IV Other Utilities
19
ADRCI: ADR Command Interpreter
19.1
About the ADR Command Interpreter (ADRCI) Utility
19.2
Definitions
19.3
Starting ADRCI and Getting Help
19.3.1
Using ADRCI in Interactive Mode
19.3.2
Getting Help
19.3.3
Using ADRCI in Batch Mode
19.4
Setting the ADRCI Homepath Before Using ADRCI Commands
19.5
Viewing the Alert Log
19.6
Finding Trace Files
19.7
Viewing Incidents
19.8
Packaging Incidents
19.8.1
About Packaging Incidents
19.8.2
Creating Incident Packages
19.8.2.1
Creating a Logical Incident Package
19.8.2.2
Adding Diagnostic Information to a Logical Incident Package
19.8.2.3
Generating a Physical Incident Package
19.9
ADRCI Command Reference
19.9.1
CREATE REPORT
19.9.2
ECHO
19.9.3
EXIT
19.9.4
HOST
19.9.5
IPS
19.9.5.1
Using the <ADR_HOME> and <ADR_BASE> Variables in IPS Commands
19.9.5.2
IPS ADD
19.9.5.3
IPS ADD FILE
19.9.5.4
IPS ADD NEW INCIDENTS
19.9.5.5
IPS COPY IN FILE
19.9.5.6
IPS COPY OUT FILE
19.9.5.7
IPS CREATE PACKAGE
19.9.5.8
IPS DELETE PACKAGE
19.9.5.9
IPS FINALIZE
19.9.5.10
IPS GENERATE PACKAGE
19.9.5.11
IPS GET MANIFEST
19.9.5.12
IPS GET METADATA
19.9.5.13
IPS PACK
19.9.5.14
IPS REMOVE
19.9.5.15
IPS REMOVE FILE
19.9.5.16
IPS SET CONFIGURATION
19.9.5.17
IPS SHOW CONFIGURATION
19.9.5.18
IPS SHOW FILES
19.9.5.19
IPS SHOW INCIDENTS
19.9.5.20
IPS SHOW PACKAGE
19.9.5.21
IPS UNPACK FILE
19.9.6
PURGE
19.9.7
QUIT
19.9.8
RUN
19.9.9
SELECT
19.9.9.1
AVG
19.9.9.2
CONCAT
19.9.9.3
COUNT
19.9.9.4
DECODE
19.9.9.5
LENGTH
19.9.9.6
MAX
19.9.9.7
MIN
19.9.9.8
NVL
19.9.9.9
REGEXP_LIKE
19.9.9.10
SUBSTR
19.9.9.11
SUM
19.9.9.12
TIMESTAMP_TO_CHAR
19.9.9.13
TOLOWER
19.9.9.14
TOUPPER
19.9.10
SET BASE
19.9.11
SET BROWSER
19.9.12
SET CONTROL
19.9.13
SET ECHO
19.9.14
SET EDITOR
19.9.15
SET HOMEPATH
19.9.16
SET TERMOUT
19.9.17
SHOW ALERT
19.9.18
SHOW BASE
19.9.19
SHOW CONTROL
19.9.20
SHOW HM_RUN
19.9.21
SHOW HOMEPATH
19.9.22
SHOW HOMES
19.9.23
SHOW INCDIR
19.9.24
SHOW INCIDENT
19.9.25
SHOW LOG
19.9.26
SHOW PROBLEM
19.9.27
SHOW REPORT
19.9.28
SHOW TRACEFILE
19.9.29
SPOOL
19.10
Troubleshooting ADRCI
20
DBVERIFY: Offline Database Verification Utility
20.1
Using DBVERIFY to Validate Disk Blocks of a Single Data File
20.1.1
DBVERIFY Syntax When Validating Blocks of a Single File
20.1.2
DBVERIFY Parameters When Validating Blocks of a Single File
20.1.3
Sample DBVERIFY Output For a Single Data File
20.2
Using DBVERIFY to Validate a Segment
20.2.1
DBVERIFY Syntax When Validating a Segment
20.2.2
DBVERIFY Parameters When Validating a Single Segment
20.2.3
Sample DBVERIFY Output For a Validated Segment
21
DBNEWID Utility
21.1
What Is the DBNEWID Utility?
21.2
Ramifications of Changing the DBID and DBNAME
21.2.1
Considerations for Global Database Names
21.3
DBNEWID Considerations for CDBs and PDBs
21.4
Changing the DBID and DBNAME of a Database
21.4.1
Changing the DBID and Database Name
21.4.2
Changing Only the Database ID
21.4.3
Changing Only the Database Name
21.4.4
Troubleshooting DBNEWID
21.5
DBNEWID Syntax
21.5.1
DBNEWID Parameters
21.5.2
Restrictions and Usage Notes
21.5.3
Additional Restrictions for Releases Earlier Than Oracle Database 10g
22
Using LogMiner to Analyze Redo Log Files
22.1
LogMiner Benefits
22.2
Introduction to LogMiner
22.2.1
LogMiner Configuration
22.2.1.1
Sample LogMiner Configuration
22.2.1.2
Requirements
22.2.2
Directing LogMiner Operations and Retrieving Data of Interest
22.3
Using LogMiner in a CDB
22.3.1
LogMiner V$ Views and DBA Views in a CDB
22.3.2
The V$LOGMNR_CONTENTS View in a CDB
22.3.3
Enabling Supplemental Logging in a CDB
22.3.4
Using a Flat File Dictionary in a CDB
22.4
LogMiner Dictionary Files and Redo Log Files
22.4.1
LogMiner Dictionary Options
22.4.1.1
Using the Online Catalog
22.4.1.2
Extracting a LogMiner Dictionary to the Redo Log Files
22.4.1.3
Extracting the LogMiner Dictionary to a Flat File
22.4.2
Redo Log File Options
22.5
Starting LogMiner
22.6
Querying V$LOGMNR_CONTENTS for Redo Data of Interest
22.6.1
How the V$LOGMNR_CONTENTS View Is Populated
22.6.2
Querying V$LOGMNR_CONTENTS Based on Column Values
22.6.2.1
The Meaning of NULL Values Returned by the MINE_VALUE Function
22.6.2.2
Usage Rules for the MINE_VALUE and COLUMN_PRESENT Functions
22.6.2.3
Restrictions When Using the MINE_VALUE Function To Get an NCHAR Value
22.6.3
Querying V$LOGMNR_CONTENTS Based on XMLType Columns and Tables
22.6.3.1
Restrictions When Using LogMiner With XMLType Data
22.6.3.2
Example of a PL/SQL Procedure for Assembling XMLType Data
22.7
Filtering and Formatting Data Returned to V$LOGMNR_CONTENTS
22.7.1
Showing Only Committed Transactions
22.7.2
Skipping Redo Corruptions
22.7.3
Filtering Data by Time
22.7.4
Filtering Data by SCN
22.7.5
Formatting Reconstructed SQL Statements for Re-execution
22.7.6
Formatting the Appearance of Returned Data for Readability
22.8
Reapplying DDL Statements Returned to V$LOGMNR_CONTENTS
22.9
Calling DBMS_LOGMNR.START_LOGMNR Multiple Times
22.10
Supplemental Logging
22.10.1
Database-Level Supplemental Logging
22.10.1.1
Minimal Supplemental Logging
22.10.1.2
Database-Level Identification Key Logging
22.10.1.3
Procedural Supplemental Logging
22.10.2
Disabling Database-Level Supplemental Logging
22.10.3
Table-Level Supplemental Logging
22.10.3.1
Table-Level Identification Key Logging
22.10.3.2
Table-Level User-Defined Supplemental Log Groups
22.10.3.3
Usage Notes for User-Defined Supplemental Log Groups
22.10.4
Tracking DDL Statements in the LogMiner Dictionary
22.10.5
DDL_DICT_TRACKING and Supplemental Logging Settings
22.10.6
DDL_DICT_TRACKING and Specified Time or SCN Ranges
22.11
Accessing LogMiner Operational Information in Views
22.11.1
Querying V$LOGMNR_LOGS
22.11.2
Querying Views for Supplemental Logging Settings
22.12
Steps in a Typical LogMiner Session
22.12.1
Typical LogMiner Session Task 1: Enable Supplemental Logging
22.12.2
Typical LogMiner Session Task 2: Extract a LogMiner Dictionary
22.12.3
Typical LogMiner Session Task 3: Specify Redo Log Files for Analysis
22.12.4
Typical LogMiner Session Task 4: Start LogMiner
22.12.5
Typical LogMiner Session Task 5: Query V$LOGMNR_CONTENTS
22.12.6
Typical LogMiner Session Task 6: End the LogMiner Session
22.13
Examples Using LogMiner
22.13.1
Examples of Mining by Explicitly Specifying the Redo Log Files of Interest
22.13.1.1
Example 1: Finding All Modifications in the Last Archived Redo Log File
22.13.1.2
Example 2: Grouping DML Statements into Committed Transactions
22.13.1.3
Example 3: Formatting the Reconstructed SQL
22.13.1.4
Example 4: Using the LogMiner Dictionary in the Redo Log Files
22.13.1.5
Example 5: Tracking DDL Statements in the Internal Dictionary
22.13.1.6
Example 6: Filtering Output by Time Range
22.13.2
Examples of Mining Without Specifying the List of Redo Log Files Explicitly
22.13.2.1
Example 1: Mining Redo Log Files in a Given Time Range
22.13.2.2
Example 2: Mining the Redo Log Files in a Given SCN Range
22.13.2.3
Example 3: Using Continuous Mining to Include Future Values in a Query
22.13.3
Example Scenarios
22.13.3.1
Scenario 1: Using LogMiner to Track Changes Made by a Specific User
22.13.3.2
Scenario 2: Using LogMiner to Calculate Table Access Statistics
22.14
Supported Data Types, Storage Attributes, and Database and Redo Log File Versions
22.14.1
Supported Data Types and Table Storage Attributes
22.14.1.1
Compatibility Requirements
22.14.2
Unsupported Data Types and Table Storage Attributes
22.14.3
Supported Databases and Redo Log File Versions
22.14.4
SecureFiles LOB Considerations
23
Using the Metadata APIs
23.1
Why Use the DBMS_METADATA API?
23.2
Overview of the DBMS_METADATA API
23.3
Using the DBMS_METADATA API to Retrieve an Object's Metadata
23.3.1
Typical Steps Used for Basic Metadata Retrieval
23.3.2
Retrieving Multiple Objects
23.3.3
Placing Conditions on Transforms
23.3.4
Accessing Specific Metadata Attributes
23.4
Using the DBMS_METADATA API to Re-Create a Retrieved Object
23.5
Using the DBMS_METADATA API to Retrieve Collections of Different Object Types
23.5.1
Filtering the Return of Heterogeneous Object Types
23.6
Using the DBMS_METADATA_DIFF API to Compare Object Metadata
23.7
Performance Tips for the Programmatic Interface of the DBMS_METADATA API
23.8
Example Usage of the DBMS_METADATA API
23.8.1
What Does the DBMS_METADATA Example Do?
23.8.2
Output Generated from the GET_PAYROLL_TABLES Procedure
23.9
Summary of DBMS_METADATA Procedures
23.10
Summary of DBMS_METADATA_DIFF Procedures
24
Original Export
24.1
What is the Export Utility?
24.2
Before Using Export
24.2.1
Running catexp.sql or catalog.sql
24.2.2
Ensuring Sufficient Disk Space for Export Operations
24.2.3
Verifying Access Privileges for Export and Import Operations
24.3
Invoking Export
24.3.1
Invoking Export as SYSDBA
24.3.2
Command-Line Entries
24.3.3
Parameter Files
24.3.4
Interactive Mode
24.3.4.1
Restrictions When Using Export's Interactive Method
24.3.5
Getting Online Help
24.4
Export Modes
24.4.1
Table-Level and Partition-Level Export
24.4.1.1
Table-Level Export
24.4.1.2
Partition-Level Export
24.5
Export Parameters
24.5.1
BUFFER
24.5.1.1
Example: Calculating Buffer Size
24.5.2
COMPRESS
24.5.3
CONSISTENT
24.5.4
CONSTRAINTS
24.5.5
DIRECT
24.5.6
FEEDBACK
24.5.7
FILE
24.5.8
FILESIZE
24.5.9
FLASHBACK_SCN
24.5.10
FLASHBACK_TIME
24.5.11
FULL
24.5.11.1
Points to Consider for Full Database Exports and Imports
24.5.12
GRANTS
24.5.13
HELP
24.5.14
INDEXES
24.5.15
LOG
24.5.16
OBJECT_CONSISTENT
24.5.17
OWNER
24.5.18
PARFILE
24.5.19
QUERY
24.5.19.1
Restrictions When Using the QUERY Parameter
24.5.20
RECORDLENGTH
24.5.21
RESUMABLE
24.5.22
RESUMABLE_NAME
24.5.23
RESUMABLE_TIMEOUT
24.5.24
ROWS
24.5.25
STATISTICS
24.5.26
TABLES
24.5.26.1
Table Name Restrictions
24.5.27
TABLESPACES
24.5.28
TRANSPORT_TABLESPACE
24.5.29
TRIGGERS
24.5.30
TTS_FULL_CHECK
24.5.31
USERID (username/password)
24.5.32
VOLSIZE
24.6
Example Export Sessions
24.6.1
Example Export Session in Full Database Mode
24.6.2
Example Export Session in User Mode
24.6.3
Example Export Sessions in Table Mode
24.6.3.1
Example 1: DBA Exporting Tables for Two Users
24.6.3.2
Example 2: User Exports Tables That He Owns
24.6.3.3
Example 3: Using Pattern Matching to Export Various Tables
24.6.4
Example Export Session Using Partition-Level Export
24.6.4.1
Example 1: Exporting a Table Without Specifying a Partition
24.6.4.2
Example 2: Exporting a Table with a Specified Partition
24.6.4.3
Example 3: Exporting a Composite Partition
24.7
Warning, Error, and Completion Messages
24.7.1
Log File
24.7.2
Warning Messages
24.7.3
Nonrecoverable Error Messages
24.7.4
Completion Messages
24.8
Exit Codes for Inspection and Display
24.9
Conventional Path Export Versus Direct Path Export
24.10
Invoking a Direct Path Export
24.10.1
Security Considerations for Direct Path Exports
24.10.2
Performance Considerations for Direct Path Exports
24.10.3
Restrictions for Direct Path Exports
24.11
Network Considerations
24.11.1
Transporting Export Files Across a Network
24.11.2
Exporting with Oracle Net
24.12
Character Set and Globalization Support Considerations
24.12.1
User Data
24.12.1.1
Effect of Character Set Sorting Order on Conversions
24.12.2
Data Definition Language (DDL)
24.12.3
Single-Byte Character Sets and Export and Import
24.12.4
Multibyte Character Sets and Export and Import
24.13
Using Instance Affinity with Export and Import
24.14
Considerations When Exporting Database Objects
24.14.1
Exporting Sequences
24.14.2
Exporting LONG and LOB Data Types
24.14.3
Exporting Foreign Function Libraries
24.14.4
Exporting Offline Locally-Managed Tablespaces
24.14.5
Exporting Directory Aliases
24.14.6
Exporting BFILE Columns and Attributes
24.14.7
Exporting External Tables
24.14.8
Exporting Object Type Definitions
24.14.9
Exporting Nested Tables
24.14.10
Exporting Advanced Queue (AQ) Tables
24.14.11
Exporting Synonyms
24.14.12
Possible Export Errors Related to Java Synonyms
24.14.13
Support for Fine-Grained Access Control
24.15
Transportable Tablespaces
24.16
Exporting From a Read-Only Database
24.17
Using Export and Import to Partition a Database Migration
24.17.1
Advantages of Partitioning a Migration
24.17.2
Disadvantages of Partitioning a Migration
24.17.3
How to Use Export and Import to Partition a Database Migration
24.18
Using Different Releases of Export and Import
24.18.1
Restrictions When Using Different Releases of Export and Import
24.18.2
Examples of Using Different Releases of Export and Import
25
Original Import
25.1
What Is the Import Utility?
25.1.1
Table Objects: Order of Import
25.2
Before Using Import
25.2.1
Running catexp.sql or catalog.sql
25.2.2
Verifying Access Privileges for Import Operations
25.2.2.1
Importing Objects Into Your Own Schema
25.2.2.2
Importing Grants
25.2.2.3
Importing Objects Into Other Schemas
25.2.2.4
Importing System Objects
25.2.3
Processing Restrictions
25.3
Importing into Existing Tables
25.3.1
Manually Creating Tables Before Importing Data
25.3.2
Disabling Referential Constraints
25.3.3
Manually Ordering the Import
25.4
Effect of Schema and Database Triggers on Import Operations
25.5
Invoking Import
25.5.1
Command-Line Entries
25.5.2
Parameter Files
25.5.3
Interactive Mode
25.5.4
Invoking Import As SYSDBA
25.5.5
Getting Online Help
25.6
Import Modes
25.7
Import Parameters
25.7.1
BUFFER
25.7.2
COMMIT
25.7.3
COMPILE
25.7.4
CONSTRAINTS
25.7.5
DATA_ONLY
25.7.6
DATAFILES
25.7.7
DESTROY
25.7.8
FEEDBACK
25.7.9
FILE
25.7.10
FILESIZE
25.7.11
FROMUSER
25.7.12
FULL
25.7.12.1
Points to Consider for Full Database Exports and Imports
25.7.13
GRANTS
25.7.14
HELP
25.7.15
IGNORE
25.7.16
INDEXES
25.7.17
INDEXFILE
25.7.18
LOG
25.7.19
PARFILE
25.7.20
RECORDLENGTH
25.7.21
RESUMABLE
25.7.22
RESUMABLE_NAME
25.7.23
RESUMABLE_TIMEOUT
25.7.24
ROWS
25.7.25
SHOW
25.7.26
SKIP_UNUSABLE_INDEXES
25.7.27
STATISTICS
25.7.28
STREAMS_CONFIGURATION
25.7.29
STREAMS_INSTANTIATION
25.7.30
TABLES
25.7.30.1
Table Name Restrictions
25.7.31
TABLESPACES
25.7.32
TOID_NOVALIDATE
25.7.33
TOUSER
25.7.34
TRANSPORT_TABLESPACE
25.7.35
TTS_OWNERS
25.7.36
USERID (username/password)
25.7.37
VOLSIZE
25.8
Example Import Sessions
25.8.1
Example Import of Selected Tables for a Specific User
25.8.2
Example Import of Tables Exported by Another User
25.8.3
Example Import of Tables from One User to Another
25.8.4
Example Import Session Using Partition-Level Import
25.8.4.1
Example 1: A Partition-Level Import
25.8.4.2
Example 2: A Partition-Level Import of a Composite Partitioned Table
25.8.4.3
Example 3: Repartitioning a Table on a Different Column
25.8.5
Example Import Using Pattern Matching to Import Various Tables
25.9
Exit Codes for Inspection and Display
25.10
Error Handling During an Import
25.10.1
Row Errors
25.10.1.1
Failed Integrity Constraints
25.10.1.2
Invalid Data
25.10.2
Errors Importing Database Objects
25.10.2.1
Object Already Exists
25.10.2.2
Sequences
25.10.2.3
Resource Errors
25.10.2.4
Domain Index Metadata
25.11
Table-Level and Partition-Level Import
25.11.1
Guidelines for Using Table-Level Import
25.11.2
Guidelines for Using Partition-Level Import
25.11.3
Migrating Data Across Partitions and Tables
25.12
Controlling Index Creation and Maintenance
25.12.1
Delaying Index Creation
25.12.2
Index Creation and Maintenance Controls
25.12.2.1
Example of Postponing Index Maintenance
25.13
Network Considerations
25.14
Character Set and Globalization Support Considerations
25.14.1
User Data
25.14.1.1
Effect of Character Set Sorting Order on Conversions
25.14.2
Data Definition Language (DDL)
25.14.3
Single-Byte Character Sets
25.14.4
Multibyte Character Sets
25.15
Using Instance Affinity
25.16
Considerations When Importing Database Objects
25.16.1
Importing Object Identifiers
25.16.2
Importing Existing Object Tables and Tables That Contain Object Types
25.16.3
Importing Nested Tables
25.16.4
Importing REF Data
25.16.5
Importing BFILE Columns and Directory Aliases
25.16.6
Importing Foreign Function Libraries
25.16.7
Importing Stored Procedures, Functions, and Packages
25.16.8
Importing Java Objects
25.16.9
Importing External Tables
25.16.10
Importing Advanced Queue (AQ) Tables
25.16.11
Importing LONG Columns
25.16.12
Importing LOB Columns When Triggers Are Present
25.16.13
Importing Views
25.16.14
Importing Partitioned Tables
25.17
Support for Fine-Grained Access Control
25.18
Snapshots and Snapshot Logs
25.18.1
Snapshot Log
25.18.2
Snapshots
25.18.2.1
Importing a Snapshot
25.18.2.2
Importing a Snapshot into a Different Schema
25.19
Transportable Tablespaces
25.20
Storage Parameters
25.20.1
The OPTIMAL Parameter
25.20.2
Storage Parameters for OID Indexes and LOB Columns
25.20.3
Overriding Storage Parameters
25.21
Read-Only Tablespaces
25.22
Dropping a Tablespace
25.23
Reorganizing Tablespaces
25.24
Importing Statistics
25.25
Using Export and Import to Partition a Database Migration
25.25.1
Advantages of Partitioning a Migration
25.25.2
Disadvantages of Partitioning a Migration
25.25.3
How to Use Export and Import to Partition a Database Migration
25.26
Tuning Considerations for Import Operations
25.26.1
Changing System-Level Options
25.26.2
Changing Initialization Parameters
25.26.3
Changing Import Options
25.26.4
Dealing with Large Amounts of LOB Data
25.26.5
Dealing with Large Amounts of LONG Data
25.27
Using Different Releases of Export and Import
25.27.1
Restrictions When Using Different Releases of Export and Import
25.27.2
Examples of Using Different Releases of Export and Import
Part V Appendixes
A
SQL*Loader Syntax Diagrams
B
Instant Client for SQL*Loader, Export, and Import
B.1
What is the Tools Instant Client?
B.2
Choosing the Instant Client to Install
B.3
Installing Tools Instant Client by Downloading from OTN
B.4
Installing Tools Instant Client from the 12c Client Release Media
B.5
Configuring Tools Instant Client Package
B.6
Connecting to a Database with the Tools Instant Client Package
B.7
Uninstalling Instant Client
Index
Part V
Appendixes
This section contains the following topics:
SQL*Loader Syntax Diagrams
Instant Client for SQL*Loader, Export, and Import