2 Data Pump Export
The Oracle Data Pump Export utility is used to unload data and metadata into a set of operating system files called a dump file set.
- What Is Data Pump Export?
Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set.
- Invoking Data Pump Export
The Data Pump Export utility is started using the expdp command.
- Filtering During Export Operations
Data Pump Export provides data and metadata filtering capability. This capability helps you limit the type of information that is exported.
- Parameters Available in Export's Command-Line Mode
This section describes the parameters available in the command-line mode of Data Pump Export.
- Commands Available in Data Pump Export Interactive-Command Mode
Check which command options are available to you when using Data Pump Export in interactive mode.
- Examples of Using Data Pump Export
These sections provide examples of using Data Pump Export.
- Syntax Diagrams for Data Pump Export
This section provides syntax diagrams for Data Pump Export.
Parent topic: Oracle Data Pump
2.1 What Is Data Pump Export?
Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set.
The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and loaded there.
The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
Because the dump files are written by the server, rather than by the client, the database administrator (DBA) must create directory objects that define the server locations to which files are written.
Data Pump Export enables you to specify that a job should move a subset of the data and metadata, as determined by the export mode. This is done using data filters and metadata filters, which are specified through Export parameters.
See Also:
- Default Locations for Dump, Log, and SQL Files for more information about directory objects.
- Filtering During Export Operations for more information about using filters.
- Examples of Using Data Pump Export for examples of the various ways in which you can use Data Pump Export.
Parent topic: Data Pump Export
2.2 Invoking Data Pump Export
The Data Pump Export utility is started using the expdp command.
The characteristics of the export operation are determined by the Export parameters that you specify. These parameters can be specified either on the command line or in a parameter file.
Note:
Do not start Export as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
- Data Pump Export Interfaces
You can interact with Data Pump Export by using a command line, a parameter file, or an interactive-command mode.
- Data Pump Export Modes
Export provides different modes for unloading different portions of the database.
- Network Considerations
You can specify a connect identifier in the connect string when you start the Data Pump Export utility. This identifier can specify a database instance that is different from the current instance identified by the current Oracle System ID (SID).
Parent topic: Data Pump Export
2.2.1 Data Pump Export Interfaces
You can interact with Data Pump Export by using a command line, a parameter file, or an interactive-command mode.
- Command-Line Interface: Enables you to specify most of the Export parameters directly on the command line. See Parameters Available in Export's Command-Line Mode for a complete description of the parameters available in the command-line interface.
- Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested. The use of parameter files is recommended if you are using parameters whose values require quotation marks.
- Interactive-Command Interface: Stops logging to the terminal and displays the Export prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an export operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.
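As a hedged sketch of the last interface, attaching to a job places the session directly in interactive-command mode; the job name hr_export below is assumed for illustration:

```
# Attach to an assumed executing or stopped job named hr_export;
# the session then displays the Export> prompt.
expdp hr ATTACH=hr_export
```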
2.2.2 Data Pump Export Modes
Export provides different modes for unloading different portions of the database.
The mode is specified on the command line, using the appropriate parameter. The available modes are described in the following sections:
Note:
Several system schemas cannot be exported because they are not user schemas; they contain Oracle-managed data and metadata. Examples of system schemas that are not exported include SYS, ORDSYS, and MDSYS.
- Full Mode
You can use Data Pump to carry out a full database export by using the FULL parameter.
- Schema Mode
You can specify a schema export with Data Pump by using the SCHEMAS parameter. A schema export is the default export mode.
- Table Mode
You can use Data Pump to carry out a table mode export by specifying the table using the TABLES parameter.
- Tablespace Mode
You can use Data Pump to carry out a tablespace export by specifying the tablespaces using the TABLESPACES parameter.
- Transportable Tablespace Mode
You can use Oracle Data Pump to carry out a transportable tablespace export by using the TRANSPORT_TABLESPACES parameter.
Parent topic: Invoking Data Pump Export
2.2.2.1 Full Mode
You can use Data Pump to carry out a full database export by using the FULL parameter.
In a full database export, the entire database is unloaded. This mode requires that you have the DATAPUMP_EXP_FULL_DATABASE role.
Using the Transportable Option During Full Mode Exports
If you specify the TRANSPORTABLE=ALWAYS parameter along with the FULL parameter, then Data Pump performs a full transportable export. A full transportable export exports all objects and data necessary to create a complete copy of the database. A mix of data movement methods is used:
- Objects residing in transportable tablespaces have only their metadata unloaded into the dump file set; the data itself is moved when you copy the data files to the target database. The data files that must be copied are listed at the end of the log file for the export operation.
- Objects residing in non-transportable tablespaces (for example, SYSTEM and SYSAUX) have both their metadata and data unloaded into the dump file set, using direct path unload and external tables.
Performing a full transportable export has the following restrictions:
- The user performing a full transportable export requires the DATAPUMP_EXP_FULL_DATABASE role.
- The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.
- If the database being exported contains either encrypted tablespaces or tables with encrypted columns (either Transparent Data Encryption (TDE) columns or SecureFiles LOB columns), then the ENCRYPTION_PASSWORD parameter must also be supplied.
- The source and target databases must be on platforms with the same endianness if there are encrypted tablespaces in the source database.
- If the source platform and the target platform are of different endianness, then you must convert the data being transported so that it is in the format of the target platform. You can use the DBMS_FILE_TRANSFER package or the RMAN CONVERT command to convert the data.
- A full transportable export is not restartable.
- All objects with storage that are selected for export must have all of their storage segments either entirely within administrative, non-transportable tablespaces (SYSTEM and SYSAUX) or entirely within user-defined, transportable tablespaces. Storage for a single object cannot straddle the two kinds of tablespaces.
- When transporting a database over the network using full transportable export, auditing cannot be enabled for tables stored in an administrative tablespace (such as SYSTEM and SYSAUX) if the audit trail information itself is stored in a user-defined tablespace.
- If both the source and target databases are running Oracle Database 12c, then to perform a full transportable export, either the Data Pump VERSION parameter must be set to at least 12.0, or the COMPATIBLE database initialization parameter must be set to at least 12.0.
Full transportable exports are supported from a source database running release 11.2.0.3. To do so, set the Data Pump VERSION parameter to at least 12.0, as shown in the following syntax example, where user_name is the user performing a full transportable export:
> expdp user_name FULL=y DUMPFILE=expdat.dmp DIRECTORY=data_pump_dir
TRANSPORTABLE=always VERSION=12.0 LOGFILE=export.log
See Also:
- Oracle Database Backup and Recovery Reference for information about the RMAN CONVERT command
- Oracle Database Administrator's Guide for an example of performing a full transportable export
Parent topic: Data Pump Export Modes
2.2.2.2 Schema Mode
You can specify a schema export with Data Pump by using the SCHEMAS parameter. A schema export is the default export mode.
If you have the DATAPUMP_EXP_FULL_DATABASE role, then you can specify a list of schemas, optionally including the schema definitions themselves and also system privilege grants to those schemas. If you do not have the DATAPUMP_EXP_FULL_DATABASE role, then you can export only your own schema.
The SYS schema cannot be used as a source schema for export jobs.
Cross-schema references are not exported unless the referenced schema is also specified in the list of schemas to be exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported. Also, external type definitions upon which tables in the specified schemas depend are not exported. In such a case, it is expected that the type definitions already exist in the target instance at import time.
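As an illustration of this mode, a minimal schema-mode invocation might look like the following sketch, which assumes the hr sample schema and the dpump_dir1 directory object used elsewhere in this chapter:

```
# Schema-mode export of the hr schema; SCHEMAS is the default export mode.
expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_schema.dmp LOGFILE=hr_schema.log
```

Because schema mode is the default, omitting the SCHEMAS parameter while connected as hr would export the same schema.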
Parent topic: Data Pump Export Modes
2.2.2.3 Table Mode
You can use Data Pump to carry out a table mode export by specifying the table using the TABLES parameter.
In table mode, only a specified set of tables, partitions, and their dependent objects are unloaded. Any object required to create the table, such as the owning schema, or types for columns, must already exist.
If you specify the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter, then only object metadata is unloaded. To move the actual data, you copy the data files to the target database. This results in quicker export times. If you are moving data files between releases or platforms, then the data files need to be processed by Oracle Recovery Manager (RMAN).
You must have the DATAPUMP_EXP_FULL_DATABASE role to specify tables that are not in your own schema. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time. Also, as in schema exports, cross-schema references are not exported.
To recover tables and table partitions, you can also use RMAN backups and the RMAN RECOVER TABLE command. During this process, RMAN creates (and optionally imports) a Data Pump export dump file that contains the recovered objects. Refer to Oracle Database Backup and Recovery Guide for more information about transporting data across platforms.
Carrying out a table mode export has the following restriction:
- When using the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter, the ENCRYPTION_PASSWORD parameter must also be used if the table being exported contains encrypted columns, either Transparent Data Encryption (TDE) columns or SecureFiles LOB columns.
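As a sketch of table mode, the following hedged examples assume the hr sample schema and the dpump_dir1 directory object; the table and partition names in the second command are hypothetical:

```
# Export two tables and their dependent objects.
expdp hr TABLES=employees,jobs DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp

# Export a single partition of a table; the colon separates the table name
# from the (assumed) partition name.
expdp hr TABLES=sh.sales:sales_q1 DIRECTORY=dpump_dir1 DUMPFILE=sales_q1.dmp
```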
Parent topic: Data Pump Export Modes
2.2.2.4 Tablespace Mode
You can use Data Pump to carry out a tablespace export by specifying the tablespaces using the TABLESPACES parameter.
In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, then its dependent objects are also unloaded. Both object metadata and data are unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Unprivileged users get only the tables in their own schemas.
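A hedged sketch of a tablespace-mode invocation, assuming tablespaces named tbs_4 and tbs_5 and the dpump_dir1 directory object:

```
# Unload all tables (and their dependent objects) that have storage
# in the listed tablespaces; metadata and data are both exported.
expdp hr TABLESPACES=tbs_4,tbs_5 DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp
```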
Parent topic: Data Pump Export Modes
2.2.2.5 Transportable Tablespace Mode
You can use Oracle Data Pump to carry out a transportable tablespace export by using the TRANSPORT_TABLESPACES parameter.
In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is exported. The tablespace data files are copied in a separate operation. Then, a transportable tablespace import is performed to import the dump file containing the metadata and to specify the data files to use.
Transportable tablespace mode requires that the specified tables be completely self-contained. That is, all storage segments of all tables (and their indexes) defined within the tablespace set must also be contained within the set. If there are self-containment violations, then Export identifies all of the problems without actually performing the export.
Type definitions for columns of tables in the specified tablespaces are exported and imported. The schemas owning those types must be present in the target instance.
Transportable tablespace exports cannot be restarted once stopped. Also, they cannot have a degree of parallelism greater than 1.
Note:
You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or later release level as the source database.
Using Oracle Data Pump to carry out a transportable tablespace export has the following restrictions:
- If any of the tablespaces being exported contains tables with encrypted columns, either Transparent Data Encryption (TDE) columns or SecureFiles LOB columns, then the ENCRYPTION_PASSWORD parameter must also be supplied.
- If any of the tablespaces being exported is encrypted, then the use of the ENCRYPTION_PASSWORD parameter is optional but recommended. If the ENCRYPTION_PASSWORD parameter is omitted in this case, then the following warning message is displayed:
ORA-39396: Warning: exporting encrypted data using transportable option without password
This warning points out that in order to successfully import such a transportable tablespace job, the target database wallet must contain a copy of the same database master key used in the source database when performing the export. Using the ENCRYPTION_PASSWORD parameter during the export and import eliminates this requirement.
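A hedged sketch of a transportable tablespace export, assuming a self-contained tablespace named tbs_1 and the dpump_dir1 directory object; only metadata is written to the dump file, and the data files for tbs_1 must be copied to the target system in a separate step:

```
# TRANSPORT_FULL_CHECK=YES verifies that the tablespace set is
# self-contained before the metadata is unloaded.
expdp system TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
```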
Parent topic: Data Pump Export Modes
2.2.3 Network Considerations
You can specify a connect identifier in the connect string when you start the Data Pump Export utility. This identifier can specify a database instance that is different from the current instance identified by the current Oracle System ID (SID).
The connect identifier can be an Oracle*Net connect descriptor or a net service name (usually defined in the tnsnames.ora file) that maps to a connect descriptor. Use of a connect identifier requires that you have Oracle Net Listener running (to start the default listener, enter lsnrctl start). The following is an example of this type of connection, in which inst1 is the connect identifier:
expdp hr@inst1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees
Export then prompts you for a password:
Password: password
The local Export client connects to the database instance defined by the connect identifier inst1 (a net service name), retrieves data from inst1, and writes it to the dump file hr.dmp on inst1.
Specifying a connect identifier when you start the Export utility is different from performing an export operation using the NETWORK_LINK parameter. When you start an export operation and specify a connect identifier, the local Export client connects to the database instance identified by the connect identifier, retrieves data from that database instance, and writes it to a dump file set on that database instance. In contrast, when you perform an export using the NETWORK_LINK parameter, the export is performed using a database link. (A database link is a connection between two physical database servers that allows a client to access them as one logical database.)
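To make the contrast concrete, the following sketch assumes a database link named source_database_link from the connected instance to the source database; note that the dump file is written on the connected instance, not on the source:

```
# Network export through an assumed database link; data is pulled
# from the source and written locally to dpump_dir1.
expdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network.dmp
```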
See Also:
- Oracle Database Administrator's Guide for more information about database links
- Oracle Database Net Services Administrator's Guide for more information about connect identifiers and Oracle Net Listener
Parent topic: Invoking Data Pump Export
2.3 Filtering During Export Operations
Data Pump Export provides data and metadata filtering capability. This capability helps you limit the type of information that is exported.
- Data Filters
Data-specific filtering is implemented through the QUERY and SAMPLE parameters, which specify restrictions on the table rows that are to be exported.
- Metadata Filters
Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.
Parent topic: Data Pump Export
2.3.1 Data Filters
Data-specific filtering is implemented through the QUERY and SAMPLE parameters, which specify restrictions on the table rows that are to be exported.
Data filtering can also occur indirectly because of metadata filtering, which can include or exclude table objects along with any associated row data.
Each data filter can be specified once per table within a job. If different filters using the same name are applied to both a particular table and to the whole job, then the filter parameter supplied for the specific table takes precedence.
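As a sketch of both data filters, the following hedged examples assume the hr sample schema and the dpump_dir1 directory object; the quoting shown for QUERY is one possibility, and the exact escaping required depends on your operating system:

```
# Export only the employees rows that satisfy the WHERE clause.
expdp hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=emp_query.dmp QUERY=employees:'"WHERE department_id > 10"'

# Export approximately 70 percent of the rows of each table in the schema.
expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_sample.dmp SAMPLE=70
```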
Parent topic: Filtering During Export Operations
2.3.2 Metadata Filters
Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.
Metadata filters identify a set of objects to be included or excluded from an Export or Import operation. For example, you could request a full export, but without Package Specifications or Package Bodies.
To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that an index is to be included in an operation, then statistics from that index will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.
If multiple filters are specified for an object type, then an implicit AND operation is applied to them. That is, objects pertaining to the job must pass all of the filters applied to their object types.
The same metadata filter name can be specified multiple times within a job.
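For instance, the request mentioned earlier (a full export without Package Specifications or Package Bodies) could be sketched as follows, assuming the dpump_dir1 directory object and suitably privileged users:

```
# Full export that excludes all package specifications and bodies.
expdp system FULL=YES DIRECTORY=dpump_dir1 DUMPFILE=full_nopkg.dmp EXCLUDE=PACKAGE

# Schema export limited to tables, their dependent objects, and grants.
expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_tables.dmp INCLUDE=TABLE
```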
To see a list of valid object types, query the following views: DATABASE_EXPORT_OBJECTS for full mode, SCHEMA_EXPORT_OBJECTS for schema mode, and TABLE_EXPORT_OBJECTS for table and tablespace mode. The values listed in the OBJECT_PATH column are the valid object types. For example, you could perform the following query:
SQL> SELECT OBJECT_PATH, COMMENTS FROM SCHEMA_EXPORT_OBJECTS
  2  WHERE OBJECT_PATH LIKE '%GRANT' AND OBJECT_PATH NOT LIKE '%/%';
The output of this query looks similar to the following:
OBJECT_PATH
--------------------------------------------------------------------------------
COMMENTS
--------------------------------------------------------------------------------
GRANT
Object grants on the selected tables

OBJECT_GRANT
Object grants on the selected tables

PROCDEPOBJ_GRANT
Grants on instance procedural objects

PROCOBJ_GRANT
Schema procedural object grants in the selected schemas

ROLE_GRANT
Role grants to users associated with the selected schemas

SYSTEM_GRANT
System privileges granted to users associated with the selected schemas
Parent topic: Filtering During Export Operations
2.4 Parameters Available in Export's Command-Line Mode
This section describes the parameters available in the command-line mode of Data Pump Export.
- About Data Pump Export Parameters
Learn how to use Oracle Data Pump Export parameters in command-line mode, including case sensitivity, quotation marks, escape characters, and information about how to use examples.
- ABORT_STEP
The Data Pump Export command-line utility ABORT_STEP parameter stops the job after it is initialized.
- ACCESS_METHOD
The Data Pump Export command-line utility ACCESS_METHOD parameter instructs Export to use a particular method to unload data.
- ATTACH
The Data Pump Export command-line utility ATTACH parameter attaches the client session to an existing export job, and automatically places you in the interactive-command interface.
- CLUSTER
The Data Pump Export command-line utility CLUSTER parameter determines whether Data Pump can use Oracle RAC resources and start workers on other Oracle RAC instances.
- COMPRESSION
The Data Pump Export command-line utility COMPRESSION parameter specifies which data to compress before writing to the dump file set.
- COMPRESSION_ALGORITHM
The Data Pump Export command-line utility COMPRESSION_ALGORITHM parameter specifies the compression algorithm that you want to use when compressing dump file data.
- CONTENT
The Data Pump Export command-line utility CONTENT parameter enables you to filter what Export unloads: data only, metadata only, or both.
- DATA_OPTIONS
The Data Pump Export command-line utility DATA_OPTIONS parameter designates how you want certain types of data handled during export operations.
- DIRECTORY
The Data Pump Export command-line utility DIRECTORY parameter specifies the default location to which Export can write the dump file set and the log file.
- DUMPFILE
The Data Pump Export command-line utility DUMPFILE parameter specifies the names, and optionally, the directory objects of dump files for an export job.
- ENABLE_SECURE_ROLES
The Data Pump Export command-line utility ENABLE_SECURE_ROLES parameter prevents inadvertent use of protected roles during exports.
- ENCRYPTION
The Data Pump Export command-line utility ENCRYPTION parameter specifies whether to encrypt data before writing it to the dump file set.
- ENCRYPTION_ALGORITHM
The Data Pump Export command-line utility ENCRYPTION_ALGORITHM parameter specifies which cryptographic algorithm should be used to perform the encryption.
- ENCRYPTION_MODE
The Data Pump Export command-line utility ENCRYPTION_MODE parameter specifies the type of security to use when encryption and decryption are performed.
- ENCRYPTION_PASSWORD
The Data Pump Export command-line utility ENCRYPTION_PASSWORD parameter specifies a password for encrypting encrypted column data, metadata, or table data in the export dump file. This parameter prevents unauthorized access to an encrypted dump file set.
- ENCRYPTION_PWD_PROMPT
The Data Pump Export command-line utility ENCRYPTION_PWD_PROMPT parameter specifies whether Data Pump should prompt you for the encryption password.
- ESTIMATE
The Data Pump Export command-line utility ESTIMATE parameter specifies the method that Export uses to estimate how much disk space each table in the export job will consume (in bytes).
- ESTIMATE_ONLY
The Data Pump Export command-line utility ESTIMATE_ONLY parameter instructs Export to estimate the space that a job consumes, without actually performing the export operation.
- EXCLUDE
The Data Pump Export command-line utility EXCLUDE parameter enables you to filter the metadata that is exported by specifying objects and object types that you want to exclude from the export operation.
- FILESIZE
The Data Pump Export command-line utility FILESIZE parameter specifies the maximum size of each dump file.
- FLASHBACK_SCN
The Data Pump Export command-line utility FLASHBACK_SCN parameter specifies the system change number (SCN) that Export uses to enable the Flashback Query utility.
- FLASHBACK_TIME
The Data Pump Export command-line utility FLASHBACK_TIME parameter finds the SCN that most closely matches the specified time. This SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent up to this SCN.
- FULL
The Export command-line FULL parameter specifies that you want to perform a full database mode export.
- HELP
The Data Pump Export command-line utility HELP parameter displays online help for the Export utility.
- INCLUDE
The Data Pump Export command-line utility INCLUDE parameter enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.
- JOB_NAME
The Data Pump Export command-line utility JOB_NAME parameter identifies the export job in subsequent actions, such as when using ATTACH to attach to a job, or to identify a job using the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views.
- KEEP_MASTER
The Data Pump Export command-line utility KEEP_MASTER parameter indicates whether the master table should be deleted or retained at the end of a Data Pump job that completes successfully. The master table is automatically retained for jobs that do not complete successfully.
- LOGFILE
The Data Pump Export command-line utility LOGFILE parameter specifies the name, and optionally, a directory, for the log file of the export job.
- LOGTIME
The Data Pump Export command-line utility LOGTIME parameter specifies that messages displayed during export operations are timestamped.
- METRICS
The Data Pump Export command-line utility METRICS parameter indicates whether you want additional information about the job reported to the Data Pump log file.
- NETWORK_LINK
The Data Pump Export command-line utility NETWORK_LINK parameter enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
- NOLOGFILE
The Data Pump Export command-line utility NOLOGFILE parameter specifies whether to suppress creation of a log file.
- PARALLEL
The Export command-line utility PARALLEL parameter specifies the maximum number of processes of active execution operating on behalf of the export job.
- PARFILE
The Data Pump Export command-line utility PARFILE parameter specifies the name of an export parameter file.
- QUERY
The Data Pump Export command-line utility QUERY parameter enables you to specify a query clause that is used to filter the data that gets exported.
- REMAP_DATA
The Data Pump Export command-line utility REMAP_DATA parameter enables you to specify a remap function that takes as a source the original value of the designated column and returns a remapped value that will replace the original value in the dump file.
- REUSE_DUMPFILES
The Data Pump Export command-line utility REUSE_DUMPFILES parameter specifies whether to overwrite a preexisting dump file.
- SAMPLE
The Data Pump Export command-line utility SAMPLE parameter specifies a percentage of the data rows that you want to be sampled and unloaded from the source database.
- SCHEMAS
The Data Pump Export command-line utility SCHEMAS parameter specifies that you want to perform a schema-mode export. This is the default mode for Export.
- SERVICE_NAME
- SOURCE_EDITION
The Data Pump Export command-line utility SOURCE_EDITION parameter specifies the database edition from which objects are exported.
- STATUS
The Data Pump Export command-line utility STATUS parameter specifies the frequency at which the job status display is updated.
- TABLES
The Data Pump Export command-line utility TABLES parameter specifies that you want to perform a table-mode export.
- TABLESPACES
The Data Pump Export command-line utility TABLESPACES parameter specifies a list of tablespace names to be exported in tablespace mode.
- TRANSPORT_FULL_CHECK
The Data Pump Export command-line utility TRANSPORT_FULL_CHECK parameter specifies whether to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.
- TRANSPORT_TABLESPACES
The Data Pump Export command-line utility TRANSPORT_TABLESPACES parameter specifies that you want to perform an export in transportable-tablespace mode.
- TRANSPORTABLE
The Data Pump Export command-line utility TRANSPORTABLE parameter specifies whether the transportable option should be used during a table mode export (specified with the TABLES parameter) or a full mode export (specified with the FULL parameter).
- TTS_CLOSURE_CHECK
The Data Pump Export command-line mode TTS_CLOSURE_CHECK parameter is used to indicate the degree of closure checking to be performed as part of a Data Pump transportable tablespace operation.
- VERSION
The Data Pump Export command-line utility VERSION parameter specifies the version of database objects that you want to export.
- VIEWS_AS_TABLES
The Data Pump Export command-line utility VIEWS_AS_TABLES parameter specifies that you want one or more views exported as tables.
Parent topic: Data Pump Export
2.4.1 About Data Pump Export Parameters
Learn how to use Oracle Data Pump Export parameters in command-line mode, including case sensitivity, quotation marks, escape characters, and information about how to use examples.
Specifying Export Parameters
For parameters that can have multiple values specified, the values can be separated by commas or by spaces. For example, you can specify TABLES=employees,jobs or TABLES=employees jobs.
For every parameter you enter, you must enter an equal sign (=) and a value. Data Pump has no other way of knowing that the previous parameter specification is complete and a new parameter specification is beginning. For example, in the following command line, even though NOLOGFILE is a valid parameter, Export interprets the string as another dump file name for the DUMPFILE parameter:
expdp DIRECTORY=dpumpdir DUMPFILE=test.dmp NOLOGFILE TABLES=employees
This command results in two dump files being created, test.dmp and nologfile.dmp.
To avoid this result, specify either NOLOGFILE=YES or NOLOGFILE=NO.
Case Sensitivity When Specifying Parameter Values
For tablespace names, schema names, table names, and so on, that you enter as parameter values, Oracle Data Pump by default changes values entered as lowercase or mixed-case into uppercase. For example, if you enter TABLE=hr.employees, then it is changed to TABLE=HR.EMPLOYEES. To maintain case, you must enclose the value within quotation marks. For example, TABLE="hr.employees" would preserve the table name in all lowercase. The name you enter must exactly match the name stored in the database.
Use of Quotation Marks On the Data Pump Command Line
Some operating systems treat quotation marks as special characters. These operating systems therefore do not pass quotation marks on to an application unless the quotation marks are preceded by an escape character, such as the backslash (\). This requirement is true both on the command line and within parameter files. Some operating systems can require an additional set of single or double quotation marks on the command line around the entire parameter value containing the special characters.
The following examples are provided to illustrate these concepts. Note that your particular operating system can have different requirements. The documentation examples cannot fully anticipate operating environments, which are unique to each user.
In this example, the TABLES parameter is specified in a parameter file:
TABLES = \"MixedCaseTableName\"
If you specify that value on the command line, then some operating systems require that you surround the value with single quotation marks, as follows:
TABLES = '\"MixedCaseTableName\"'
To avoid having to supply more quotation marks on the command line, Oracle recommends the use of parameter files. Also, note that if you use a parameter file, and the parameter value being specified does not have quotation marks as the first character in the string (for example, TABLES=scott."EmP"
), then some operating systems do not require the use of escape characters.
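As an illustrative sketch of the parameter-file approach, the following shell snippet builds a parameter file containing a mixed-case table name. The file name exp_case.par and the table hr."EmP" are hypothetical examples; because the shell never interprets the file's contents, no escape characters are needed:

```shell
# Build a parameter file so the shell does not interpret the quotation marks.
# File name (exp_case.par) and table name (hr."EmP") are hypothetical examples.
cat > exp_case.par <<'EOF'
DIRECTORY=dpump_dir1
DUMPFILE=emp_case.dmp
TABLES=hr."EmP"
EOF

# Show the file; the quotation marks survive intact because no shell
# interpretation happens inside a quoted heredoc.
cat exp_case.par
```

The export would then be run with something like expdp hr PARFILE=exp_case.par, keeping all quoting out of the command line.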
Using the Export Parameter Examples
If you try running the examples that are provided for each parameter, be aware of the following:
- After you enter the user name and parameters as shown in the example, Export is started, and you are prompted for a password. You are required to enter the password before a database connection is made.
- Most of the examples use the sample schemas of the seed database, which is installed by default when you install Oracle Database. In particular, the human resources (hr) schema is often used.
- The examples assume that the directory objects, dpump_dir1 and dpump_dir2, already exist, and that READ and WRITE privileges are granted to the hr user for these directory objects.
- Some of the examples require the DATAPUMP_EXP_FULL_DATABASE and DATAPUMP_IMP_FULL_DATABASE roles. The examples assume that the hr user is granted these roles.
If necessary, ask your DBA for help in creating these directory objects and assigning the necessary privileges and roles.
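For reference, the directory-object setup that the examples assume could be sketched as follows. This is a hedged illustration: the file system paths are assumed examples, and a DBA would normally run these statements:

```shell
# Write the one-time setup script a DBA might run. The file system paths are
# assumptions, not values from this document; the SQL itself is the standard
# CREATE DIRECTORY / GRANT syntax.
cat > setup_dirs.sql <<'EOF'
CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump1';
CREATE DIRECTORY dpump_dir2 AS '/u01/app/oracle/dpump2';
GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
GRANT READ, WRITE ON DIRECTORY dpump_dir2 TO hr;
EOF
cat setup_dirs.sql
```

A DBA would execute the script with SQL*Plus, for example: sqlplus / as sysdba @setup_dirs.sql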
Unless specifically noted, you can also specify these parameters in a parameter file.
See Also:
- Your Oracle operating system-specific documentation for information about how special and reserved characters are handled on your system
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.2 ABORT_STEP
The Data Pump Export command-line utility ABORT_STEP
parameter stops the job after it is initialized.
Default: Null
Purpose
Used to stop the job after it is initialized. Stopping a job after it is initialized enables you to query the master table before any data is exported.
Syntax and Description
ABORT_STEP=[n | -1]
The possible values correspond to a process order number in the master table. The result of using each number is as follows:
- n: If the value is zero or greater, then the export operation is started, and the job is aborted at the object that is stored in the master table with the corresponding process order number.
- -1: If the value is negative one (-1), then the job is aborted after it is set up, but before any objects or data are exported.
Restrictions
-
None
Example
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr ABORT_STEP=-1
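After a job is stopped this way, the master table (which is created in the job owner's schema with the same name as the job) can be inspected. The following is a hedged sketch only: the job name sys_export_schema_01 is the typical system-generated default, and the column names PROCESS_ORDER, OBJECT_TYPE, and OBJECT_NAME are assumptions about the master-table layout in your release:

```shell
# Write a query a user might run against the master table after ABORT_STEP.
# Table name and column names below are assumptions for illustration.
cat > inspect_master.sql <<'EOF'
SELECT process_order, object_type, object_name
  FROM hr.sys_export_schema_01
 WHERE process_order > 0
 ORDER BY process_order;
EOF
cat inspect_master.sql
```

The process order numbers returned by such a query are the values you would then pass to ABORT_STEP=n on a subsequent run.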
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.3 ACCESS_METHOD
The Data Pump Export command-line utility ACCESS_METHOD
parameter instructs Export to use a particular method to unload data.
Default: AUTOMATIC
Purpose
Instructs Export to use a particular method to unload data.
Syntax and Description
ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE | INSERT_AS_SELECT]
The ACCESS_METHOD parameter is provided so that you can try an alternative method if the default method does not work for some reason. All methods can be specified for a network export. If the data for a table cannot be unloaded with the specified access method, then Data Pump displays an error for the table and continues with the next work item.
- AUTOMATIC — Data Pump determines the best way to unload data for each table. Oracle recommends that you use AUTOMATIC whenever possible because it allows Data Pump to automatically select the most efficient method.
- DIRECT_PATH — Data Pump uses direct path unload for every table.
- EXTERNAL_TABLE — Data Pump uses a SQL CREATE TABLE AS SELECT statement to create an external table using data that is stored in the dump file. The SELECT clause reads from the table to be unloaded.
- INSERT_AS_SELECT — Data Pump executes a SQL INSERT AS SELECT statement to unload data from a remote database. This option is only available for network mode exports.
Restrictions
- To use the ACCESS_METHOD parameter with network exports, you must be using Oracle Database 12c Release 2 (12.2.0.1) or later.
- The ACCESS_METHOD parameter for Data Pump Export is not valid for transportable tablespace jobs.
Example
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr
ACCESS_METHOD=EXTERNAL_TABLE
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.4 ATTACH
The Data Pump Export command-line utility ATTACH
parameter attaches the client session to an existing export job, and automatically places you in the interactive-command interface.
Default: job currently in the user schema, if there is only one
Purpose
Attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached and also displays the Export prompt.
Syntax and Description
ATTACH [=[schema_name.]job_name]
The schema_name
is optional. To specify a schema other than your own, you must have the DATAPUMP_EXP_FULL_DATABASE
role.
The job_name
is optional if only one export job is associated with your schema and the job is active. To attach to a stopped job, you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS
view, or the USER_DATAPUMP_JOBS
view.
When you are attached to the job, Export displays a description of the job and then displays the Export prompt.
Restrictions
- When you specify the ATTACH parameter, the only other Data Pump parameter you can specify on the command line is ENCRYPTION_PASSWORD.
- If the job to which you are attaching was initially started using an encryption password, then when you attach to the job, you must again enter the ENCRYPTION_PASSWORD parameter on the command line to respecify that password. The only exception to this requirement is if the job was initially started with the ENCRYPTION=ENCRYPTED_COLUMNS_ONLY parameter. In that case, the encryption password is not needed when attaching to the job.
- You cannot attach to a job in another schema unless it is already running.
- If the dump file set or master table for the job has been deleted, then the attach operation fails.
- Altering the master table in any way leads to unpredictable results.
Example
The following is an example of using the ATTACH
parameter. It assumes that the job hr.export_job
is an existing job.
> expdp hr ATTACH=hr.export_job
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.5 CLUSTER
The Data Pump Export command-line utility CLUSTER parameter determines whether Data Pump can use Oracle RAC resources and start workers on other Oracle RAC instances.
Default: YES
Purpose
Determines whether Data Pump can use Oracle Real Application Clusters (Oracle RAC) resources and start workers on other Oracle RAC instances.
Syntax and Description
CLUSTER=[YES | NO]
To force Data Pump Export to use only the instance where the job is started and to replicate pre-Oracle Database 11g release 2 (11.2) behavior, specify CLUSTER=NO
.
To specify a specific, existing service and constrain worker processes to run only on instances defined for that service, use the SERVICE_NAME
parameter with the CLUSTER=YES
parameter.
Use of the CLUSTER
parameter can affect performance because there is some additional overhead in distributing the export job across Oracle RAC instances. For small jobs, it can be better to specify CLUSTER=NO
to constrain the job to run on the instance where it is started. Jobs whose performance benefits the most from using the CLUSTER
parameter are those involving large amounts of data.
Example
The following is an example of using the CLUSTER
parameter:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_clus%U.dmp CLUSTER=NO PARALLEL=3
This example starts a schema-mode export (the default) of the hr schema. Because CLUSTER=NO is specified, the job uses only the instance on which it started. (If you do not specify the CLUSTER parameter, then the default value of YES is used. With that value, if necessary, workers are started on other instances in the Oracle RAC cluster.) The dump files are written to the location specified for the dpump_dir1 directory object. The job can have up to three parallel processes.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.6 COMPRESSION
The Data Pump Export command-line utility COMPRESSION
parameter specifies which data to compress before writing to the dump file set.
Default: METADATA_ONLY
Purpose
Specifies which data to compress before writing to the dump file set.
Syntax and Description
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
- ALL enables compression for the entire export operation. The ALL option requires that the Oracle Advanced Compression option be enabled.
- DATA_ONLY results in all data being written to the dump file in compressed format. The DATA_ONLY option requires that the Oracle Advanced Compression option be enabled.
- METADATA_ONLY results in all metadata being written to the dump file in compressed format. This is the default.
- NONE disables compression for the entire export operation.
Restrictions
- To make full use of all these compression options, the COMPATIBLE initialization parameter must be set to at least 11.0.0.
- The METADATA_ONLY option can be used even if the COMPATIBLE initialization parameter is set to 10.2.
- Compression of data using ALL or DATA_ONLY is valid only in the Enterprise Edition of Oracle Database 11g or later, and both options require that the Oracle Advanced Compression option be enabled.
Example
The following is an example of using the COMPRESSION
parameter:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY
This command executes a schema-mode export that compresses all metadata before writing it out to the dump file, hr_comp.dmp
. It defaults to a schema-mode export, because no export mode is specified.
See Oracle Database Licensing Information for information about licensing requirements for the Oracle Advanced Compression option.
Related Topics
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.7 COMPRESSION_ALGORITHM
The Data Pump Export command-line utility COMPRESSION_ALGORITHM
parameter specifies the compression algorithm that you want to use when compressing dump file data.
Default: BASIC
Purpose
Specifies the compression algorithm to be used when compressing dump file data.
Syntax and Description
COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH}
The parameter options are defined as follows:
- BASIC: Offers a good combination of compression ratios and speed; the algorithm used is the same as in previous versions of Oracle Data Pump.
- LOW: Least impact on export throughput. This option is suited for environments where CPU resources are the limiting factor.
- MEDIUM: Recommended for most environments. This option, like the BASIC option, provides a good combination of compression ratios and speed, but it uses a different algorithm than BASIC.
- HIGH: Best suited for situations in which dump files are copied over slower networks, where the limiting factor is network speed.
You characterize the performance of a compression algorithm by its CPU usage, and by the compression ratio (the size of the compressed output as a percentage of the uncompressed input). These measures vary, based on the size and type of inputs, as well as the speed of the compression algorithms used. The compression ratio generally increases from low to high, with a trade-off of potentially consuming more CPU resources.
It is recommended that you run tests with the different compression levels on the data in your environment. Choosing a compression level based on your environment, workload characteristics, and size and type of data is the only way to ensure that the exported dump file set compression level meets your performance and storage requirements.
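One way to structure such a test is to generate one export per algorithm and then compare elapsed times and dump file sizes. The sketch below only prints the commands rather than executing them; the directory object, schema, and file names are placeholders to adapt to your environment:

```shell
# Generate one expdp command per compression algorithm so that each level
# can be timed and its resulting dump file size compared. Commands are
# printed, not executed; dpump_dir1 and the hr schema are placeholders.
for alg in BASIC LOW MEDIUM HIGH; do
  lower=$(echo "$alg" | tr '[:upper:]' '[:lower:]')
  printf 'expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%s.dmp COMPRESSION=ALL COMPRESSION_ALGORITHM=%s\n' \
    "$lower" "$alg"
done
```

After running the generated commands, comparing the sizes of hr_basic.dmp through hr_high.dmp against the export times gives the CPU-versus-compression trade-off for your data.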
Restrictions
-
To use this feature, database compatibility must be set to 12.0.0 or later.
-
This feature requires that you have the Oracle Advanced Compression option enabled.
Example 1
This example performs a schema-mode unload of the HR
schema and compresses only the table data using a compression algorithm with a low level of compression. Using this command option can result in fewer CPU resources being used, at the expense of a less than optimal compression ratio.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp COMPRESSION=DATA_ONLY
COMPRESSION_ALGORITHM=LOW
Example 2
This example performs a schema-mode unload of the HR
schema, and compresses both metadata and table data using the basic level of compression. Omitting the COMPRESSION_ALGORITHM
parameter altogether is equivalent to specifying BASIC
as the value.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp COMPRESSION=ALL COMPRESSION_ALGORITHM=BASIC
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.8 CONTENT
The Data Pump Export command-line utility CONTENT
parameter enables you to filter what Export unloads: data only, metadata only, or both.
Default: ALL
Purpose
Enables you to filter what Export unloads: data only, metadata only, or both.
Syntax and Description
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
- ALL unloads both data and metadata. This option is the default.
- DATA_ONLY unloads only table row data; no database object definitions are unloaded.
- METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then afterward, when the dump file is imported, any index or table statistics imported from the dump file are locked after the import.
Restrictions
- The CONTENT=METADATA_ONLY parameter cannot be used with the TRANSPORT_TABLESPACES (transportable-tablespace mode) parameter or with the QUERY parameter.
Example
The following is an example of using the CONTENT
parameter:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY
This command executes a schema-mode export that unloads only the metadata associated with the hr
schema. It defaults to a schema-mode export of the hr
schema, because no export mode is specified.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.9 DATA_OPTIONS
The Data Pump Export command-line utility DATA_OPTIONS
parameter designates how you want certain types of data handled during export operations.
Default: There is no default. If this parameter is not used, then the special data handling options it provides simply do not take effect.
Purpose
The DATA_OPTIONS
parameter designates how certain types of data should be handled during export operations.
Syntax and Description
DATA_OPTIONS= [GROUP_PARTITION_TABLE_DATA | VERIFY_STREAM_FORMAT]
- GROUP_PARTITION_TABLE_DATA: Tells Data Pump to unload all table data in one operation, rather than unloading each table partition as a separate operation. As a result, the definition of the table does not matter at import time, because Import sees one partition of data that is loaded into the entire table.
- VERIFY_STREAM_FORMAT: Validates the format of a data stream before it is written to the Data Pump dump file. The verification checks for a valid format for the stream after it is generated, but before it is written to disk. This helps to ensure that there are no errors when the dump file is created, which in turn helps to ensure that there are no errors when the stream is read at import time.
Restrictions
- The Export DATA_OPTIONS parameter requires the job version to be set to 11.0.0 or later. See VERSION.
Example
This example shows an export operation in which data for all partitions of a table are unloaded together instead of the default behavior of unloading the data for each partition separately.
> expdp hr TABLES=hr.tab1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp VERSION=11.2 DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
See Oracle XML DB Developer’s Guide for information specific to exporting and importing XMLType
tables.
Related Topics
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.10 DIRECTORY
The Data Pump Export command-line utility DIRECTORY parameter specifies the default location to which Export can write the dump file set and the log file.
Default: DATA_PUMP_DIR
Purpose
Specifies the default location to which Export can write the dump file set and the log file.
Syntax and Description
DIRECTORY=directory_object
The directory_object
is the name of a database directory object. It is not the file path of an actual directory. Privileged users have access to a default directory object named DATA_PUMP_DIR
. The definition of the DATA_PUMP_DIR
directory can be changed by Oracle during upgrades, or when patches are applied.
Users with access to the default DATA_PUMP_DIR
directory object do not need to use the DIRECTORY
parameter.
A directory object specified on the DUMPFILE
or LOGFILE
parameter overrides any directory object that you specify for the DIRECTORY
parameter.
Example
The following is an example of using the DIRECTORY
parameter:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY
In this example, the dump file employees.dmp is written to the path that is associated with the directory object dpump_dir1.
2.4.11 DUMPFILE
The Data Pump Export command-line utility DUMPFILE
parameter specifies the names, and optionally, the directory objects of dump files for an export job.
Default: expdat.dmp
Purpose
Specifies the names, and optionally, the directory objects of dump files for an export job.
Syntax and Description
DUMPFILE=[directory_object:]file_name [, ...]
Specifying directory_object
is optional if you have already specified the directory object by using the DIRECTORY
parameter. If you supply a value here, then it must be a directory object that exists, and to which you have access. A database directory object that is specified as part of the DUMPFILE
parameter overrides a value specified by the DIRECTORY
parameter, or by the default directory object.
You can supply multiple file_name
specifications as a comma-delimited list or in separate DUMPFILE
parameter specifications. If no extension is given for the file name, then Export uses the default file extension of .dmp
. The file names can contain a substitution variable. The following table lists the available substitution variables.
Substitution Variable | Meaning
---|---
%U | The substitution variable is expanded in the resulting file names into a 2-digit, fixed-width, incrementing integer that starts at 01 and ends at 99. If a file specification contains two substitution variables, then both are incremented at the same time. For example, exp%Uaa%U.dmp resolves to exp01aa01.dmp, exp02aa02.dmp, and so forth.
%d, %D | Specifies the current day of the month from the Gregorian calendar in format DD. Note: This substitution variable cannot be used in an import file name.
%m, %M | Specifies the month in the Gregorian calendar in format MM. Note: This substitution variable cannot be used in an import file name.
%t, %T | Specifies the year, month, and day in the Gregorian calendar in this format: YYYYMMDD. Note: This substitution variable cannot be used in an import file name.
%l, %L | Specifies a system-generated unique file name. The file names can contain the substitution variable (%L), which implies that multiple files can be generated. The substitution variable is first expanded in the resulting file names into a 2-digit, fixed-width, incrementing integer that starts at 01 and ends at 99, and is then expanded into a 3-digit to 10-digit, variable-width, incrementing integer that starts at 100. For example, if the current integer is 1, then exp%Laa%L.dmp resolves to exp01aa01.dmp, exp02aa02.dmp, and so forth, up until 99. Then the next file name has 3 digits substituted, and so forth, up until 999, after which the next file has 4 digits substituted. The substitutions continue up to the largest number substitution allowed, which is 2147483646.
%y, %Y | Specifies the year in this format: YYYY. Note: This substitution variable cannot be used in an import file name.
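The fixed-width behavior of %U can be illustrated with a short shell sketch that prints the names Export would generate from a template such as exp%Uaa%U.dmp (both occurrences advance together):

```shell
# Print the first few file names that the template exp%Uaa%U.dmp expands to:
# a 2-digit, zero-padded integer substituted at both positions simultaneously.
for i in 1 2 3; do
  printf 'exp%02daa%02d.dmp\n' "$i" "$i"
done
```

This prints exp01aa01.dmp, exp02aa02.dmp, and exp03aa03.dmp, matching the expansion described in the table above.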
If the FILESIZE parameter is specified, then each dump file has a maximum of that size and is nonextensible. If more space is required for the dump file set, and a template with a substitution variable was supplied, then a new dump file is automatically created of the size specified by the FILESIZE parameter, if there is room on the device.
As each file specification or file template containing a substitution variable is defined, it is instantiated into one fully qualified file name, and Export attempts to create the file. The file specifications are processed in the order in which they are specified. If the job needs extra files because the maximum file size is reached, or to keep parallel workers active, then more files are created if file templates with substitution variables were specified.
Although it is possible to specify multiple files using the DUMPFILE
parameter, the export job can only require a subset of those files to hold the exported data. The dump file set displayed at the end of the export job shows exactly which files were used. It is this list of files that is required to perform an import operation using this dump file set. Any files that were not used can be discarded.
When you specify the DUMPFILE parameter, it is possible to introduce conflicting file names, regardless of whether substitution variables are used. The following are some examples of expdp commands that would produce file name conflicts. For all these examples, an ORA-27038 created file already exists error is returned:
expdp system/manager directory=dpump_dir schemas=hr DUMPFILE=foo%U.dmp,foo%U.dmp
expdp system/manager directory=dpump_dir schemas=hr DUMPFILE=foo%U.dmp,foo%L.dmp
expdp system/manager directory=dpump_dir schemas=hr DUMPFILE=foo%U.dmp,foo%D.dmp
expdp system/manager directory=dpump_dir schemas=hr DUMPFILE=foo%tK_%t_%u_%y_P,foo%TK_%T_%U_%Y_P
Restrictions
- Any resulting dump file names that match preexisting dump file names generate an error, and the preexisting dump files are not overwritten. You can override this behavior by specifying the Export parameter REUSE_DUMPFILES=YES.
- Dump files created on Oracle Database 11g releases with the Data Pump parameter VERSION=12 can only be imported on Oracle Database 12c Release 1 (12.1) and later.
Example
The following is an example of using the DUMPFILE
parameter:
> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,
exp2%U.dmp PARALLEL=3
The dump file exp1.dmp is written to the path associated with the directory object dpump_dir2, because dpump_dir2 was specified as part of the dump file name, and therefore overrides the directory object specified with the DIRECTORY parameter. Because all three parallel processes are given work to perform during this job, dump files named exp201.dmp and exp202.dmp are created, based on the specified substitution variable exp2%U.dmp. Because no directory is specified for them, they are written to the path associated with the directory object dpump_dir1, which was specified with the DIRECTORY parameter.
Related Topics
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.12 ENABLE_SECURE_ROLES
The Data Pump Export command-line utility ENABLE_SECURE_ROLES
parameter prevents inadvertent use of protected roles during exports.
Default: In Oracle Database 19c and later releases, the default value is NO
.
Purpose
Some Oracle roles require authorization. If you need to use these roles with Oracle Data Pump exports, then you must explicitly enable them by setting the ENABLE_SECURE_ROLES
parameter to YES
.
Syntax
ENABLE_SECURE_ROLES=[NO|YES]
- NO disables Oracle roles that require authorization.
- YES enables Oracle roles that require authorization.
Example
> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,
exp2%U.dmp ENABLE_SECURE_ROLES=YES
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.13 ENCRYPTION
The Data Pump Export command-line utility ENCRYPTION
parameter specifies whether to encrypt data before writing it to the dump file set.
Default: The default value depends upon the combination of encryption-related parameters that are used. To enable encryption, either the ENCRYPTION
or ENCRYPTION_PASSWORD
parameter, or both, must be specified.
If only the ENCRYPTION_PASSWORD
parameter is specified, then the ENCRYPTION
parameter defaults to ALL
.
If only the ENCRYPTION
parameter is specified and the Oracle encryption wallet is open, then the default mode is TRANSPARENT
. If only the ENCRYPTION
parameter is specified and the wallet is closed, then an error is returned.
If neither ENCRYPTION
nor ENCRYPTION_PASSWORD
is specified, then ENCRYPTION
defaults to NONE
.
Purpose
Specifies whether to encrypt data before writing it to the dump file set.
Syntax and Description
ENCRYPTION = [ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE]
- ALL enables encryption for all data and metadata in the export operation.
- DATA_ONLY specifies that only data is written to the dump file set in encrypted format.
- ENCRYPTED_COLUMNS_ONLY specifies that only encrypted columns are written to the dump file set in encrypted format. This option cannot be used with the ENCRYPTION_ALGORITHM parameter, because the columns already have an assigned encryption format and, by definition, a column can have only one form of encryption. To use the ENCRYPTED_COLUMNS_ONLY option, you must have Oracle Advanced Security Transparent Data Encryption (TDE) enabled. See Oracle Database Advanced Security Guide for more information about TDE.
- METADATA_ONLY specifies that only metadata is written to the dump file set in encrypted format.
- NONE specifies that no data is written to the dump file set in encrypted format.
SecureFiles Considerations for Encryption
If the data being exported includes SecureFiles that you want to be encrypted, then you must specify ENCRYPTION=ALL
to encrypt the entire dump file set. Encryption of the entire dump file set is the only way to achieve encryption security for SecureFiles during a Data Pump export operation. For more information about SecureFiles, see Oracle Database SecureFiles and Large Objects Developer’s Guide.
Oracle Database Vault Considerations for Encryption
When an export operation is started, Data Pump determines whether Oracle Database Vault is enabled. If it is, and dump file encryption has not been specified for the job, a warning message is returned to alert you that secure data is being written in an insecure manner (clear text) to the dump file set:
ORA-39327: Oracle Database Vault data is being stored unencrypted in dump file set
You can abort the current export operation and start a new one, specifying that you want the output dump file set to be encrypted.
Restrictions
-
To specify the
ALL
,DATA_ONLY
, orMETADATA_ONLY
options, theCOMPATIBLE
initialization parameter must be set to at least 11.0.0. -
This parameter is valid only in the Enterprise Edition of Oracle Database 11g or later.
-
Data Pump encryption features require that you have the Oracle Advanced Security option enabled. See Oracle Database Licensing Information for information about licensing requirements for the Oracle Advanced Security option.
Example
The following example performs an export operation in which only data is encrypted in the dump file:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp JOB_NAME=enc1 ENCRYPTION=data_only ENCRYPTION_PASSWORD=foobar
2.4.14 ENCRYPTION_ALGORITHM
The Data Pump Export command-line utility ENCRYPTION_ALGORITHM
parameter specifies which cryptographic algorithm should be used to perform the encryption.
Default: AES128
Purpose
Specifies which cryptographic algorithm should be used to perform the encryption.
Syntax and Description
ENCRYPTION_ALGORITHM = [AES128 | AES192 | AES256]
Restrictions
- To use this encryption feature, the COMPATIBLE initialization parameter must be set to at least 11.0.0.
- The ENCRYPTION_ALGORITHM parameter requires that you also specify either the ENCRYPTION or ENCRYPTION_PASSWORD parameter; otherwise an error is returned.
- The ENCRYPTION_ALGORITHM parameter cannot be used in conjunction with ENCRYPTION=ENCRYPTED_COLUMNS_ONLY, because columns that are already encrypted cannot have an additional encryption format assigned to them.
- This parameter is valid only in the Enterprise Edition of Oracle Database 11g or later.
- Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Licensing Information for information about licensing requirements for the Oracle Advanced Security option.
Example
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc3.dmp
ENCRYPTION_PASSWORD=foobar ENCRYPTION_ALGORITHM=AES128
2.4.15 ENCRYPTION_MODE
The Data Pump Export command-line utility ENCRYPTION_MODE
parameter specifies the type of security to use when encryption and decryption are performed.
Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION
parameter is specified and the Oracle encryption wallet is open, then the default mode is TRANSPARENT
. If only the ENCRYPTION
parameter is specified and the wallet is closed, then an error is returned.
If the ENCRYPTION_PASSWORD
parameter is specified and the wallet is open, then the default is DUAL
. If the ENCRYPTION_PASSWORD
parameter is specified and the wallet is closed, then the default is PASSWORD
.
Purpose
Specifies the type of security to use when encryption and decryption are performed.
Syntax and Description
ENCRYPTION_MODE = [DUAL | PASSWORD | TRANSPARENT]
DUAL
mode creates a dump file set that can later be imported either transparently or by specifying a password that was used when the dual-mode encrypted dump file set was created. When you later import the dump file set created in DUAL
mode, you can use either the wallet or the password that was specified with the ENCRYPTION_PASSWORD
parameter. DUAL
mode is best suited for cases in which the dump file set will be imported on-site using the wallet, but which may also need to be imported offsite where the wallet is not available.
PASSWORD
mode requires that you provide a password when creating encrypted dump file sets. You will need to provide the same password when you import the dump file set. PASSWORD
mode requires that you also specify the ENCRYPTION_PASSWORD
parameter. The PASSWORD
mode is best suited for cases in which the dump file set will be imported into a different or remote database, but which must remain secure in transit.
TRANSPARENT
mode enables you to create an encrypted dump file set without any intervention from a database administrator (DBA), provided the required wallet is available. Therefore, the ENCRYPTION_PASSWORD
parameter is not required. The parameter will, in fact, cause an error if it is used in TRANSPARENT
mode. This encryption mode is best suited for cases in which the dump file set is imported into the same database from which it was exported.
Restrictions
- To use DUAL or TRANSPARENT mode, the COMPATIBLE initialization parameter must be set to at least 11.0.0.
- When you use the ENCRYPTION_MODE parameter, you must also use either the ENCRYPTION or ENCRYPTION_PASSWORD parameter. Otherwise, an error is returned.
- When you use ENCRYPTION=ENCRYPTED_COLUMNS_ONLY, you cannot use the ENCRYPTION_MODE parameter. Otherwise, an error is returned.
- This parameter is valid only in the Enterprise Edition of Oracle Database 11g or later.
- Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Licensing Information for information about licensing requirements for the Oracle Advanced Security option.
Example
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc4.dmp
ENCRYPTION=all ENCRYPTION_PASSWORD=secretwords
ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=DUAL
Related Topics
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.16 ENCRYPTION_PASSWORD
The Data Pump Export command-line utility ENCRYPTION_PASSWORD
parameter specifies a password for encrypting encrypted column data, metadata, or table data in the export dump file. This parameter prevents unauthorized access to an encrypted dump file set.
Default: There is no default; the value is user-provided.
Purpose
Specifies a password for encrypting encrypted column data, metadata, or table data in the export dump file. Using this parameter prevents unauthorized access to an encrypted dump file set.
Note:
Data Pump encryption functionality changed as of Oracle Database 11g release 1 (11.1). Before release 11.1, the ENCRYPTION_PASSWORD
parameter applied only to encrypted columns. However, as of release 11.1, the new ENCRYPTION
parameter provides options for encrypting other types of data. As a result of this change, if you now specify ENCRYPTION_PASSWORD
without also specifying ENCRYPTION
and a specific option, then all data written to the dump file is encrypted (equivalent to specifying ENCRYPTION=ALL
). To re-encrypt only encrypted columns, you must now specify ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
in addition to ENCRYPTION_PASSWORD
.
Syntax and Description
ENCRYPTION_PASSWORD = password
The password
value that is supplied specifies a key for re-encrypting encrypted table columns, metadata, or table data so that they are not written as clear text in the dump file set. If the export operation involves encrypted table columns, but an encryption password is not supplied, then the encrypted columns are written to the dump file set as clear text and a warning is issued.
The password that you enter is echoed to the screen. If you do not want the password shown on the screen as you enter it, then use the ENCRYPTION_PWD_PROMPT
parameter.
The maximum length allowed for an encryption password is usually 128 bytes. However, the limit is 30 bytes if ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
and either the VERSION
parameter or database compatibility is set to less than 12.2.
For export operations, this parameter is required if the ENCRYPTION_MODE
parameter is set to either PASSWORD
or DUAL
.
Note:
There is no connection or dependency between the key specified with the Data Pump ENCRYPTION_PASSWORD
parameter and the key specified with the ENCRYPT
keyword when the table with encrypted columns was initially created. For example, suppose that a table is created as follows, with an encrypted column whose key is xyz
:
CREATE TABLE emp (col1 VARCHAR2(256) ENCRYPT IDENTIFIED BY "xyz");
When you export the emp
table, you can supply any arbitrary value for ENCRYPTION_PASSWORD
. It does not have to be xyz
.
Restrictions
- This parameter is valid only in the Enterprise Edition of Oracle Database 11g or later.
- The ENCRYPTION_PASSWORD parameter is required for the transport of encrypted tablespaces and tablespaces containing tables with encrypted columns in a full transportable export.
- Data Pump encryption features require that the Oracle Advanced Security option be enabled. See Oracle Database Licensing Information for information about licensing requirements for the Oracle Advanced Security option.
- If ENCRYPTION_PASSWORD is specified but ENCRYPTION_MODE is not specified, then it is not necessary to have Oracle Advanced Security Transparent Data Encryption enabled, because ENCRYPTION_MODE defaults to PASSWORD.
- If the requested encryption mode is TRANSPARENT, then the ENCRYPTION_PASSWORD parameter is not valid.
- If ENCRYPTION_MODE is set to DUAL, then to use the ENCRYPTION_PASSWORD parameter, you must have Oracle Advanced Security Transparent Data Encryption (TDE) enabled. See Oracle Database Advanced Security Guide for more information about TDE.
- For network exports, the ENCRYPTION_PASSWORD parameter in conjunction with ENCRYPTION=ENCRYPTED_COLUMNS_ONLY is not supported with user-defined external tables that have encrypted columns. The table is skipped, and an error message is displayed, but the job continues.
- Encryption attributes for all columns must match between the exported table definition and the target table. For example, suppose you have a table, EMP, and one of its columns is named EMPNO. Both of the following scenarios result in an error, because the encryption attribute for the EMPNO column in the source table does not match the encryption attribute for the EMPNO column in the target table:
  - The EMP table is exported with the EMPNO column being encrypted, but before importing the table you remove the encryption attribute from the EMPNO column.
  - The EMP table is exported without the EMPNO column being encrypted, but before importing the table you enable encryption on the EMPNO column.
Example
In the following example, an encryption password, 123456
, is assigned to the dump file, dpcd2be1.dmp
.
> expdp hr TABLES=employee_s_encrypt DIRECTORY=dpump_dir1
DUMPFILE=dpcd2be1.dmp ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
ENCRYPTION_PASSWORD=123456
Encrypted columns in the employee_s_encrypt
table are not written as clear text in the dpcd2be1.dmp
dump file. Afterward, if you want to import the dpcd2be1.dmp
file created by this example, then you must supply the same encryption password.
2.4.17 ENCRYPTION_PWD_PROMPT
The Data Pump Export command-line utility ENCRYPTION_PWD_PROMPT parameter specifies whether Data Pump should prompt you for the encryption password.
Default: NO
Purpose
Specifies whether Data Pump should prompt you for the encryption password.
Syntax and Description
ENCRYPTION_PWD_PROMPT=[YES | NO]
Specify ENCRYPTION_PWD_PROMPT=YES on the command line to instruct Data Pump to prompt you for the encryption password, rather than entering it on the command line with the ENCRYPTION_PASSWORD parameter. The advantage of the prompt is that the encryption password is not echoed to the screen as you enter it, whereas a password entered on the command line with the ENCRYPTION_PASSWORD parameter appears in plain text.
The encryption password that you enter at the prompt is subject to the same criteria described for the ENCRYPTION_PASSWORD
parameter.
If you specify an encryption password on the export operation, you must also supply it on the import operation.
Restrictions
- Concurrent use of the ENCRYPTION_PWD_PROMPT and ENCRYPTION_PASSWORD parameters is prohibited.
Example
The following syntax example shows Data Pump first prompting for the user password and then for the encryption password.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp ENCRYPTION_PWD_PROMPT=YES
.
.
.
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
Encryption Password:
Starting "HR"."SYS_EXPORT_SCHEMA_01":  hr/******** directory=dpump_dir1 dumpfile=hr.dmp encryption_pwd_prompt=Y
.
.
.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.18 ESTIMATE
The Data Pump Export command-line utility ESTIMATE
parameter specifies the method that Export uses to estimate how much disk space each table in the export job will consume (in bytes).
Default: BLOCKS
Purpose
Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes). The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.
Syntax and Description
ESTIMATE=[BLOCKS | STATISTICS]
- BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the source objects by the appropriate block sizes.
- STATISTICS - The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently. (Table analysis can be done with either the SQL ANALYZE statement or the DBMS_STATS PL/SQL package.)
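As a brief sketch of that preparation step, statistics for a single table can be refreshed with the DBMS_STATS package before the export is started; the schema and table names below are illustrative:

```sql
-- Gather fresh optimizer statistics for HR.EMPLOYEES so that
-- ESTIMATE=STATISTICS produces an up-to-date space estimate.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
END;
/
```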
Restrictions
- If the Data Pump export job involves compressed tables, then when you use ESTIMATE=BLOCKS, the default size estimation given for the compressed table is inaccurate. This inaccuracy results because the size estimate does not reflect that the data was stored in a compressed form. To obtain a more accurate size estimate for compressed tables, use ESTIMATE=STATISTICS.
- If either the QUERY or REMAP_DATA parameter is used, then the estimate can also be inaccurate.
Example
The following example shows a use of the ESTIMATE
parameter in which the estimate is calculated using statistics for the employees
table:
> expdp hr TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1
DUMPFILE=estimate_stat.dmp
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.19 ESTIMATE_ONLY
The Data Pump Export command-line utility ESTIMATE_ONLY
parameter instructs Export to estimate the space that a job consumes, without actually performing the export operation.
Default: NO
Purpose
Instructs Export to estimate the space that a job consumes, without actually performing the export operation.
Syntax and Description
ESTIMATE_ONLY=[YES | NO]
If ESTIMATE_ONLY=YES
, then Export estimates the space that would be consumed, but quits without actually performing the export operation.
Restrictions
- The ESTIMATE_ONLY parameter cannot be used in conjunction with the QUERY parameter.
Example
The following shows an example of using the ESTIMATE_ONLY
parameter to determine how much space an export of the HR
schema requires.
> expdp hr ESTIMATE_ONLY=YES NOLOGFILE=YES SCHEMAS=HR
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.20 EXCLUDE
The Data Pump Export command-line utility EXCLUDE
parameter enables you to filter the metadata that is exported by specifying objects and object types that you want to exclude from the export operation.
Default: There is no default
Purpose
Enables you to filter the metadata that is exported by specifying objects and object types that you want to exclude from the export operation.
Syntax and Description
EXCLUDE=object_type[:name_clause] [, ...]
The object_type
specifies the type of object that you want to exclude. To see a list of valid values for object_type
, query the following views: DATABASE_EXPORT_OBJECTS
for full mode, SCHEMA_EXPORT_OBJECTS
for schema mode, and TABLE_EXPORT_OBJECTS
for table and tablespace mode. The values listed in the OBJECT_PATH
column are the valid object types.
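For example, a query along the following lines lists the valid object types for a schema-mode export; the LIKE filter shown is illustrative:

```sql
-- List object types that can be filtered in schema mode.
-- The OBJECT_PATH values returned are valid for EXCLUDE.
SELECT object_path
  FROM schema_export_objects
 WHERE object_path LIKE '%TABLE%'
 ORDER BY object_path;
```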
All object types for the given mode of export are included in the export, except object types specified in an EXCLUDE
statement. If an object is excluded, then all dependent objects are also excluded. For example, excluding a table also excludes all indexes and triggers on the table.
The name_clause
is optional. Using this parameter enables selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of that type. It consists of a SQL operator, and the values against which you want to compare the object names of the specified type. The name_clause
applies only to object types whose instances have names (for example, it is applicable to TABLE
, but not to GRANT
). It must be separated from the object type with a colon, and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings. For example, you can set EXCLUDE=INDEX:"LIKE 'EMP%'"
to exclude all indexes whose names start with EMP
.
The name that you supply for the name_clause
must exactly match, including upper and lower casing, an existing object in the database. For example, if the name_clause
you supply is for a table named EMPLOYEES
, then there must be an existing table named EMPLOYEES
using all upper case. If you supplied the name_clause
as Employees
or employees
or any other variation that does not match the existing table, then the table is not found.
If no name_clause
is provided, then all objects of the specified type are excluded.
You can specify more than one EXCLUDE
statement.
Depending on your operating system, the use of quotation marks when you specify a value for this parameter can also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that otherwise can be needed on the command line.
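As a minimal sketch of that recommendation, the quoted name clause can be written to a parameter file so that the operating system shell never interprets the quotation marks; the file name exclude.par and the filter below are illustrative:

```shell
# Write the EXCLUDE filter to a parameter file; because expdp reads the
# file directly, the double and single quotation marks need no shell escapes.
cat > exclude.par <<'EOF'
DIRECTORY=dpump_dir1
DUMPFILE=hr_exclude.dmp
EXCLUDE=INDEX:"LIKE 'EMP%'"
EOF
# The export is then started with:
#   expdp hr PARFILE=exclude.par
```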
If the object_type
you specify is CONSTRAINT
, GRANT
, or USER
, then be aware of the effects, as described in the following paragraphs.
Excluding Constraints
The following constraints cannot be explicitly excluded:
- Constraints needed for the table to be created and loaded successfully; for example, primary key constraints for index-organized tables, or REF SCOPE and WITH ROWID constraints for tables with REF columns
The following EXCLUDE statements are interpreted as follows:
- EXCLUDE=CONSTRAINT excludes all constraints, except for any constraints needed for successful table creation and loading.
- EXCLUDE=REF_CONSTRAINT excludes referential integrity (foreign key) constraints.
Excluding Grants and Users
Specifying EXCLUDE=GRANT
excludes object grants on all object types and system privilege grants.
Specifying EXCLUDE=USER
excludes only the definitions of users, not the objects contained within user schemas.
To exclude a specific user and all objects of that user, specify a command such as the following, where hr
is the schema name of the user you want to exclude.
expdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='HR'"
In this example, the export mode FULL
is specified. If no mode is specified, then the default mode is used. The default mode is SCHEMAS
. But if the default mode is used, then in this example, the default causes an error, because if SCHEMAS
is used, then the command indicates that you want the schema both exported and excluded at the same time.
If you try to exclude a user by using a statement such as EXCLUDE=USER:"='HR'"
, then only the information used in CREATE USER hr
DDL statements is excluded, and you can obtain unexpected results.
Restrictions
- The EXCLUDE and INCLUDE parameters are mutually exclusive.
Example
The following is an example of using the EXCLUDE
statement.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,
PACKAGE, FUNCTION
This example results in a schema-mode export (the default export mode) in which all the hr
schema is exported except its views, packages, and functions.
2.4.21 FILESIZE
The Data Pump Export command-line utility FILESIZE
parameter specifies the maximum size of each dump file.
Default: 0
(equivalent to the maximum size of 16 terabytes)
Purpose
Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, then that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable or if more dump files have been added to the job.
Syntax and Description
FILESIZE=integer[B | KB | MB | GB | TB]
The integer
can be immediately followed (do not insert a space) by B
, KB
, MB
, GB
, or TB
(indicating bytes, kilobytes, megabytes, gigabytes, and terabytes respectively). Bytes is the default. The actual size of the resulting file can be rounded down slightly to match the size of the internal blocks used in dump files.
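As an informal illustration of the unit suffixes, the following helper (not part of Data Pump; it assumes the binary multiples of 1024 implied by the suffixes) converts a FILESIZE-style value into bytes:

```shell
# Convert a FILESIZE-style value such as 3MB into bytes.
# Bytes is the default when no suffix is given.
filesize_bytes() {
  local value=$1
  local num=${value%%[!0-9]*}    # leading integer part
  local unit=${value#"$num"}     # optional B/KB/MB/GB/TB suffix
  case "$unit" in
    ""|B) echo "$num" ;;
    KB)   echo $(( num * 1024 )) ;;
    MB)   echo $(( num * 1024 * 1024 )) ;;
    GB)   echo $(( num * 1024 * 1024 * 1024 )) ;;
    TB)   echo $(( num * 1024 * 1024 * 1024 * 1024 )) ;;
    *)    echo "unrecognized suffix: $unit" >&2; return 1 ;;
  esac
}
filesize_bytes 3MB    # prints 3145728
```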
Restrictions
- The minimum size for a file is 10 times the default Data Pump block size, which is 4 kilobytes.
- The maximum size for a file is 16 terabytes.
Example
The following example shows setting the size of the dump file to 3 megabytes:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3MB
In this scenario, if the 3 megabytes allocated is not sufficient to hold all the exported data, then the following error is displayed and the job stops:
ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes
The actual number of bytes that cannot be allocated can vary. Also, this number does not represent the amount of space required to complete the entire export operation. It indicates only the size of the current object that was being exported when the job ran out of dump file space. You can correct this problem by first attaching to the stopped job, adding one or more files using the ADD_FILE
command, and then restarting the operation.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.22 FLASHBACK_SCN
The Data Pump Export command-line utility FLASHBACK_SCN
parameter specifies the system change number (SCN) that Export uses to enable the Flashback Query utility.
Default: There is no default
Purpose
Specifies the system change number (SCN) that Export will use to enable the Flashback Query utility.
Syntax and Description
FLASHBACK_SCN=scn_value
The export operation is performed with data that is consistent up to the specified SCN. If the NETWORK_LINK
parameter is specified, then the SCN refers to the SCN of the source database.
In Oracle Database 12c release 2 (12.2) and later, the SCN value can be a big SCN (8 bytes). You can also specify a big SCN when you create a dump file for an earlier version that does not support big SCNs, because actual SCN values are not moved. See the following restrictions for more information about using big SCNs.
Restrictions
- FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.
- The FLASHBACK_SCN parameter pertains only to the Flashback Query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.
- You cannot specify a big SCN for a network export or network import from a version that does not support big SCNs.
Example
The following example assumes that SCN 384632 exists in the database. It exports the hr schema up to that SCN:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
Note:
If you are on a logical standby system and using a network link to access the logical standby primary, then the FLASHBACK_SCN
parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.23 FLASHBACK_TIME
The Data Pump Export command-line utility FLASHBACK_TIME
parameter finds the SCN that most closely matches the specified time. This SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent up to this SCN.
Default: There is no default
Purpose
Finds the SCN that most closely matches the specified time. This SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent up to this SCN.
Syntax and Description
FLASHBACK_TIME="TO_TIMESTAMP(time-value)"
Because the TO_TIMESTAMP
value is enclosed in quotation marks, it is best to put this parameter in a parameter file.
Alternatively, you can enter the following parameter setting. This setting initiates a consistent export that is based on the current system time:
FLASHBACK_TIME=systimestamp
Restrictions
- FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.
- The FLASHBACK_TIME parameter pertains only to the Flashback Query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.
Example
You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME
procedure accepts. For example, suppose you have a parameter file, flashback.par
, with the following contents:
DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('27-10-2012 13:16:00', 'DD-MM-YYYY HH24:MI:SS')"
You can then issue the following command:
> expdp hr PARFILE=flashback.par
The export operation is performed with data that is consistent with the SCN that most closely matches the specified time.
Note:
If you are on a logical standby system and using a network link to access the logical standby primary, then the FLASHBACK_SCN
parameter is ignored, because the logical standby selects the SCNs. See Oracle Data Guard Concepts and Administration for information about logical standby databases.
See Oracle Database Development Guide for information about using Flashback Query.
2.4.24 FULL
The Export command-line FULL parameter specifies that you want to perform a full database mode export.
Default: NO
Purpose
Specifies that you want to perform a full database mode export.
Syntax and Description
FULL=[YES | NO]
FULL=YES
indicates that all data and metadata are to be exported. To perform a full export, you must have the DATAPUMP_EXP_FULL_DATABASE
role.
Filtering can restrict what is exported using this export mode.
You can perform a full mode export using the transportable option (TRANSPORTABLE=ALWAYS
). This is referred to as a full transportable export, which exports all objects and data necessary to create a complete copy of the database. See Full Mode.
Note:
Be aware that when you later import a dump file that was created by a full-mode export, the import operation attempts to copy the password for the SYS
account from the source database. This sometimes fails (for example, if the password is in a shared password file). If it does fail, then after the import completes, you must set the password for the SYS
account at the target database to a password of your choice.
Restrictions
- To use the FULL parameter in conjunction with TRANSPORTABLE (a full transportable export), either the Data Pump VERSION parameter must be set to at least 12.0, or the COMPATIBLE database initialization parameter must be set to at least 12.0.
- A full export does not, by default, export system schemas that contain Oracle-managed data and metadata. Examples of system schemas that are not exported by default include SYS, ORDSYS, and MDSYS.
- Grants on objects owned by the SYS schema are never exported.
- A full export operation exports objects from only one database edition; by default it exports the current edition, but you can use the Export SOURCE_EDITION parameter to specify a different edition.
- If you are exporting data that is protected by a realm, then you must have authorization for that realm.
- The Automatic Workload Repository (AWR) is not moved in a full database export and import operation. (See Oracle Database Performance Tuning Guide for information about using Data Pump to move AWR snapshots.)
- The XDB repository is not moved in a full database export and import operation. User-created XML schemas are moved.
Example
The following is an example of using the FULL parameter. The dump file, expfull.dmp, is written to the dpump_dir2 directory.
> expdp hr DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=YES NOLOGFILE=YES
To see a detailed example of how to perform a full transportable export, see Oracle Database Administrator’s Guide. For information about configuring realms, see Oracle Database Vault Administrator’s Guide.
2.4.25 HELP
The Data Pump Export command-line utility HELP
parameter displays online help for the Export utility.
Default: NO
Purpose
Displays online help for the Export utility.
Syntax and Description
HELP = [YES | NO]
If HELP=YES
is specified, then Export displays a summary of all Export command-line parameters and interactive commands.
Example
> expdp HELP = YES
This example displays a brief description of all Export parameters and commands.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.26 INCLUDE
The Data Pump Export command-line utility INCLUDE parameter enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.
Default: There is no default
Purpose
Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.
Syntax and Description
INCLUDE = object_type[:name_clause] [, ...]
The object_type
specifies the type of object to be included. To see a list of valid values for object_type
, query the following views: DATABASE_EXPORT_OBJECTS
for full mode, SCHEMA_EXPORT_OBJECTS
for schema mode, and TABLE_EXPORT_OBJECTS
for table and tablespace mode. The values listed in the OBJECT_PATH
column are the valid object types.
Only object types explicitly specified in INCLUDE
statements, and their dependent objects, are exported. No other object types, including the schema definition information that is normally part of a schema-mode export when you have the DATAPUMP_EXP_FULL_DATABASE
role, are exported.
The name_clause
is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name_clause
applies only to object types whose instances have names (for example, it is applicable to TABLE
, but not to GRANT
). It must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings.
The name that you supply for the name_clause
must exactly match an existing object in the database, including uppercase and lowercase letters. For example, if the name_clause
you supply is for a table named EMPLOYEES
, then there must be an existing table named EMPLOYEES
using all upper-case letters. If the name_clause
is provided as Employees
or employees
or any other variation, then the table is not found.
Depending on your operating system, the use of quotation marks when you specify a value for this parameter can also require that you use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that you otherwise need to enter on the command line.
For example, suppose you have a parameter file named hr.par
with the following content:
SCHEMAS=HR
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
INCLUDE=PROCEDURE
INCLUDE=INDEX:"LIKE 'EMP%'"
You can then use the hr.par
file to start an export operation, without having to enter any other parameters on the command line. The EMPLOYEES
and DEPARTMENTS
tables, all procedures, and all indexes whose names begin with EMP are included in the export.
> expdp hr PARFILE=hr.par
Including Constraints
If the object_type that you specify is a CONSTRAINT, then be aware of the effects, as described in the following paragraphs.
You cannot explicitly include the following constraints:
- NOT NULL constraints
- Constraints that are required for the table to be created and loaded successfully. For example, you cannot include primary key constraints for index-organized tables, or REF SCOPE and WITH ROWID constraints for tables with REF columns.
The following INCLUDE statements are interpreted as follows:
- INCLUDE=CONSTRAINT includes all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.
- INCLUDE=REF_CONSTRAINT includes referential integrity (foreign key) constraints.
Restrictions
- The INCLUDE and EXCLUDE parameters are mutually exclusive.
- Grants on objects owned by the SYS schema are never exported.
Example
The following example performs an export of all tables (and their dependent objects) in the hr
schema:
> expdp hr INCLUDE=TABLE DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=YES
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.27 JOB_NAME
The Data Pump Export command-line utility JOB_NAME
parameter identifies the export job in subsequent actions, such as when using ATTACH
to attach to a job, or to identify a job using DBA_DATAPUMP_JOBS
or USER_DATAPUMP_JOBS
views.
Default: system-generated name of the form SYS_EXPORT_<mode>_NN
Purpose
Used to identify the export job in subsequent actions, such as when the ATTACH
parameter is used to attach to a job, or to identify the job using the DBA_DATAPUMP_JOBS
or USER_DATAPUMP_JOBS
views.
Syntax and Description
JOB_NAME=jobname_string
The jobname_string
specifies a name of up to 128 bytes for this export job. The bytes must represent printable characters and spaces. If spaces are included, then the name must be enclosed in single quotation marks (for example, 'Thursday Export'). The job name is implicitly qualified by the schema of the user performing the export operation. The job name is used as the name of the master table, which controls the export job.
The default job name is system-generated in the form SYS_EXPORT_mode_NN
, where NN
expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_EXPORT_TABLESPACE_02'
.
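The naming scheme can be illustrated with a short shell sketch (illustrative only; Data Pump generates these names internally):

```shell
# Compose a default-style job name: SYS_EXPORT_<mode>_NN, where NN is a
# two-digit counter that starts at 01.
mode=TABLESPACE
nn=2
printf 'SYS_EXPORT_%s_%02d\n' "$mode" "$nn"   # prints SYS_EXPORT_TABLESPACE_02
```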
Example
The following example shows an export operation that is assigned a job name of exp_job
:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job
NOLOGFILE=YES
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.28 KEEP_MASTER
The Data Pump Export command-line utility KEEP_MASTER parameter indicates whether the master table should be deleted or retained at the end of a Data Pump job that completes successfully. The master table is automatically retained for jobs that do not complete successfully.
Default: NO
Purpose
Indicates whether the master table should be deleted or retained at the end of a Data Pump job that completes successfully. The master table is automatically retained for jobs that do not complete successfully.
Syntax and Description
KEEP_MASTER=[YES | NO]
Restrictions
-
None
Example
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr KEEP_MASTER=YES
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.29 LOGFILE
The Data Pump Export command-line utility LOGFILE parameter specifies the name, and optionally, a directory, for the log file of the export job.
Default: export
.log
Purpose
Specifies the name, and optionally, a directory, for the log file of the export job.
Syntax and Description
LOGFILE=[directory_object:]file_name
You can specify a database directory_object
previously established by the DBA, assuming that you have access to it. This setting overrides the directory object specified with the DIRECTORY
parameter.
The file_name
specifies a name for the log file. The default behavior is to create a file named export.log
in the directory referenced by the directory object specified in the DIRECTORY
parameter.
All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the STATUS
command in interactive mode.)
A log file is always created for an export job unless the NOLOGFILE
parameter is specified. As with the dump file set, the log file is relative to the server and not the client.
An existing file matching the file name is overwritten.
Restrictions
- To perform a Data Pump Export using Oracle Automatic Storage Management (Oracle ASM), you must specify a LOGFILE parameter that includes a directory object that does not include the Oracle ASM + notation. That is, the log file must be written to a disk file, and not written into the Oracle ASM storage. Alternatively, you can specify NOLOGFILE=YES. However, if you specify NOLOGFILE=YES, then that setting prevents the writing of the log file.
Example
The following example shows how to specify a log file name when you do not want to use the default:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_export.log
Note:
Data Pump Export writes the log file using the database character set. If your client NLS_LANG
environment setting sets up a different client character set from the database character set, then it is possible that table names can be different in the log file than they are when displayed on the client output screen.
2.4.30 LOGTIME
The Data Pump Export command-line utility LOGTIME
parameter specifies that messages displayed during export operations are timestamped.
Default: No timestamps are recorded
Purpose
Specifies that messages displayed during export operations are timestamped. You can use the timestamps to figure out the elapsed time between different phases of a Data Pump operation. Such information can be helpful in diagnosing performance problems and estimating the timing of future similar operations.
Syntax and Description
LOGTIME=[NONE | STATUS | LOGFILE | ALL]
The available options are defined as follows:
- NONE: No timestamps on status or log file messages (same as default)
- STATUS: Timestamps on status messages only
- LOGFILE: Timestamps on log file messages only
- ALL: Timestamps on both status and log file messages
Restrictions
-
None
Example
The following example records timestamps for all status and log file messages that are displayed during the export operation:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr LOGTIME=ALL
The output looks similar to the following:
10-JUL-12 10:12:22.300: Starting "HR"."SYS_EXPORT_SCHEMA_01": hr/********
directory=dpump_dir1 dumpfile=expdat.dmp schemas=hr logtime=all
10-JUL-12 10:12:22.915: Estimate in progress using BLOCKS method...
10-JUL-12 10:12:24.422: Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
10-JUL-12 10:12:24.498: Total estimation using BLOCKS method: 128 KB
10-JUL-12 10:12:24.822: Processing object type SCHEMA_EXPORT/USER
10-JUL-12 10:12:24.902: Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
10-JUL-12 10:12:24.926: Processing object type SCHEMA_EXPORT/ROLE_GRANT
10-JUL-12 10:12:24.948: Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
10-JUL-12 10:12:24.967: Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
10-JUL-12 10:12:25.747: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
10-JUL-12 10:12:32.762: Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
10-JUL-12 10:12:46.631: Processing object type SCHEMA_EXPORT/TABLE/TABLE
10-JUL-12 10:12:58.007: Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
10-JUL-12 10:12:58.106: Processing object type SCHEMA_EXPORT/TABLE/COMMENT
10-JUL-12 10:12:58.516: Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
10-JUL-12 10:12:58.630: Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
10-JUL-12 10:12:59.365: Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
10-JUL-12 10:13:01.066: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
10-JUL-12 10:13:01.143: Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
10-JUL-12 10:13:02.503: Processing object type SCHEMA_EXPORT/VIEW/VIEW
10-JUL-12 10:13:03.288: Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
10-JUL-12 10:13:04.067: Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
10-JUL-12 10:13:05.251: Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
10-JUL-12 10:13:06.172: . . exported "HR"."EMPLOYEES" 17.05 KB 107 rows
10-JUL-12 10:13:06.658: . . exported "HR"."COUNTRIES" 6.429 KB 25 rows
10-JUL-12 10:13:06.691: . . exported "HR"."DEPARTMENTS" 7.093 KB 27 rows
10-JUL-12 10:13:06.723: . . exported "HR"."JOBS" 7.078 KB 19 rows
10-JUL-12 10:13:06.758: . . exported "HR"."JOB_HISTORY" 7.164 KB 10 rows
10-JUL-12 10:13:06.794: . . exported "HR"."LOCATIONS" 8.398 KB 23 rows
10-JUL-12 10:13:06.824: . . exported "HR"."REGIONS" 5.515 KB 4 rows
10-JUL-12 10:13:07.500: Master table "HR"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
10-JUL-12 10:13:07.503: ******************************************************************************
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.31 METRICS
The Data Pump Export command-line utility METRICS
parameter indicates whether you want additional information about the job reported to the Data Pump log file.
Default: NO
Purpose
Indicates whether additional information about the job should be reported to the Data Pump log file.
Syntax and Description
METRICS=[YES | NO]
When METRICS=YES
is used, the number of objects and the elapsed time are recorded in the Data Pump log file.
Restrictions
-
None
Example
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr METRICS=YES
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.32 NETWORK_LINK
The Data Pump Export command-line utility NETWORK_LINK
parameter enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
Default: There is no default
Purpose
Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
Syntax and Description
NETWORK_LINK=source_database_link
The NETWORK_LINK
parameter initiates an export using a database link. This export setting means that the system to which the expdp
client is connected contacts the source database referenced by the source_database_link
, retrieves data from it, and writes the data to a dump file set back on the connected system.
The source_database_link
provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, then you or your DBA must create one using the SQL CREATE DATABASE LINK
statement.
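For illustration, such a link could be created with a statement along the following lines (a sketch only; the link name, credentials, and connect string are hypothetical and must match your environment):

```sql
-- Hypothetical public fixed-user database link to the source database.
-- Replace the user, password, and service name with real values.
CREATE PUBLIC DATABASE LINK source_database_link
  CONNECT TO hr IDENTIFIED BY hr_password
  USING 'source_service';
```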
If the source database is read-only, then the user on the source database must have a locally managed temporary tablespace assigned as the default temporary tablespace. Otherwise, the job will fail.
The following types of database links are supported for use with Data Pump Export:
- Public fixed user
- Public connected user
- Public shared user (only when used by link owner)
- Private shared user (only when used by link owner)
- Private fixed user (only when used by link owner)
Caution:
If an export operation is performed over an unencrypted network link, then all data is exported as clear text, even if it is encrypted in the database. See Oracle Database Security Guide for more information about network security.
Restrictions
- The following types of database links are not supported for use with Data Pump Export:
- Private connected user
- Current user
- When operating across a network link, Data Pump requires that the source and target databases differ by no more than two versions. For example, if one database is Oracle Database 12c, then the other database must be 12c, 11g, or 10g. Note that Data Pump checks only the major version number (for example, 10g, 11g, 12c), not specific release numbers (for example, 12.1, 12.2, 11.1, 11.2, 10.1, or 10.2).
- When transporting a database over the network using full transportable export, auditing cannot be enabled for tables stored in an administrative tablespace (such as SYSTEM and SYSAUX) if the audit trail information itself is stored in a user-defined tablespace.
Example
The following is a syntax example of using the NETWORK_LINK
parameter. Replace the variable source_database_link
with the name of a valid database link that must already exist.
> expdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
DUMPFILE=network_export.dmp LOGFILE=network_export.log
See Also:
-
Oracle Database Administrator’s Guide for more information about database links
-
Oracle Database SQL Language Reference for more information about the
CREATE DATABASE LINK
statement -
Oracle Database Administrator’s Guide for more information about locally managed tablespaces
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.33 NOLOGFILE
The Data Pump Export command-line utility NOLOGFILE parameter specifies whether to suppress creation of a log file.
Default: NO
Purpose
Specifies whether to suppress creation of a log file.
Syntax and Description
NOLOGFILE=[YES | NO]
Specify NOLOGFILE=YES
to suppress the default behavior of creating a log file. Progress and error information is still written to the standard output device of any attached clients, including the client that started the original export operation. If there are no clients attached to a running job, and you specify NOLOGFILE=YES
, then you run the risk of losing important progress and error information.
Example
The following is an example of using the NOLOGFILE
parameter:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp NOLOGFILE=YES
This command results in a schema-mode export (the default), in which no log file is written.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.34 PARALLEL
The Export command-line utility PARALLEL parameter specifies the maximum number of processes of active execution operating on behalf of the export job.
Default: 1
Purpose
Specifies the maximum number of processes of active execution operating on behalf of the export job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process and worker processes acting as query coordinators in parallel query operations do not count toward this total.
This parameter enables you to make trade-offs between resource consumption and elapsed time.
Syntax and Description
PARALLEL=integer
The value that you specify for integer
should be less than, or equal to, the number of files in the dump file set (or you should specify either the %U
or %L
substitution variables in the dump file specifications). Because each active worker process or I/O server process writes exclusively to one file at a time, an insufficient number of files can have adverse effects. For example, some of the worker processes can be idle while waiting for files, thereby degrading the overall performance of the job. More importantly, if any member of a cooperating group of parallel I/O server processes cannot obtain a file for output, then the export operation is stopped with an ORA-39095
error. Both situations can be corrected by attaching to the job using the Data Pump Export utility, adding more files using the ADD_FILE
command while in interactive mode, and in the case of a stopped job, restarting the job.
To increase or decrease the value of PARALLEL
during job execution, use interactive-command mode. Decreasing parallelism does not result in fewer worker processes associated with the job; it decreases the number of worker processes that are executing at any given time. Also, any ongoing work must reach an orderly completion point before the decrease takes effect. Therefore, it may take a while to see any effect from decreasing the value. Idle workers are not deleted until the job exits.
If there is work that can be performed in parallel, then increasing the parallelism takes effect immediately.
Using PARALLEL During An Export In An Oracle RAC Environment
In an Oracle Real Application Clusters (Oracle RAC) environment, if an export operation has PARALLEL=1
, then all Data Pump processes reside on the instance where the job is started. Therefore, the directory object can point to local storage for that instance.
If the export operation has PARALLEL set to a value greater than 1, then Data Pump processes can reside on instances other than the one where the job was started. Therefore, the directory object must point to shared storage that is accessible by all Oracle RAC instances.
Restrictions
-
This parameter is valid only in the Enterprise Edition of Oracle Database 11g or later.
-
To export a table or table partition in parallel (using PQ slaves), you must have the
DATAPUMP_EXP_FULL_DATABASE
role.
Example
The following is an example of using the PARALLEL
parameter:
> expdp hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log
JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
This results in a schema-mode export (the default) of the hr
schema, in which up to four files can be created in the path pointed to by the directory object, dpump_dir1
.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.35 PARFILE
The Data Pump Export command-line utility PARFILE parameter specifies the name of an export parameter file.
Default: There is no default
Purpose
Specifies the name of an export parameter file.
Syntax and Description
PARFILE=[directory_path]file_name
A parameter file enables you to specify Data Pump parameters within a file. You can then specify that file on the command line, instead of entering all of the individual commands. Using a parameter file can be useful if you use the same parameter combination many times. The use of parameter files is also highly recommended when you use parameters whose values require the use of quotation marks.
You do not specify a directory object for the parameter file, because the parameter file is opened and read by the expdp
client, unlike dump files, log files, and SQL files, which are created and written by the server. The default location of the parameter file is the user's current directory.
Within a parameter file, a comma is implicit at every newline character so you do not have to enter commas at the end of each line. If you have a long line that wraps, such as a long table name, then enter the backslash continuation character (\
) at the end of the current line to continue onto the next line.
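As an illustration, a hypothetical parameter file might use the continuation character to split a long parameter value across lines (the file name and table names here are examples only, not from the source):

```
DIRECTORY=dpump_dir1
DUMPFILE=long_list.dmp
TABLES=employees, departments, \
locations
LOGFILE=long_list.log
```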
The contents of the parameter file are written to the Data Pump log file.
Restrictions
-
The
PARFILE
parameter cannot be specified within a parameter file.
Example
The content of an example parameter file, hr.par
, might be as follows:
SCHEMAS=HR
DUMPFILE=exp.dmp
DIRECTORY=dpump_dir1
LOGFILE=exp.log
You could then issue the following Export command to specify the parameter file:
> expdp hr PARFILE=hr.par
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.36 QUERY
The Data Pump Export command-line utility QUERY
parameter enables you to specify a query clause that is used to filter the data that gets exported.
Default: There is no default
Purpose
Enables you to specify a query clause that is used to filter the data that gets exported.
Syntax and Description
QUERY = [schema.][table_name:] query_clause
The query_clause
is typically a SQL WHERE
clause for fine-grained row selection, but could be any SQL clause. For example, you can use an ORDER BY
clause to speed up a migration from a heap-organized table to an index-organized table. If a schema and table name are not supplied, then the query is applied to (and must be valid for) all tables in the export job. A table-specific query overrides a query applied to all tables.
When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one query can be specified per table.
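For instance, a single export job could carry one query per table, sketched here with hypothetical filter conditions:

```
QUERY=employees:"WHERE department_id > 10"
QUERY=departments:"WHERE location_id = 1700"
```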
If the NETWORK_LINK
parameter is specified along with the QUERY
parameter, then any objects specified in the query_clause
that are on the remote (source) node must be explicitly qualified with the NETWORK_LINK
value. Otherwise, Data Pump assumes that the object is on the local (target) node; if it is not, then an error is returned and the import of the table from the remote (source) system fails.
For example, if you specify NETWORK_LINK=dblink1
, then the query_clause
of the QUERY
parameter must specify that link, as shown in the following example:
QUERY=(hr.employees:"WHERE last_name IN(SELECT last_name
FROM hr.employees@dblink1)")
Depending on your operating system, when you specify a value for this parameter that uses quotation marks, you may also need to use escape characters. Oracle recommends that you place this parameter in a parameter file, which can reduce the number of escape characters that might otherwise be needed on the command line.
To specify a schema other than your own in a table-specific query, you must be granted access to that specific table.
Restrictions
- The QUERY parameter cannot be used with the following parameters:
- CONTENT=METADATA_ONLY
- ESTIMATE_ONLY
- TRANSPORT_TABLESPACES
- When the QUERY parameter is specified for a table, Data Pump uses external tables to unload the target table. External tables use a SQL CREATE TABLE AS SELECT statement. The value of the QUERY parameter is the WHERE clause in the SELECT portion of the CREATE TABLE statement. If the QUERY parameter includes references to another table with columns whose names match the table being unloaded, and if those columns are used in the query, then you must use a table alias to distinguish between columns in the table being unloaded and columns in the SELECT statement with the same name. The table alias used by Data Pump for the table being unloaded is KU$.
For example, suppose you want to export a subset of the sh.sales table based on the credit limit for a customer in the sh.customers table. In the following example, KU$ is used to qualify the cust_id field in the QUERY parameter for unloading sh.sales. As a result, Data Pump exports only rows for customers whose credit limit is greater than $10,000.
QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c WHERE cust_credit_limit > 10000 AND ku$.cust_id = c.cust_id)"'
In the following query, KU$ is not used for a table alias. The result is that all rows are unloaded:
QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c WHERE cust_credit_limit > 10000 AND cust_id = c.cust_id)"'
-
The maximum length allowed for a
QUERY
string is 4000 bytes, which includes quotation marks. This restriction means that the actual maximum length allowed is 3998 bytes.
Example
The following is an example of using the QUERY
parameter:
> expdp hr PARFILE=emp_query.par
The contents of the emp_query.par
file are as follows:
QUERY=employees:"WHERE department_id > 10 AND salary > 10000"
NOLOGFILE=YES
DIRECTORY=dpump_dir1
DUMPFILE=exp1.dmp
This example unloads all tables in the hr
schema, but only the rows that fit the query expression. In this case, all rows in all tables (except employees
) in the hr
schema are unloaded. For the employees
table, only rows that meet the query criteria are unloaded.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.37 REMAP_DATA
The Data Pump Export command-line utility REMAP_DATA
parameter enables you to specify a remap function that takes as a source the original value of the designated column and returns a remapped value that will replace the original value in the dump file.
Default: There is no default
Purpose
The REMAP_DATA
parameter allows you to specify a remap function that takes as a source the original value of the designated column and returns a remapped value that will replace the original value in the dump file. A common use for this option is to mask data when moving from a production system to a test system. For example, a column of sensitive customer data such as credit card numbers could be replaced with numbers generated by a REMAP_DATA
function. This would allow the data to retain its essential formatting and processing characteristics without exposing private data to unauthorized personnel.
The same function can be applied to multiple columns being dumped. This is useful when you want to guarantee consistency in remapping both the child and parent column in a referential constraint.
Syntax and Description
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
The description of each syntax element, in the order in which they appear in the syntax, is as follows:
schema: the schema containing the table to be remapped. By default, this is the schema of the user performing the export.
tablename: the table whose column is to be remapped.
column_name: the column whose data is to be remapped.
schema: the schema containing the PL/SQL package you have created that contains the remapping function. By default, this is the schema of the user performing the export.
pkg: the name of the PL/SQL package you have created that contains the remapping function.
function: the name of the function within the PL/SQL package that is called to remap the column data in each row of the specified table.
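As a sketch, a remapping package matching the remap.minus10 function used in the example later in this section might be defined along these lines (the package body shown here is illustrative and is not part of the source):

```sql
-- Illustrative package: the function's argument and return types must
-- match the data type and size of the column being remapped.
CREATE OR REPLACE PACKAGE remap AS
  FUNCTION minus10 (id NUMBER) RETURN NUMBER;
END remap;
/
CREATE OR REPLACE PACKAGE BODY remap AS
  FUNCTION minus10 (id NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN id - 10;  -- replace each value with the value minus 10
  END minus10;
END remap;
/
```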
Restrictions
-
The data types and sizes of the source argument and the returned value must both match the data type and size of the designated column in the table.
-
Remapping functions should not perform commits or rollbacks except in autonomous transactions.
- The use of synonyms as values for the REMAP_DATA parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, then an error would be returned if you specified regn as part of the REMAP_DATA specification.
-
Remapping LOB column data of a remote table is not supported.
- Columns of the following types are not supported by REMAP_DATA: User Defined Types, attributes of User Defined Types, LONGs, REFs, VARRAYs, Nested Tables, BFILEs, and XMLType.
Example
The following example assumes a package named remap
has been created that contains functions named minus10
and plusx
. These functions change the values for employee_id
and first_name
in the employees
table.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=remap1.dmp TABLES=employees
REMAP_DATA=hr.employees.employee_id:hr.remap.minus10
REMAP_DATA=hr.employees.first_name:hr.remap.plusx
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.38 REUSE_DUMPFILES
The Data Pump Export command-line utility REUSE_DUMPFILES parameter specifies whether to overwrite a preexisting dump file.
Default: NO
Purpose
Specifies whether to overwrite a preexisting dump file.
Syntax and Description
REUSE_DUMPFILES=[YES | NO]
Normally, Data Pump Export will return an error if you specify a dump file name that already exists. The REUSE_DUMPFILES
parameter allows you to override that behavior and reuse a dump file name. For example, if you performed an export and specified DUMPFILE=hr.dmp
and REUSE_DUMPFILES=YES
, then hr.dmp
is overwritten if it already exists. Its previous contents are then lost, and it instead contains data for the current export.
Example
The following export operation creates a dump file named enc1.dmp
, even if a dump file with that name already exists.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=enc1.dmp
TABLES=employees REUSE_DUMPFILES=YES
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.39 SAMPLE
The Data Pump Export command-line utility SAMPLE
parameter specifies a percentage of the data rows that you want to be sampled and unloaded from the source database.
Default: There is no default
Purpose
Specifies a percentage of the data rows that you want to be sampled and unloaded from the source database.
Syntax and Description
SAMPLE=[[schema_name.]table_name:]sample_percent
This parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent
indicates the probability that a row will be selected as part of the sample. It does not mean that the database will retrieve exactly that percentage of rows from the table. The value you supply for sample_percent
can be anywhere from .000001 up to, but not including, 100.
You can apply the sample_percent
to specific tables. In the following example, 50% of the HR.EMPLOYEES
table is exported:
SAMPLE="HR"."EMPLOYEES":50
If you specify a schema, then you must also specify a table. However, you can specify a table without specifying a schema. In that scenario, the current user is assumed. If no table is specified, then the sample_percent
value applies to the entire export job.
You can use this parameter with the Data Pump Import PCTSPACE
transform, so that the size of storage allocations matches the sampled data subset. (See the Import TRANSFORM
parameter).
Restrictions
-
The
SAMPLE
parameter is not valid for network exports.
Example
In the following example, the value 70
for SAMPLE
is applied to the entire export job because no table name is specified.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.40 SCHEMAS
The Data Pump Export command-line utility SCHEMAS
parameter specifies that you want to perform a schema-mode export. This is the default mode for Export.
Default: current user's schema
Purpose
Specifies that you want to perform a schema-mode export. This is the default mode for Export.
Syntax and Description
SCHEMAS=schema_name [, ...]
If you have the DATAPUMP_EXP_FULL_DATABASE
role, then you can specify a single schema other than your own or a list of schema names. The DATAPUMP_EXP_FULL_DATABASE
role also allows you to export additional nonschema object information for each specified schema so that the schemas can be re-created at import time. This additional information includes the user definitions themselves and all associated system and role grants, user password history, and so on. Filtering can further restrict what is exported using schema mode.
Restrictions
-
If you do not have the
DATAPUMP_EXP_FULL_DATABASE
role, then you can specify only your own schema. -
The
SYS
schema cannot be used as a source schema for export jobs.
Example
The following is an example of using the SCHEMAS
parameter. Note that user hr
is allowed to specify more than one schema, because the DATAPUMP_EXP_FULL_DATABASE
role was previously assigned to it for the purpose of these examples.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oe
This results in a schema-mode export in which the schemas, hr
, sh,
and oe
will be written to the expdat.dmp
dump file located in the dpump_dir1
directory.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.41 SERVICE_NAME
The Data Pump Export command-line utility SERVICE_NAME parameter specifies a service name to be used in conjunction with the CLUSTER parameter.
Default: There is no default
Purpose
Used to specify a service name to be used in conjunction with the CLUSTER
parameter.
Syntax and Description
SERVICE_NAME=name
You can use the SERVICE_NAME
parameter with the CLUSTER=YES
parameter to specify an existing service associated with a resource group that defines a set of Oracle Real Application Clusters (Oracle RAC) instances belonging to that resource group. Typically, the resource group is a subset of all the Oracle RAC instances.
The service name is only used to determine the resource group, and the instances defined for that resource group. The instance where the job is started is always used, regardless of whether it is part of the resource group.
If CLUSTER=NO
is also specified, then the SERVICE_NAME
parameter is ignored.
Suppose you have an Oracle RAC configuration containing instances A, B, C, and D. Also suppose that a service named my_service
exists with a resource group consisting of instances A, B, and C only. In such a scenario, the following is true:
- If you start a Data Pump job on instance A, and specify CLUSTER=YES (or accept the default, which is Y), and you do not specify the SERVICE_NAME parameter, then Data Pump creates workers on all instances: A, B, C, and D, depending on the degree of parallelism specified.
- If you start a Data Pump job on instance A, and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, and C only.
- If you start a Data Pump job on instance D, and specify CLUSTER=YES and SERVICE_NAME=my_service, then workers can be started on instances A, B, C, and D. Even though instance D is not in my_service, it is included because it is the instance on which the job was started.
- If you start a Data Pump job on instance A, and specify CLUSTER=NO, then any SERVICE_NAME parameter that you specify is ignored. All processes start on instance A.
Example
The following is an example of using the SERVICE_NAME
parameter:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_svname2.dmp SERVICE_NAME=sales
This example starts a schema-mode export (the default mode) of the hr
schema. Even though CLUSTER=YES
is not specified on the command line, it is the default behavior, so the job will use all instances in the resource group associated with the service name sales
. A dump file named hr_svname2.dmp
will be written to the location specified by the dpump_dir1
directory object.
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.42 SOURCE_EDITION
The Data Pump Export command-line utility SOURCE_EDITION
parameter specifies the database edition from which objects are exported.
Default: the default database edition on the system
Purpose
Specifies the database edition from which objects are exported.
Syntax and Description
SOURCE_EDITION=edition_name
If SOURCE_EDITION=edition_name
is specified, then the objects from that edition are exported. Data Pump selects all inherited objects that have not changed, and all actual objects that have changed.
If this parameter is not specified, then the default edition is used. If the specified edition does not exist or is not usable, then an error message is returned.
Restrictions
-
This parameter is only useful if there are two or more versions of the same versionable objects in the database.
-
The job version must be
11.2
or later.
Example
The following is an example of using the SOURCE_EDITION
parameter:
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp SOURCE_EDITION=exp_edition EXCLUDE=USER
This example assumes the existence of an edition named exp_edition
on the system from which objects are being exported. Because no export mode is specified, the default of schema mode will be used. The EXCLUDE=user
parameter excludes only the definitions of users, not the objects contained within users' schemas.
See Also:
-
Oracle Database SQL Language Reference for information about how editions are created
-
Oracle Database Development Guide for more information about the editions feature, including inherited and actual objects
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.43 STATUS
The Data Pump Export command-line utility STATUS parameter specifies the frequency at which the job status display is updated.
Default: 0
Purpose
Specifies the frequency at which the job status display is updated.
Syntax and Description
STATUS=[integer]
If you supply a value for integer
, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, then no additional information is displayed beyond information about the completion of each object type, table, or partition.
This status information is written only to your standard output device, not to the log file (if one is in effect).
Example
The following is an example of using the STATUS
parameter.
> expdp hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300
This example exports the hr
and sh
schemas, and displays the status of the export every 5 minutes (60 seconds x 5 = 300 seconds).
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.44 TABLES
The Data Pump Export command-line utility TABLES
parameter specifies that you want to perform a table-mode export.
Default: There is no default
Purpose
Specifies that you want to perform a table-mode export.
Syntax and Description
TABLES=[schema_name.]table_name[:partition_name] [, ...]
Filtering can restrict what is exported using this mode. You can filter the data and metadata that is exported by specifying a comma-delimited list of tables and partitions or subpartitions. If a partition name is specified, then it must be the name of a partition or subpartition in the associated table. Only the specified set of tables, partitions, and their dependent objects are unloaded.
If an entire partitioned table is exported, then it is imported in its entirety as a partitioned table. The only case in which this is not true is if PARTITION_OPTIONS=DEPARTITION
is specified during import.
The table name that you specify can be preceded by a qualifying schema name. The schema defaults to that of the current user. To specify a schema other than your own, you must have the DATAPUMP_EXP_FULL_DATABASE
role.
Use of the wildcard character (%
) to specify table names and partition names is supported.
The following restrictions apply to table names:
-
By default, table names in a database are stored as uppercase. If you have a table name in mixed-case or lowercase, and you want to preserve case-sensitivity for the table name, then you must enclose the name in quotation marks. The name must exactly match the table name stored in the database.
Some operating systems require that quotation marks on the command line are preceded by an escape character. The following examples show how case-sensitivity can be preserved in the different Export modes.
-
In command-line mode:
TABLES='\"Emp\"'
-
In parameter file mode:
TABLES='"Emp"'
-
Table names specified on the command line cannot include a pound sign (#), unless the table name is enclosed in quotation marks. Similarly, in the parameter file, if a table name includes a pound sign (#), then the Export utility interprets the rest of the line as a comment, unless the table name is enclosed in quotation marks.
For example, if the parameter file contains the following line, then Export interprets everything on the line after emp# as a comment, and does not export the tables dept and mydata:
TABLES=(emp#, dept, mydata)
However, if the parameter file contains the following line, then the Export utility exports all three tables, because emp# is enclosed in quotation marks:
TABLES=('"emp#"', dept, mydata)
Note:
Some operating systems use single quotation marks as escape characters, rather than double quotation marks, and others do the reverse. See your Oracle operating system-specific documentation. Different operating systems also have other restrictions on table naming.
For example, the UNIX C shell attaches a special meaning to a dollar sign (
$
) or pound sign (#
), or certain other special characters. You must use escape characters to be able to use such characters in the name and have them ignored by the shell, and used by Export.
Using the Transportable Option During Table-Mode Export
To use the transportable option during a table-mode export, specify the TRANSPORTABLE=ALWAYS
parameter with the TABLES
parameter. Metadata for the specified tables, partitions, or subpartitions is exported to the dump file. To move the actual data, you copy the data files to the target database.
If only a subset of a table's partitions are exported and the TRANSPORTABLE=ALWAYS
parameter is used, then on import each partition becomes a non-partitioned table.
Restrictions
-
Cross-schema references are not exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported.
-
Types used by the table are not exported in table mode. This restriction means that if you subsequently import the dump file, and the type does not already exist in the destination database, then the table creation fails.
-
The use of synonyms as values for the
TABLES
parameter is not supported. For example, if theregions
table in thehr
schema had a synonym ofregn
, then it is not valid to useTABLES=regn
. If you attempt to use the synonym, then an error is returned. -
The export of tables that include a wildcard character (
%
) in the table name is not supported if the table has partitions. -
The length of the table name list specified for the
TABLES
parameter is limited to a maximum of 4 MB, unless you are using theNETWORK_LINK
parameter to an Oracle Database release 10.2.0.3 or earlier, or to a read-only database. In such cases, the limit is 4 KB. -
You can only specify partitions from one table if
TRANSPORTABLE=ALWAYS
is also set on the export.
Examples
The following example shows a simple use of the TABLES
parameter to export three tables found in the hr
schema: employees
, jobs
, and departments
. Because user hr
is exporting tables found in the hr
schema, the schema name is not needed before the table names.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp
TABLES=employees,jobs,departments
The following example assumes that user hr
has the DATAPUMP_EXP_FULL_DATABASE
role. It shows the use of the TABLES
parameter to export partitions.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp
TABLES=sh.sales:sales_Q1_2012,sh.sales:sales_Q2_2012
This example exports the partitions sales_Q1_2012 and sales_Q2_2012 from the table sales in the schema sh.
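Wildcard selection, described earlier in this section, can be sketched as follows. The table-name pattern is illustrative, and the quoting shown may need to vary by operating system shell; recall that wildcards are not supported for tables that have partitions:

```text
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=emp_like.dmp TABLES='emp%'
```

This would export every non-partitioned table in the hr schema whose name begins with EMP.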
2.4.45 TABLESPACES
The Data Pump Export command-line utility TABLESPACES parameter specifies a list of tablespace names to be exported in tablespace mode.
Default: There is no default
Purpose
Specifies a list of tablespace names to be exported in tablespace mode.
Syntax and Description
TABLESPACES=tablespace_name [, ...]
In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, then its dependent objects are also unloaded. Both object metadata and data are unloaded. If any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Unprivileged users obtain only the tables in their own schemas.
Filtering can restrict what is exported using this mode.
Restrictions
-
The length of the tablespace name list specified for the
TABLESPACES
parameter is limited to a maximum of 4 MB, unless you are using theNETWORK_LINK
to an Oracle Database release 10.2.0.3 or earlier, or to a read-only database. In such cases, the limit is 4 KB.
Example
The following is an example of using the TABLESPACES
parameter. The example assumes that tablespaces tbs_4
, tbs_5
, and tbs_6
already exist.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6
This command results in a tablespace export in which tables (and their dependent objects) from the specified tablespaces (tbs_4, tbs_5, and tbs_6) are unloaded.
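The same tablespace-mode export can be expressed in a parameter file, which avoids shell quoting of the comma-separated list. The file name is hypothetical:

```text
# tbs.par -- hypothetical parameter file
DIRECTORY=dpump_dir1
DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6
```

It would be invoked as: expdp hr PARFILE=tbs.par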
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.46 TRANSPORT_FULL_CHECK
The Data Pump Export command-line utility TRANSPORT_FULL_CHECK
parameter specifies whether to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.
Default: NO
Purpose
Specifies whether to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.
Syntax and Description
TRANSPORT_FULL_CHECK=[YES | NO]
If TRANSPORT_FULL_CHECK=YES
, then Export verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set, but its index is not, then a failure is returned, and the export operation is terminated. Similarly, a failure is also returned if an index is in the transportable set, but the table is not.
If TRANSPORT_FULL_CHECK=NO
then Export verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, then the export operation is terminated.
There are other checks performed as well. For instance, export always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by TRANSPORT_TABLESPACES
are actually contained within the tablespace set.
There are two command-line parameters that control the full closure check:
TTS_FULL_CHECK=[YES|NO]
TRANSPORT_FULL_CHECK=[YES|NO]
TTS_FULL_CHECK=YES or TRANSPORT_FULL_CHECK=YES is interpreted as TTS_CLOSURE_CHECK=FULL. TTS_FULL_CHECK=NO or TRANSPORT_FULL_CHECK=NO is interpreted as TTS_CLOSURE_CHECK=ON.
Example
The following is an example of using the TRANSPORT_FULL_CHECK
parameter. It assumes that tablespace tbs_1
exists.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.47 TRANSPORT_TABLESPACES
The Data Pump Export command-line utility TRANSPORT_TABLESPACES parameter specifies that you want to perform an export in transportable-tablespace mode.
Default: There is no default
Purpose
Specifies that you want to perform an export in transportable-tablespace mode.
Syntax and Description
TRANSPORT_TABLESPACES=tablespace_name [, ...]
Use the TRANSPORT_TABLESPACES
parameter to specify a list of tablespace names for which object metadata will be exported from the source database into the target database.
The log file for the export lists the data files that are used in the transportable set, the dump files, and any containment violations.
The TRANSPORT_TABLESPACES
parameter exports metadata for all objects within the specified tablespaces. If you want to perform a transportable export of only certain tables, partitions, or subpartitions, then you must use the TABLES
parameter with the TRANSPORTABLE=ALWAYS
parameter.
Note:
You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or later release level as the source database.
Restrictions
-
Transportable tablespace jobs are not restartable.
-
Transportable tablespace jobs are restricted to a degree of parallelism of 1.
-
Transportable tablespace mode requires that you have the
DATAPUMP_EXP_FULL_DATABASE
role. -
The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.
-
The
SYSTEM
andSYSAUX
tablespaces are not transportable in transportable tablespace mode. -
All tablespaces in the transportable set must be set to read-only.
-
If the Data Pump Export
VERSION
parameter is specified along with theTRANSPORT_TABLESPACES
parameter, then the version must be equal to or greater than the Oracle DatabaseCOMPATIBLE
initialization parameter. -
The
TRANSPORT_TABLESPACES
parameter cannot be used in conjunction with theQUERY
parameter. -
Transportable tablespace jobs do not support the
ACCESS_METHOD
parameter for Data Pump Export.
Example
The following is an example of using the TRANSPORT_TABLESPACES
parameter in a file-based job (rather than network-based). The tablespace tbs_1
is the tablespace being moved. This example assumes that tablespace tbs_1
exists and that it has been set to read-only. This example also assumes that the default tablespace was changed before this export command was issued.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log
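As the restrictions above note, all tablespaces in the transportable set must be set to read-only before the export. A minimal SQL sketch for the example tablespace (to be adapted to your environment) might look like this:

```sql
ALTER TABLESPACE tbs_1 READ ONLY;
-- ...run the expdp command and copy the data files to the target...
ALTER TABLESPACE tbs_1 READ WRITE;
```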
See Oracle Database Administrator's Guide for detailed information about transporting tablespaces between databases.
2.4.48 TRANSPORTABLE
The Data Pump Export command-line utility TRANSPORTABLE
parameter specifies whether the transportable option should be used during a table mode export (specified with the TABLES
parameter) or a full mode export (specified with the FULL
parameter).
Default: NEVER
Purpose
Specifies whether the transportable option should be used during a table mode export (specified with the TABLES
parameter) or a full mode export (specified with the FULL
parameter).
Syntax and Description
TRANSPORTABLE = [ALWAYS | NEVER]
The definitions of the allowed values are as follows:
ALWAYS
- Instructs the export job to use the transportable option. If transportable is not possible, then the job fails.
In a table mode export, using the transportable option results in a transportable tablespace export in which metadata for only the specified tables, partitions, or subpartitions is exported.
In a full mode export, using the transportable option results in a full transportable export which exports all objects and data necessary to create a complete copy of the database.
NEVER
- Instructs the export job to use either the direct path or external table method to unload data rather than the transportable option. This is the default.
Note:
To export an entire tablespace in transportable mode, use the TRANSPORT_TABLESPACES
parameter.
-
If only a subset of a table's partitions are exported and the
TRANSPORTABLE=ALWAYS
parameter is used, then on import each partition becomes a non-partitioned table. -
If only a subset of a table's partitions are exported and the
TRANSPORTABLE
parameter is not used at all or is set toNEVER
(the default), then on import:-
If
PARTITION_OPTIONS=DEPARTITION
is used, then each partition included in the dump file set is created as a non-partitioned table. -
If
PARTITION_OPTIONS
is not used, then the complete table is created. That is, all the metadata for the complete table is present, so that the table definition looks the same on the target system as it did on the source. But only the data that was exported for the specified partitions is inserted into the table.
-
Restrictions
-
The
TRANSPORTABLE
parameter is only valid in table mode exports and full mode exports. -
To use the
TRANSPORTABLE
parameter, theCOMPATIBLE
initialization parameter must be set to at least 11.0.0. -
To use the
FULL
parameter in conjunction withTRANSPORTABLE
(to perform a full transportable export), the Data PumpVERSION
parameter must be set to at least 12.0. If theVERSION
parameter is not specified, then theCOMPATIBLE
database initialization parameter must be set to at least 12.0 or later. -
The user performing a transportable export requires the
DATAPUMP_EXP_FULL_DATABASE
privilege. -
Tablespaces associated with tables, partitions, and subpartitions must be read-only.
-
A full transportable export uses a mix of data movement methods. Objects residing in a transportable tablespace have only their metadata unloaded; data is copied when the data files are copied from the source system to the target system. The data files that must be copied are listed at the end of the log file for the export operation. Objects residing in non-transportable tablespaces (for example,
SYSTEM
andSYSAUX
) have both their metadata and data unloaded into the dump file set. (See Oracle Database Administrator's Guide for more information about performing full transportable exports.) -
The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.
Example
The following example assumes that the sh
user has the DATAPUMP_EXP_FULL_DATABASE
role and that table sales2
is partitioned and contained within tablespace tbs2
. (The tbs2
tablespace must be set to read-only in the source database.)
> expdp sh DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp TABLES=sh.sales2 TRANSPORTABLE=ALWAYS
After the export completes successfully, you must copy the data files to the target database area. You could then perform an import operation using the PARTITION_OPTIONS
and REMAP_SCHEMA
parameters to make each of the partitions in sales2
its own table.
> impdp system PARTITION_OPTIONS=DEPARTITION TRANSPORT_DATAFILES=oracle/dbs/tbs2 DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp REMAP_SCHEMA=sh:dp
2.4.49 TTS_CLOSURE_CHECK
The Data Pump Export command-line mode TTS_CLOSURE_CHECK
parameter is used to indicate the degree of closure checking to be performed as part of a Data Pump transportable tablespace operation.
Purpose
Specifies the level of closure check to be performed as part of the transportable export operation. The TTS_CLOSURE_CHECK
parameter can also be used to indicate that tablespaces can remain read-write during a test mode transportable tablespace operation. This option is used to obtain the timing requirements of the export operation. It is for testing purposes only; the resulting dump file is not available for import.
Syntax and Description
TTS_CLOSURE_CHECK = [ ON | OFF | FULL | TEST_MODE ]
The TTS_CLOSURE_CHECK
parameter supports the following options:
-
ON
- indicates that a self-containment closure check is performed -
OFF
- indicates that no closure check is performed -
FULL
- indicates that a full bidirectional closure check is performed -
TEST_MODE
- indicates that tablespaces are not required to be in read-only mode
The ON
, OFF
, and FULL
options are mutually exclusive. TEST_MODE
is a Data Pump Export option only.
Example 2-1
TTS_CLOSURE_CHECK=FULL
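In context, the parameter would appear on an expdp command line such as the following sketch; the directory, dump file, and tablespace names are illustrative. Substituting TTS_CLOSURE_CHECK=TEST_MODE would time the operation without requiring read-only tablespaces, though that dump file could not be imported:

```text
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1 TTS_CLOSURE_CHECK=FULL
```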
Parent topic: Parameters Available in Export's Command-Line Mode
2.4.50 VERSION
The Data Pump Export command-line utility VERSION
parameter specifies the version of database objects that you want to export.
Default: COMPATIBLE
Purpose
Specifies the version of database objects that you want to export. Only database objects and attributes that are compatible with the specified release are exported. You can use the VERSION parameter to create a dump file set that is compatible with a previous release of Oracle Database. You cannot use Data Pump Export with releases of Oracle Database before Oracle Database 10g release 1 (10.1); Data Pump Export works only with Oracle Database 10g release 1 (10.1) or later. The VERSION
parameter simply allows you to identify the version of the objects that you export.
On Oracle Database 11g release 2 (11.2.0.3) or later, you can specify the VERSION
parameter as VERSION=12
with FULL=Y
to generate a full export dump file that is ready for import into Oracle Database 12c. The export with the later release target VERSION
value includes information from registered database options and components. The dump file set specifying a later release version can only be imported into Oracle Database 12c Release 1 (12.1.0.1) and later. For example, if VERSION=12
is used with FULL=Y
and also with TRANSPORTABLE=ALWAYS
, then a full transportable export dump file is generated that is ready for import into Oracle Database 12c. For more information, refer to the FULL export parameter option.
Syntax and Description
VERSION=[COMPATIBLE | LATEST | version_string]
The legal values for the VERSION
parameter are as follows:
-
COMPATIBLE
- This value is the default value. The version of the metadata corresponds to the database compatibility level as specified on theCOMPATIBLE
initialization parameter.Note: Database compatibility must be set to 9.2 or later.
-
LATEST
- The version of the metadata and resulting SQL DDL corresponds to the database release, regardless of its compatibility level. -
version_string
- A specific database release (for example, 11.2.0). In Oracle Database 11g, this value cannot be lower than 9.2.
Database objects or attributes that are incompatible with the release specified for VERSION
are not exported. For example, tables containing new data types that are not supported in the specified release are not exported.
Restrictions
-
Exporting a table with archived LOBs to a database release earlier than 11.2 is not allowed.
-
If the Data Pump Export
VERSION
parameter is specified with theTRANSPORT_TABLESPACES
parameter, then the value forVERSION
must be equal to or greater than the Oracle DatabaseCOMPATIBLE
initialization parameter. -
If the Data Pump
VERSION
parameter is specified as any value earlier than 12.1, then the Data Pump dump file excludes any tables that containVARCHAR2
orNVARCHAR2
columns longer than 4000 bytes, and anyRAW
columns longer than 2000 bytes. -
Dump files created on Oracle Database 11g releases with the Data Pump parameter
VERSION=12
can only be imported on Oracle Database 12c Release 1 (12.1) and later.
Example
The following example shows an export for which the version of the metadata corresponds to the database release:
> expdp hr TABLES=hr.employees VERSION=LATEST DIRECTORY=dpump_dir1
DUMPFILE=emp.dmp NOLOGFILE=YES
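The full transportable usage described earlier (VERSION=12 together with FULL=Y and TRANSPORTABLE=ALWAYS) can be sketched as follows; the directory, dump file, and log file names are illustrative:

```text
> expdp system FULL=Y TRANSPORTABLE=ALWAYS VERSION=12 DIRECTORY=dpump_dir1
  DUMPFILE=full_tts.dmp LOGFILE=full_tts.log
```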
2.4.51 VIEWS_AS_TABLES
The Data Pump Export command-line utility VIEWS_AS_TABLES
parameter specifies that you want one or more views exported as tables.
Default: There is no default.
Caution:
The VIEWS_AS_TABLES
parameter unloads view data in unencrypted format, and creates an unencrypted table. If you are unloading sensitive data, then Oracle strongly recommends that you enable encryption on the export operation and that you ensure the table is created in an encrypted tablespace. You can use the REMAP_TABLESPACE
parameter to move the table to such a tablespace.
Purpose
Specifies that one or more views are to be exported as tables.
Syntax and Description
VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], ...
Data Pump exports a table with the same columns as the view and with row data obtained from the view. Data Pump also exports objects dependent on the view, such as grants and constraints. Dependent objects that do not apply to tables (for example, grants of the UNDER
object privilege) are not exported. You can use the VIEWS_AS_TABLES
parameter by itself, or with the TABLES
parameter. If either is used, then Data Pump performs a table-mode export.
The syntax elements are defined as follows:
schema_name
: The name of the schema in which the view resides. If a schema name is not supplied, it defaults to the user performing the export.
view_name: The name of the view to be exported as a table. The view must exist and it must be a relational view with only scalar, non-LOB columns. If you specify an invalid or non-existent view, the view is skipped and an error message is returned.
table_name
: The name of a table that you want to serve as the source of the metadata for the exported view. By default, Data Pump automatically creates a temporary "template table" with the same columns and data types as the view, but no rows. If the database is read-only, then this default creation of a template table fails. In such a case, you can specify a table name. The table must be in the same schema as the view. It must be a non-partitioned relational table with heap organization. It cannot be a nested table.
If the export job contains multiple views with explicitly specified template tables, then the template tables must all be different. For example, in the following job (in which two views use the same template table) one of the views is skipped:
expdp scott/password directory=dpump_dir dumpfile=a.dmp views_as_tables=v1:emp,v2:emp
An error message is returned reporting the omitted object.
Template tables are automatically dropped after the export operation is completed. While they exist, you can perform the following query to view their names (which all begin with KU$VAT
):
SQL> SELECT * FROM user_tab_comments WHERE table_name LIKE 'KU$VAT%';
TABLE_NAME TABLE_TYPE
------------------------------ -----------
COMMENTS
-----------------------------------------------------
KU$VAT_63629 TABLE
Data Pump metadata template table for view SCOTT.EMPV
Restrictions
-
The
VIEWS_AS_TABLES
parameter cannot be used with theTRANSPORTABLE
=ALWAYS
parameter. -
Tables created using the
VIEWS_AS_TABLES
parameter do not contain any hidden or invisible columns that were part of the specified view. -
The
VIEWS_AS_TABLES
parameter does not support tables that have columns with a data type ofLONG
.
Example
The following example exports the contents of view scott.view1
to a dump file named scott1.dmp
.
> expdp scott/password views_as_tables=view1 directory=data_pump_dir dumpfile=scott1.dmp
The dump file contains a table named view1
with rows obtained from the view.
Parent topic: Parameters Available in Export's Command-Line Mode
2.5 Commands Available in Data Pump Export Interactive-Command Mode
Check which command options are available to you when using Data Pump Export in interactive mode.
- About Data Pump Export Interactive Command Mode
Learn about commands you can use with Data Pump Export in interactive command mode while your current job is running. - ADD_FILE
The Data Pump Export interactive command modeADD_FILE
parameter adds additional files or substitution variables to the export dump file set. - CONTINUE_CLIENT
The Data Pump Export interactive command modeCONTINUE_CLIENT
parameter changes the Export mode from interactive-command mode to logging mode. - EXIT_CLIENT
The Data Pump Export interactive command modeEXIT_CLIENT
parameter stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running. - FILESIZE
The Data Pump Export interactive command modeFILESIZE
parameter redefines the maximum size of subsequent dump files. - HELP
The Data Pump Export interactive command modeHELP
parameter provides information about Data Pump Export commands available in interactive-command mode. - KILL_JOB
The Data Pump Export interactive command modeKILL_JOB
parameter detaches all currently attached client sessions and then terminates the current job. It exits Export and returns to the terminal prompt. - PARALLEL
The Export Interactive-Command Mode PARALLEL parameter enables you to increase or decrease the number of active processes (worker and parallel slaves) for the current job. - START_JOB
The Data Pump Export interactive command modeSTART_JOB
parameter starts the current job to which you are attached. - STATUS
The Export interactive command STATUS parameter displays status information about the export, and enables you to set the display interval for logging mode status. - STOP_JOB
The Data Pump Export interactive command modeSTOP_JOB
parameter stops the current job. It stops the job either immediately, or after an orderly shutdown, and exits Export.
Parent topic: Data Pump Export
2.5.1 About Data Pump Export Interactive Command Mode
Learn about commands you can use with Data Pump Export in interactive command mode while your current job is running.
In interactive command mode, the current job continues running, but logging to the terminal is suspended, and the Export prompt (Export>
) is displayed.
To start interactive-command mode, do one of the following:
-
From an attached client, press Ctrl+C.
-
From a terminal other than the one on which the job is running, specify the
ATTACH
parameter in anexpdp
command to attach to the job.ATTACH
is a useful feature in situations in which you start a job at one location, and need to check on it at a later time from a different location.
The following table lists the activities that you can perform for the current job from the Data Pump Export prompt in interactive-command mode.
Table 2-1 Supported Activities in Data Pump Export's Interactive-Command Mode
Activity | Command Used
---|---
Add additional dump files. | ADD_FILE
Exit interactive mode and enter logging mode. | CONTINUE_CLIENT
Stop the export client session, but leave the job running. | EXIT_CLIENT
Redefine the default size to be used for any subsequent dump files. | FILESIZE
Display a summary of available commands. | HELP
Detach all currently attached client sessions and terminate the current job. | KILL_JOB
Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 11g or later. | PARALLEL
Restart a stopped job to which you are attached. | START_JOB
Display detailed status for the current job and/or set status interval. | STATUS
Stop the current job for later restart. | STOP_JOB
2.5.2 ADD_FILE
The Data Pump Export interactive command mode ADD_FILE
parameter adds additional files or substitution variables to the export dump file set.
Purpose
Adds additional files or substitution variables to the export dump file set.
Syntax and Description
ADD_FILE=[directory_object:]file_name [,...]
Each file name can have a different directory object. If no directory object is specified, then the default is assumed.
The file_name
must not contain any directory path information. However, it can include a substitution variable, %U
, which indicates that multiple files can be generated using the specified file name as a template.
The size of the file being added is determined by the setting of the FILESIZE
parameter.
Example
The following example adds two dump files to the dump file set. A directory object is not specified for the dump file named hr2.dmp
, so the default directory object for the job is assumed. A different directory object, dpump_dir2
, is specified for the dump file named hr3.dmp
.
Export> ADD_FILE=hr2.dmp, dpump_dir2:hr3.dmp
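Because the file name can include the %U substitution variable, a single ADD_FILE command can also add a template from which multiple files are generated as needed. The file name here is illustrative:

```text
Export> ADD_FILE=hr_extra%U.dmp
```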
2.5.3 CONTINUE_CLIENT
The Data Pump Export interactive command mode CONTINUE_CLIENT
parameter changes the Export mode from interactive-command mode to logging mode.
Purpose
Changes the Export mode from interactive-command mode to logging mode.
Syntax and Description
CONTINUE_CLIENT
In logging mode, status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT
also causes the client to attempt to start the job.
Example
Export> CONTINUE_CLIENT
2.5.4 EXIT_CLIENT
The Data Pump Export interactive command mode EXIT_CLIENT
parameter stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running.
Purpose
Stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running.
Syntax and Description
EXIT_CLIENT
Because EXIT_CLIENT
leaves the job running, you can attach to the job at a later time. To see the status of the job, you can monitor the log file for the job, or you can query the USER_DATAPUMP_JOBS
view, or the V$SESSION_LONGOPS
view.
Example
Export> EXIT_CLIENT
2.5.5 FILESIZE
The Data Pump Export interactive command mode FILESIZE
parameter redefines the maximum size of subsequent dump files.
Purpose
Redefines the maximum size of subsequent dump files. If the size is reached for any member of the dump file set, then that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable or if additional dump files have been added to the job.
Syntax and Description
FILESIZE=integer[B | KB | MB | GB | TB]
The integer
can be immediately followed (do not insert a space) by B
, KB
, MB
, GB
, or TB
(indicating bytes, kilobytes, megabytes, gigabytes, and terabytes respectively). Bytes is the default. The actual size of the resulting file may be rounded down slightly to match the size of the internal blocks used in dump files.
A file size of 0 is equivalent to the maximum file size of 16 TB.
Restrictions
-
The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes.
-
The maximum size for a file is 16 terabytes.
Example
Export> FILESIZE=100MB
2.5.6 HELP
The Data Pump Export interactive command mode HELP
parameter provides information about Data Pump Export commands available in interactive-command mode.
Purpose
Provides information about Data Pump Export commands available in interactive-command mode.
Syntax and Description
HELP
Displays information about the commands available in interactive-command mode.
Example
Export> HELP
2.5.7 KILL_JOB
The Data Pump Export interactive command mode KILL_JOB
parameter detaches all currently attached client sessions and then terminates the current job. It exits Export and returns to the terminal prompt.
Purpose
Detaches all currently attached client sessions and then terminates the current job. It exits Export and returns to the terminal prompt.
Syntax and Description
KILL_JOB
A job that is terminated using KILL_JOB
cannot be restarted. All attached clients, including the one issuing the KILL_JOB
command, receive a warning that the job is being terminated by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down and the master table and dump files are deleted. Log files are not deleted.
Example
Export> KILL_JOB
2.5.8 PARALLEL
The Export Interactive-Command Mode PARALLEL parameter enables you to increase or decrease the number of active processes (worker and parallel slaves) for the current job.
Purpose
Enables you to increase or decrease the number of active processes (worker and parallel slaves) for the current job.
Syntax and Description
PARALLEL=integer
PARALLEL
is available as both a command-line parameter, and as an interactive-command mode parameter. You set it to the desired number of parallel processes (worker and parallel slaves). An increase takes effect immediately if there are sufficient files and resources. A decrease does not take effect until an existing process finishes its current task. If the value is decreased, then workers are idled but not deleted until the job exits.
Restrictions
-
This parameter is valid only in the Enterprise Edition of Oracle Database 11g or later.
Example
Export> PARALLEL=10
2.5.9 START_JOB
The Data Pump Export interactive command mode START_JOB parameter starts the current job to which you are attached.
Purpose
Starts the current job to which you are attached.
Syntax and Description
START_JOB
The START_JOB command restarts the current job to which you are attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure or after you issued a STOP_JOB command, provided the dump file set and master table have not been altered in any way.
Exports done in transportable-tablespace mode are not restartable.
Example
Export> START_JOB
2.5.10 STATUS
The Export interactive command STATUS parameter displays status information about the export, and enables you to set the display interval for logging mode status.
Purpose
Displays cumulative status of the job, a description of the current operation, and an estimated completion percentage. It also allows you to reset the display interval for logging mode status.
Syntax and Description
STATUS[=integer]
You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, then the periodic status display is turned off and status is displayed only once.
This status information is written only to your standard output device, not to the log file (even if one is in effect).
Example
The following example displays the current job status, and changes the logging mode display interval to five minutes (300 seconds):
Export> STATUS=300
2.5.11 STOP_JOB
The Data Pump Export interactive command mode STOP_JOB parameter stops the current job. It stops the job either immediately, or after an orderly shutdown, and exits Export.
Purpose
Stops the current job, either immediately, or after an orderly shutdown, and exits Export.
Syntax and Description
STOP_JOB[=IMMEDIATE]
If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, then the job can be attached to and restarted at a later time with the START_JOB command.
To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.
To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user, and they will be detached. After all clients are detached, the process structure of the job is immediately run down. That is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.
Example
Export> STOP_JOB=IMMEDIATE
2.6 Examples of Using Data Pump Export
These sections provide examples of using Data Pump Export.
- Performing a Table-Mode Export
This example shows a table-mode export, specified using the TABLES parameter.
- Data-Only Unload of Selected Tables and Rows
This example shows data-only unload of selected tables and rows.
- Estimating Disk Space Needed in a Table-Mode Export
This example shows how to estimate the disk space needed in a table-mode export.
- Performing a Schema-Mode Export
This example shows you how to perform a schema-mode export.
- Performing a Parallel Full Database Export
This example shows you how to perform a parallel full database export.
- Using Interactive Mode to Stop and Reattach to a Job
This example shows you how to use interactive mode to stop and reattach to a job.
Parent topic: Data Pump Export
2.6.1 Performing a Table-Mode Export
This example shows a table-mode export, specified using the TABLES parameter.
In this example, the Data Pump export command performs a table export of the tables employees and jobs from the human resources (hr) schema.
Because user hr is exporting tables in his own schema, it is not necessary to specify the schema name for the tables. The NOLOGFILE=YES parameter indicates that an Export log file of the operation is not generated.
Example 2-2 Performing a Table-Mode Export
expdp hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=YES
Parent topic: Examples of Using Data Pump Export
2.6.2 Data-Only Unload of Selected Tables and Rows
This example shows data-only unload of selected tables and rows.
The example shows the contents of a parameter file (exp.par), which you can use to perform a data-only unload of all the tables in the human resources (hr) schema, except for the tables countries and regions. Rows in the employees table that have a department_id other than 50 are unloaded. The rows are ordered by employee_id.
You can issue the following command to execute the exp.par parameter file:
> expdp hr PARFILE=exp.par
This export performs a schema-mode export (the default mode), but the CONTENT parameter effectively limits the export to an unload of just the table data. The DBA previously created the directory object dpump_dir1, which points to the directory on the server where user hr is authorized to read and write export dump files. The dump file dataonly.dmp is created in dpump_dir1.
Example 2-3 Data-Only Unload of Selected Tables and Rows
DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"
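The QUERY and ORDER BY clauses in the parameter file above behave like the following row filter, shown here as a minimal Python sketch. The sample employee_id and department_id values are hypothetical, not rows from the actual hr schema:

```python
# Hypothetical sample rows standing in for the employees table.
employees = [
    {"employee_id": 120, "department_id": 50},
    {"employee_id": 101, "department_id": 90},
    {"employee_id": 115, "department_id": 30},
]

# Keep rows whose department_id is not 50 (WHERE department_id != 50),
# ordered by employee_id (ORDER BY employee_id).
unloaded = sorted(
    (row for row in employees if row["department_id"] != 50),
    key=lambda row: row["employee_id"],
)

print([row["employee_id"] for row in unloaded])  # [101, 115]
```

Only the filtered, ordered rows reach the dump file; the row with department_id 50 is skipped entirely.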
Parent topic: Examples of Using Data Pump Export
2.6.3 Estimating Disk Space Needed in a Table-Mode Export
This example shows how to estimate the disk space needed in a table-mode export.
In this example, the ESTIMATE_ONLY parameter is used to estimate the space that is consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in the following three tables located in the human resources (hr) schema: employees, departments, and locations.
The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.
Example 2-4 Estimating Disk Space Needed in a Table-Mode Export
> expdp hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=YES TABLES=employees,
departments, locations LOGFILE=estimate.log
Parent topic: Examples of Using Data Pump Export
2.6.4 Performing a Schema-Mode Export
This example shows you how to perform a schema-mode export.
The example shows a schema-mode export of the hr schema. In a schema-mode export, only objects belonging to the corresponding schemas are unloaded. Because schema mode is the default mode, it is not necessary to specify the SCHEMAS parameter on the command line, unless you are specifying more than one schema or a schema other than your own.
Example 2-5 Performing a Schema Mode Export
> expdp hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log
Parent topic: Examples of Using Data Pump Export
2.6.5 Performing a Parallel Full Database Export
This example shows you how to perform a parallel full database export.
The example shows a full database Export that can use 3 parallel processes (worker or PQ slaves).
Example 2-6 Parallel Full Export
> expdp hr FULL=YES DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp
FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull
Because this export is a full database export, all data and metadata in the database is exported. Dump files full101.dmp, full201.dmp, full102.dmp, and so on, are created in a round-robin fashion in the directories pointed to by the dpump_dir1 and dpump_dir2 directory objects. For best performance, Oracle recommends that you place the dump files on separate I/O channels. Each file is up to 2 gigabytes in size, as necessary. Initially, up to three files are created. If needed, more files are created. The job and master table have the name expfull. The log file is written to expfull.log in the dpump_dir1 directory.
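The round-robin %U naming described above can be sketched as follows. This is a minimal Python model of the file-name sequence only, not Oracle's actual allocation logic, which also depends on the degree of parallelism and the FILESIZE limit:

```python
from itertools import cycle

def dump_file_names(templates, count):
    """Model of %U expansion: each DUMPFILE template keeps its own
    two-digit counter, and new files are allocated round-robin
    across the templates."""
    counters = dict.fromkeys(templates, 0)
    names = []
    for template in cycle(templates):
        if len(names) == count:
            break
        counters[template] += 1
        names.append(template.replace("%U", f"{counters[template]:02d}"))
    return names

# First four files for DUMPFILE=full1%U.dmp, full2%U.dmp
print(dump_file_names(["full1%U.dmp", "full2%U.dmp"], 4))
# ['full101.dmp', 'full201.dmp', 'full102.dmp', 'full202.dmp']
```

Each template's counter advances independently, which is why the first file in dpump_dir1 is full101.dmp and the first file in dpump_dir2 is full201.dmp.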
Parent topic: Examples of Using Data Pump Export
2.6.6 Using Interactive Mode to Stop and Reattach to a Job
This example shows you how to use interactive mode to stop and reattach to a job.
To start this example, reexecute the parallel full export described here:
Performing a Parallel Full Database Export
While the export is running, press Ctrl+C. This keyboard command starts the interactive-command interface of Data Pump Export. In the interactive interface, logging to the terminal stops, and the Export prompt is displayed.
After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and restart the expfull job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Example 2-7 Stopping and Reattaching to a Job
At the Export prompt, issue the following command to stop the job:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state, and the client exits.
To reattach to the job you just stopped, enter the following command:
> expdp hr ATTACH=EXPFULL
Parent topic: Examples of Using Data Pump Export
2.7 Syntax Diagrams for Data Pump Export
This section provides syntax diagrams for Data Pump Export.
These diagrams use standard SQL syntax notation. For more information about SQL syntax notation, see Oracle Database SQL Language Reference.
ExpOpts_Cont
ExpCompression
ExpFileOpts
ExpDiagnostics
Parent topic: Data Pump Export