In this article, we are going to learn about the parameters of the Oracle Data Pump Import utility (impdp).
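Before we walk through the table, here is a minimal sketch of a schema-mode import showing the most common parameters (directory, dumpfile, logfile, schemas). DATA_PUMP_DIR is Oracle's default directory object; the dump file name, log file name, connect string, and HR schema are placeholders, not values from this article.

```bash
# Minimal schema-mode import; hr.dmp, hr_import.log, orcl, and
# the HR schema are placeholder names for this sketch.
impdp system/password@orcl \
  directory=DATA_PUMP_DIR \
  dumpfile=hr.dmp \
  logfile=hr_import.log \
  schemas=HR
```

Each of these parameters, along with the rest of the impdp parameter set, is described in the table below.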
| Parameter | Description |
| --- | --- |
| abort_step | undocumented feature |
| access_method | data access method: default is AUTOMATIC |
| attach | attach to an existing job: no default |
| cluster | start workers across the cluster: default is Y |
| content | content to import: default is ALL |
| data_options | import data-layer options |
| current_edition | application edition to be used on the local database |
| directory | default directory specification |
| dumper_directory | directory for stream dumper |
| dumpfile | import dumpfile names: format is (file1, file2, …) |
| encryption_password | encryption key to be used |
| estimate | calculate size estimate: default is BLOCKS |
| exclude | import exclude option |
| flashback_scn | system change number to be used for flashback import: no default |
| flashback_time | database time to be used for flashback import: no default |
| full | indicates a full mode import |
| help | display descriptions of import parameters: default is N |
| include | import include option: no default |
| ip_address | IP address for the PL/SQL debugger |
| job_name | name of the import job: default is a system-generated name |
| keep_master | retain the job table upon completion |
| logfile | log import messages to the specified file |
| master_only | only import the master table associated with this job |
| metrics | enable/disable object metrics reporting |
| mp_enable | enable/disable multiprocessing for the current session |
| network_link | network-mode import: name of the remote database link |
| nologfile | do not create an import log file |
| package_load | specify how to load PL/SQL |
| parallel | degree of parallelism: default is 1 |
| parallel_threshold | degree of DML parallelism |
| parfile | parameter file: name of a file containing the parameter specifications |
| partition_options | options that determine how partitions should be handled: default is NONE |
| query | query used to select a subset of rows for a table |
| remap_data | transform data in user tables |
| remap_schema | remap source schema objects to a new schema |
| remap_table | remap tables to a different name |
| remap_tablespace | remap objects to a different tablespace |
| reuse_datafiles | re-initialize existing datafiles |
| schemas | schemas to import: format is '(schema1, …, schemaN)' |
| service_name | name of the service that the job will charge against |
| silent | silent mode: display information; default is NONE |
| skip_unusable_indexes | skip indexes that are in the unusable state |
| source_edition | application edition to be used on remote database |
| sqlfile | write appropriate SQL DDL to the specified file |
| status | interval between status updates |
| streams_configuration | import streams configuration metadata |
| table_exists_action | action taken if the table being imported already exists |
| tables | tables to import: format is '(table1, …, tableN)' |
| tablespaces | tablespaces to transport: format is '(ts1, …, tsN)' |
| trace | trace option: enable sql_trace and timed_statistics: default is 0 |
| transform | metadata transforms |
| transportable | use transportable data movement: default is NEVER |
| transport_datafiles | list of datafiles to be plugged into target system |
| transport_tablespaces | transportable tablespaces option: default is N |
| transport_full_check | verify that tablespaces to be used do not have dependencies |
| tts_closure_check | enable/disable transportable containment check: default is Y |
| user_id | user/password used to connect to Oracle: no default |
| version | job version: default is COMPATIBLE |
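Several of the parameters above are easiest to manage through parfile, which keeps the whole specification in a file. Below is a minimal sketch combining the remap and conflict-handling options from the table; every schema, tablespace, and file name (SCOTT, DEV_SCOTT, USERS, DEV_DATA, scott.dmp, import.par) is a placeholder for illustration.

```bash
# Write the parameter file; all schema, tablespace, and file
# names below are placeholders for illustration.
cat > import.par <<'EOF'
directory=DATA_PUMP_DIR
dumpfile=scott.dmp
logfile=scott_import.log
remap_schema=SCOTT:DEV_SCOTT
remap_tablespace=USERS:DEV_DATA
table_exists_action=REPLACE
EOF

# remap_schema moves objects from SCOTT into DEV_SCOTT,
# remap_tablespace relocates segments from USERS to DEV_DATA, and
# table_exists_action=REPLACE recreates tables that already exist
# in the target.
impdp system/password parfile=import.par
```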

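Finally, job_name, status, and attach work together when you need to monitor or resume a long-running import. A short sketch, again with placeholder connection details and file names:

```bash
# Name the job explicitly and print job progress every 60 seconds.
impdp system/password directory=DATA_PUMP_DIR dumpfile=hr.dmp \
  schemas=HR job_name=HR_IMPORT status=60

# If the client session is lost, reattach to the still-running job:
impdp system/password attach=HR_IMPORT
```

Naming the job yourself is what makes the second command possible; with the default system-generated job name you would first have to look it up in DBA_DATAPUMP_JOBS.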