What is a logical backup?

Logical backups are backups in which an export utility (for example, exp) uses SQL to read database data and then writes it to a binary file at the operating system level. You can then import the data back into a database using the import utility (imp). The Oracle export utility can create a file that contains all the objects in a particular schema. That file can then be imported into the same schema to recover from a failure, or into a different instance or schema for archiving purposes.
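As a quick sketch, a legacy exp/imp round trip might look like the following. The schema names, password, and file names here are illustrative assumptions, not values from any particular environment:

```shell
# Export the SCOTT schema to a binary dump file (legacy exp utility)
exp scott/tiger file=scott.dmp log=scott_exp.log owner=scott

# Import that dump into a different schema (legacy imp utility)
imp system/manager file=scott.dmp log=scott_imp.log fromuser=scott touser=scott_copy
```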

Advantages of a logical backup are mentioned below:

  • You should make a logical backup before you upgrade to a new version of DB2, Oracle, SQL Server, or Rational RequisitePro. Prior to an upgrade, a logical backup provides the latest data in the event of a failure. Oracle may require a logical backup if the newer database format is not compatible with the older one.
  • In Oracle, logical backups can be used to detect block corruptions. If a corrupt block is detected during an export, the export will fail. The corrupt blocks must be repaired before a successful export can be obtained.
  • Logical backups can be used to provide a test or development copy of your Rational RequisitePro projects. The test or development copy can be used to test a new release of Rational RequisitePro or to train new users.
  • In Oracle, logical backups can be used in conjunction with COALESCE to defragment extents.
  • Logical backups can be used to significantly alter your database structure such as moving the Oracle schema for Rational RequisitePro from one Oracle tablespace to another or using a different Oracle schema name. It can also be used to move Rational RequisitePro data to a different database in SQL Server.
  • Logical backups are very useful when changing database servers.
  • Logical backups can move data between different hardware and operating-system environments.
  • Logical backups can move data from one owner to another.
  • Logical backups can move data from one tablespace or schema to another.

How do we take a logical backup in Oracle?

Data Pump is often described as an upgraded version of the old exp/imp utilities. That depiction is inaccurate; it’s a bit like calling a modern smartphone a replacement for an old rotary-dial landline. Although the old utilities are dependable and work well, Data Pump encompasses their functionality while adding completely new dimensions to how data can be lifted and moved between environments.

Data Pump Architecture

Data Pump consists of the following components:

  • expdp (Data Pump export utility)
  • impdp (Data Pump import utility)
  • DBMS_DATAPUMP PL/SQL package (Data Pump application programming interface [API])
  • DBMS_METADATA PL/SQL package (Data Pump Metadata API)

The expdp and impdp utilities use the DBMS_DATAPUMP and DBMS_METADATA built-in PL/SQL packages when exporting and importing data and metadata. The DBMS_DATAPUMP package moves entire databases or subsets of data between database environments. The DBMS_METADATA package exports and imports information about database objects.
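Because expdp is just a front end to DBMS_DATAPUMP, the same work can be driven directly from PL/SQL. A minimal sketch of a schema-mode export through the API follows; the directory object DP_DIR, the dump file name, and the MV_MAINT schema are assumptions carried over from the examples below:

```sql
DECLARE
  l_job   NUMBER;
  l_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job
  l_job := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Write the dump file to an existing directory object (DP_DIR is an assumption)
  DBMS_DATAPUMP.ADD_FILE(handle => l_job, filename => 'mv_maint_api.dmp',
                         directory => 'DP_DIR');

  -- Restrict the job to the MV_MAINT schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => l_job, name => 'SCHEMA_EXPR',
                                value => q'[IN ('MV_MAINT')]');

  -- Start the job and block until it finishes
  DBMS_DATAPUMP.START_JOB(l_job);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_job, l_state);
END;
/
```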

The modes in which Data Pump can export/import are mentioned below:

  • Entire database
  • Schema level
  • Table level
  • Tablespace level
  • Transportable tablespace level

Entire Database (Data Pump Export/Import)

A full export is initiated with the FULL parameter set to Y and must be run by a user that has DBA privileges or that has been granted the DATAPUMP_EXP_FULL_DATABASE role.
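The examples below also assume that the directory object dp_dir already exists and is readable and writable by the exporting user. A minimal sketch of setting it up, where the OS path /u01/dp_dir is an assumption to adapt to your server:

```sql
-- Run as a DBA. The OS path /u01/dp_dir is an assumption;
-- use a path that actually exists on the database server.
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/dp_dir';
GRANT READ, WRITE ON DIRECTORY dp_dir TO mv_maint;
```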

An example of export:

$ expdp mv_maint/foo directory=dp_dir dumpfile=full.dmp logfile=full.log full=y

An example of import:

$ impdp mv_maint/foo directory=dp_dir dumpfile=full.dmp logfile=fullimp.log full=y


1) Even when we perform a full export/import, the entire database is not exported. The contents of the SYS schema are not exported (there are a few exceptions to this, such as the AUD$ table). Consider what would happen if you could export the contents of the SYS schema from one database and import them into another: the SYS schema contents would overwrite internal data dictionary tables/views and thus corrupt the database. Therefore, Data Pump never exports objects owned by SYS.

2) Index data are not exported; rather, the index DDL containing the SQL required to recreate the indexes during a subsequent import is exported.

Schema Level (Data Pump Export/Import)

By default, Data Pump starts a schema-level export for the user running the export job. A schema-level export is frequently used to copy a schema or set of schemas from one environment to another. The following command starts a schema-level export for the MV_MAINT user.

An example of export:

$ expdp mv_maint/foo directory=dp_dir dumpfile=mv_maint.dmp logfile=mv_maint.log

We can also initiate a schema-level export for users other than the one running the export job by using the SCHEMAS parameter.

$ expdp mv_maint/foo directory=dp_dir dumpfile=user.dmp schemas=heera,chaya

An example of import:

We can initiate a schema-level import by referencing a dump file that was taken with a schema-level export.

$ impdp mv_maint/foo directory=dp_dir dumpfile=user.dmp

We can also initiate a schema-level import from a full export dump file.

$ impdp mv_maint/foo directory=dp_dir dumpfile=full.dmp schemas=heera,chaya

Note: When we initiate a schema-level import, there are some details to be aware of:

  • No tablespaces are included in a schema-level export.
  • The import job attempts to recreate any users in the dump file. If a user already exists, an error is thrown, and the import job continues.
  • The import job will reset the users’ passwords, based on the passwords that were exported.
  • Tables owned by the users will be imported and populated. If a table already exists, you must instruct Data Pump on how to handle this with the TABLE_EXISTS_ACTION parameter.
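For example, to append rows to tables that already exist rather than fail on them (TABLE_EXISTS_ACTION accepts SKIP, APPEND, TRUNCATE, and REPLACE; the dump file name is carried over from the examples above):

```shell
# Append to pre-existing tables instead of raising an error on them
$ impdp mv_maint/foo directory=dp_dir dumpfile=user.dmp table_exists_action=append
```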

Table Level (Data Pump Export/Import)

At this level, we export and import individual tables.

An example of export:

$ expdp mv_maint/foo directory=dp_dir dumpfile=tab.dmp \
tables=heera.inv,heera.inv_items

An example of import:

$ impdp mv_maint/foo directory=dp_dir dumpfile=tab.dmp

A table-level import only attempts to import the specified tables and data. If a table already exists, an error is thrown, and the import job continues. If a table already exists and contains data, you must specify how you want the import job to handle this.

We can also initiate a table-level import from a full-export dump file or a schema-level export by using the TABLES parameter.

$ impdp mv_maint/foo directory=dp_dir dumpfile=full.dmp tables=heera.inv,heera.inv_items

Tablespace Level (Data Pump Export/Import)

A tablespace-level export/import operates on objects contained within specific tablespaces.

$ expdp mv_maint/foo directory=dp_dir dumpfile=tbsp.dmp tablespaces=users

We can initiate a tablespace-level import by specifying an export file that was created with a tablespace-level export.

$ impdp mv_maint/foo directory=dp_dir dumpfile=tbsp.dmp

We can also initiate a tablespace-level import by using a full export but specifying the TABLESPACES parameter:

$ impdp mv_maint/foo directory=dp_dir dumpfile=full.dmp tablespaces=users

A tablespace-level import will attempt to create any tables and indexes within the tablespace. The import doesn’t try to recreate the tablespaces themselves.

