ORACLE DATABASE 19c MIGRATION GUIDE
Step-by-Step: expdp / impdp — Data Pump Method

Prepared By   : Bidhan Mandal
Designation   : Oracle Database Administrator (DBA)
Document Date : 09 March 2026
Subject       : Oracle DB 19c Migration — expdp / impdp Process
Target        : Fresh Compute Instance (Oracle 19c not installed)
Document Ref  : ORA-MIG-19C-2026-001
This document provides a complete, step-by-step guide for
migrating an existing Oracle Database 19c instance to a new compute instance
where Oracle Database software is not yet installed. The Oracle Data Pump
utility (expdp/impdp) is used as the migration method.
The migration is divided into four phases: (1) Export
from the source server, (2) Transfer of dump files, (3) Installation of Oracle
19c on the target server, and (4) Import on the target server. All steps must
be followed in sequence.
NOTE: Both source and target databases must use the same character set (AL32UTF8 recommended). Ensure the target server has at least 2x the disk space of your dump files.
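The 2x rule above can be checked before transfer rather than discovered mid-copy. A minimal sketch, assuming the directory paths used in this guide (the `check_space` helper name is illustrative):

```shell
# check_space DUMP_DIR TARGET_DIR
# Fails (returns 1) unless TARGET_DIR has at least 2x the size of DUMP_DIR free.
check_space() {
    local need free
    need=$(du -sb "$1" | awk '{print $1}')                    # bytes used by dumps
    free=$(df -B1 --output=avail "$2" | tail -1 | tr -d ' ')  # bytes free on target fs
    if [ "$free" -lt $((need * 2)) ]; then
        echo "INSUFFICIENT SPACE: need $((need * 2)) bytes, have $free"
        return 1
    fi
    echo "OK: $free bytes available for $need bytes of dump files"
}

# Usage with the paths from this guide:
# check_space /backup/export /backup/import
```

`du -sb` and `df --output=avail` are GNU coreutils options, present on the RHEL/OEL/CentOS targets this guide assumes.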
PHASE 1 — EXPORT FROM SOURCE SERVER (expdp)
STEP 1 — Create OS Directory and Oracle Directory Object

Log in to the source server as the oracle OS user and create the export directory. Then connect to Oracle as SYSDBA and create the directory object.
# Create OS directory on source server
mkdir -p /backup/export
chown oracle:oinstall /backup/export

# Connect to Oracle database as SYSDBA
sqlplus / as sysdba
-- Create Oracle directory object pointing to OS path
CREATE OR REPLACE DIRECTORY EXPORT_DIR AS '/backup/export';

-- Grant read/write permission to system user
GRANT READ, WRITE ON DIRECTORY EXPORT_DIR TO system;

-- Verify the directory was created successfully
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'EXPORT_DIR';
STEP 2 — Run the Data Pump Export (expdp)

Run the export from the OS command line. Select Option A for a full database export or Option B to export specific schemas only.
# OPTION A: Full database export (recommended)
expdp system/password@ORCL \
  FULL=Y \
  DIRECTORY=EXPORT_DIR \
  DUMPFILE=fullexport_%U.dmp \
  LOGFILE=fullexport.log \
  PARALLEL=4 \
  COMPRESSION=ALL
# OPTION B: Export specific schemas only
expdp system/password@ORCL \
  SCHEMAS=HR,SALES,FINANCE \
  DIRECTORY=EXPORT_DIR \
  DUMPFILE=schema_export_%U.dmp \
  LOGFILE=schema_export.log \
  PARALLEL=4
NOTE: The %U in DUMPFILE creates multiple numbered files (e.g. fullexport_01.dmp, fullexport_02.dmp). This allows parallel export and avoids oversized single files. Be aware that COMPRESSION=ALL requires the Advanced Compression Option license; use COMPRESSION=METADATA_ONLY if that option is not licensed.
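The command-line options above can also be kept in a Data Pump parameter file, which keeps the password out of `ps` output and makes re-runs reproducible. A sketch mirroring Option A (the parfile name and /tmp location are illustrative; in practice place it alongside your dumps):

```shell
# Write a Data Pump parameter file with the Option A settings.
# Quoted 'EOF' so the %U substitution variable is written literally.
cat > /tmp/full_export.par << 'EOF'
FULL=Y
DIRECTORY=EXPORT_DIR
DUMPFILE=fullexport_%U.dmp
LOGFILE=fullexport.log
PARALLEL=4
COMPRESSION=ALL
EOF

# Run the export with the parfile; expdp prompts for the password interactively:
# expdp system@ORCL PARFILE=/tmp/full_export.par
```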
STEP 3 — Verify the Export Files
# List export files and confirm sizes
ls -lh /backup/export/

# Check the log file for errors
grep -i "error\|ORA-\|warning" /backup/export/fullexport.log

# Check total size of export directory
du -sh /backup/export/
WARNING: Do NOT proceed to Phase 2 if the export log shows any ORA- errors. Fix the issue and re-run the export.
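This check is easy to script so it can gate the transfer step automatically. A minimal sketch (the `dp_log_clean` function name is illustrative):

```shell
# dp_log_clean LOGFILE
# Returns 0 if the Data Pump log contains no ORA- error codes, 1 otherwise.
dp_log_clean() {
    if grep -qi 'ORA-[0-9]' "$1"; then
        echo "Errors found in $1:"
        grep -i 'ORA-[0-9]' "$1"
        return 1
    fi
    echo "No ORA- errors in $1"
}

# Usage — stop the migration script if the export log is not clean:
# dp_log_clean /backup/export/fullexport.log || exit 1
```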
PHASE 2 — TRANSFER DUMP FILES TO TARGET SERVER
STEP 4 — Transfer Dump Files via SCP or rsync

Copy the dump files from the source server to the target server using one of the following methods:
# OPTION A: SCP — recommended for smaller databases
scp /backup/export/*.dmp oracle@TARGET_SERVER_IP:/backup/import/
scp /backup/export/*.log oracle@TARGET_SERVER_IP:/backup/import/

# OPTION B: rsync — better for large files, supports resume
rsync -avzh --progress /backup/export/*.dmp \
  oracle@TARGET_SERVER_IP:/backup/import/

# OPTION C: NFS mount (if shared storage is available)
mount -t nfs SOURCE_SERVER:/backup/export /backup/import

# Verify files on TARGET server after transfer
ls -lh /backup/import/
NOTE: For databases larger than 100GB, compress before transfer (gzip /backup/export/*.dmp), then decompress on the target (gunzip /backup/import/*.dmp.gz).
PHASE 3 — INSTALL ORACLE DATABASE 19c ON TARGET SERVER

Since Oracle Database is NOT installed on the target compute instance, complete all steps below before attempting any import. Steps 5–12 must be executed on the TARGET server.
STEP 5 — Prepare OS: Install Required Packages (as root)
# RHEL / OEL / CentOS — installs all Oracle prerequisites automatically
yum install -y oracle-database-preinstall-19c

# OR manually install required packages
yum install -y gcc make binutils libaio libaio-devel \
  libstdc++ libstdc++-devel sysstat elfutils-libelf-devel \
  unixODBC unixODBC-devel ksh glibc glibc-devel

# Create oracle OS groups
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper

# Create oracle OS user
useradd -u 54321 -g oinstall -G dba,oper -m oracle
echo "oracle:YourSecurePassword" | chpasswd
STEP 6 — Configure Kernel Parameters (as root)
# Append required parameters to /etc/sysctl.conf
cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF

# Apply kernel parameters immediately
sysctl -p
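After `sysctl -p`, it is cheap to spot-check that the values actually took effect. On Linux, a sysctl key a.b.c is readable at /proc/sys/a/b/c, so a small helper (the `sysctl_get` name is illustrative) can be used without relying on sysctl output formatting:

```shell
# sysctl_get KEY : print the live value of a kernel parameter from /proc/sys
sysctl_get() { cat "/proc/sys/$(echo "$1" | tr . /)"; }

# Example checks against the values set above:
# [ "$(sysctl_get fs.file-max)" -ge 6815744 ] || echo "fs.file-max too low"
# sysctl_get kernel.sem     # expect: 250 32000 100 128
```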
STEP 7 — Set Oracle User Limits (as root)
# Append limits to /etc/security/limits.conf
cat >> /etc/security/limits.conf << EOF
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
EOF
STEP 8 — Create Oracle Directory Structure (as root)
# Create Oracle installation directories
mkdir -p /u01/app/oracle/product/19c/dbhome_1
mkdir -p /u01/app/oraInventory
mkdir -p /backup/import

# Set ownership and permissions
chown -R oracle:oinstall /u01
chown -R oracle:oinstall /backup/import
chmod -R 775 /u01
STEP 9 — Set Oracle Environment Variables (as oracle user)
# Switch to oracle user
su - oracle

# Add environment variables to ~/.bash_profile.
# Quote the heredoc delimiter ('EOF') so that $ORACLE_HOME is written
# literally instead of being expanded now, while it is still unset.
cat >> ~/.bash_profile << 'EOF'
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19c/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
EOF

# Apply variables to current session
source ~/.bash_profile

# Verify
echo $ORACLE_HOME
echo $ORACLE_SID
STEP 10 — Install Oracle 19c Software (as oracle user)

Download LINUX.X64_193000_db_home.zip from Oracle eDelivery or My Oracle Support, then run the silent installer:
# Unzip the Oracle installer into ORACLE_HOME
cd $ORACLE_HOME
unzip /path/to/LINUX.X64_193000_db_home.zip

# Run silent install (Software Only — no database created yet)
./runInstaller -silent \
  oracle.install.option=INSTALL_DB_SWONLY \
  ORACLE_HOSTNAME=$(hostname) \
  UNIX_GROUP_NAME=oinstall \
  INVENTORY_LOCATION=/u01/app/oraInventory \
  ORACLE_HOME=$ORACLE_HOME \
  ORACLE_BASE=$ORACLE_BASE \
  oracle.install.db.InstallEdition=EE \
  oracle.install.db.OSDBA_GROUP=dba \
  oracle.install.db.OSOPER_GROUP=oper \
  DECLINE_SECURITY_UPDATES=true
STEP 11 — Run Root Scripts (as root)
# Switch back to root user
exit

# Run root scripts in this exact order
/u01/app/oraInventory/orainstRoot.sh
/u01/app/oracle/product/19c/dbhome_1/root.sh
STEP 12 — Create New Database Using DBCA (as oracle user)

Create a new empty Oracle database on the target server. This database will receive the imported data in Phase 4.
# Run DBCA in silent mode to create the database
dbca -silent \
  -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname ORCL \
  -sid ORCL \
  -characterSet AL32UTF8 \
  -sysPassword YourSysPassword \
  -systemPassword YourSystemPassword \
  -createAsContainerDatabase false \
  -databaseType MULTIPURPOSE \
  -automaticMemoryManagement false \
  -totalMemory 4096 \
  -storageType FS \
  -datafileDestination /u01/app/oracle/oradata

# Verify database is up and open
sqlplus / as sysdba
SELECT name, open_mode FROM v$database;
NOTE: Check the character set on the source before creating the target DB: SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET'; it must match the -characterSet value passed to DBCA.
PHASE 4 — IMPORT ON TARGET SERVER (impdp)
STEP 13 — Create Import Directory Object
# Confirm dump files exist on target server
ls -lh /backup/import/

# Connect as SYSDBA and create directory object
sqlplus / as sysdba
CREATE OR REPLACE DIRECTORY IMPORT_DIR AS '/backup/import';
GRANT READ, WRITE ON DIRECTORY IMPORT_DIR TO system;

-- Verify
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'IMPORT_DIR';
STEP 14 — Run the Data Pump Import (impdp)

Run the import from the OS command line on the TARGET server. Choose the option matching the one used during export in Phase 1.
# OPTION A: Full database import
impdp system/password@ORCL \
  FULL=Y \
  DIRECTORY=IMPORT_DIR \
  DUMPFILE=fullexport_%U.dmp \
  LOGFILE=fullimport.log \
  PARALLEL=4
# OPTION B: Schema-only import
impdp system/password@ORCL \
  SCHEMAS=HR,SALES,FINANCE \
  DIRECTORY=IMPORT_DIR \
  DUMPFILE=schema_export_%U.dmp \
  LOGFILE=schema_import.log
# OPTION C: Import with tablespace remapping
# Use when tablespace names differ between source and target
impdp system/password@ORCL \
  FULL=Y \
  DIRECTORY=IMPORT_DIR \
  DUMPFILE=fullexport_%U.dmp \
  LOGFILE=fullimport.log \
  REMAP_TABLESPACE=OLD_TABLESPACE:NEW_TABLESPACE
WARNING: Monitor the import in real time using: tail -f /backup/import/fullimport.log
STEP 15 — Verify Import and Recompile Invalid Objects
-- Connect as SYSDBA on target
sqlplus / as sysdba

-- 1. Check log for errors
HOST grep -i 'ORA-' /backup/import/fullimport.log

-- 2. List invalid objects
SELECT owner, object_type, object_name, status
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_type;

-- 3. Recompile all invalid objects
EXEC UTL_RECOMP.RECOMP_SERIAL();

-- 4. Verify tablespaces are ONLINE
SELECT tablespace_name, status FROM dba_tablespaces;

-- 5. Check row counts match source
-- (num_rows reflects optimizer statistics, so gather statistics in step 6
--  first, or use SELECT COUNT(*) per table, for accurate values)
SELECT table_name, num_rows FROM dba_tables WHERE owner = 'YOUR_SCHEMA';

-- 6. Gather fresh optimizer statistics
EXEC DBMS_STATS.GATHER_DATABASE_STATS(estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
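For step 5, one practical approach is to spool one "TABLE_NAME ROWCOUNT" line per table from SQL*Plus on each server and diff the two files. A sketch of the comparison side only (the `compare_counts` helper name and the spool-file format are assumptions, not Oracle tooling):

```shell
# compare_counts SOURCE_FILE TARGET_FILE
# Each file holds one "TABLE_NAME ROWCOUNT" line per table, spooled from
# SQL*Plus on each server. Files are sorted first, so line order is irrelevant.
compare_counts() {
    s=$(mktemp); t=$(mktemp)
    sort "$1" > "$s"
    sort "$2" > "$t"
    if diff "$s" "$t"; then
        echo "Row counts match"
        rm -f "$s" "$t"
    else
        echo "Row count differences listed above"
        rm -f "$s" "$t"
        return 1
    fi
}

# Usage:
# compare_counts source_counts.txt target_counts.txt
```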
POST-MIGRATION CHECKLIST

• Update tnsnames.ora on all application/client servers to point to the new DB server IP
• Update sqlnet.ora if encryption or timeout settings differ on the new server
• Test listener: lsnrctl status
• Test connectivity from application server: tnsping ORCL
• Reset application user passwords as required by your security policy
• Verify Oracle profiles, password complexity rules, and account lockout settings
• Enable Oracle Auditing (AUDIT_TRAIL) if required
• Validate Transparent Data Encryption (TDE) settings if used on source
• Gather fresh optimizer statistics: DBMS_STATS.GATHER_DATABASE_STATS
• Review SGA and PGA settings — ensure they match or exceed the source configuration
• Re-apply any custom database parameters from the source spfile/pfile
• Monitor AWR/ADDM reports after the first 24 hours of production load
• Run full regression test suite against the new database instance
• Validate all stored procedures, functions, packages, and triggers compile correctly
• Test all DBMS_SCHEDULER jobs and batch processes
• Verify application login and all CRUD (Create/Read/Update/Delete) operations
• Configure and test RMAN backup immediately after go-live on the new server
• Update monitoring and alerting tools with the new server hostname and IP
• Update all DBA runbooks and documentation with new connection details
• Confirm archive log mode is enabled: SELECT log_mode FROM v$database;