Grid Infrastructure Installation guide for standalone server

I just made a Grid Infrastructure installation guide for a standalone server. I hope this document is useful and user-friendly for anyone trying to install Grid Infrastructure; you can download it from here or review it below.

Download PDF Document from here

Grid Infrastructure Installation Steps

Presentation Document :

Screen Resolution Oracle Virtual Box

I installed Oracle VirtualBox on Windows 7 and then installed Oracle Enterprise Linux as a guest OS in VirtualBox. When I opened my guest OS, the screen resolution was 800×600 or below, so my screen was very small. So I planned to modify the screen resolution for the guest OS.

Method 1 : Install Guest Additions

For Oracle Enterprise Linux 5. 

After starting the guest OS in VirtualBox, choose Devices –> Install Guest Additions.

The Guest Additions CD image should appear in the guest operating system.

Step 1:


[root@localhost media]# ls

TOSHIBA  VBOXADDITIONS_3.2.10_66523

[root@localhost media]# cd VBOXADDITIONS_3.2.10_66523

[root@localhost VBOXADDITIONS_3.2.10_66523]# ls

32Bit        AUTORUN.INF  VBoxLinuxAdditions-amd64.run  VBoxSolarisAdditions.pkg        VBoxWindowsAdditions.exe
64Bit        autorun.sh   VBoxLinuxAdditions-x86.run    VBoxWindowsAdditions-amd64.exe  VBoxWindowsAdditions-x86.exe

[root@localhost VBOXADDITIONS_3.2.10_66523]# cp VBoxLinuxAdditions-x86.run /tmp

[root@localhost ~]# cd /tmp

[root@localhost tmp]# ls

gconfd-root     mapping-root  ssh-ZMDjDI2464              virtual-root.Ob9cyx
keyring-xmJ5Gd  orbit-root    VBoxLinuxAdditions-x86.run  xorg.conf

Step 2: Run the x86 VBoxLinuxAdditions installer


[root@localhost tmp]# ./VBoxLinuxAdditions-x86.run

Verifying archive integrity... All good.

Uncompressing VirtualBox 3.2.10 Guest Additions for Linux........
VirtualBox Guest Additions installer

Uninstalling old VirtualBox DKMS kernel modules            [  OK  ]

Building the VirtualBox Guest Additions kernel modules     [FAILED]

Your system does not seem to be set up to build kernel modules.
Look at /var/log/vboxadd-install.log to find out what went wrong.
Once you have corrected it, you can run

  /etc/init.d/vboxadd setup

to build them.
Doing non-kernel setup of the Guest Additions              [  OK  ]
Installing the Window System drivers
Installing X.Org 7.1 modules                               [  OK  ]

Setting up the Window System to use the Guest Additions    [  OK  ]

You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.
Installing graphics libraries and desktop services component[  OK  ]

Oops! It showed kernel module errors. I looked into it and fixed it in a few minutes. What did the errors need? Some packages, yes.

The following packages need to be installed, using rpm -Uvh <package-name>:


libstdc++-devel-4.1.2-48.el5.i386.rpm

libgomp-4.4.0-6.el5.i386.rpm

libaio-devel-0.3.106-3.2.i386.rpm

kernel-headers-2.6.18-194.el5.i386.rpm

kernel-devel-2.*

glibc-devel-2.5-49.i386.rpm

glibc-headers-2.5-49.i386.rpm

gcc-4.1.2-48.el5.i386.rpm

gcc-c++-4.1.2-48.el5.i386.rpm

Note: you must install all of these packages; otherwise the kernel module errors above will reappear. You can get the packages from the /Server folder of your OEL CD or DVD.
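The installs can be combined into one rpm transaction so that inter-package dependencies (gcc needs glibc-devel, and so on) resolve in a single step. A hedged sketch, assuming the OEL 5 install media is mounted at /media/cdrom (an assumed path; adjust the mount point and the exact package versions to match your media):

```shell
# As root, with the OEL 5 install media mounted at /media/cdrom (assumption).
cd /media/cdrom/Server
# One transaction lets rpm order the dependencies itself.
rpm -Uvh kernel-headers-2.6.18-*.rpm kernel-devel-2.6.18-*.rpm \
         glibc-headers-2.5-*.rpm glibc-devel-2.5-*.rpm \
         libaio-devel-0.3.106-*.rpm libgomp-4.4.0-*.rpm \
         libstdc++-devel-4.1.2-*.rpm gcc-4.1.2-*.rpm gcc-c++-4.1.2-*.rpm
```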

After installing the packages, you can run the installer again:


[root@localhost tmp]# ./VBoxLinuxAdditions-x86.run

Verifying archive integrity... All good.
Uncompressing VirtualBox 3.2.10 Guest Additions for Linux........

VirtualBox Guest Additions installer
Removing installed version 3.2.10 of VirtualBox Guest Additions...

Uninstalling old VirtualBox DKMS kernel modules            [  OK  ]

Building the VirtualBox Guest Additions kernel modules

Your guest system does not seem to have sufficient OpenGL support to enable
accelerated 3D effects (this requires Linux 2.6.27 or later in the guest
system).  This Guest Additions feature will be disabled.
Building the main Guest Additions module  [  OK  ]

Building the shared folder support module                  [  OK  ]

Doing non-kernel setup of the Guest Additions              [  OK  ]

Starting the VirtualBox Guest Additions                    [  OK  ]

Installing the Window System drivers
Installing X.Org 7.1 modules                               [  OK  ]

Setting up the Window System to use the Guest Additions    [  OK  ]

You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.

Installing graphics libraries and desktop services componen[  OK  ]
[root@localhost tmp]#

Now the Guest Additions are successfully installed. Next I wanted to set the screen resolution. Unfortunately I rebooted my machine first; during the restart it showed a display error and fell back to the old "800×600" value. After I started the machine again, the old value was still there. I wondered why the new display resolution did not appear after installing the Guest Additions.

The answer: unlike Ubuntu, OEL needs one extra line added to /etc/X11/xorg.conf.

So I ran the VirtualBox X11 setup again. (This step is not needed if you add the line right after installing the Guest Additions, before rebooting; I rebooted by mistake, so I had to rerun it. Adding the one line is enough.)


# /etc/init.d/vboxadd-x11 setup

After running it, just add the line Modes "1024x768" "800x600" to the Display subsection of /etc/X11/xorg.conf:


#gedit /etc/X11/xorg.conf

Section "Screen"
  SubSection "Display"
    Depth      24
    Modes    "1024x768" "800x600"
  EndSubSection
  Device       "Device[0]"
  Identifier   "Screen[0]"
  Monitor      "Monitor[0]"
EndSection

Now, after a reboot, your screen resolution should be changed. 🙂
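If you prefer scripting the change over editing with gedit, the Modes line can be appended with sed. A minimal sketch that demonstrates the edit on a throwaway copy first (the XORG variable, the demo file, and the backup naming are my own; point XORG at /etc/X11/xorg.conf only once the result looks right):

```shell
# Work on a demo copy of xorg.conf; set XORG=/etc/X11/xorg.conf for the real file.
XORG=${XORG:-/tmp/xorg.conf.demo}
cat > "$XORG" <<'EOF'
Section "Screen"
  SubSection "Display"
    Depth      24
  EndSubSection
EndSection
EOF
cp "$XORG" "$XORG.bak"   # always keep a backup before touching xorg.conf
# Append the Modes line right after the "Display" SubSection opens.
sed -i '/SubSection "Display"/a Modes "1024x768" "800x600"' "$XORG"
grep Modes "$XORG"
```

After applying it to the real file, restart the X server (or reboot the guest) so the new modes take effect.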

——-

Method 2: Without installing Guest Additions (but Oracle recommends installing Guest Additions)

You can modify /etc/X11/xorg.conf directly.

As the root user, first make a backup copy of xorg.conf:

#cp /etc/X11/xorg.conf /tmp

then write the new configuration in xorg.conf:

#gedit /etc/X11/xorg.conf

# Xorg configuration created by system-config-display
Section "Device"
  Identifier  "Videocard0"
  Driver      "vesa"
EndSection

Section "Monitor"
  Identifier "WXGA+ Inspiron"
  HorizSync 31.5-90.0
  VertRefresh 60.0-60.0
  Option "DPMS"
EndSection

Section "Screen"
  Identifier "Screen0"
  Device     "Videocard0"
  Monitor    "WXGA+ Inspiron"
  DefaultDepth     16
  SubSection "Display"
    Viewport   0 0
    Depth     16
    Modes "1280x1024" "1024x768" "800x600" "640x480"
  EndSubSection
  SubSection "Display"
    Viewport   0 0
    Depth     24
    Modes "1280x1024" "1024x768" "800x600" "640x480"
  EndSubSection
EndSection

After modifying it, just reboot your guest OS.

Now you can see your new screen resolution.


Flashback versions query

This post gives you an easy understanding of Flashback Versions Query and also shows how to restore data from flashback using the EM DB Console.

What Is Flashback Versions Query ?

You use a Flashback Version Query to retrieve the different versions of specific rows that existed during a given time interval. A new row version is created whenever a COMMIT statement is executed.

You specify a Flashback Version Query using the VERSIONS BETWEEN clause of the SELECT statement. Here is the syntax:

VERSIONS {BETWEEN {SCN | TIMESTAMP} start AND end}

where start and end are expressions representing the start and end of the time interval to be queried, respectively. The interval is closed at both ends: the upper and lower limits specified (start and end) are both included in the time interval.

The Flashback Versions Query feature, like Flashback Table, uses data from the undo segments, so you must configure your instance to use automatic undo management. The amount of history you can retrieve with the VERSIONS BETWEEN clause is limited by the setting of the UNDO_RETENTION initialization parameter. Because UNDO_RETENTION determines your row history, set it based on the timeline over which you need to use Flashback Versions Query. You can also use the RETENTION GUARANTEE clause to ensure that critical undo data is retained.
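Besides the SCN form, the same history can be read by timestamp. A hedged sketch (it assumes the flashtest table created in Step 1 and a change window that still fits inside UNDO_RETENTION; adjust the interval to your case):

```sql
-- Row versions from the last 15 minutes, addressed by timestamp instead of SCN.
SELECT versions_startscn, versions_endscn, versions_operation,
       first_name, email
FROM   flashtest
       VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE
                    AND SYSTIMESTAMP
WHERE  department_id = 30;

-- Verify the undo retention window (in seconds) covers the interval you query.
SHOW PARAMETER undo_retention
```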

Now see the examples:

Step 1:


SQL> create user fbuser identified by fbuser;

User created.

SQL> grant connect,resource,dba to fbuser;

Grant succeeded.

SQL> conn fbuser/fbuser;
Connected.
SQL> create table flashtest as select * from hr.employees;

Table created.

SQL> SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;

CURRENT_SCN TO_CHAR(SYSTIMESTAM
----------- -------------------
    4086851 2010-11-10 11:32:14

Step 2:

SQL> select first_name, email from flashtest where department_id = 30;

FIRST_NAME           EMAIL
-------------------- -------------------------
Den                  DRAPHEAL
Alexander            AKHOO
Shelli               SBAIDA
Sigal                STOBIAS
Guy                  GHIMURO
Karen                KCOLMENA

6 rows selected.

Step 3:

SQL> update flashtest set email = EMAIL||'@HRDEPT' where department_id = 30;

6 rows updated.

SQL> select first_name, email from flashtest where department_id = 30;

FIRST_NAME           EMAIL
-------------------- -------------------------
Den                  DRAPHEAL@HRDEPT
Alexander            AKHOO@HRDEPT
Shelli               SBAIDA@HRDEPT
Sigal                STOBIAS@HRDEPT
Guy                  GHIMURO@HRDEPT
Karen                KCOLMENA@HRDEPT

6 rows selected.

SQL> commit;

Commit complete.

SQL> delete flashtest where department_id = 30;

6 rows deleted.

SQL> commit;

Commit complete.

Step 4:

SQL> select first_name, email from flashtest where department_id = 30;

no rows selected

Step 5:


SQL> set lines 1000;
SQL> select versions_startscn, versions_endscn,versions_xid, versions_operation,
  2  first_name, email from flashtest VERSIONS BETWEEN SCN MINVALUE and MAXVALUE
  3  where department_id = 30;

VERSIONS_STARTSCN VERSIONS_ENDSCN VERSIONS_XID     V FIRST_NAME           EMAIL
----------------- --------------- ---------------- - -------------------- -------------------------
          4086869                 05000400DE0A0000 D Karen                KCOLMENA@HRDEPT
          4086869                 05000400DE0A0000 D Guy                  GHIMURO@HRDEPT
          4086869                 05000400DE0A0000 D Sigal                STOBIAS@HRDEPT
          4086869                 05000400DE0A0000 D Shelli               SBAIDA@HRDEPT
          4086869                 05000400DE0A0000 D Alexander            AKHOO@HRDEPT
          4086869                 05000400DE0A0000 D Den                  DRAPHEAL@HRDEPT
          4086866         4086869 0700080067080000 U Karen                KCOLMENA@HRDEPT
          4086866         4086869 0700080067080000 U Guy                  GHIMURO@HRDEPT
          4086866         4086869 0700080067080000 U Sigal                STOBIAS@HRDEPT
          4086866         4086869 0700080067080000 U Shelli               SBAIDA@HRDEPT
          4086866         4086869 0700080067080000 U Alexander            AKHOO@HRDEPT

VERSIONS_STARTSCN VERSIONS_ENDSCN VERSIONS_XID     V FIRST_NAME           EMAIL
----------------- --------------- ---------------- - -------------------- -------------------------
          4086866         4086869 0700080067080000 U Den                  DRAPHEAL@HRDEPT
                          4086866                    Den                  DRAPHEAL
                          4086866                    Alexander            AKHOO
                          4086866                    Shelli               SBAIDA
                          4086866                    Sigal                STOBIAS
                          4086866                    Guy                  GHIMURO
                          4086866                    Karen                KCOLMENA

18 rows selected.

Okay. Now I want to restore just these rows using the DB Console, and show how Flashback Versions Query can be used in the EM DB Console.

Steps 1–6: (screenshots) walk through the Flashback Versions Query wizard in the EM DB Console.

Step 7: You need to check the dependency options.

After submitting, the transactions for that SCN are restored.

SQL> select first_name, email from flashtest where department_id = 30;

FIRST_NAME           EMAIL
-------------------- -------------------------
Den                  DRAPHEAL@HRDEPT
Alexander            AKHOO@HRDEPT
Shelli               SBAIDA@HRDEPT
Sigal                STOBIAS@HRDEPT
Guy                  GHIMURO@HRDEPT
Karen                KCOLMENA@HRDEPT

6 rows selected.

Datapump query parameter

This post shows you how the Data Pump QUERY parameter is used in expdp and impdp.

For Export :

Default: none

Purpose

Enables you to filter the data that is exported by specifying a clause for a SQL SELECT statement, which is applied to all tables in the export job or to a specific table.

Syntax and Description

QUERY = [schema.][table_name:] query_clause

The query_clause is typically a WHERE clause for fine-grained row selection, but could be any SQL clause. For example, an ORDER BY clause could be used to speed up a migration from a heap-organized table to an index-organized table. If a [schema.]table_name is not supplied, the query is applied to (and must be valid for) all tables in the export job. A table-specific query overrides a query applied to all tables.

When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one can be specified per table. Oracle highly recommends that you place QUERY specifications in a parameter file; otherwise, you might have to use operating system-specific escape characters on the command line before each quotation mark. To specify a schema other than your own in a table-specific query, you need the EXP_FULL_DATABASE role.

Restrictions

The QUERY parameter cannot be used in conjunction with the following parameters:

  • CONTENT=METADATA_ONLY
  • ESTIMATE_ONLY
  • TRANSPORT_TABLESPACES

Example For Export :


SQL> create table empquery as select * from scott.emp;

Table created.

SQL> select * from empquery;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

Here I wrote the export parameters using the *.par file method.

Open Notepad and write the parameters like below:


tables=empquery
query=empquery:"WHERE deptno > 20"
nologfile=y
directory=data_pump_dir
dumpfile=queryme.dmp 

and save it as a .par file. Then run expdp with the parfile parameter:

C:\Users\mazar>expdp testnet/testnet parfile=C:\app\oracle\mazar\admin\azardb\dpdump\querytest.par

Export: Release 11.2.0.1.0 - Production on Tue Nov 9 14:49:16 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTNET"."SYS_EXPORT_TABLE_05":  testnet/******** parfile=C:\app\oracle\mazar\admin\azardb\dpdump\querytest.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "TESTNET"."EMPQUERY"                        8.273 KB       6 rows
Master table "TESTNET"."SYS_EXPORT_TABLE_05" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTNET.SYS_EXPORT_TABLE_05 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\QUERYME.DMP
Job "TESTNET"."SYS_EXPORT_TABLE_05" successfully completed at 14:49:40

After the export, I dropped the table and imported the dump file to check what it contained.


SQL> drop table empquery;

Table dropped.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\Users\mazar>impdp testnet/testnet tables=empquery  directory=data_pump_dir dumpfile=queryme.dmp logfile=queryme.log

Import: Release 11.2.0.1.0 - Production on Tue Nov 9 14:52:26 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TESTNET"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TESTNET"."SYS_IMPORT_TABLE_01":  testnet/******** tables=empquery directory=data_pump_dir dumpfile=queryme.dmp logfile=queryme.log
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TESTNET"."EMPQUERY"                        8.273 KB       6 rows
Job "TESTNET"."SYS_IMPORT_TABLE_01" successfully completed at 14:52:32

Then connect as the testnet user and rerun the query:

SQL> /

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30

6 rows selected.

You can see that only the DEPTNO=30 rows are available.

For Import :

I created the same table again and ran the same SELECT shown in the first steps:

SQL> create table empquery as select * from scott.emp;

Table created.

SQL> select * from empquery;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


C:\Users\mazar>expdp testnet/testnet tables=empquery  directory=data_pump_dir dumpfile=queryexport.dmp logfile=queryexport.log

Export: Release 11.2.0.1.0 - Production on Tue Nov 9 14:55:59 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTNET"."SYS_EXPORT_TABLE_05":  testnet/******** tables=empquery directory=data_pump_dir dumpfile=queryexport.dmp logfile=queryexport.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "TESTNET"."EMPQUERY"                        8.585 KB      14 rows
Master table "TESTNET"."SYS_EXPORT_TABLE_05" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTNET.SYS_EXPORT_TABLE_05 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\QUERYEXPORT.DMP
Job "TESTNET"."SYS_EXPORT_TABLE_05" successfully completed at 14:56:20

C:\Users\mazar>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Tue Nov 9 14:57:59 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: testnet/testnet

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table empquery;

Table dropped.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Here I created a .par file like below:

tables=empquery
query=empquery:"WHERE deptno <= 20"
nologfile=y
directory=data_pump_dir
dumpfile=queryexport.dmp 

and save it as a .par file.

C:\Users\mazar>impdp testnet/testnet tables=empquery parfile=D:\backup\sqlloader\queryimp.par

Import: Release 11.2.0.1.0 - Production on Tue Nov 9 15:00:38 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TESTNET"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TESTNET"."SYS_IMPORT_TABLE_01":  testnet/******** tables=empquery parfile=D:\backup\sqlloader\queryimp.par
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TESTNET"."EMPQUERY"                        8.585 KB       8 out of 14 rows
Job "TESTNET"."SYS_IMPORT_TABLE_01" successfully completed at 15:00:58
C:\Users\mazar>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Tue Nov 9 15:01:23 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: testnet/testnet

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set lines 500
SQL> select * from empquery;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

8 rows selected.

SQL>

Datapump filesize parameter

Default: 0 (unlimited)

Purpose

Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable.

Syntax and Description

FILESIZE=integer[B | K | M | G]

The integer can be followed by B, K, M, or G (indicating bytes, kilobytes, megabytes, and gigabytes respectively). Bytes is the default. The actual size of the resulting file may be rounded down slightly to match the size of the internal blocks used in dump files.

Restrictions

The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes.

Example :


C:\Users\mazar>expdp testnet/testnet tables=mytest  directory=data_pump_dir dumpfile=filesize.dmp logfile=estimatest.log filesize=50k

Export: Release 11.2.0.1.0 - Production on Tue Nov 9 10:53:50 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTNET"."SYS_EXPORT_TABLE_01":  testnet/******** tables=mytest directory=data_pump_dir dumpfile=filesize.dmp logfile=estimatest.log filesize=50k
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TESTNET"."MYTEST"                          6.625 KB       1 rows
ORA-31694: master table "TESTNET"."SYS_EXPORT_TABLE_01" failed to load/unload
ORA-39095: Dump file space has been exhausted: Unable to allocate 4096 bytes
Job "TESTNET"."SYS_EXPORT_TABLE_01" stopped due to fatal error at 10:54:43
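The ORA-39095 error above occurs because the single 50 KB file filled up and the dumpfile specification contains no substitution variable, so Data Pump cannot create another file. Instead of raising FILESIZE (as done next), you can keep the small limit and let Data Pump roll over to additional files with the %U substitution variable. A hedged sketch (not run here; the file name is illustrative):

```shell
C:\Users\mazar>expdp testnet/testnet tables=mytest directory=data_pump_dir dumpfile=filesize%U.dmp logfile=estimatest.log filesize=50k
```

With %U, Data Pump generates filesize01.dmp, filesize02.dmp, and so on as each file reaches the limit.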


C:\Users\mazar>expdp testnet/testnet tables=mytest  directory=data_pump_dir dumpfile=filesizenew.dmp logfile=estimatestnew.log filesize=100k

Export: Release 11.2.0.1.0 - Production on Tue Nov 9 12:26:27 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTNET"."SYS_EXPORT_TABLE_02":  testnet/******** tables=mytest directory=data_pump_dir dumpfile=filesizenew.dmp logfile=estimatestnew.log filesize=1
00k
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TESTNET"."MYTEST"                          6.625 KB       1 rows
Master table "TESTNET"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTNET.SYS_EXPORT_TABLE_02 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\FILESIZENEW.DMP
Job "TESTNET"."SYS_EXPORT_TABLE_02" successfully completed at 12:28:17

Ref :

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm

Datapump Estimate parameter

ESTIMATE

Default: BLOCKS

Purpose

Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes). The estimate is printed in the log file and displayed on the client’s standard output device. The estimate is for table row data only; it does not include metadata.

Syntax and Description

ESTIMATE={BLOCKS | STATISTICS}
  • BLOCKS – The estimate is calculated by multiplying the number of database blocks used by the source objects, times the appropriate block sizes.
  • STATISTICS – The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.

Restrictions

If the Data Pump export job involves compressed tables, the default size estimation given for the compressed table is inaccurate when ESTIMATE=BLOCKS is used. This is because the size estimate does not reflect that the data was stored in a compressed form. To get a more accurate size estimate for compressed tables, use ESTIMATE=STATISTICS.

Example : Estimate= Statistics


C:\Users\mazar>expdp testnet/testnet tables=mytest estimate=statistics directory=data_pump_dir dumpfile=estimate_Stat.dmp logfile=estimate.log

Export: Release 11.2.0.1.0 - Production on Tue Nov 9 09:53:33 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTNET"."SYS_EXPORT_TABLE_01":  testnet/******** tables=mytest estimate=statistics directory=data_pump_dir dumpfile=estimate_Stat.dmp logfile=estima
te.log
Estimate in progress using STATISTICS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "TESTNET"."MYTEST"                          6.582 KB
Total estimation using STATISTICS method: 6.582 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TESTNET"."MYTEST"                          6.625 KB       1 rows
Master table "TESTNET"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTNET.SYS_EXPORT_TABLE_01 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\ESTIMATE_STAT.DMP
Job "TESTNET"."SYS_EXPORT_TABLE_01" successfully completed at 09:54:23

Estimate=blocks


C:\Users\mazar>expdp testnet/testnet tables=mytest estimate=blocks directory=data_pump_dir dumpfile=estimatestat.dmp logfile=estimatest.log

Export: Release 11.2.0.1.0 - Production on Tue Nov 9 09:55:26 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTNET"."SYS_EXPORT_TABLE_01":  testnet/******** tables=mytest estimate=blocks directory=data_pump_dir dumpfile=estimatestat.dmp logfile=estimatest.
log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "TESTNET"."MYTEST"                             64 KB
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TESTNET"."MYTEST"                          6.625 KB       1 rows
Master table "TESTNET"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTNET.SYS_EXPORT_TABLE_01 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\ESTIMATESTAT.DMP
Job "TESTNET"."SYS_EXPORT_TABLE_01" successfully completed at 09:55:49
C:\Users\mazar>
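The two estimates differ because BLOCKS simply multiplies the number of allocated blocks by the block size (so even a one-row table reports a full 64 KB extent), while STATISTICS relies on the optimizer statistics recorded for each table. If those statistics are stale, refresh them before trusting ESTIMATE=STATISTICS. A minimal sketch, based on the TESTNET.MYTEST table used in the examples above:

```sql
-- Refresh optimizer statistics so ESTIMATE=STATISTICS has accurate
-- figures to work with (sketch; adjust owner/table to your schema).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'TESTNET', tabname => 'MYTEST');
END;
/
```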

ESTIMATE_ONLY

Default: n

Purpose

Instructs Export to estimate the space that a job would consume, without actually performing the export operation.

Syntax and Description

ESTIMATE_ONLY={y | n}

If ESTIMATE_ONLY=y, then Export estimates the space that would be consumed, but quits without actually performing the export operation.

Example :


C:\Users\mazar>set oracle_sid=azardb

C:\Users\mazar>expdp testnet/testnet estimate_only=y nologfile=y

Export: Release 11.2.0.1.0 - Production on Tue Nov 9 09:47:32 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTNET"."SYS_EXPORT_SCHEMA_01":  testnet/******** estimate_only=y nologfile=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "TESTNET"."A"                                   9 MB
.  estimated "TESTNET"."DEPT"                               64 KB
.  estimated "TESTNET"."MYTEST"                             64 KB
.  estimated "TESTNET"."TEST"                               64 KB
.  estimated "TESTNET"."EMP"                                 0 KB
.  estimated "TESTNET"."MYVAR"                               0 KB
Total estimation using BLOCKS method: 9.187 MB
Job "TESTNET"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:50:00

Ref :

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm

CONTENT datapump parameter

This post gives you an easy understanding of the CONTENT Data Pump parameter.

Default : ALL

Purpose :

Enables you to filter what Export unloads: data only, metadata only, or both.

Syntax and Description :

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
  • ALL unloads both data and metadata. This is the default.
  • DATA_ONLY unloads only table row data; no database object definitions are unloaded.
  • METADATA_ONLY unloads only database object definitions; no table row data is unloaded.

Restrictions

The CONTENT=METADATA_ONLY parameter cannot be used in conjunction with the parameter TRANSPORT_TABLESPACES (transportable-tablespace-mode).

See Example :


SQL> conn sys/Admin123@azardb as sysdba
Connected.
SQL> create user dpuser identified by dpuser;

User created.

SQL> grant connect,resource,dba to dpuser;

Grant succeeded.

SQL> grant read,write on directory data_pump_dir to dpuser;

Grant succeeded.

SQL> conn dpuser/dpuser;
Connected.
SQL> create table employee as select  * from scott.emp;

Table created.

Now I am going to export only the metadata:


C:\Users\mazar>expdp dpuser/dpuser@azardb directory=data_pump_dir dumpfile=contentmeta.dmp  log=contentmeta.log content=metadata_only

Export: Release 11.2.0.1.0 - Production on Mon Nov 8 10:10:33 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "log=contentmeta.log" Location: Command Line, Replaced with: "logfile=contentmeta.log"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "DPUSER"."SYS_EXPORT_SCHEMA_01":  dpuser/********@azardb directory=data_pump_dir dumpfile=contentmeta.dmp logfile=contentmeta.log content=metadata_only reuse_dumpfiles=true
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Master table "DPUSER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DPUSER.SYS_EXPORT_SCHEMA_01 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\CONTENTMETA.DMP
Job "DPUSER"."SYS_EXPORT_SCHEMA_01" successfully completed at 10:14:10

You can see that no row data was exported; only the metadata was written to the dump file.

C:\Users\mazar>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 8 10:53:55 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: dpuser/dpuser

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table employee;

Table dropped.

I dropped the table and then imported what I had exported (the metadata).

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
BIN$F3pNm4h2RDelNFamba47MA==$0 TABLE

C:\Users\mazar>impdp dpuser/dpuser@azardb directory=data_pump_dir dumpfile=contentmeta.dmp  log=contentmeta.log

Import: Release 11.2.0.1.0 - Production on Mon Nov 8 10:55:05 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "log=contentmeta.log" Location: Command Line, Replaced with: "logfile=contentmeta.log"
Master table "DPUSER"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "DPUSER"."SYS_IMPORT_FULL_01":  dpuser/********@azardb directory=data_pump_dir dumpfile=contentmeta.dmp logfile=contentmeta.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"DPUSER" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Job "DPUSER"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 10:55:12

You can see that no rows were imported, only the metadata:


 C:\Users\mazar>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Mon Nov 8 10:55:18 2010

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: dpuser/dpuser

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EMPLOYEE                       TABLE

SQL> select * from employee;

no rows selected
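A metadata-only dump is also handy for extracting DDL without touching the database at all: impdp can spool the CREATE statements into a script with the SQLFILE parameter instead of executing them. A sketch reusing the contentmeta.dmp file from this example (contentddl.sql is a hypothetical output name):

```
C:\Users\mazar>impdp dpuser/dpuser@azardb directory=data_pump_dir dumpfile=contentmeta.dmp sqlfile=contentddl.sql
```

Nothing is created in the database; the DDL lands in contentddl.sql under the same directory object.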

CONTENT= data_only :

SQL> create table employee as select * from scott.emp;

Table created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options



C:\Users\mazar>expdp dpuser/dpuser@azardb directory=data_pump_dir dumpfile=contentmeta.dmp  log=contentmeta.log content=data_only

Export: Release 11.2.0.1.0 - Production on Mon Nov 8 10:56:38 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "log=contentmeta.log" Location: Command Line, Replaced with: "logfile=contentmeta.log"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "DPUSER"."SYS_EXPORT_SCHEMA_01":  dpuser/********@azardb directory=data_pump_dir dumpfile=contentmeta.dmp logfile=contentmeta.log content=data_only reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
. . exported "DPUSER"."EMPLOYEE"                         8.585 KB      14 rows
Master table "DPUSER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for DPUSER.SYS_EXPORT_SCHEMA_01 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\CONTENTMETA.DMP
Job "DPUSER"."SYS_EXPORT_SCHEMA_01" successfully completed at 10:56:57



Enter user-name: dpuser/dpuser

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table employee;

Table dropped.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


C:\Users\mazar>impdp dpuser/dpuser@azardb directory=data_pump_dir dumpfile=contentmeta.dmp  log=contentmeta.log content=data_only

Import: Release 11.2.0.1.0 - Production on Mon Nov 8 11:11:08 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "log=contentmeta.log" Location: Command Line, Replaced with: "logfile=contentmeta.log"
Master table "DPUSER"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "DPUSER"."SYS_IMPORT_FULL_01":  dpuser/********@azardb directory=data_pump_dir dumpfile=contentmeta.dmp logfile=contentmeta.log content=data_only
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [62]
TABLE_DATA:"DPUSER"."EMPLOYEE"
ORA-31603: object "EMPLOYEE" of type TABLE not found in schema "DPUSER"

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 8171

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
290C5B9C     18990  package body SYS.KUPW$WORKER
290C5B9C      8192  package body SYS.KUPW$WORKER
290C5B9C     18552  package body SYS.KUPW$WORKER
290C5B9C      4105  package body SYS.KUPW$WORKER
290C5B9C      8875  package body SYS.KUPW$WORKER
290C5B9C      1649  package body SYS.KUPW$WORKER
2A56420C         2  anonymous block

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [62]
TABLE_DATA:"DPUSER"."EMPLOYEE"
ORA-31603: object "EMPLOYEE" of type TABLE not found in schema "DPUSER"

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 8171

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
290C5B9C     18990  package body SYS.KUPW$WORKER
290C5B9C      8192  package body SYS.KUPW$WORKER
290C5B9C     18552  package body SYS.KUPW$WORKER
290C5B9C      4105  package body SYS.KUPW$WORKER
290C5B9C      8875  package body SYS.KUPW$WORKER
290C5B9C      1649  package body SYS.KUPW$WORKER
2A56420C         2  anonymous block

Job "DPUSER"."SYS_IMPORT_FULL_01" stopped due to fatal error at 11:11:13
You can see that no table is available, so you cannot import data without the table definition being in place.
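The fix is to put the table definition back before loading the rows. Note that when CONTENT=DATA_ONLY is specified, Data Pump Import defaults TABLE_EXISTS_ACTION to APPEND, so the rows load straight into the recreated table. A sketch:

```
SQL> create table employee as select * from scott.emp where 1=0;

Table created.

C:\Users\mazar>impdp dpuser/dpuser@azardb directory=data_pump_dir dumpfile=contentmeta.dmp content=data_only
```

The `where 1=0` predicate recreates the structure as an empty copy; the data-only import then supplies the rows.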

Ref :

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm

My Blog First Year Birthday

Thanks Everyone 🙂

ORA-39170: Schema expression ‘AZAR$RIY’ does not correspond to any schemas

This error appears when a schema created with a quoted, lowercase identifier is passed to expdp without proper quoting. First, set up such a user:

SQL> create user "azar$riy" identified by azar;

User created.

SQL> grant connect,resource,dba to "azar$riy";

Grant succeeded.

SQL> grant read,write on directory data_pump_dir to "azar$riy";

Grant succeeded.

SQL> conn "azar$riy"/azar;
Connected.
SQL> create table a as select * from scott.emp;

Table created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


C:\Users\mazar>expdp system/Admin123@azardb directory=data_pump_dir schemas="azar$riy" dumpfile=azartest.dmp log=azartest.log 
Export: Release 11.2.0.1.0 - Production on Sun Nov 7 15:42:09 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39170: Schema expression 'AZAR$RIY' does not correspond to any schemas.


The schema was created as a quoted, lowercase identifier, so the name must reach Export exactly as "azar$riy". Escaping the double quotes on the command line fixes it:

C:\Users\mazar>expdp system/Admin123@azardb directory=data_pump_dir schemas='\"azar$riy\"' dumpfile=azartest.dmp log=azartest.log

Export: Release 11.2.0.1.0 - Production on Sun Nov 7 15:42:20 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "log=azartest.log" Location: Command Line, Replaced with: "logfile=azartest.log"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@azardb directory=data_pump_dir dumpfile=azartest.dmp logfile=azartest.log schemas='"azar$riy"' reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "azar$riy"."A"                              8.578 KB      14 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\AZARTEST.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 15:43:10
C:\Users\mazar>
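Quote-escaping rules differ between cmd.exe and Unix shells, so an alternative that sidesteps the problem entirely is a parameter file: values in a parfile reach expdp untouched, with no shell escaping needed. A sketch (schemas.par is a hypothetical file name):

```
# schemas.par -- the double quotes survive as-is, no escaping required
directory=data_pump_dir
schemas="azar$riy"
dumpfile=azartest.dmp
logfile=azartest.log
```

Then run the export with:

```
C:\Users\mazar>expdp system/Admin123@azardb parfile=schemas.par
```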

Rename and Relocate datafiles

This post will show you how to rename and also relocate your datafiles.

Rename Datafiles in a Single Tablespace (Database Open Mode):

Caution: back up your tablespace before you make any changes to it.

Step 1: Create a tablespace with two datafiles.


SQL> create tablespace renametbs datafile 'd:\backup\renametbs01.dbf' size 50m;

Tablespace created.

SQL> alter tablespace renametbs add datafile 'd:\backup\renametbs02.dbf' size 50m;

Tablespace altered.

SQL> select file_name,tablespace_name from dba_data_files where tablespace_name='RENAMETBS';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
D:\BACKUP\RENAMETBS01.DBF
RENAMETBS

D:\BACKUP\RENAMETBS02.DBF
RENAMETBS

Step 2: Tablespace Offline

SQL> alter tablespace renametbs offline normal;

Tablespace altered.

Step 3: Rename your datafiles. Pay attention here: if you execute the rename command before the files have been renamed at the OS level, it will fail with errors like the following.


SQL> alter tablespace renametbs rename datafile 'd:\backup\renametbs01.dbf','d:\backup\renametbs02.dbf' TO 'd:\backup\renametbs1.dbf','d:\backup\renametbs2.dbf';
alter tablespace renametbs rename datafile 'd:\backup\renametbs01.dbf','d:\backup\renametbs02.dbf' TO 'd:\backup\renametbs1.dbf','d:\backup\renametbs2.dbf'
*
ERROR at line 1:
ORA-01525: error in renaming data files
ORA-01141: error renaming data file 8 - new file 'd:\backup\renametbs1.dbf' not found
ORA-01110: data file 8: 'D:\BACKUP\RENAMETBS01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

What this means: after taking the tablespace offline, you must copy the datafiles to the same location and rename the copies manually (at the OS level) to the new names you intend to use; otherwise the system cannot find the files when you run the rename command.
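The OS-level step can be done without leaving SQL*Plus via the HOST command. A sketch using the Windows paths from this example:

```
SQL> host copy d:\backup\renametbs01.dbf d:\backup\renametbs1.dbf
SQL> host copy d:\backup\renametbs02.dbf d:\backup\renametbs2.dbf
```

Once the rename succeeds and the tablespace is back online, the old copies can be deleted at the OS level.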

SQL> alter tablespace renametbs rename datafile 'D:\BACKUP\RENAMETBS01.DBF','D:\BACKUP\RENAMETBS02.DBF' TO 'D:\BACKUP\RENAMETBS1.DBF','D:\BACKUP\RENAMETBS2.DBF';

Tablespace altered.

Step 4: Tablespace Online and Take backup.


SQL> alter tablespace renametbs online;

Tablespace altered.

SQL> select file_name,tablespace_name from dba_data_files where tablespace_name='RENAMETBS';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
D:\BACKUP\RENAMETBS1.DBF
RENAMETBS

D:\BACKUP\RENAMETBS2.DBF
RENAMETBS
SQL>

Relocate Datafiles in a Single Tablespace (Database Open Mode):

Step 1: Check your current Datafiles location


SQL> select file_name,tablespace_name from dba_data_files where tablespace_name='RENAMETBS';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
D:\BACKUP\RENAMETBS1.DBF
RENAMETBS

D:\BACKUP\RENAMETBS2.DBF
RENAMETBS

Step 2: Check your datafile sizes (you need enough free space at the target location).

SQL> select file_name,bytes from dba_data_files where tablespace_name='RENAMETBS';

FILE_NAME
--------------------------------------------------------------------------------
     BYTES
----------
D:\BACKUP\RENAMETBS1.DBF
  52428800

D:\BACKUP\RENAMETBS2.DBF
  52428800

Step 3: Tablespace offline


SQL> alter tablespace renametbs offline normal;

Tablespace altered.

Step 4: Copy your datafiles to the new location at the OS level, then point the tablespace at the new paths.


SQL> alter tablespace renametbs rename datafile 'D:\BACKUP\RENAMETBS1.DBF','D:\BACKUP\RENAMETBS2.DBF' TO 'C:\RENAMETB\RENAMETBS1.DBF','C:\RENAMETB\RENAMETBS2.DBF';

Tablespace altered.

Step 5: After relocating, verify the new datafile locations, bring the tablespace online, and take a backup.


SQL> select file_name,tablespace_name from dba_data_files where tablespace_name='RENAMETBS';

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
C:\RENAMETB\RENAMETBS1.DBF
RENAMETBS

C:\RENAMETB\RENAMETBS2.DBF
RENAMETBS

SQL> alter tablespace renametbs online;

Tablespace altered.

SQL>

Rename Datafiles Across Multiple Tablespaces (Database Mount Mode):

Step 1: Startup mount


SQL> startup mount
ORACLE instance started.

Total System Global Area  535662592 bytes
Fixed Size                  1375792 bytes
Variable Size             209715664 bytes
Database Buffers          318767104 bytes
Redo Buffers                5804032 bytes
Database mounted.

Step 2: Check your datafile


SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\SYSTEM01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\SYSAUX01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\UNDOTBS01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\USERS01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\EXAMPLE01.DBF
D:\BACKUP\RCRMAN01.DBF
D:\BACKUP\USERSTBS01.DBF
C:\RENAMETB\RENAMETBS1.DBF
D:\BACKUP\TBSPITR01.DBF
C:\RENAMETB\RENAMETBS2.DBF
D:\BACKUP\TESTTB01.DBF

11 rows selected.

Here I chose to rename “C:\RENAMETB\RENAMETBS1.DBF” to “C:\RENAMETB\RENAMETBS01.DBF”, and likewise “D:\BACKUP\TESTTB01.DBF” to “D:\BACKUP\TESTTB1.DBF”. (Remember to rename the files at the OS level first.)

Step 3: Rename datafiles belonging to multiple tablespaces in a single command.


SQL> alter database rename file 'C:\renametb\renametbs1.dbf','D:\backup\testtb01.dbf' to 'C:\renametb\renametbs01.dbf','D:\backup\testtb1.dbf';

Database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\SYSTEM01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\SYSAUX01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\UNDOTBS01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\USERS01.DBF
C:\APP\ORACLE\MAZAR\ORADATA\AZARDB\EXAMPLE01.DBF
D:\BACKUP\RCRMAN01.DBF
D:\BACKUP\USERSTBS01.DBF
C:\RENAMETB\RENAMETBS01.DBF
D:\BACKUP\TBSPITR01.DBF
C:\RENAMETB\RENAMETBS2.DBF
D:\BACKUP\TESTTB1.DBF

11 rows selected.

Step 4: Open the database and verify.


SQL> alter database open;

Database altered.

SQL> select file_name,tablespace_name from dba_data_files where tablespace_name in ('RENAMETBS','TESTTB');

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
C:\RENAMETB\RENAMETBS01.DBF
RENAMETBS

C:\RENAMETB\RENAMETBS2.DBF
RENAMETBS

D:\BACKUP\TESTTB1.DBF
TESTTB
SQL>

Reference:

http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/dfiles005.htm#i1006457