Tuesday, January 29, 2013

What is the difference between restoring and recovering?

Restoring the database involves copying backup files from the backup location to disk; sometimes it also involves replacing damaged files. Recovery is the process of applying redo logs to the database in order to roll it forward. The database can be rolled forward to a specific point in time, or up to the last transaction in the log files.

During restoration, RMAN decides which backup sets, datafiles, and archive logs are necessary for the database restoration and verifies their contents. With RESTORE ... VALIDATE, this verification can be run on its own, without any files actually being restored.

Restoration of a datafile or a control file from backup involves retrieving the file onto disk from a backup location and making it available to the database server, while recovering a datafile involves taking a restored copy of the datafile and applying the changes recorded in the redo logs.
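A minimal RMAN sketch of the two phases (assuming a whole-database restore; adjust to your scenario):

RMAN> RESTORE DATABASE;   # copies datafiles from the backup location back onto disk
RMAN> RECOVER DATABASE;   # applies redo to roll the restored files forward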

Monday, January 28, 2013

Oracle backup fails with "could not read file header for datafile 5 error reason 1" error

When attempting to perform a database backup I received the following error:

"could not read file header for datafile 5 error reason 1"

To diagnose the error, first check the DBA_DATA_FILES view for the file with FILE_ID = 5:

SELECT FILE_ID, FILE_NAME FROM DBA_DATA_FILES;
The results returned will look something like this:
FILE_ID    FILE_NAME
-------    ---------------------------------------------------
1          /u01/oradata/DBName/system01.dbf
2          /u01/oradata/DBName/sysaux01.dbf
3          /u01/oradata/DBName/undotbs01.dbf
4          /u01/oradata/DBName/users01.dbf
5          /u01/app/oracle/product/11.2.0.3/dbs/MISSING00005
Solution: Check your data files to see which one is missing, then rename the missing file:

ALTER DATABASE RENAME FILE '/u01/app/oracle/product/11.2.0.3/dbs/MISSING00005' TO '/u01/app/oracle/oradata/DBName/mydatefile1.dbf';
followed by
RECOVER DATAFILE 5;
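If the datafile was taken offline along the way, a follow-up step (assuming file 5 is the file just recovered) would be to bring it back online:

ALTER DATABASE DATAFILE 5 ONLINE;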

How to check whether an Oracle database was started with spfile or pfile?

There are several ways to check whether the database was started using an spfile or a pfile:

  1. Run
    SHOW PARAMETER SPFILE;
    
    If it returns a path to an spfile in the VALUE field, then an spfile was used. If the VALUE field is blank, then a pfile was used to start up the database:
    NAME                         TYPE    VALUE
    ---------------------------- -------- ---------------------
    spfile                       string
    
  2. Run ALTER SYSTEM statement and see if it causes an error:
    ALTER SYSTEM SET AUDIT_TRAIL='DB_EXTENDED' SCOPE=SPFILE;
    
    If a pfile was used, the following error will occur:
    ORA-32001: write to SPFILE requested but no SPFILE is in use
    
  3. By using V$SPPARAMETER view:
    SELECT ISSPECIFIED, COUNT(*) FROM V$SPPARAMETER GROUP BY ISSPECIFIED;
    
    If a pfile was used, FALSE will be returned for all settings - meaning they all come from the pfile:
    ISSPEC   COUNT(*)
    ------ ----------
    FALSE         348
    
    If the result returned looks something like this:
    ISSPEC   COUNT(*)
    ------ ----------
    FALSE         327
    TRUE           21
    
    then we know that an spfile was used, because ISSPECIFIED returns TRUE for the settings that are specified in the spfile.
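A related check, beyond the three above, is to query V$PARAMETER directly; as with method 1, a non-empty value means an spfile was used:

SELECT VALUE FROM V$PARAMETER WHERE NAME = 'spfile';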

Friday, January 25, 2013

ORA-01180: can not create datafile 1

RMAN-03002: failure of restore command at 01/24/2013 16:44:05
ORA-01180: can not create datafile 1
ORA-01110: data file 1: '/u01/oradata/pubs/system01.dbf'
Occurred when running database restore.

I ran

list incarnation;
the result was

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- ------- ---------- ----------
1       1       DBName   707574411        PARENT  1          23-JAN-13
2       2       DBName   707574411        CURRENT 799793     25-JAN-13
Cause: Incarnation mismatch - the backup being restored belongs to a previous incarnation of the database.

Solution: Reset incarnation

RESET DATABASE TO INCARNATION 1;
What is an incarnation?

A database incarnation is created whenever you open the database with the RESETLOGS option.

The current incarnation is the one in which the database is running at present.

The incarnation from which the current incarnation branched after an ALTER DATABASE OPEN RESETLOGS was performed is called the parent incarnation.

If you want to go back to an SCN that is not part of the current incarnation, you need to use the RESET DATABASE TO INCARNATION command, as shown in the example above.

The purpose of incarnations

An incarnation helps to identify redo streams which have the same SCN, but occurred at different points in time. This prevents the possibility of applying the wrong archive log file from a previous incarnation which could corrupt the database.

Suppose we are at incarnation 1 and at SCN 100 in the database. We do a resetlogs, and the incarnation of the database becomes 2. Suppose we do another resetlogs, and it so happens that the SCN at the time of the resetlogs was also 100. Later we want to do a point-in-time recovery to SCN 100. How does Oracle know which SCN 100 is meant, how far back in time the database should be rolled, and which archive logs need to be applied?

This is where incarnations come in: by setting the database to a particular incarnation, we tell RMAN which redo stream to use for the point-in-time recovery.
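As a hedged sketch (reusing SCN 100 from the example above; the numbers are illustrative), the RMAN flow would look like:

RESET DATABASE TO INCARNATION 1;
RESTORE DATABASE UNTIL SCN 100;
RECOVER DATABASE UNTIL SCN 100;
ALTER DATABASE OPEN RESETLOGS;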

Error accessing PRODUCT_USER_PROFILE

Error accessing PRODUCT_USER_PROFILE
Warning: Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM
Occurred after restoring the database and attempting to log in via SQL*Plus.

Solution: Run the @?/sqlplus/admin/pupbld.sql script while logged in as the SYSTEM user, to install the SQL*Plus PRODUCT_USER_PROFILE tables. These tables allow SQL*Plus to disable commands per user.

Important: Run this script logged in as the SYSTEM user, not SYS. When you log in to SQL*Plus using CONNECT / AS SYSDBA, you end up logged in as the SYS account. To double-check, run the SHOW USER command.
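A minimal sketch of the session (the password is a placeholder):

sqlplus system/your_password
SQL> SHOW USER
USER is "SYSTEM"
SQL> @?/sqlplus/admin/pupbld.sql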

Thursday, January 24, 2013

ORA-01113: file 1 needs media recovery

RMAN-03002: failure of alter db command at 01/24/2013 12:08:25
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/u40/oradata/pubs/system01.dbf'
The error occurs when running the ALTER DATABASE OPEN command.

Cause: The RECOVER DATABASE; statement was not issued after running RESTORE DATABASE and before opening the database via ALTER DATABASE OPEN.

Solution: Issue RECOVER DATABASE;

followed by ALTER DATABASE OPEN RESETLOGS; (or NORESETLOGS if no redo logs were lost)

Use RESETLOGS after incomplete recovery (when the entire redo stream wasn't applied). RESETLOGS will initialize the logs, reset your log sequence number, and start a new "incarnation" of the database.

Use NORESETLOGS when doing complete recovery (when the entire redo stream was applied). Oracle will continue using the existing (valid) log files.

The NORESETLOGS option does not clear the redo log files during startup and allows the online redo logs to be used for recovery. It is only used in the scenario where manual recovery is started, CANCEL is used, and then RECOVER DATABASE is started.
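Putting it together, a hedged sketch of the whole sequence in RMAN:

RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;   # or NORESETLOGS after a complete recovery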

ORA-19573: cannot obtain exclusive enqueue for datafile 1

The error occurs when executing the RESTORE DATABASE statement.

Cause: The problem occurs because of an attempt to restore an open database.

Solution: Mount the database without opening it:

Run
SHUTDOWN IMMEDIATE;

STARTUP MOUNT;
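With the database mounted, the restore can then be re-run:

RESTORE DATABASE;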

How to recover a dropped table in Oracle 11g when flashback mode is off

Recovering a dropped table is easy in Oracle, provided that the table was not dropped with the PURGE option. If the table is dropped with PURGE, the space occupied by the table is released immediately and the table does not get moved into the recycle bin. But if the table is dropped without the PURGE option, Oracle has this very neat feature - the recycle bin, similar to the Recycle Bin in Windows. There are two recycle bin views in Oracle: USER_RECYCLEBIN and DBA_RECYCLEBIN; the synonym RECYCLEBIN points to your USER_RECYCLEBIN.

The recycle bin can be turned on and off with the RECYCLEBIN initialization parameter. When a table is dropped, it gets renamed to a system-generated name prefixed with BIN$ and is stored in the recycle bin. The important thing to know is that after the table has been dropped, it has only been renamed; the table segments are still in the tablespace, unchanged. The space occupied by the table will not be reclaimed until the table has been purged from the recycle bin.

While in the recycle bin, the table can even be queried using the newly generated name that starts with BIN$.

The table can easily be recovered from the recycle bin using flashback drop, which will rename the table to its original name.

You can check flashback mode by running

SELECT FLASHBACK_ON FROM V$DATABASE; 
First, check that the RECYCLEBIN parameter is set to ON. The recycle bin is a data dictionary table that contains information about dropped objects. Dropped tables and any associated objects such as indexes, constraints, and nested tables are not removed and still occupy space. They continue to count against user space quotas until specifically purged from the recycle bin, or in the unlikely situation where the database must purge them because of tablespace space constraints.
SHOW PARAMETER RECYCLEBIN;
If RECYCLEBIN is set to OFF, perform the following steps:
  1. Run
    ALTER SYSTEM SET RECYCLEBIN=ON SCOPE=SPFILE;
    
  2. Shut down the database
    SHUTDOWN IMMEDIATE
    
  3. Restart the database
    STARTUP
    
  4. Verify that the recyclebin parameter is now set to ON (the SPFILE-scoped change only takes effect after the restart)
    SHOW PARAMETER RECYCLEBIN;
    
Then run
SELECT * FROM RECYCLEBIN;
and see if your table is in there. If it is, use the following query (substituting your table name) to restore it:
FLASHBACK TABLE table_name TO BEFORE DROP;
Then check that the table is back:
SELECT * FROM USER_TABLES WHERE TABLE_NAME = 'TABLE_NAME';
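For illustration, an end-to-end sketch with a hypothetical table named EMP:

DROP TABLE emp;
SELECT object_name, original_name FROM recyclebin;
FLASHBACK TABLE emp TO BEFORE DROP;
SELECT table_name FROM user_tables WHERE table_name = 'EMP';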

Wednesday, January 23, 2013

How to determine the type of a backup file based on the filename

Let's say you backed up your database, your fast_recovery_area contains a number of backup files, and you don't know which ones to use for recovery - which are full backups, which are incremental, and which are level 0 or level 1. First, get the backup piece id by running:
list backuppiece '/u01/fast_recovery_area/DBName/backupset/backup_filename.bkp'
The output should look something like this:
List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
5       5       1   1   AVAILABLE   DISK        /u01/fast_recovery_area/DBName/backupset/backup_filename.bkp
Then run
list backupset 5;
where 5 is the backup piece id of your backup file. The output will look something like this:
List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
5       Incr 0  680.12M    DISK        00:00:06     18-JAN-13
        BP Key: 5   Status: AVAILABLE  Compressed: NO  Tag: TAG20130118T141414
        Piece Name: /u01/fast_recovery_area/DBName/backupset/backup_filename.bkp
  List of Datafiles in backup set 5
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/system01.dbf
  2    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/sysaux01.dbf
  3    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/undotbs01.dbf
  4    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/users01.dbf
  5    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/tbspace1_01.dbf
The Type field lists the backup type, in this case incremental. The LV field lists the level of the backup - in this case 0.
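If you want an overview of all backups at once rather than one piece at a time, the standard LIST BACKUP SUMMARY command shows the type and level for every backup set:

list backup summary;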

Tuesday, January 22, 2013

"Not a Valid Month" Error

Error received when trying to insert a date into a date field:
INSERT INTO TableName(Field1)VALUES('01/22/2013');
Cause: The date format specified in the INSERT statement does not match the NLS_DATE_FORMAT parameter.
To see the date format currently in effect (NLS_DATE_FORMAT), run:
SELECT sysdate FROM dual;
The output should be something like this: 22-JAN-13
Solution: Modify SQL statement to read:
INSERT INTO TableName(Field1)VALUES(to_date('22-JAN-2013'));
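A safer variant (my suggestion, not part of the original fix) is to pass an explicit format mask to TO_DATE, so the statement does not depend on NLS_DATE_FORMAT at all:

INSERT INTO TableName(Field1) VALUES (TO_DATE('01/22/2013', 'MM/DD/YYYY'));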

Friday, January 18, 2013

Setting permanent environmental variables and user profile file

When you initially connect to the server, before you set your environment variables by running . oraenv, variables such as ORACLE_SID and ORACLE_HOME are populated by default with values from .profile. You can change them temporarily by running . oraenv, but the more permanent ones are stored in the user profile file.

You can view your current settings by running the following from the command prompt

env

In order to view or edit .profile, go to either /home/oracle or $ORACLE_BASE and run

ls -la

to see all the files, including hidden ones - the ones preceded by the dot (.).

Then use

vi .profile

and look for the lines where ORACLE_SID, ORACLE_BASE, and ORACLE_HOME are set. Change them to the database you want by default and to your base and home directories.
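For reference, the relevant lines might look something like this (paths and SID are hypothetical; adjust to your environment):

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.3
export ORACLE_SID=DBName
export PATH=$ORACLE_HOME/bin:$PATH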

Wednesday, January 16, 2013

Undo data vs. Redo log

The main difference is that UNDO data can make a change go away, while REDO can make the change happen again if necessary. UNDO holds a copy of the previous data until the transaction commits, and for some time afterwards (depending on the undo retention period specified), thus providing read consistency: users can read the data as it was when their query started, while the transaction is still in progress. Capturing the undo data enables Oracle to roll back uncommitted changes. Undo data is used both for read consistency and to recover from failed transactions.

UNDO segments store BEFORE images of a record, while REDO logs record redo entries.

For example, a user adds a record to the database - an employee named John Doe with an ID of 3. Currently the database contains two records, with IDs of 1 and 2. UNDO captures the information needed to reverse the insert, so the record with ID 3 can be rolled back and other sessions continue to see only the original two records until the transaction commits.

Meanwhile, the REDO log records all changes to the database as they happen. If the database crashes, Oracle will first read the redo logs and apply all committed changes (the ones that hadn't made it into the datafiles yet, due to the crash) before opening the database for regular use.

Here are the basic differences between the two:

Undo                                             Redo
Makes a change go away                           Reproduces a change
Used for rollback and read consistency           Used for rolling forward the changes
Protects the database from inconsistent reads    Protects from data loss
Stored in undo segments in the undo tablespace   Buffered in the redo log buffer (SGA), then written to the online redo log files
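A small illustration (hypothetical EMPLOYEES table): while the insert below is uncommitted, other sessions still see only the old rows thanks to undo; once committed, the change is protected by redo:

INSERT INTO employees (id, name) VALUES (3, 'John Doe');
ROLLBACK;   -- undo data is applied to reverse the insert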

Configuring auditing

How to check if auditing is enabled:
  1. connect as sysdba
  2. run
    SHOW PARAMETER AUDIT_TRAIL;
    
If the audit_trail parameter is set to NONE, enable auditing by performing the following steps:
  1. Update parameter file:
    ALTER SYSTEM SET AUDIT_TRAIL='DB' SCOPE=SPFILE; 
    
  2. Shutdown the database
    SHUTDOWN IMMEDIATE;
    
  3. Re-start and open database
    STARTUP OPEN;
    
  4. Check the parameter value:
    SHOW PARAMETER AUDIT_TRAIL;
    
When the AUDIT_TRAIL parameter is set to DB, audit data can be found in the SYS.AUD$ table, or in one of the views based on this table: DBA_AUDIT_TRAIL or DBA_COMMON_AUDIT_TRAIL. In order to capture the actual SQL command issued by the user, change AUDIT_TRAIL to DB_EXTENDED. In SYS.AUD$ the SQL command will show up in the SQL_TEXT field. For example:
SELECT ACTION, SQL_TEXT FROM DBA_AUDIT_TRAIL;
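To switch to extended auditing, the same pattern as the steps above applies (a restart is required):

ALTER SYSTEM SET AUDIT_TRAIL='DB_EXTENDED' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;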
To view what is audited for each user, run:
SELECT USER_NAME, AUDIT_OPTION FROM DBA_STMT_AUDIT_OPTS WHERE NOT USER_NAME IS NULL;
To retrieve all the SQL statements run by the current user, run (AUDIT_TRAIL should be set to DB_EXTENDED in order to audit the actual SQL statements):
SELECT SQL_TEXT FROM USER_AUDIT_OBJECT;

Tuesday, January 15, 2013

Moving datafile to a different location

Moving a datafile involves four steps:
  1. Taking datafile offline
  2. Moving the file to a new location
  3. Recovering the datafile
  4. Bringing datafile online
In order to perform the steps above, the database has to be in ARCHIVELOG mode. To check, run the following:
ARCHIVE LOG LIST;
The output will be something like this:
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     56
Current log sequence           58
Enable ARCHIVELOG mode and perform the following:
ALTER DATABASE DATAFILE '/u01/oradata/tbs01.dbf' OFFLINE;
ALTER DATABASE RENAME FILE '/u01/oradata/tbs01.dbf' TO '/u01/oradata/Northwind/tbs01.dbf';
RECOVER DATAFILE '/u01/oradata/Northwind/tbs01.dbf';
ALTER DATABASE DATAFILE '/u01/oradata/Northwind/tbs01.dbf' ONLINE;

Before you rename the file, make sure you run the following command on the UNIX side to ensure the file has been copied to the new location on the OS:
cp /u01/oradata/tbs01.dbf /u01/oradata/Northwind/tbs01.dbf
If you do not copy the OS file first, you will get an error in SQL*Plus while attempting to run the ALTER DATABASE RENAME statement:
ORA-01511: error in renaming log/data files 
ORA-01516: nonexistent log file, datafile, or tempfile "/u01/oradata/Northwind/tbs01.dbf"
When done, run the statement below to ensure that all your datafiles are in the correct place:
SELECT NAME FROM v$DATAFILE;

or

SELECT * FROM V$DBFILE; (for older versions)

Monday, January 14, 2013

How do you know you are a DBA

You know you are a DBA when being asked "Why are you going to see a dentist?" and you reply "For basic maintenance and performance monitoring"

Saturday, January 12, 2013

How to enable ARCHIVELOG mode for a database

ARCHIVELOG mode has a number of benefits:
  • Database can be backed up online
  • Backup policy can be managed through RMAN
  • Database can be recovered point-in-time
By default a database is created in NOARCHIVELOG mode. Here is a sequence of steps to perform in order to enable ARCHIVELOG mode for a database:
  1. Connect as sysdba
    sqlplus / as sysdba
    
  2. Shut down the database
    SHUTDOWN IMMEDIATE;
    
  3. Modify the pfile by setting the LOG_ARCHIVE_START parameter to true (note: this parameter is deprecated as of Oracle 10g, so on 11g this step can be skipped)
    LOG_ARCHIVE_START=TRUE
    
  4. Start up and mount your database using your parameter file, or create an spfile from your pfile and then mount
    STARTUP MOUNT PFILE='/u01/app/oracle/product/11.2.0.3/dbs/init.ora';
    
    or
    CREATE SPFILE='/u01/app/oracle/product/11.2.0.3/dbs/spfile.ora' FROM PFILE='/u01/app/oracle/product/11.2.0.3/dbs/init.ora';
    STARTUP MOUNT;
    
  5. Enable ARCHIVELOG mode
    ALTER DATABASE ARCHIVELOG;
    
  6. Open your database
    ALTER DATABASE OPEN;
    
After you have run all the above, check the status of your database with the following command:
ARCHIVE LOG LIST;
The output should look something like this:
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     56
Next log sequence to archive   58
Current log sequence           58
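Alternatively, confirm the log mode by querying V$DATABASE:

SELECT LOG_MODE FROM V$DATABASE;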

Friday, January 11, 2013

ORA-09925: Unable to create audit trail file

Got an infamous "ORA-09925: Unable to create audit trail file" error while trying to log in to sqlplus as sysdba via command line:

sqlplus / as sysdba

I did some research: in about 90% of reported cases this error comes up while creating a database via the command line, when the user tries to log in to sqlplus in order to start up the database from a pfile, and the problem is a permissions issue - insufficient permissions to write to the adump directory. It happened to be something different in my case.

Cause: An instance for that ORACLE_SID was already started. I ran ps -ef | grep pmon

and saw that the instance was already running and therefore I was getting the error

Solution: Run ps -ef | grep pmon, copy the process id, and kill the process using the kill processid command.
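A sketch of that session (the process id is hypothetical):

$ ps -ef | grep pmon
oracle   12345     1  0 09:00 ?        00:00:01 ora_pmon_pubs
$ kill 12345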

Wednesday, January 9, 2013

Creating an Oracle database from command line

#Export all environment variables

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LIBPATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export ORACLE_SID=pubs
export PATH=$ORACLE_HOME/bin:$PATH
#run the following commands to verify the environment variables

env | grep ORA
env | grep LIB
#create a pfile - initpubs.ora - at $ORACLE_HOME/dbs/initpubs.ora

db_name='pubs'
memory_target=2G
processes = 150
audit_file_dest='/u01/app/oracle/admin/pubs/adump'
audit_trail ='db'
db_block_size=8192
sga_max_size=1073741824
sga_target=1073741824
db_domain=''
db_recovery_file_dest='/u02/flash_recovery_area'
db_recovery_file_dest_size=2G
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_management=AUTO
undo_tablespace='UNDOTBS1'
# You may want to ensure that control files are created on separate physical
# devices
control_files = (/u01/oradata/pubs/control01.ctl, /u01/oradata/pubs/control02.ctl)
compatible ='11.2.0'
#create password file

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwdpubs.ora password=oracle entries=5
#create an entry in /etc/oratab file

pubs:/u01/app/oracle/product/11.2.0.3:N
#log into sqlplus as sysdba by running the following command:

sqlplus / as sysdba
# start the instance with nomount option, specifying your pfile

startup nomount pfile='/u01/app/oracle/product/11.2.0.3/dbs/initpubs.ora';
# run actual create database statement

create database pubs
MAXLOGFILES 5
MAXLOGMEMBERS 2
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 100
logfile group 1 ('/u01/oradata/pubs/redo01.log') size 100M,
        group 2 ('/u02/oradata/pubs/redo02.log') size 100M
character set WE8ISO8859P1
national character set utf8
datafile '/u01/oradata/pubs/system01.dbf' size 500M autoextend on next 10M maxsize unlimited extent management local
default tablespace users datafile '/u01/app/oracle/oradata/newdata/user01.dbf' size 10m
sysaux datafile '/u01/oradata/pubs/sysaux01.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '/u01/oradata/pubs/undotbs01.dbf' size 100M
default temporary tablespace temp tempfile '/u40/oradata/pubs/temp01.dbf' size 100M;
 
# run the following scripts to complete database creation

@?/rdbms/admin/catalog.sql

@?/rdbms/admin/catproc.sql
 
# set passwords for sys and system

alter user sys identified by whatever;

alter user system identified by whatever;

Tuesday, January 8, 2013

Using Oracle Data Pump for import/export

In order to get the Data Pump directory location, execute

SELECT directory_path FROM dba_directories WHERE directory_name='DATA_PUMP_DIR'; 
Typically it is $ORACLE_BASE/admin/DBName/dpdump/

Then check the privileges that are assigned to the directory by executing:

SELECT grantee, privilege FROM dba_tab_privs dtp WHERE table_name = 'DATA_PUMP_DIR';
The grantee could be a user or a role.

Use the following to create a directory:

CREATE OR REPLACE DIRECTORY dpump_dir1 AS '.../admin/DBName/dpdump';
After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user usr1 in the directory named by dpump_dir1, you must execute the following command:
GRANT READ,WRITE ON DIRECTORY dpump_dir1 TO usr1;
Once the directory access is granted, the user can export database objects from the command line:

expdp usr1/pwd DIRECTORY=dpump_dir1 dumpfile=usr1.dmp
The above will produce the following output:

Export: Release 11.2.0.3.0 - Production on Tue Jan 8 12:48:31 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USR1"."SYS_EXPORT_SCHEMA_01":  usr1/******** DIRECTORY=dpump_dir1 dumpfile=usr1.dmp
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR1"."TABLE1"                             0 KB       0 rows
. . exported "USR1"."TABLE2"                             0 KB       0 rows
. . exported "USR1"."TABLE3"                                0 KB       0 rows
Master table "USR1"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USR1.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/DBName/dpdump/usr1.dmp
Job "USR1"."SYS_EXPORT_SCHEMA_01" successfully completed at 12:49:08
The above will also be written to a default log file in the same directory (export.log), or to the log file you specify on the command line with the LOGFILE parameter:
expdp usr1/pwd DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log
You can also specify the particular tables you want to export with the TABLES parameter, in case you do not need all user tables. You can add more than one table, separated by commas:

expdp usr1/pwd TABLES=TABLE1,TABLE2 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log
Import command would look something like this:

impdp usr1/pwd TABLES=TABLE1,TABLE2 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=import_usr1.log
To import a schema, you will need to add a SCHEMAS parameter to your command:

expdp usr1/pwd SCHEMAS=USR1 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log


impdp usr1/pwd SCHEMAS=USR1 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=import_usr1.log
And for a full database import/export, you will need to add the FULL parameter and set it to y:

expdp usr1/pwd FULL=y DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log


impdp usr1/pwd FULL=y DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=import_usr1.log
To append data to existing tables, add the parameter TABLE_EXISTS_ACTION=APPEND to your import statement.

Monday, January 7, 2013

ORA-07391: sftopn: fopen error, unable to open text file

I have gotten this error a number of times while attempting to create a pfile from an spfile:
CREATE PFILE='/u01/oracle/product/11.2.0.3/dbs/initdb1.ora' FROM SPFILE='/u01/oracle/product/11.2.0.3/dbs/spfiledb1.ora'
Cause: Insufficient permissions to read the spfile.

Solution: Run chmod 777 /u01/oracle/product/11.2.0.3/dbs/spfiledb1.ora to set appropriate permissions

Friday, January 4, 2013

The Oracle system identifier '' already exists, specify another SID

The "Oracle system identifier '' already exists, specify another SID" error occurs during database creation via the dbca utility.

Cause: You have probably previously created a database with the same name and removed it, but traces of it still remain.

Solution: Open the /etc/oratab file in edit mode and remove the line containing the SID that is causing the error message. Another place to look is $ORACLE_BASE/oraInventory/ContentsXML/inventory.xml - this file could also contain an entry with the offending SID.

Thursday, January 3, 2013

How to find out when is the last time a particular table was updated

In order to find out when a particular table was last updated (in Oracle), use the ORA_ROWSCN pseudocolumn:
SELECT SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN)) FROM TableName;
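Note that by default ORA_ROWSCN is tracked at the block level, so the result is an upper bound on the last update time; tables created with the ROWDEPENDENCIES clause track the SCN per row.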