Tuesday, December 31, 2013

The Oracle base has been set to Parse failed, error 229

Error:
In line 0 of oraclehomeproperties.xml:
LPX-00229: input source is empty
The Oracle base has been set to Parse failed, error 229

Cause: The oraclehomeproperties.xml file got corrupted

Solution:
  1. cd $ORACLE_HOME/inventory/ContentsXML
  2. use ls -ltr to get the following:
    drwx------ 2 oracle oinstall 4096 Dec 29 2010 ConfigXML
    -rw-r----- 1 oracle oinstall 7770 Feb 21 2011 libs.xml
    -rw-r----- 1 oracle oinstall 1978 Feb 21 2011 config.xml
    -rw-r----- 1 oracle oinstall 656001 Feb 21 2011 comps.xml
    -rw-r--r-- 1 oracle oinstall 0 May 19 2013 oraclehomeproperties.xml

    Notice that oraclehomeproperties.xml is of size 0 bytes.
  3. To fix the issue, either restore oraclehomeproperties.xml from a backup or copy it from the ORACLE_HOME of another database

Friday, December 20, 2013

SQL Server database capacity planning notes

It is best not to rely on auto-growth of the database: it negatively affects performance and causes database files to become fragmented, because auto-growth allocates large chunks of disk space at a time.

One can still set auto-growth as a safety net, but database growth should be monitored proactively. It is also best to set the auto-growth percentage to a reasonable number in order to keep the database from using more space than it needs; see the sketch below.
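For example, a modest fixed growth increment avoids huge percentage-based allocations on large files. A minimal sketch, assuming a database MyDb with hypothetical logical file names MyDb_data and MyDb_log:
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_data', FILEGROWTH = 256MB);
ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_log', FILEGROWTH = 128MB);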

The best time to grow a database is during an offline maintenance window: allocate new space to the database files (mdf and ldf), close the database, and defragment the filesystem on which the database resides.

Rebuilding indexes also helps combat fragmentation: the old index is dropped and a fresh one is built, leaving room for new data.

Tuesday, December 17, 2013

How to grant a user read-only rights to a schema in Oracle 11g

There are two ways to grant a user read-only privileges on a single schema in Oracle:

1) Retrieve all the objects and grant SELECT privileges on each object to the user in question

2) Create a role, grant SELECT privileges on each object in the schema to that role, and then grant the role to the user; see the example below.
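A minimal sketch of the second approach, with hypothetical names schema1, table1, view1, and user1 (the per-object GRANT statements can be generated with the script below):
CREATE ROLE schema1_read_only;
GRANT SELECT ON schema1.table1 TO schema1_read_only;
GRANT SELECT ON schema1.view1 TO schema1_read_only;
GRANT schema1_read_only TO user1;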

The script below spools all the necessary GRANT statements to a file, given the username of the user (or role) to grant privileges to and the schema name:
set pages 0;
set linesize 100;
set feedback off; 
set verify off; 

spool C:\Test\GET_ALL_SCHEMA_OBJECTS.sql

SELECT 'GRANT SELECT ON ' || owner || '.' || table_name || ' TO &&new_user;' FROM dba_tables WHERE owner=upper('&&schema_name');
 
SELECT 'GRANT SELECT ON ' || owner || '.' || view_name || ' TO &&new_user;' FROM dba_views WHERE owner=upper('&&schema_name');
 
set serveroutput off;
spool off;


Wednesday, December 4, 2013

How to retrieve all permissions granted to a particular user

Below is the query that will give you all the permissions granted to a particular user for the current database:
SELECT class_desc [Permission Level], CASE WHEN class = 0 THEN DB_NAME()
         WHEN class = 1 THEN OBJECT_NAME(major_id)
         WHEN class = 3 THEN SCHEMA_NAME(major_id) END [Securable]
  , USER_NAME(grantee_principal_id) [User]
  , permission_name [Permission]
  , state_desc
FROM sys.database_permissions
WHERE USER_NAME(grantee_principal_id)='user1'

Tuesday, December 3, 2013

ORA-00600: internal error code, arguments: [13013], [5001], [121347], [12771893], [1], [12771893], [17]. Non-fatal internal error happened while SMON was doing logging scn->time mapping.

Error: ORA-00600: internal error code, arguments: [13013], [5001], [121347], [12771893], [1], [12771893], [17], [], [], [], [], []. Non-fatal internal error happened while SMON was doing logging scn->time mapping.

Details: Error occurred in the alert log

Cause: Occurs when smon_scn_time has been corrupted

Solution: We have two choices here:
  1. Restore and recover the SYSTEM tablespace, or
  2. Recreate smon_scn_time


Thursday, November 14, 2013

ORA-24324: service handle not initialized

Error: ORA-24324: service handle not initialized

Details: This error occurs when starting up the database by executing STARTUP NOMOUNT or STARTUP MOUNT from SQL*Plus

Cause: Occurs when you try to startup from the same SQL*Plus session you did the shutdown from

Solution: Exit and re-enter SQL*Plus after the shutdown and before executing STARTUP again.

Wednesday, November 6, 2013

Analyzing physical RAM on the server to determine whether an increase is needed

Check the total physical RAM on the server by running:
lsattr -El sys0 -a realmem
Result would look something like this:
> lsattr -El sys0 -a realmem
realmem 25165824 Amount of usable physical memory in Kbytes False
Then go through each and every database running on the server and do the following steps:
  1. Login to sqlplus
  2. Check MEMORY_TARGET parameter:
    SQL>SHOW PARAMETER MEMORY_TARGET
    
    If the value is not 0, then use it; if it is 0, then use the sum of the SGA and PGA values:
    SQL>SHOW PARAMETER SGA_TARGET
    
    and
    SQL>SHOW PARAMETER PGA_AGGREGATE_TARGET
    
  3. After adding up the values for each and every running database, calculate the percentage of physical RAM currently taken up. If it is no more than 50-70%, no increase is needed; a query sketch for gathering the per-instance values follows below.
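A sketch of a query that collects the three values for one instance in a single pass (values converted to MB; run it in each instance):
SELECT name, value/1024/1024 AS mb
FROM v$parameter
WHERE name IN ('memory_target', 'sga_target', 'pga_aggregate_target');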

Monday, November 4, 2013

Oracle DataGuard switchover vs. failover

Switchover is a planned role reversal between the primary and standby databases in a DataGuard configuration. It guarantees no data loss and is typically done for maintenance. There is no need to re-enable either database when the transition occurs.
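For reference, a manual switchover without the broker is driven by two SQL statements. A minimal sketch that omits the surrounding steps (checking SWITCHOVER_STATUS in V$DATABASE, restarting the databases afterwards):
-- on the current primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
-- on the standby, once the primary has finished its conversion
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;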

Failover is not planned and is typically performed in the event of a primary database failure, when there is no possibility of recovering the primary database in a timely fashion. When the failure occurs, the standby database takes over the primary role. Failover may result in data loss, depending on the protection mode enabled at the time the failover occurs.

If the DataGuard broker is enabled, the failover can be automated; if the broker is not enabled, we will have to perform a manual failover.

If a DataGuard broker cannot detect the heartbeat of the primary DB, it will automatically perform the failover.

Friday, October 25, 2013

ksh: /usr/bin/rm: 0403-027 The parameter list is too long.

Error: ksh: /usr/bin/rm: 0403-027 The parameter list is too long.

Details: This error occurs when attempting to remove old files using rm *.aud

Cause: There are too many files for the shell to expand on the command line

Solution: Use the following command instead to delete all files that are older than 5 days:
find /oracle/admin/testP/adump/ -name '*.aud' -mtime +5 -exec rm {} \;

Thursday, October 24, 2013

ORA-00119: invalid specification for system parameter LOCAL_LISTENER

Error: ORA-00119: invalid specification for system parameter LOCAL_LISTENER ORA-00132: syntax error or unresolved network name 'LISTENER_'

Details: This error occurs when attempting to start the database

Cause: The LOCAL_LISTENER entry, which must be defined in TNSNAMES.ORA, is missing

Solution: In TNSNAMES.ORA (not in LISTENER.ORA), define the following entry for the local listener:
LISTENER_ =  (ADDRESS = (PROTOCOL = TCP)(HOST = servername)(PORT = 1521)) 

Wednesday, October 23, 2013

SP2-1503: unable to initialize Oracle call interface

Error: SP2-1503: unable to initialize Oracle call interface

Details: This error occurs when attempting to start sqlplus

Cause: The PATH variable is not set correctly

Solution: Set and export the PATH variable:
export PATH=$ORACLE_HOME/bin:$PATH

Tuesday, October 22, 2013

TNS-03505: Failed to resolve name

Error: TNS-03505: Failed to resolve name

Details: This error occurs when doing tnsping DBName

Cause: The ORACLE_HOME and/or PATH variables are not set correctly

Solution: Set and export correct values for both ORACLE_HOME and PATH:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1
export PATH=$PATH:$ORACLE_HOME/bin

Thursday, October 10, 2013

ORA-12514: TNS:listener does not currently know of service

Error: ORA-12514: TNS:listener does not currently know of service

Possible Cause: The database might not have been registered with the listener

Diagnosing the error: The following things should be done to diagnose the error:
  1. Check what listener is running:
    > lsnrctl status
    
  2. Check LOCAL_LISTENER parameter
    SQL> show parameter local_listener
    
    Usually it is blank by default, and thus at startup the database dynamically registers with the listener that is currently running. If LOCAL_LISTENER does have a value, the database tries to register with the listener configuration named in LOCAL_LISTENER

Possible Solution: Clear the value of LOCAL_LISTENER and reload the listener (note that lsnrctl is an OS command, run from the shell prompt, not from SQL*Plus):


SQL> alter system set local_listener='';
> lsnrctl reload
or

SQL> alter system set local_listener='' scope=both sid='*';
> lsnrctl reload

Wednesday, August 7, 2013

RMAN-20001: target database not found in recovery catalog

Error: RMAN-20001: target database not found in recovery catalog

Details: This error occurs when attempting to backup the newly cloned database using RMAN

Cause: The database was not registered with the recovery catalog.

Solution: Register the database with the recovery catalog by performing the following steps:
  1. Set the environment to your target database:
    > . oraenv
    > DBName
    
  2. Connect to RMAN:
    > rman target /
    
  3. Connect to catalog as rman user:
    RMAN> connect catalog rmanuser/pwd@RMANDB
    
  4. Register target database:
    RMAN> register database;
    

The output will be:
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
Note: The database you are registering must not be a standby database, since a standby database gets registered automatically whenever the primary database is registered. Also, the database must not already be registered in the recovery catalog. The list of databases in the recovery catalog can be retrieved by logging in to the RMAN repository and running:
select name, dbid from rc_database
where name = upper('&db_sid');

Tuesday, July 9, 2013

Automatic Diagnostic Repository (ADR)

Starting with Oracle 11g R1, the traditional *_DUMP_DEST parameters are ignored and the trace location is decided by the DIAGNOSTIC_DEST parameter. If this parameter is not set explicitly, it defaults to the ORACLE_BASE environment variable; if that is not set either, it defaults to ORACLE_HOME/log.

DIAGNOSTIC_DEST is the ADR root directory also known as ADR base.

Each instance's diagnostic data is stored in its own ADR home directory. Under the ADR base there are multiple ADR homes, and each ADR home is the root directory for all traces of a particular instance.

<ADR base>/diag/rdbms/<dbname>/<instname>
                                        /alert
                                        /cdump
                                        /incident
                                        /trace

Running show parameter diag displays the ADR base.
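The V$DIAG_INFO view gives a fuller picture, listing the ADR base, the ADR home, and the trace and alert locations for the current instance:
SELECT name, value FROM v$diag_info;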

Friday, June 28, 2013

How to import an Oracle database schema to another database with a different schema name

During import, whenever your source schema name is different from the target schema name, use the REMAP_SCHEMA parameter:
impdp system/pwd dumpfile=sourceschema.dmp directory=DATA_PUMP_DIR schemas=sourceschema logfile=sourceschema_imp.log remap_schema=sourceschema:targetschema

Thursday, June 27, 2013

How to connect to an Oracle database hosted on a different server from ksh script

This can be done in two ways: by providing the TNS connect string inside the script, or by placing it in the tnsnames.ora file:
FILE1=output.log
sql="INSERT INTO TABLE1(...)VALUES(...);"
sqlplus 'username/pwd@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=myHostname)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myDB)))' << EOF > $FILE1
$sql
commit;
exit;
EOF


FILE1=output.log
sql="INSERT INTO TABLE1(...)VALUES(...);"
sqlplus 'username/pwd@DB_ALIAS' << EOF > $FILE1
$sql
commit;
exit;
EOF

Wednesday, June 26, 2013

How to store sqlplus output into a single or multiple variables via ksh script

db_status=`sqlplus -s "/ as sysdba" <<EOF
       set heading off feedback off verify off
       select status from v\\$instance;
       exit
EOF`
echo $db_status
The above works when you want to retrieve a single value. If you want to retrieve multiple values, use:
output=`sqlplus -s "/ as sysdba" <<EOF
       set heading off feedback off verify off
       select host_name, instance_name, status from v\\$instance;
       exit
EOF`

var1=`echo $output | awk '{print $1}'`
var2=`echo $output | awk '{print $2}'`
var3=`echo $output | awk '{print $3}'`
echo $var1
echo $var2
echo $var3
You can also put your SQL statement into a file with a .sql extension, something like this:
set head off
set verify off
set feedback off
set pages 0

SELECT field1, field2, field3 
FROM Table1;

exit;
And then read the script's output values into variables:
sqlplus -s "/ as sysdba" @script2run.sql | read var1 var2 var3

Tuesday, June 25, 2013

Fixing warnings and errors after running impdp

First check dba_registry for invalid components by running the query below and see which components have their status set to INVALID:
SELECT comp_id, version, status FROM dba_registry;
or simply do:
SELECT COUNT(*) FROM dba_registry WHERE status='INVALID';
If there are any invalid objects, run the utlrp script, which is located in $ORACLE_HOME/rdbms/admin/utlrp.sql. This script will recompile the invalid objects.
@?/rdbms/admin/utlrp.sql
The line above will run the script.

The next step is to check dba_registry again to see whether all invalid objects have been fixed.

Tuesday, June 11, 2013

How to use grep command to exclude the lines beginning with a pattern

For instance, when reading from a file you want to exclude all the comments, i.e. lines beginning with the hash sign (#); use the -v switch of the grep command:
grep -v "^\#" $FILENAME

Thursday, June 6, 2013

SCN Explained

How Oracle transaction data change is written to a datafile
  1. Transaction starts
  2. The required data blocks are found in the buffer cache; if not found there, they are read from the datafile
  3. The transaction modifies the data blocks in the buffer cache; the modified blocks are marked as "dirty data" and the change is also written to the log buffer
  4. The transaction is committed and the LGWR process writes the contents of the log buffer to the redo log file
  5. The CKPT process updates the headers of all the datafiles and the DBWn process writes the dirty data from the buffer cache to the datafiles
SCN - system change number. The SCN is an internal number maintained by the database to log the changes made. It is sort of an Oracle clock: every time a transaction commits, it increments. There are a total of 4 SCNs: the system checkpoint SCN, the datafile checkpoint SCN, the start SCN, and the stop SCN.

System checkpoint SCN - when a checkpoint operation is completed, Oracle stores the system checkpoint SCN in the control file:
SELECT checkpoint_change# FROM V$DATABASE
It is for the entire database.

Datafile checkpoint SCN - when a checkpoint operation is completed, Oracle stores the checkpoint SCN of each datafile separately in the control file:
SELECT name, checkpoint_change# FROM V$DATAFILE
Start SCN - the checkpoint SCN stored in the file header of each datafile. It is called the start SCN because it is used when starting the database instance:
SELECT name, checkpoint_change# FROM V$DATAFILE_HEADER
Stop SCN - the termination SCN of each datafile, stored in the control file:
SELECT name, last_change# FROM V$DATAFILE
During normal operation, the stop SCN (the LAST_CHANGE# field) is an infinite number or NULL.

Current SCN is retrieved using:
select dbms_flashback.get_system_change_number from dual;
Whenever a transaction is committed, the log writer process (LGWR) writes the redo records from the redo log buffer in the SGA to the online redo logs on disk. LGWR also writes the SCN of the transaction to the online redo log file.

When all modified database buffers are written out to the datafiles by the database writer (DBWn) process, a checkpoint occurs. The checkpoint process (CKPT) updates all datafiles and control files with the SCN at the time of the checkpoint. A successful checkpoint guarantees that all database changes up to that SCN have been recorded in the datafiles. As a result, only those changes that occurred after the checkpoint need to be applied during recovery.

A checkpoint typically occurs at intervals specified by the LOG_CHECKPOINT_INTERVAL parameter or whenever a redo log switch takes place. Oracle stores the SCN associated with that checkpoint in 4 places: 3 in the control file and 1 in the datafile header of each datafile.

How to determine whether clean shutdown has occurred?

Run
select name,checkpoint_change# from v$datafile_header where name like '%datafilenamehere%';
If a clean shutdown has occurred, CHECKPOINT_CHANGE# and LAST_CHANGE# will match for each datafile. If the instance crashed at shutdown, the numbers will differ and instance crash recovery will be required at the next instance startup.
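One way to see the two SCNs side by side is to query V$DATAFILE, which exposes both the datafile checkpoint SCN and the stop SCN recorded in the control file:
SELECT name, checkpoint_change#, last_change# FROM v$datafile;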

Upon the next instance startup, Oracle will detect that the stop SCN for the datafiles was not set in the control file. So during crash recovery Oracle will apply all the redo records from the online redo logs in a process of rolling forward, to ensure that all transactions committed before the crash are applied to the datafiles. Active transactions that did not commit before the crash are then rolled back.

Instance recovery happens automatically at startup without DBA's intervention.

While the database is up and open for use, the system checkpoint in the control file, the datafile checkpoint SCN in the control file, and the start SCN in each datafile header all match.

During a clean shutdown, a checkpoint is performed and the stop SCN for each datafile is set to the start SCN from the datafile header. Upon startup, Oracle checks the start SCN in the file header with the datafile checkpoint SCN. If they match, Oracle checks the start SCN in the datafile header with the datafile stop SCN in the control file. If they match, the database can be opened because all block changes have been applied, no changes were lost on shutdown, and therefore no recovery is required on startup. After the database is opened, the datafile stop SCN in the control file once again changes to NULL to indicate that the datafile is open for normal use.

System crash - after a system crash, upon startup the stop SCN (LAST_CHANGE#) is not set, so Oracle knows the instance crashed because the checkpoint on shutdown was not performed. If it had been performed, the CHECKPOINT_CHANGE# and LAST_CHANGE# values would match for each datafile, as they do after a clean shutdown.

During crash recovery, Oracle applies redo log records from the online redo logs in a process referred to as roll forward to ensure that all transactions committed before the crash are applied to the datafiles. Following roll forward, active transactions that did not commit are identified from the rollback segments and are undone before the blocks involved in the active transactions can be accessed. This process is referred to as roll back.

Media failure - in case of a media failure, the datafile checkpoint SCN and the start SCN in the datafile header won't match at instance startup.

Monday, June 3, 2013

How to retrieve the SQL used to create database links for Oracle database

The code used to create the database links can be retrieved using dbms_metadata.get_ddl:
SET serveroutput on
SET linesize 1000
SET long 1000

select dbms_metadata.get_ddl('DB_LINK',db_link,owner) from dba_db_links;

Wednesday, May 22, 2013

ORA-39082: Object type created with compilation warnings during import

This error often occurs when running data import via impdp utility.

Cause: It occurs due to circular dependencies and the reordering of the sequence in which objects are imported

Solution: Run the utlrp.sql script located in the $ORACLE_HOME/rdbms/admin directory.

>>cd $ORACLE_HOME/rdbms/admin 
>>ls -ltr utlrp*
>>sqlplus / as sysdba
SQL> @utlrp.sql

Wednesday, May 15, 2013

Incorrect syntax near GO and why one cannot use GO when executing dynamic sql

When dynamic SQL is executed using sp_executesql, the GO keyword cannot be used, because it will produce the error "Incorrect syntax near GO"; GO is a batch separator recognized by client tools, not a T-SQL statement. Note also that SQL Server will be looking for a carriage return CHAR(13) and a line feed CHAR(10) as the line break.

Example:
-- declare some variables
DECLARE @sql NVARCHAR(100)
DECLARE @dbName NVARCHAR(100)
-- Get the current or target database name
SET @dbName = DB_NAME()
-- Build the script
SET @sql = N'USE [' + @dbName + N']' + CHAR(13) + CHAR(10)
SET @sql = @sql + N'GO' + CHAR(13) + CHAR(10)
-- Print the command
PRINT (@sql)
The code above will work for printing the statement to the screen, but it will not execute if you add EXEC sp_executesql @sql (note that sp_executesql also requires an NVARCHAR statement, which is why @sql is declared as NVARCHAR above).

The solution would be to remove the GO keyword altogether. The code below will execute without an issue:

-- declare some variables
DECLARE @sql NVARCHAR(100)
DECLARE @dbName NVARCHAR(100)
-- Get the current or target database name
SET @dbName = DB_NAME()
-- Build the script
SET @sql = N'USE [' + @dbName + N']' + CHAR(13) + CHAR(10)
SET @sql = @sql + CHAR(13) + CHAR(10)
-- Print the command
PRINT (@sql)
EXEC sp_executesql @sql
The only way to get dynamic SQL to work with GO is to either copy the output to another window and run it from there, or write it to an external file and then execute it from your code using
exec master..xp_cmdshell 'osql -E -i C:\Scripts\test.sql'

Thursday, May 9, 2013

Korn Shell scripting intro

I recently started writing Korn shell scripts here and there, and below is some useful info on how to get started. I use the vi editor for creating files:
vi myscript.ksh
Korn shell scripts usually have the extension .ksh. The first line of the script should always be:
#!/usr/bin/ksh
After the file has been created, one needs to give it execute permissions so that the file becomes runnable:
chmod 755 myscript.ksh
Now the script can be executed as follows:
./myscript.ksh
If the script name or path has spaces in it, it needs to be wrapped in double quotes:
"C:\My Documents\My Scripts\myscript.ksh"
The escape character for ksh scripts is the backslash (\). For example, if you want to escape a dollar sign ($) inside your script, instead of writing
select * from v$database;
use
select * from v\$database;
The pound sign (#) is used for comments, and the print and echo commands are used to output text to the screen.

Wednesday, May 8, 2013

WITH GRANT OPTION vs. WITH ADMIN OPTION

Role or system privileges are granted using WITH ADMIN OPTION, and table-specific privileges are granted using WITH GRANT OPTION.

Revoking a privilege granted using WITH GRANT will cascade: it also revokes any and all privileges the target user assigned to other users. Revoking a privilege granted using WITH ADMIN will only revoke the privilege of the target user, leaving intact all the other users the target user might have granted the privilege to.

Example:

WITH GRANT OPTION

A DBA can grant object privileges (select, update, insert) using WITH GRANT OPTION to A, and A, in turn, grants those privileges to B.

DBA--------->A--------->B

The DBA can only revoke the privileges from A but not from B, because WITH GRANT OPTION implies that we can only revoke privileges that we granted ourselves; only the user who granted the privileges can revoke them. But if the DBA revokes A's privileges, the privileges cascade and are revoked from B as well.



WITH ADMIN OPTION

We grant system privileges (CREATE TABLE, CREATE SESSION) to A using WITH ADMIN OPTION

DBA------>A-------->B

Here the DBA can revoke privileges from either A or B. But if the DBA only revokes privileges from A, they won't be cascade-revoked from B.

Users with the ADMIN OPTION for a system privilege can revoke that privilege from any other database user; the revoker does not have to be the same user who originally granted the privilege.
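A minimal sketch of both cases, using hypothetical users usera and userb and a hypothetical table schema1.t1:
-- object privilege: revoking from usera cascades to userb
GRANT SELECT ON schema1.t1 TO usera WITH GRANT OPTION;
GRANT SELECT ON schema1.t1 TO userb;       -- issued by usera
REVOKE SELECT ON schema1.t1 FROM usera;    -- userb loses SELECT as well

-- system privilege: revoking from usera does not cascade to userb
GRANT CREATE TABLE TO usera WITH ADMIN OPTION;
GRANT CREATE TABLE TO userb;               -- issued by usera
REVOKE CREATE TABLE FROM usera;            -- userb keeps CREATE TABLE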

Wednesday, May 1, 2013

Clone user privileges script

Today I finalized my sql script for cloning Oracle users:
set pages 0;
set linesize 100;
set feedback off; 
set verify off; 

spool user_privs_clone.sql
-- user itself 
select 'create user &&new_user identified by values '''||password||''' '||
' default tablespace '||default_tablespace||
' temporary tablespace '||temporary_tablespace||' profile '||
profile||';'
from sys.dba_users
where username = upper('&&cur_user');

-- user roles
select 'grant '||granted_role||' to &&new_user'||
decode(ADMIN_OPTION, 'YES', ' WITH ADMIN OPTION')||';'
from sys.dba_role_privs
where grantee = upper('&&cur_user');

-- system privileges
select 'grant '||privilege||' to &&new_user'||
decode(ADMIN_OPTION, 'YES', ' WITH ADMIN OPTION')||';'
from sys.dba_sys_privs
where grantee = upper('&&cur_user');

-- table privileges
select 'grant '||privilege||' on '||owner||'.'||table_name||' to &&new_user;'
from sys.dba_tab_privs
where grantee = upper('&&cur_user');


-- column privileges
select 'grant '||privilege||' on '||owner||'.'||table_name||
'('||column_name||') to &&new_user;'
from sys.dba_col_privs
where grantee = upper('&&cur_user');


-- tablespace quotas
select 'alter user '||username||' quota '||
decode(MAX_BYTES,-1,'UNLIMITED', to_char(MAX_BYTES/(1024*1024),'9999') || 'M')||
' on '||tablespace_name||';'
from sys.dba_ts_quotas
where username = upper('&&cur_user');


-- default roles
set serveroutput on;
declare
defroles varchar2(4000);
begin
for c1 in (select * from sys.dba_role_privs
where grantee = upper('&&cur_user')
and default_role = 'YES'
) loop
if length(defroles) > 0 then
defroles := defroles||',
'||c1.granted_role;
else
defroles := defroles||c1.granted_role;
end if;
end loop;
dbms_output.put_line('alter user &&new_user default role '||defroles||';');
end;
/

set serveroutput off;
spool off;

All the SQL statements, including CREATE USER, get spooled to the user_privs_clone.sql file. When the script runs, you will be prompted for the name of the user you want to create (or, if the user already exists, the user to grant privileges to). Next you will be prompted for the username of the user whose privileges you would like to clone.

Monday, April 29, 2013

Issue cloning user privileges

I was using the script below to clone the user privileges of user1 and assign them to user2:
select 'grant ' || PRIVILEGE || ' on ' || owner || '.' || TABLE_NAME || ' to ' || 'USER2' || decode(GRANTABLE,'YES',' with grant option;','NO',';') from
  dba_tab_privs
where
  GRANTEE = upper('USER1')
union
select
  'grant ' ||GRANTED_ROLE || ' to ' || 'USER2' || ';'
from
  dba_role_privs
where
  GRANTEE = upper('USER1')
union
select
  'grant ' || PRIVILEGE || ' to ' || 'USER2' || ';'
from
  dba_sys_privs
where
  GRANTEE = upper('USER1')
union
select
  'alter user ' || 'USER2' || ' quota ' || decode(MAX_BYTES,-1,'UNLIMITED', to_char(MAX_BYTES/(1024*1024),'9999') || 'M') || ' on ' || TABLESPACE_NAME || ';'
from
  dba_ts_quotas
where
  USERNAME = upper('USER1');

Everything seemed to be working fine and both users had identical privileges; yet USER2 was not able to see all the tables that USER1 could see.

The issue turned out to be that both users were granted the two roles ROLE1 and ROLE2; the difference was that ROLE2 was a default role for both users, while ROLE1 was a default role only for USER1.

SQL> SELECT GRANTED_ROLE, ADMIN_OPTION, DEFAULT_ROLE FROM dba_role_privs WHERE grantee='USER1';

GRANTED_ROLE                   ADM DEF
------------------------------ --- ---
ROLE1                          NO  YES
ROLE2                          NO  YES


SQL> SELECT GRANTED_ROLE, ADMIN_OPTION, DEFAULT_ROLE FROM dba_role_privs WHERE grantee='USER2';

GRANTED_ROLE                   ADM DEF
------------------------------ --- ---
ROLE1                          NO  NO
ROLE2                          NO  YES

Running the sql below fixed the issue

ALTER USER USER2 DEFAULT ROLE ALL;
or


ALTER USER USER2 DEFAULT ROLE ROLE1, ROLE2;
Caution: Running


ALTER USER USER2 DEFAULT ROLE ROLE1;
would set ROLE1 as a default role but remove all other roles (in this case ROLE2) from the default role list.

Wednesday, April 17, 2013

IP Addresses and Oracle RAC

There are 3 types of IP addresses used in RAC:

  • Private IP - used for internal communication between the cluster nodes
  • Public IP - regular IP address used to communicate with the network
  • Virtual IP - used by database applications to enable failover when one node fails

Connections to the server are made through the listener. If a connection has been established and the listener then fails, that's not an issue: once the connection is established, there is no need for a listener. But if a new connection is trying to get a session while the listener is down, the user will get an error message as the connection fails. That's in a regular, single-instance environment.

In a RAC environment the database is shared between all connected nodes, so more than one listener is running across the various nodes. If a listener is down and a user process is trying to get a connection, RAC will automatically transfer that request to another listener on a different node.

Previously, physical IP addresses were used in listener configuration, which is a disadvantage compared to virtual IP addresses: with a physical IP, the session has to wait for a TCP/IP timeout before failing over to another node. With a virtual IP (VIP) in the listener configuration, Oracle Notification Service (ONS) maintains the communication between the nodes and notifies all the nodes and listeners whenever one of the nodes goes down; the VIP of the failed node automatically diverts to a surviving node and the session is established on that node. There is no wait for a TCP/IP timeout event.

Tuesday, April 16, 2013

ORA-00257 archiver error. Connect internal only, until freed.

Error: ORA-00257 archiver error. Connect internal only, until freed.

Details: This error occurs when attempting to login via sqlplus

Cause: The drive that stored the archive logs was out of space.

Solution: Remove old archive logs by starting RMAN and running the following script:
run {
delete noprompt archivelog all completed before 'sysdate-7';
}
The above will remove all archive logs older than 7 days.

Monday, April 15, 2013

How to find archive logs location

Execute from sqlplus:

1) archive log list;
2) show parameter db_recovery_file_dest;
or query V$ARCHIVE_DEST

3) select dest_name, status, destination from v$archive_dest; 

Thursday, April 11, 2013

UNIX - clearing out files that are older than 30 days

Useful command for clearing out old files:
find ./* -type f -mtime +30 -exec rm {} \;
The above will clear out files that are older than 30 days in the current directory: find ./* looks at the current directory, grabbing all the files (*); -mtime +30 matches files older than 30 days; and -exec rm {} \; removes all the files that match the condition.

Thursday, March 28, 2013

Failed to lock <file> exclusively. Lock held by PID: xxxx

Error: Failed to lock <file> exclusively. Lock held by PID: xxxx

Details: This error occurs when attempting to start up a database instance. The file in question is usually named lk<SID>

Solution: Kill the process with PID xxxx (kill -9 xxxx)

Friday, March 15, 2013

SQL Server: Managing transaction logs

Today I learned something new: if truncating and shrinking the transaction log does not seem to have any effect on the transaction log's size, and the recovery mode is Full, one needs to change the mode to Simple, then shrink the transaction log, and change the mode back to Full.

The thing is that sometimes, when the database is in Full recovery mode, the transaction log does not shrink, so this is the workaround.
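A minimal sketch, assuming a database MyDb whose transaction log has the logical file name MyDb_log:
USE MyDb;
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDb_log, 1024);   -- target size in MB
ALTER DATABASE MyDb SET RECOVERY FULL;
Keep in mind that switching to Simple breaks the transaction log backup chain, so take a full (or at least differential) backup after switching back to Full.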

This can also be done in production when necessary, but mirroring must be removed first and then reconfigured after everything is done.

Thursday, February 28, 2013

Removing old folders with powershell

In an attempt to automate mundane tasks I have put together some scripts to schedule on the server. Here is one of them, which I am planning to use to remove old snapshots of a SQL Server database that are more than 45 days old:
$initpath = "C:\Snapshots"
(get-date).addDays(-45)
get-childitem -Path $initpath | where-object {$_.lastwritetime -lt (get-date).addDays(-45)} |
Foreach-Object { 
    if($_.Attributes -eq "Directory") {
        $_.FullName 
        if (Test-Path $_.FullName) {
            Remove-Item -r $_.FullName
        }

    }
}

Tuesday, February 26, 2013

SELECT_CATALOG_ROLE vs. SELECT ANY DICTIONARY

SELECT_CATALOG_ROLE is more restrictive than SELECT ANY DICTIONARY. Although both carry privileges to select from the dictionary views, SELECT ANY DICTIONARY also allows the user to see the source code of package bodies and triggers, which is normally available only to DBAs.

SELECT ANY DICTIONARY is a system privilege, while SELECT_CATALOG_ROLE is a role.

SELECT_CATALOG_ROLE allows the user to query V$SESSION but not to reference it when creating a procedure: in order to create an object on a base object, the user must have a direct grant on the base object, not one received through a role.

So while both allow the users to query V$DATAFILE, the role does not allow the users to create objects; the system privilege does.

Note: when a user is granted SELECT ANY TABLE and the parameter O7_DICTIONARY_ACCESSIBILITY is set to FALSE, the user can access tables in any schema except SYS; in other words, they cannot access the data dictionary tables. Granting the user SELECT_CATALOG_ROLE will enable them to access dictionary objects. Another way is to set O7_DICTIONARY_ACCESSIBILITY to TRUE.
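For illustration (user1 is hypothetical; the direct grant on the base object must be issued as SYS):
GRANT SELECT_CATALOG_ROLE TO user1;        -- role: interactive dictionary queries only
GRANT SELECT ANY DICTIONARY TO user1;      -- system privilege: also usable inside user1's procedures
GRANT SELECT ON sys.v_$session TO user1;   -- direct grant; V$SESSION is a synonym for SYS.V_$SESSION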

Friday, February 22, 2013

Page count and fragmentation

I was working on writing a maintenance script for SQL Server that would rebuild indexes with fragmentation of 30% or above and reorganize indexes with fragmentation below 30%, and found that page count should also be taken into account: if an index has fewer than roughly 1000 pages, rebuilding it will have little effect on fragmentation, and the rebuild will cost more than it is worth.
As a result I changed my query slightly:
SELECT OBJECT_NAME(ps.object_id) As TableName, i.name As IndexName,ps.index_type_desc As IndexType, ps.index_depth, ps.index_level, ps.avg_fragmentation_in_percent, ps.fragment_count, ps.page_count 
FROM sys.dm_db_index_physical_stats(DB_ID('DBName'), null, null, null, 'LIMITED') as ps
INNER JOIN sys.indexes i ON ps.object_id = i.object_id AND ps.index_id = i.index_id 
WHERE NOT i.name IS NULL and ps.avg_fragmentation_in_percent >= 30 AND ps.page_count > 1000

Friday, February 15, 2013

ksh: sqlplus: not found

ksh: sqlplus:  not found
Occurred when running
sqlplus / as sysdba
Cause: PATH environmental variable was not set for the instance.

Solution: run
export PATH=$ORACLE_HOME/bin:$PATH 
before executing
sqlplus / as sysdba

Wednesday, February 13, 2013

Generating comma-separated lists in Oracle

Generating comma-separated lists in Oracle is much simpler than in SQL Server. In Oracle 11g, the LISTAGG function can be used:

SELECT authors.au_id, LISTAGG(titleauthor.title_id, ', ')
WITHIN GROUP (ORDER BY titleauthor.title_id) as TitleIds 
FROM titleauthor JOIN authors USING(au_id)  
GROUP BY authors.au_id

Tuesday, February 12, 2013

Some useful terminology

  • Patch - publicly released update to a known bug/issue
  • Hotfix - an update, not always publicly released, used to fix a very specific issue
  • Maintenance Release - an incremental update between software versions to fix multiple outstanding issues
  • Service Pack - a large update that fixes many outstanding issues; normally includes all patches, hotfixes, and maintenance releases that precede it

Saturday, February 9, 2013

Oracle certified!

As of this morning I am officially an Oracle Certified Associate DBA! The snow storm did not stop me and I passed my 1z0-052 exam with 90%. Only 75 days after I started learning Oracle and UNIX.

Tuesday, February 5, 2013

Control file recovery

A control file needs recovery when:
  • any one of the control files specified in the parameter file is lost due to media failure or becomes corrupted
  • a new control file is needed when changing the database

When one control file becomes corrupted, the Oracle database automatically stops running. One can determine which control file is causing the problem by looking at the alert log, then remove the corrupted control file and copy a good control file to the corrupted file's location:
  1. rm -f /u10/oradata/controlfile01.ctl
    
  2. cp /u20/oradata/controlfile02.ctl /u10/oradata/controlfile01.ctl
    
  3. Then mount the database
    STARTUP MOUNT;
    
  4. and open it
    ALTER DATABASE OPEN;
    
When a control file is lost, copy a good control file to a new location, then modify the CONTROL_FILES parameter by removing the lost location and adding the new one:

  1. Shutdown the database:
    SHUTDOWN ABORT;
    
  2. cp /u20/oradata/controlfile02.ctl /u10/oradata/controlfile04.ctl
    
  3. modify PFILE:
    CONTROL_FILES='/u20/oradata/controlfile02.ctl',
    '/u21/oradata/controlfile03.ctl',
    '/u30/oradata/controlfile04.ctl'
    
  4. Then mount and open the database:
    STARTUP MOUNT;
    ALTER DATABASE OPEN;
    

Monday, February 4, 2013

Moving control files to a new location

  1. Connect as sysdba
  2. run
    SHOW PARAMETER CONTROL_FILES;
    
    to check the locations of the control files
  3. Shutdown the database:
    SHUTDOWN IMMEDIATE;
    
  4. Then run
    SHOW PARAMETER SPFILE;
    
    If the database is running on a pfile (the path to the spfile will be null), change the init.ora file in the $ORACLE_HOME/dbs directory:
    CONTROL_FILES=(/new_loc1/control01.ctl,/new_loc2/control02.ctl,/new_loc3/control03.ctl)
    
    *If the database is running on an spfile, create a PFILE from the SPFILE, modify it, then re-create the SPFILE from the newly altered PFILE; see the sketch after this list.
  5. Copy the control files to the new location using cp command:
    cp /old_loc1/control01.ctl /new_loc1/control01.ctl
    cp /old_loc1/control02.ctl /new_loc2/control02.ctl
    cp /old_loc1/control02.ctl /new_loc3/control03.ctl
    
  6. Then, while logged in as the sysdba user, run
    STARTUP;
    SHOW PARAMETER CONTROL_FILES;
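For step 4 on an spfile-based instance, the round trip looks like this (the /tmp path is just an example):
SQL> CREATE PFILE='/tmp/initDBName.ora' FROM SPFILE;
(edit the CONTROL_FILES line in /tmp/initDBName.ora)
SQL> SHUTDOWN IMMEDIATE;
SQL> CREATE SPFILE FROM PFILE='/tmp/initDBName.ora';
SQL> STARTUP;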
    

Tuesday, January 29, 2013

What is the difference between restoring and recovering?

Restoring the database involves copying backup files from the backup location to disk, sometimes replacing damaged files, while recovery is the process of applying redo logs to the database in order to roll it forward. The database can be rolled forward to a specific point in time, or up to the last transaction in the log files.

During restoration, RMAN decides which backup sets, datafiles, and archive logs are necessary for the database restoration and verifies their contents. No files are actually restored during this process.

Restoring a datafile or a control file from backup means retrieving the file onto disk from the backup location and making it available to the database server, while recovering a datafile means taking the restored copy of the datafile and applying the changes recorded in the redo logs.

Monday, January 28, 2013

Oracle backup fails with "could not read file header for datafile 5 error reason 1" error

When attempting to perform a database backup I received the following error:

"could not read file header for datafile 5 error reason 1"

To diagnose the error, first check the DBA_DATA_FILES view for the file with file_id=5:

SELECT FILE_ID, FILE_NAME FROM DBA_DATA_FILES;
The results returned will look something like this:
 FILE_ID                   FILE_NAME
-------------------------------------------------------------------
1                /u01/oradata/DBName/system01.dbf
2                /u01/oradata/DBName/sysaux01.dbf
3                /u01/oradata/DBName/undotbs01.dbf
4                /u01/oradata/DBName/users01.dbf
5                /u01/app/oracle/product/11.2.0.3/dbs/MISSING00005 
Solution: Check your data files to see which one is missing, then rename the missing file:

ALTER DATABASE RENAME FILE '/u01/app/oracle/product/11.2.0.3/dbs/MISSING00005' TO '/u01/app/oracle/oradata/DBName/mydatefile1.dbf';
followed by
RECOVER DATAFILE 5;

How to check whether an Oracle database was started with spfile or pfile?

There are several ways to check whether the database was started using an spfile or a pfile:

  1. Run
    SHOW PARAMETER SPFILE;
    
    If it returns a path to an spfile in the Value field, then an spfile was used. If the Value field is blank, then a pfile was used to start up the database:
    NAME                         TYPE    VALUE
    ---------------------------- -------- ---------------------
    spfile                       string
    
  2. Run ALTER SYSTEM statement and see if it causes an error:
    ALTER SYSTEM SET AUDIT_TRAIL='DB_EXTENDED' SCOPE=SPFILE;
    
    If a pfile was used, the following error will occur:
    ORA-32001: write to SPFILE requested but no SPFILE is in use
    
  3. By using V$SPPARAMETER view:
    SELECT ISSPECIFIED, COUNT(*) FROM V$SPPARAMETER GROUP BY ISSPECIFIED;
    
    If a pfile was used, FALSE will be returned for all settings - meaning they all come from the PFILE:
    ISSPEC   COUNT(*)
    ------ ----------
    FALSE         348
    
    If result returned looks something like this:
    ISSPEC   COUNT(*)
    ------ ----------
    FALSE         327
    TRUE           21
    
    then we know that an SPFILE was used, because ISSPECIFIED returned TRUE for the settings specified in the SPFILE.

Friday, January 25, 2013

ORA-01180: can not create datafile 1

RMAN-03002: failure of restore command at 01/24/2013 16:44:05
ORA-01180: can not create datafile 1
ORA-01110: data file 1: '/u01/oradata/pubs/system01.dbf'
Occurred when running database restore.

I ran

list incarnation;
the result was

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       DBName     707574411        PARENT  1          23-JAN-13
2       2       DBName     707574411        CURRENT 799793     25-JAN-13
Cause: Incarnation error

Solution: Reset incarnation

RESET DATABASE TO INCARNATION 1;
What is an incarnation?

A database incarnation is created whenever you open the database with the RESETLOGS option.

The Current incarnation is the one in which the database is running at present

The incarnation from which the current incarnation branched after an ALTER DATABASE OPEN RESETLOGS was performed is called the parent incarnation.

If you want to go back to an SCN that is not part of the current database incarnation, you need to use the RESET DATABASE TO INCARNATION command, as shown in the example above.

The purpose of incarnations

An incarnation helps to identify redo streams which have the same SCN, but occurred at different points in time. This prevents the possibility of applying the wrong archive log file from a previous incarnation which could corrupt the database.

Suppose we are at incarnation 1 and at SCN 100 in the database. I do a resetlogs, and the incarnation of the database becomes 2. Suppose we do another resetlogs, and it so happens that the SCN at the time of the resetlogs was also 100. Later we want to do a point-in-time recovery to SCN 100. How does Oracle know which is the correct SCN, how far back in time the database should be rolled back, and which archive logs need to be applied?

This is where incarnations come in: we can set the database to a particular incarnation to enable RMAN to do a point-in-time recovery.

Error accessing PRODUCT_USER_PROFILE

Error accessing PRODUCT_USER_PROFILE
Warning: Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM
Occurred after restoring the database and attempting to login via SQL*Plus

Solution: Run the @?/sqlplus/admin/pupbld.sql script while logged in as the SYSTEM user, to install the SQL*Plus PRODUCT_USER_PROFILE tables. This table allows SQL*Plus to disable commands per user.

Important: Run this script logged in as the SYSTEM user, not SYS. When logging in to SQL*Plus using connect / as sysdba, you end up logged in under the SYS account. To double-check, run the "show user" command.

Thursday, January 24, 2013

ORA-01113: file 1 needs media recovery

RMAN-03002: failure of alter db command at 01/24/2013 12:08:25
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/u40/oradata/pubs/system01.dbf'
The error occurs when running ALTER DATABASE OPEN command

Cause: The RECOVER DATABASE; statement was not issued after running RESTORE DATABASE and before opening the database via ALTER DATABASE OPEN

Solution: Issue RECOVER DATABASE;

followed by ALTER DATABASE OPEN RESETLOGS; (or NORESETLOGS if no redo logs were lost)

Use RESETLOGS after incomplete recovery (when the entire redo stream wasn't applied). RESETLOGS initializes the logs, resets the log sequence number, and starts a new "incarnation" of the database.

Use NORESETLOGS when doing complete recovery (when the entire redo stream was applied). Oracle will continue using the existing (valid) log files.

The NORESETLOGS option does not clear the redo log files during startup and allows the online redo logs to be used for recovery. It is only used in the scenario where MANUAL RECOVERY is started, CANCEL is used, and then RECOVER DATABASE is started.

ORA-19573: cannot obtain exclusive enqueue for datafile 1

The error occurs when executing restore database statement.

Cause: The problem occurs because of an attempt to restore an open database.

Solution: Mount the database without opening it.

Run
SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

How to recover a dropped table in Oracle 11g when flashback mode is off

Recovering a dropped table is easy in Oracle, provided that the table was not dropped with the PURGE option. When a table is dropped with PURGE, the space occupied by the table is released immediately and the table does not get moved into the recycle bin. But if a table is dropped without the PURGE option, Oracle has a very neat feature - the recycle bin, similar to the recycle bin in Windows. There are two recycle bin views in Oracle: USER_RECYCLEBIN and DBA_RECYCLEBIN; the synonym RECYCLEBIN points to your USER_RECYCLEBIN.

The recycle bin can be turned on and off with the RECYCLEBIN initialization parameter. When a table is dropped, it gets renamed to a system-generated name prefixed with BIN$ and stored in the recycle bin. The important thing to know is that after the table has been dropped, it has only been renamed; the table segments are still in the tablespace, unchanged. The space occupied by the table will not be reclaimed until the table has been purged from the recycle bin.

While in the recycle bin, the table can even be queried using the newly generated name that starts with BIN$.

The table can easily be recovered from the recycle bin using flashback drop, which will rename the table to its original name.

You can check flashback mode by running

SELECT FLASHBACK_ON FROM V$DATABASE; 
First check that the RECYCLEBIN parameter is set to true. The recycle bin is a data dictionary table that contains information about dropped objects. Dropped tables and any associated objects such as indexes, constraints, nested tables, and the like are not removed and still occupy space. They continue to count against user space quotas until specifically purged from the recycle bin or, in the unlikely situation of tablespace space constraints, purged by the database.
SHOW PARAMETER RECYCLEBIN;
If recyclebin is set to off, perform the following steps:
  1. run
    ALTER SYSTEM SET RECYCLEBIN=ON SCOPE=SPFILE;
    
  2. verify that recyclebin parameter is now set to ON
  3. shutdown the database
    SHUTDOWN IMMEDIATE
    
  4. Restart the database
    STARTUP
    
then run
SELECT * FROM RECYCLEBIN;
and see if your table is in there. If it is, use the following query to restore it:
FLASHBACK TABLE <table_name> TO BEFORE DROP;
Then check if the table is back:
SELECT * FROM USER_TABLES WHERE TABLE_NAME='<TABLE_NAME>';

Wednesday, January 23, 2013

How to determine the type of a backup file based on the filename

Let's say you backed up your database and the fast_recovery_area contains a number of backup files, and you don't know which ones to use for recovery - which are full backups, which are incremental, and which are level 0 or level 1. First get the backup piece id by running:
list backuppiece '/u01/fast_recovery_area/DBName/backupset/backup_filename.bkp'
The output should look something like this:
List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
5       5       1   1   AVAILABLE   DISK        /u01/fast_recovery_area/DBName/backupset/backup_filename.bkp
Then run
list backupset 5;
where 5 is the backup piece id of your backup file. The output will look something like this:
List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
5       Incr 0  680.12M    DISK        00:00:06     18-JAN-13
        BP Key: 5   Status: AVAILABLE  Compressed: NO  Tag: TAG20130118T141414
        Piece Name: /u01/fast_recovery_area/DBName/backupset/backup_filename.bkp
  List of Datafiles in backup set 5
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/system01.dbf
  2    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/sysaux01.dbf
  3    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/undotbs01.dbf
  4    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/users01.dbf
  5    0  Incr 1256801    18-JAN-13 /u01/oradata/DBName/tbspace1_01.dbf
The Type field lists the backup type, in this case incremental. The LV field lists the level of the backup - 0.

Tuesday, January 22, 2013

"Not a Valid Month" Error

Error received when trying to insert a date into a date field:
INSERT INTO TableName(Field1)VALUES('01/22/2013');
Cause: The date format specified in the INSERT statement does not match the NLS_DATE_FORMAT parameter.
To check the NLS_DATE_FORMAT parameter, run:
SELECT sysdate FROM dual;
The output should be something like this: 22-JAN-13
Solution: Modify the SQL statement to read:
INSERT INTO TableName(Field1)VALUES(to_date('22-JAN-2013'));

Friday, January 18, 2013

Setting permanent environmental variables and user profile file

When you initially connect to the server, before you set your environment variables by running . oraenv, variables such as ORACLE_SID and ORACLE_HOME are populated with default values from .profile. You can change those temporarily by running . oraenv, but the more permanent ones are stored in the user profile file.

You can view your current settings by running the following from the command prompt

env

In order to view or edit .profile, you need to go to either /home/oracle or ORACLE_BASE and run

ls -la

to see all the files, including hidden ones - the ones preceded by the dot (.).

Then use

vi .profile

and look for a line where ORACLE_SID, ORACLE_BASE, and ORACLE_HOME are being set. Change it to the name of the database you want by default and your base and home directories.

Wednesday, January 16, 2013

Undo data vs. Redo log

The main difference is that UNDO data can make a change go away, and REDO can make the change happen again if necessary. UNDO holds a copy of the previous data until the transaction commits and for some time afterwards (depending on the undo retention period specified), thus providing read consistency, since users can read the data as it was at the time the transaction started while the transaction is still in progress. Capturing the undo data enables Oracle to roll back uncommitted data. Undo data is used both for read consistency and to recover from failed transactions.

UNDO segments store BEFORE images of a record, while REDO logs record redo entries.

For example, a user adds a record to the database - an employee named John Doe with an ID of 3, while the database currently contains two records with IDs 1 and 2. UNDO captures what is needed to reverse the insert of record 3 (effectively, a delete of that row) in case we need to roll the transaction back.

Meanwhile the REDO log records all the changes to the database as they happen. If the database crashes, Oracle will first read the redo logs and apply all committed changes (the ones that didn't end up in the datafiles yet, due to the crash) before opening the database for regular use.

Here are the basic differences between the two:

Undo                                               Redo
Makes a change go away                             Reproduces a change
Used for rollback and read consistency             Used for rolling forward the changes
Protects the database from inconsistent reads      Protects from data loss
Stored in undo segments in the undo tablespace     Buffered in the redo log buffer (SGA), then written to the online redo log files

Configuring auditing

How to check if auditing is enabled:
  1. connect as sysdba
  2. run
    SHOW PARAMETER AUDIT_TRAIL;
    
If the audit_trail parameter is set to NONE, enable auditing by performing the following steps:
  1. Update parameter file:
    ALTER SYSTEM SET AUDIT_TRAIL='DB' SCOPE=SPFILE; 
    
  2. Shutdown the database
    SHUTDOWN IMMEDIATE;
    
  3. Re-start and open database
    STARTUP OPEN;
    
  4. Check the parameter value:
    SHOW PARAMETER AUDIT_TRAIL;
    
When the AUDIT_TRAIL parameter is set to DB, audit data can be found in the SYS.AUD$ table, or in one of the views based on this table: DBA_AUDIT_TRAIL or DBA_COMMON_AUDIT_TRAIL. In order to capture the actual SQL command issued by the user, change AUDIT_TRAIL to DB_EXTENDED; in SYS.AUD$ the SQL command will show up in the SQL_TEXT field. For example:
SELECT ACTION, SQL_TEXT FROM DBA_AUDIT_TRAIL;
To view what is audited for each user, run:
SELECT USER_NAME, AUDIT_OPTION FROM DBA_STMT_AUDIT_OPTS WHERE NOT USER_NAME IS NULL;
To retrieve all the SQL statements run by the current user, run (AUDIT_TRAIL should be set to DB_EXTENDED in order to audit the actual SQL statements):
SELECT SQL_TEXT FROM USER_AUDIT_OBJECT;
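Auditing itself is switched on per statement or privilege with the AUDIT command; a small sketch for a hypothetical user1:
AUDIT SESSION BY user1;
AUDIT SELECT TABLE, UPDATE TABLE BY user1 BY ACCESS;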

Tuesday, January 15, 2013

Moving datafile to a different location

Moving datafile involves 4 steps:
  1. Taking datafile offline
  2. Moving the file to a new location
  3. Recovering the datafile
  4. Bringing datafile online
In order to perform the steps above, the database has to be in ARCHIVELOG mode. To check, run the following:
ARCHIVE LOG LIST;
The output will be something like this:
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     56
Current log sequence           58
Enable ARCHIVELOG mode and perform the following:
ALTER DATABASE DATAFILE '/u01/oradata/tbs01.dbf' OFFLINE;
ALTER DATABASE RENAME FILE '/u01/oradata/tbs01.dbf' TO '/u01/oradata/Northwind/tbs01.dbf';
RECOVER DATAFILE '/u01/oradata/Northwind/tbs01.dbf';
ALTER DATABASE DATAFILE '/u01/oradata/Northwind/tbs01.dbf' ONLINE;

Before you rename the file, make sure you run the following command on the UNIX side to ensure the file has been moved at the OS level:
cp /u01/oradata/tbs01.dbf /u01/oradata/Northwind/tbs01.dbf
If you do not move the OS file first, you will get an error in SQL*Plus while attempting to run the ALTER DATABASE RENAME statement:
ORA-01511: error in renaming log/data files 
ORA-01516: nonexistent log file, datafile, or tempfile "/u01/oradata/Northwind/tbs01.dbf"
When done, run the statement below to ensure that all your datafiles are in the correct place:
SELECT NAME FROM v$DATAFILE;

or

SELECT * FROM V$DBFILE; (for older versions)

Monday, January 14, 2013

How do you know you are a DBA

You know you are a DBA when being asked "Why are you going to see a dentist?" and you reply "For basic maintenance and performance monitoring"

Saturday, January 12, 2013

How to enable ARCHIVELOG mode for a database

ARCHIVELOG mode has a number of benefits:
  • Database can be backed up online
  • Backup policy can be managed through RMAN
  • Database can be recovered point-in-time
By default a database is created in NOARCHIVELOG mode. Here is a sequence of steps to perform in order to enable ARCHIVELOG mode for a database:
  1. Connect as sysdba
    sqlplus / as sysdba
    
  2. Shut down the database
    SHUTDOWN IMMEDIATE;
    
  3. Modify the pfile by setting the LOG_ARCHIVE_START parameter to true
    LOG_ARCHIVE_START=TRUE
    
  4. Startup and mount your database using your parameter file, or create an spfile from your pfile and then mount
    STARTUP MOUNT PFILE='/u01/app/oracle/product/11.2.0.3/dbs/init.ora';
    or
    CREATE SPFILE='/u01/app/oracle/product/11.2.0.3/dbs/spfile.ora' FROM PFILE='/u01/app/oracle/product/11.2.0.3/dbs/init.ora';
    STARTUP MOUNT;
    
  5. Enable ARCHIVELOG mode
    ALTER DATABASE ARCHIVELOG;
    
  6. Open your database
    ALTER DATABASE OPEN;
After you have run all of the above, check the status of your database with the following command:
ARCHIVE LOG LIST;
The output should look something like this:
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     56
Next log sequence to archive   58
Current log sequence           58
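You can also confirm the log mode directly from the data dictionary; once the steps above are done, this query should return ARCHIVELOG:
SELECT LOG_MODE FROM V$DATABASE;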

Friday, January 11, 2013

ORA-09925: Unable to create audit trail file

Got an infamous "ORA-09925: Unable to create audit trail file" error while trying to log in to sqlplus as sysdba via command line:

sqlplus / as sysdba

I did some research, and in roughly 90% of the cases where this error came up, the database was being created via the command line and the user tried to log in to sqlplus in order to start up the database from a pfile; the problem was a permissions issue - insufficient permissions to write to the adump directory. It happened to be something different in my case.

Cause: An instance for that ORACLE_SID was already started. I ran ps -ef | grep pmon

and saw that the instance was already running, which is why I was getting the error.

Solution: Run ps -ef | grep pmon, copy the process id, and kill the process using the kill <processid> command.
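
For completeness, a sketch of the whole check from the shell; the adump path below is the typical default and may differ on your system, and <pid> stands for the actual process id:
ps -ef | grep pmon                             # is an instance already running for this SID?
kill <pid>                                     # if yes, terminate it
ls -ld $ORACLE_BASE/admin/$ORACLE_SID/adump    # if not, verify the audit directory exists and is writable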

Wednesday, January 9, 2013

Creating an Oracle database from command line

#Export all environment variables

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3
export LIBPATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LIBPATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/lib32:$LD_LIBRARY_PATH
export ORACLE_SID=pubs
export PATH=$ORACLE_HOME/bin:$PATH
#run the following commands to verify the environment variables set above

env | grep ORA
env | grep LIB
#create a pfile - $ORACLE_HOME/dbs/initpubs.ora

db_name='pubs'
memory_target=2G
processes = 150
audit_file_dest='/u01/app/oracle/admin/pubs/adump'
audit_trail ='db'
db_block_size=8192
sga_max_size=1073741824
sga_target=1073741824
db_domain=''
db_recovery_file_dest='/u02/flash_recovery_area'
db_recovery_file_dest_size=2G
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_management=AUTO
undo_tablespace='UNDOTBS1'
# You may want to ensure that control files are created on separate physical
# devices
control_files = (/u01/oradata/pubs/control01.ctl, /u01/oradata/pubs/control02.ctl)
compatible ='11.2.0'
#create password file

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/pwdpubs.ora password=oracle entries=5
#create an entry in /etc/oratab file

pubs:/u01/app/oracle/product/11.2.0.3:N
#log into sqlplus as sysdba by running the following command:

sqlplus / as sysdba
# start the instance with nomount option, specifying your pfile

startup nomount pfile='/u01/app/oracle/product/11.2.0.3/dbs/initpubs.ora';
# run actual create database statement

create database pubs
MAXLOGFILES 5
MAXLOGMEMBERS 2
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 100
logfile group 1 ('/u01/oradata/pubs/redo01.log') size 100M,
        group 2 ('/u02/oradata/pubs/redo02.log') size 100M
character set WE8ISO8859P1
national character set utf8
datafile '/u01/oradata/pubs/system01.dbf' size 500M autoextend on next 10M maxsize unlimited extent management local
default tablespace users datafile '/u01/app/oracle/oradata/newdata/user01.dbf' size 10m
sysaux datafile '/u01/oradata/pubs/sysaux01.dbf' size 100M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '/u01/oradata/pubs/undotbs01.dbf' size 100M
default temporary tablespace temp tempfile '/u40/oradata/pubs/temp01.dbf' size 100M;
 
# run the following scripts to complete database creation

@?/rdbms/admin/catalog.sql

@?/rdbms/admin/catproc.sql
 
# set passwords for sys and system

alter user sys identified by whatever;

alter user system identified by whatever;
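
As a final sanity check that the new database is up and open, a quick sketch of two queries:

select name, open_mode from v$database;
select instance_name, status from v$instance;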

Tuesday, January 8, 2013

Using Oracle Data Pump for import/export

In order to get Data Pump directory location, execute

SELECT directory_path FROM dba_directories WHERE directory_name='DATA_PUMP_DIR'; 
Typically it is $ORACLE_BASE/admin/DBName/dpdump/

Then check the privileges that are assigned to the directory by executing:

SELECT grantee, privilege FROM dba_tab_privs dtp WHERE table_name = 'DATA_PUMP_DIR';
The grantee could be a user or a role.

Use the following to create a directory:

CREATE OR REPLACE DIRECTORY dpump_dir1 AS '.../admin/DBName/dpdump';
After a directory is created, you need to grant READ and WRITE permission on the directory to other users. For example, to allow the Oracle database to read and to write to files on behalf of user usr1 in the directory named by dpump_dir1, you must execute the following command:
GRANT READ,WRITE ON DIRECTORY dpump_dir1 TO usr1;
Once the directory access is granted, the user can export their database objects from the command line:

expdp usr1/pwd DIRECTORY=dpump_dir1 dumpfile=usr1.dmp
The above will produce the following output:

Export: Release 11.2.0.3.0 - Production on Tue Jan 8 12:48:31 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USR1"."SYS_EXPORT_SCHEMA_01":  usr1/******** DIRECTORY=dpump_dir1 dumpfile=usr1.dmp
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR1"."TABLE1"                             0 KB       0 rows
. . exported "USR1"."TABLE2"                             0 KB       0 rows
. . exported "USR1"."TABLE3"                                0 KB       0 rows
Master table "USR1"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for USR1.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/DBName/dpdump/usr1.dmp
Job "USR1"."SYS_EXPORT_SCHEMA_01" successfully completed at 12:49:08
The above will also be written to a default log file in the same directory (export.log), or to the log file you specify via the LOGFILE parameter:
expdp usr1/pwd DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log
You can also specify the particular tables you want to export in the TABLES parameter, in case you do not need all of the user's tables. You can list more than one table, separated by commas:

expdp usr1/pwd TABLES=TABLE1,TABLE2 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log
Import command would look something like this:

impdp usr1/pwd TABLES=TABLE1,TABLE2 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=import_usr1.log
To import a schema, you will need to add a SCHEMAS parameter to your command:

expdp usr1/pwd SCHEMAS=USR1 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log


impdp usr1/pwd SCHEMAS=USR1 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=import_usr1.log
And for a full database import/export, you will need to add the FULL parameter and set it to y:

expdp usr1/pwd FULL=y DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=export_usr1.log


impdp usr1/pwd FULL=y DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=import_usr1.log
To append data to existing tables, add the parameter TABLE_EXISTS_ACTION=APPEND to your import statement.
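For example, a sketch of an import that appends rows to the existing tables from the earlier examples:

impdp usr1/pwd TABLES=TABLE1,TABLE2 DIRECTORY=dpump_dir1 DUMPFILE=usr1.dmp LOGFILE=import_usr1.log TABLE_EXISTS_ACTION=APPEND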