Thursday, October 16, 2008

Oracle Critical Patch Update - October 2008

Oracle released the Critical Patch Update for October 2008 yesterday; this is the last CPU for 2008. There are 36 new fixes across all products, including 15 new security fixes for database products. Please review the following URL to see whether the product you are using requires this patch.
http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpuoct2008.html

You can also refer to MetaLink Note 735216.1 for more details on database fixes.

Thursday, September 25, 2008

Oracle announces Storage server and Database machine

Larry Ellison announced Oracle's foray into the hardware arena with the launch of a storage server and a database machine in his keynote address at the ongoing Oracle OpenWorld 2008 in San Francisco on September 24. Oracle has partnered with HP for this. Read the press release.

Oracle has announced a new storage server called "Oracle Exadata" and, in partnership with HP, a database machine named the "HP Oracle Database Machine". It is a pre-configured machine running Oracle Enterprise Linux, consisting of 8 database servers with 64 Intel processor cores and a grid of 14 Oracle Exadata storage servers. It has been designed specifically for the data warehousing market and is being seen as a serious contender to Teradata and Netezza. The Exadata storage server includes two Intel processors, each with four cores, and up to 12 terabytes of raw storage. Here is what James Kobielus of Forrester Research has to say on his blog.

Refer to the following links for more about Exadata and Database machine.

HP Oracle Exadata Storage Server
HP Oracle Database Machine

Tuesday, July 22, 2008

CURSOR_SPACE_FOR_TIME to be deprecated in future releases

According to a recently published MetaLink note (565424.1), the parameter CURSOR_SPACE_FOR_TIME is going to be deprecated starting with release 11.1.0.7. In fact, Oracle's next patch set for 10g, 10.2.0.5, will also desupport this parameter, so we will have to remove it once these two patch sets are released and we decide to migrate. CURSOR_SPACE_FOR_TIME was introduced to reduce latch contention; with the introduction of cursor mutexes it may no longer be required, hence Oracle has decided to deprecate it in future releases.

Monday, July 21, 2008

Oracle 11g New Features – PL/SQL enhancements Part-II

This is the concluding part of the two-part series on PL/SQL enhancements in 11g. Here are a few more useful enhancements in PL/SQL :-

1. Using named and mixed notation with PL/SQL subprograms in a SQL statement

Prior to 11g, to use a subprogram (e.g. a function) in a SELECT statement, you had to supply the actual parameters in positional notation. Let’s look at an example using both 10g and 11g to understand it better.

Here is a small piece of code that calculates an employee's years of employment :-
CREATE OR REPLACE FUNCTION cal_employment_duration (
   empid IN NUMBER,
   as_on IN DATE
)
   RETURN NUMBER
IS
   vduration NUMBER;
BEGIN
   SELECT MONTHS_BETWEEN (as_on, hiredate) / 12
     INTO vduration
     FROM emp
    WHERE empno = empid;

   RETURN ROUND (vduration);
END;
/
Now I call this function from a SELECT statement in Oracle 10g. Observe the difference when I use named and mixed notation :-
SIPRAS@ORA10G> SELECT empno, hiredate,
2 cal_employment_duration (empno, SYSDATE) || ' Years' "Employee Tenure"
3 FROM emp
4 WHERE empno = 7788
5 /

EMPNO HIREDATE Employee Tenure
---------- --------- ----------------------------------------------
7788 19-APR-87 21 Years
SIPRAS@ORA10G> SELECT empno, hiredate,
2 cal_employment_duration (empid => empno,
3 as_on => SYSDATE
4 )
5 || ' Years' "Employee Tenure"
6 FROM emp
7 WHERE empno = 7788;
cal_employment_duration (empid => empno,
*
ERROR at line 2:
ORA-00907: missing right parenthesis
Now let's use the same example in Oracle 11g :-
SIPRAS@11glab> SELECT empno, hiredate,
2 cal_employment_duration (empno, SYSDATE) || ' Years' "Employee Tenure"
3 FROM emp
4 WHERE empno = 7788;

EMPNO HIREDATE Employee Tenure
---------- --------- ----------------------------------------------
7788 19-APR-87 21 Years
Now let's use named notation and see whether it works :-
SIPRAS@11glab> SELECT empno, hiredate,
2 cal_employment_duration (empid => empno,
3 as_on => SYSDATE
4 )
5 || ' Years' "Employee Tenure"
6 FROM emp
7 WHERE empno = 7788;

EMPNO HIREDATE Employee Tenure
---------- --------- ----------------------------------------------
7788 19-APR-87 21 Years

SIPRAS@11glab> SELECT empno, hiredate,
2 cal_employment_duration (empno, as_on => SYSDATE)
3 || ' Years' "Employee Tenure"
4 FROM emp
5 WHERE empno = 7788;

EMPNO HIREDATE Employee Tenure
---------- --------- ----------------------------------------------
7788 19-APR-87 21 Years
Great, it works! Except in the following case; note the error message :-
SIPRAS@11glab> SELECT empno, hiredate,
2 cal_employment_duration (empid => empno,
3 SYSDATE)
4 || ' Years' "Employee Tenure"
5 FROM emp
6 WHERE empno = 7788;
cal_employment_duration (empid => empno,
*
ERROR at line 2:
ORA-06553: PLS-312: a positional parameter association may not follow a named
association

This is not really an omission: as in any PL/SQL call, a positional parameter association may not follow a named one.

2. Trigger Enhancements: A couple of enhancements have been made to triggers. I have captured all of them with examples below.

i) You can now create a trigger as ENABLE (the default) or DISABLE at creation time, e.g.
CREATE OR REPLACE TRIGGER test_trig
   BEFORE INSERT ON emp
   DISABLE
BEGIN
   NULL;
END;
/

ii) Compound trigger: A new trigger type called “compound” has been introduced. It implements actions for all of the DML timing points in a single trigger: “BEFORE STATEMENT”, “AFTER STATEMENT”, “BEFORE EACH ROW” and “AFTER EACH ROW” can now each have their own action in one trigger. Here is an example :-

This trigger tracks updates to the “sal” column; it (a) assigns an update_id and notes the start and end time of each update statement, and (b) keeps the old and new salary for every row updated. All of this is done with a single trigger.
CREATE OR REPLACE TRIGGER track_sal_upd
   FOR UPDATE OF sal ON emp
   COMPOUND TRIGGER
   max_sal_upd_id NUMBER (4);

   BEFORE STATEMENT IS
   BEGIN
      SELECT NVL (MAX (sal_upd_id), 0) INTO max_sal_upd_id FROM sal_update_log;
      max_sal_upd_id := max_sal_upd_id + 1;
      INSERT INTO sal_update_log
           VALUES (max_sal_upd_id, NULL, NULL, NULL, SYSTIMESTAMP,
                   'salary update process ' || max_sal_upd_id || ' started');
   END BEFORE STATEMENT;

   AFTER EACH ROW IS
   BEGIN
      -- log the old and the new salary for every updated row
      INSERT INTO sal_update_log
           VALUES (max_sal_upd_id, :OLD.empno, :OLD.sal, :NEW.sal, SYSTIMESTAMP,
                   'updated by process ' || max_sal_upd_id);
   END AFTER EACH ROW;

   AFTER STATEMENT IS
   BEGIN
      INSERT INTO sal_update_log
           VALUES (max_sal_upd_id, NULL, NULL, NULL, SYSTIMESTAMP,
                   'salary update process ' || max_sal_upd_id || ' finished');
   END AFTER STATEMENT;
END track_sal_upd;
/

iii) Ordering of triggers: You can now control the order in which the triggers on a table fire. Oracle 11g has introduced a new clause, FOLLOWS, to implement this feature. It lets you control the firing order when you have multiple triggers of the same type on the same table. Without the FOLLOWS clause (and in releases prior to 11g), the order in which such triggers fire is unspecified.

Here is an example using the FOLLOWS clause. We have two triggers: the first, “check_sal”, fires only on updates to the SAL column, whereas the second, “check_update_job”, fires on updates to the JOB and SAL columns. Note the FOLLOWS clause on the “check_sal” trigger: it states that check_sal should fire only after “check_update_job” has fired.

CREATE OR REPLACE TRIGGER check_update_job
   BEFORE UPDATE OF job, sal
   ON emp
   FOR EACH ROW
BEGIN
   IF :OLD.job IN ('CLERK', 'SALESMAN') THEN
      IF :NEW.job != 'MANAGER' THEN
         raise_application_error (num => -20001,
                                  msg => 'Cannot change job to ' || :NEW.job);
      END IF;
   END IF;
END;
/

CREATE OR REPLACE TRIGGER check_sal
   BEFORE UPDATE OF sal
   ON emp
   FOR EACH ROW
   FOLLOWS check_update_job
DECLARE
   v_start_range NUMBER (7, 2);
   v_end_range   NUMBER (7, 2);
BEGIN
   SELECT start_range, end_range
     INTO v_start_range, v_end_range
     FROM sal_range
    WHERE job = :NEW.job;

   IF :NEW.sal NOT BETWEEN v_start_range AND v_end_range THEN
      raise_application_error (num => -20000,
                               msg => 'Salary is not in the prescribed range');
   END IF;
END;
/


SIPRAS@11glab> select empno,job,sal from emp where empno=7369;

EMPNO JOB SAL
---------- --------- ----------
7369 CLERK 800

SIPRAS@11glab> update emp set sal=500, job='PRESIDENT' where empno=7369;
update emp set sal=500, job='PRESIDENT' where empno=7369
*
ERROR at line 1:
ORA-20001: Cannot change job to PRESIDENT
ORA-06512: at "SIPRAS.CHECK_UPDATE_JOB", line 7
ORA-04088: error during execution of trigger 'SIPRAS.CHECK_UPDATE_JOB'
See the above error: “check_update_job” fired first. In releases prior to Oracle 11g, you could not control this.

3. Enhancements to PL/SQL Native compilation

Prior to Oracle 11g, several initialization parameters were used to set up native compilation of PL/SQL code. Starting with 11g, only one parameter, PLSQL_CODE_TYPE, needs to be set. It can be set to INTERPRETED (the default) or NATIVE, at session level (ALTER SESSION), at system level (ALTER SYSTEM), or for specific subprograms using ALTER PROCEDURE. Native compilation helps improve the speed of PL/SQL programs by compiling them to native code; it is of great help when your code contains a lot of loops, calculations, branches, etc.
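As a hedged sketch (the procedure name calc_stats is hypothetical, not from this post), switching the compilation mode might look like this:

```sql
-- compile subsequent PL/SQL units natively in this session
ALTER SESSION SET plsql_code_type = 'NATIVE';

-- or recompile one existing (hypothetical) procedure natively
ALTER PROCEDURE calc_stats COMPILE PLSQL_CODE_TYPE = NATIVE REUSE SETTINGS;

-- verify the compilation mode
SELECT name, type, plsql_code_type
  FROM user_plsql_object_settings
 WHERE name = 'CALC_STATS';
```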

The trigger enhancements are really cool! I particularly liked the compound trigger feature. Share your thoughts on what you like. This was the concluding part of the PL/SQL enhancement series; here is the link to the first post in the series.

Wednesday, July 16, 2008

Oracle Critical Patch Update July 2008

Oracle has released the third Critical Patch Update for 2008 (CPU July 2008). Critical Patch Updates mostly contain fixes to security vulnerabilities, along with some non-security fixes. This CPU contains 45 new security fixes across all products, including 14 for the Oracle database. Please review the following URL to see whether the product you are using requires this patch.

http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpujul2008.html

Also refer to MetaLink note 579278.1 for Oracle database and Fusion Middleware products.

The next critical patch update (CPU) would be coming in October 2008.

Happy patching!


Monday, June 30, 2008

Oracle 11g New Features – PL/SQL enhancements Part-I

I was going through the enhancements made to PL/SQL in Oracle 11g and learned that there are quite a few. I am going to cover these new features in two parts. Here goes the first part.

1. Usage of sequences in PL/SQL expressions

You can now assign a sequence value within a PL/SQL expression; that is, you no longer need a SQL query to generate a sequence value and assign it to a variable. Here is an example :-

SELECT seq_name.nextval INTO variable FROM dual; –> this was how we used to generate sequence values inside PL/SQL

From 11g you can simply do this

variable := seq_name.nextval; –> isn’t this great!

Similarly “currval” can also be used in PL/SQL expression.
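Both pseudocolumns can be exercised in a tiny anonymous block; a sketch, assuming an existing sequence called seq_name:

```sql
DECLARE
   v_id NUMBER;
BEGIN
   v_id := seq_name.NEXTVAL;                      -- direct assignment, 11g onwards
   DBMS_OUTPUT.put_line ('Generated : ' || v_id);
   DBMS_OUTPUT.put_line ('Current   : ' || seq_name.CURRVAL);
END;
/
```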

2. CONTINUE statement in PL/SQL

CONTINUE is a new loop-control statement in 11g. We have long used EXIT to leave a loop on a certain condition; CONTINUE instead abandons the current iteration and passes control to the next one. Here is a small example that prints the even numbers :-

BEGIN
   FOR x IN 1 .. 10
   LOOP
      IF MOD (x, 2) = 0
      THEN
         DBMS_OUTPUT.put_line ('Even number');
         DBMS_OUTPUT.put_line (x);
      ELSE
         CONTINUE;
         DBMS_OUTPUT.put_line (x);   -- never reached: CONTINUE skips to the next iteration
      END IF;
   END LOOP;
END;
/

3. CONTINUE-WHEN statement

Its purpose is to replace the IF…THEN…CONTINUE pattern. Let's rewrite the above example using CONTINUE-WHEN :-

BEGIN
   FOR x IN 1 .. 10
   LOOP
      CONTINUE WHEN MOD (x, 2) = 1 ;
         DBMS_OUTPUT.put_line ('Even number');
         DBMS_OUTPUT.put_line (x); 
   END LOOP;
END;
/

4. New Datatypes – SIMPLE_INTEGER, SIMPLE_FLOAT and SIMPLE_DOUBLE

SIMPLE_INTEGER supports values ranging from –2147483648 to 2147483647 and does not allow null values, which means it comes with a built-in “NOT NULL” constraint. Besides never being checked for nulls, SIMPLE_INTEGER is not checked for overflow either. These facts give it better performance than PLS_INTEGER, especially in natively compiled code.

SIMPLE_FLOAT and SIMPLE_DOUBLE are new subtypes of BINARY_FLOAT and BINARY_DOUBLE with “NOT NULL” feature.
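A small sketch of SIMPLE_INTEGER in use; note it must be initialized at declaration because of the implicit NOT NULL constraint:

```sql
DECLARE
   v_sum SIMPLE_INTEGER := 0;   -- initialization is mandatory (NOT NULL)
BEGIN
   FOR i IN 1 .. 100 LOOP
      v_sum := v_sum + i;
   END LOOP;
   DBMS_OUTPUT.put_line (v_sum);   -- prints 5050
END;
/
```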

5. Changes to Regular expression built-ins

The new REGEXP_COUNT built-in returns the number of times a pattern occurs in an input string. The REGEXP_SUBSTR and REGEXP_INSTR built-ins have also been enhanced, with a new subexpression argument to pinpoint the part of the match you want.
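For instance, a quick sketch of REGEXP_COUNT:

```sql
-- 'an' occurs twice in 'banana'
SELECT REGEXP_COUNT ('banana', 'an') AS cnt FROM dual;
```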

I will be covering a few more PL/SQL enhancements in my next post. Do post your comments if you have any suggestions.

Friday, May 23, 2008

Capital of India hosts its first ever Blog Camp

I just learned that Delhi (the capital of India) is hosting its first ever Blog Camp. I had been waiting for this event to happen; I am not sure whether I can make it, but I will make an attempt.

Hats off to its organizers!!!

When
24th May 2008 9:30 am IST

Where
Microsoft Corp
5th Floor, Eros Towers
Nehru Place, New Delhi
Map

More details at http://www.barcamp.org/BlogCampDelhi

Storing milliseconds in Oracle

I got an interesting question from someone on my team today: does a TIMESTAMP column store the time in milliseconds? If not, is it possible?

This is something I never bothered to think about, so I had to do some quick research. Eventually I found that Oracle does not store milliseconds directly, but it does store seconds with a fractional component from which you can derive the milliseconds. And not just milliseconds: you can get to microseconds too! In fact you can specify the number of digits Oracle stores in the fractional seconds, i.e. the precision of the TIMESTAMP datatype. The default is platform dependent: it defaults to 6 on UNIX and 3 on Windows. The valid range is 0-9.

Here is how the fractional component is shown; it can be used to derive the milliseconds :-



SIPRAS@orademo> select time_in_ms from test_tab;

TIME_IN_MS
------------------------------------------------
23-MAY-08 01.29.59.008864 PM
23-MAY-08 11.11.11.100000 AM


You can insert milliseconds too! Here is an example :-




SIPRAS@orademo> create table test_tab (time_in_ms timestamp);

Table created.

SIPRAS@orademo> insert into test_tab values(systimestamp);

1 row created.

SIPRAS@orademo> insert into test_tab values('23-MAY-2008 11:11:11.1');

1 row created.

SIPRAS@orademo> commit;

Commit complete.

SIPRAS@orademo> select * from test_tab;

TIME_IN_MS
---------------------------------------------------------------------------
23-MAY-08 01.29.59.008864 PM
23-MAY-08 11.11.11.100000 AM
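To pull the milliseconds out explicitly, the fractional seconds can be formatted or extracted; a sketch against the test_tab table above (FF3 limits the display to three fractional digits):

```sql
SELECT TO_CHAR (time_in_ms, 'DD-MON-YY HH24:MI:SS.FF3') AS ts_with_ms,
       EXTRACT (SECOND FROM time_in_ms) AS seconds_with_fraction
  FROM test_tab;
```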


I was glad that Oracle does provide some mechanism to store milliseconds otherwise I would have been really surprised.

Wednesday, May 21, 2008

11g New Features: Data pump enhancements

I was going through the enhancements made to the Oracle Data Pump import/export utilities and a couple of them caught my attention. To be specific, I liked the newly introduced REMAP_TABLE and REMAP_DATA parameters most :-

  • Remap_Data
Remaps the data of a column while importing, using a function you supply. Particularly useful when repeatedly importing production data: you can use this parameter to mask or convert data such as passwords, credit card details, email IDs, etc. Isn't that really cool?
  • Remap_Table
It is now possible to rename a table while importing. Useful? Yes, I think so: for example, when you keep table exports, or want to get data from production and compare it with stage data during bug fixing. With this parameter you do not need to import the table into a different schema (because the table already exists in the stage database) to compare. Indeed, another useful parameter.
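As a sketch of how the two parameters might be combined on import (the directory object, dump file and masking function mask_pkg.mask_email are all hypothetical, not from an actual run):

```shell
impdp scott DIRECTORY=dp_dir DUMPFILE=prod.dmp \
      REMAP_TABLE=scott.emp:emp_stage \
      REMAP_DATA=scott.emp.email:scott.mask_pkg.mask_email
```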
Apart from the ones listed above, Oracle 11g has introduced a few other enhancements to the Data Pump utilities :-

1. data compression => compress data and metadata before writing to the dump files
2. compress dump files => reduces dump file size by 10-15%
3. encryption features => you can encrypt data during export and can also specify encryption algorithm. you can encrypt dump files too!
4. data options => to ignore non-deferrable constraints
5. reuse dumpfiles => overwrite dump files; earlier, Data Pump export would return an error if a dump file with the same name existed, whereas now it will overwrite any existing dump file. It's both good and bad, since you may have to move dump files to a different location if you need to retain them.
6. transportable option for tables

All in all, quite a few enhancements to the Data Pump utilities, though I still don't understand the "transportable" option for tables and where/how to put it to use. Thoughts?

BLOB support in SQL*Plus

The Oracle 11g SQL*Plus client comes with an improvement: it now supports BLOB columns. You can verify BLOB column content using SQL*Plus, though you may still not be able to make sense of it if the content is an image, a PDF, etc.

Here is an example :-

$ sqlplus sipras

SQL*Plus: Release 11.1.0.6.0 - Production on Wed May 14 14:47:30 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SIPRAS@11glab> create table test (id number, pic blob);

Table created.

SIPRAS@11glab> insert into test values (1,'122334');

1 row created.

SIPRAS@11GLAB > select * from test;

ID
----------
PIC
----------------------------------------------------------------------------------------------------
1
122334


1 row selected.


-- An example of loading an image file into the BLOB column

SIPRAS@11glab> create or replace directory blobdir as '/tmp';

Directory created.

SIPRAS@11glab>insert into test values(2, utl_raw.cast_to_raw('testing blob'));

SIPRAS@11GLAB > select * from test;

ID
----------
PIC
----------------------------------------------------------------------------------------------------
1
122334

2
74657374696E6720626C6F62


2 rows selected.


-- anonymous block to load the content of an image file to the blob


SIPRAS@11glab> DECLARE
   vblob  BLOB;
   vbfile BFILE := BFILENAME ('BLOBDIR', '10gocplogo.jpg');
   vsize  INTEGER;
BEGIN
   SELECT pic
     INTO vblob
     FROM test
    WHERE id = 1
   FOR UPDATE;

   DBMS_LOB.fileopen (vbfile);
   vsize := DBMS_LOB.getlength (vbfile);
   DBMS_OUTPUT.put_line ('Size of input file: ' || vsize);
   DBMS_LOB.loadfromfile (vblob, vbfile, vsize);
   DBMS_OUTPUT.put_line ('After loadfromfile');
   vsize := DBMS_LOB.getlength (vblob);
   DBMS_OUTPUT.put_line ('Size of blob: ' || vsize);
   DBMS_LOB.fileclose (vbfile);
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.put_line ('An error occurred');
      DBMS_OUTPUT.put_line (SQLCODE || SQLERRM);
END;
/

Size of input file: 15054
After loadfromfile
Size of blob: 15054

PL/SQL procedure successfully completed.

SQL> select * from test;

ID
----------
PIC
----------------------------------------------------------------------------------------------------
1
FFD8FFE000104A46494600010201012C012C0000FFED002C50686F746F73686F7020332E30003842494D03ED000000000010
012C000000010001012C000000010001FFEE000E41646F62650064C00000

2
74657374696E6720626C6F62

2 rows selected.

I know you cannot interpret the values since it was an image file, but I just wanted to show that you will no longer get the "SP2-0678: Column or attribute type can not be displayed by SQL*Plus" error.

However, if you try the same thing from a 10g client it won't work, and you will get the SP2-0678 error :-

SQL*Plus: Release 10.2.0.1.0 - Production on Wed May 14 13:21:11 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SIPRAS@11glab> truncate table test;
Table truncated.

SIPRAS@11glab> insert into test values (1,'122334');

1 row created.

SIPRAS@11glab> select * from test;
SP2-0678: Column or attribute type can not be displayed by SQL*Plus

Tuesday, May 13, 2008

11g New Features - Tablespace Encryption

You might have heard about Transparent Data Encryption (TDE) in 10g; in Oracle 11g it has been taken to the next level with "tablespace encryption". So TDE, which started at the column level, is now available at the tablespace level.

How does it help?
Instead of encrypting columns one by one in a table, or across a set of tables, you can simply put all the tables that need encryption in a single tablespace and encrypt it.

How to do it?
- Create and open the wallet
- Create tablespace with encryption property

CREATE TABLESPACE encrypt_tbsp
   DATAFILE '/oradata/encrypt_tbsp01.dbf' SIZE 100M
   ENCRYPTION
   DEFAULT STORAGE (ENCRYPT);


- Create tables in the encrypted tablespace

How to find if tablespace is encrypted or not?
- A new column "ENCRYPTED" has been added to DBA_TABLESPACES which will indicate if the tablespace is encrypted or not.
- Query a new dynamic performance view V$ENCRYPTED_TABLESPACES for encrypted tablespaces.
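A quick sketch of both checks, using the tablespace created above:

```sql
SELECT tablespace_name, encrypted
  FROM dba_tablespaces
 WHERE tablespace_name = 'ENCRYPT_TBSP';

SELECT ts#, encryptionalg, encryptedts
  FROM v$encrypted_tablespaces;
```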

Do I need to do anything special to access data in an encrypted tablespace?
Not really; you continue to access data as usual, BUT the wallet must be open. No modification whatsoever is required for the code/SQL to access data in an encrypted tablespace.

What is supported and not supported with encrypted tablespaces?
Supported
- Move table back and forth between encrypted tablespace and non-encrypted tablespace
- Datapump is supported to export/import encrypted content/tablespaces

- Transportable tablespace is supported using datapump

Not Supported
- Tablespace encryption cannot be used for SYSTEM, SYSAUX, UNDO and TEMP tablespaces
- Existing tablespace cannot be encrypted

- Traditional export/import utilities for encrypted content


I have not yet been able to create an encrypted tablespace myself, but I still wanted to share this information. I will try to post my learnings from the exercise later. Hope this is useful; enjoy encrypting, and do let me know if I have missed any key points.

References:-
Oracle® Database Advanced Security Administrator's Guide 11g Release 1 (11.1) Part Number B28530-02

Thursday, May 8, 2008

11g New Feature - SYSDBA authentication gets stronger

SYSDBA and SYSOPER authentication gets stronger in 11g: Oracle has now extended support for PKI, Kerberos and RADIUS to SYSDBA and SYSOPER connections. Earlier, this was available to all users except SYSDBA and SYSOPER. A new initialization parameter, LDAP_DIRECTORY_SYSAUTH, has been introduced; it needs to be set along with LDAP_DIRECTORY_ACCESS for strong SYSDBA authentication.
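The parameter setup might look like this (a sketch; LDAP_DIRECTORY_SYSAUTH is a static parameter, so the change goes to the SPFILE and needs an instance restart):

```sql
ALTER SYSTEM SET ldap_directory_sysauth = YES SCOPE = SPFILE;
ALTER SYSTEM SET ldap_directory_access = PASSWORD SCOPE = SPFILE;
-- restart the instance for LDAP_DIRECTORY_SYSAUTH to take effect
```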

You log in as SYSDBA the following way if you have configured directory (OID) authentication :-

CONNECT user@11glab AS SYSDBA
Enter password: password

If you are using Kerberos or SSL, connect as SYSDBA the following way :-

CONNECT /@11glab AS SYSDBA

References: Oracle® Database Security Guide 11g Release 1 (11.1) Part Number B28531-04

Sunday, April 27, 2008

Oracle Metalink getting a new look

Oracle MetaLink is getting a facelift, with a lot of Flash in use. Here is the cool new look :-



It is yet to replace the existing interface, though. You can preview the new look via https://csm.oracle.com or by clicking the "Software Configuration Manager" tab on https://metalink.oracle.com.

Thursday, April 17, 2008

Oracle Critical Patch Update April 2008

Oracle has released the Critical Patch Update for April 2008. Please review the following URL to see whether the product you are using requires the patch.

http://www.oracle.com/technology/deploy/security/critical-patch-updates/cpuapr2008.html

Cheers!

Thursday, April 10, 2008

11g New Features - Automatic Memory Management

Memory management in the Oracle database keeps getting easier. There was a time when each memory component had to be sized with its own parameter. Oracle introduced dynamic memory resizing in 9i, then moved to Automatic Shared Memory Management in 10g, reducing the major sizing parameters to two: the SGA was controlled by a single parameter (SGA_TARGET) and the PGA by PGA_AGGREGATE_TARGET. Now Oracle has gone one step further: 11g gives you the option of defining only one parameter, MEMORY_TARGET, to control both SGA and PGA, and Oracle can dynamically exchange memory between the two. Isn't that great...

Starting with 11g, Oracle uses this new memory management feature, known as "Automatic Memory Management", by default. You no longer need to worry about whether the PGA or the SGA got over-allocated; you can simply set MEMORY_TARGET and relax!
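Enabling it might look like this (sizes are illustrative; MEMORY_MAX_TARGET is not dynamic, so the changes go to the SPFILE and need a restart):

```sql
ALTER SYSTEM SET memory_max_target = 808M SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 608M SCOPE = SPFILE;
-- optionally zero the old targets so Oracle manages SGA and PGA fully
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;
-- restart the instance
```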

SIPRAS@11glab> show parameter memory

NAME TYPE VALUE
------------------------------------ ----------- -------
hi_shared_memory_address integer 0
memory_max_target big integer 608M
memory_target big integer 608M
shared_memory_address integer 0


You can also set an upper bound with MEMORY_MAX_TARGET, as was the case with SGA_MAX_SIZE. Some new (and useful) views in 11g pertaining to automatic memory management :-

V$MEMORY_DYNAMIC_COMPONENTS -> shows how much memory has been allocated to each component, along with the minimum and maximum values it has touched
V$MEMORY_TARGET_ADVICE -> gives tuning advice for MEMORY_TARGET

SIPRAS@11glab> SELECT component, current_size, min_size, max_size, last_oper_type FROM v$memory_dynamic_components;

COMPONENT CURRENT_SIZE MIN_SIZE MAX_SIZE LAST_OPER_TYP
------------------------------ ------------ ---------- ---------- -------------
shared pool 150994944 71303168 150994944 GROW
large pool 4194304 4194304 4194304 STATIC
java pool 12582912 4194304 12582912 GROW
streams pool 0 0 0 STATIC
SGA Target 381681664 381681664 381681664 STATIC
DEFAULT buffer cache 209715200 209715200 297795584 SHRINK
KEEP buffer cache 0 0 0 STATIC
RECYCLE buffer cache 0 0 0 STATIC
DEFAULT 2K buffer cache 0 0 0 STATIC
DEFAULT 4K buffer cache 0 0 0 STATIC
DEFAULT 8K buffer cache 0 0 0 STATIC
DEFAULT 16K buffer cache 0 0 0 STATIC
DEFAULT 32K buffer cache 0 0 0 STATIC
Shared IO Pool 0 0 0 STATIC
PGA Target 16777216 16777216 16777216 STATIC
ASM Buffer Cache 0 0 0 STATIC


MEMORY_TARGET is a dynamic parameter whereas MEMORY_MAX_TARGET is not. If you set SGA_TARGET and PGA_AGGREGATE_TARGET along with automatic memory management and they are less than MEMORY_TARGET, those values act as the minimums for the SGA and PGA respectively. If you set them to more than MEMORY_TARGET, you will get the error "ORA-00844: Parameter not taking MEMORY_TARGET into account, see alert log for more information".

Reference :- Oracle® Database Administrator's Guide 11g Release 1 (11.1) Part Number B28310-04

Un-documented and Un-used privilege

A recent question on the Oracle Forums about a privilege caught my attention (and got a few others interested too), so I started looking for an answer. In the end it turned out to be an undocumented and unused privilege: UNDER ANY TABLE. The privilege does exist in the database; I checked DBA_SYS_PRIVS and ROLE_SYS_PRIVS, but it is not documented anywhere. I searched the 9i, 10g and 11g documentation with no luck.

SIPRAS@orademo> select * from ROLE_SYS_PRIVS where privilege like 'UNDER%';

ROLE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
DBA UNDER ANY TYPE YES
DBA UNDER ANY TABLE YES
DBA UNDER ANY VIEW YES

The other two privileges, UNDER ANY VIEW and UNDER ANY TYPE, are documented. Here is the link.

One respondent mentioned that in 2002 someone asked this question on the MetaLink forums, and Support replied: "This privilege is currently not used today. It was added for future functionality." Maybe Oracle thought of it but did not use it.

My search led me to IBM Informix Dynamic Server, which has an UNDER privilege for tables. Here is an excerpt from its documentation -

Under Privileges for Typed Tables (IDS)

You can grant or revoke the Under privilege to control whether users can use a typed table as a supertable in an inheritance hierarchy. The Under privilege is granted to PUBLIC automatically when a table is created (except in ANSI-compliant databases). In an ANSI-compliant database, the Under privilege on a table is granted to the owner of the table. To restrict which users can define a table as a supertable in an inheritance hierarchy, you must first revoke the Under privilege for PUBLIC and then specify the users to whom you want to grant the Under privilege. For example, to specify that only a limited group of users can use the employee table as a supertable in an inheritance hierarchy, you might execute the following statements:

REVOKE UNDER ON employee
FROM PUBLIC;

GRANT UNDER ON employee
TO johns, cmiles, paulz
It sounds like an ANSI-compliance feature, related to user-defined types and nested tables in Oracle. Maybe Oracle intended something similar but never implemented it, so the privilege is still sticking around without any use; or maybe UNDER ANY TYPE suffices and UNDER ANY TABLE has simply not been taken out...

Wednesday, April 9, 2008

yaahoo....10g RAC expert...

....I have something to cheer about: I cleared the 10g RAC Expert exam today. Feeling relieved, and proud too :)

Monday, April 7, 2008

11g New Features - Incident packaging using "adrci" utility

As an Oracle DBA, you interface with Oracle Support many times, especially when faced with critical errors such as ORA-600 or ORA-7445. Each time you interact with Oracle Support, you have to provide quite a few logs (alert log, trace files, etc.) so that Support can assist you in resolving the problem. At times the SR goes back and forth just to gather additional information, such as extra trace files you may have missed earlier.

But starting with 11g, your life becomes a little easier thanks to a new utility called "ADRCI", the ADR Command Interpreter. You no longer have to dig around for the trace files and other files Support would require for analysis; you can simply use adrci to package all of them with a few simple commands.

Apart from the packaging ability, one can use adrci to :-

a. view alert log (show alert)
b. view other trace files
c. view health check reports
Viewing the alert log and trace files is easy, but as I said, the best use of this utility is packaging the incidents/problems encountered, so that the packaged file can easily be uploaded to Oracle Support when needed. You do not have to search through trace files to determine which files are required for analysis.

Here is a quick demonstration of how useful this utility can be when you have to upload files ( required to analyze a specific problem) to Oracle support :-

1. Launch adrci

$ adrci

ADRCI: Release 11.1.0.6.0 - Beta on Mon Apr 7 16:11:06 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

ADR base = "/app/oracle"


2. Check the incidents reported in alert log

adrci> show incident

ADR Home = /app/oracle/diag/rdbms/11GLAB:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------
21746 ORA 4031 2008-04-07 16:57:11.039525 +05:30
21745 ORA 4031 2008-04-07 16:57:00.356082 +05:30
21715 ORA 4031 2008-04-07 16:57:16.796655 +05:30
21714 ORA 4031 2008-04-07 16:57:07.883365 +05:30
21713 ORA 4031 2008-04-07 16:57:00.694116 +05:30
5 rows fetched


3. Identify the specific incident for which you want to create a package so that you can upload it to Oracle support

adrci> IPS CREATE PACKAGE INCIDENT 21713
Created package 2 based on incident id 21713, correlation level typical


This creates a package for incident 21713 in the "incpkg" directory; you can then add diagnostic data to the package.

4. Finally, generate the package for the incident, which can then be uploaded to MetaLink while seeking support from Oracle.

adrci> ips generate package 3
Generated package 3 in file /app/oracle/product/11.1.0/db_1/dbs/ORA4031_20080407170431_COM_1.zip, mode complete

5. As reported above, a zip file has been created with all the relevant logs. Now you can upload this zip file to Oracle Support and seek their help in resolving your problem.

You can also view the information generated in the "incpkg" directory :-

$cd /app/oracle/diag/rdbms/11GLAB/incpkg/pkg_3/seq_1
$ ls -ltr
-rw-r----- 1 oracle oinstall 499 Apr 7 17:14 metadata.xml
-rw-r----- 1 oracle oinstall 21968 Apr 7 17:14 manifest_3_1.xml
-rw-r----- 1 oracle oinstall 26270 Apr 7 17:14 manifest_3_1.txt
-rw-r----- 1 oracle oinstall 20064 Apr 7 17:14 manifest_3_1.html
drwxr-xr-x 2 oracle oinstall 4096 Apr 7 17:14 export
drwxr-xr-x 2 oracle oinstall 4096 Apr 7 17:14 crs
-rw-r----- 1 oracle oinstall 62789 Apr 7 17:14 config.xml

Earlier we used to achieve the same by running scripts or collecting the logs/trace files manually. With adrci this task is greatly simplified; I think it is surely going to reduce the time taken to diagnose and resolve problems.

References :- Oracle® Database Administrator's Guide 11g Release 1 (11.1) Part Number B28310-04
Oracle® Database Utilities 11g Release 1 (11.1) Part Number B28319-02 - ADRCI: ADR Command Interpreter

Friday, April 4, 2008

11g New Features - ADR

This is a big change in 11g. You will no longer find "alertSID.log" in its familiar location, i.e. admin/SID/bdump. It now has a new home; in fact, Oracle has standardized on storing all logs/traces under a single base directory instead of scattering them all over. It's called the "Automatic Diagnostic Repository" or "ADR". The parameters "background_dump_dest" and "user_dump_dest" are now deprecated, replaced by a new parameter called "DIAGNOSTIC_DEST". What's more, the alert log is also available in XML; don't worry, the text version is still available.

So from now on, you should use the "diagnostic_dest" parameter to specify the destination for logs/traces. Oracle creates a sub-directory called "diag" beneath the "diagnostic_dest" directory where it keeps all logs/traces. And when I say all logs, I mean the logs/trace files for all its products/tools, including ASM, CRS, the Listener etc. This is a great move, especially where the CRS stack is used, since CRS logs used to be scattered all over the place.

Here is the new directory structure where you can find all log and trace files :-

"diagnostic_dest/diag/product_name/instance_name/"

Note: I have expanded and explained only the key folders in the rdbms sub-directory, which stores all log and trace files for the Oracle database.


diag
rdbms
SID
sweep -->
stage -->
incpkg -->
incident --> incident dump files
hm --> health monitor files
cdump --> core_dump_dest
metadata --> incident metadata files
lck -->
ir -->
alert --> location for the alert log in xml format
trace --> location for all trace files and text version of alert.log

ofm
netcman
lsnrctl
diagtool
crs
asm
tnslsnr
clients
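The layout above follows a predictable pattern, so the trace directory (which also holds the text alert log) can be derived from the ADR base. Here is a minimal sketch in Python - the path components are the ones shown in the tree above; the helper name itself is mine, purely illustrative:

```python
def adr_trace_dir(diagnostic_dest: str, db_name: str, instance: str) -> str:
    """Build the ADR trace directory path for a database instance.

    Trace files and the text alert log live here; the XML alert log
    sits in the sibling "alert" directory.
    """
    return "/".join([diagnostic_dest, "diag", "rdbms", db_name, instance, "trace"])

print(adr_trace_dir("/app/oracle", "11glab", "11GLAB"))
# /app/oracle/diag/rdbms/11glab/11GLAB/trace
```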

Some other major improvements are :-
  • Critical errors such as ORA-600, ORA-7445 etc. are now treated as incidents. Oracle tags the logs with incident IDs; incident alerts are sent to EM, and an entry is made in the alert log as usual.
  • Incident Packaging service :- IPS enables you to automatically and easily gather the diagnostic data—traces, dumps, health check reports, and more—pertaining to a critical error and package the data into a zip file for transmission to Oracle Support. Awesome indeed.
  • Automatic capture of diagnostic data upon failure
  • Health checks on detecting a critical error
  • V$DIAG_INFO is a new view which lists all ADR locations for the database instance
  • There is a new column called "TRACEFILE" in V$PROCESS which specifies the trace file name for each process.
Above all, 11g comes with a new command-line utility called "adrci" to view alert/trace files, package incident trace files for easy upload to Oracle Support, view health check reports, and more. I will explain this tool in my next post. So all said, ADR is a great improvement - a great step forward in standardizing diagnostic information.

References :- Oracle® Database Administrator's Guide 11g Release 1 (11.1) Part Number B28310-04

Friday, March 28, 2008

11g New Features - Virtual column

The other day, while going through the Oracle forums, I found an interesting question on partitioning. The gentleman had a problem most of us face: he wanted to partition a table based on year and month, but both columns were stored as CHAR datatypes; here is the table structure :-

CREATE TABLE SALES
(
YEARS CHAR(4 BYTE),
MONTH CHAR(2 BYTE)
);


The solution I gave was to add a new column just for the partitioning purpose; in fact, that's how it used to be done till 10g. But 11g comes with an interesting feature: you can now create a virtual column. I have used the same table structure to explain how a virtual column works in 11g. Basically, a virtual column is nothing but a derived column; you do not insert data into it, rather it derives its value from the values entered in some other column or combination of columns. It even gives you the capability to write a small piece of code to generate the value.

Create the table with a virtual column to partition the data into 4 partitions Q1, Q2, Q3 and Q4.

CREATE TABLE SALES
(
YEARS VARCHAR2(4),
MONTH VARCHAR2(2),
PART_COL
VARCHAR2(6)
generated always as
(
case
when MONTH in ('01','02','03')
then 'Q1'
when MONTH in ('04','05','06')
then 'Q2'
when MONTH in ('07','08','09')
then 'Q3'
when MONTH in ('10','11','12')
then 'Q4'
end
) virtual
)
partition by list (PART_COL)
(
partition p_q1 values ('Q1'),
partition p_q2 values ('Q2'),
partition p_q3 values ('Q3'),
partition p_q4 values ('Q4')
);
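The CASE expression in the PART_COL definition above is just a month-to-quarter mapping. As a quick sanity check, here is the same derivation sketched in Python (the function is illustrative, not part of the Oracle example):

```python
def month_to_quarter(month: str) -> str:
    """Map a two-digit month string ('01'..'12') to its calendar quarter,
    mirroring the CASE expression in the virtual column definition."""
    quarters = {
        ('01', '02', '03'): 'Q1',
        ('04', '05', '06'): 'Q2',
        ('07', '08', '09'): 'Q3',
        ('10', '11', '12'): 'Q4',
    }
    for months, quarter in quarters.items():
        if month in months:
            return quarter
    raise ValueError(f"invalid month: {month!r}")

print(month_to_quarter('05'))
# Q2
```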

Insert data into the table

insert into sales (years,month) values ('2007','01');
insert into sales (years,month) values ('2007','01');
insert into sales (years,month) values ('2007','02');
insert into sales (years,month) values ('2007','02');
insert into sales (years,month) values ('2007','12');
insert into sales (years,month) values ('2007','09');

insert into sales (years,month) values ('2007','05');

insert into sales (years,month) values ('2007','07');

insert into sales (years,month) values ('2007','11');


SIPRAS@11glab> commit;

Now select from the table and see how the data has gone into different partitions

SIPRAS@11glab> select * from sales;

YEAR MO PA
---- -- --
2007 01 Q1
2007 02 Q1
2007 02 Q1
2007 01 Q1
2007 05 Q2
2007 09 Q3
2007 07 Q3
2007 12 Q4
2007 11 Q4

9 rows selected.

SIPRAS@11glab> select * from sales partition (p_q1);

YEAR MO PA
---- -- --
2007 01 Q1
2007 02 Q1
2007 02 Q1
2007 01 Q1

SIPRAS@11glab> select * from sales partition (p_q2);

YEAR MO PA
---- -- --
2007 05 Q2

SIPRAS@11glab> select * from sales partition (p_q3);

YEAR MO PA
---- -- --
2007 09 Q3
2007 07 Q3

SIPRAS@11glab> select * from sales partition (p_q4);

YEAR MO PA
---- -- --
2007 12 Q4
2007 11 Q4


In my opinion, the virtual column is a really powerful addition. In situations like this I need not force the development team to insert data into a new column used for partitioning, nor write a trigger for it. Simply creating a virtual column and partitioning on it does wonders. Moreover, its use is not limited to partitioning; it works anywhere we use a derived column that gets its value through a trigger, a stored procedure, or a default value (only to be updated by a piece of code later).

Reference :- Oracle Database VLDB and Partitioning Guide 11g Release 1 (11.1)

Wednesday, March 26, 2008

11g New Features - Case-sensitive passwords

Probably a long-overdue feature... though one could implement something similar using a password verify function in earlier releases, this was necessitated by compliance with industry-wide data security standards. Starting with 11g, case-sensitive passwords are enforced automatically.

Here is how to implement case-sensitive passwords feature :-

SIPRAS@11glab> create user TESTUSR identified by TESTUSR;

User created.

SIPRAS@11glab> grant create session to TESTUSR;

Grant succeeded.

SIPRAS@11glab> conn testusr/testusr@11glab
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
@> conn testusr/TESTUSR@11glab
Connected.
TESTUSR@11glab>


See the difference - since the user was created with an upper-case password, a lower-case password was not accepted while connecting as "TESTUSR". Had it been 10g, you would have easily connected. So now "TESTUSR", "testusr" and "TestUsr" are different passwords.

However, Oracle has also provided an initialization parameter to disable case-sensitive passwords i.e. going back to old way of 10g and prior versions.

SIPRAS@11glab> show parameter SEC_CASE_SENSITIVE_LOGON

NAME TYPE VALUE
------------------------------------ ----------- ---------
sec_case_sensitive_logon boolean TRUE

SIPRAS@11glab> ALTER SYSTEM set SEC_CASE_SENSITIVE_LOGON=FALSE scope=both;

System altered.

SIPRAS@11glab> show parameter SEC_CASE_SENSITIVE_LOGON

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon boolean FALSE

And now see the difference...

SIPRAS@11glab> conn testusr/testusr@11glab
Connected.

TESTUSR@11glab> conn testusr/TESTUSR@11glab
Connected.
TESTUSR@11glab>


So it connects irrespective of case. A new column, "PASSWORD_VERSIONS", has been added to the "DBA_USERS" view to indicate the database version in which the password was created or changed.

SIPRAS@11glab> select username,PASSWORD_VERSIONS from dba_users;

USERNAME PASSWORD
------------------------------ --------
.....
SCOTT 10G 11G
TESTUSR 10G 11G


However, I have not been able to find an answer as to why a user created in 11g has both "10G" and "11G" in the PASSWORD_VERSIONS column. According to the documentation, a database migrated from 10g would have both "10G" and "11G" in it, which is not true in my case...

One can also enforce case-sensitive passwords for SYSDBA users: use the "ignorecase" argument while creating the password file with the ORAPWD utility. The default value of "ignorecase" is "n", which keeps the password file case-sensitive; set it to "y" only if you want case to be ignored.

e.g. $orapwd file=orapw entries=5 ignorecase=y

So if you plan to upgrade to 11g, make sure you change passwords to adhere to case-sensitivity, and ensure that you fix any scripts that use inconsistent password cases too.

Reference : Oracle® Database Security Guide 11g Release 1 (11.1) Part Number B28531-04

Thursday, March 20, 2008

11g New Features - "Read only Tables"

Did you read that right? It's read-only TABLES... 11g has introduced read-only tables. It's now possible to make a table read-only to its owner as well. Earlier we used to grant "SELECT" on a table to other users, or create a view, to make it read-only for others, BUT that applied only to other users and not the owner. See the example below to switch a table from read-write to read-only and vice versa...

SCOTT@11glab> create table deptnew as select * from dept;

Table created.

SCOTT@11glab> select * from deptnew;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

SCOTT@11glab> insert into deptnew values (50,'New Dept','New Location');

1 row created.

SCOTT@11glab> commit;

Commit complete.


Make it READ ONLY

SCOTT@11glab> alter table deptnew read only;

Table altered.

SCOTT@11glab> insert into deptnew values (60,'New Dep1','New Location1');
insert into deptnew values (60,'New Dep1','New Location1')
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "SCOTT"."DEPTNEW"

How do I find out if a table is read-only or read-write? Check the new READ_ONLY column in "USER_TABLES" :-

SCOTT@11glab> select table_name,read_only from user_tables where table_name='DEPTNEW';

TABLE_NAME REA
------------------------------ ---
DEPTNEW YES

Make it READ WRITE again to insert data...

SCOTT@11glab> alter table deptnew read write;

Table altered.

SCOTT@11glab> insert into deptnew values (60,'New Dep1','New Location1');

1 row created.

SCOTT@11glab> commit;

Commit complete.

SCOTT@11glab> select * from deptnew;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
50 New Dept New Location
60 New Dep1 New Location1

6 rows selected.


Very useful indeed...

Monday, March 17, 2008

11g - My first look

It's been a while since I installed 11g, but I had hardly got a chance to explore its new features. It was only last week that I started exploring it. There are quite a few really interesting and useful new features; I will start posting on some of the ones I come across.

Here is a list of some of the interesting new features that I have bumped into so far:

1. ADR - a big change in the way the diagnostic logs (alert.log, trace files) are now stored.
2. Database Replay
3. Case-sensitive passwords - Long overdue
4. Data-masking
5. ACL for calling packages
6. Generated columns or Virtual columns
7. Read-only tables - did you read that right? "Read-only tables", not "tablespace"
8. Virtual column partitioning - really interesting
9. Get advice on how good your table design is
10. SYSASM role
11. Online patching and patching based on features you use

This is not the final list... there are many more that I am yet to explore. Keep a tab on my blog as I try to bring in the interesting ones...

Cheers!

Thursday, March 13, 2008

dbfsize - did you know about it?

...I didn't, till I bumped into it today while trying to solve an ORA-27047 issue. I got this error when I was trying to restore a database received from a client for a project; what I had was a cold backup + a controlfile trace.

CREATE CONTROLFILE SET DATABASE "ND01" RESETLOGS NOARCHIVELOG
*
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01565: error in identifying file
'/oracle/oradata/ND01/ND01_tools_01.dbf'
ORA-27047: unable to read the header block of file
Additional information: 2

I bumped into the dbfsize command while doing some research. One can use the dbv (DBVERIFY) utility, but it does not report corruption in a datafile's OS block header; that's when you can use dbfsize. It's available on UNIX only. Here is an example of how to use it and what it reports :-

$dbfsize ND01_tools_01.dbf
ND01_tools_01.dbf: Header block magic number is bad

Whereas, the dbv would show the following output :-

DBVERIFY - Verification complete

Total Pages Examined : 25600
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index): 0
Total Pages Processed (Other): 0
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 0
Total Pages Marked Corrupt : 25600
Total Pages Influx : 0
Highest block SCN : 0 (0.0)

Learning never stops...

Wednesday, March 12, 2008

Orastack

ORASTACK is another interesting thing that I came across recently. Till now it was unknown to me, hence I thought it would make a useful post.

ORASTACK is an Oracle-supplied utility specific to the Windows operating system. It allows us to modify the default stack size of a thread/session created in an Oracle executable. It should be run against a specific Oracle binary such as "sqlplus.exe", "tnslsnr.exe" etc. It alters the part of the binary header that defines the default stack size used by the CreateThread API.

By reducing the stack of every session created in the Oracle executable, it is possible to serve more users. In a system with 1000 users, reducing the stack from 1 MB to 500 KB would release about 500 MB of address space for other allocations or more users. However, if it is set too low, sessions might fail with an ORA-3113 error.
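The arithmetic behind that estimate is simple per-session multiplication; here is a quick sketch (function name and numbers are illustrative, using the figures from the example above):

```python
def stack_savings_mb(sessions: int, old_stack_kb: int, new_stack_kb: int) -> float:
    """Address space (in MB) released by shrinking each session's stack."""
    return sessions * (old_stack_kb - new_stack_kb) / 1024.0

# 1000 sessions, stack reduced from 1 MB (1024 KB) to 500 KB:
# roughly the 500 MB mentioned above.
print(round(stack_savings_mb(1000, 1024, 500)))
# 512
```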

It's useful when you get frequent ORA-04030 errors, or TNS-12518/TNS-12500 together with ORA-04030, in a Windows environment.

The usage is -
orastack executable_name new_stack_size_in_bytes

e.g.
'orastack oracle.exe 500000'
'orastack tnslsnr.exe 500000'

Remember to shut down the database when changing "oracle.exe"; for others, ensure that no instance of the process you run ORASTACK against is running.

Finally a word of caution: - Always consult Oracle support before doing such things for your production environment.

Wednesday, March 5, 2008

Using "LOCAL" environment variable

You can skip supplying a connect identifier by setting this environment variable on Windows. Set it to the remote database connect string, and SQL*Net will use it as the connect string whenever the user does not supply one. This environment variable is for Windows only; its equivalent on UNIX is the TWO_TASK variable.

C:\>set LOCAL=ORA10G

C:\>sqlplus scott/tiger

SCOTT@ORA10G> select global_name from global_name;

GLOBAL_NAME
------------------------------------------------------------

ORA10G.REGRESS.RDBMS.DEV.US.ORACLE.COM

See the impact if it's set to a wrong value :-

C:\>set LOCAL=ORA10

C:\>sqlplus scott/tiger

ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified

This variable has been around for a long time but is not mentioned in many places, so I thought it might make a useful post on my blog. Feel free to disagree ;)

Tuesday, March 4, 2008

TRACEFILE_IDENTIFIER - Another useful parameter

Many a time you need to enable tracing at the session level. And when you do, you have to make a bit of an effort to dig out your session's trace file. Why? Because the trace file is created in the user_dump_dest directory, where there are many other trace files, all with a similar naming convention: "SID_ora_nnnn.trc". However, with the help of the parameter "TRACEFILE_IDENTIFIER", you can easily distinguish your trace file. Very useful, especially when you want to use a trace analyzer etc.

Here is how :-

1. Set the tracefile identifier as you want, this will be appended to trace file name.
alter session set tracefile_identifier='sipras_sql_trace';

2. Enable your session level tracing
alter session set sql_trace=true;
alter session set events '10046 trace name context forever, level 12' ;

3. Execute your queries/statements

4. Turn-off tracing
alter session set events '10046 trace name context off';
alter session set sql_trace=false;

5. Check the user_dump_dest folder and you will find a file named "SID_ora_nnnn_sipras_sql_trace.trc".

See, now it's so easy to identify. Having said that, you can still find the trace file name without the TRACEFILE_IDENTIFIER parameter using the following SQL, but when Oracle has provided an easier method, why not use it?

-- sql to find out trace file name for your session.

select c.value || '/' || d.instance_name ||'_ora_' || a.spid || '.trc' trace
from v$process a, v$session b, v$parameter c, v$instance d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
/
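For completeness, the naming convention from step 5 can be sketched as a tiny helper (the helper and sample values are illustrative; the convention is instance name, "_ora_", the server process ID, and the optional identifier):

```python
def trace_file_name(instance: str, spid: str, identifier: str = "") -> str:
    """Build the expected session trace file name: SID_ora_SPID[_IDENTIFIER].trc."""
    suffix = f"_{identifier}" if identifier else ""
    return f"{instance}_ora_{spid}{suffix}.trc"

print(trace_file_name("ORA10G", "4321", "sipras_sql_trace"))
# ORA10G_ora_4321_sipras_sql_trace.trc
```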

Wednesday, February 27, 2008

FIXED_DATE

There are so many little-known parameters in Oracle that sometimes you are taken by surprise. Today I found out about the FIXED_DATE parameter; I did not know about it till now. Did you? Maybe...

This parameter is used to set a constant date for SYSDATE. Once set, SYSDATE always returns that constant date. However, the problem is that it is not modifiable at the session level, which means it may not be all that useful.

How to set FIXED_DATE :-

ALTER SYSTEM SET FIXED_DATE='2008-02-02-00:00:00';

SQL> select sysdate from dual;

SYSDATE
---------
02-FEB-08

Note: The format can be as shown above or Oracle's default date format.

How to unset :-

ALTER SYSTEM SET FIXED_DATE=NONE;

SQL> select sysdate from dual;

SYSDATE
---------
27-FEB-08

BUT setting this parameter does not affect SYSTIMESTAMP; it continues to show the current timestamp.

SQL> ALTER SYSTEM SET FIXED_DATE='2008-02-02-00:00:00';

System altered.

SQL> select sysdate from dual;

SYSDATE
---------
02-FEB-08

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
27-FEB-08 09.01.07.669041 AM +05:30

Since this is a very old parameter (I think it was introduced in Oracle 8), my guess is that SYSTIMESTAMP has been left out because it's relatively new.

Monday, February 25, 2008

Using FGA to view/retrieve bind variables used in queries

I was responding to a question on Oracle Forums on retrieving bind variables and thought I should post this on the blog as well.

Among many other things, fine-grained auditing can be used to view/retrieve the values supplied to bind variables in a query. I found it useful when users complain that queries take longer under certain conditions; normally you would rely on application developers to provide the values supplied to the bind variables.

I have demonstrated an example below to retrieve bind variable values using FGA :-

1. First apply an audit policy on the table(s) involved in the SQL; in this example it's the "DEPT" table.

begin
dbms_FGA.add_policy (
object_schema => 'SCOTT',
object_name => 'DEPT',
policy_name => 'DEPT_Aud',
handler_schema => NULL,
handler_module => NULL,
enable => TRUE,
statement_types => 'UPDATE, DELETE, INSERT, SELECT' );
end;
/

2. Run your SQL with bind variables e.g. in this example it's an INSERT into DEPT table.

var a number
var b varchar2(20)
var c varchar2(20)

begin
:a:=90;
:b:='NewDept';
:c:='NewLoc';
end;
/

INSERT INTO DEPT VALUES(:a,:b,:c);

3. Query DBA_FGA_AUDIT_TRAIL view to get the SQL and bind variables

select object_name, sql_text, sql_bind from dba_fga_audit_trail;

OBJECT_NAME
--------------------------------------------------------------------------------
SQL_TEXT
--------------------------------------------------------------------------------
SQL_BIND
--------------------------------------------------------------------------------

DEPT
INSERT INTO DEPT VALUES(:a,:b,:c)
#1(2):90 #2(7):NewDept #3(6):NewLoc
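The SQL_BIND column packs each bind as #position(length):value. Assuming that format (inferred from the output above), a small Python sketch can split it back into individual values - using the declared length so values containing spaces survive:

```python
import re

def parse_sql_bind(sql_bind: str) -> dict:
    """Parse a DBA_FGA_AUDIT_TRAIL.SQL_BIND string of the form
    '#pos(len):value' into {position: value}, reading exactly `len`
    characters for each value."""
    binds = {}
    pattern = re.compile(r"#(\d+)\((\d+)\):")
    i = 0
    while i < len(sql_bind):
        m = pattern.match(sql_bind, i)
        if not m:
            i += 1  # skip the separator between binds
            continue
        pos, length = int(m.group(1)), int(m.group(2))
        start = m.end()
        binds[pos] = sql_bind[start:start + length]
        i = start + length
    return binds

print(parse_sql_bind("#1(2):90 #2(7):NewDept #3(6):NewLoc"))
# {1: '90', 2: 'NewDept', 3: 'NewLoc'}
```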

So, happy auditing BUT do remember to turn off the auditing when your work is over.

begin
DBMS_FGA.DROP_POLICY (
object_schema => 'SCOTT',
object_name => 'DEPT',
policy_name => 'DEPT_Aud');
end;
/

Friday, February 22, 2008

Compare table structure in oracle

I found an interesting script that I used some time back to compare the structures of two tables. I thought I should preserve it and blog it for the benefit of all.

--- SCRIPT START ---

Rem script name - compare_tables.sql
Rem Table structure comparison script
Rem Can compare tables across schemas provided the user has privileges
Rem Provide table name in the format SCHEMA_NAME.TABLE_NAME

set verify off
set heading off
set feedback off
set line 100

column owner format a20
column column_name format a20
column data_type format a20


accept table1 prompt 'Table 1: '
accept table2 prompt 'Table 2: '

prompt
prompt columns having same name but difference in datatype or length:
prompt -------------------------------------------------------------------------

column data_precision fold_after

select
a.column_name, a.data_type, a.data_length, a.data_scale,a.data_precision,
b.column_name, b.data_type, b.data_length, b.data_scale,b.data_precision
from
all_tab_columns a, all_tab_columns b
where
a.table_name = upper(substr('&table1',instr('&table1','.')+1)) and
a.owner = upper(substr('&table1',1,instr('&table1','.')-1)) and
b.table_name = upper(substr('&table2',instr('&table2','.')+1)) and
b.owner = upper(substr('&table2',1,instr('&table2','.')-1)) and
a.column_name = b.column_name and
(
a.data_type <> b.data_type or
a.data_length <> b.data_length or
nvl(a.data_scale,-1) <> nvl(b.data_scale,-1) or
nvl(a.data_precision,-1) <> nvl(b.data_precision,-1)
);

prompt columns present in &table1 but not in &table2
prompt ----------------------------------------------

select
column_name
from
all_tab_columns
where
table_name = upper(substr('&table1',instr('&table1','.')+1))
and owner = upper(substr('&table1',1,instr('&table1','.')-1))
minus
select
column_name --, data_type, data_length, data_scale, data_precision
from
all_tab_columns
where
table_name = upper(substr('&table2',instr('&table2','.')+1))
and owner = upper(substr('&table2',1,instr('&table2','.')-1));

prompt columns present in &table2 but not in &table1
prompt ----------------------------------------------

select
column_name --, data_type, data_length, data_scale, data_precision
from
all_tab_columns
where
table_name = upper(substr('&table2',instr('&table2','.')+1))
and owner = upper(substr('&table2',1,instr('&table2','.')-1))
minus
select
column_name
from
all_tab_columns
where
table_name = upper(substr('&table1',instr('&table1','.')+1))
and owner = upper(substr('&table1',1,instr('&table1','.')-1));

--- SCRIPT END --

Example using the script

CREATE TABLE DEPT_1
(
DEPTNO NUMBER(2),
DNAME VARCHAR2(20),
LOC VARCHAR2(20)
);

create table dept_2 as select deptno,dname from dept;

alter table dept_2 modify dname VARCHAR2(40);

@compare_tables.sql

Table 1: SIPRAS.DEPT_1
Table 2: SIPRAS.DEPT_2

columns having same name but difference in datatype or length:
------------------------------------------------------------------------

SIPRAS DNAME VARCHAR2 20
SIPRAS DNAME VARCHAR2 40

columns present in SIPRAS.DEPT_1 but not in SIPRAS.DEPT_2
-------------------------------------------------------------------------

LOC
columns present in SIPRAS.DEPT_2 but not in SIPRAS.DEPT_1
--------------------------------------------------------------------------

Monday, February 18, 2008

Performance tuning Seminar by Craig Shallahamer

I attended a 2 day seminar on "Reactive Performance Management" organized by Oracle University last week and the speaker was well-known Oracle guru Craig Shallahamer of OraPub.

Craig covered the latching mechanism and SGA (buffer cache and shared pool) management in detail. Though he did not go into great depth, the coverage was enough to refresh the memory.

Craig presented his methodology for tracking performance problems and identifying potential tuning areas. He presented a model wherein he tries to identify bottlenecks at three different layers, namely Oracle, the operating system, and the application. He focused on using wait events to diagnose and come up with solutions for each of the three layers. He emphasized the need to present the findings in such a manner that management (non-technical people) can understand and appreciate your work.

I had very high expectations from the seminar, but unfortunately they were not met; at times, questions remained unanswered. However, Craig was very honest in his responses and did not try to mislead. On the whole it was a refresher for me. All of us have an approach, but the seminar enabled us to look at it more holistically. I especially liked his teaching method and his honesty when he did not have an answer. Personally, I felt his easy-to-understand explanation of latching was very good.