Wednesday, November 10, 2010

Oracle Application - Tips

1) Setting Environment:-
You can place the following script in a login.sql file, which will be
executed every time you log in to the database, or run these commands
one by one at the SQL prompt.

define _editor=vi
set serveroutput on size 1000000
set trimspool on
set long 5000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
select lower(user) || '@' ||
decode(global_name, 'ORACLE8.WORLD', '8.0', 'ORA8I.WORLD',
'8i', global_name ) global_name from global_name;
set sqlprompt '&gname> '
set termout on

2) Some hints on improving performance in Oracle applications:-

a) Use bind variables :-
A bind variable is a placeholder in a query. For example, to retrieve the record for employee 123, I can query:
select * from emp where empno = 123;

Alternatively, I can query:

select * from emp where empno = :empno;

In a typical system, you would query up employee 123 maybe once and then never again. Later, you would query up employee 456, then 789, and so on. If you use literals (constants) in the query then each and every query is a brand new query, never before seen by the database. It will have to be parsed, qualified (names resolved), security checked, optimized, and so on – in short, each and every
unique statement you execute will have to be compiled every time it is executed.

The second query uses a bind variable, :empno, the value of which is supplied at query execution time. This query is compiled once and then the query plan is stored in a shared pool (the library cache), from which it can be retrieved and reused. The difference between the two in terms of performance and scalability is huge, dramatic even.

From the above description it should be fairly obvious that parsing a statement with hard-coded literals (called a hard parse) will take longer and consume many more resources than reusing an already parsed query plan (called a soft parse). What may not be so obvious is the extent to which the former will reduce the number of users your system can support. Obviously, this is due in part to the increased resource consumption, but an even larger factor arises due to the latching mechanisms for the library cache. When you hard parse a query, the database will spend more time holding certain low-level serialization devices called latches. These latches protect the data structures in Oracle's shared memory from concurrent modification by two sessions (else Oracle would end up with corrupt data structures) and from someone reading a data structure while it is being modified. The longer and more frequently we have to latch these data structures, the longer the queue to get these latches will become. Just as with long transactions running under MTS, we will start to monopolize scarce resources. Your machine may appear to be under-utilized at times – and yet everything in the database is running very slowly. The likelihood is that someone is holding one of these serialization mechanisms and a line is forming – you are not able to run at top speed. It only takes one ill-behaved application in your database to dramatically affect the performance of every other application. A single, small application that does not use bind variables will cause the relevant SQL of other well-tuned applications to get discarded from the shared pool over time. You only need one bad apple to spoil the entire barrel.

If you use bind variables, then everyone who submits the same exact query that references the same object will use the compiled plan from the pool. You will compile your subroutine once and use it over and over again. This is very efficient and is the way the database intends you to work. Not only will you use fewer resources (a soft parse is much less resource intensive), but also you will hold latches for less time and need them less frequently. This increases your performance and greatly increases your scalability.


sql> alter system flush shared_pool;
sql> set timing on
sql> declare
        type rc is ref cursor;
        l_rc rc;
        l_dummy all_objects.object_name%type;
        l_start number default dbms_utility.get_time;
     begin
        for i in 1 .. 1000
        loop
            open l_rc for
                'select object_name
                   from all_objects
                  where object_id = ' || i;
            fetch l_rc into l_dummy;
            close l_rc;
        end loop;
        dbms_output.put_line
            ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
              ' seconds...' );
     end;
     /
14.86 seconds...
PL/SQL procedure successfully completed.

The above code uses dynamic SQL to query out a single row from the ALL_OBJECTS table. It generates 1000 unique queries with the values 1, 2, 3, ... and so on ‘hard-coded’ into the WHERE clause. On my 350MHz Pentium laptop, this took about 15 seconds (the speed may vary on different machines). Next, we do it using bind variables:
sql> declare
        type rc is ref cursor;
        l_rc rc;
        l_dummy all_objects.object_name%type;
        l_start number default dbms_utility.get_time;
     begin
        for i in 1 .. 1000
        loop
            open l_rc for
                'select object_name
                   from all_objects
                  where object_id = :x'
            using i;
            fetch l_rc into l_dummy;
            close l_rc;
        end loop;
        dbms_output.put_line
            ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
              ' seconds...' );
     end;
     /
1.27 seconds...
PL/SQL procedure successfully completed.
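The same contrast can be sketched outside Oracle. The small Python program below uses the standard-library sqlite3 module as a stand-in for an Oracle client (an assumption – the table and names here are invented for illustration). The literal-text loop hands the engine 1000 distinct SQL strings, each of which must be compiled from scratch; the bound loop submits one SQL text that is compiled once and reused from the connection's statement cache.

```python
import sqlite3
import time

# Set up a throwaway table standing in for ALL_OBJECTS.
conn = sqlite3.connect(":memory:")
conn.execute("create table objs(object_id integer primary key, object_name text)")
conn.executemany("insert into objs values (?, ?)",
                 [(i, "OBJ_%d" % i) for i in range(1, 1001)])

# Hard-parse style: a brand-new SQL text for every value, so the
# engine must compile each statement anew.
t0 = time.perf_counter()
for i in range(1, 1001):
    row_literal = conn.execute(
        "select object_name from objs where object_id = %d" % i).fetchone()
literal_secs = time.perf_counter() - t0

# Bind-variable style: one SQL text, compiled once; only the bound
# value changes on each execution.
t0 = time.perf_counter()
for i in range(1, 1001):
    row_bound = conn.execute(
        "select object_name from objs where object_id = ?", (i,)).fetchone()
bound_secs = time.perf_counter() - t0

print(literal_secs, bound_secs)
```

As in the PL/SQL demonstration, only the second form lets the engine reuse a compiled statement; it also avoids the SQL injection risk that comes with building statements by string formatting.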

b) Do not run long-running transactions under MTS :- When a shared server gets a request to run an update or execute a stored procedure, that shared server is dedicated to the task until completion. No one else will use that shared server until the update completes or the stored procedure finishes execution. Thus, when using MTS your goal must be to have very short statements. MTS is designed to scale up On-Line Transaction Processing (OLTP)
systems – a system characterized by statements that execute with sub-second response times. You’ll have a single row update, insert a couple of line items, and query records by primary key. You won’t (or shouldn’t) run a batch process that takes many seconds or minutes to complete.

c) Multi Versioning Example :-

sql> create table t
     as
     select * from all_users;
Table created.
sql> variable x refcursor
sql> begin
        open :x for select * from t;
     end;
     /
PL/SQL procedure successfully completed.
sql> delete from t;
18 rows deleted.
sql> commit;
Commit complete.
sql> print x

USERNAME                          USER_ID CREATED
------------------------------ ---------- ---------
SYS 0 04-NOV-00
SYSTEM 5 04-NOV-00
DBSNMP 16 04-NOV-00
ORDSYS 25 04-NOV-00
MDSYS 27 04-NOV-00
CTXSYS 30 04-NOV-00
DEMO 57 07-FEB-01

In the same session (or maybe another session would do this), we then proceeded to delete all data from that table. We even went as far as to COMMIT work on that delete. The rows are gone – but are they? In fact, they are retrievable via the cursor. The fact is that the resultset returned to us by the OPEN command was pre-ordained at the point in time we opened it. We had touched not a single block of
data in that table during the open, but the answer was already fixed in stone. We have no way of knowing what the answer will be until we fetch the data – however the result is immutable from our cursor’s perspective. It is not that Oracle copied all of the data above to some other location when we opened the cursor; it was actually the delete command that preserved our data for us by placing it into a data area called a rollback segment.

This is what read-consistency is all about and if you do not understand how Oracle’s multi-versioning scheme works and what it implies, you will not be able to take full advantage of Oracle nor will you be able to write correct applications in Oracle (ones that will ensure data integrity).
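The cursor demonstration above can be approximated outside Oracle as well. The sketch below uses Python's standard-library sqlite3 in WAL journal mode, which also gives a reader a stable snapshot while a writer commits (an assumption: sqlite3 stands in for Oracle here, and its WAL snapshot mechanism differs from rollback segments; the file name and table are invented for illustration).

```python
import os
import sqlite3
import tempfile

# A file-backed database is required: WAL mode, which lets readers
# keep a snapshot while a writer commits, does not work on :memory:.
path = os.path.join(tempfile.mkdtemp(), "mv.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit
writer.execute("pragma journal_mode=wal")
writer.execute("create table t(username text)")
writer.executemany("insert into t values (?)",
                   [("SYS",), ("SYSTEM",), ("DEMO",)])

# The reader pins its snapshot with the first read inside an open
# transaction, much as OPEN fixed the cursor's result set above.
reader = sqlite3.connect(path, isolation_level=None)
reader.execute("begin")
before = reader.execute("select count(*) from t").fetchone()[0]

# Another session deletes everything and commits...
writer.execute("delete from t")

# ...yet the open read transaction still sees the old rows.
during = reader.execute("select count(*) from t").fetchone()[0]
reader.execute("commit")

# A fresh read after ending the transaction sees the committed delete.
after = reader.execute("select count(*) from t").fetchone()[0]
print(before, during, after)
```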

Consider a bank's accounts table. In reality it would have hundreds of thousands of rows in it, but for simplicity we're just going to consider a table with four rows (we will visit this example in more detail in Chapter 3,
Locking and Concurrency):
Row   Account Number   Account Balance
1     123              $500.00
2     234              $250.00
3     345              $400.00
4     456              $100.00

What we would like to do is to run the end-of-day report that tells us how much money is in the bank.
That is an extremely simple query:
select sum(account_balance) from accounts;
And, of course, in this example the answer is obvious: $1250. However, what happens if we read row 1, and while we’re reading rows 2 and 3, an Automated Teller Machine (ATM) generates transactions against this table, and moves $400 from account 123 to account 456? Our query counts $500 in row 4 and comes up with the answer of $1650, doesn’t it? Well, of course, this is to be avoided, as it would be an error – at no time did this sum of money exist in the account balance column. It is the way in which Oracle avoids such occurrences, and how Oracle’s methods differ from every other database, that you need to understand.

In practically every other database, if you wanted to get a 'consistent' and 'correct' answer to this query, you would either have to lock the whole table while the sum was calculated or you would have to lock the rows as you read them. This would prevent people from changing the answer as you are getting it. If you lock the table up-front, you'll get the answer that was in the database at the time the query began. If you lock the data as you read it (commonly referred to as a shared read lock, which prevents updates but not other readers from accessing the data), you'll get the answer that was in the database at the point the query finished. Both of these methods inhibit concurrency a great deal. The table lock would prevent any updates from taking place against the entire table for the duration of your query (for a table of four rows, this would only be a very short period – but for tables with hundreds of thousands of rows, this could be several minutes). The 'lock as you go' method would prevent updates on data you have read and already processed and could actually cause deadlocks between your query and other updates.

Now, I said earlier that you would not be able to take full advantage of Oracle if you did not understand the concept of multi-versioning. Here is one reason why that is true. Oracle uses multi-versioning to get the answer, as it existed at the point in time the query began, and the query will take place without locking a single thing (while our account transfer transaction updates rows 1 and 4, these rows will be locked to other writers – but not locked to other readers, such as our SELECT SUM...query). In fact, Oracle doesn’t have a ‘shared read’ lock common in other databases – it does not need it. Everything inhibiting concurrency that can be removed, has been removed.
So, how does Oracle get the correct, consistent answer ($1250) during a read without locking any data – in other words, without decreasing concurrency? The secret lies in the transactional mechanisms that Oracle uses. Whenever you modify data, Oracle creates entries in two different locations. One entry goes to the redo logs where Oracle stores enough information to redo or 'roll forward' the transaction.
For an insert this would be the row inserted. For a delete, it is a message to delete the row in file X, block Y, row slot Z. And so on. The other entry is an undo entry, written to a rollback segment. If your transaction fails and needs to be undone, Oracle will read the ‘before’ image from the rollback segment
and restore the data. In addition to using this rollback segment data to undo transactions, Oracle uses it to undo changes to blocks as it is reading them – to restore the block to the point in time your query began. This gives you the ability to read right through a lock and to get consistent, correct answers without locking any data yourself.

So, as far as our example is concerned, Oracle arrives at its answer as follows:
T1  Query: reads row 1; sum = $500 so far.
T2  Transfer: updates row 1, putting an exclusive lock on row 1 that prevents other updates. Row 1 now has $100.
T3  Query: reads row 2; sum = $750 so far.
T4  Query: reads row 3; sum = $1150 so far.
T5  Transfer: updates row 4, putting an exclusive lock on row 4 that prevents other updates (but not reads). Row 4 now has $500.
T6  Query: reads row 4 and discovers that it has been modified. It rolls the block back to make it appear as it did at time T1, and reads the value $100 from that version.
T7  Transfer: commits.
T8  Query: presents $1250 as the answer.
At time T6, Oracle is effectively ‘reading through’ the lock placed on row 4 by our transaction. This is how non-blocking reads are implemented – Oracle only looks to see if the data changed, it does not care if the data is currently locked (which implies that it has changed). It will simply retrieve the old value from the rollback segment and proceed onto the next block of data.
This is another clear demonstration of multi-versioning – there are multiple versions of the same piece of information, all at different points in time, available in the database. Oracle is able to make use of these 'snapshots' of data at different points in time to provide us with read-consistent queries and non-blocking queries.
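The consistent-read timeline is easy to simulate. The toy Python sketch below is not Oracle internals – the "undo" dictionary of before-images is an invented stand-in for a rollback segment – but it shows how reading through a lock yields the consistent $1250 rather than the inconsistent $1650.

```python
# Toy model: four accounts, a query scanning them in order, and a
# transfer that commits mid-scan. Rows changed after the query began
# are reconstructed from saved before-images, as a rollback segment
# would allow.
accounts = {123: 500, 234: 250, 345: 400, 456: 100}
undo = {}  # before-images recorded when a row is modified


def update(acct, new_balance):
    undo.setdefault(acct, accounts[acct])  # save the before-image once
    accounts[acct] = new_balance


total = 0
for i, acct in enumerate(sorted(accounts)):
    if i == 1:              # the ATM transfer fires mid-scan (T2/T5)
        update(123, 100)    # row 1: $500 -> $100
        update(456, 500)    # row 4: $100 -> $500
    # Consistent read: if the row changed after our query began, read
    # the before-image instead of the current (locked) value, as at T6.
    total += undo.get(acct, accounts[acct])

print(total)  # -> 1250, not the inconsistent 1650
```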

d) Cursor_sharing Parameter

Setting CURSOR_SHARING=FORCE implements an 'auto binder', if you will. It will silently take a query written as SELECT * FROM EMP WHERE EMPNO = 1234 and rewrite it for us as SELECT * FROM EMP WHERE EMPNO = :x. This can dramatically decrease the number of hard parses and decrease the library latch waits we discussed in the Architecture sections – but (there is always a but) it can have some side effects. You may hit an issue (a.k.a. 'bug') with regard to this feature, for example in the first release:
SQL> alter session set cursor_sharing=force;
Session altered.
SQL> select * from dual where dummy='X'and 1=0;
select * from dual where dummy='X'and 1=0
ERROR at line 1:
ORA-00933: SQL command not properly ended
SQL> alter session set cursor_sharing=exact;
Session altered.
SQL> select * from dual where dummy='X'and 1=0;
no rows selected

Basically, it is important to keep in mind that simply turning on CURSOR_SHARING=FORCE will not necessarily fix your problems. It may very well introduce new ones. CURSOR_SHARING is, in some cases, a very useful tool, but it is not a silver bullet. A well-developed application would never need it. In the long term, using bind variables where appropriate, and constants when needed, is the correct approach.

Reference : Expert Oracle by Thomas Kyte
