Monday, January 31, 2011

Adding Storage Devices

I) Adding Storage in Solaris.

To install a new hard drive on a Solaris 10 system, just follow these steps:

1. Prepare the system for a reconfiguration boot by issuing the command

server# touch /reconfigure

2. Synchronize disk data and power down the system using the commands

server# sync; sync; sync; shutdown

3. Switch off power to the system and attach the new hard disk to the external
SCSI chain, or install it internally into an appropriate disk bay.

4. Check that the SCSI device ID does not conflict with any existing SCSI devices.
If a conflict exists, change the ID using the ID selector switch on the drive.

5. Power on the system. If the OpenBoot monitor appears, use the boot command
to load the kernel:

ok boot

The next step, assuming that you have decided which partitions you want to
create on your drive using the information supplied earlier, is to run the format
program. In addition to creating slices, format also displays information about
existing disks and slices and can be used to repair a faulty disk. When format is
invoked without a command-line argument, it searches for attached disks and
presents them in a numbered list:

# format

Searching for disks...done

0. c1t0d0
1. c1t1d0
2. c1t2d0
Specify disk (enter its number):2


Alternatively, you can name the target disk directly on the command line:

# format /dev/rdsk/c1t2d0

After selecting the appropriate disk, the message
[disk formatted]
appears if the disk has previously been formatted. This is an important message, because
it is a common mistake to misidentify a target disk from the available selection of both
formatted and unformatted disks. The menu looks like this:

disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
show - translate a disk address
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return


If the disk has not been formatted, the first step is to prepare the disk to contain
slices and file systems by formatting the disk. To do so, issue the command format:

format> format
Ready to format. Formatting cannot be interrupted
and takes 46 minutes (estimated). Continue? y
Beginning format. The current time is Tue Feb 1 07:10:39 2011


Verifying media...
pass 0 - pattern = 0xc6dec6de

pass 1 - pattern = 0x6db6db6d

Total of 0 defective blocks repaired.

format> fdisk
No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table. y

format> partition

0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return

partition> print
Current partition table (original):
Total disk cylinders available: 1020 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 1019 1020.00MB (1020/0/0) 2088960
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 unassigned wm 0 0 (0/0/0) 0

partition> modify
Select partitioning base:
0. Current partition table (original)
1. All Free Hog
Choose base (enter number) [0]? 1

Part Tag Flag Cylinders Size Blocks
0 root wm 0 0 (0/0/0) 0
1 swap wu 0 0 (0/0/0) 0
2 backup wu 0 - 295 296.00MB (296/0/0) 606208
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 alternates wm 0 0 (0/0/0) 0

Do you wish to continue creating a new partition
table based on above table[yes]?
Free Hog partition[6]? 0
Enter size of partition '1' [0b, 0c, 0.00mb, 0.00gb]: 100.00mb
Enter size of partition '3' [0b, 0c, 0.00mb, 0.00gb]: 100.00mb
Enter size of partition '4' [0b, 0c, 0.00mb, 0.00gb]: 95.00mb
Warning: no space available for '5' from Free Hog partition
Warning: no space available for '6' from Free Hog partition
Warning: no space available for '7' from Free Hog partition

Part Tag Flag Cylinders Size Blocks
0 root wm 0 0 (0/0/0) 0
1 swap wu 1 - 100 100.00MB (100/0/0) 204800
2 backup wu 0 - 295 296.00MB (296/0/0) 606208
3 unassigned wm 101 - 200 100.00MB (100/0/0) 204800
4 unassigned wm 201 - 295 95.00MB (95/0/0) 194560
5 unassigned wm 0 0 (0/0/0) 0
6 usr wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 alternates wm 0 0 (0/0/0) 0

Okay to make this the current partition table[yes]?
Enter table name (remember quotes): "new_disks"

Ready to label disk, continue? y

partition> print
Current partition table (new_disks):
Total disk cylinders available: 296 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 1 - 100 100.00MB (100/0/0) 204800
2 backup wu 0 - 295 296.00MB (296/0/0) 606208
3 unassigned wm 101 - 200 100.00MB (100/0/0) 204800
4 unassigned wm 201 - 295 95.00MB (95/0/0) 194560
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 unassigned wm 0 0 (0/0/0) 0

partition> q

disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return

format> q

To view the new disk label,
use the prtvtoc command. Here is the output for the drive just partitioned:

# prtvtoc /dev/dsk/c1t2d0s1
* /dev/dsk/c1t2d0s1 partition map
* Dimensions:
* 512 bytes/sector
* 32 sectors/track
* 64 tracks/cylinder
* 2048 sectors/cylinder
* 298 cylinders
* 296 accessible cylinders
* Flags:
* 1: unmountable
* 10: read-only
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
1 0 00 2048 204800 206847
2 5 01 0 606208 606207
3 0 00 206848 204800 411647
4 0 00 411648 194560 606207
8 1 01 0 2048 2047
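The block counts in this label follow directly from the geometry in the header: 32 sectors/track times 64 tracks/cylinder gives 2048 sectors per cylinder, so a 100-cylinder slice is 204800 blocks. A quick shell check of the arithmetic:

```shell
# Geometry from the prtvtoc header: 32 sectors/track x 64 tracks/cylinder.
sectors_per_cyl=$((32 * 64))              # 2048 sectors per cylinder
blocks=$((100 * sectors_per_cyl))         # slice 1 spans 100 cylinders
mb=$((blocks * 512 / 1024 / 1024))        # 512 bytes per sector
echo "${blocks} blocks = ${mb}MB"         # prints: 204800 blocks = 100MB
```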

After saving the changes to the disk’s partition table, exit the format program and
create a new UFS file system on the target slice by using the newfs command:

# newfs /dev/rdsk/c1t2d0s1
newfs: construct a new file system /dev/rdsk/c1t2d0s1: (y/n)? y
/dev/rdsk/c1t2d0s1: 204800 sectors in 100 cylinders of 64 tracks, 32 sectors
100.0MB in 7 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 32832, 65632, 98432, 131232, 164032, 196832,

# newfs /dev/rdsk/c1t2d0s3
newfs: construct a new file system /dev/rdsk/c1t2d0s3: (y/n)? y
/dev/rdsk/c1t2d0s3: 204800 sectors in 100 cylinders of 64 tracks, 32 sectors
100.0MB in 7 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 32832, 65632, 98432, 131232, 164032, 196832,

# newfs /dev/rdsk/c1t2d0s4
newfs: construct a new file system /dev/rdsk/c1t2d0s4: (y/n)? y
/dev/rdsk/c1t2d0s4: 194560 sectors in 95 cylinders of 64 tracks, 32 sectors
95.0MB in 6 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 32832, 65632, 98432, 131232, 164032,

After the new file systems are constructed, they are ready to be mounted. First, the
mount points are created:

# mkdir /u04 /u05 /u06

followed by the appropriate mount command:

# mount /dev/dsk/c1t2d0s1 /u04
# mount /dev/dsk/c1t2d0s3 /u05
# mount /dev/dsk/c1t2d0s4 /u06

# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 23G 7.2G 15G 33% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.2G 916K 2.2G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
23G 7.2G 15G 33% /lib/
fd 0K 0K 0K 0% /dev/fd
swap 2.2G 40K 2.2G 1% /tmp
swap 2.2G 36K 2.2G 1% /var/run
/hgfs 16G 4.0M 16G 1% /hgfs
/dev/dsk/c1t2d0s1 93M 1.0M 83M 2% /u04
/dev/dsk/c1t2d0s3 93M 1.0M 83M 2% /u05
/dev/dsk/c1t2d0s4 89M 1.0M 79M 2% /u06

At this point, the disk is available to the system for the current session. However, if
you want the disk to be available after reboot, you need to create entries in the virtual
file system table, /etc/vfstab. Entries like the following mount the new slices
automatically at boot:

/dev/dsk/c1t2d0s1 /dev/rdsk/c1t2d0s1 /u04 ufs 2 yes -
/dev/dsk/c1t2d0s3 /dev/rdsk/c1t2d0s3 /u05 ufs 2 yes -
/dev/dsk/c1t2d0s4 /dev/rdsk/c1t2d0s4 /u06 ufs 2 yes -
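Each vfstab line must carry seven whitespace-separated fields (device to mount, device to fsck, mount point, FS type, fsck pass, mount-at-boot flag, mount options). A small awk sketch that checks the entries above before rebooting:

```shell
# Each vfstab entry should have exactly 7 fields.
awk 'NF != 7 { print "line " NR ": malformed"; bad = 1 }
     END     { if (!bad) print "vfstab entries OK" }' <<'EOF'
/dev/dsk/c1t2d0s1 /dev/rdsk/c1t2d0s1 /u04 ufs 2 yes -
/dev/dsk/c1t2d0s3 /dev/rdsk/c1t2d0s3 /u05 ufs 2 yes -
/dev/dsk/c1t2d0s4 /dev/rdsk/c1t2d0s4 /u06 ufs 2 yes -
EOF
```

This prints "vfstab entries OK" when every line parses.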

II) Adding Storage in Linux.

1) Shut down the Linux server.
2) Attach the storage disk to the Linux server and boot the server.
3) Identify the newly added disk as shown below and partition it.

# fdisk -l

# fdisk /dev/sdb

Once the device is partitioned, make a new file system on it:

# mkfs.ext3 /dev/sdb1

After creating the file system, create a directory and mount the
device on it:

# mkdir /u02

# mount /dev/sdb1 /u02

Now the device is mounted at /u02. Add the entry below to the
/etc/fstab file so that the partition is mounted after a reboot:

# vi /etc/fstab

/dev/sdb1 /u02 ext3 defaults 1 2
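An fstab line uses six fields (device, mount point, FS type, mount options, dump flag, fsck pass), in a different order from the Solaris vfstab. A quick awk check of the entry above:

```shell
# An fstab entry should have exactly 6 fields.
echo "/dev/sdb1 /u02 ext3 defaults 1 2" |
awk 'NF == 6 { print $2, "(" $3 ") looks OK" }'
# prints: /u02 (ext3) looks OK
```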


Sunday, January 30, 2011

Installing Oracle 10g in Sun Solaris 10 X86

Oracle Database on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2) [ID 169706.1]

Installing Oracle 10g in Sun Solaris 10 X86

1. Check the Ram Size.

# /usr/sbin/prtconf | grep "Memory size"

2. Check Swap size.

# /usr/sbin/swap -s

3. Check that /tmp has more than 400 MB free.

# df -k /tmp
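The check can be scripted. This sketch assumes the df -k data line is not wrapped, so the fourth field of the second output line is the available space in KB (400 MB = 409600 KB):

```shell
# Sketch: warn when /tmp has less than 400 MB (409600 KB) free.
# Assumes the df -k data line is not wrapped, so field 4 of line 2
# is the available KB.
avail_kb=$(df -k /tmp | awk 'NR == 2 { print $4 }')
if [ "$avail_kb" -ge 409600 ]; then
    echo "OK: /tmp has ${avail_kb} KB free"
else
    echo "WARNING: /tmp is below 400 MB"
fi
```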

4. Check the Solaris Version.

# uname -r

5. Necessary Packages check.


# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWsprox SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt
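To report exactly which of these are absent, pkginfo can be run once per package; pkginfo -q prints nothing and only sets the exit status. A sketch:

```shell
# pkginfo -q is silent; a non-zero exit status means the package is absent.
for pkg in SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot \
           SUNWsprox SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt; do
    pkginfo -q "$pkg" || echo "missing: $pkg"
done
```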

If any of the packages are missing, install them as follows:

1) First put CDROM/DVD into drive

2) For CD-ROM: after a few seconds, change directory to /cdrom/cdrom0 or /cdrom/VOL_NAME

# cd /cdrom/cdrom0

3) For DVD-ROM: after a few seconds, change directory to /dvd/VOL_NAME

If this does not work, make sure the volmgt daemon is running:

# /etc/init.d/volmgt start

Scan for devices (CD or floppy) using:


Then proceed with the CD-ROM device:


# pkgadd -d /cdrom/sol_10_106_x86/Solaris_10/Product SUNWi1cs SUNWi15cs

Also verify host name resolution:

# cat /etc/nsswitch.conf | grep hosts
# hostname

6. Set the Kernel Parameters.

Set the below parameters in /etc/system file.

set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmni=100
set shmsys:shminfo_shmmni=100

Add new project named as oracle

# projadd oracle

Append the below in /etc/user_attr file


Alter the kernel parameter "max-shm-memory" to set the maximum size of shared memory for the project.

# prctl -n project.max-shm-memory -i project oracle

This produces output like the following.

project: 100: oracle
privileged 254MB - deny -
system 16.0EB max deny -

The current value for this project is set to 254MB; change this to the desired value, e.g. 4GB.

Execute the below to make the change dynamically.
# prctl -n project.max-shm-memory -v 4gb -r -i project oracle

Execute the below to make the change persist even after a reboot; this
will modify the /etc/project file, as the output below shows.

# projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle

# cat /etc/project

Now check whether the change has taken place.

# prctl -n project.max-shm-memory -i project oracle
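Note that prctl reports the privileged value in bytes; the 4gb set above corresponds to:

```shell
# 4gb expressed in bytes, as stored in project.max-shm-memory:
echo $((4 * 1024 * 1024 * 1024))
# prints: 4294967296
```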

Create User groups and users.

# groupadd oinstall
# groupadd dba
# useradd -d /home/oracle -g oinstall -G dba -s /bin/ksh oracle
# passwd -r files oracle
# mkdir /home/oracle

Automountd manages /home by default. If you want to control this directory
manually, you need to disable automountd or comment out the relevant lines in
/etc/auto_home and /etc/auto_master. After commenting them out, you can
reboot the system to be sure that automountd has no control over /home.

# chown oracle:dba /home/oracle
# mkdir -p /u01/app/oracle/product/10.2.0/db_1
# mkdir -p /u01/app/oracle/oradata
# chown -R oracle:dba /u01

Set the Oracle user's environment variables.

# su - oracle

$ vi .profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORACLE_SID=firstdb
export PATH=$ORACLE_HOME/bin:/usr/openwin/bin:/usr/x11R6/bin:$PATH
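After sourcing the profile (. ~/.profile), the derived variables should expand as in this sketch, which repeats the two path exports:

```shell
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
echo "ORACLE_HOME=$ORACLE_HOME"
# prints: ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
```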

Install Oracle software as oracle user.

GUI Mode - Remote Installation.

We can use the Cygwin tool to forward the GUI session over X11 to the
Windows desktop; Cygwin/X can be downloaded for free.

You can also use MobaXterm or any other X terminal software.

For X11 forwarding we need to select the packages below while
downloading/installing Cygwin:


Step 1:

Once the Cygwin tool is installed, open Cygwin and start the X server by
executing

$ startxwin

The above command opens an X terminal window. In it, allow incoming
connections from the server whose display we are going to forward to the
Windows desktop:

$ xhost +

Now open a PuTTY session.

In the category tree, select SSH and check the
"Enable X11 forwarding" box.

Enter the host details and log in to the box.
Now set the DISPLAY variable to the display where the GUI sessions are
to be forwarded:

$ export DISPLAY=

$ cd /software/database
$ ./runInstaller

If installing on the local host in GUI mode, set:

$ export DISPLAY=:0.0

Silent Mode Installation - Remote Installation.

Log in as the oracle user:

$ su - oracle

Unzip the Oracle 10g database file:

$ cd /software
$ unzip

Once the unzip completes, we see a directory called database inside the
same directory, and inside database a directory called response:
$ cd /software/database/response

Here we can find all the response files; copy enterprise.rsp to
any location:

$ cp enterprise.rsp /home/oracle

$ vi /home/oracle/enterprise.rsp

Now modify the response file. We need to specify values for the
parameters it requires, e.g. Oracle base path, Oracle home, database name,
Oracle owner group, file system, and so on.

Read the specification carefully and enter the input.

Once the response file is ready, we can start the installation as below:

$ cd /software/database

$ ./runInstaller -silent -responseFile /home/oracle/enterprise.rsp

At last, run the scripts indicated at the end of the installation as the root user.


Automate database Startup

Edit the file "/var/opt/oracle/oratab" so that the lines for the databases to be
auto-started end with 'Y'.
Create the file "/etc/init.d/dbora" and enter a script like the one below


#!/bin/sh
# Assumed values; adjust to your installation.
ORA_HOME=/u01/app/oracle/product/10.2.0/db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]; then
    echo "Oracle startup: cannot start"; exit 1
fi
case "$1" in
'start') su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart" ;;
'stop')  su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut" ;;
esac

chmod 777 /etc/init.d/dbora

To integrate the dbora file into the standard Solaris startup and shutdown process:
#ln -s /etc/init.d/dbora /etc/rc0.d/K01dbora
#ln -s /etc/init.d/dbora /etc/rc2.d/S99dbora

Edit the files "dbstart" and "dbshut", find the line

and change to =


Sunday, January 23, 2011

Compress EXP/IMP Using Pipes

Exporting using a pipe:

$ mknod exp_p.pipe p
$ gzip -c < exp_p.pipe > /home/oracle/full_db.gz &
$ exp "'sys/admin@orcl as sysdba'" file=exp_p.pipe full=y log=full.log

a) Create a pipe file using the mknod utility.

b) gzip compresses the data received through the pipe and creates a
gzip file, full_db.gz; it is started to run in the background.

c) When exporting, in the file parameter give the name of the pipe
file created with the mknod utility.

Importing using a pipe:

$ mknod imp_p.pipe p
$ gunzip -c < full_db.gz > /home/oracle/imp_p.pipe &
$ imp "'sys/admin@orcl as sysdba'" file=imp_p.pipe tables=emp fromuser=manzoor touser=manzoor

a) Create a pipe file using the mknod utility.

b) gunzip uncompresses the compressed file and transfers the data
to the imp_p.pipe file.

c) When importing, in the file parameter give the name of the pipe
that receives the data from gunzip.


# Script to take Full export using pipe.


rm -f newer_than_2.pipe wc.log newer_than_2.log
mknod newer_than_2.pipe p
# dmpf is the compressed dump file; the parfile is assumed to specify
# file=newer_than_2.pipe
dmpf=newer_than_2.dmp.gz
gzip -c < newer_than_2.pipe > $dmpf &
exp parfile=newer_than_2.par
if [ $? -eq 0 ] ; then
    echo "***"
    echo "*** SUCCESS exp parfile=newer_than_2.par"
    echo "***"
else
    echo "*** FAIL exp parfile=newer_than_2.par exit"
    exit 1
fi

# confirm export file exists
echo "***"
echo "*** export file:"
ls -l $dmpf

Note: If you have enough space on the target server, you can also directly uncompress/gunzip the dump file and pass it to the import tool as below.

$ gunzip full_db.gz

$ imp "'sys/admin@orcl as sysdba'" file=full_db tables=emp fromuser=manzoor touser=manzoor
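Whichever route is taken, it is worth verifying the compressed dump before starting a long import; gzip -t tests the archive without extracting anything. A sketch using the dump name from above:

```shell
# Verify the compressed dump is intact before importing.
gzip -t /home/oracle/full_db.gz && echo "dump OK" || echo "dump corrupt"
```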

Wednesday, January 19, 2011

Connecting to ASM Instance

The ASM and database instances must have equivalent operating system access rights. For example, the ASM instance and the database instance must have identical read and write permissions for the disks that comprise the related ASM disk group. For UNIX systems, this is typically provided through shared UNIX group membership. On Windows systems, the ASM service can run as Administrator.

An ASM instance does not have a data dictionary, so the only way to connect to an ASM instance is by using one of three system privileges, SYSASM, SYSDBA, or SYSOPER. There are three modes of connecting to ASM instances:

•Local connection using operating system authentication

sqlplus / AS SYSDBA

•Local connection using password authentication

sqlplus sys/admin as sysdba

•Remote connection by way of Oracle Net Services using password authentication


(ADDRESS = (PROTOCOL = TCP)(HOST = server123)(PORT = 1521))

sqlplus sys/password@myasm as sysdba

Monday, January 17, 2011

Large Objects

What Are Large Objects?

Large Objects (LOBs) are a set of datatypes that are designed to hold large amounts of data. A LOB can hold up to a maximum size ranging from 8 terabytes to 128 terabytes depending on how your database is configured. Storing data in LOBs enables you to access and manipulate the data efficiently in your application.

Why Use Large Objects?
This section introduces different types of data that you encounter when developing applications and discusses which kinds of data are suitable for large objects.

In the world today, applications must deal with the following kinds of data:

•Simple structured data.

This data can be organized into simple tables that are structured based on business rules.

•Complex structured data

This kind of data is complex in nature and is suited for the object-relational features of the Oracle database such as collections, references, and user-defined types.

•Semi-structured data

This kind of data has a logical structure that is not typically interpreted by the database. For example, an XML document that is processed by your application or an external service, can be thought of as semi-structured data. The database provides technologies such as Oracle XML DB, Advanced Queuing, and Messages to help your application work with semi-structured data.

•Unstructured data

This kind of data is not broken down into smaller logical structures and is not typically interpreted by the database or your application. A photographic image stored as a binary file is an example of unstructured data.

Large objects are suitable for these last two kinds of data: semi-structured data and unstructured data. Large objects features allow you to store these kinds of data in the database as well as in operating system files that are accessed from the database.

With the growth of the internet and content-rich applications, it has become imperative that the database support a datatype that:

•Can store unstructured and semi-structured data in an efficient manner.

•Is optimized for large amounts of data.

•Provides a uniform way of accessing data stored within the database or outside the database.

Using LOBs for Semi-structured Data
Examples of semi-structured data include document files such as XML documents or word processor files. These kinds of documents contain data in a logical structure that is processed or interpreted by an application, and is not broken down into smaller logical units when stored in the database.

Applications involving semi-structured data typically use large amounts of character data. The Character Large Object (CLOB) and National Character Large Object (NCLOB) datatypes are ideal for storing and manipulating this kind of data.

Binary File objects (BFILE datatypes) can also store character data. You can use BFILEs to load read-only data from operating system files into CLOB or NCLOB instances that you then manipulate in your application.

Using LOBs for Unstructured Data
Unstructured data cannot be decomposed into standard components. For example, data about an employee can be structured into a name, which is stored as a string; an identifier, such as an ID number, a salary and so on. A photograph, on the other hand, consists of a long stream of 1s and 0s. These bits are used to switch pixels on or off so that you can see the picture on a display, but are not broken down into any finer structure for database storage.

Also, unstructured data such as text, graphic images, still video clips, full motion video, and sound waveforms tends to be large in size. A typical employee record may be a few hundred bytes, while even small amounts of multimedia data can be thousands of times larger.

SQL datatypes that are ideal for large amounts of unstructured binary data include the BLOB datatype (Binary Large Object) and the BFILE datatype (Binary File object).

Why Not Use LONGs?
The database supports LONG as well as LOB datatypes. When possible, change your existing applications to use LOBs instead of LONGs because of the added benefits that LOBs provide. LONG-to-LOB migration enables you to easily migrate your existing applications that access LONG columns, to use LOB columns.

Applications developed for use with Oracle Database version 7 and earlier used the LONG or LONG RAW data type to store large amounts of unstructured data.

With the Oracle8i and later versions of the database, using LOB datatypes is recommended for storing large amounts of unstructured and semi-structured data. LOB datatypes have several advantages over LONG and LONG RAW types including:

•LOB Capacity: LOBs can store much larger amounts of data. LOBs can store 4GB of data or more depending on your system configuration. LONG and LONG RAW types are limited to 2GB of data.

•Number of LOB columns in a table: A table can have multiple LOB columns. LOB columns in a table can be of any LOB type. In Oracle Database Release 7.3 and higher, tables are limited to a single LONG or LONG RAW column.

•Random piece-wise access: LOBs support random access to data, but LONGs support only sequential access.

•LOBs can also be object attributes.

Different Kinds of LOBs

Different kinds of LOBs can be stored in the database or in external files.

Internal LOBs
LOBs in the database are stored inside database tablespaces in a way that optimizes space and provides efficient access. The following SQL datatypes are supported for declaring internal LOBs: BLOB, CLOB, and NCLOB. Details on these datatypes are given in "Large Object Datatypes"

SQL Datatype Description

BLOB (Binary Large Object)

Stores any kind of data in binary format. Typically used for multimedia data such as images, audio, and video.

CLOB (Character Large Object)

Stores string data in the database character set format. Used for large strings or documents that use the database character set exclusively. Characters in the database character set are in a fixed width format.

NCLOB (National Character Set Large Object)

Stores string data in National Character Set format. Used for large strings or documents in the National Character Set. Supports characters of varying width format.

BFILE (External Binary File)

A binary file stored outside of the database in the host operating system file system, but accessible from database tables. BFILEs can be accessed from your application on a read-only basis. Use BFILEs to store static data, such as image data, that does not need to be manipulated in applications.

Any kind of data, that is, any operating system file, can be stored in a BFILE. For example, you can store character data in a BFILE and then load the BFILE data into a CLOB specifying the character set upon loading.

Persistent and Temporary LOBs

Internal LOBs (LOBs in the database) can be either persistent or temporary. A persistent LOB is a LOB instance that exists in a table row in the database. A temporary LOB instance is created when you instantiate a LOB only within the scope of your local application.

A temporary instance becomes a persistent instance when you insert the instance into a table row.

Persistent LOBs use copy semantics and participate in database transactions. You can recover persistent LOBs in the event of transaction or media failure, and any changes to a persistent LOB value can be committed or rolled back. In other words, all the Atomicity Consistency Isolation Durability (ACID) properties that pertain to using database objects pertain to using persistent LOBs.

External LOBs and the BFILE Datatype

External LOBs are data objects stored in operating system files, outside the database tablespaces. The database accesses external LOBs using the SQL datatype BFILE. The BFILE datatype is the only external LOB datatype.

BFILEs are read-only datatypes. The database allows read-only byte stream access to data stored in BFILEs. You cannot write to a BFILE from within your application.

The database uses reference semantics with BFILE columns. Data stored in a table column of type BFILE, is physically located in an operating system file, not in the database tablespace.

You typically use BFILEs to hold:

•Binary data that does not change while your application is running, such as graphics.

•Data that is loaded into other large object types, such as a BLOB or CLOB where the data can then be manipulated.

•Data that is appropriate for byte-stream access, such as multimedia.

•Read-only data that is relatively large in size, to avoid taking up large amounts of database tablespace.

Any storage device accessed by your operating system can hold BFILE data, including hard disk drives, CD-ROMs, PhotoCDs and DVDs. The database can access BFILEs provided the operating system supports stream-mode access to the operating system files.

Ref :- Oracle® Database SecureFiles and Large Objects Developer's Guide

Thursday, January 13, 2011

Auditing / FGA

Server Setup:-

To Enable:-

audit_trail = none/false/os/db/true/db,extended/xml/xml,extended

To disable:-

audit_trail = none/false

db/true - Auditing is enabled and audit records are stored in AUD$.

db,extended - Auditing records are stored in AUD$ and two more columns sql_bind, sql_text are included.

xml - Audit records are stored as xml files in the audit_file_dest directory.

xml,extended - as above, with sql_bind and sql_text.

os - Auditing records are stored in audit_file_dest.

audit_sys_operations = true/false -> If set to true, it will also audit the operations performed by sysdba/sysoper and the sys user on the database. If false, it will audit only other users.


> alter system set audit_trail = db scope=spfile;
> shut immediate;
> startup;
> audit all by manzoor by access; -- where manzoor is username.
> audit select table , update table, delete table, insert table by
manzoor by access;
> audit execute procedure by manzoor by access;

The audit trail records are stored in AUD$; you can query the results using the DBA views.

dba_audit_trail -> standard auditing AUD$
dba_fga_audit_trail -> Fine grained Auditing FGA_LOG$
dba_common_audit_trail -> Both standard and FGA
V$xml_audit_trail -> if xml or xml,extended


Auditing should be planned carefully to control the quantity of audit information. Only audit specific operations or objects of interest.

The database audit trail must be deleted or archived on a regular basis to prevent the sys.aud$ table from growing to an unacceptable size. Only the DBA should have maintenance access to the audit trail. Auditing modifications of the data in the audit trail itself can be achieved using the following statement.

> audit insert, delete, select, update on sys.aud$ by access;

The OS audit files should be secured at the OS level by assigning the correct file permissions.

Fine Grained Auditing

Fine-grained auditing extends Oracle's standard auditing capabilities by allowing the user to audit actions based on user-defined predicates. It is independent of the audit_trail parameter. The FGA records are stored in the FGA_LOG$ table.

begin
  dbms_fga.add_policy (
    object_schema   => 'MANZOOR',
    object_name     => 'EMP',
    policy_name     => 'FGAEXP',
    audit_condition => 'SAL > 50000',
    audit_column    => 'SAL');
end;
/

Here we have created an FGA policy: if any user executes a query whose returned rows contain sal > 50000, the statement is recorded in fga_log$ and can be viewed through the dba_fga_audit_trail view.

SQL> select * from emp;

---------- ------------------------------ ----------
1 Anif 40000
2 Jameel 50000
3 Kannan 60000

SQL> select * from emp where name = 'Jameel';

---------- ------------------------------ ----------
2 Jameel 50000

SQL> select * from emp where name = 'Anif';

---------- ------------------------------ ----------
1 Anif 40000

SQL> select * from emp where name = 'Kannan';

---------- ------------------------------ ----------
3 Kannan 60000

SQL> select sql_text from dba_fga_audit_trail;

select * from emp
select * from emp where name = 'Kannan'

Here we can see that the query "select * from emp where name = 'Anif'" does not get
listed in the dba_fga_audit_trail view because the salary in its result is less than 50000.

Auditing DML Operations:


SQL> select sql_text from dba_fga_audit_trail;

select * from emp where name = 'Anif'
insert into emp values (4,'Sahrmi',55555)
update emp set sal = 45000 where name = 'Anif'
delete from emp where sal=45000

Here, whenever the sal column is affected by any DML operation, it is audited.


Tuesday, January 11, 2011

What is PGA

Overview of the Program Global Area

Oracle Database allocates a program global area (PGA) for each server process. The PGA is used to process SQL statements and to hold logon and other session information. For the purposes of memory management, the collection of all PGAs is known as the instance PGA. Using an initialization parameter, you set the size of the instance PGA, and the database distributes memory to individual PGAs as needed.


Background processes also allocate their own PGAs. This discussion focuses on server process PGAs only.
This section contains the following topics:

Content of the PGA

The content of the PGA memory varies, depending on whether or not the instance is running the shared server option. Generally speaking, the PGA memory is divided into the following areas:

a)Session Memory
b)Private SQL Area

Session Memory
Session memory is the memory allocated to hold a session's variables (logon information) and other information related to the session. For a shared server, the session memory is shared and not private.

Private SQL Area
The private SQL area contains data such as bind variable values, query execution state information, and query execution work areas. Each session that issues a SQL statement has a private SQL area. Each user that submits the same SQL statement has his or her own private SQL area that uses a single shared SQL area. Thus, many private SQL areas can be associated with the same shared SQL area.

The location of a private SQL area depends on the type of connection established for a session. If a session is connected through a dedicated server, private SQL areas are located in the server process's PGA. However, if a session is connected through a shared server, part of the private SQL area is kept in the SGA.

This section includes the following topics:

a) Cursors and SQL Areas
b) Private SQL Area Components
c) SQL Work Areas

Cursors and SQL Areas
The application developer of an Oracle Database precompiler program or OCI program can explicitly open cursors, or handles to specific private SQL areas, and use them as a named resource throughout the execution of the program. Recursive cursors that Oracle Database issues implicitly for some SQL statements also use shared SQL areas.

The management of private SQL areas is the responsibility of the user process. The allocation and deallocation of private SQL areas depends largely on which application tool you are using, although the number of private SQL areas that a user process can allocate is always limited by the initialization parameter OPEN_CURSORS. The default value of this parameter is 50.

A private SQL area continues to exist until the corresponding cursor is closed or the statement handle is freed. Although Oracle Database frees the run-time area after the statement completes, the persistent area remains allocated until the cursor is closed. Application developers should close all open cursors that will not be used again, to free the persistent area and to minimize the amount of memory required by users of the application.
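The OPEN_CURSORS limit, and how many cursors each session currently holds open, can be checked from the dictionary. A sketch, assuming a privileged account:

```sql
-- Current OPEN_CURSORS limit for the instance
SELECT value
FROM   v$parameter
WHERE  name = 'open_cursors';

-- Cursors currently open, counted per session
SELECT sid, COUNT(*) AS open_cursors
FROM   v$open_cursor
GROUP  BY sid
ORDER  BY open_cursors DESC;
```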

Private SQL Area Components
The private SQL area of a cursor is itself divided into two areas whose lifetimes are different:

The persistent area—This area contains bind variable values. It is freed only when the cursor is closed.

The runtime area—Oracle Database creates this area as the first step of an execute request. It contains the following structures:

Query execution state information

For example, for a full table scan, this area contains information on the progress of the scan.

SQL work areas

These areas are allocated as needed for memory-intensive operations like sorting or hash-joins.

For DML, the run-time area is freed when the statement finishes running. For queries, it is freed after all rows are fetched or the query is canceled.

SQL Work Areas

SQL work areas are allocated to support memory-intensive operators such as the following:

Sort-based operators (ORDER BY, GROUP BY, ROLLUP, window functions)

Hash join

Bitmap merge

Bitmap create

For example, a sort operator uses a work area (sometimes called the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (also called the hash area) to build a hash table from its left input. If the amount of data to be processed by these two operators does not fit into a work area, the input data is divided into smaller pieces. This enables some data pieces to be processed in memory while the rest are spilled to temporary disk storage to be processed later. Although bitmap operators do not spill to disk when their associated work area is too small, their complexity is inversely proportional to the size of their work area. Thus, these operators run faster with larger work areas.

The size of a work area can be controlled and tuned. The database automatically tunes work area sizes when automatic PGA memory management is enabled. See "Overview of Memory Management Methods" for more information.

Generally, bigger work areas can significantly improve the performance of a particular operator at the cost of higher memory consumption. Optimally, the size of a work area is big enough to accommodate the input data and auxiliary memory structures allocated by its associated SQL operator. If not, response time increases, because part of the input data must be spilled to temporary disk storage. In the extreme case, if the size of a work area is far too small compared to the input data size, multiple passes over the data pieces must be performed. This can dramatically increase the response time of the operator.
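Whether work areas are sized well can be observed from the dictionary: v$sql_workarea_histogram reports, per work-area size bucket, how many executions completed fully in memory (optimal) versus spilling to disk in one or more passes. A sketch, assuming a privileged account:

```sql
-- Work-area executions by size bucket: optimal (in-memory),
-- one-pass, and multi-pass (spilled more than once to disk).
SELECT low_optimal_size / 1024        AS low_kb,
       (high_optimal_size + 1) / 1024 AS high_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
FROM   v$sql_workarea_histogram
WHERE  total_executions <> 0;
```

A high proportion of one-pass or multi-pass executions suggests the PGA target is too small for the workload.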

For more info...

Wednesday, January 5, 2011

Oracle Releases History


Founded - 1977 in Santa Clara, California, USA, by Larry Ellison, Bob Miner, and Ed Oates
under the name SDL (Software Development Laboratories).

In 1979, SDL was renamed RSI (Relational Software, Inc).

In 1982, RSI was renamed Oracle Systems Corporation to align itself more closely with its flagship product, Oracle Database.

In 1995, Oracle Systems Corporation was renamed Oracle Corporation.

Oracle Releases.

Oracle Version 1

1978 - Oracle Version 1 is written in assembly language; it is never officially released. The name Oracle comes from the code name of a CIA project that the founders had all worked on while at the Ampex Corporation.

Oracle Version 2

Jun 1979 - The first version of the Oracle database software is purchased by
Wright-Patterson Air Force Base and runs on PDP-11 hardware. (The company decided to name the first version of its flagship product "Version 2".)

Oracle Version 3

Mar 1983 - The Oracle database is rewritten in the C language for portability, and
Oracle Version 3 is released.

Oracle Version 4

Oct 1984 - Oracle Version 4 is released, introducing read consistency.

Oracle Version 5

1985 Oracle version 5 is released - one of the first RDBMS to operate in client-server mode.

Oracle Version 5.1

1986 Oracle Version 5.1 released with support for distributed queries. Investigation of clustering begins.

Oracle Version 6

1988 Oracle version 6 is released with support for row level locking and hot backups.

Oracle Version 7

Jun 1992 Oracle 7 is released with performance enhancements, administrative utilities, application-development tools, security features, the ability to persist PL/SQL program units in the database as stored procedures and triggers, and support for declarative referential integrity.

Oracle Version 8

Jun 1997 Oracle 8 is released with SQL object technology, Internet technology and
support for terabytes of data.

Oracle Version 8i

Sep 1998 Oracle 8i is released (the i stands for Internet).

Oracle Version 9i

2000 Oracle 9i and Application Server is released. In May, Oracle announces the
Internet File System (iFS), later re-branded as Oracle Content Management SDK.

Oracle Version 10g

2004 Oracle 10g is released (the g stands for Grid).

Oracle Version 11g

2007 Oracle 11g is released.

Reference :-