
Oracle Database Concepts in Detail

Oracle is a relational database management system (RDBMS): a mechanism to store and retrieve user data.
Data is stored logically in Oracle objects such as tables, and physically within operating system files, which are referred to as datafiles in Oracle.
To retrieve or view the data we can use various programming languages. SQL is the most widely used language to query, modify, or add data in an Oracle database.

An Oracle server consists of an instance and a database.

An instance is associated with one database. In a RAC environment there can be more than one instance associated with a single database.

An instance mainly consists of two parts: the memory allocated to the Oracle server, i.e., the System Global Area (SGA), and the Oracle background processes.

The System Global Area (SGA) consists of the following memory structures:

DB Buffer Cache: Temporarily stores the data blocks read on behalf of user queries. These blocks are managed using an LRU queue. A block that is used frequently is kept on the most recently used (MRU) end of the queue; such a block is also called a hot block. Blocks that are not being used gradually shift toward the LRU end and are eventually overwritten. Any block that has been modified after it was read into memory from disk, for example by an update or delete statement, is known as a dirty block or dirty buffer. To make these changes permanent, dirty buffers are written to the datafiles by the database writer process (DBWn).
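As a rough illustration (assuming you have privileges to query the V$BH dynamic view, which shows one row per buffer in the cache), you can count how many buffers are currently dirty:

```sql
-- Count clean vs. dirty buffers in the buffer cache (requires V$BH access)
SELECT dirty, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY dirty;
```

Buffers reported with DIRTY = 'Y' are the ones waiting to be written to the datafiles by DBWn.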

Redo Log Buffer: Records all changes made to the data. Think of it as a court reporter who writes down every word spoken during a trial. On a commit or a checkpoint, data from the log buffer is written to the online redo log files by the log writer process (LGWR).

Shared Pool: Stores parsed SQL/PLSQL code and data dictionary information.
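To see how the SGA is currently divided among these structures, you can query the V$SGA_DYNAMIC_COMPONENTS view (available when the instance is up and you have the necessary privileges):

```sql
-- Current size of each dynamically sized SGA component, in MB
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$sga_dynamic_components
ORDER  BY current_size DESC;
```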

How do we size these memory structures? To make sure the Oracle database performs well on user requests, a DBA needs to ensure that the memory structures are adequately sized. Their values can be set manually, or we can leave it to Oracle's Automatic Memory Management, a feature introduced in Oracle 11g that was not available in older releases.

Automatic Memory Management:
memory_max_target and memory_target are the two important parameters. Setting these initialization parameters to a value greater than zero enables Automatic Memory Management.
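You can check the current values of both parameters from a SQL*Plus session:

```sql
-- SQL*Plus: display the current memory management settings
show parameter memory_target
show parameter memory_max_target
```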

If these parameters are set as follows:

memory_max_target=0
memory_target=0

then you are not using Automatic Memory Management.

The two parameters differ in purpose. If they are set to non-zero values, you are using Automatic Memory Management (AMM). Below is an example of how they are used:

Suppose you have 128G of memory (RAM) on your database server and your systems team is happy for the database to use 40G of it. That means you have 40G of RAM to play with. In this scenario you can set memory_max_target=40G, which can be done with the following commands:

alter system set memory_max_target=40G scope=spfile;
shutdown immediate
startup

By setting memory_max_target=40G you allow your instance to take hold of 40G of RAM. At this stage, if you set the second parameter memory_target=30G, you are instructing the Oracle instance to use only 30G of the 40G.
If you do not set the memory_target parameter, Oracle will use the full 40G of RAM.
It is important to note that when memory_max_target=40G and memory_target=30G, the remaining 10G is still held by Oracle even though it is not in use; the OS cannot use it until you shut down the instance.

Why do we need memory_max_target when we already have memory_target?

memory_max_target is a static initialization parameter, whereas memory_target is a dynamic parameter. This means that when memory_max_target is set to 40G and memory_target is set to 30G, you can adjust memory_target to a different value (up to memory_max_target) while the instance is up and running, by issuing the following command:

alter system set memory_target=35G scope=both;
or
alter system set memory_target=35G;

Having the option to adjust memory_target dynamically lets you find the memory target that suits a particular database. Suppose it is 35G for our example: at the next outage or restart, set memory_max_target=35G and leave memory_target=35G.
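To help choose that value, Oracle 11g and later expose an advisor view, V$MEMORY_TARGET_ADVICE, which estimates how database time would change at different memory_target sizes:

```sql
-- Estimated effect of different memory_target settings
SELECT memory_size, memory_size_factor, estd_db_time
FROM   v$memory_target_advice
ORDER  BY memory_size;
```

A row with MEMORY_SIZE_FACTOR = 1 shows the current setting; rows where a larger size yields little improvement in ESTD_DB_TIME suggest you have enough memory already.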

Background Processes

Background processes start with instance startup and generally stop with instance shutdown.
On Linux each process has its own process ID, while on a Windows server Oracle.exe is the master process and the background processes run as threads under it.

Below are a few background processes and their descriptions:

PMON: Performs process recovery when a user process fails. PMON is responsible for cleaning up the database buffer cache and freeing resources that the user process was using.

SMON: The system monitor process performs instance recovery, if necessary, at instance startup. SMON is also responsible for cleaning up temporary segments that are no longer in use and for coalescing contiguous free extents within dictionary-managed tablespaces. If any terminated transactions were skipped during instance recovery because of file-read or offline errors, SMON recovers them when the tablespace or file is brought back online. SMON checks regularly to see whether it is needed.

DBWn: Writes modified (dirty) buffers from the database buffer cache to disk (the datafiles). Usually one database writer process is sufficient, but in certain cases there can be more than one.

LGWR: Writes the redo log buffer to a redo log file on disk.

CKPT: When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process.
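You can list the background processes actually running in your instance by querying V$BGPROCESS; entries whose process address (PADDR) is '00' are defined but not active:

```sql
-- Background processes currently active in this instance
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```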

Database

Data is stored permanently in physical operating system files called datafiles. There are two different views of a database. One is logical, i.e., the structured way in which the data is presented to the user. The other is physical, i.e., the actual mechanism in which the data is stored.

The logical database is invisible to the operating system. Below are the logical units that comprise an Oracle database.
  • Blocks - The fundamental unit of data storage. Oracle blocks are always a multiple of OS blocks.
  • Extents - A collection of contiguous Oracle blocks. The default size is 1 MB.
  • Segments - A segment is a database object that has space allocated to it - for example, a table, index, or materialized view. A segment consists of one or more extents allocated within a tablespace.
  • Tablespaces - A collection of segments. A tablespace can have one or more datafiles.
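The block/extent/segment hierarchy can be seen in the data dictionary. The query below lists the extents allocated to a segment; SCOTT.EMP is only an example name here, so substitute an owner and segment that exist in your database:

```sql
-- Extents and blocks allocated to one segment (requires DBA privileges)
SELECT segment_name, extent_id, blocks, bytes
FROM   dba_extents
WHERE  owner = 'SCOTT'
AND    segment_name = 'EMP'
ORDER  BY extent_id;
```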

The physical database is exposed to the operating system. These are physical files that occupy space on disk. Modifying them at the OS level can directly impact the functioning of the database.


  • Datafiles - Store user data in Oracle data blocks, which are always a multiple of OS blocks.
  • Temp files - Store temporary data, such as the intermediate results of sorts.
  • Redo log files - Store each and every change made to the database. Changes are recorded in the online redo log file by LGWR. When a file is completely full, a log switch occurs and the next file becomes the online file. The older file is archived by the archiver process so that it can be overwritten; the archiver only comes into the picture when the database is in archivelog mode. Redo log switches occur in a cyclic fashion. Redo log files are critical for recovery of a database, so it is highly recommended to multiplex them.
  • Controlfiles - Store information about the datafiles, redo log files, and other important metadata of a database. It is recommended to keep multiplexed copies of the controlfile.
  • Parameter file - When the Oracle instance is brought up, it reads the parameter file, which holds the value of each parameter that defines the database - for example, the value of memory_target or db_block_size.
  • Password file - Keeps information about the users who can connect with the sysdba privilege.
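The physical files above can be listed from the dynamic performance views (assuming the instance is mounted or open and you have the required privileges):

```sql
-- Physical structure of the database
SELECT name   FROM v$datafile;    -- datafiles
SELECT name   FROM v$tempfile;    -- temp files
SELECT member FROM v$logfile;     -- online redo log members
SELECT name   FROM v$controlfile; -- controlfile copies
```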

In my future posts, we will see which tasks Oracle performs internally when a simple SQL select statement is executed by a user.

You can leave any questions you may have on this or any of my other posts, and I will try to answer all your queries.
