
Uncommitted Data in Redo Log Files and Datafiles, Explained by Example

Does the log writer (LGWR) write both uncommitted and committed data to the redo log files?
The answer is yes.

LGWR writes the contents of the log buffer to the redo log files when the log buffer is one-third full, when 1 MB of redo has been generated, every 3 seconds, or at every commit.
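
You can get a rough feel for this activity from the instance statistics. The queries below are only an illustrative sketch: they show the configured log buffer size and a few redo-related statistics from v$sysstat (output omitted; the values depend entirely on your workload and version).

SQL> show parameter log_buffer

SQL> select name, value
       from v$sysstat
      where name in ('redo size', 'redo writes', 'user commits');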

  • Take the example of a user "A" executing a huge transaction that generates a lot of redo records, so the log buffer becomes one-third full (or accumulates 1 MB of redo). Even though the user has not committed, LGWR must write the entire contents of the log buffer to the redo log files (a sketch for watching a session's redo generation follows this list).
  • Take another example where a user commits his transaction; as the rule says, LGWR must write at every commit. At that point, along with the committed redo of that session, the log buffer may also contain uncommitted redo from other sessions or transactions, and that is written to the redo log files as well.
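
To watch the first scenario for your own session, the query below is a minimal sketch that reports how much redo the current session has generated so far, committed or not (it joins v$mystat to v$statname and assumes you have privileges to query both views).

SQL> select s.name, m.value
       from v$mystat m, v$statname s
      where m.statistic# = s.statistic#
        and s.name = 'redo size';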

Does DBWR write both uncommitted and committed data to the datafiles?
The answer is yes.

Redo for dirty blocks cannot be overwritten. If the redo logs fill up (all of them become active), changes in the database stop until all of the dirty blocks protected by the oldest redo log have been written to disk. At this point we get the message "checkpoint not complete" in the alert log.
Dirty buffers (or blocks) are those that have been modified since they were read from disk (the datafiles). Even if no commit has occurred, enough redo generated by ongoing transactions causes frequent log switches, and eventually the oldest log file must be overwritten to accommodate more redo. A checkpoint occurs, which invokes DBWR to write all dirty buffers protected by that log file to disk. The sketch below shows one quick way to count dirty buffers in the cache; after that, let's walk through a test I performed on one of my test databases.
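
A minimal sketch, assuming you can query v$bh (its DIRTY column flags buffers that have been modified but not yet written to disk):

SQL> select count(*) as dirty_buffers from v$bh where dirty = 'Y';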

SQL> create tablespace test11 datafile '/opt/oracle/data/dwh01/test101.dbf' size

Tablespace created.

SQL> create table test_table (name varchar2(20)) tablespace test11;

Table created.

SQL> insert into test_table values ('Akshay');

1 row created.

Issue the below command at the OS (Unix) level, from the directory containing the datafile:

[oracle@host1]$ strings test101.dbf | grep Akshay

No result displayed.

SQL> alter system checkpoint;

System altered.

[oracle@host1]$ strings test101.dbf | grep Akshay
Akshay

In the above case, even though no commit had occurred, the checkpoint caused DBWR to write the dirty buffer to disk.
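
If you want to know exactly which file and block hold the row (and therefore which block DBWR wrote), the query below is a hedged sketch using the dbms_rowid package against the test_table created above.

SQL> select dbms_rowid.rowid_relative_fno(rowid) as file_no,
            dbms_rowid.rowid_block_number(rowid) as block_no
       from test_table;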

Now let us update the record and, without committing, perform some log switches.

SQL> update test_table set name='Narang';

1 row updated.

[oracle@host1]$ strings test101.dbf | grep Akshay
Akshay
[oracle@host1]$ strings test101.dbf | grep Narang
No result displayed.

SQL> alter system switch logfile;

System altered.


SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 INACTIVE
         3 ACTIVE

[oracle@host1]$ strings test101.dbf | grep Narang
No result displayed.

SQL>  alter system switch logfile;

System altered.


SQL> /

    GROUP# STATUS
---------- ----------------
         1 ACTIVE
         2 CURRENT
         3 ACTIVE

[oracle@host1]$ strings test101.dbf | grep Narang
No result displayed.

SQL>  alter system switch logfile;

System altered.

SQL> /

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 INACTIVE
         3 CURRENT

[oracle@host1]$ strings test101.dbf | grep Narang
Narang
[oracle@host1]$ strings test101.dbf | grep Akshay
No result displayed.

Because of the log switches, Oracle had to reuse the first redo log file, but the update to the corresponding block had not yet been written to disk. This triggered a checkpoint, which directed DBWR to write the block to the corresponding datafile even though the change was not committed.
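
One way to confirm that the checkpoint actually advanced for this datafile is to look at the checkpoint SCN recorded in its header before and after the log switches. The query below is a minimal sketch; the file name matches the datafile created earlier, and the values you see will differ on your system.

SQL> select file#, checkpoint_change#, checkpoint_time
       from v$datafile_header
      where name like '%test101.dbf';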


Controlling Checkpoints and Log Switches

A checkpoint is the event during which the database writer process (DBWR) writes all modified database buffers in the SGA to the appropriate datafiles. A log switch is the event during which LGWR stops writing to one online redo log group and starts writing to another. The two events are often connected: by default, the instance takes a checkpoint at each log switch, and a log switch takes place automatically when the current online redo log group fills.
However, you can have checkpoints occur more often than log switches, or force a checkpoint ahead of schedule without a log switch. You can also force a log switch, and its accompanying checkpoint, ahead of schedule; the sketch below shows the relevant commands and parameters.
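
The statements and parameters below are a short sketch of the common knobs. Which of them apply depends on your Oracle version; on recent releases FAST_START_MTTR_TARGET is the usual way to influence checkpoint frequency, while the older LOG_CHECKPOINT_* parameters still exist.

-- Force a checkpoint without a log switch
SQL> alter system checkpoint;

-- Force a log switch (by default this also triggers a checkpoint)
SQL> alter system switch logfile;

-- Parameters that influence checkpoint frequency
SQL> show parameter fast_start_mttr_target
SQL> show parameter log_checkpoint_interval
SQL> show parameter log_checkpoint_timeout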
