
RESETLOGS always mandatory after executing Recovery using Backup Controlfile

After an incomplete recovery, the database must always be opened with the RESETLOGS option. But what happens if I issue the "RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE" command on a database that was simply shut down with ABORT?

The test below demonstrates that RESETLOGS is always required after running RECOVER DATABASE ... USING BACKUP CONTROLFILE, even when nothing was restored from a backup.

A running database is brought down with the ABORT option.

SQL> shutdown abort
ORACLE instance shut down.

[oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Thu Jul 30 15:51:28 2015

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1620115456 bytes
Fixed Size                  2226952 bytes
Variable Size            1241515256 bytes
Database Buffers          369098752 bytes
Redo Buffers                7274496 bytes

SQL> alter database mount;

Database altered.
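At this point the instance is started and the controlfile is mounted, but the datafiles are still closed. As an optional sanity check (not part of the original session), a quick query should confirm the state before recovery is attempted:

SQL> select status from v$instance;
SQL> select open_mode from v$database;

Both queries should report MOUNTED on this database.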

Now I execute the command to recover using backup controlfile.  

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 1926317955 generated at 07/30/2015 15:20:20 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/product/11.2.0/dbhome_1/dbs/arch1_6153_864753447.dbf
ORA-00280: change 1926317955 for thread 1 is in sequence #6153

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/oracle/data/test01/system01.dbf'

ORA-01112: media recovery not started
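The warning shows that datafile 1 still needs recovery. An optional check of the datafile headers while the database is mounted (my addition, not from the original run) illustrates why: after a shutdown abort the files were not checkpointed cleanly and are marked fuzzy.

SQL> select file#, fuzzy, checkpoint_change# from v$datafile_header;

Files showing FUZZY = YES contain changes beyond their checkpoint SCN and need the redo from sequence 6153 applied before a consistent open is possible.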

I could have simply opened the database and let SMON perform crash recovery.
To try that, I cancelled the recovery, since the required redo log had not yet been archived. But when I attempt to open the database, I get the error below.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
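The reason the plain OPEN is rejected is that the RECOVER ... USING BACKUP CONTROLFILE command marks the mounted controlfile as a backup controlfile, and Oracle then insists on RESETLOGS (or NORESETLOGS) at open time. An optional query (my addition) should make this visible:

SQL> select controlfile_type from v$database;

This should now report BACKUP instead of CURRENT.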

So RESETLOGS is now a must to open the database. What should we do next? I checked the redo log information and found that the log sequence that needs to be applied has still not been archived.

SQL> select sequence# from v$log;

 SEQUENCE#
----------
      6151
      6153
      6152
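To confirm that the missing sequence 6153 is the current online log and was never archived, a slightly extended query (my addition, not in the original run) can be used:

SQL> select sequence#, status, archived from v$log order by sequence#;

Sequence 6153 should show STATUS = CURRENT and ARCHIVED = NO, which is why the suggested archived log file does not exist on disk.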


SQL> select member,group# from v$logfile where group#=(select group# from v$log where sequence#=6153);

MEMBER                                     GROUP#
------------------------------------------ ------
/opt/oracle/data/test01/redo03.log              3


Simply pass the online redo log file name when prompted during the recovery.

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 1926317955 generated at 07/30/2015 15:20:20 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/product/11.2.0/dbhome_1/dbs/arch1_6153_864753447.dbf
ORA-00280: change 1926317955 for thread 1 is in sequence #6153


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/opt/oracle/data/test01/redo03.log
Log applied.
Media recovery complete.
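As an optional verification before the RESETLOGS open (again, not part of the original session), the same datafile header check from earlier can be repeated:

SQL> select file#, fuzzy, checkpoint_change# from v$datafile_header;

After the online redo log has been applied, all files should report FUZZY = NO and a consistent checkpoint SCN, so the RESETLOGS open should now succeed.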

Now open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.
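Opening with RESETLOGS creates a new incarnation of the database and resets the log sequence numbers. An optional query (my addition) shows the new resetlogs SCN and time:

SQL> select resetlogs_change#, resetlogs_time from v$database;
SQL> select group#, sequence#, status from v$log;

Since older backups and archived logs now belong to the previous incarnation, it is good practice to take a fresh full backup immediately after a RESETLOGS open.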

Remember, it is always mandatory to open the database with the RESETLOGS option once the RECOVER ... USING BACKUP CONTROLFILE command has been used, even if the database only needed crash recovery.

