The goal of this exercise is to clone an Oracle database that uses Grid / Clusterware components to a separate server.

Although you can find some articles on the Internet, the problem is that they only cover how to change the host name and/or IP address in the case of Oracle Restart.

This article will show you much more: how to clone a complete Oracle database that uses the Oracle Grid Infrastructure to a new server.

First, let me explain why you would do that.

The first reason could be that you want an environment identical to production.
Identical in terms of production data and production configuration, not the CPU (the number of CPU cores can be lower). Here I suggest leaving memory as is.

Having the same environment (apart from the number of CPU cores) allows you to test a database upgrade scenario.

You can also run performance tests, as you have the complete set of data.

Or you need to set up a new environment for a new customer, where you will clean some data while the majority of the other settings remain the same.

Whatever your reason, you have several options, which can be reduced to only two:
– install the Oracle software (Grid & DB) along with a new server setup, create the ASM disks, and restore the database from backup on the new server.

Variants of this option put a performance overhead on the production system, which is not acceptable for mission-critical systems (for calm and small databases such variants are fine).
For that reason I’ve eliminated this option.

– perform a VM clone (an LPAR clone in the case of IBM AIX) and clone all VGs (volume groups) at some point in time.
The main advantage of this option is that you get a ready OS with all the software the same as in production, and the disks mounted.

In the case of a regular database that uses a regular file system (no ASM/Grid/Clusterware), that is all you have to do.

Unfortunately, that is not the case for Oracle Grid/Clusterware based databases like RAC, RAC One Node and even Oracle Restart, as you are about 100+ commands away from the finish.

This is where my journey begins.

In this blog I’ll describe only the main steps you need to perform when using storage-based cloning of a production database that runs all the time with no interruption.

This example is based on IBM AIX and an Oracle Restart database.
Although Oracle Restart is the simplest case of an Oracle database that uses the Grid and Clusterware infrastructure, it is not much simpler than regular RAC or RAC One Node, which I also hope to cover in the future (the main difference is more Grid/Clusterware services along with a virtual private network).
When I logged in to the cloned host, I first checked which services were up and running by executing the following command:

crsctl stat res -t

As the command returned an error message, I realized it wouldn’t be as easy as I had thought: Grid and Clusterware are deeply integrated with the OS, and I now had a new IP address and a new hostname, which are the basis for many services provided by Grid & Clusterware.
As everything was down, you need to follow the steps below.

The first step is to de-configure Oracle Restart. To do that, execute the following as the root user:

./roothas.pl -deconfig -force

Then configure the CSS & OHAS services, again as the root user:

./roothas.pl
Perform the next steps as the grid OS user (the Grid Infrastructure OS owner):

crsctl modify resource "ora.cssd" -init -attr "AUTO_START=1"

Stop HAS:

crsctl stop has

Restart the OHAS stack as grid or oracle OS user (as the Grid Infrastructure OS owner):

crsctl start has

Check the CSS & OHAS state:

crsctl check has

crsctl check css

crsctl stat resource

crsctl stat res -t

Both cssd and evmd should be ONLINE.

Re-create the default listener (LISTENER) using port 1521 (or your desired port), through the NETCA GUI located in the new Grid Infrastructure Oracle Home (or manually if you do not have graphical access), as the grid or oracle OS user (the Grid Infrastructure OS owner):

srvctl add listener

srvctl start listener

CRS-2674: Start of 'ora.LISTENER.lsnr' on 'hostname' failed

As you can see, I got an error and the listener was not started.

By executing the following command, I could reveal what was behind the error:

lsnrctl start listener

TNSLSNR for IBM/AIX RISC System/6000: Version - Production
System parameter file is /network/admin/listener.ora
Log messages written to /xxx/xxx/oracle/diag/tnslsnr/hostname/listener/alert/log.xml
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=old_hostname)(PORT=1521)))
TNS-12545: Connect failed because target host or object does not exist
 TNS-12560: TNS:protocol adapter error
  TNS-00515: Connect failed because target host or object does not exist
   IBM/AIX RISC System/6000 Error: 68: Can't assign requested address

Listener failed to start. See the error message(s) above...

To fix this, follow these steps:

cd to the <11.2 Grid Infrastructure Oracle Home>/network/admin directory, and then execute:

vi listener.ora

Replace the old hostname with the new one in the HOST entries, save the file, and try to start the listener again.
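After the edit, the relevant entry should look something like the sketch below (the hostname, port and IPC key are placeholders; keep whatever your environment uses):

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = new_hostname)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
```

The point is simply that no HOST value may reference the old hostname, since that name no longer resolves to a local address on the cloned server.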

srvctl start listener

This time you should have a running listener. The following command will confirm that:

srvctl status listener

Listener LISTENER is enabled
Listener LISTENER is running on node(s): new_hostname

Now it’s time to create an init+ASM.ora file in the <11.2 Grid Infrastructure Oracle Home>/dbs directory with the following parameters:

asm_diskstring= '/dev/oracleasm/disks/*'


Add the ASM instance as grid or oracle OS user (as the Grid Infrastructure OS owner) by executing the following:

srvctl add asm

Enable ASM instance Auto Start as grid or oracle OS user (as the Grid Infrastructure OS owner) as follows:

<11.2 Grid Infrastructure Oracle Home>/bin/crsctl modify resource "ora.asm" -attr "AUTO_START=1"  

Make sure the disks are discovered by kfod, as grid or oracle OS user (as the Grid Infrastructure OS owner), as follows:

<11.2 Grid Infrastructure Oracle Home>/bin/kfod asm_diskstring='ORCL:*' disks=all
<11.2 Grid Infrastructure Oracle Home>/bin/kfod asm_diskstring='/*' disks=all

[root@+ASM@hostname:/] lspv
hdisk0          00fa0d66205e7c3b                    rootvg          active
hdisk32         00fa0d6625405efc                    logsVG          active
hdisk33         00fa0d662535fd8b                    dataVG          active
hdisk34         none                                None                        
hdisk35         none                                None                        
hdisk36         none                                None                        
hdisk37         none                                None                        
hdisk38         none                                None                        
hdisk39         none                                None                        
hdisk40         none                                None                        
hdisk41         none                                None                        
hdisk42         none                                None                        
hdisk43         none                                None                        
hdisk44         none                                None                        
hdisk45         none                                None                        
hdisk46         none                                None                        

Change the permissions on the ASM disks (shown here for hdisk34 and hdisk35; repeat for each candidate disk up to hdisk46):

chown oracle:oracle /dev/rhdisk34
chmod 660 /dev/rhdisk34

chown oracle:oracle /dev/rhdisk35
chmod 660 /dev/rhdisk35
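Rather than typing the pair of commands thirteen times, you can prepare all candidate disks in one pass. A sketch, assuming the raw devices are /dev/rhdisk34 through /dev/rhdisk46 as in the lspv listing above; run it as root on the target host:

```shell
#!/bin/sh
# Set ownership and permissions on every candidate ASM disk in one loop.
# Numbers are listed explicitly for portability (AIX sh has no seq by default).
for n in 34 35 36 37 38 39 40 41 42 43 44 45 46; do
    dev="/dev/rhdisk${n}"
    if chown oracle:oracle "$dev" 2>/dev/null && chmod 660 "$dev" 2>/dev/null; then
        echo "prepared $dev"
    else
        echo "skipped $dev (device missing or not running as root)"
    fi
done
```

Afterwards re-run the kfod discovery commands above to confirm the disks are now visible.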

After that, I got the expected result:

kfod asm_diskstring='ORCL:*' disks=all
ORACLE_SID ORACLE_HOME                                                          

Now you should be able to start the ASM instance.

Validate that the candidate disks are being discovered:

SYS@+ASM> select  path  from v$asm_disk;


Then create the spfile (a step I assume every decent DBA can handle).
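For completeness, a sketch of that step from a SQL*Plus session on the ASM instance (the pfile path below follows this article’s placeholder convention):

```
SYS@+ASM> create spfile from pfile='<11.2 Grid Infrastructure Oracle Home>/dbs/init+ASM.ora';
```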

Add the new ASM spfile and listener to the new ASM instance resource:

$> <11.2 Grid Infrastructure Oracle Home>/bin/srvctl modify asm -p <spfile location>

$> <11.2 Grid Infrastructure Oracle Home>/bin/srvctl modify asm  -l LISTENER

Finally, you need to add the database as a resource under the control of the Grid Infrastructure.

srvctl add database -d db_name -o <Oracle Database Home> -p <spfile location> -a "data,fra,temp"

srvctl add service -s service_name -d db_name -P BASIC

srvctl start database -d db_name

I also suggest renaming the database and changing the DBID, but that won’t be described here, as it involves too many steps.

Generally, it’s the standard procedure with the addition of deleting/renaming/adding Clusterware resources.

