
How to Add a Node to Oracle RAC (11gR2)

Getting Started

This guide shows how to add a node to an existing 11gR2 Oracle RAC cluster. It is assumed that the node in question is available and is not part of a GNS/Grid Plug and Play cluster; in other words, the database is ‘Administrator-Managed.’ Also, the database software is non-shared and uses role separation, where ‘grid’ owns the clusterware and ‘oracle’ owns the database software. This guide uses a two-node cluster running CentOS 5.7 (x64) with pre-existing nodes ‘rac1’ and ‘rac2’; we will be adding ‘rac3’ to the cluster. This guide does not cover node preparation steps/prerequisites; since there is a pre-existing cluster, it is assumed the reader knows how to prepare a node – from a prerequisite perspective – for cluster addition.
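As a quick sanity check before the formal verifications below, it helps to confirm passwordless SSH user equivalence from an existing node to ‘rac3’ for both software owners (a minimal sketch using this guide's hostnames; each command should return without prompting for a password):

[grid@rac1]$ ssh rac3 date

[oracle@rac1]$ ssh rac3 date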




Verify Requirements

The cluster verify utility – ‘cluvfy’ – is used to determine that the new node is, in fact, ready to be added to the cluster.




Verify New Node (HWOS)

From an existing node, run ‘cluvfy’ to ensure that ‘rac3’ – the cluster node to be added – is ready from a hardware and operating system perspective:


[root@rac1]$ su - grid


[grid@rac1]$ export GRID_HOME=/u01/app/11.2.0/grid


[grid@rac1]$ $GRID_HOME/bin/cluvfy stage -post hwos -n rac3


If successful, the command will end with: ‘Post-check for hardware and operating system setup was successful.’ Otherwise, the script will print meaningful error messages.




Verify Peer (REFNODE)

The cluster verify utility – ‘cluvfy’ – is again used to determine the new node’s readiness. In this case, the new node is compared to an existing node to ensure compatibility/determine any conflicts. From an existing node, run ‘cluvfy’ to check inter-node compatibility:


[grid@rac1]$ $GRID_HOME/bin/cluvfy comp peer -refnode rac1 -n rac3 -orainv oinstall -osdba dba -verbose


In this case, existing node ‘rac1’ is compared with the new node ‘rac3,’ checking such things as the existence/version of required binaries, kernel settings, etc. Invariably, the command will report that ‘Verification of peer compatibility was unsuccessful.’ This is because the command simply looks for mismatches between the systems in question, and certain properties will undoubtedly differ; for example, the amount of free space in /tmp rarely matches exactly. Therefore, certain errors from this command can be ignored, such as ‘Free disk space for "/tmp",’ and so forth. Differences in kernel settings and OS packages/rpms should, however, be addressed.
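If the peer check does flag kernel parameters, a quick way to see the actual differences is to compare the relevant ‘sysctl’ values side by side (a minimal sketch; the parameter list is only illustrative and assumes SSH equivalence is already in place):

[grid@rac1]$ for p in kernel.shmmax kernel.sem fs.file-max; do echo "== $p"; /sbin/sysctl -n $p; ssh rac3 /sbin/sysctl -n $p; done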




Verify New Node (NEW NODE PRE)

The cluster verify utility – ‘cluvfy’ – is used to determine the integrity of the cluster and whether it is ready for a new node. From an existing node, run ‘cluvfy’ to verify the integrity of the cluster:


[grid@rac1]$ $GRID_HOME/bin/cluvfy stage -pre nodeadd -n rac3 -fixup -verbose


If your shared storage is ASM using asmlib you may get an error – similar to the following – due to Bug #10310848:


ERROR:


PRVF-5449 : Check of Voting Disk location "ORCL:CRS1(ORCL:CRS1)" failed on the following nodes:


rac3:No such file or directory


PRVF-5431 : Oracle Cluster Voting Disk configuration check failed


The aforementioned error can be safely ignored, whereas other errors should be addressed before continuing.
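If you want reassurance that the voting disk itself is healthy despite PRVF-5449, you can query it directly from an existing node:

[grid@rac1]$ $GRID_HOME/bin/crsctl query css votedisk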




Extend Clusterware

The clusterware software will be extended to the new node.




Run "addNode.sh"

From an existing node, run "addNode.sh" to extend the clusterware to the new node ‘rac3:’


[grid@rac1]$ export IGNORE_PREADDNODE_CHECKS=Y


[grid@rac1]$ $GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"


In my case, I am using ASM with asmlib on 11.2.0.2.0, so I hit the aforementioned Bug #10310848. Therefore, I had to set the ‘IGNORE_PREADDNODE_CHECKS’ environment variable so that the command would run to completion; if you did not encounter the bug in the prior steps, you can omit it.


If the command is successful, you should see a prompt similar to the following:


The following configuration scripts need to be executed as the "root" user in each cluster node.


/u01/app/oraInventory/orainstRoot.sh #On nodes rac3


/u01/app/11.2.0/grid/root.sh #On nodes rac3


To execute the configuration scripts:


1. Open a terminal window


2. Log in as "root"


3. Run the scripts in each cluster node


The Cluster Node Addition of /u01/app/11.2.0/grid was successful.


Run the ‘root.sh’ commands on the new node as directed:


[root@rac3]$ /u01/app/oraInventory/orainstRoot.sh


[root@rac3]$ /u01/app/11.2.0/grid/root.sh


If successful, clusterware daemons, the listener, the ASM instance, etc. should be started by the ‘root.sh’ script:


[root@rac3]$ $GRID_HOME/bin/crs_stat -t -v -c rac3


Name           Type           R/RA F/FT Target State  Host
----------------------------------------------------------------------
ora....N2.lsnr ora....er.type 0/5  0/0  ONLINE ONLINE rac3
ora....SM3.asm application    0/5  0/0  ONLINE ONLINE rac3
ora....C3.lsnr application    0/5  0/0  ONLINE ONLINE rac3
ora.rac3.ons   application    0/3  0/0  ONLINE ONLINE rac3
ora.rac3.vip   ora....t1.type 0/0  0/0  ONLINE ONLINE rac3
ora.scan2.vip  ora....ip.type 0/0  0/0  ONLINE ONLINE rac3
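Note that ‘crs_stat’ is deprecated in 11.2; the same resource state can also be viewed with ‘crsctl’ (output omitted here):

[root@rac3]$ $GRID_HOME/bin/crsctl stat res -t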




Verify New Node (NEW NODE POST)

Again, the cluster verify utility – ‘cluvfy’ – is used to verify that the clusterware has been extended to the new node properly:


[grid@rac1]$ $GRID_HOME/bin/cluvfy stage -post nodeadd -n rac3 -verbose


A successful run should yield ‘Post-check for node addition was successful.’
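You can also confirm that ‘rac3’ is now an active member of the cluster with ‘olsnodes’ – it should list all three nodes with a status of ‘Active:’

[grid@rac1]$ $GRID_HOME/bin/olsnodes -n -s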




Extend Oracle Database Software

The Oracle database software will be extended to the new node.




Run "addNode.sh"

From an existing node – as the database software owner – run the following command to extend the Oracle database software to the new node ‘rac3:’


[oracle@rac1]$ echo $ORACLE_HOME


/u01/app/oracle/product/11.2.0/db_1


[oracle@rac1]$ $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"


If the command is successful, you should see a prompt similar to the following:


The following configuration scripts need to be executed as the "root" user in each cluster node.


/u01/app/oracle/product/11.2.0/db_1/root.sh #On nodes rac3


To execute the configuration scripts:


1. Open a terminal window


2. Log in as "root"


3. Run the scripts in each cluster node


The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db_1 was successful.


Run the ‘root.sh’ commands on the new node as directed:


[root@rac3]$ /u01/app/oracle/product/11.2.0/db_1/root.sh




Change ownership of ‘oracle’

If you are using job/role separation, then you will have to change the group and permissions of the ‘oracle’ executable in the newly created $ORACLE_HOME on ‘rac3:’


[root@rac3]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1


[root@rac3]$ chgrp asmadmin $ORACLE_HOME/bin/oracle


[root@rac3]$ chmod 6751 $ORACLE_HOME/bin/oracle


[root@rac3]$ ls -lart $ORACLE_HOME/bin/oracle


-rwsr-s--x 1 oracle asmadmin 228886450 Feb 21 11:33 /u01/app/oracle/product/11.2.0/db_1/bin/oracle


The end-goal is to have the permissions of the ‘oracle’ binary match the other nodes in the cluster.
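A quick way to confirm the end state is to compare the binary against an existing node (run here as ‘oracle,’ which has SSH equivalence across the cluster; both listings should show owner ‘oracle,’ group ‘asmadmin,’ and mode ‘-rwsr-s--x’):

[oracle@rac3]$ ls -l /u01/app/oracle/product/11.2.0/db_1/bin/oracle

[oracle@rac3]$ ssh rac1 ls -l /u01/app/oracle/product/11.2.0/db_1/bin/oracle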




Verify Administrative Privileges (ADMPRV)

Verify administrative privileges across all nodes in the cluster for the Oracle database software home:


[oracle@rac3]$ $ORACLE_HOME/bin/cluvfy comp admprv -o db_config -d $ORACLE_HOME -n rac1,rac2,rac3 -verbose


A successful run should yield ‘Verification of administrative privileges was successful.’




Add Instance to Clustered Database

A database instance will be established on the new node. Specifically, an instance named ‘racdb3’ will be added to ‘racdb’ – a pre-existing clustered database.




Satisfy Node Instance Dependencies

Satisfy all node instance dependencies, such as the password file, ‘init.ora’ parameters, etc.


From the new node ‘rac3,’ run the following commands to put the password file and ‘init.ora’ file in place (the ‘racdb1’ copies brought over by ‘addNode.sh’ are simply renamed) and to add an ‘oratab’ entry for the new instance:


[oracle@rac3]$ echo $ORACLE_HOME


/u01/app/oracle/product/11.2.0/db_1


[oracle@rac3]$ cd $ORACLE_HOME/dbs


[oracle@rac3 dbs]$ mv initracdb1.ora initracdb3.ora


[oracle@rac3 dbs]$ mv orapwracdb1 orapwracdb3


[oracle@rac3 dbs]$ echo "racdb3:$ORACLE_HOME:N" >> /etc/oratab
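Because the ‘init’ file was copied over from ‘rac1’ by ‘addNode.sh,’ it typically contains only an SPFILE pointer; in this guide's configuration it should reference the shared spfile shown later in the OCR output (‘+DATA/racdb/spfileracdb.ora’). A quick check:

[oracle@rac3 dbs]$ cat initracdb3.ora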


From a node with an existing instance of ‘racdb,’ issue the following commands to create the public redo log thread, undo tablespace, and spfile parameters needed by the new instance:


[oracle@rac1]$ export ORACLE_SID=racdb1


[oracle@rac1]$ . oraenv


The Oracle base remains unchanged with value /u01/app/oracle


[oracle@rac1]$ sqlplus "/ as sysdba"


SQL> alter database add logfile thread 3 group 7 ('+DATA','+FRA') size 100M, group 8 ('+DATA','+FRA') size 100M, group 9 ('+DATA','+FRA') size 100M;


SQL> alter database enable public thread 3;


SQL> create undo tablespace undotbs3 datafile '+DATA' size 200M;


SQL> alter system set undo_tablespace=undotbs3 scope=spfile sid='racdb3';


SQL> alter system set instance_number=3 scope=spfile sid='racdb3';


SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';
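Before starting the new instance, a quick verification from the same SQL*Plus session can confirm that thread 3 exists and the new undo tablespace is available:

SQL> select thread#, status, enabled from v$thread;

SQL> select tablespace_name, status from dba_tablespaces where tablespace_name = 'UNDOTBS3';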




Update Oracle Cluster Registry (OCR)

The OCR will be updated to account for the new instance – ‘racdb3’ – being added to the ‘racdb’ cluster database, as well as for changes to a service – ‘racsvc.colestock.test.’


Add the ‘racdb3’ instance to the ‘racdb’ database and verify:


[oracle@rac3]$ srvctl add instance -d racdb -i racdb3 -n rac3


[oracle@rac3]$ srvctl status database -d racdb -v


Instance racdb1 is running on node rac1 with online services racsvc.colestock.test. Instance status: Open.

Instance racdb2 is running on node rac2 with online services racsvc.colestock.test. Instance status: Open.

Instance racdb3 is not running on node rac3


[oracle@rac3]$ srvctl config database -d racdb


Database unique name: racdb


Database name: racdb


Oracle home: /u01/app/oracle/product/11.2.0/db_1


Oracle user: oracle


Spfile: +DATA/racdb/spfileracdb.ora


Domain: colestock.test


Start options: open


Stop options: immediate


Database role: PRIMARY


Management policy: AUTOMATIC


Server pools: racdb


Database instances: racdb1,racdb2,racdb3


Disk Groups: DATA,FRA


Mount point paths:


Services: racsvc.colestock.test


Type: RAC


Database is administrator managed


‘racdb3’ has been added to the configuration.


Add the ‘racdb3’ instance to the ‘racsvc.colestock.test’ service and verify:


[oracle@rac3]$ srvctl add service -d racdb -s racsvc.colestock.test -r racdb3 -u


[oracle@rac3]$ srvctl config service -d racdb -s racsvc.colestock.test


Service name: racsvc.colestock.test


Service is enabled


Server pool: racdb_racsvc.colestock.test


Cardinality: 3


Disconnect: false


Service role: PRIMARY


Management policy: AUTOMATIC


DTP transaction: false


AQ HA notifications: false


Failover type: SELECT


Failover method: NONE


TAF failover retries: 5


TAF failover delay: 10


Connection Load Balancing Goal: LONG


Runtime Load Balancing Goal: NONE


TAF policy specification: BASIC


Edition:


Preferred instances: racdb3,racdb1,racdb2


Available instances:
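Note that ‘racdb3’ was added as a preferred instance of the existing service via the ‘-u’ (update) flag. Once the new instance is up (next section), if the service does not come online on it automatically, it can be started there explicitly:

[oracle@rac3]$ srvctl start service -d racdb -s racsvc.colestock.test -i racdb3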


Start the Instance


Now that all the prerequisites have been satisfied and the OCR has been updated, the ‘racdb3’ instance can be started.


Start the newly created instance – ‘racdb3’ – and verify:


[oracle@rac3]$ srvctl start instance -d racdb -i racdb3


[oracle@rac3]$ srvctl status database -d racdb -v


Instance racdb1 is running on node rac1 with online services racsvc.colestock.test. Instance status: Open.


Instance racdb2 is running on node rac2 with online services racsvc.colestock.test. Instance status: Open.


Instance racdb3 is running on node rac3 with online services racsvc.colestock.test. Instance status: Open.
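As a final check, a quick query from SQL*Plus – run here from the new node – should show all three instances open:

[oracle@rac3]$ export ORACLE_SID=racdb3

[oracle@rac3]$ sqlplus "/ as sysdba"

SQL> select inst_id, instance_name, host_name, status from gv$instance;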
