10 Adding and Deleting Oracle RAC from Nodes on Linux and UNIX Systems

This chapter describes how to extend an existing Oracle Real Application Clusters (Oracle RAC) home to other nodes and instances in the cluster, and delete Oracle RAC from nodes and instances in the cluster. This chapter provides instructions for Linux and UNIX systems.

If your goal is to clone an existing Oracle RAC home to create multiple new Oracle RAC installations across the cluster, then use the cloning procedures that are described in Chapter 8, "Cloning Oracle RAC to Nodes in a New Cluster".

The topics in this chapter include the following:

  • Adding Oracle RAC to Nodes with Oracle Clusterware Installed

  • Deleting Oracle RAC from a Cluster Node

Notes:

  • Ensure that you have a current backup of the Oracle Cluster Registry (OCR) before adding or deleting Oracle RAC. You can check for recent OCR backups by running the ocrconfig -showbackup command (see the example after these notes).

  • The phrase "target node" as used in this chapter refers to the node to which you plan to extend the Oracle RAC environment.
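
For example, you can list the most recent OCR backups, and take a manual backup if needed, by running commands such as the following as root from the Grid_home/bin directory (a brief sketch using standard ocrconfig options):

# ocrconfig -showbackup
# ocrconfig -manualbackup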


Adding Oracle RAC to Nodes with Oracle Clusterware Installed

Before beginning this procedure, ensure that your existing nodes have the correct path to the Grid_home and that the $ORACLE_HOME environment variable is set to the Oracle RAC home.

See Also:

Oracle Clusterware Administration and Deployment Guide for information about extending the Oracle Clusterware home to new nodes in a cluster
  • If you are using a local (non-shared) Oracle home, then you must extend the Oracle RAC database home that is on an existing node (node1 in this procedure) to a target node (node3 in this procedure).

    Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script.

    If you want to perform a silent installation, run the addnode.sh script using the following syntax:

    $ ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}"
    
  • If you have a shared Oracle home that is shared using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following to extend the Oracle database home to node3:

    1. Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:

      # srvctl start filesystem -device volume_device [-node node_name]
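
      For example, if the Oracle home resides on an Oracle ACFS volume with the (hypothetical) device name /dev/asm/dbhome1-451, you might run:

      # srvctl start filesystem -device /dev/asm/dbhome1-451 -node node3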
      

      Note:

      Make sure the Oracle ACFS resources, including Oracle ACFS registry resource and Oracle ACFS file system resource where the Oracle home is located, are online on the newly added node.
    2. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={node3}"
      LOCAL_NODE="node3" ORACLE_HOME_NAME="home_name" -cfs
      
    3. Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
      

      Note:

      Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.
  • If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:

    1. Run the srvctl config database -db db_name command on an existing node in the cluster to obtain the mount point information.
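
      For example, for a (hypothetical) database with the unique name orcl:

      $ srvctl config database -db orcl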

    2. Run the following command as root on node3 to create the mount point:

      # mkdir -p mount_point_path
      
    3. Mount the file system that hosts the Oracle RAC database home.
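
      For example, assuming the shared home is exported over NFS from a (hypothetical) server named nfs-server, the mount command on node3 might resemble the following; use the mount options that your storage documentation recommends for hosting Oracle software:

      # mount nfs-server:/export/oracle/dbhome_1 /u01/app/oracle/product/12.1.0/dbhome_1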

    4. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME"
      "CLUSTER_NODES={local_node_name}" LOCAL_NODE="node_name" ORACLE_HOME_NAME="home_name"
      
    5. Update the Oracle Inventory as the user that installed Oracle RAC, as follows:

      $ ./runInstaller -updateNodeList ORACLE_HOME=mount_point_path "CLUSTER_NODES={node_list}"
      

      In the preceding command, node_list refers to a list of all nodes where the Oracle RAC database home is installed, including the node you are adding.

Run the Oracle_home/root.sh script on node3 as root.
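
For example (the Oracle home path shown is hypothetical):

# /u01/app/oracle/product/12.1.0/dbhome_1/root.sh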

Note:

Oracle recommends that you back up the OCR after you complete the node addition process.

You can now add an Oracle RAC database instance to the target node using either of the procedures in the following sections.

Adding Policy-Managed Oracle RAC Database Instances to Target Nodes

You must manually add undo and redo logs, unless you store your policy-managed database on Oracle Automatic Storage Management (Oracle ASM) and Oracle Managed Files is enabled.

If there is space in a server pool to add a node and the database has been started at least once, then Oracle Clusterware adds the Oracle RAC database instance to the newly added node and no further action is necessary.

Note:

The database must have been started at least once before you can add the database instance to the newly added node.

If there is no space in any server pool, then the newly added node moves into the Free server pool. Use the srvctl modify srvpool command to increase the cardinality of a server pool to accommodate the newly added node, after which the node moves out of the Free server pool and into the modified server pool, and Oracle Clusterware adds the Oracle RAC database instance to the node.
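
For example, to increase the maximum size of a (hypothetical) server pool named srvpool1 to three servers so that it can accommodate the newly added node, you might run:

$ srvctl modify srvpool -serverpool srvpool1 -max 3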

Adding Administrator-Managed Oracle RAC Database Instances to Target Nodes

Note:

The procedures in this section only apply to administrator-managed databases. Policy-managed databases use nodes when the nodes are available in the databases' server pool.

You can use either Oracle Enterprise Manager or DBCA to add Oracle RAC database instances to the target nodes. To add a database instance to a target node with Oracle Enterprise Manager, see the Oracle Database 2 Day + Real Application Clusters Guide for complete information.

This section describes using DBCA to add Oracle RAC database instances.

These tools guide you through the following tasks:

  • Creating a new database instance on each target node

  • Creating and configuring high availability components

  • Creating the Oracle Net configuration for a non-default listener from the Oracle home

  • Starting the new instance

  • Creating and starting services if you entered services information on the Services Configuration page

After adding the instances to the target nodes, you should perform any necessary service configuration procedures, as described in Chapter 5, "Workload Management with Dynamic Database Services".

Using DBCA in Interactive Mode to Add Database Instances to Target Nodes

To add a database instance to a target node with DBCA in interactive mode, perform the following steps:

  1. Ensure that your existing nodes have the $ORACLE_HOME environment variable set to the Oracle RAC home.

  2. Start DBCA by entering dbca at the system prompt from the Oracle_home/bin directory.

    DBCA performs certain Cluster Verification Utility (CVU) checks while running. However, you can also run CVU from the command line to perform various verifications.

    See Also:

    Oracle Clusterware Administration and Deployment Guide for more information about CVU

    DBCA displays the Welcome page for Oracle RAC. Click Help on any DBCA page for additional information.

  3. Select Instance Management, click Next, and DBCA displays the Instance Management page.

  4. Select Add Instance and click Next. DBCA displays the List of Cluster Databases page that shows the databases and their current status, such as ACTIVE or INACTIVE.

  5. From the List of Cluster Databases page, select the active Oracle RAC database to which you want to add an instance. Click Next and DBCA displays the List of Cluster Database Instances page showing the names of the existing instances for the Oracle RAC database that you selected.

  6. Click Next to add a new instance and DBCA displays the Adding an Instance page.

  7. On the Adding an Instance page, enter the instance name in the field at the top of this page if the instance name that DBCA provides does not match your existing instance naming scheme.

  8. Review the information on the Summary dialog and click OK or click Cancel to end the instance addition operation. DBCA displays a progress dialog showing DBCA performing the instance addition operation.

  9. After you terminate your DBCA session, run the following command to verify the administrative privileges on the target node and obtain detailed information about these privileges, where nodelist consists of the names of the nodes on which you added database instances:

    cluvfy comp admprv -o db_config -d Oracle_home -n nodelist [-verbose]
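
    For example, with a (hypothetical) Oracle home path and node3 as the node that you added:

    cluvfy comp admprv -o db_config -d /u01/app/oracle/product/12.1.0/dbhome_1 -n node3 -verbose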
    
  10. Perform any necessary service configuration procedures, as described in Chapter 5, "Workload Management with Dynamic Database Services".

Deleting Oracle RAC from a Cluster Node

To remove Oracle RAC from a cluster node, you must delete the database instance and the Oracle RAC software before removing the node from the cluster.

Note:

If there are no database instances on the node you want to delete, then proceed to "Removing Oracle RAC".

This section includes the following procedures to delete nodes from clusters in an Oracle RAC environment:

  • Deleting Instances from Oracle RAC Databases

  • Removing Oracle RAC

  • Deleting Nodes from the Cluster

Deleting Instances from Oracle RAC Databases

The procedures for deleting database instances are different for policy-managed and administrator-managed databases. Deleting a policy-managed database instance involves reducing the number of servers in the server pool in which the database instance resides. Deleting an administrator-managed database instance involves using DBCA to delete the database instance.

To delete a policy-managed database instance, reduce the number of servers in the server pool in which the database instance resides by relocating the server on which the instance resides to another server pool. This effectively removes the instance without having to remove the Oracle RAC software from the node or the node from the cluster.

For example, you can delete a policy-managed database instance by running the following commands on any node in the cluster:

$ srvctl stop instance -d db_unique_name -n node_name
$ srvctl relocate server -n node_name -g Free

The first command stops the database instance on a particular node and the second command moves the node out of its current server pool and into the Free server pool.

See Also:

"Removing Oracle RAC" for information about removing the Oracle RAC software from a node

Deleting Instances from Administrator-Managed Databases

Note:

Before deleting an instance from an Oracle RAC database, use SRVCTL to do the following:
  • If you have services configured, then relocate the services (see the sketch following this note)

  • Modify the services so that each service can run on one of the remaining instances

  • Ensure that the instance to be removed from an administrator-managed database is neither a preferred nor an available instance of any service
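
For example, the following commands illustrate one way to do this (a sketch that assumes Oracle Database 12c srvctl syntax, a hypothetical database orcl, a service sales, and instances orcl1 and orcl2, where orcl2 is the instance to be deleted):

$ srvctl relocate service -db orcl -service sales -oldinst orcl2 -newinst orcl1
$ srvctl modify service -db orcl -service sales -modifyconfig -preferred "orcl1"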

The procedure in this section explains how to use DBCA in interactive mode to delete an instance from an Oracle RAC database.

See Also:

Oracle Database 2 Day + Real Application Clusters Guide for information about how to delete a database instance from a target node with Oracle Enterprise Manager

Using DBCA in Interactive Mode to Delete Instances from Nodes

To delete an instance using DBCA in interactive mode, perform the following steps:

  1. Start DBCA.

    Start DBCA on a node other than the node that hosts the instance that you want to delete. The database and the instance that you plan to delete should be running during this step.

  2. On the DBCA Operations page, select Instance Management and click Next. DBCA displays the Instance Management page.

  3. On the DBCA Instance Management page, select the instance to be deleted, select Delete Instance, and click Next.

  4. On the List of Cluster Databases page, select the Oracle RAC database from which to delete the instance, as follows:

    1. On the List of Cluster Database Instances page, DBCA displays the instances that are associated with the Oracle RAC database that you selected and the status of each instance. Select the instance that you want to delete.

    2. Click OK on the Confirmation dialog to proceed to delete the instance.

      DBCA displays a progress dialog showing that DBCA is deleting the instance. During this operation, DBCA removes the instance and the instance's Oracle Net configuration.

      Click No and exit DBCA or click Yes to perform another operation. If you click Yes, then DBCA displays the Operations page.

  5. Verify that the dropped instance's redo thread has been removed by using SQL*Plus on an existing node to query the GV$LOG view. If the redo thread is not disabled, then disable the thread. For example:

    SQL> ALTER DATABASE DISABLE THREAD 2;
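
    To confirm which redo threads still have online redo log groups, you can run a query such as the following (a sketch); the dropped instance's thread should no longer appear once it has been removed:

    SQL> SELECT DISTINCT THREAD# FROM GV$LOG ORDER BY THREAD#;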
    
  6. Verify that the instance has been removed from OCR by running the following command, where db_unique_name is the database unique name for your Oracle RAC database:

    srvctl config database -d db_unique_name
    
  7. If you are deleting more than one node, then repeat these steps to delete the instances from all the nodes that you are going to delete.

Removing Oracle RAC

This procedure removes Oracle RAC software from the node you are deleting from the cluster and updates inventories on the remaining nodes.

  1. If there is a listener in the Oracle RAC home on the node you are deleting, then you must disable and stop it before deleting the Oracle RAC software. Run the following commands on any node in the cluster, specifying the name of the listener and the name of the node you are deleting:

    $ srvctl disable listener -l listener_name -n name_of_node_to_delete
    $ srvctl stop listener -l listener_name -n name_of_node_to_delete
    
  2. Run the following command from $ORACLE_HOME/oui/bin on the node that you are deleting to update the inventory on that node:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
    "CLUSTER_NODES={name_of_node_to_delete}" -local
    

    Note:

    If you have a shared Oracle RAC home, then append the -cfs option to the preceding command and provide a complete path to the location of the cluster file system.
  3. Deinstall the Oracle home (only if the Oracle home is not shared) from the node that you are deleting by running the following command from the Oracle_home/deinstall directory:

    $ ./deinstall -local
    

    Caution:

    If the Oracle home is shared, then do not run this command because it will remove the shared software. Proceed to the next step, instead.
  4. Run the following command from the $ORACLE_HOME/oui/bin directory on any one of the remaining nodes in the cluster to update the inventories of those nodes, specifying a comma-delimited list of remaining node names and the name of the local node:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
    "CLUSTER_NODES={remaining_node_list}" LOCAL_NODE=local_node_name
    

    Notes:

    • Because not all nodes in an Oracle Flex Cluster necessarily have the database software installed, remaining_node_list must list only those nodes on which the Oracle RAC database home is installed.

    • If you have a shared Oracle RAC home, then append the -cfs option to the command example in this step and provide a complete path to the location of the cluster file system.
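
    For example, if node1 and node2 are the remaining nodes that have the Oracle RAC database home installed and you run the command from node1 (node names and home path are hypothetical):

    $ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={node1,node2}" LOCAL_NODE=node1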

Deleting Nodes from the Cluster

After you delete the database instance and the Oracle RAC software, you can begin the process of deleting the node from the cluster. You accomplish this by running scripts on the node you want to delete to remove the Oracle Clusterware installation, and then running scripts on the remaining nodes to update the node list.

See Also:

Oracle Clusterware Administration and Deployment Guide for information about deleting nodes from the cluster