4 Oracle Flex Clusters

An Oracle Flex Cluster scales Oracle Clusterware to large numbers of nodes.

This chapter includes the following topics:

  • Overview of Oracle Flex Clusters

  • Managing Oracle Flex Clusters

Overview of Oracle Flex Clusters

Oracle Grid Infrastructure installed in an Oracle Flex Cluster configuration is a scalable, dynamic, robust network of nodes. Oracle Flex Clusters provide a platform for a variety of applications, including Oracle Real Application Clusters (Oracle RAC) databases with large numbers of nodes. Oracle Flex Clusters also provide a platform for other service deployments that require coordination and automation for high availability.

All nodes in an Oracle Flex Cluster belong to a single Oracle Grid Infrastructure cluster. This architecture centralizes policy decisions for deployment of resources based on application needs, to account for various service levels, loads, failure responses, and recovery.

Oracle Flex Clusters contain two types of nodes arranged in a hub-and-spoke architecture: Hub Nodes and Leaf Nodes. An Oracle Flex Cluster can have up to 64 Hub Nodes and a much larger number of Leaf Nodes. Hub Nodes and Leaf Nodes can host different types of applications.

Hub Nodes are similar to Oracle Grid Infrastructure nodes in an Oracle Clusterware standard Cluster configuration: they are tightly connected and have direct access to shared storage. In an Oracle Flex Cluster configuration, shared storage can be provisioned to Leaf Nodes independently of the Oracle Grid Infrastructure.

Leaf Nodes are different from standard Oracle Grid Infrastructure nodes in that they do not require direct access to shared storage; instead, they request data through Hub Nodes. Hub Nodes can run in an Oracle Flex Cluster configuration without any Leaf Nodes as cluster member nodes, but Leaf Nodes must be members of a cluster that includes at least one Hub Node.
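
For example, you can use CRSCTL to confirm the mode of the cluster and the role for which a node is configured; both commands are covered in the procedures later in this chapter:

    $ crsctl get cluster mode status
    $ crsctl get node role config

The first command reports whether the cluster is running in standard or flex mode, and the second reports whether the local node is configured as a Hub Node or a Leaf Node.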

Note:

If you upgrade an Oracle Flex Cluster, then Oracle recommends that you upgrade the Hub Nodes first, and that you keep any upgraded Hub Nodes up and running for the remainder of the upgrade process.

Managing Oracle Flex Clusters

This section discusses Oracle Flex Cluster administration after successful installation of Oracle Grid Infrastructure for either a small or large cluster. Use CRSCTL to manage Oracle Flex Clusters.

This section includes the following topics:

  • Changing the Cluster Mode

  • Changing the Node Role

Changing the Cluster Mode

You can change the mode of an existing Oracle Clusterware standard Cluster to be an Oracle Flex Cluster.

Notes:

  • Changing the cluster mode requires cluster downtime.

  • Oracle does not support changing an Oracle Flex Cluster to an Oracle Clusterware standard Cluster.

  • Oracle Flex Cluster requires Grid Naming Service (GNS).

  • Zone delegation is not required.

Changing an Oracle Clusterware Standard Cluster to an Oracle Flex Cluster

To change an existing Oracle Clusterware standard Cluster to an Oracle Flex Cluster (a consolidated example follows these steps):

  1. Run the following command to determine the current mode of the cluster:

    $ crsctl get cluster mode status
    
  2. Run the following command to ensure that the Grid Naming Service (GNS) is configured with a fixed VIP:

    $ srvctl config gns
    

    This procedure cannot succeed unless GNS is configured with a fixed VIP. If there is no GNS, then create one as root, as follows:

    # srvctl add gns -vip vip_name | ip_address
    

    Run the following command as root to start GNS:

    # srvctl start gns
    
  3. Use the Oracle Automatic Storage Management Configuration Assistant (ASMCA) to enable Oracle Flex ASM in the cluster before you change the cluster mode.

    See Also:

    Oracle Automatic Storage Management Administrator's Guide for more information about enabling Oracle Flex ASM
  4. Run the following command as root to change the mode of the cluster to be an Oracle Flex Cluster:

    # crsctl set cluster mode flex
    
  5. Stop Oracle Clusterware by running the following command as root on each node in the cluster:

    # crsctl stop crs
    
  6. Start Oracle Clusterware by running the following command as root on each node in the cluster:

    # crsctl start crs -wait
    

    Note:

    Use the -wait option to display progress and status messages.
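
The following consolidates the preceding steps into a single sketch for a cluster on which GNS is not yet configured. The VIP address 192.0.2.155 is a placeholder only; substitute an address that is valid on your network. The sketch also assumes that Oracle Flex ASM has already been enabled with ASMCA (step 3).

    $ crsctl get cluster mode status
    $ srvctl config gns
    # srvctl add gns -vip 192.0.2.155
    # srvctl start gns
    # crsctl set cluster mode flex
    # crsctl stop crs
    # crsctl start crs -wait

Run the srvctl commands that add and start GNS only if GNS is not already configured, and run crsctl stop crs and crsctl start crs -wait as root on each node in the cluster.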

Changing the Node Role

The configured role of a node (Hub Node or Leaf Node) is stored locally and is initially set at installation time. At startup, a node attempts to come up in the role for which it was last configured.

Use CRSCTL to change the role of a node, as follows (a worked example with placeholder names follows these steps):

  1. Run the following command to determine the current role of the local node:

    $ crsctl get node role config
    
  2. Run the following command as root to change the role of the local node:

    # crsctl set node role {hub | leaf}
    

    Note:

    If you are changing a Leaf Node to a Hub Node, then you may have to run srvctl add vip to add a VIP, if a VIP does not already exist on the node. Leaf Nodes are not required to have VIPs.

    If you installed the cluster with DHCP-assigned VIPs, then there is no need to manually add a VIP.

  3. As root, stop Oracle High Availability Services on the node where you changed the role, as follows:

    # crsctl stop crs
    
  4. If you are changing a Leaf Node to a Hub Node, then configure the Oracle ASM Filter Driver as root, as follows:

    # $ORACLE_HOME/bin/asmcmd afd_configure
    

    See Also:

    Oracle Automatic Storage Management Administrator's Guide for more information about the asmcmd afd_configure command
  5. As root, restart Oracle High Availability Services on the node where you changed the role, as follows:

    # crsctl start crs -wait
    

    Note:

    Use the -wait option to display progress and status messages.
  6. Perform steps 3 and 5 on the local node.

  7. Manually update the inventory.

    If you convert a Hub Node to a Leaf Node, then run the following command on all remaining Hub Nodes:

    $ Grid_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Oracle_home
    "CLUSTER_NODES={comma_separated_Hub_Node_list}" -silent -local CRS=TRUE
    

    On the newly converted Leaf Node, run the following command:

    $ Grid_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Oracle_home
    "CLUSTER_NODES={Leaf_Node_name}" -silent -local CRS=TRUE
    

    If you convert a Leaf Node to a Hub Node, then run the following command on all Hub Nodes:

    $ Grid_home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Oracle_home
    "CLUSTER_NODES={comma_separated_Hub_Node_list}" -silent -local CRS=TRUE
    

See Also:

"Oracle RAC Environment CRSCTL Commands" for usage information about the preceding CRSCTL commands