Review the following sections to check that you have the networking hardware and internet protocol (IP) addresses required for an Oracle Grid Infrastructure for a cluster installation.
This chapter contains the following topics:
Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
Multicast Requirements for Networks Used by Oracle Grid Infrastructure
Note:
For the most up-to-date information about supported network protocols and hardware for Oracle RAC installations, see the Certify pages on the My Oracle Support website at the following URL: https://support.oracle.com
The following is a list of requirements for network configuration:
Each node must have at least two network adapters or network interface cards (NICs): one for the public network interface, and one for the private network interface (the interconnect).
When you upgrade a node to Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) and later, the upgraded system uses your existing network classifications.
For Solaris 11 and higher, the network adapter is a logical device and not a physical device.
To configure multiple public interfaces, use a third-party technology for your platform to aggregate the multiple public interfaces before you start installation, and then select the single interface name for the combined interfaces as the public interface. Oracle recommends that you do not identify multiple public interface names during Oracle Grid Infrastructure installation. Note that if you configure two network interfaces as public network interfaces in the cluster without using an aggregation technology, the failure of one public interface on a node does not result in automatic VIP failover to the other public interface.
Oracle recommends that you use the Redundant Interconnect Usage feature to make use of multiple interfaces for the private network. However, you can also use third-party technologies to provide redundancy for the private network, such as link aggregation or IP network multipathing (IPMP).
Note:
Redundant Interconnect Usage requires a complete release 11.2.0.2 or later stack (Oracle Grid Infrastructure and Oracle Database). Earlier Oracle Database releases cannot use this feature, and must use third-party link aggregation technologies or IPMP. If you consolidate different database releases in one cluster, and use database releases earlier than Oracle Database 11.2.0.2, then you may require both technologies.
For the public network, each network adapter must support TCP/IP.
For the private network, the interface must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet).
Note:
UDP is the default interface protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the interconnect. Oracle recommends that you use a dedicated switch.
Oracle does not support Token Ring or crossover cables for the interconnect.
If you have a shared Ethernet VLAN deployment, with shared physical adapter, ensure that you apply standard Ethernet design, deployment, and monitoring best practices to protect against cluster outages and performance degradation due to common shared Ethernet switch network events.
For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must connect to every node of the cluster. For example, if the private interfaces have a subnet mask of 255.255.255.0, then your private network is in the range 192.168.0.0 to 192.168.0.255, and your private addresses must be in the range 192.168.0.[0-255]. If the private interfaces have a subnet mask of 255.255.0.0, then your private addresses can be in the range 192.168.[0-255].[0-255].
For clusters using Redundant Interconnect usage, each private interface can be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster. For example, you can have private networks on subnets 192.168.0 and 10.0.0, but each cluster member node must have an interface connected to the 192.168.0 and 10.0.0 subnets.
For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network: every node must be connected to every private network interface. You can test whether an interconnect interface is reachable by using the ping command.
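For example, a minimal reachability check, assuming hypothetical private host names node1-priv and node2-priv (substitute your own private names or addresses):
$ ping node2-priv    # run from node1
$ ping node1-priv    # run from node2
Repeat the check from every node for every designated interconnect interface; each endpoint must respond.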
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need of using bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes noncommunicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
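As an illustration, after installation you can list the interfaces and their cluster classifications with the oifcfg command. The output below is a sketch with hypothetical interface names and subnets; the HAIP addresses themselves are assigned from the 169.254.0.0/16 link-local range on the interfaces classified as cluster_interconnect:
$ oifcfg getif
net0  192.0.2.0    global  public
net1  192.168.0.0  global  cluster_interconnect
net2  10.0.0.0     global  cluster_interconnect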
Note:
During installation, you can define up to four interfaces for the private network. The number of HAIP addresses created during installation is based on both physical and logical interfaces configured for the network adapter. After installation, you can define additional interfaces. If you define more than four interfaces as private network interfaces, be aware that Oracle Clusterware activates only four of the interfaces at a time. However, if one of the four active interfaces fails, then Oracle Clusterware transitions the HAIP addresses configured on the failed interface to one of the reserve interfaces in the defined set of private interfaces.
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about HAIP addresses
Oracle Grid Infrastructure and Oracle RAC support the standard IPv6 address notations specified by RFC 2732 and global and site-local IPv6 addresses as defined by RFC 4193.
Cluster member node interfaces can be configured to use IPv4, IPv6, or both types of Internet protocol addresses. However, be aware of the following:
Configuring public VIPs: During installation, you can configure VIPs for a given public network as IPv4 or IPv6 types of addresses. You can configure an IPv6 cluster by selecting VIP and SCAN names that resolve to addresses in an IPv6 subnet for the cluster, and selecting that subnet as public during installation. After installation, you can also configure cluster member nodes with a mixture of IPv4 and IPv6 addresses.
If you install using static virtual IP (VIP) addresses in an IPv4 cluster, then the VIP names you supply during installation should resolve only to IPv4 addresses. If you install using static IPv6 addresses, then the VIP names you supply during installation should resolve only to IPv6 addresses.
During installation, you cannot configure the cluster with VIP and SCAN names that resolve to both IPv4 and IPv6 addresses. For example, you cannot configure VIPs and SCANs on some cluster member nodes to resolve to IPv4 addresses, and VIPs and SCANs on other cluster member nodes to resolve to IPv6 addresses. Oracle does not support this configuration.
Configuring private IP interfaces (interconnects): You must configure the private network as an IPv4 network. IPv6 addresses are not supported for the interconnect.
Redundant network interfaces: If you configure redundant network interfaces for a public or VIP node name, then configure both interfaces of a redundant pair to the same address protocol. Also ensure that private IP interfaces use the same IP protocol. Oracle does not support names using redundant interface configurations with mixed IP protocols. You must configure both network interfaces of a redundant pair with the same IP protocol.
GNS or Multi-cluster addresses: Oracle Grid Infrastructure supports IPv4 DHCP addresses, and IPv6 addresses configured with the Stateless Address Autoconfiguration protocol, as described in RFC 2462.
Note:
Link-local and site-local IPv6 addresses as defined in RFC 1884 are not supported.
See Also:
http://www.ietf.org/rfc/rfc2732.txt for RFC 2732, and information about IPv6 notational representation
http://www.ietf.org/rfc/rfc3513.txt for RFC 3513, and information about proper IPv6 addressing
http://www.ietf.org/rfc/rfc2462.txt for RFC 2462, and information about IPv6 Stateless Address Autoconfiguration protocol
Oracle Database Net Services Administrator's Guide for more information about network communication and IP address protocol options
For small clusters, you can use a static configuration of IP addresses. For large clusters, manually maintaining the large number of required IP addresses becomes too cumbersome. The Oracle Grid Naming Service is used with large clusters to ease network administration costs.
This section contains the following topics:
IP Name and Address Requirements For Grid Naming Service (GNS)
IP Name and Address Requirements for Standard Cluster Manual Configuration
Before starting the installation, you must have at least two interfaces configured on each node: one for the private IP address and one for the public IP address.
You can configure IP addresses with one of the following options:
Dynamic IP address assignment using Multi-cluster or standard Oracle Grid Naming Service (GNS). If you select this option, then network administrators delegate a subdomain to be resolved by GNS (standard or multicluster). Requirements for GNS are different depending on whether you choose to configure GNS with zone delegation (resolution of a domain delegated to GNS), or without zone delegation (a GNS virtual IP address without domain delegation):
For GNS with zone delegation:
For IPv4, a DHCP service running on the public network the cluster uses
For IPv6, an autoconfiguration service running on the public network the cluster uses
Enough addresses on the DHCP server to provide one IP address for each node, and three IP addresses for the cluster used by the Single Client Access Name (SCAN) for the cluster
For GNS without zone delegation:
Configure a GNS virtual IP address (VIP) for the cluster. To enable Oracle Flex Cluster, you must at least configure a GNS virtual IP address.
Use an existing GNS configuration. Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), a single GNS instance can be used by multiple clusters. To use GNS for multiple clusters, the DNS administrator must have delegated a zone for use by GNS. Also, there must be an instance of GNS started somewhere on the network, and the GNS instance must be accessible (not blocked by a firewall). All of the node names registered with the GNS instance must be unique.
Static IP address assignment using DNS or host file resolution. If you select this option, then network administrators assign a fixed IP address for each physical host name in the cluster, and for the IP addresses of the Oracle Clusterware managed VIPs. In addition, either domain name server (DNS) based static name resolution is used for each node, or host files for both the clusters and clients have to be updated, resulting in limited SCAN functionality. Selecting this option requires that you request network administration updates when you modify the cluster.
Note:
Oracle recommends that you use a static host name for all non-VIP server node public host names.
Public IP addresses and virtual IP addresses must be in the same subnet.
The cluster name is case-insensitive, must be unique across your enterprise, must be at least one character long and no more than 15 characters in length, must be alphanumeric, cannot begin with a numeral, and may contain hyphens (-). Underscore characters (_) are not allowed.
If you configure a Standard cluster, and choose a Typical install, then the SCAN is also the name of the cluster. In that case, the SCAN must meet the requirements for a cluster name. The SCAN can be no longer than 15 characters.
In an Advanced installation, the SCAN and cluster name are entered in separate fields during installation, so cluster name requirements do not apply to the name used for the SCAN, and the SCAN can be longer than 15 characters. If you enter a domain with the SCAN name, and you want to use GNS with zone delegation, then the domain must be the GNS domain.
Note:
Select your name carefully. After installation, you can only change the cluster name by reinstalling Oracle Grid Infrastructure.
If you enable Grid Naming Service (GNS), then name resolution requests to the cluster are delegated to the GNS, which is listening on the GNS virtual IP address. The network administrator must configure the domain name server (DNS) to delegate resolution requests for cluster names (any names in the subdomain delegated to the cluster) to the GNS. When a request comes to the domain, GNS processes the requests and responds with the appropriate addresses for the name requested. To use GNS, you must specify a static IP address for the GNS VIP address.
Note:
The following restrictions apply to vendor configurations on your system:
For Standard Clusters: If you have vendor clusterware installed, then you cannot choose to use GNS, because the vendor clusterware does not support it. Vendor clusterware is not supported with Oracle Flex Cluster configurations.
You cannot use GNS with another multicast DNS. To use GNS, disable any third-party mDNS daemons on your system.
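For example, on Oracle Solaris 11 the bundled mDNS responder is typically delivered as the dns/multicast SMF service; the following is a sketch of checking for and disabling it (the service name may differ on your platform or release):
# svcs -a | grep dns/multicast
online   10:15:23 svc:/network/dns/multicast:default
# svcadm disable svc:/network/dns/multicast:default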
Review the following requirements for using Multi-cluster GNS:
The general requirements for Multi-cluster GNS are similar to those for standard GNS. Multi-cluster GNS differs from standard GNS in that Multi-cluster GNS provides a single networking service across a set of clusters, rather than a networking service for a single cluster.
To provide networking service, Multi-cluster GNS is configured using DHCP addresses, and name advertisement and resolution is carried out with the following components:
The GNS server cluster performs address resolution for GNS client clusters. A GNS server cluster is the cluster where Multi-cluster GNS runs, and where name resolution takes place for the subdomain delegated to the set of clusters.
GNS client clusters receive address resolution from the GNS server cluster. A GNS client cluster is a cluster that advertises its cluster member node names using the GNS server cluster.
To use this option, your network administrators must have delegated a subdomain to GNS for resolution.
Before installation, create a static IP address for the GNS VIP address, and provide a subdomain that your DNS servers delegate to that static GNS IP address for resolution.
To configure a GNS client cluster, check to ensure all of the following requirements are completed:
A GNS server instance must be running on your network, and it must be accessible (for example, not blocked by a firewall).
All of the node names in the GNS domain must be unique; address ranges and cluster names must be unique for both GNS server and GNS client clusters.
You must have a GNS client data file that you generated on the GNS server cluster, so that the GNS client cluster has the information needed to delegate its name resolution to the GNS server cluster, and you must have copied that file to the GNS client cluster member node on which you are running the Oracle Grid Infrastructure installation.
On a GNS server cluster member, run the following command, where path_to_file is the name and path location of the GNS client data file you create:
srvctl export gns -clientdata path_to_file
For example:
$ srvctl export gns -clientdata /home/grid/research1
Copy the GNS client data file to a secure path on the GNS client cluster node where you run the GNS client cluster installation. The Oracle Installation user must have permissions to access that file. Oracle recommends that no other user is granted permissions to access the GNS client data file. During installation, you are prompted to provide a path to that file.
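For example, a minimal sketch of the copy, where client-node1 is a hypothetical GNS client cluster member and /home/grid/research1 is the file created above:
$ scp /home/grid/research1 grid@client-node1:/home/grid/research1
$ ssh grid@client-node1 chmod 600 /home/grid/research1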
After you have completed the GNS client cluster installation, you must run the following command on one of the GNS client cluster members to start GNS service, where path_to_file is the name and path location of the GNS client data file:
srvctl add gns -clientdata path_to_file
For example:
$ srvctl add gns -clientdata /home/grid/research1
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about GNS server and GNS client administration
If you do not enable GNS, then you must configure static cluster node names and addresses before starting installation.
Public and virtual IP names must conform with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"), but does not allow underscores ("_").
Oracle Clusterware manages private IP addresses in the private subnet on interfaces you identify as private during the installation interview.
The cluster must have the following names and addresses:
A public IP address for each node, with the following characteristics:
Static IP address
Configured before installation for each node, and resolvable to that node before installation
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses in the cluster
A virtual IP address for each node, with the following characteristics:
Static IP address
Configured before installation for each node, but not currently in use
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses in the cluster
A Single Client Access Name (SCAN) for the cluster, with the following characteristics:
Three static IP addresses configured on the domain name server (DNS) before installation, so that the three IP addresses are associated with the name provided as the SCAN, and all three addresses are returned in random order by the DNS to the requestor (see the zone file sketch after this list)
Configured before installation in the DNS to resolve to addresses that are not currently in use
Given addresses on the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses in the cluster
Given a name that does not begin with a numeral, and that conforms with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"), but does not allow underscores ("_")
A private IP address for each node, with the following characteristics:
Static IP address
Configured before installation, but on a separate, private network, with its own subnet, that is not resolvable except by other cluster member nodes
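As a sketch of the SCAN entries described in this list, a hypothetical BIND zone file fragment for the example.com domain might look similar to the following, with three A records sharing the SCAN name (all names and addresses are placeholders; the private addresses are deliberately absent, because they must not be resolvable outside the cluster):
node1           IN A 192.0.2.101
node1-vip       IN A 192.0.2.104
node2           IN A 192.0.2.102
node2-vip       IN A 192.0.2.105
mycluster-scan  IN A 192.0.2.201
mycluster-scan  IN A 192.0.2.202
mycluster-scan  IN A 192.0.2.203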
The SCAN is a name used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
Note:
In a Typical installation, the SCAN you provide is also the name of the cluster, so the SCAN name must meet the requirements for a cluster name. In an Advanced installation, the SCAN and cluster name are entered in separate fields during installation, so cluster name requirements do not apply to the SCAN name.
Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file to resolve SCANs, then the SCAN can resolve to one IP address only.
Configuring SCANs in a DNS or a hosts file is the only supported configuration. Configuring SCANs in a Network Information Service (NIS) is not supported.
See Also:
Appendix D, "Understanding Network Addresses" for more information about network addressesYou can use the nslookup
command to confirm that the DNS is correctly associating the SCAN with the addresses. For example:
root@node1]$ nslookup mycluster-scan Server: dns.example.com Address: 192.0.2.001 Name: mycluster-scan.example.com Address: 192.0.2.201 Name: mycluster-scan.example.com Address: 192.0.2.202 Name: mycluster-scan.example.com Address: 192.0.2.203
After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.
Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), as part of an Oracle Flex Cluster installation, Oracle ASM is configured within Oracle Grid Infrastructure to provide storage services. Each Oracle Flex ASM cluster has its own name that is globally unique within the enterprise.
Oracle Flex ASM enables an Oracle ASM instance to run on a separate physical server from the database servers. Many Oracle ASM instances can be clustered to support numerous database clients.
You can consolidate all the storage requirements into a single set of disk groups. All these disk groups are managed by a small set of Oracle ASM instances running in a single Oracle Flex Cluster.
Every Oracle Flex ASM cluster has one or more Hub Nodes on which Oracle ASM instances are running.
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about Oracle Flex Clusters
Oracle Automatic Storage Management Administrator's Guide for more information about Oracle Flex ASM
Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or use its own dedicated private networks. Each network can be classified PUBLIC, ASM & PRIVATE, PRIVATE, or ASM.
The Oracle Flex ASM cluster network has the following requirements and characteristics:
The ASM network can be configured during installation, or configured or modified after installation.
Cluster nodes can be configured as follows:
Oracle Flex ASM cluster Hub Nodes, with the following characteristics:
Are similar to prior release Oracle Grid Infrastructure cluster member nodes, as all servers configured with the Hub Node role are peers.
Have direct connections to the ASM disks.
Run a Direct ASM client process.
Run an ASM Filter Driver, part of whose function is to provide cluster fencing security for the Oracle Flex ASM cluster. Note: This feature is available starting with Oracle Database 12c Release 1 (12.1.0.2).
Access the ASM disks as Hub Nodes only, where they are designated a Hub Node for that storage.
Respond to service requests delegated to them through the global ASM listener configured for the Oracle Flex ASM cluster, which designates three of the Oracle Flex ASM cluster member Hub Node listeners as remote listeners for the Oracle Flex ASM cluster.
Oracle Flex ASM cluster Leaf Nodes, with the following characteristics:
Use Indirect access to the ASM disks, where I/O is handled as a service for the client on a Hub Node.
Submit disk service requests through the ASM network.
Broadcast communications (ARP and UDP) must work properly across all the public and private interfaces configured for use by Oracle Grid Infrastructure.
The broadcast must work across any configured VLANs as used by the public or private interfaces.
When configuring public and private network interfaces for Oracle RAC, you must enable ARP. Highly Available IP (HAIP) addresses do not require ARP on the public network, but for VIP failover, you will need to enable ARP. Do not configure NOARP.
For each cluster member node, the Oracle mDNS daemon uses multicasting on all interfaces to communicate with other nodes in the cluster. Multicasting is required on the private interconnect. For this reason, at a minimum, you must enable multicasting for the cluster:
Across the broadcast domain as defined for the private interconnect
On the IP address subnet ranges 224.0.0.0/24 and optionally 230.0.1.0/24
You do not need to enable multicast communications across routers.
If you are configuring Grid Naming Service (GNS) for a standard cluster, then before installing Oracle Grid Infrastructure you must configure DNS to send to GNS any name resolution requests for the subdomain served by GNS. The subdomain that GNS serves represents the cluster member nodes.
To implement GNS, your network administrator must configure the DNS to set up a domain for the cluster, and delegate resolution of that domain to the GNS VIP. You can use a separate domain, or you can create a subdomain of an existing domain for the cluster. The subdomain name can be any supported DNS name, such as sales-cluster.rac.com.
Oracle recommends that the subdomain name is distinct from your corporate domain. For example, if your corporate domain is mycorp.example.com, then the subdomain for GNS might be rac-gns.mycorp.example.com.
If the subdomain is not distinct, then it should be for the exclusive use of GNS. For example, if you delegate the subdomain mydomain.example.com to GNS, then there should be no other domains that share it, such as lab1.mydomain.example.com.
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about GNS
Section 4.5.2, "Cluster Name and SCAN Requirements" for information about choosing network identification names
If you plan to use Grid Naming Service (GNS) with a delegated domain, then before Oracle Grid Infrastructure installation, configure your domain name server (DNS) to send to GNS name resolution requests for the subdomain GNS serves, which are the cluster member nodes. GNS domain delegation is mandatory with dynamic public networks (DHCP, autoconfiguration). GNS domain delegation is not required with static public networks (static addresses, manual configuration).
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about GNS options, delegation, and public networks
The following is an overview of what needs to be done for domain delegation. Your actual procedure may be different from this example.
Configure the DNS to send GNS name resolution requests using delegation:
In the DNS, create an entry for the GNS virtual IP address, where the address uses the form gns-server.CLUSTERNAME.DOMAINNAME. For example, where the cluster name is mycluster, the domain name is example.com, and the IP address is 192.0.2.1, create an entry similar to the following:
mycluster-gns-vip.example.com A 192.0.2.1
The address you provide must be routable.
Set up forwarding of the GNS subdomain to the GNS virtual IP address, so that GNS resolves addresses in the GNS subdomain. To do this, create a BIND configuration entry similar to the following for the delegated domain, where cluster01.example.com is the subdomain you want to delegate:
cluster01.example.com NS mycluster-gns-vip.example.com
When using GNS, you must configure resolv.conf on the nodes in the cluster (or the file on your system that provides resolution information) to contain name server entries that are resolvable to corporate DNS servers. The total timeout period configured, a combination of options attempts (retries) and options timeout (exponential backoff), should be less than 30 seconds. For example, where xxx.xxx.xxx.42 and xxx.xxx.xxx.15 are valid name server addresses in your network, provide an entry similar to the following in /etc/resolv.conf:
options attempts: 2
options timeout: 1
search cluster01.example.com example.com
nameserver xxx.xxx.xxx.42
nameserver xxx.xxx.xxx.15
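As a rough worked example of the timeout arithmetic: with options attempts: 2, options timeout: 1, and two name servers, a failed lookup costs about 1 second per server on the first attempt and roughly 2 seconds per server on the retry (because of the exponential backoff), for a worst case of about (1 + 2) x 2 = 6 seconds, well under the 30-second limit.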
The /etc/nsswitch.conf file controls name service lookup order. In some system configurations, the Network Information System (NIS) can cause problems with SCAN address resolution. Oracle recommends that you place the nis entry at the end of the search list. For example, in /etc/nsswitch.conf:
hosts: files dns nis
Note:
Be aware that use of NIS is a frequent source of problems when doing cable pull tests, as host name and username resolution can fail.
Review the following information if you intend to configure an Oracle Flex Cluster:
Note the following requirements for Oracle Flex Cluster configuration:
You must use Grid Naming Service (GNS) with an Oracle Flex Cluster deployment.
You must configure the GNS VIP as a static IP address for Hub Nodes.
On Multi-cluster configurations, you must identify the GNS client data file location for Leaf Nodes. The GNS client data file is copied over from the GNS server before you start configuring a Leaf Node.
All public network addresses for both Hub Nodes and Leaf Nodes, whether assigned manually or automatically, must be in the same subnet range.
All Oracle Flex Cluster addresses must be either static IP addresses, or DHCP addresses assigned through DHCP (IPv4) or autoconfiguration addresses assigned through an autoconfiguration service (IPv6), registered in the cluster through GNS.
If you choose to configure DHCP-assigned VIPs, then during installation you must configure cluster node VIP names for both Hub Nodes and Leaf Nodes using one of the following options:
Manual Names: Enter the node name and node VIP name for each cluster member node (for example, linnode1; linnode1-vip; linnode2; linnode2-vip; and so on) to be assigned to the VIP addresses delegated to cluster member nodes through DHCP, and resolved by DNS. Manual names must conform with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"), but does not allow underscores ("_").
If you choose to configure manually-assigned VIPs, then during installation you must configure cluster node VIP names for both Hub Nodes and Leaf Nodes using one of the following options:
Manual Names: Enter the host name and virtual IP name for each node manually, and select whether it is a Hub Node or a Leaf Node. The names you provide must resolve to addresses configured on the DNS. Names must conform with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"), but does not allow underscores ("_").
Automatically Assigned Names: Enter string variables for values corresponding to host names that you have configured on the DNS. String variables allow you to assign a large number of names rapidly during installation. Configure addresses on the DNS with the following characteristics:
Hostname prefix: a prefix string used in each address configured on the DNS for use by cluster member nodes. For example: mycloud.
Range: A range of numbers to be assigned to the cluster member nodes, consisting of a starting node number and an ending node number designating the end of the range. For example: 001 and 999.
Node name suffix: A suffix added after the end of a range number to a public node name. For example: nd.
VIP name suffix: A suffix added after the end of a virtual IP node name. For example: -vip.
You can create manual addresses using alphanumeric strings. For example, the following strings are examples of acceptable names: mycloud001nd; mycloud046nd; mycloud046-vip; mycloud348nd; mycloud784-vip.
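A minimal shell sketch of how these values expand, assuming the example prefix mycloud, the range 001 to 999, node name suffix nd, and VIP name suffix -vip (the pattern is illustrative, not a required procedure):
$ for i in $(seq -w 1 999); do echo "mycloud${i}nd mycloud${i}-vip"; done
mycloud001nd mycloud001-vip
mycloud002nd mycloud002-vip
...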
To use GNS, you must specify a static IP address for the GNS VIP address, and you must have a subdomain configured on your DNS to delegate resolution for that subdomain to the static GNS IP address.
As nodes are added to the cluster, your organization's DHCP server can provide addresses for these nodes dynamically. These addresses are then registered automatically in GNS, and GNS provides resolution within the subdomain to cluster node addresses registered with GNS.
Because allocation and configuration of addresses is performed automatically with GNS, no further configuration is required. Oracle Clusterware provides dynamic network configuration as nodes are added to or removed from the cluster. The following example is provided only for information.
With a two-node cluster where you have defined the GNS VIP, after installation you might have a configuration similar to the following, where the cluster name is mycluster, the GNS parent domain is gns.example.com, the subdomain is cluster01.example.com, the 192.0.2 portion of the IP addresses represents the cluster public IP address subdomain, and 192.168 represents the private IP address subdomain:
Table 4-1 Grid Naming Service Example Network
| Identity | Home Node | Host Node | Given Name | Type | Address | Address Assigned By | Resolved By |
|---|---|---|---|---|---|---|---|
| GNS VIP | None | Selected by Oracle Clusterware | | virtual | 192.0.2.1 | Fixed by net administrator | DNS |
| Node 1 Public | Node 1 | | | public | 192.0.2.101 | Fixed | GNS |
| Node 1 VIP | Node 1 | Selected by Oracle Clusterware | | virtual | 192.0.2.104 | DHCP | GNS |
| Node 1 Private | Node 1 | | | private | 192.168.0.1 | Fixed or DHCP | GNS |
| Node 2 Public | Node 2 | | | public | 192.0.2.102 | Fixed | GNS |
| Node 2 VIP | Node 2 | Selected by Oracle Clusterware | | virtual | 192.0.2.105 | DHCP | GNS |
| Node 2 Private | Node 2 | | | private | 192.168.0.2 | Fixed or DHCP | GNS |
| SCAN VIP 1 | None | Selected by Oracle Clusterware | | virtual | 192.0.2.201 | DHCP | GNS |
| SCAN VIP 2 | None | Selected by Oracle Clusterware | | virtual | 192.0.2.202 | DHCP | GNS |
| SCAN VIP 3 | None | Selected by Oracle Clusterware | | virtual | 192.0.2.203 | DHCP | GNS |
Note: Node host names may resolve to multiple addresses, including VIP addresses currently running on that host.
If you choose not to use GNS, then before installation you must configure public, virtual, and private IP addresses. Also, check that the default gateway can be accessed by a ping command. To find the default gateway, use the route command, as described in your operating system's help utility.
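For example, a sketch of the gateway check (exact commands vary by platform, and 192.0.2.254 is a placeholder gateway address):
$ netstat -rn | grep default     # identify the default gateway
$ ping 192.0.2.254               # confirm that the gateway responds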
For example, with a two-node cluster where each node has one public and one private interface, and you have defined a SCAN domain address to resolve on your DNS to one of three IP addresses, you might have the configuration shown in the following table for your network interfaces:
Table 4-2 Manual Network Configuration Example
| Identity | Home Node | Host Node | Given Name | Type | Address | Address Assigned By | Resolved By |
|---|---|---|---|---|---|---|---|
| Node 1 Public | Node 1 | | | public | 192.0.2.101 | Fixed | DNS |
| Node 1 VIP | Node 1 | Selected by Oracle Clusterware | | virtual | 192.0.2.104 | Fixed | DNS and hosts file |
| Node 1 Private | Node 1 | | | private | 192.168.0.1 | Fixed | DNS and hosts file, or none |
| Node 2 Public | Node 2 | | | public | 192.0.2.102 | Fixed | DNS |
| Node 2 VIP | Node 2 | Selected by Oracle Clusterware | | virtual | 192.0.2.105 | Fixed | DNS and hosts file |
| Node 2 Private | Node 2 | | | private | 192.168.0.2 | Fixed | DNS and hosts file, or none |
| SCAN VIP 1 | None | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.201 | Fixed | DNS |
| SCAN VIP 2 | None | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.202 | Fixed | DNS |
| SCAN VIP 3 | None | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.203 | Fixed | DNS |
Note: Node host names may resolve to multiple addresses.
You do not need to provide a private name for the interconnect. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), and to the subnet used for the private subnet.
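If you do want name resolution for the interconnect, a minimal /etc/hosts sketch, using hypothetical private names that match the addresses in Table 4-2, might look like this:
192.168.0.1   node1-priv
192.168.0.2   node2-priv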
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so they are not fixed to a particular node. To enable VIP failover, the configuration shown in the preceding table defines the SCAN addresses and the public and VIP addresses of both nodes on the same subnet, 192.0.2.
Note:
All host names must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.
During installation, you are asked to identify the planned use for each network adapter (or network interface) that Oracle Universal Installer (OUI) detects on your cluster node. Each NIC can be configured to perform only one of the following roles:
Public
Private
Do Not Use
You must use the same private adapters for both Oracle Clusterware and Oracle RAC. The precise configuration you choose for your network depends on the size and use of the cluster you want to configure, and the level of availability you require. Network interfaces must be at least 1 GbE, with 10 GbE recommended. Alternatively, use InfiniBand for the interconnect.
If certified Network-attached Storage (NAS) is used for Oracle RAC and this storage is connected through Ethernet-based networks, then you must have a third network interface for NAS I/O. Failing to provide three separate interfaces in this case can cause performance and stability problems under load.
Redundant interconnect usage cannot protect network adapters used for public communication. If you require high availability or load balancing for public adapters, then use a third-party solution. Typically, bonding, trunking, or similar technologies can be used for this purpose.
You can enable redundant interconnect usage for the private network by selecting multiple network adapters to use as private adapters. Redundant interconnect usage creates a redundant interconnect when you identify more than one network adapter as private.