This appendix provides an overview of concepts and terms that you may need to understand to carry out installation.
This appendix contains the following sections:
This section reviews concepts that apply to Oracle Grid Infrastructure for a cluster preinstallation tasks. It contains the following sections:
Optimal Flexible Architecture Guidelines for Oracle Grid Infrastructure
Oracle Grid Infrastructure for a Cluster and Oracle Restart Differences
Understanding the Oracle Home for Oracle Grid Infrastructure Software
Location of Oracle Base and Oracle Grid Infrastructure Software Directories
For installations with Oracle Grid Infrastructure only, Oracle recommends that you create an Oracle base and Grid home path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that Oracle Universal Installer (OUI) can select that directory during installation. For OUI to recognize the path as an Oracle software path, it must be in the form u[0-9][1-9]/app.
The OFA path for an Oracle base is u[0-9][1-9]/app/user, where user is the name of the Oracle software installation owner account.
The OFA path for an Oracle Grid Infrastructure Oracle home is u[0-9][1-9]/app/release/grid, where release is the three-digit Oracle Grid Infrastructure release (for example, 12.1.0).
When OUI finds an OFA-compliant software path (u[0-9][1-9]/app), it creates the Oracle Grid Infrastructure Grid home and Oracle Inventory (oraInventory) directories for you. For example, the paths /u01/app and /u89/app are OFA-compliant paths.
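For example, a minimal sketch of staging an OFA-compliant Oracle base on Linux before installation (the grid user, oinstall group, and mount point shown are illustrative values, not requirements, and the user and group are assumed to exist already):
# As root: create an OFA-compliant software path and Oracle base
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01
chmod -R 775 /u01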
The Oracle Grid Infrastructure home (Grid home) must be in a path that is different from the Oracle base for the Oracle Grid Infrastructure installation owner. If you create the Grid home path manually, then ensure that it is in a separate path specific to this release, and not under an existing Oracle base path.
Note:
If you choose to create an Oracle Grid Infrastructure home manually, then do not create the Oracle Grid Infrastructure home for a cluster under either the Oracle Grid Infrastructure installation owner (grid) Oracle base or the Oracle Database installation owner (oracle) Oracle base. Creating an Oracle Clusterware installation in an Oracle base directory causes succeeding Oracle installations to fail.
Oracle Grid Infrastructure homes can be placed in a local home on servers, even if your existing Oracle Clusterware home from a prior release is in a shared location.
Requirements for Oracle Grid Infrastructure for a cluster are different from those for Oracle Grid Infrastructure for a single instance in an Oracle Restart configuration.
See Also:
Oracle Database Installation Guide for information about Oracle Restart requirements
You must have a group whose members are given access to write to the Oracle Inventory (oraInventory) directory, which is the central inventory record of all Oracle software installations on a server. Members of this group have write privileges to the Oracle central inventory (oraInventory) directory, and are also granted permissions for various Oracle Clusterware resources, OCR keys, directories in the Oracle Clusterware home to which DBAs need write access, and other necessary privileges. By default, this group is called oinstall. The Oracle Inventory group must be the primary group for Oracle software installation owners.
The oraInventory directory contains the following:
A registry of the Oracle home directories (Oracle Grid Infrastructure and Oracle Database) on the system.
Installation logs and trace files from installations of Oracle software. These files are also copied to the respective Oracle homes for future reference.
Other metadata inventory information regarding Oracle installations is stored in the individual Oracle home inventory directories, and is separate from the central inventory.
You can configure one group to be the access control group for the Oracle Inventory, for database administrators (OSDBA), and for all other access control groups used by Oracle software for operating system authentication. However, if you use one group to provide operating system authentication for all system privileges, then this group must be the primary group for all users to whom you want to grant administrative system privileges.
Note:
If Oracle software is already installed on the system, then the existing Oracle Inventory group must be the primary group of the operating system user (oracle or grid) that you use to install Oracle Grid Infrastructure. See Section 5.1.1, "Determining If the Oracle Inventory and Oracle Inventory Group Exists" to identify an existing Oracle Inventory group.
The Oracle Inventory directory (oraInventory) is the central inventory location for all Oracle software installed on a server. Each cluster member node has its own central inventory file. You cannot have a shared Oracle Inventory directory, because it is used to point to the installed Oracle homes for all Oracle software installed on a node.
The first time you install Oracle software on a system, you are prompted to provide an oraInventory directory path.
By default, if an oraInventory group does not exist, then the installer lists the primary group of the Oracle Grid Infrastructure for a cluster installation owner as the oraInventory group. Ensure that this group is available as a primary group for all planned Oracle software installation owners.
The primary group of all Oracle installation owners should be the Oracle Inventory group (oinstall), whose members are granted the OINSTALL system privileges to write to the central Oracle Inventory for a server, to write log files, and other privileges.
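For example, a minimal sketch of creating the Oracle Inventory group and a Grid installation owner with oinstall as its primary group on Linux (the group ID and user ID shown are illustrative only; add any OSDBA or OSASM secondary groups that your role separation plan requires):
# As root: create the Oracle Inventory group and a Grid installation owner
groupadd -g 1100 oinstall
useradd -u 1101 -g oinstall grid
# Confirm that oinstall is the primary group of the installation owner
id grid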
Note:
Group and user IDs must be identical on all nodes in the cluster. Check to make sure that the group and user IDs you want to use are available on each cluster member node, and confirm that the primary group for each Oracle Grid Infrastructure for a cluster installation owner has the same name and group ID.
If the Oracle base of an installation owner is the user home directory (for example, /home/oracle), then the Oracle Inventory is placed in the installation owner's home directory. This placement can cause permission errors during subsequent installations with multiple Oracle software owners. For that reason, Oracle recommends that you do not accept this option, and instead use an OFA-compliant path.
If you set an Oracle base variable to a path such as /u01/app/grid or /u01/app/oracle, then the Oracle Inventory defaults to the path /u01/app/oraInventory, with the correct permissions to allow all Oracle installation owners to write to this central inventory directory.
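On Linux, you can confirm the central inventory location and inventory group after an installation by checking the inventory pointer file (the path and values shown are typical for Linux; the file location differs on some other platforms):
# The inventory pointer file records the oraInventory path and group
cat /etc/oraInst.loc
# Typical contents:
# inventory_loc=/u01/app/oraInventory
# inst_group=oinstall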
By default, the Oracle Inventory directory is not installed under the Oracle base directory for the installation owner. This is because all Oracle software installations share a common Oracle Inventory, so there is only one Oracle Inventory for all users, whereas there is a separate Oracle base for each user.
During installation, you are prompted to specify an Oracle base location, which is owned by the user performing the installation. The Oracle base directory is where log files specific to the user are placed. You can choose a location with an existing Oracle home, or choose another directory location that does not have the structure for an Oracle base directory.
Using the Oracle base directory path helps to facilitate the organization of Oracle installations, and helps to ensure that installations of multiple databases maintain an Optimal Flexible Architecture (OFA) configuration.
The Oracle base directory for the Oracle Grid Infrastructure installation is the location where diagnostic and administrative logs, and other logs associated with Oracle ASM and Oracle Clusterware are stored. For Oracle installations other than Oracle Grid Infrastructure for a cluster, it is also the location under which an Oracle home is placed.
However, in the case of an Oracle Grid Infrastructure installation, you must create a different path for the Grid home, so that the path for Oracle bases remains available for other Oracle installations.
For OUI to recognize the Oracle base path as an Oracle software path, it must be in the form u[0-9][1-9]/app, and it must be writable by any member of the oraInventory (oinstall) group. The OFA path for the Oracle base is u[0-9][1-9]/app/user, where user is the name of the software installation owner. For example:
/u01/app/grid
Because you can have only one Oracle Grid Infrastructure installation on a cluster, and all upgrades are out-of-place upgrades, Oracle recommends that you create an Oracle base for the Oracle Grid Infrastructure software owner (grid), and create an Oracle home for the Oracle Grid Infrastructure binaries using the release number of that installation.
The Oracle home for Oracle Grid Infrastructure software (Grid home) should be in a path in the format u[0-9][1-9]/app/release/grid, where release is the release number of the Oracle Grid Infrastructure software. For example:
/u01/app/12.1.0/grid
During installation, ownership of the path to the Grid home is changed to root. If you do not create a unique path to the Grid home, then after the Grid install, you can encounter permission errors for other installations, including any existing installations under the same path.
Ensure that the directory path you provide for the Oracle Grid Infrastructure software location (Grid home) complies with the following requirements:
If you create the path before installation, then it should be owned by the installation owner of Oracle Grid Infrastructure (typically oracle for a single installation owner for all Oracle software, or grid for role-based Oracle installation owners), and set to 775 permissions.
It should be created in a path outside existing Oracle homes, including Oracle Clusterware homes.
It should not be located in a user home directory.
It must not be the same location as the Oracle base for the Oracle Grid Infrastructure installation owner (grid), or the Oracle base of any other Oracle installation owner (for example, /u01/app/oracle).
It should be created either as a subdirectory in a path where all files can be owned by root, or in a unique path.
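For example, a minimal sketch of creating a Grid home path that satisfies the preceding requirements (the release number, owner, and group follow the examples in this appendix and are not mandated values):
# As root: create a release-specific Grid home outside any Oracle base
mkdir -p /u01/app/12.1.0/grid
chown -R grid:oinstall /u01/app/12.1.0/grid
chmod -R 775 /u01/app/12.1.0/grid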
Oracle recommends that you install Oracle Grid Infrastructure binaries on local homes, rather than using a shared home on shared storage.
Even if you do not use the same software owner to install Oracle Grid Infrastructure (Oracle Clusterware and Oracle ASM) and Oracle Database, be aware that running the root.sh script during the Oracle Grid Infrastructure installation changes ownership of the home directory where clusterware binaries are placed to root, and all ancestor directories to the root level (/) are also changed to root. For this reason, the Oracle Grid Infrastructure for a cluster home cannot be in the same location as other Oracle software.
However, Oracle Restart can be in the same location as other Oracle software.
See Also:
Oracle Database Installation Guide for your platform for more information about Oracle Restart
During installation, you are asked to identify the planned use for each network interface that OUI detects on your cluster node. Identify each interface as a public or private interface, or as an interface that you do not want Oracle Grid Infrastructure or Oracle Flex ASM cluster to use. Public and virtual IP addresses are configured on public interfaces. Private addresses are configured on private interfaces.
See the following sections for detailed information about each address type:
The public IP address is assigned dynamically using DHCP, or defined statically in a DNS or in a hosts file. It uses the public interface (the interface with access available to clients). The public IP address is the primary address for a cluster member node, and should be the address that resolves to the name returned when you enter the command hostname.
If you configure IP addresses manually, then avoid changing host names after you complete the Oracle Grid Infrastructure installation, including adding or deleting domain qualifications. A node with a new host name is considered a new host, and must be added to the cluster. A node under the old name will appear to be down until it is removed from the cluster.
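For example, a quick check that the name returned by hostname resolves to the public IP address (the node name shown is hypothetical):
# Display the node name
hostname
# Confirm that the name resolves; on Linux, getent also consults /etc/hosts
getent hosts node1.example.com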
Oracle Clusterware uses interfaces marked as private for internode communication. Each cluster node needs to have an interface that you identify during installation as a private interface. Private interfaces need to have addresses configured for the interface itself, but no additional configuration is required. Oracle Clusterware uses the interfaces you identify as private for the cluster interconnect. If you identify multiple interfaces during installation for the private network, then Oracle Clusterware configures them with Redundant Interconnect Usage. Any interface that you identify as private must be on a subnet that connects to every node of the cluster. Oracle Clusterware uses all the interfaces you identify for use as private interfaces.
For the private interconnects, because of Cache Fusion and other traffic between nodes, Oracle strongly recommends using a physically separate, private network. If you configure addresses using a DNS, then you should ensure that the private IP addresses are reachable only by the cluster nodes.
After installation, if you modify interconnects on Oracle RAC with the CLUSTER_INTERCONNECTS initialization parameter, then you must change it to a private IP address, on a subnet that is not used with a public IP address. Oracle does not support changing the interconnect to an interface using a subnet that you have designated as a public subnet.
You should not use a firewall on the network with the private network IP addresses, as this can block interconnect traffic.
If you are not using Grid Naming Service (GNS), then determine a virtual host name for each node. A virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. Oracle Database uses VIPs for client-to-database connections, so the VIP address must be publicly accessible. Oracle recommends that you provide a name in the format hostname-vip. For example: myclstr2-vip.
The virtual IP (VIP) address is registered in the GNS, or the DNS. Select an address for your VIP that meets the following requirements:
The IP address and host name are currently unused (it can be registered in a DNS, but should not be accessible by a ping command)
The VIP is on the same subnet as your public interface
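For example, before installation you can verify both requirements for a planned VIP, using the hypothetical name from the example above (append your domain if your resolver requires fully qualified names):
# The VIP name may already resolve in DNS...
nslookup myclstr2-vip
# ...but it should not answer a ping before Oracle Grid Infrastructure is installed
ping -c 3 myclstr2-vip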
The GNS virtual IP address is a static IP address configured in the DNS. The DNS delegates queries to the GNS virtual IP address, and the GNS daemon responds to incoming name resolution requests at that address.
Within the subdomain, the GNS uses multicast Domain Name Service (mDNS), included with Oracle Clusterware, to enable the cluster to map host names and IP addresses dynamically as nodes are added and removed from the cluster, without requiring additional host configuration in the DNS.
To enable GNS, you must have your network administrator provide a set of IP addresses for a subdomain assigned to the cluster (for example, grid.example.com
), and delegate DNS requests for that subdomain to the GNS virtual IP address for the cluster, which GNS will serve. The set of IP addresses is provided to the cluster through DHCP, which must be available on the public network for the cluster.
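For example, a minimal check (using the hypothetical subdomain from the text above) that the DNS delegates the cluster subdomain; the name server (NS) record returned should point at the host name associated with the GNS virtual IP address:
# Query the delegation for the cluster subdomain
nslookup -type=NS grid.example.com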
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about Grid Naming Service
Oracle Database clients connect to an Oracle Real Application Clusters database using SCANs. The SCAN and its associated IP addresses provide a stable name for clients to use for connections, independent of the nodes that make up the cluster. SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.
The SCAN is a virtual IP name, similar to the names used for virtual IP addresses, such as node1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and it is associated with multiple IP addresses, not just one address.
The SCAN works by being able to resolve to multiple IP addresses in the cluster that handle public client connections. When a client submits a request, the SCAN listener listening on a SCAN IP address and the SCAN port is made available to the client. Because all services on the cluster are registered with the SCAN listener, the SCAN listener replies with the address of the local listener on the least-loaded node where the service is currently being offered. Finally, the client establishes a connection to the service through the listener on the node where the service is offered. All of these actions take place transparently to the client without any explicit configuration required in the client.
During installation, listeners are created on nodes for the SCAN IP addresses. Oracle Net Services routes application requests to the least-loaded instance providing the service. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses. At a minimum, the SCAN must resolve to at least one address.
If you specify a GNS domain, then the SCAN name defaults to clustername-scan.GNS_domain. Otherwise, it defaults to clustername-scan.current_domain. For example, if you start the Oracle Grid Infrastructure installation from the server node1, the cluster name is mycluster, and the GNS domain is grid.example.com, then the SCAN name is mycluster-scan.grid.example.com.
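For example, a simple check that a SCAN name resolves to the recommended three addresses, using the hypothetical SCAN name from the example above:
# Each lookup should return the SCAN IP addresses (Oracle recommends three),
# typically handed out in round-robin order
nslookup mycluster-scan.grid.example.com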
Clients configured to use IP addresses for Oracle Database releases before Oracle Database 11g Release 2 can continue to use their existing connection addresses; using SCANs is not required. When you upgrade to Oracle Clusterware 12c Release 1 (12.1), the SCAN becomes available, and you should use the SCAN for connections to Oracle Database 11g Release 2 or later databases. When an earlier version of Oracle Database is upgraded, it registers with the SCAN listeners, and clients can start using the SCAN to connect to that database. The database registers with the SCAN listener through the remote listener parameter in the init.ora file. The REMOTE_LISTENER parameter must be set to SCAN:PORT. Do not set it to a TNSNAMES alias with a single address with the SCAN as HOST=SCAN.
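For example, a sketch of setting REMOTE_LISTENER to SCAN:PORT for an upgraded database; the SCAN name and port 1521 are assumptions for illustration, and SCOPE=BOTH assumes the database uses an spfile:
# Run as a user with SYSDBA privileges on the upgraded database
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET REMOTE_LISTENER='mycluster-scan.grid.example.com:1521' SCOPE=BOTH SID='*';
EOF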
The SCAN is optional for most deployments. However, clients using Oracle Database 11g Release 2 and later policy-managed databases using server pools should access the database using the SCAN. This is because policy-managed databases can run on different servers at different times, so connecting to a particular node virtual IP address for a policy-managed database is not possible.
Provide SCAN addresses for client access to the cluster. These addresses should be configured as round robin addresses on the domain name service (DNS). Oracle recommends that you supply three SCAN addresses.
Note:
The following is a list of additional information about node IP addresses:
For the local node only, OUI automatically fills in public and VIP fields. If your system uses vendor clusterware, then OUI may fill additional fields.
Host names and virtual host names are not domain-qualified. If you provide a domain in the address field during installation, then OUI removes the domain from the address.
Interfaces identified as private for private IP addresses should not be accessible as public interfaces. Using public interfaces for Cache Fusion can cause performance problems.
Identify public and private interfaces. OUI configures public interfaces for use by public and virtual IP addresses, and configures private IP addresses on private interfaces.
The private subnet that the private interfaces use must connect all the nodes you intend to have as cluster members.
Oracle Clusterware 12c Release 1 (12.1) is automatically configured with Cluster Time Synchronization Service (CTSS). This service provides automatic synchronization of all cluster nodes using the optimal synchronization strategy for the type of cluster you deploy. If an existing cluster time synchronization service, such as NTP, is found, then CTSS starts in observer mode. Otherwise, CTSS starts in active mode to ensure that time is synchronized between cluster nodes. CTSS does not cause compatibility issues.
The CTSS module is installed as a part of Oracle Grid Infrastructure installation. CTSS daemons are started by the OHAS daemon (ohasd), and do not require a command-line interface.
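After installation, you can confirm which mode CTSS is running in; for example (run crsctl from Grid_home/bin, or with Grid_home/bin in the PATH):
# Reports whether the Cluster Time Synchronization Service is in
# observer mode (an existing time service such as NTP was found) or active mode
crsctl check ctss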
Oracle Grid Infrastructure installed in an Oracle Flex Cluster configuration is a scalable, dynamic, robust network of nodes. Oracle Flex Clusters also provide a platform for other service deployments that require coordination and automation for high availability.
All nodes in an Oracle Flex Cluster belong to a single Oracle Grid Infrastructure cluster. This architecture centralizes policy decisions for deployment of resources based on application needs, to account for various service levels, loads, failure responses, and recovery.
Oracle Flex Clusters contain two types of nodes arranged in a hub and spoke architecture: Hub Nodes and Leaf Nodes. The number of Hub Nodes in an Oracle Flex Cluster can be as many as 64. The number of Leaf Nodes can be many more. Hub Nodes and Leaf Nodes can host different types of applications.
Oracle Flex Cluster Hub Nodes are similar to Oracle Grid Infrastructure nodes in a standard configuration: they are tightly connected, and have direct access to shared storage.
Leaf Nodes are different from standard Oracle Grid Infrastructure nodes, in that they do not require direct access to shared storage. Hub Nodes can run in an Oracle Flex Cluster configuration without having any Leaf Nodes as cluster member nodes, but Leaf Nodes must be members of a cluster with a pool of Hub Nodes.
If you select manual configuration, then you must designate each node in your cluster as a Hub Node or a Leaf Node. Each role requires different access to storage. To be eligible for the Hub Node role, a server must have direct access to storage. To be eligible for the Leaf Node role, a server may have access to direct storage, but it does not require direct access, because Leaf Nodes access storage as clients through Hub Nodes.
If you select automatic configuration of roles, then cluster nodes that have access to storage and join the cluster are configured as Hub Nodes, up to the number that you designate as your target. Nodes that do not have access to storage, or that join the cluster after that target number is reached, join the cluster as Leaf Nodes. Nodes are configured as needed to provide Hub Nodes configured with Local or Near Oracle ASM to provide storage client services, and Leaf Nodes that are configured with direct access to Oracle ASM disks can be reconfigured as needed to become Hub Nodes. Oracle recommends that you select automatic configuration of Hub and Leaf Node roles.
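For example, a sketch of reviewing or changing the configured role of a node with the 12c crsctl utility (command forms are assumptions based on that utility; run them from Grid_home/bin with the appropriate privileges):
# Show the configured role (hub or leaf) of the local node
crsctl get node role config
# Change the local node role to leaf; the new role typically takes effect
# only after Oracle Clusterware is restarted on that node
crsctl set node role leaf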
See Also:
Oracle Clusterware Administration and Deployment Guide for information about Oracle Flex Cluster deployments
Oracle Automatic Storage Management Administrator's Guide for information about Oracle Flex ASM
Understanding Oracle Automatic Storage Management Cluster File System
About Migrating Existing Oracle ASM Instances
Standalone Oracle ASM Installations to Clustered Installation Conversions
Oracle Automatic Storage Management has been extended to include a general purpose file system, called Oracle Automatic Storage Management Cluster File System (Oracle ACFS). Oracle ACFS is a new multi-platform, scalable file system, and storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support customer files maintained outside of the Oracle Database. Files supported by Oracle ACFS include application binaries and application reports. Other supported files are video, audio, text, images, engineering drawings, and other general-purpose application file data.
Automatic Storage Management Cluster File System (ACFS) can provide optimized storage for all Oracle files, including Oracle Database binaries. It can also store other application files. However, it cannot be used for Oracle Clusterware binaries.
See Also:
Oracle Automatic Storage Management Administrator's Guide for more information about ACFS
If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home/bin) to upgrade the existing Oracle ASM instance to Oracle ASM 12c Release 1 (12.1), and subsequently configure failure groups, Oracle ASM volumes, and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Note:
You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.
During installation, if you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM version installed in another home, then after installing the Oracle ASM 12c Release 1 (12.1) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.
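For example, after the Oracle Grid Infrastructure 12c Release 1 (12.1) binaries are installed, you can start ASMCA from the new Grid home; the path shown follows the examples in this appendix and is not a required location:
# Start Oracle ASM Configuration Assistant from the new Grid home to upgrade
# the existing Oracle ASM instance and, optionally, create Oracle ASM volumes
# and an Oracle ACFS file system
/u01/app/12.1.0/grid/bin/asmca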
On an existing Oracle Clusterware or Oracle RAC installation, if the earlier version of Oracle ASM instances on all nodes is Oracle ASM 11g Release 1 (11.1), then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the earlier version of Oracle ASM instances on an Oracle RAC installation are from an Oracle ASM release before Oracle ASM 11g Release 1 (11.1), then rolling upgrades cannot be performed. Oracle ASM is then upgraded on all nodes to 12c Release 1 (12.1).
If you have existing standalone Oracle ASM installations on one or more nodes that are member nodes of the cluster, then OUI proceeds to install Oracle Grid Infrastructure for a cluster.
If you place Oracle Clusterware files (OCR and voting disks) on Oracle ASM, then ASMCA is started at the end of the clusterware installation, and provides prompts for you to migrate and upgrade the Oracle ASM instance on the local node, so that you have an Oracle ASM 12c Release 1 (12.1) installation.
On remote nodes, ASMCA identifies any standalone Oracle ASM instances that are running, and prompts you to shut down those Oracle ASM instances, and any database instances that use them. ASMCA then extends clustered Oracle ASM instances to all nodes in the cluster. However, disk group names on the cluster-enabled Oracle ASM instances must be different from existing standalone disk group names.
With an out-of-place upgrade, the installer installs the newer version in a separate Oracle Clusterware home. Both versions of Oracle Clusterware are on each cluster member node, but only one version is active.
Rolling upgrades avoid downtime and ensure continuous availability while the software is upgraded to a new version.
If you have separate Oracle Clusterware homes on each node, then you can perform an out-of-place upgrade on all nodes, or perform an out-of-place rolling upgrade, so that some nodes are running Oracle Clusterware from the earlier version Oracle Clusterware home, and other nodes are running Oracle Clusterware from the new Oracle Clusterware home.
An in-place upgrade of Oracle Grid Infrastructure is not supported.
See Also:
Appendix B, "How to Upgrade to Oracle Grid Infrastructure 12c Release 1" for instructions on completing rolling upgrades