8 Oracle Grid Infrastructure Postinstallation Procedures

This chapter describes how to complete the postinstallation tasks after you have installed the Oracle Grid Infrastructure software.


8.1 Required Postinstallation Tasks

Download and install patch updates. Refer to the My Oracle Support web site for required patch updates for your installation.

Note:

Browsers require an Adobe Flash plug-in, version 9.0.115 or higher, to use My Oracle Support. To check whether your browser has the correct version of the Flash plug-in, go to the Adobe Flash checker page, and install the latest version of Adobe Flash if necessary.

If you do not have Flash installed, then download the latest version of the Flash Player from the Adobe web site:

http://www.adobe.com/go/getflashplayer

To download required patch updates:

  1. Use a Web browser to view the My Oracle Support website:

    https://support.oracle.com

  2. Log in to the My Oracle Support website.

    Note:

    If you are not a My Oracle Support registered user, then click Register for My Oracle Support and register.
  3. On the main My Oracle Support page, click Patches & Updates.

  4. On the Patches & Updates page, click Advanced Search.

  5. On the Advanced Search page, click the search icon next to the Product or Product Family field.

  6. In the Search and Select: Product Family field, select Database and Tools in the Search list field, enter RDBMS Server in the text field, and click Go.

    RDBMS Server appears in the Product or Product Family field. The current release appears in the Release field.

  7. Select your platform from the list in the Platform field, and at the bottom of the selection list, click Go.

  8. Any available patch updates appear under the Results heading.

  9. Click the patch number to download the patch.

  10. On the Patch Set page, click View README and read the page that appears. The README page contains information about the patch set and how to apply the patches to your installation.

  11. Return to the Patch Set page, click Download, and save the file on your system.

  12. Use the unzip utility provided with Oracle Database 12c Release 1 (12.1) to uncompress the Oracle patch updates that you downloaded from My Oracle Support, as shown in the example after these steps. The unzip utility is located in the $ORACLE_HOME/bin directory.

  13. See Appendix B, "How to Upgrade to Oracle Grid Infrastructure 12c Release 1" for information about how to stop database processes in preparation for installing patches.
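
For example, to uncompress a downloaded patch file from step 12 into a staging directory (the patch file name and staging path shown here are placeholders only; substitute the name of the file you downloaded and a location of your choice):

$ mkdir -p /u01/stage/patches
$ cd /u01/stage/patches
$ $ORACLE_HOME/bin/unzip /tmp/p12345678_121010_LINUX.zip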

8.2 Recommended Postinstallation Tasks

Oracle recommends that you complete the following tasks as needed after installing Oracle Grid Infrastructure:

8.2.1 Configuring IPMI-based Failure Isolation Using Crsctl

On HP-UX platforms, where Oracle does not currently support the native IPMI driver, DHCP addressing is not supported and IPMI support must be configured manually. Because OUI does not collect the administrator credentials, the BMC must be configured with a static IP address, and the address must be stored manually in the OLR.

To configure Failure Isolation using IPMI, complete the following steps on each cluster member node:

  1. If necessary, start Oracle Clusterware using the following command:

    $ crsctl start crs
    
  2. Use the BMC management utility to obtain the BMC's IP address and then use the cluster control utility crsctl to store the BMC's IP address in the Oracle Local Registry (OLR) by issuing the crsctl set css ipmiaddr address command. For example:

    $ crsctl set css ipmiaddr 192.168.10.45
    
  3. Enter the following crsctl command to store the user ID and password for the resident BMC in the OLR, and provide the password when prompted. In this example, the empty string ("") specifies the anonymous (noname) IPMI administrator account:

    $ crsctl set css ipmiadmin ""
    IPMI BMC Password: 
    

    This command attempts to validate the credentials you enter by sending them to another cluster node. The command fails if that cluster node is unable to access the local BMC using the credentials.

    When you store the IPMI credentials in the OLR, you must specify the anonymous user explicitly (as an empty string, as shown in the preceding example); otherwise, a parsing error is reported.
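
After you store the address and credentials, you can confirm the IPMI address recorded in the OLR by querying it with crsctl. For example (the address shown is the one stored in the preceding example):

$ crsctl get css ipmiaddr
192.168.10.45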

8.2.2 Tuning Semaphore Parameters

Refer to the following guidelines only if the default semaphore parameter values are too low to accommodate all Oracle processes:

Note:

Oracle recommends that you refer to the operating system documentation for more information about setting semaphore parameters.
  1. Calculate the minimum total semaphore requirements using the following formula:

    2 * sum (process parameters of all database instances on the system) + overhead for background processes + system and other application requirements

  2. Set semmns (total semaphores systemwide) to this total.

  3. Set semmsl (semaphores for each set) to 250.

  4. Set semmni (total semaphore sets) to semmns divided by semmsl, rounded up to the nearest multiple of 1024. A Linux example follows this list.
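
The mechanism for setting these parameters is operating system-specific; refer to your operating system documentation. As an illustration only, on Linux the four values are set together through the kernel.sem parameter, in the order semmsl, semmns, semopm, semmni. The following sketch assumes example values in which semmns is raised to 60000, so semmni becomes 60000 divided by 250, rounded up to the nearest multiple of 1024:

# /sbin/sysctl kernel.sem
kernel.sem = 250 32000 100 128
# /sbin/sysctl -w kernel.sem="250 60000 100 1024"

To make such a change persist across system restarts, also record the value in /etc/sysctl.conf.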

8.2.3 Create a Fast Recovery Area Disk Group

During installation, by default you can create one disk group. If you plan to add an Oracle Database for a standalone server or an Oracle RAC database, then you should create a Fast Recovery Area disk group for database files.

8.2.3.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group

The Fast Recovery Area is a unified storage location for all Oracle Database files related to recovery. Database administrators can set the DB_RECOVERY_FILE_DEST parameter to the path for the Fast Recovery Area to enable on-disk backups and rapid recovery of data. Enabling rapid backups for recent data can reduce requests to system administrators to retrieve backup tapes for recovery operations.

When you enable Fast Recovery in the init.ora file, all RMAN backups, archive logs, control file automatic backups, and database copies are written to the Fast Recovery Area. RMAN automatically manages files in the Fast Recovery Area by deleting obsolete backups and archive files no longer required for recovery.

Oracle recommends that you create a Fast Recovery Area disk group. Oracle Clusterware files and Oracle Database files can be placed on the same disk group, and you can also place Fast Recovery files in the same disk group. However, Oracle recommends that you create a separate Fast Recovery disk group to reduce storage device contention.

The Fast Recovery Area is enabled by setting DB_RECOVERY_FILE_DEST. The size of the Fast Recovery Area is set with DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the Fast Recovery Area, the more useful it becomes. For ease of use, Oracle recommends that you create a Fast Recovery Area disk group on storage devices that can contain at least three days of recovery information. Ideally, the Fast Recovery Area should be large enough to hold a copy of all of your data files and control files, the online redo logs, and the archived redo log files needed to recover your database using the data file backups kept under your retention policy.

Multiple databases can use the same Fast Recovery Area. For example, assume you have created one Fast Recovery Area disk group on disks with 150 gigabytes (GB) of storage, shared by three different databases. You can set the size of the Fast Recovery Area for each database depending on the importance of each database. For example, if database 1 is your least important database, database 2 is of greater importance, and database 3 is of greatest importance, then you can set different DB_RECOVERY_FILE_DEST_SIZE settings for each database to meet your retention target for each database: 30 GB for database 1, 50 GB for database 2, and 70 GB for database 3.
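
As a minimal sketch of how these parameters might be set for one of these databases (the disk group name +FRA and the 70 GB size are assumptions for illustration; actual values depend on your environment), a database administrator could connect to the database with SQL*Plus and run:

$ sqlplus / as sysdba

SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 70G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';

Note that DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST can be set.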

8.2.3.2 Creating the Fast Recovery Area Disk Group

To create a Fast Recovery file disk group:

  1. Navigate to the Grid home bin directory, and start Oracle ASM Configuration Assistant (asmca). For example:

    $ cd /u01/app/12.1.0/grid/bin
    $ ./asmca
    
  2. ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.

  3. The Create Disk Groups window opens.

    In the Disk Group Name field, enter a descriptive name for the Fast Recovery Area group. For example: FRA.

    In the Redundancy section, select the level of redundancy you want to use.

    In the Select Member Disks field, select eligible disks to be added to the Fast Recovery Area, and click OK.

  4. The Diskgroup Creation window opens to inform you when disk group creation is complete. Click OK.

  5. Click Exit.
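
As an alternative to ASMCA, you can also create a disk group from SQL*Plus while connected to the Oracle ASM instance with the SYSASM privilege (with your environment set to the Grid home and the Oracle ASM instance, for example +ASM1). The following is a sketch only; the disk path and the choice of external redundancy are assumptions, and the eligible disks depend on your ASM disk discovery string:

$ sqlplus / as sysasm

SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/FRA1';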

8.2.4 Checking the SCAN Configuration

The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.

You can use the command cluvfy comp scan (located in Grid_home/bin) to confirm that the DNS is correctly associating the SCAN with the addresses. For example:

$ cluvfy comp scan 


Verifying scan

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "node1.example.com"...

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.
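
You can also review the SCAN and SCAN listener configuration registered with Oracle Clusterware by using srvctl from the Grid home, and check name resolution directly with a DNS lookup. The SCAN name shown here is illustrative:

$ srvctl config scan
$ srvctl config scan_listener
$ nslookup mycluster-scan.example.com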

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about system checks and configurations

8.2.5 Running Oracle RAC Configuration Audit Tool

Oracle recommends that you run the Oracle RAC configuration audit tool (ORAchk) to check your Oracle RAC installation. ORAchk is an Oracle RAC auditing tool that checks various important configuration settings within Oracle Real Application Clusters, Oracle Clusterware, Oracle Automatic Storage Management and the Oracle Grid Infrastructure environment.

For information about configuring and running the ORAchk utility, refer to My Oracle Support note 1268927.1, which is available at the following URL:

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1268927.1
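
As a general sketch (the download location, file name, and staging path below are placeholders; the My Oracle Support note above describes the supported procedure and options), you download the ORAchk kit, uncompress it into a staging directory, and run it as the Oracle software owner:

$ mkdir -p /u01/stage/orachk
$ cd /u01/stage/orachk
$ unzip /tmp/orachk.zip
$ ./orachk

The tool prompts for environment details and produces an HTML report of its findings.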

8.2.6 Setting Resource Limits for Oracle Clusterware and Associated Databases and Applications

After you have completed Oracle Grid Infrastructure installation, you can set resource limits in the Grid_home/crs/install/s_crsconfig_nodename_env.txt file. These resource limits apply to all Oracle Clusterware processes and to Oracle databases managed by Oracle Clusterware. For example, to allow a higher number of processes, edit the file and set the CRS_LIMIT_NPROC parameter to a higher value, as shown in the sketch that follows.
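
The following sketch raises the processes limit on a node named node1 (the node name, Grid home path, and value are illustrative); the new limit takes effect the next time Oracle Clusterware is started on that node:

# vi /u01/app/12.1.0/grid/crs/install/s_crsconfig_node1_env.txt

Add or edit the following line in the file:

CRS_LIMIT_NPROC=65536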

8.3 Using Earlier Oracle Database Releases with Oracle Grid Infrastructure

Review the following sections for information about using earlier Oracle Database releases with Oracle Grid Infrastructure 12c Release 1 (12.1) installations:

8.3.1 General Restrictions for Using Earlier Oracle Database Versions

You can use Oracle Database 10g Release 2 and Oracle Database 11g Release 1 and 2 with Oracle Clusterware 12c Release 1 (12.1).

Do not use the versions of srvctl, lsnrctl, or other tools in the Oracle Grid Infrastructure home to administer earlier release databases. Administer earlier Oracle Database releases only with the tools in the Oracle homes of those earlier releases. To ensure that you are using the correct tools for an earlier release database, run the tools from the Oracle home of the database or object you are managing.

If you upgrade an existing version of Oracle Clusterware and Oracle ASM to Oracle Grid Infrastructure 11g or later (which includes Oracle Clusterware and Oracle ASM), and you also plan to upgrade your Oracle RAC database to 12c Release 1 (12.1), then the required configuration of existing databases is completed automatically when you complete the Oracle RAC upgrade, and this section does not concern you.

Note:

Before you start an Oracle RAC or Oracle Database installation on an Oracle Clusterware 12c Release 1 (12.1) installation, if you are upgrading from Oracle Database 11g Release 1 (11.1.0.7 or 11.1.0.6), or Oracle Database 10g Release 2 (10.2.0.4), then Oracle recommends that you check for the latest recommended patches for the release you are upgrading from, and install those patches as needed on your existing database installations before upgrading.

For more information on recommended patches, see Oracle 12c Upgrade Companion (My Oracle Support Note 1462240.1):

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1462240.1

8.3.2 Managing Server Pools with Earlier Database Versions

Starting with Oracle Grid Infrastructure 12c, Oracle Database server categories include roles such as Hub and Leaf that were not present in earlier releases. For this reason, you cannot create server pools using the Oracle RAC 11g version of Database Configuration Assistant (DBCA). To create server pools for earlier release Oracle RAC installations, use the following procedure:

  1. Log in as the Oracle Grid Infrastructure installation owner (Grid user).

  2. Change directory to the 12.1 Oracle Grid Infrastructure binaries directory in the Grid home. For example:

    $ cd /u01/app/12.1.0/grid/bin
    
  3. Use the Oracle Grid Infrastructure 12c version of srvctl to create a server pool consisting of Hub Node roles. For example, to create a server pool called p_hub with a maximum size of one cluster node, enter the following command:

    $ srvctl add serverpool -serverpool p_hub -min 0 -max 1 -category hub
    
  4. Log in as the Oracle RAC installation owner, and start DBCA from the Oracle RAC Oracle home. For example:

    $ cd /u01/app/oracle/product/11.2.0/dbhome_1/bin
    $ dbca
    

    DBCA discovers the server pool that you created with the Oracle Grid Infrastructure 12c srvctl command. Configure the server pool as required for your services.

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about managing resources using policies

8.3.3 Using ASMCA to Administer Disk Groups for Earlier Database Versions

Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups when you install earlier Oracle databases and Oracle RAC databases on Oracle Grid Infrastructure installations. Starting with Oracle Database 11g Release 2 (11.2), Oracle ASM is installed as part of an Oracle Grid Infrastructure installation, with Oracle Clusterware. You can no longer use Database Configuration Assistant (DBCA) to perform administrative tasks on Oracle ASM.

See Also:

Oracle Database Storage Administrator's Guide for details about configuring disk group compatibility for databases using Oracle Database 11g or earlier software with Oracle Grid Infrastructure 12c (12.1)

8.3.4 Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x

When Oracle Clusterware 12c Release 1 (12.1) is installed on a cluster with no previous Oracle software version, it configures the cluster nodes dynamically. Dynamic configuration is compatible with Oracle Database Release 11.2 and later, but Oracle Database 10g and 11.1 require a persistent configuration. This association of a node name with a node number is called pinning.

Note:

During an upgrade, all cluster member nodes are pinned automatically, and no manual pinning is required for existing databases. This procedure is required only if you install earlier database versions after installing Oracle Grid Infrastructure 12c Release 1 (12.1) software.

To pin a node in preparation for installing or using an earlier Oracle Database version, use Grid_home/bin/crsctl with the following command syntax, where nodes is a space-delimited list of one or more nodes in the cluster whose configuration you want to pin:

crsctl pin css -n nodes

For example, to pin nodes node3 and node4, log in as root and enter the following command:

# crsctl pin css -n node3 node4

To determine if a node is in a pinned or unpinned state, use Grid_home/bin/olsnodes with the following command syntax:

To list all pinned nodes:

olsnodes -t -n 

For example:

# /u01/app/12.1.0/grid/bin/olsnodes -t -n
node1 1       Pinned
node2 2       Pinned
node3 3       Pinned
node4 4       Pinned

To list the state of a particular node:

olsnodes -t -n node3

For example:

# /u01/app/12.1.0/grid/bin/olsnodes -t -n node3
node3 3       Pinned

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about pinning and unpinning nodes

8.3.5 Using the Correct LSNRCTL Commands

To administer local and SCAN listeners using the lsnrctl command, set your $ORACLE_HOME environment variable to the path for the Oracle Grid Infrastructure home (Grid home). Do not attempt to use the lsnrctl commands from Oracle home locations for previous releases, as they cannot be used with the new release.
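
For example, to check the status of a SCAN listener from the Grid home (the Grid home path is illustrative; LISTENER_SCAN1 is the default name of the first SCAN listener):

$ export ORACLE_HOME=/u01/app/12.1.0/grid
$ $ORACLE_HOME/bin/lsnrctl status LISTENER_SCAN1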

8.4 Modifying Oracle Clusterware Binaries After Installation

After installation, if you need to modify the Oracle Clusterware configuration, then you must unlock the Grid home.

For example, if you want to apply a one-off patch, or if you want to modify an Oracle Exadata configuration to run IPC traffic over RDS on the interconnect instead of using the default UDP, then you must unlock the Grid home.

Caution:

Before relinking executables, you must shut down all executables that run in the Oracle home directory that you are unlocking and relinking. In addition, shut down applications linked with Oracle shared libraries.

Unlock the home using the following procedure:

  1. Log in as root, change directory to the path Grid_home/crs/install, where Grid_home is the path to your Oracle Grid Infrastructure home, and unlock the Grid home using the command rootcrs.sh -unlock -crshome Grid_home. For example, with the Grid home /u01/app/12.1.0/grid, enter the following commands:

    # cd /u01/app/12.1.0/grid/crs/install
    # ./rootcrs.sh -unlock -crshome /u01/app/12.1.0/grid
    
  2. Change user to the Oracle Grid Infrastructure software owner, and relink binaries using the command syntax make -f Grid_home/rdbms/lib/ins_rdbms.mk target, where Grid_home is the Grid home and target specifies the binaries that you want to relink. For example, where the Grid user is grid, $ORACLE_HOME is set to the Grid home, and you are updating the interconnect protocol from UDP to RDS, enter the following commands:

    # su grid
    $ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
    

    Note:

    To relink binaries, you can also change to the Oracle Grid Infrastructure installation owner and run the command Grid_home/bin/relink.
  3. Relock the Grid home and restart the cluster using the following command:

    # ./rootcrs.sh -patch
    
  4. Repeat steps 1 through 3 on each cluster member node.
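
After the Grid home is relocked and the cluster restarts on each node, you can verify that Oracle Clusterware is running cluster-wide. For example, output similar to the following indicates a healthy cluster:

# crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************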

Note:

Do not delete directories in the Grid home. For example, do not delete the directory Grid_home/OPatch. If you delete the directory, then the Oracle Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error message "'checkdir' error: cannot create Grid_home/OPatch".