9 Oracle Grid Infrastructure Postinstallation Procedures

This chapter describes how to complete the postinstallation tasks after you have installed the Oracle Grid Infrastructure software.

This chapter contains the following topics:

9.1 Required Postinstallation Tasks

Download and install patch updates. See the My Oracle Support website for required patch updates for your installation.

To download required patch updates:

  1. Use a Web browser to view the My Oracle Support website:

    https://support.oracle.com

  2. Log in to My Oracle Support.

    Note:

    If you are not a My Oracle Support registered user, then click Register for My Oracle Support and register.
  3. On the main My Oracle Support page, click Patches & Updates.

  4. On the Patches and Updates page, click Product or Family (Advanced).

  5. In the Product field, select Oracle Database.

  6. In the Release field, select one or more release numbers. For example, Oracle 12.1.0.1.0.

  7. Click Search.

  8. Any available patch updates are displayed on the Patch Search page.

  9. Click the number of the patch that you want to download.

  10. Select the patch number and click Read Me. The README page contains information about the patch set and how to apply the patches to your installation.

  11. Return to the Patch Search page, click Download, and save the file on your system.

  12. Use the unzip utility provided with Oracle Database 12c Release 1 (12.1) to uncompress the Oracle patch updates that you downloaded from My Oracle Support, as shown in the example following these steps. The unzip utility is located in the $ORACLE_HOME/bin directory.

  13. See Appendix B, "How to Upgrade to Oracle Grid Infrastructure 12c Release 1" for information about how to stop database processes in preparation for installing patches.
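
For example, the following commands create a staging directory and uncompress a downloaded patch archive. The directory path and patch file name are placeholders for illustration only; substitute the actual name of the file you downloaded:

$ mkdir -p /u01/app/oracle/patches
$ cd /u01/app/oracle/patches
$ $ORACLE_HOME/bin/unzip p12345678_121010_Linux-x86-64.zip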

9.2 Recommended Postinstallation Tasks

Oracle recommends that you complete the following tasks as needed after installing Oracle Grid Infrastructure:

9.2.1 Tuning Semaphore Parameters

Use the following guidelines only if the default semaphore parameter values are too low to accommodate all Oracle processes; an example of applying the resulting values follows these steps:

Note:

Oracle recommends that you refer to the operating system documentation for more information about setting semaphore parameters.
  1. Calculate the minimum total semaphore requirements using the following formula:

    2 * sum (process parameters of all database instances on the system) + overhead for background processes + system and other application requirements

  2. Set semmns (total semaphores systemwide) to this total.

  3. Set semmsl (semaphores for each set) to 250.

  4. Set semmni (total semaphore sets) to semmns divided by semmsl, rounded up to the nearest multiple of 1024.

    See Also:

    My Oracle Support note 226209.1, "Linux: How to Check Current Shared Memory, Semaphore Values," which is available from the following URL:

    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=226209.1
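
For example, on Linux all four semaphore parameters are set together through a single kernel.sem entry in /etc/sysctl.conf, in the order semmsl, semmns, semopm, semmni. The following sketch uses illustrative values only: a computed semmns of 32000, semmsl of 250, semmni rounded up to 1024 as described in step 4, and a commonly used semopm value of 100, which is not derived from the steps above. As root, append the entry, reload the kernel parameters with sysctl -p, and verify the limits with ipcs -ls:

# echo "kernel.sem = 250 32000 100 1024" >> /etc/sysctl.conf
# sysctl -p
# ipcs -ls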

9.2.2 Create a Fast Recovery Area Disk Group

During installation, by default you can create one disk group. If you plan to add an Oracle Database for a standalone server or an Oracle RAC database, then you should create a Fast Recovery Area disk group for database recovery files.

9.2.2.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group

The Fast Recovery Area is a unified storage location for all Oracle Database files related to recovery. Database administrators can set the DB_RECOVERY_FILE_DEST parameter to the path of the Fast Recovery Area to enable on-disk backups and rapid recovery of data. Enabling rapid backups for recent data can reduce requests to system administrators to retrieve backup tapes for recovery operations.

When you enable the Fast Recovery Area in the init.ora file, all RMAN backups, archive logs, control file automatic backups, and database copies are written to the Fast Recovery Area. RMAN automatically manages files in the Fast Recovery Area by deleting obsolete backups and archived logs that are no longer required for recovery.

Oracle recommends that you create a Fast Recovery Area disk group. Oracle Clusterware files and Oracle Database files can be placed on the same disk group, and you can also place Fast Recovery Area files in the same disk group. However, Oracle recommends that you create a separate Fast Recovery Area disk group to reduce storage device contention.

The Fast Recovery Area is enabled by setting DB_RECOVERY_FILE_DEST. The size of the Fast Recovery Area is set with DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the Fast Recovery Area, the more useful it becomes. For ease of use, Oracle recommends that you create a Fast Recovery Area disk group on storage devices that can contain at least three days of recovery information. Ideally, the Fast Recovery Area should be large enough to hold a copy of all of your data files and control files, the online redo logs, and the archived redo log files needed to recover your database using the data file backups kept under your retention policy.

Multiple databases can use the same Fast Recovery Area. For example, assume you have created a Fast Recovery Area disk group on disks with 150 gigabytes (GB) of storage that is shared by three different databases. You can set the size of the Fast Recovery Area for each database depending on the importance of each database. For example, if test1 is your least important database, products is of greater importance, and orders is of greatest importance, then you can set different DB_RECOVERY_FILE_DEST_SIZE settings for each database to meet your retention target for each database: 30 GB for test1, 50 GB for products, and 70 GB for orders.
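
For example, the following sketch enables a 70 GB Fast Recovery Area for one of these databases, assuming a disk group named +FRA has already been created; the disk group name and size are illustrative only. Run it with SYSDBA privileges from the database home, not the Grid home, and set DB_RECOVERY_FILE_DEST_SIZE before DB_RECOVERY_FILE_DEST:

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 70G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';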

9.2.2.2 Creating the Fast Recovery Area Disk Group

To create a Fast Recovery Area disk group:

  1. Navigate to the Grid home bin directory, and start Oracle ASM Configuration Assistant (ASMCA). For example:

    $ cd /u01/app/12.1.0/grid/bin
    $ ./asmca
    
  2. ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.

  3. The Create Disk Groups window opens.

    In the Disk Group Name field, enter a descriptive name for the Fast Recovery Area group. For example: FRA.

    In the Redundancy section, select the level of redundancy you want to use.

    In the Select Member Disks field, select eligible disks to be added to the Fast Recovery Area, and click OK.

  4. The Diskgroup Creation window opens to inform you when disk group creation is complete. Click OK.

  5. Click Exit.
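
Alternatively, you can create the disk group from the command line by connecting to the local Oracle ASM instance with SYSASM privileges. The following is a sketch only: the disk group name, redundancy level, device path, and compatibility attribute are assumptions and must match your own storage configuration. Run it as the Grid user with the environment set for the local Oracle ASM instance:

$ sqlplus / as sysasm
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/FRA1' ATTRIBUTE 'compatible.asm' = '12.1';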

9.2.3 Checking the SCAN Configuration

The Single Client Access Name (SCAN) is a name that is used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.

You can use the command cluvfy comp scan (located in Grid_home/bin) to confirm that the DNS is correctly associating the SCAN with the addresses. For example:

$ cluvfy comp scan 


Verifying scan

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "node1.example.com"...

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.
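
You can also confirm the SCAN configuration directly. The SCAN name shown below is an example only; substitute the SCAN defined for your cluster:

$ srvctl config scan
$ nslookup mycluster-scan.example.com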

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about system checks and configurations

9.2.4 Downloading and Installing the ORAchk Health Check Tool

Download and install the ORAchk utility to perform proactive health checks for the Oracle software stack.

ORAchk replaces the RACCheck utility, extends health check coverage to the entire Oracle software stack, and identifies and addresses top issues reported by Oracle users. ORAchk proactively scans for known problems with Oracle products and deployments, including the following:

  • Standalone Oracle Database

  • Oracle Grid Infrastructure

  • Oracle Real Application Clusters

  • Maximum Availability Architecture (MAA) Validation

  • Upgrade Readiness Validations

  • Oracle GoldenGate

  • Oracle E-Business Suite

For information about configuring and running the ORAchk utility, refer to My Oracle Support note 1268927.1:

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1268927.1
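
For example, after downloading the ORAchk kit from the note above, a typical run looks like the following; the archive name and staging directory are assumptions, and the note describes the full set of options for your platform:

$ unzip orachk.zip -d /opt/orachk
$ cd /opt/orachk
$ ./orachk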

Note:

The ORAchk utility is not supported on IBM: Linux on System z.

9.2.5 Setting Resource Limits for Oracle Clusterware and Associated Databases and Applications

After you have completed Oracle Grid Infrastructure installation, you can set resource limits in the Grid_home/crs/install/s_crsconfig_nodename_env.txt file. These resource limits apply to all Oracle Clusterware processes and Oracle databases managed by Oracle Clusterware. For example, to raise the limit on the number of processes, edit the file and set the CRS_LIMIT_NPROC parameter to a higher value.
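
For example, on a node named node1 with the Grid home /u01/app/12.1.0/grid, you might edit the following file as root; the node name and the limit value shown are illustrative only:

# vi /u01/app/12.1.0/grid/crs/install/s_crsconfig_node1_env.txt

and set the parameter to a value suitable for your workload:

CRS_LIMIT_NPROC=65536

The new limit applies the next time Oracle Clusterware is started on that node.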

9.3 Using Earlier Oracle Database Releases with Oracle Grid Infrastructure

Review the following sections for information about using earlier Oracle Database releases with Oracle Grid Infrastructure 12c Release 1 (12.1) installations:

9.3.1 General Restrictions for Using Earlier Oracle Database Versions

You can use Oracle Database 10g Release 2 and Oracle Database 11g Release 1 and 2 with Oracle Clusterware 12c Release 1 (12.1).

Do not use the srvctl, lsnrctl, or other tools from the Oracle Grid Infrastructure home to administer earlier release databases. Administer earlier Oracle Database releases only with the tools in those earlier Oracle Database homes. To ensure that you are using the correct versions of the tools for earlier release databases, run the tools from the Oracle home of the database or object you are managing.

Oracle Database homes can only be stored on Oracle ASM Cluster File System (Oracle ACFS) if the database version is Oracle Database 11g Release 2 or higher. Earlier releases of Oracle Database cannot be installed on Oracle ACFS because these releases were not designed to use Oracle ACFS.

If you upgrade an existing version of Oracle Clusterware and Oracle ASM to Oracle Grid Infrastructure 11g or later (which includes Oracle Clusterware and Oracle ASM), and you also plan to upgrade your Oracle RAC database to 12c Release 1 (12.1), then the required configuration of existing databases is completed automatically when you complete the Oracle RAC upgrade, and this section does not concern you.

Note:

Before you start an Oracle RAC or Oracle Database installation on an Oracle Clusterware 12c Release 1 (12.1) installation, if you are upgrading from Oracle Database 11g Release 1 (11.1.0.7 or 11.1.0.6), or Oracle Database 10g Release 2 (10.2.0.4), then Oracle recommends that you check for the latest recommended patches for the release you are upgrading from, and install those patches as needed on your existing database installations before upgrading.

For more information on recommended patches, see Oracle 12c Upgrade Companion (My Oracle Support Note 1462240.1):

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1462240.1

9.3.2 Managing Server Pools with Earlier Database Versions

Starting with Oracle Grid Infrastructure 12c, Oracle Database server categories include roles such as Hub and Leaf that were not present in earlier releases. For this reason, you cannot create server pools using the Oracle RAC 11g version of Database Configuration Assistant (DBCA). To create server pools for earlier release Oracle RAC installations, use the following procedure:

  1. Log in as the Oracle Grid Infrastructure installation owner (Grid user).

  2. Change directory to the 12.1 Oracle Grid Infrastructure binaries directory in the Grid home. For example:

    $ cd /u01/app/12.1.0/grid/bin
    
  3. Use the Oracle Grid Infrastructure 12c version of srvctl to create a server pool consisting of Hub Node roles. For example, to create a server pool called p_hub with a maximum size of one cluster node, enter the following command:

    $ srvctl add serverpool -serverpool p_hub -min 0 -max 1 -category hub
    
  4. Log in as the Oracle RAC installation owner and start DBCA from the Oracle RAC Oracle home. For example:

    $ cd /u01/app/oracle/product/11.2.0/dbhome_1/bin
    $ dbca
    

    DBCA discovers the server pool that you created with the Oracle Grid Infrastructure 12c srvctl command. Configure the server pool as required for your services.

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about managing resources using policies

9.3.3 Making Oracle ASM Available to Earlier Oracle Database Releases

To use Oracle ASM with Oracle Database releases earlier than Oracle Database 12c, you must use Local ASM or set the cardinality for Flex ASM to ALL, instead of the default of 3. After you install Oracle Grid Infrastructure 12c, if you want to use Oracle ASM to provide storage service for Oracle Database releases that are earlier than Oracle Database 12c, then you must use the following command to modify the Oracle ASM resource (ora.asm):

$ srvctl modify asm -count ALL

This setting changes the cardinality of the Oracle ASM resource so that Flex ASM instances run on all cluster nodes. You must change the setting even if your cluster has three or fewer nodes, to ensure that database releases earlier than 11g Release 2 can find the ora.node.sid.inst resource alias.
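
To confirm the change, you can check the Oracle ASM resource from the Grid home; the exact output varies by version. After the modification, an Oracle ASM instance should run on every cluster node:

$ srvctl config asm
$ srvctl status asm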

9.3.4 Using ASMCA to Administer Disk Groups for Earlier Database Versions

Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups when you install earlier Oracle databases and Oracle RAC databases on Oracle Grid Infrastructure installations. Starting with Oracle Database 11g Release 2 (11.2), Oracle ASM is installed as part of an Oracle Grid Infrastructure installation, with Oracle Clusterware. You can no longer use Database Configuration Assistant (DBCA) to perform administrative tasks on Oracle ASM.

See Also:

Oracle Automatic Storage Management Administrator's Guide for details about configuring disk group compatibility for databases using Oracle Database 11g or earlier software with Oracle Grid Infrastructure 12c (12.1)

9.3.5 Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x

When Oracle Clusterware 12c Release 1 (12.1) is installed on a cluster with no previous Oracle software version, it configures the cluster nodes dynamically. Dynamic configuration is compatible with Oracle Database Release 11.2 and later, but Oracle Database Release 10g and 11.1 require a persistent configuration. The process of associating a node name with a node number is called pinning.

Note:

During an upgrade, all cluster member nodes are pinned automatically, and no manual pinning is required for existing databases. This procedure is required only if you install earlier database versions after installing Oracle Grid Infrastructure 12c Release 1 (12.1) software.

To pin a node in preparation for installing or using an earlier Oracle Database version, use Grid_home/bin/crsctl with the following command syntax, where nodes is a space-delimited list of one or more nodes in the cluster whose configuration you want to pin:

crsctl pin css -n nodes

For example, to pin nodes node3 and node4, log in as root and enter the following command:

# crsctl pin css -n node3 node4

To determine if a node is in a pinned or unpinned state, use Grid_home/bin/olsnodes with the following command syntax:

To list the pinned state of all nodes:

olsnodes -t -n 

For example:

# /u01/app/12.1.0/grid/bin/olsnodes -t -n
node1 1       Pinned
node2 2       Pinned
node3 3       Pinned
node4 4       Pinned

To list the state of a particular node:

olsnodes -t -n node3

For example:

# /u01/app/12.1.0/grid/bin/olsnodes -t -n node3
node3 3       Pinned

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about pinning and unpinning nodes

9.3.6 Using the Correct LSNRCTL Commands

To administer local and SCAN listeners using the lsnrctl command, set your $ORACLE_HOME environment variable to the path for the Oracle Grid Infrastructure home (Grid home). Do not attempt to use the lsnrctl commands from Oracle home locations for previous releases, as they cannot be used with the new release.
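
For example, with an assumed Grid home path and the default local and SCAN listener names:

$ export ORACLE_HOME=/u01/app/12.1.0/grid
$ $ORACLE_HOME/bin/lsnrctl status LISTENER
$ $ORACLE_HOME/bin/lsnrctl status LISTENER_SCAN1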

9.4 Modifying Oracle Clusterware Binaries After Installation

After installation, if you need to modify the Oracle Clusterware configuration, then you must unlock the Grid home.

For example, if you want to apply a one-off patch, or if you want to modify an Oracle Clusterware configuration to run IPC traffic over RDS on the interconnect instead of using the default UDP, then you must unlock the Grid home.

Note:

Before relinking executables, you must shut down all executables that run in the Oracle home directory that you are unlocking and relinking. In addition, shut down applications linked with Oracle shared libraries.

Unlock the home using the following procedure:

  1. Log in as root, change directory to the path Grid_home/crs/install, where Grid_home is the path to the Grid home, and unlock the Grid home using the command rootcrs.pl -unlock -crshome Grid_home, where Grid_home is the path to your Grid home. For example, with the Grid home /u01/app/12.1.0/grid, enter the following commands:

    # cd /u01/app/12.1.0/grid/crs/install
    # perl rootcrs.pl -unlock -crshome /u01/app/12.1.0/grid
    
  2. Change user to the Oracle Grid Infrastructure software owner, and relink binaries using the command syntax make -f Grid_home/rdbms/lib/ins_rdbms.mk target, where Grid_home is the Grid home, and target is the binaries that you want to relink. For example, where the Grid user is grid, $ORACLE_HOME is set to the Grid home, and where you are updating the interconnect protocol from UDP to IPC, enter the following command:

    # su grid
    $ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
    

    Note:

    To relink binaries, you can also change to the Oracle Grid Infrastructure installation owner and run the command Grid_home/bin/relink.
  3. Relock the Grid home and restart the cluster using the following command:

    # perl rootcrs.pl -patch
    
  4. Repeat steps 1 through 3 on each cluster member node.

Note:

Do not delete directories in the Grid home. For example, do not delete the directory Grid_home/OPatch. If you delete the directory, then the Oracle Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error message "checkdir error: cannot create Grid_home/OPatch".