E How to Complete Installation Prerequisite Tasks Manually

This appendix provides instructions for manually completing configuration tasks that Cluster Verification Utility (CVU) and the installer (OUI) normally complete during installation. Use this appendix as a guide if you cannot use the fixup script.

This appendix contains the following information:

  • Section E.1, "Configuring SSH Manually on All Cluster Nodes"
  • Section E.2, "Configuring Kernel Parameters on Oracle Solaris"
  • Section E.3, "Configuring Shell Limits for Oracle Solaris"
  • Section E.4, "Setting UDP and TCP Kernel Parameters"

E.1 Configuring SSH Manually on All Cluster Nodes

Passwordless SSH configuration is a mandatory installation requirement. SSH is used during installation to configure cluster member nodes, and SSH is used after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features.

Automatic Passwordless SSH configuration using OUI creates RSA encryption keys on all nodes of the cluster. If you have system restrictions that require you to set up SSH manually, such as using DSA keys, then use this procedure as a guide to set up passwordless SSH.

In the examples that follow, the Oracle software owner listed is the grid user.

This section contains the following:

  • Section E.1.1, "Checking Existing SSH Configuration on the System"
  • Section E.1.2, "Configuring SSH on Cluster Nodes"
  • Section E.1.3, "Enabling SSH User Equivalency on Cluster Nodes"

E.1.1 Checking Existing SSH Configuration on the System

To determine if SSH is running, enter the following command:

$ pgrep sshd

If SSH is running, then the response to this command is one or more process ID numbers. In the home directory of the installation software owner (grid, oracle), use the command ls -al to ensure that the .ssh directory is owned and writable only by the user.
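
For example, output similar to the following (the process ID, file size, and date are illustrative) confirms that sshd is running and that the permissions string drwx------ allows only the grid user to read or write the .ssh directory; ls -ld shows the directory entry itself rather than its contents:

$ pgrep sshd
2112
$ ls -ld ~/.ssh
drwx------   2 grid     oinstall     512 Jan 10 11:08 /home/grid/.ssh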

You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.

E.1.2 Configuring SSH on Cluster Nodes

To configure SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (oracle, grid), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.

You must configure SSH separately for each Oracle software installation owner that you intend to use for installation.

To configure SSH, complete the following:

E.1.2.1 Create SSH Directory, and Create SSH Keys On Each Node

Complete the following steps on each node:

  1. Log in as the software owner (in this example, the grid user).

  2. To ensure that you are logged in as grid, and to verify that the user ID matches the expected user ID you have assigned to the grid user, enter the commands id and id grid. Ensure that the Oracle user and group IDs and the user and group IDs of the terminal window process you are using are identical. For example:

    $ id 
    uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)
    $ id grid
    uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)
    
  3. If necessary, create the .ssh directory in the grid user's home directory, and set permissions on it to ensure that only the grid user has read and write permissions:

    $ mkdir ~/.ssh
    $ chmod 700 ~/.ssh
    

    Note:

    SSH configuration will fail if the permissions are not set to 700.
  4. Enter the following command:

    $ /usr/bin/ssh-keygen -t dsa
    

    At the prompts, accept the default location for the key file (press Enter).

    Note:

    SSH with passphrase is not supported for Oracle Clusterware 11g Release 2 and later releases.

    This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file.

    Never distribute the private key to anyone not authorized to perform Oracle software installations.

  5. Repeat steps 1 through 4 on each node that you intend to make a member of the cluster, using the DSA key.
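
At the end of this procedure, the ~/.ssh directory on each node should contain a DSA key pair similar to the following listing (the file sizes and dates shown are illustrative):

$ ls -l ~/.ssh
-rw-------   1 grid     oinstall     668 Jan 10 11:10 id_dsa
-rw-r--r--   1 grid     oinstall     602 Jan 10 11:10 id_dsa.pub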

E.1.2.2 Add All Keys to a Common authorized_keys File

Complete the following steps:

  1. On the local node, change directories to the .ssh directory in the Oracle Grid Infrastructure owner's home directory (typically, either grid or oracle).

    Then, add the DSA key to the authorized_keys file using the following commands:

    $ cat id_dsa.pub >> authorized_keys
    $ ls
    

    In the .ssh directory, you should see the id_dsa.pub key that you created, and the file authorized_keys.

  2. On the local node, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file to the grid user's .ssh directory on a remote node. The following example is with SCP, on a node called node2, with the Oracle Grid Infrastructure owner grid, where the grid user path is /home/grid:

    [grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
    

    You are prompted to accept a DSA key. Enter yes, and you see that the node you are copying to is added to the known_hosts file.

    When prompted, provide the password for the grid user, which should be the same on all nodes in the cluster. The authorized_keys file is copied to the remote node.

    Your output should be similar to the following, where xxx represents parts of a valid IP address:

    [grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
    The authenticity of host 'node2 (xxx.xxx.173.152)' can't be established.
    DSA key fingerprint is 7e:60:60:ae:40:40:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node2,xxx.xxx.173.152' (dsa) to the list
    of known hosts
    grid@node2's password:
    authorized_keys       100%             828             7.5MB/s      00:00
    
  3. Using SSH, log in to the node where you copied the authorized_keys file. Then change to the .ssh directory, and using the cat command, add the DSA key for the second node to the authorized_keys file, pressing Enter when you are prompted for a password, so that passwordless SSH is set up:

    [grid@node1 .ssh]$ ssh node2
    [grid@node2 grid]$ cd .ssh
    [grid@node2 ssh]$ cat id_dsa.pub >> authorized_keys
    

    Repeat steps 2 and 3 from each node to each other member node in the cluster.

    When you have added keys from each cluster node member to the authorized_keys file on the last node you want to have as a cluster node member, then use scp to copy the authorized_keys file with the keys from all nodes back to each cluster node member, overwriting the existing version on the other nodes (a sketch of this final distribution appears after this list).

    To confirm that you have all nodes in the authorized_keys file, enter the command more authorized_keys, and determine if there is a DSA key for each member node. The file lists the type of key (ssh-dss), followed by the key, and then followed by the user and server. For example:

    ssh-dss AAAABBBB . . . = grid@node1
    

    Note:

    The grid user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.
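
The following sketch illustrates the final distribution described in step 3, copying the completed authorized_keys file from the last node (node2 in this two-node example) back to every other cluster member; the node names and transfer statistics are illustrative:

[grid@node2 .ssh]$ scp authorized_keys node1:/home/grid/.ssh/
grid@node1's password:
authorized_keys       100%            1656             1.6KB/s      00:00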

E.1.3 Enabling SSH User Equivalency on Cluster Nodes

After you have copied the authorized_keys file that contains all keys to each node in the cluster, complete the following procedure, in the order listed. In this example, the Oracle Grid Infrastructure software owner is named grid:

  1. On the system where you want to run OUI, log in as the grid user.

  2. Use the following command syntax to run SSH from the local node to each node, including from the local node to itself, and from each node to every other node, where hostname1, hostname2, and so on, are the public host names (alias and fully qualified domain name) of the nodes in the cluster:

    [grid@nodename]$ ssh hostname1 date
    [grid@nodename]$ ssh hostname2 date
        .
        .
        .
    

    For example:

    [grid@node1 grid]$ ssh node1 date
    The authenticity of host 'node1 (xxx.xxx.100.101)' can't be established.
    DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node1,xxx.xxx.100.101' (DSA) to the list of
    known hosts.
    Mon Dec 4 11:08:13 PST 2006
    [grid@node1 grid]$ ssh node1.example.com date
    The authenticity of host 'node1.example.com (xxx.xxx.100.101)' can't be
    established.
    DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'node1.example.com,xxx.xxx.100.101' (DSA) to the
    list of known hosts.
    Mon Dec 4 11:08:13 PST 2006
    [grid@node1 grid]$ ssh node2 date
    Mon Dec 4 11:08:35 PST 2006
    .
    .
    .
    

    At the end of this process, the public host name for each member node should be registered in the known_hosts file for all other cluster nodes.

    If you are using a remote client to connect to the local node, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding," then this means that your authorized keys file is configured correctly, but your SSH configuration has X11 forwarding enabled. To correct this issue, proceed to Section 5.2.4, "Setting Remote Display and X11 Forwarding Configuration."

  3. Repeat step 2 on each cluster node member.

If you have configured SSH correctly, then you can now use the ssh or scp commands without being prompted for a password. For example:

[grid@node1 ~]$ ssh node2 date
Mon Feb 26 23:34:42 UTC 2009
[grid@node1 ~]$ ssh node1 date
Mon Feb 26 23:34:48 UTC 2009

If any node prompts for a password, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys, and that you have created an Oracle software owner with identical group membership and IDs.
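
One quick way to make this comparison, assuming the SSH configuration above is in place, is to run the id command locally and over SSH on each remote node and confirm that the user and group IDs match. For example:

[grid@node1 ~]$ id
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)
[grid@node1 ~]$ ssh node2 id
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(grid,asmadmin,asmdba)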

E.2 Configuring Kernel Parameters on Oracle Solaris

This section contains the following:

  • Section E.2.1, "Minimum Parameter Settings for Installation"
  • Section E.2.2, "Requirements for Shared Memory Resources on Oracle Solaris"
  • Section E.2.3, "Checking Shared Memory Resource Controls on Oracle Solaris"
  • Section E.2.4, "Viewing and Modifying Kernel Parameter Values"

Note:

The kernel parameter and shell limit values shown in the following section are minimum installation values only. For production database systems, Oracle recommends that you tune kernel resources to optimize the performance of the system.

For more information about kernel resource management, see Managing System Information, Processes, and Performance in Oracle Solaris 11.1 at the following URL:

http://docs.oracle.com/cd/E26502_01/index.html

E.2.1 Minimum Parameter Settings for Installation

During installation, you can generate and run the Fixup script to check and set the kernel parameter values required for successful installation of the database. If necessary, this script updates required kernel parameters to the minimum values.

If you cannot use the fixup script, then review the following table to set the values manually. On Oracle Solaris 10, verify that the kernel parameters shown in the following table are set to values greater than or equal to the minimum value shown.

Note:

On Oracle Solaris 10, you are not required to make changes to the /etc/system file to implement the System V IPC. Oracle Solaris 10 uses the resource control facility for its implementation.

Resource Control               Minimum Value
project.max-sem-ids            100
process.max-sem-nsems          256
project.max-shm-memory         This value varies according to the size of the RAM.
                               See Section E.2.2 for minimum values.
project.max-shm-ids            100
tcp_smallest_anon_port         9000
tcp_largest_anon_port          65500
udp_smallest_anon_port         9000
udp_largest_anon_port          65500

Note:

  • The project.max-shm-memory resource control value is the cumulative sum of all shared memory allocated for each Oracle Database instance started under the corresponding project.
  • The project.max-shm-memory resource control value assumes that no other application is using the shared memory segment from this project other than the Oracle instances. If applications other than the Oracle instances are using the shared memory segment, then you must add that shared memory usage to the project.max-shm-memory resource control value.

  • Ensure that memory_target (or sga_max_size) does not exceed process.max-address-space and project.max-shm-memory. For more information, see My Oracle Support Note 1370537.1 at:

    https://support.oracle.com

E.2.2 Requirements for Shared Memory Resources on Oracle Solaris

The resource control project.max-shm-memory enables you to set the maximum shared memory for a project.

Table E-1 shows the installation minimum settings for project.max-shm-memory:

Table E-1 Requirement for Resource Control project.max-shm-memory

RAM                    project.max-shm-memory setting
1 GB to 16 GB          Half the size of the physical memory
Greater than 16 GB     At least 8 GB
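
For example, on a server with 8 GB of RAM, the guideline in Table E-1 gives a project.max-shm-memory value of 4 GB. One way to confirm the installed physical memory on Oracle Solaris is the prtconf command (sample output shown):

# prtconf | grep "Memory size"
Memory size: 8192 Megabytes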


E.2.3 Checking Shared Memory Resource Controls on Oracle Solaris

Use the prctl command to make runtime interrogations of and modifications to the resource controls associated with an active process, task, or project on the system.

To view the current value of project.max-shm-memory set for a project and system-wide, enter the following command:

# prctl -n project.max-shm-memory -i project default

where default is the project ID, which you can obtain by running the command id -p.
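
The output resembles the following; the privileged line shows the value currently in effect for the project, and the values shown here are illustrative only:

# prctl -n project.max-shm-memory -i project default
project: 3: default
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged      1.97GB      -   deny                                 -
        system          16.0EB    max   deny                                 -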

For example, to change the setting for project.max-shm-memory to 6 GB for the project default without a system reboot, enter:

# prctl -n project.max-shm-memory -v 6gb -r -i project default

See Also:

Administering Oracle Solaris 11 at:

http://docs.oracle.com/cd/E26502_01/index.html

E.2.4 Viewing and Modifying Kernel Parameter Values

On Oracle Solaris 10 and later releases, use the following procedure to view the current value specified for resource controls, and to change them if necessary:

  1. To view the current values of the resource control, enter the following commands:

    $ id -p // to verify the project id
    uid=100(oracle) gid=100(dba) projid=1 (group.dba)
    $ prctl -n project.max-shm-memory -i project group.dba
    $ prctl -n project.max-sem-ids -i project group.dba
    
  2. If you must change any of the current values, then:

    1. To modify the value of max-shm-memory to 6 GB:

      # prctl -n project.max-shm-memory -v 6gb -r -i project group.dba 
      
    2. To modify the value of max-sem-ids to 256:

      # prctl -n project.max-sem-ids -v 256 -r -i project group.dba
      

    Note:

    When you use the command prctl (Resource Control) to change system parameters, you do not need to restart the system for these parameter changes to take effect. However, the changed parameters do not persist after a system restart.

Use the following procedure to modify the resource control project settings, so that they persist after a system restart:

  1. By default, Oracle instances are run as the oracle user of the dba group. A project with the name group.dba is created to serve as the default project for the oracle user. Run the command id -p to verify the default project for the oracle user:

    # su - oracle
    $ id -p
    uid=100(oracle) gid=100(dba) projid=100(group.dba)
    $ exit
    
  2. To set the maximum shared memory size to 4 GB, run the projmod command:

    # projmod -sK "project.max-shm-memory=(privileged,4G,deny)" group.dba
    

    Alternatively, add the resource control value project.max-shm-memory=(privileged,4294967295,deny) to the last field of the project entries for the Oracle project.

  3. After these steps are complete, check the values for the /etc/project file using the following command:

    # cat /etc/project
    

    The output should be similar to the following:

    system:0::::
    user.root:1::::
    noproject:2::::
    default:3::::
    group.staff:10::::
    group.dba:100:Oracle default project:::project.max-shm-memory=(privileged,4294967295,deny)
    
  4. To verify that the resource control is active, check process ownership, and run the commands id and prctl, as in the following example:

    # su - oracle
    $ id -p
    uid=100(oracle) gid=100(dba) projid=100(group.dba)
    $ prctl -n project.max-shm-memory -i process $$
    process: 5754: -bash
    NAME     PRIVILEGE     VALUE     FLAG     ACTION    RECIPIENT
    project.max-shm-memory
                   privileged         4.00GB     -             deny
    

    Note:

    The value for the maximum shared memory depends on the SGA requirements and should be set to a value greater than the SGA size.

    For additional information, see the Oracle Solaris Tunable Parameters Reference Manual.
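
The same projmod approach shown in step 2 can also be used to make the semaphore resource controls listed in Section E.2.1 persistent. The following is an illustrative sketch using the minimum values and the group.dba project; adjust the project name and values for your environment:

# projmod -sK "project.max-sem-ids=(privileged,100,deny)" group.dba
# projmod -sK "process.max-sem-nsems=(privileged,256,deny)" group.dba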

E.3 Configuring Shell Limits for Oracle Solaris

Oracle recommends that you set shell limits and system configuration parameters as described in this section.

Note:

The shell limit values in this section are minimum values only. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system. See your operating system documentation for more information about configuring shell limits.

The ulimit settings determine process resource limits. Verify that the shell limits displayed in the following table are set to the values shown:

Shell Limit          Description                                      Soft Limit        Hard Limit
STACK                Size (KB) of the stack segment of the process    at least 10240    at most 32768
NOFILES              Open file descriptors                            at least 1024     at least 65536
MAXUPRC or MAXPROC   Maximum user processes                           at least 2047     at least 16384

To display the current value specified for these shell limits, enter the following commands:

ulimit -s
ulimit -n

You can also use the following command:

ulimit -a

In the preceding command, the -a option lists all current resource limits.
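
As an illustrative example only (assuming a bash shell for the installation owner), the following session checks the soft and hard limits for open file descriptors and raises the soft limit for the current session; persistent changes are typically made through your operating system's resource controls or the user's shell profile:

$ ulimit -Sn          # current soft limit for open file descriptors
256
$ ulimit -Hn          # hard limit for open file descriptors
65536
$ ulimit -Sn 1024     # raise the soft limit for this session only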

E.4 Setting UDP and TCP Kernel Parameters

Use the ndd command (Oracle Solaris 10) or the ipadm command (Oracle Solaris 11) to ensure that the Oracle Solaris kernel TCP/IP ephemeral port range is broad enough to provide enough ephemeral ports for the anticipated server workload. Ensure that the lower bound of the range is at least 9000, to avoid Well Known ports and ports in the Registered Ports range commonly used by Oracle and other servers. Set the upper bound of the range high enough to avoid reserved ports for any applications you intend to use. If the lower bound of your range is 9000 or greater, and the range is large enough for your anticipated workload, then you can ignore OUI warnings regarding the ephemeral port range.

Check your current range for ephemeral ports using one of the following commands:

On Oracle Solaris 10, use the following ndd command:

# /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port
32768
65535

On Oracle Solaris 11, use the following ipadm command:

# ipadm show-prop -p smallest_anon_port,largest_anon_port tcp
PROTO PROPERTY           PERM CURRENT PERSISTENT DEFAULT POSSIBLE
tcp   smallest_anon_port rw   32768       --     32768   1024-65535
tcp   largest_anon_port  rw   65500       --     65535   32768-65535

In the preceding examples, the ephemeral ports are set to the default range (32768-65500).

If necessary for your anticipated workload or number of servers, update the UDP and TCP ephemeral port range to a broader range. For example, on Oracle Solaris 11:

# ipadm set-prop -p smallest_anon_port=9000 tcp
# ipadm set-prop -p largest_anon_port=65500 tcp
# ipadm set-prop -p smallest_anon_port=9000 udp
# ipadm set-prop -p largest_anon_port=65500 udp

Oracle recommends that you make these settings permanent. Refer to your Oracle Solaris system administration documentation for information about how to automate this ephemeral port range alteration on system restarts.
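
For example, on Oracle Solaris 10, ndd settings do not persist across a system restart. One common approach, shown here only as an illustrative sketch (the script name and location are assumptions), is a boot-time script that reapplies the settings:

# cat > /etc/rc2.d/S99nettune <<'EOF'
#!/sbin/sh
/usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
/usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
/usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
EOF
# chmod 744 /etc/rc2.d/S99nettune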

See Also:

Oracle Solaris Tunable Parameters Reference Manual available at the following link:

http://docs.oracle.com/cd/E26505_01/html/E37386/chapter4-57.html