This appendix describes how to manually complete the configuration tasks that Cluster Verification Utility (CVU) and Oracle Universal Installer (OUI) normally complete during installation by using Fixup scripts. Use this appendix as a guide if you cannot use Fixup scripts.
Passwordless SSH configuration is a mandatory installation requirement. SSH is used during installation to configure cluster member nodes, and SSH is used after installation by configuration assistants, Oracle Enterprise Manager, OPatch, and other features.
Automatic Passwordless SSH configuration using OUI creates RSA encryption keys on all nodes of the cluster. If you have system restrictions that require you to set up SSH manually, such as using DSA keys, then use this procedure as a guide to set up passwordless SSH.
In the examples that follow, the Oracle software owner listed is the grid user.
To determine if SSH is running, enter the following command:
$ pgrep sshd
If SSH is running, then the response to this command is one or more process ID numbers. In the home directory of the installation software owner (grid, oracle), use the command ls -al to ensure that the .ssh directory is owned and writable only by the user.
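For example, the following commands, run as the software owner, check the ownership and permissions of the .ssh directory and tighten them if required (a minimal sketch; adjust the user name to your environment):

$ ls -ald ~/.ssh    # the owner should be the installing user, with mode drwx------ (700)
$ chmod 700 ~/.ssh  # remove group and world access if the mode is more permissive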
You need either an RSA or a DSA key for the SSH protocol. RSA is used with the SSH 1.5 protocol, while DSA is the default for the SSH 2.0 protocol. With OpenSSH, you can use either RSA or DSA. The instructions that follow are for SSH1. If you have an SSH2 installation, and you cannot use SSH1, then refer to your SSH distribution documentation to configure SSH1 compatibility or to configure SSH2 with DSA.
To configure SSH, you must first create RSA or DSA keys on each cluster node, and then copy all the keys generated on all cluster node members into an authorized keys file that is identical on each node. Note that the SSH files must be readable only by root and by the software installation user (oracle, grid), as SSH ignores a private key file if it is accessible by others. In the examples that follow, the DSA key is used.
You must configure SSH separately for each Oracle software installation owner that you intend to use for installation.
To configure SSH, complete the following:
Complete the following steps on each node:
Log in as the software owner (in this example, the grid user).
To ensure that you are logged in as grid, and to verify that the user ID matches the expected user ID you have assigned to the grid user, enter the commands id and id grid. Confirm that the user and group IDs reported for your terminal session are identical to those assigned to the grid user. For example:
$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1100(grid,asmadmin,asmdba)
$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1100(grid,asmadmin,asmdba)
If necessary, create the .ssh directory in the grid user's home directory, and set permissions on it to ensure that only the grid user has read and write permissions:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
Enter the following command:
$ /usr/bin/ssh-keygen -t dsa
At the prompts, accept the default location for the key file (press Enter).
Note:
SSH with passphrase is not supported for Oracle Clusterware 11g Release 2 and later releases.

This command writes the DSA public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file.
Never distribute the private key to anyone not authorized to perform Oracle software installations.
Repeat steps 1 through 4 on each node that you intend to make a member of the cluster, using the DSA key.
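For reference, the following commands summarize steps 1 through 4 on one node. This is a sketch that assumes the grid owner and a DSA key, as in the preceding examples; remember to leave the passphrase empty:

$ id                                    # confirm that you are logged in as the software owner
$ mkdir -p ~/.ssh && chmod 700 ~/.ssh   # create the .ssh directory if it does not already exist
$ /usr/bin/ssh-keygen -t dsa            # accept the default key file location, with no passphrase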
Complete the following steps:
On the local node, change directories to the .ssh directory in the Oracle Grid Infrastructure owner's home directory (typically, either grid or oracle). Then, add the DSA key to the authorized_keys file using the following commands:
$ cat id_dsa.pub >> authorized_keys
$ ls
In the .ssh directory, you should see the id_dsa.pub key that you have created, and the file authorized_keys.
On the local node, use SCP (Secure Copy) or SFTP (Secure FTP) to copy the authorized_keys file to the software owner's .ssh directory on a remote node. The following example uses SCP, on a node called node2, with the Oracle Grid Infrastructure owner grid, where the grid user's home path is /home/grid:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
You are prompted to accept a DSA key. Enter yes, and you see that the node you are copying to is added to the known_hosts file.

When prompted, provide the password for the grid user, which should be the same on all nodes in the cluster. The authorized_keys file is copied to the remote node.
Your output should be similar to the following, where xxx represents parts of a valid IP address:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
The authenticity of host 'node2 (xxx.xxx.173.152)' can't be established.
DSA key fingerprint is 7e:60:60:ae:40:40:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,xxx.xxx.173.152' (DSA) to the list of known hosts
grid@node2's password:
authorized_keys   100%   828   7.5MB/s   00:00
Using SSH, log in to the node where you copied the authorized_keys file. Then change to the .ssh directory, and using the cat command, add the DSA keys for the second node to the authorized_keys file, pressing Enter when you are prompted for a password, so that passwordless SSH is set up:
[grid@node1 .ssh]$ ssh node2
[grid@node2 grid]$ cd .ssh
[grid@node2 .ssh]$ cat id_dsa.pub >> authorized_keys
Repeat steps 2 and 3 from each node to each other member node in the cluster.
When you have added keys from each cluster node member to the authorized_keys file on the last node you want to have as a cluster node member, then use scp to copy the authorized_keys file with the keys from all nodes back to each cluster node member, overwriting the existing version on the other nodes.
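For example, assuming hypothetical member nodes node1, node3, and node4, and a completed authorized_keys file on node2, a sketch of this distribution step run as the grid user on node2 might look like the following:

for host in node1 node3 node4; do
  scp ~/.ssh/authorized_keys ${host}:~/.ssh/   # overwrite the existing copy on each member node
done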
To confirm that you have all nodes in the authorized_keys file, enter the command more authorized_keys, and determine if there is a DSA key for each member node. The file lists the type of key (ssh-dss), followed by the key, and then followed by the user and server. For example:
ssh-dss AAAABBBB . . . = grid@node1
Note:
The grid user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.

After you have copied the authorized_keys file that contains all keys to each node in the cluster, complete the following procedure, in the order listed. In this example, the Oracle Grid Infrastructure software owner is named grid:
On the system where you want to run OUI, log in as the grid user.
Use the following command syntax, where hostname1, hostname2, and so on, are the public host names (alias and fully qualified domain name) of nodes in the cluster, to run SSH from the local node to each node, including from the local node to itself, and from each node to each other node:
[grid@nodename]$ ssh hostname1 date
[grid@nodename]$ ssh hostname2 date
. . .
For example:
[grid@node1 grid]$ ssh node1 date
The authenticity of host 'node1 (xxx.xxx.100.101)' can't be established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,xxx.xxx.100.101' (DSA) to the list of known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node1.example.com date
The authenticity of host 'node1.example.com (xxx.xxx.100.101)' can't be established.
DSA key fingerprint is 7z:60:60:zz:48:48:z1:a0:f7:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1.example.com,xxx.xxx.100.101' (DSA) to the list of known hosts.
Mon Dec 4 11:08:13 PST 2006
[grid@node1 grid]$ ssh node2 date
Mon Dec 4 11:08:35 PST 2006
. . .
At the end of this process, the public host name for each member node should be registered in the known_hosts file for all other cluster nodes.
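When the cluster has many nodes, a loop such as the following can reduce the typing. This is a sketch that assumes hypothetical host names node1 and node2 in the domain example.com; run it as the grid user on each node:

for host in node1 node1.example.com node2 node2.example.com; do
  ssh ${host} date   # answer yes at each first-connection prompt to populate known_hosts
done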
If you are using a remote client to connect to the local node, and you see a message similar to "Warning: No xauth data; using fake authentication data for X11 forwarding," then this means that your authorized keys file is configured correctly, but your SSH configuration has X11 forwarding enabled. To correct this issue, proceed to Section 5.2.4, "Setting Display and X11 Forwarding Configuration".
Repeat step 2 on each cluster node member.
If you have configured SSH correctly, then you can now use the ssh or scp commands without being prompted for a password. For example:
[grid@node1 ~]$ ssh node2 date
Mon Feb 26 23:34:42 UTC 2009
[grid@node1 ~]$ ssh node1 date
Mon Feb 26 23:34:48 UTC 2009
If any node prompts for a password, then verify that the ~/.ssh/authorized_keys file on that node contains the correct public keys, and that you have created an Oracle software owner with identical group membership and IDs.
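Incorrect file permissions are a common cause of unexpected password prompts. The following commands, run as the software owner on the node that prompts, restore the permissions that SSH requires (a sketch; file names assume the DSA keys used in this section):

$ chmod 700 ~/.ssh                                  # the directory must not be group or world accessible
$ chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_dsa    # keys readable only by the owner
$ chmod 644 ~/.ssh/id_dsa.pub                       # the public key may remain world readable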
Note:
The parameter and shell limit values shown in this section are recommended values only. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system. See your operating system documentation for more information about tuning kernel parameters.

Oracle recommends that you set shell limits and system configuration parameters as described in this section.
Set shell limits for the Oracle Grid Infrastructure installation owner and for root. Verify that unlimited is set for both accounts, either by using the smit utility or by editing the /etc/security/limits file. The root user requires these settings because the CRS daemon (crsd) runs as root.
For AIX, the ulimit settings determine process memory-related resource limits. Verify that the shell limits displayed in the following table are set to the values shown:
| Shell Limit (As Shown in smit) | Recommended Value |
|---|---|
| Soft File Descriptors | at least 1024 |
| Hard File Descriptors | at least 65536 |
| Soft nproc | -1 (unlimited) |
| Hard nproc | -1 (unlimited) |
| Soft STACK size | at least 10240 KB |
| Hard STACK size | at least 10240 KB, at most 32768 KB |
| Soft FILE size | -1 (unlimited) |
| Soft CPU time | -1 (unlimited). Note: This is the default value. |
| Soft DATA segment | -1 (unlimited) |
| Soft Real Memory size | -1 (unlimited) |
| Processes (per user) | -1 (unlimited). Note: This limit is available only in AIX 6.1 or later. Refer to Section E.2.2, "Configuring System Configuration Parameters" for information on configuring per-user process limits. |
To display the current values specified for these shell limits, and to change them if necessary, perform the following steps:
Enter the following command:
# smit chuser
In the User NAME field, enter the user name of the Oracle software owner, for example oracle.
Scroll down the list and verify that the value shown for the soft limits listed in the previous table is -1.
If necessary, edit the existing value. To edit the values, you can use the smit utility. However, to set the value of Soft Real Memory size, you must edit the /etc/security/limits file. If you have permissions to run the smit utility, then you automatically have permissions to edit the limits file.
When you have finished making changes, press F10 to exit.
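If you prefer the command line to the smit utility, the lsuser and chuser commands display and set the same limits. For example, the following sketch checks and raises the soft limits for a user named oracle (attribute names can vary by AIX release, so verify them against /etc/security/limits on your system):

# lsuser -a fsize cpu data stack rss nofiles oracle   # display the current limits
# chuser fsize=-1 cpu=-1 data=-1 rss=-1 oracle        # set the soft limits to unlimited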
If you cannot use the Fixup scripts, then verify that the kernel parameters shown in the following table are set to values greater than or equal to the minimum value shown. If the current value for any parameter is greater than the value listed in this table, then the Fixup scripts do not change the value of that parameter.
| Parameter | Minimum Value |
|---|---|
| maxuproc | 16384 |
| ncargs | 128 |
The following procedure describes how to verify and set the values manually.
To verify that the maximum number of processes allowed per user is set to 16384 or greater, perform the following steps:
Note:
For production systems, this value should be at least 128 plus the sum of the PROCESSES and PARALLEL_MAX_SERVERS initialization parameters for each database running on the system.

Enter the following command:
# smit chgsys
Verify that the value shown for Maximum number of PROCESSES allowed per user is greater than or equal to 16384.
If necessary, edit the existing value.
When you have finished making changes, press F10 to exit.
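Equivalently, you can verify and change this value from the command line, because the smit panel modifies the maxuproc attribute of the sys0 device. For example, using the minimum value from the preceding table:

# lsattr -E -l sys0 -a maxuproc      # display the current per-user process limit
# chdev -l sys0 -a maxuproc=16384    # raise the limit if the current value is lower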
To verify that long commands can be executed from the shell, perform the following steps:
Note:
Oracle recommends that you set the ncargs system attribute to a value greater than or equal to 128. The ncargs attribute determines the maximum number of values that can be passed as command line arguments.

Enter the following command:
# smit chgsys
Verify that the value shown for ARG/ENV list size in 4K byte blocks is greater than or equal to 128.
If necessary, edit the existing value.
When you have finished making changes, press F10 to exit.
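As with maxuproc, you can verify and change the ncargs attribute of sys0 directly from the command line:

# lsattr -E -l sys0 -a ncargs    # display the ARG/ENV list size in 4 KB blocks
# chdev -l sys0 -a ncargs=128    # raise the value if it is below the recommended minimum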
On AIX 6 and AIX 7, the Asynchronous Input Output (AIO) device drivers are enabled by default. For both AIX 6 and AIX 7, increase the number of aioserver processes from the default value. The recommended value for aio_maxreqs is 64K (65536). Confirm this value for both AIX 6 and AIX 7.
Confirm the aio_maxreqs value using the following procedure:
# ioo -o aio_maxreqs
aio_maxreqs = 65536
When performing an asynchronous I/O to a file system, each asynchronous I/O operation is tied to an asynchronous I/O server. Thus, the number of asynchronous I/O servers limits the number of concurrent asynchronous I/O operations in the system.
The initial number of servers that are started during a system restart is determined by the aio_minservers parameter. As concurrent asynchronous I/O operations occur, additional asynchronous I/O servers are started, up to a maximum of the value set in the aio_maxservers parameter.
In general, to set the number of asynchronous I/O servers, complete the following procedure:
Adjust the initial value of aio_maxservers to 10 times the number of disks divided by the number of CPUs that are to be used concurrently, but no more than 80 (see the worked example after this procedure).
Monitor the performance effects on the system during periods of high I/O activity. If all AIO server processes are started, then increase the aio_maxservers value. Also, continue to monitor the system performance during peak I/O activity to determine whether the additional AIO servers provided a benefit. Too many asynchronous I/O servers increase the memory and processor overhead of additional processes, but this disadvantage is small. See your operating system vendor documentation for information about tuning AIO parameters.
To monitor the number of AIO server processes that have started, enter the following:
# ps -ek | grep -v grep | grep -v posix_aioserver | grep -c aioserver
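As a worked example of the sizing rule above, suppose a hypothetical system with 16 disks and 4 CPUs used concurrently: 10 x 16 / 4 = 40, which is below the cap of 80, so 40 is a reasonable starting value. On AIX 6.1 and later you can then set it with ioo (a sketch; the -p flag also applies the value at the next restart):

# ioo -o aio_maxservers          # display the current value
# ioo -p -o aio_maxservers=40    # set the starting value now and at restart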
If you do not use a Fixup script or CVU to set ephemeral ports, then use the no command to ensure that the AIX kernel TCP/IP ephemeral port range is broad enough to provide enough ephemeral ports for the anticipated server workload. Ensure that the lower end of the range is at least 9000, to avoid Well Known ports and ports in the Registered Ports range commonly used by Oracle and other server ports. Set the port range high enough to avoid reserved ports for any applications you intend to use. If the lower value of your range is greater than 9000, and the range is large enough for your anticipated workload, then you can ignore OUI warnings regarding the ephemeral port range.
For example:
# /usr/sbin/no -a | fgrep ephemeral
tcp_ephemeral_low = 32768
tcp_ephemeral_high = 65500
udp_ephemeral_low = 32768
udp_ephemeral_high = 65500
In the preceding example, the TCP and UDP ephemeral ports are set to the default range (32768-65500).
If you expect your workload to require a high number of ephemeral ports, then update the UDP and TCP ephemeral port range to a broader range. For example:
# /usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
# /usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
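The -p flag applies the change immediately and preserves it across restarts. You can then rerun the earlier query to confirm that the new range is in effect; the output should be similar to the following:

# /usr/sbin/no -a | fgrep ephemeral
tcp_ephemeral_low = 9000
tcp_ephemeral_high = 65500
udp_ephemeral_low = 9000
udp_ephemeral_high = 65500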