Oracle® Grid Infrastructure Installation Guide
11g Release 2 (11.2) for Oracle Solaris

Part Number E24616-05

5 Oracle Grid Infrastructure Postinstallation Procedures

This chapter describes how to complete the postinstallation tasks after you have installed the Oracle Grid Infrastructure software.

This chapter contains the following topics:

Required Postinstallation Tasks

Recommended Postinstallation Tasks

Using Older Oracle Database Versions with Grid Infrastructure

Modifying Oracle Clusterware Binaries After Installation

5.1 Required Postinstallation Tasks

You must perform the following tasks after completing your installation:

Note:

In prior releases, backing up the voting disks using a dd command was a required postinstallation task. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using the dd command may result in the loss of the voting disk, so this procedure is not supported.

5.1.1 Download and Install Patch Updates

Refer to the My Oracle Support Web site for required patch updates for your installation.

Note:

Browsers require an Adobe Flash plug-in, version 9.0.115 or higher, to use My Oracle Support. To check whether your browser has the correct version of the Flash plug-in, go to the Adobe Flash checker page, and install the latest version of Adobe Flash if necessary.

If you do not have Flash installed, then download the latest version of the Flash Player from the Adobe Web site:

http://www.adobe.com/go/getflashplayer

To download required patch updates:

  1. Use a Web browser to view the My Oracle Support Web site:

    https://support.oracle.com

  2. Log in to the My Oracle Support Web site.

    Note:

    If you are not a My Oracle Support registered user, then click Register for My Oracle Support and register.
  3. On the main My Oracle Support page, click Patches & Updates.

  4. On the Patches & Updates page, click Advanced Search.

  5. On the Advanced Search page, click the search icon next to the Product or Product Family field.

  6. In the Search and Select: Product Family field, select Database and Tools in the Search list field, enter RDBMS Server in the text field, and click Go.

    RDBMS Server appears in the Product or Product Family field. The current release appears in the Release field.

  7. Select your platform from the list in the Platform field, and at the bottom of the selection list, click Go.

  8. Any available patch updates appear under the Results heading.

  9. Click the patch number to download the patch.

  10. On the Patch Set page, click View README and read the page that appears. The README page contains information about the patch set and how to apply the patches to your installation.

  11. Return to the Patch Set page, click Download, and save the file on your system.

  12. Use the unzip utility provided with Oracle Database 11g release 2 (11.2) to uncompress the Oracle patch updates that you downloaded from My Oracle Support. The unzip utility is located in the $ORACLE_HOME/bin directory (see the example after this list).

  13. Refer to Appendix E for information about how to stop database processes in preparation for installing patches.
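
The following commands illustrate step 12. The download directory and patch file name are hypothetical examples only; substitute the directory and file name of the patch that you downloaded:

$ cd /home/grid/patches
$ $ORACLE_HOME/bin/unzip p12345678_112030_SOLARIS64.zip    # hypothetical file name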

5.2 Recommended Postinstallation Tasks

Oracle recommends that you complete the following tasks as needed after installing Oracle Grid Infrastructure:

5.2.1 Back Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the backup copy of root.sh.
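
For example, assuming the Grid home is /u01/app/11.2.0/grid and using a hypothetical backup file name, you can copy the script as the root user:

# cd /u01/app/11.2.0/grid
# cp root.sh root.sh.backup.postinstall    # backup file name is an example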

5.2.2 Configure IPMI-based Failure Isolation Using Crsctl

On Oracle Solaris platforms, where Oracle does not currently support the native IPMI driver, DHCP addressing is not supported and manual configuration is required for IPMI support. Because OUI does not collect the administrator credentials, failure isolation must be configured manually, the BMC must be configured with a static IP address, and the address must be stored manually in the OLR.

To configure Failure Isolation using IPMI, complete the following steps on each cluster member node:

  1. If necessary, start Oracle Clusterware using the following command:

    $ crsctl start crs
    
  2. Use the BMC management utility to obtain the BMC's IP address and then use the cluster control utility crsctl to store the BMC's IP address in the Oracle Local Registry (OLR) by issuing the crsctl set css ipmiaddr address command. For example:

    $ crsctl set css ipmiaddr 192.168.10.45
    
  3. Enter the following crsctl command to store the user ID and password for the resident BMC in the OLR, where youradminacct is the IPMI administrator user account, and provide the password when prompted:

    $ crsctl set css ipmiadmin youradminacct
    IPMI BMC Password: 
    

    This command attempts to validate the credentials you enter by sending them to another cluster node. The command fails if that cluster node is unable to access the local BMC using the credentials.

    When you store the IPMI credentials in the OLR, you must have the anonymous user specified explicitly, or a parsing error will be reported.
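
After you complete these steps, you can confirm the BMC address recorded in the OLR. The following check is a minimal example, run as the Oracle Grid Infrastructure installation owner; it should return the address you stored (192.168.10.45 in the example above):

$ crsctl get css ipmiaddr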

5.2.3 Tune Semaphore Parameters

Refer to the following guidelines only if the default semaphore parameter values are too low to accommodate all Oracle processes:

Note:

Oracle recommends that you refer to the operating system documentation for more information about setting semaphore parameters.
  1. Calculate the minimum total semaphore requirements using the following formula:

    2 * sum (process parameters of all database instances on the system) + overhead for background processes + system and other application requirements

  2. Set semmns (total semaphores systemwide) to this total.

  3. Set semmsl (semaphores for each set) to 250.

  4. Set semmni (total semaphore sets) to semmns divided by semmsl, rounded up to the nearest multiple of 1024.
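
For illustration only, assume two database instances with PROCESSES set to 100 and 150, an allowance of 40 semaphores for background processes, and 60 for system and other application requirements (all values are hypothetical):

semmns = 2 * (100 + 150) + 40 + 60 = 600
semmsl = 250
semmni = 600 / 250 = 2.4, rounded up to the nearest multiple of 1024, which is 1024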

5.2.4 Create a Fast Recovery Area Disk Group

During installation, by default you can create one disk group. If you plan to add an Oracle Database for a standalone server or an Oracle RAC database, then you should create the Fast Recovery Area for database files.

5.2.4.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group

The Fast Recovery Area is a unified storage location for all Oracle Database files related to recovery. Database administrators can set the DB_RECOVERY_FILE_DEST parameter to the path for the Fast Recovery Area to enable on-disk backups and rapid recovery of data. Enabling rapid backups for recent data can reduce requests to system administrators to retrieve backup tapes for recovery operations.

When you enable Fast Recovery in the init.ora file, all RMAN backups, archive logs, control file automatic backups, and database copies are written to the Fast Recovery Area. RMAN automatically manages files in the Fast Recovery Area by deleting obsolete backups and archive files no longer required for recovery.

Oracle recommends that you create a Fast Recovery Area disk group. Oracle Clusterware files and Oracle Database files can be placed on the same disk group, and you can also place fast recovery files in the same disk group. However, Oracle recommends that you create a separate Fast Recovery Area disk group to reduce storage device contention.

The Fast Recovery Area is enabled by setting DB_RECOVERY_FILE_DEST. The size of the Fast Recovery Area is set with DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the Fast Recovery Area, the more useful it becomes. For ease of use, Oracle recommends that you create a Fast Recovery Area disk group on storage devices that can contain at least three days of recovery information. Ideally, the Fast Recovery Area should be large enough to hold a copy of all of your data files and control files, the online redo logs, and the archived redo log files needed to recover your database using the data file backups kept under your retention policy.

Multiple databases can use the same Fast Recovery Area. For example, assume you have created one Fast Recovery Area disk group on disks with 150 GB of storage, shared by three different databases. You can set the size of the Fast Recovery Area for each database depending on the importance of each database. For example, if database 1 is your least important database, database 2 is of greater importance, and database 3 is of greatest importance, then you can set different DB_RECOVERY_FILE_DEST_SIZE settings for each database to meet your retention target for each database: 30 GB for database 1, 50 GB for database 2, and 70 GB for database 3.
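
For example, the following SQL*Plus session is a minimal sketch of how a database administrator might set these parameters for database 3 in the scenario above, assuming the database uses an SPFILE and that the Fast Recovery Area disk group is named FRA, as in the example in the next section. Note that DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST:

$ sqlplus / as sysdba

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=70G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA' SCOPE=BOTH SID='*';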

5.2.4.2 Creating the Fast Recovery Area Disk Group

To create a Fast Recovery Area disk group:

  1. Navigate to the Grid home bin directory, and start Oracle ASM Configuration Assistant (ASMCA). For example:

    $ cd /u01/app/11.2.0/grid/bin
    $ ./asmca
    
  2. ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.

  3. The Create Disk Groups window opens.

    In the Disk Group Name field, enter a descriptive name for the Fast Recovery Area group. For example: FRA.

    In the Redundancy section, select the level of redundancy you want to use.

    In the Select Member Disks field, select eligible disks to be added to the Fast Recovery Area, and click OK.

  4. The Diskgroup Creation window opens to inform you when disk group creation is complete. Click OK.

  5. Click Exit.
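
As an alternative to the ASMCA steps above, you can also create a disk group from SQL*Plus connected to the Oracle ASM instance as the SYSASM user, with ORACLE_HOME set to the Grid home and ORACLE_SID set to the Oracle ASM instance (for example, +ASM1). The following is a minimal sketch only: the disk group name, redundancy level, and Solaris device path are hypothetical, and the disks must already be provisioned for Oracle ASM use:

$ sqlplus / as sysasm

SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/rdsk/c3t4d0s6';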

5.3 Using Older Oracle Database Versions with Grid Infrastructure

Review the following sections for information about using older Oracle Database releases with 11g release 2 (11.2) Oracle Grid Infrastructure installations:

5.3.1 General Restrictions for Using Older Oracle Database Versions

You can use Oracle Database release 9.2, release 10.x and release 11.1 with Oracle Clusterware 11g release 2 (11.2).

However, placing Oracle Database homes for releases prior to Oracle Database 11.2 on Oracle ACFS is not supported, because those earlier releases were not designed to use Oracle ACFS.

If you upgrade an existing version of Oracle Clusterware, then required configuration of existing databases is completed automatically. However, if you complete a new installation of Oracle Grid Infrastructure for a cluster, and then want to install a version of Oracle Database prior to 11.2, then you must complete additional manual configuration tasks.

Note:

Before you start an Oracle RAC or Oracle Database installation on an Oracle Clusterware release 11.2 installation, if you are upgrading from release 11.1.0.7, 11.1.0.6, or 10.2.0.4, then Oracle recommends that you check for the latest recommended patches for the release you are upgrading from, and install them as needed before the upgrade.

For more information on recommended patches, refer to "Oracle Upgrade Companion," which is available through Note 785351.1 on My Oracle Support:

https://support.oracle.com

You may also refer to Notes 756388.1 and 756671.1 for the current list of recommended patches for each release.

5.3.2 Using ASMCA to Administer Disk Groups for Older Database Versions

Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups when you install older Oracle databases and Oracle RAC databases on Oracle Grid Infrastructure installations. Starting with 11g release 2, Oracle ASM is installed as part of an Oracle Grid Infrastructure installation, with Oracle Clusterware. You can no longer use Database Configuration Assistant (DBCA) to perform administrative tasks on Oracle ASM.

5.3.3 Pinning Cluster Nodes for Oracle Database Release 10.x or 11.x

When Oracle Clusterware 11g release 2 (11.2) is installed on a cluster with no previous Oracle software version, it configures Oracle Database 11g release 2 (11.2) and later releases dynamically. However, dynamic configuration does not occur when you install an Oracle Database release 10.x or 11.1 on the cluster. Before installing a 10.x or 11.1 Oracle Database on an Oracle Clusterware 11g release 2 (11.2) cluster, you must establish a persistent configuration. Creating a persistent configuration for a node is called pinning a node.

Note:

During an upgrade, all cluster member nodes are pinned automatically, and no manual pinning is required for existing databases. This procedure is required only if you install older database versions after installing Oracle Grid Infrastructure release 11.2 software.

To pin a node in preparation for installing or using an older Oracle Database version, use Grid_home/bin/crsctl with the following command syntax, where nodes is a space-delimited list of one or more nodes in the cluster whose configuration you want to pin:

crsctl pin css -n nodes

For example, to pin nodes node3 and node4, log in as root and enter the following command:

# crsctl pin css -n node3 node4

To determine if a node is in a pinned or unpinned state, use Grid_home/bin/olsnodes with the following command syntax:

To list all pinned nodes:

olsnodes -t -n 

For example:

# /u01/app/11.2.0/grid/bin/olsnodes -t -n
node1 1       Pinned
node2 2       Pinned
node3 3       Pinned
node4 4       Pinned

To list the state of a particular node:

olsnodes -t -n node3

For example:

# /u01/app/11.2.0/grid/bin/olsnodes -t -n node3
node3 3       Pinned

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about pinning and unpinning nodes

5.3.4 Enabling the Global Services Daemon (GSD) for Oracle Database Release 9.2

By default, the Global Services daemon (GSD) is disabled. If you install Oracle Database 9i release 2 (9.2) on Oracle Grid Infrastructure for a Cluster 11g release 2 (11.2), then you must enable the GSD. Use the following commands to enable the GSD before you install Oracle Database release 9.2:

srvctl enable nodeapps -g
srvctl start nodeapps
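
After you run these commands, you can verify that the GSD resource is enabled and running on each node. The following check is a minimal example; output varies by environment:

srvctl status nodeapps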

5.3.5 Using the Correct LSNRCTL Commands

To administer 11g release 2 local and SCAN listeners using the lsnrctl command, set your $ORACLE_HOME environment variable to the path for the Oracle Grid Infrastructure home (Grid home). Do not attempt to use the lsnrctl commands from Oracle home locations for previous releases, as they cannot be used with the new release.
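
For example, assuming the Grid home is /u01/app/11.2.0/grid and the first SCAN listener uses the default name LISTENER_SCAN1 (both values are examples; substitute your own):

$ ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
$ $ORACLE_HOME/bin/lsnrctl status LISTENER_SCAN1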

5.4 Modifying Oracle Clusterware Binaries After Installation

After installation, if you need to modify the Oracle Clusterware configuration, then you must unlock the Grid home.

For example, if you want to apply a one-off patch, or if you want to modify an Oracle Exadata configuration to run IPC traffic over RDS on the interconnect instead of using the default UDP, then you must unlock the Grid home.

Caution:

Before relinking executables, you must shut down all executables that run in the Oracle home directory that you are relinking. In addition, shut down applications linked with Oracle shared libraries.

Unlock the home using the following procedure:

  1. Change directory to Grid_home/crs/install, where Grid_home is the path to your Grid home, and unlock the Grid home using the command rootcrs.pl -unlock -crshome Grid_home. For example, with the Grid home /u01/app/11.2.0/grid, enter the following commands:

    # cd /u01/app/11.2.0/grid/crs/install
    # perl rootcrs.pl -unlock -crshome /u01/app/11.2.0/grid
    
  2. Change user to the Oracle Grid Infrastructure software owner, and relink binaries using the command syntax make -f Grid_home/rdbms/lib/ins_rdbms.mk target, where Grid_home is the Grid home, and target is the binaries that you want to relink. For example, where the grid user is grid, $ORACLE_HOME is set to the Grid home, and where you are updating the interconnect protocol from UDP to IPC, enter the following command:

    # su grid
    $ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
    

    Note:

    To relink binaries, you can also change to the grid installation owner and run the command Grid_home/bin/relink.
  3. Relock the Grid home and restart the cluster using the following command:

    # perl rootcrs.pl -patch
    
  4. Repeat steps 1 through 3 on each cluster member node.