5 Fleet Patching and Provisioning and Maintenance
Fleet Patching and Provisioning is a software lifecycle management method for provisioning and maintaining Oracle homes.
Fleet Patching and Provisioning enables mass deployment and maintenance of standard operating environments for databases, clusters, and user-defined software types. With Fleet Patching and Provisioning, you can also install clusters and provision, patch, scale, and upgrade Oracle Grid Infrastructure and Oracle Database 11g release 2 (11.2), and later. Additionally, you can provision applications and middleware.
Fleet Patching and Provisioning operates in one of two modes:

- As a central server (Fleet Patching and Provisioning Server), which stores and manages standardized images, called gold images. You can deploy gold images to any number of nodes across a data center. You can use the deployed homes to create new clusters and databases, and to patch, upgrade, and scale existing installations.

  The server manages software homes on the cluster hosting the Fleet Patching and Provisioning Server itself, on Fleet Patching and Provisioning Clients, and on installations running Oracle Grid Infrastructure 11g release 2 (11.2.0.3 and 11.2.0.4) and 12c release 1 (12.1.0.2). The server can also manage installations running no grid infrastructure.

  A Fleet Patching and Provisioning Server can provision new installations and can manage existing installations without requiring any changes to them (no agent, daemon, or configuration prerequisites). Fleet Patching and Provisioning Servers also include capabilities for automatically sharing gold images among peer Fleet Patching and Provisioning Servers to support enterprises with geographically distributed data centers.

- As a client (Fleet Patching and Provisioning Client), which can be managed from the central Fleet Patching and Provisioning Server or directly by running commands on the Fleet Patching and Provisioning Client itself. Like the Fleet Patching and Provisioning Server, the Fleet Patching and Provisioning Client is a service built in to Oracle Grid Infrastructure and is available with Oracle Grid Infrastructure 12c release 2 (12.2.0.1), and later. The Fleet Patching and Provisioning Client service can retrieve gold images from the Fleet Patching and Provisioning Server, upload new images based on policy, and apply maintenance operations to itself.
For patching operations, a third option is available with Oracle Database and Oracle Grid Infrastructure 18c. The procedures for updating database and grid infrastructure homes have been modularized into independent automatons that are included with Oracle Database and Oracle Grid Infrastructure, and can be run locally without any central Fleet Patching and Provisioning Server in the architecture. This provides an immediate entry point to the capabilities of Fleet Patching and Provisioning as soon as you bring up an Oracle Database or cluster.
Fleet Patching and Provisioning offers the following advantages:

- Ensures standardization and enables high degrees of automation with gold images and managed lineage of deployed software.
- Minimizes the maintenance window by deploying new homes (gold images) out-of-place, without disrupting active databases or clusters.
- Simplifies maintenance by providing automatons that are invoked with an API across database versions and deployment models.
- Reduces maintenance risk with built-in validations and a dry-run mode to ensure that operations will succeed end-to-end.
- In the event of an issue, commands are resumable and restartable, further reducing the risk of maintenance operations.
- Minimizes and, in many cases, eliminates the impact of patching and upgrades, with features that include:
  - Zero-downtime database upgrade: a fully automated upgrade executed entirely within the deployment, with no extra nodes or external storage required.
  - Adaptive management of database sessions and OJVM during rolling patching.
  - Options for fine-grained management of consolidated deployments.
- The deployment and maintenance operations are extensible, allowing customizations to include environment-specific actions in the automated workflow.
Fleet Patching and Provisioning Automatons
- Zero-downtime database upgrade: Automates all of the steps involved in a database upgrade to minimize or even eliminate application downtime while upgrading an Oracle database. It also minimizes resource requirements and provides a fallback path in case the upgrade must be rolled back.

- Adaptive Oracle RAC rolling patching for OJVM deployments: In a clustered environment, the default approach of Fleet Patching and Provisioning for patching a database is Oracle RAC rolling patching. However, non-rolling patching may be required if the patched database home contains OJVM patches. In this case, Fleet Patching and Provisioning determines whether rolling patching is possible and, if so, performs it.

- Dry-run command evaluation: Before running any command, Fleet Patching and Provisioning checks various preconditions to ensure that the command will succeed. However, some conditions cannot be detected before a command runs. And, while Fleet Patching and Provisioning allows a failed command to be reverted or resumed after an error condition is corrected, it is preferable to address as many potential issues as possible before the command is run. The command evaluation mode tests the preconditions for a given command, without making any changes, and reports potential problems so that you can correct them before actually running the command.

- Independent automatons: Prior to Oracle Database 18c, performing any Fleet Patching and Provisioning operation (for example, switching a database home to a later version) required the presence of a central Fleet Patching and Provisioning Server. Beginning with Oracle Database 18c, key functionality can be performed independently, with no central Fleet Patching and Provisioning Server in the architecture.
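The dry-run pattern described above can be sketched in a few lines: run every precondition check without making changes, and report the failures so they can be fixed before the real run. The function and check names below are illustrative, not actual Fleet Patching and Provisioning internals.

```python
def evaluate_command(preconditions):
    """Run every precondition check without making any changes.

    Each precondition is a (description, check) pair, where check is a
    zero-argument callable returning True when the condition is satisfied.
    Returns the descriptions of the checks that failed.
    """
    problems = []
    for description, check in preconditions:
        if not check():
            problems.append(description)
    return problems

# Illustrative checks for a hypothetical provisioning command.
free_space_gb = 40
target_reachable = True

preconditions = [
    ("at least 50 GB free on target", lambda: free_space_gb >= 50),
    ("target node reachable", lambda: target_reachable),
]

problems = evaluate_command(preconditions)
print(problems)  # ['at least 50 GB free on target']
```

The key property, as in the real evaluation mode, is that every check runs to completion so all detectable problems are reported at once, and nothing on the target is modified.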
Note:
When you install Oracle Grid Infrastructure, the Fleet Patching and Provisioning Server is configured, by default, in local mode to support the local switch home capability. If you must configure the general Fleet Patching and Provisioning Server product, then you must first remove the current local-mode Fleet Patching and Provisioning Server by running the following commands as root:

# srvctl stop rhpserver

Ignore a message similar to "Fleet Patching and Provisioning Server is not running".

# srvctl remove rhpserver
Global Fleet Standardization and Management
- Sharing gold images between peer Fleet Patching and Provisioning Servers: Large enterprises typically host multiple data centers and, within each data center, there may be separate network segments. In the Fleet Patching and Provisioning architecture, one Fleet Patching and Provisioning Server operates on a set of targets within a given data center (or network segment of a data center). Therefore, each data center requires at least one Fleet Patching and Provisioning Server.

  While each data center may have some unique requirements in terms of the gold images that target servers will use, the goal of standardization is to use the same gold image across all data centers whenever possible. To that end, Fleet Patching and Provisioning supports peer-to-peer sharing of gold images to easily propagate gold images among multiple Fleet Patching and Provisioning Servers.

- Gold image drift detection and aggregation: After you provision a software home from a gold image, you may have to apply a patch directly to the deployed home. At this point the deployed home has drifted from the gold image. Fleet Patching and Provisioning provides two capabilities for monitoring and reporting drift:

  - Fleet Patching and Provisioning compares a specific home to its parent gold image and lists any patches that are applied to the home but that are not in the gold image.
  - Fleet Patching and Provisioning compares a specific gold image to all of its descendant homes and lists the aggregation of all patches applied to those homes that are not in the gold image. This provides a build specification for a new gold image that could be applied to all of the descendants of the original gold image, such that no patches will be lost from any of those deployments.

- Configuration collection and reporting: The Fleet Patching and Provisioning Server can collect and retain operating system configuration and the root file system contents of specified Fleet Patching and Provisioning Clients. If a Fleet Patching and Provisioning Client node is rendered unusable (for example, a user accidentally deletes or changes operating system configuration or the root file system), then it can be difficult to determine the problem and correct it. This feature automates the collection of relevant information, enabling simple restoration in the event of node failure.
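The two drift reports above amount to set operations over patch lists: per-home drift is a set difference, and the aggregate report is the union of those differences. This is a conceptual sketch, not the actual implementation:

```python
def home_drift(gold_image_patches, home_patches):
    """Patches applied to a deployed home but absent from its parent gold image."""
    return set(home_patches) - set(gold_image_patches)

def aggregate_drift(gold_image_patches, descendant_homes):
    """Union of drift across all descendant homes of a gold image.

    The result is a build specification for a new gold image that loses no
    patches from any deployment.
    """
    drift = set()
    for patches in descendant_homes:
        drift |= home_drift(gold_image_patches, patches)
    return drift

gold = ["p1", "p2"]
homes = [["p1", "p2", "p3"], ["p1", "p2", "p4"]]
print(sorted(aggregate_drift(gold, homes)))  # ['p3', 'p4']
```

A new gold image built from the original plus the aggregate drift ({p3, p4} here) can then replace every descendant home without losing a patch from any of them.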
Flexibility and Extensibility
- RESTful API: Fleet Patching and Provisioning provides a RESTful API for many common operations, including provisioning, patching, upgrading, and query operations.

  See Also: Oracle Database REST API Reference

- Customizable authentication: Host-to-host authentication in certain environments, particularly in compliance-conscious industries such as financials and e-commerce, often uses technologies and products that are not natively supported by Fleet Patching and Provisioning. This feature allows you to integrate Fleet Patching and Provisioning authentication with the mechanisms in use at your data center.

- Command scheduler: The ability to schedule and bundle automated tasks is essential for maintenance of a large database estate. Fleet Patching and Provisioning supports scheduling tasks such as provisioning software homes, switching to a new home, and scaling a cluster. Also, you can add a list of clients to a command, facilitating large-scale operations.

- Configurable connectivity: As security concerns and compliance requirements increase, so do the restrictions on connectivity across the intranets of many enterprises. You can configure the small set of ports used for communication between the Fleet Patching and Provisioning Server and its Clients, allowing low-impact integration into firewalled or audit-conscious environments.
Other Fleet Patching and Provisioning Features
- Zero-downtime upgrade: Automation of all of the upgrade steps involved minimizes or even eliminates application downtime while upgrading an Oracle Database. It also minimizes resource requirements and provides a fallback path in case you must roll back the upgrade. You can run a zero-downtime upgrade on certain versions of Oracle RAC and Oracle RAC One Node databases.

- Provision new server pools: The Fleet Patching and Provisioning Server can install and configure Oracle Grid Infrastructure (11g release 2 (11.2.0.4), and 12c release 1 (12.1.0.2) and release 2 (12.2)) on nodes that have no Oracle software inventory and can then manage those deployments with the full complement of Fleet Patching and Provisioning functionality.

- Provision and manage any software home: Fleet Patching and Provisioning enables you to create a gold image from any software home. You can then provision that software to any Fleet Patching and Provisioning Client or target as a working copy of a gold image. The software may be any binary that you will run on a Fleet Patching and Provisioning Client or target.

- Provision, scale, patch, and upgrade Oracle Grid Infrastructure: The Fleet Patching and Provisioning Server can provision Oracle Grid Infrastructure 11g release 2 (11.2.0.4) homes, and later, add or delete nodes from an Oracle Grid Infrastructure configuration, and patch and upgrade Oracle Grid Infrastructure homes. In addition, a rollback capability facilitates undoing a failed patch procedure. While patching Oracle Grid Infrastructure, you can use Fleet Patching and Provisioning to optionally patch any database homes hosted on the cluster.

- Provision, scale, patch, and upgrade Oracle Database: Using Fleet Patching and Provisioning, you can provision, scale, and patch Oracle Database 11g release 2 (11.2.0.4), and later. You can also upgrade Oracle Databases from 11g release 2 (11.2.0.3) to 11g release 2 (11.2.0.4); from 11g release 2 (11.2.0.4) to 12c release 1 (12.1.0.2); and from 11g release 2 (11.2.0.4) or 12c release 1 (12.1.0.2) to 12c release 2 (12.2).

  When you provision such software, Fleet Patching and Provisioning offers additional features for creating various types of databases (such as Oracle RAC, single instance, and Oracle Real Application Clusters One Node (Oracle RAC One Node) databases) on different types of storage, and other options, such as using templates and creating container databases (CDBs). The Fleet Patching and Provisioning Server can add nodes to an Oracle RAC configuration, and remove nodes from an Oracle RAC configuration. Fleet Patching and Provisioning also makes patching of database software more efficient, allowing for rapid and remote patching of the software, in most cases without any downtime for the database.

- Support for single-instance databases: You can use Fleet Patching and Provisioning to provision, patch, and upgrade single-instance databases running on clusters or Oracle Restart, or on single, standalone nodes.

- Advanced patching capabilities: When patching an Oracle Grid Infrastructure or Oracle Database home, Fleet Patching and Provisioning offers a batch mode that speeds the patching process by patching some or all nodes of a cluster in parallel, rather than sequentially.

  For Oracle Database homes, you can define disjoint sets of nodes. Each set of nodes is updated sequentially. By defining sets with reference to the database instances running on them, you can minimize the impact of rolling updates by ensuring that services are never taken completely offline. A "smartmove" option is available to help define the sets of batches to meet this goal.

  Integration with Application Continuity is another enhancement that helps eliminate the impact of maintenance. This provides the ability to gracefully drain and relocate services within a cluster, completely masking the maintenance from users.

- Notifications: The Fleet Patching and Provisioning Server is the central repository for the software homes available to the data center. Therefore, it is essential that administrators throughout the data center be aware of changes to the inventory that might impact their areas of responsibility.

  Fleet Patching and Provisioning enables you and other users to subscribe to image series events. Anyone subscribed is notified by email of any changes to the images available in a particular image series. Also, users can be notified by email when a working copy of a gold image is added to or deleted from a client.

- Custom workflow support: You can create actions for various Fleet Patching and Provisioning operations, such as importing images, adding or deleting working copies of gold images, and managing a software home. You can define different actions for each operation, and further differentiate by the type of image to which the operation applies. Actions that you define can be executed before or after the given operation, and are executed on the deployment the operation applies to, whether it is the Fleet Patching and Provisioning Server, a target that is not running a Fleet Patching and Provisioning Client, or a target that is running a Fleet Patching and Provisioning Client.

- Resume failed operations: If an operation, such as adding an image, provisioning a working copy of a gold image, or performing a scale, patch, or upgrade, fails, then Fleet Patching and Provisioning reports the error and stops. After the problem is corrected (for example, a directory permissions or ownership misconfiguration on a target node), you can rerun the RHPCTL command that failed, and it resumes from the point of failure. This avoids redoing any work that may have been completed prior to the failure.

- Audit command: The Fleet Patching and Provisioning Server records the execution of all Fleet Patching and Provisioning operations and their outcomes (success or failure). An audit mechanism enables you to query the audit log in a variety of dimensions, and also to manage its contents and size.
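The resume-from-point-of-failure behavior can be sketched as a checkpointed pipeline: each completed step is recorded, and a rerun after the problem is fixed skips the recorded steps. This is an illustration of the idea only; RHPCTL's internal mechanism is not documented here, and all step names are invented.

```python
def run_steps(steps, completed):
    """Run steps in order, skipping those already recorded in `completed`.

    `steps` is a list of (name, action) pairs; an action raises on failure.
    `completed` is a set that persists across runs (the checkpoint).
    """
    for name, action in steps:
        if name in completed:
            continue
        action()            # may raise; completed steps are never redone
        completed.add(name)

log = []
completed = set()
failures = {"copy home": 1}   # make the second step fail exactly once

def make_step(name):
    def action():
        if failures.get(name):
            failures[name] -= 1
            raise RuntimeError(f"{name} failed")
        log.append(name)
    return action

steps = [(n, make_step(n)) for n in ("stage image", "copy home", "relink")]

try:
    run_steps(steps, completed)
except RuntimeError:
    pass                       # operator fixes the problem, then reruns

run_steps(steps, completed)    # resumes at "copy home", skips "stage image"
print(log)                     # ['stage image', 'copy home', 'relink']
```

The second invocation redoes no work from before the failure, which is the property the documentation describes for rerunning a failed RHPCTL command.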
Note:
- Oracle does not support Fleet Patching and Provisioning on HP-UX or Windows operating systems.
- The Fleet Patching and Provisioning Server does not manage operating system images.
Fleet Patching and Provisioning Architecture
Conceptual information regarding Fleet Patching and Provisioning architecture.
The Fleet Patching and Provisioning architecture consists of a Fleet Patching and Provisioning Server and any number of Fleet Patching and Provisioning Clients and targets. Oracle recommends deploying the Fleet Patching and Provisioning Server in a multi-node cluster so that it is highly available.
The Fleet Patching and Provisioning Server cluster is a repository for all data, of which there are primarily two types:
- Gold images
- Metadata related to users, roles, permissions, and identities
The Fleet Patching and Provisioning Server acts as a central server for provisioning Oracle Database homes, Oracle Grid Infrastructure homes, and other application software homes, making them available to the cluster hosting the Fleet Patching and Provisioning Server and to the Fleet Patching and Provisioning Client clusters and targets.
Users operate on the Fleet Patching and Provisioning Server or Fleet Patching and Provisioning Client to request deployment of Oracle homes or to query gold images. When a user makes a request for an Oracle home, specifying a gold image, the Fleet Patching and Provisioning Client communicates with the Fleet Patching and Provisioning Server to pass on the request. The Fleet Patching and Provisioning Server processes the request by taking appropriate action to instantiate a copy of the gold image, and to make it available to the Fleet Patching and Provisioning Client cluster using available technologies such as Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and local file systems.
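The request flow in the preceding paragraph can be sketched as a toy model: a client forwards a request naming a gold image, and the server instantiates a copy and makes it available to that client. Class and method names here are invented for illustration; they are not an actual Fleet Patching and Provisioning API.

```python
class FPPServer:
    """Toy model of the central server: stores gold images, hands out copies."""

    def __init__(self):
        self.gold_images = {}          # image name -> image payload

    def add_image(self, name, payload):
        self.gold_images[name] = payload

    def provision(self, image_name, client):
        # Instantiate a copy of the gold image (the gold image itself is
        # never modified) and make it available to the requesting client.
        working_copy = dict(self.gold_images[image_name])
        client.homes[image_name] = working_copy
        return working_copy

class FPPClient:
    """Toy model of a client: forwards requests, receives working copies."""

    def __init__(self, server):
        self.server = server
        self.homes = {}

    def request_home(self, image_name):
        return self.server.provision(image_name, self)

server = FPPServer()
server.add_image("db19_gold", {"version": "19.0", "patches": ["p1"]})

client = FPPClient(server)
home = client.request_home("db19_gold")
print(home["version"])     # 19.0
```

The essential point mirrored here is that the client never receives the gold image itself, only an instantiated copy; in a real deployment that copy is surfaced through a technology such as Oracle ACFS or a local file system.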
The Fleet Patching and Provisioning Server communicates with Fleet Patching and Provisioning Clients (running Oracle Grid Infrastructure 12c release 2 (12.2.0.1) or later), and targets (a target is an installation with Oracle Grid Infrastructure 12c release 1 (12.1.0.2) or release 11.2, or with no Oracle Grid Infrastructure installed) using the following ports, several of which you can configure, as described in Table 5-1. Additionally, differences in ports used when communicating with Fleet Patching and Provisioning Clients versus targets are noted.
Table 5-1 Fleet Patching and Provisioning Communication Ports

| Client, Target, or Both | Protocol | Port | Purpose | Description |
|---|---|---|---|---|
| Target | TCP | 22 | SSH | Authentication-based operations involving clientless targets. |
| Client | TCP | 22 | SSH | Provisioning of Oracle Grid Infrastructure 12c release 2 (12.2.0.1) or later requires an SSH port. (Subsequent Fleet Patching and Provisioning Server/Client interactions use the JMX path.) |
| Client | TCP | 8896 | JMX registry and server port | For establishing Fleet Patching and Provisioning Server communication with Fleet Patching and Provisioning Clients. This port is configurable; reconfiguring it requires you to restart the service. |
| Client | UDP | 53 (must be open only on the Fleet Patching and Provisioning Server to accept incoming connections) | GNS port | GNS is used by Fleet Patching and Provisioning Clients to locate the Fleet Patching and Provisioning Server. You can configure GNS with or without zone delegation. |
| Both | TCP | One port for each RHPCTL command, chosen from the ephemeral port range or fixed by configuration. The progress listener port must be open to accept incoming connections on the server where RHPCTL is run (whether a Fleet Patching and Provisioning Server or Client). | Command progress listener | The Fleet Patching and Provisioning Server opens a random port from the ephemeral range to monitor progress on the client or target, or uses the fixed port that you specify. |
| Clients running Oracle Database 18c or Oracle Database 12c release 2 (12.2.0.1) with the January 2018 RU, or later | TCP | Six ports, which must be open on the Fleet Patching and Provisioning Client server to accept incoming connections for gold image provisioning | Gold image transfers to Oracle Database 18c clients and clients with the January 2018 RU | Transferring copies of gold images from the Fleet Patching and Provisioning Server to clients uses six ports chosen from the ephemeral range, or six ports from a range that you define. |
| Both | TCP/UDP | Fixed and configurable ports, as described in Table 5-2 | NFS (not required if Fleet Patching and Provisioning Client clusters are at release 18c (18.4) or later) | The Fleet Patching and Provisioning Server transfers software homes to targets and to Oracle Database 12c release 2 (12.2.0.1) pre-January 2018 RU clients using temporary NFS mount points. NFS is also used for remote command execution support on clients and targets (all versions). |
Table 5-2 Ports for NFS

| Port | Description |
|---|---|
| 2049, 111 | Fixed NFS ports |
| Six ports chosen from the ephemeral range (default), or defined in the NFS configuration | Configurable NFS ports |
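Several rows in the tables above default to ports "chosen from the ephemeral range." The sketch below demonstrates that mechanism: binding a socket to port 0 asks the operating system for an unused ephemeral port, which is what a process does when no fixed port or range is configured. This is illustrative Python, not Fleet Patching and Provisioning code.

```python
import socket

def pick_ephemeral_ports(count):
    """Bind `count` sockets to port 0 so the OS assigns unused ephemeral ports."""
    sockets = [socket.socket(socket.AF_INET, socket.SOCK_STREAM) for _ in range(count)]
    try:
        for s in sockets:
            s.bind(("127.0.0.1", 0))      # port 0 = let the OS choose
        # All sockets are bound concurrently, so the ports are distinct.
        return [s.getsockname()[1] for s in sockets]
    finally:
        for s in sockets:
            s.close()

# Six ports, as used for gold image transfers when no fixed range is defined.
ports = pick_ephemeral_ports(6)
print(ports)
```

In a firewalled environment you would instead configure a fixed range, so the six ports are known in advance and can be opened explicitly.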
Fleet Patching and Provisioning Server
The Fleet Patching and Provisioning Server is a highly available software provisioning system that uses Oracle Automatic Storage Management (Oracle ASM), Oracle Automatic Storage Management Cluster File System (Oracle ACFS), Grid Naming Service (GNS), and other components.
The Fleet Patching and Provisioning Server primarily acts as a central server for provisioning Oracle homes, making them available to Fleet Patching and Provisioning Clients and targets.
Features of the Fleet Patching and Provisioning Server:
- Efficiently stores gold images and image series for the managed homes, including separate binaries, and metadata related to users, roles, and permissions.
- Provides a list of available homes to clients upon request.
- Enables you to patch a software home once and then deploy the home to any Fleet Patching and Provisioning Client or any other target, instead of patching every site.
- Provides the ability to report on existing deployments.
- Deploys homes on physical servers and virtual machines.
- Notifies subscribers of changes to image series.
- Maintains an audit log of all RHPCTL commands run.
Fleet Patching and Provisioning Targets
Computers of which Fleet Patching and Provisioning is aware are known as targets.
Fleet Patching and Provisioning Servers can create new targets, and can also install and configure Oracle Grid Infrastructure on targets that have only an operating system installed. Subsequently, the Fleet Patching and Provisioning Server can provision database and other software on those targets, perform maintenance, and scale the target cluster, among many other operations. All Fleet Patching and Provisioning commands are run on the Fleet Patching and Provisioning Server. Targets running the Fleet Patching and Provisioning Client in Oracle Clusterware 12c release 2 (12.2), and later, may also run many of the Fleet Patching and Provisioning commands to request new software from the Fleet Patching and Provisioning Server and to initiate maintenance themselves, among other tasks.
Note:
If you have targets running the Fleet Patching and Provisioning Client in Oracle Clusterware prior to 12c release 2 (12.2), then you can import images using the RHPCTL utility without making them clients. For clients running in Oracle Clusterware 12c release 2 (12.2), and later, you must configure and enable Fleet Patching and Provisioning Clients to simplify connectivity and credential issues.

Fleet Patching and Provisioning Clients
The Fleet Patching and Provisioning Client is part of the Oracle Grid Infrastructure. Users operate on a Fleet Patching and Provisioning Client to perform tasks such as requesting deployment of Oracle homes and listing available gold images.
When a user requests an Oracle home specifying a gold image, the Fleet Patching and Provisioning Client communicates with the Fleet Patching and Provisioning Server to pass on the request. The Fleet Patching and Provisioning Server processes the request by instantiating a working copy of the gold image and making it available to the Fleet Patching and Provisioning Client using Oracle ACFS or a different local file system.
The Fleet Patching and Provisioning Client:
- Can use Oracle ACFS to store working copies of gold images, which can be rapidly provisioned as local homes; new homes can be quickly created or undone using Oracle ACFS snapshots.

  Note: Oracle supports using other local file systems besides Oracle ACFS.

- Provides a list of available homes from the Fleet Patching and Provisioning Server.

- Has full functionality in Oracle Clusterware 12c release 2 (12.2) and can communicate with Fleet Patching and Provisioning Servers from Oracle Clusterware 12c release 2 (12.2), or later.
Authentication Options for Fleet Patching and Provisioning Operations
Some RHPCTL commands show authentication choices as an optional parameter.
Specifying an authentication option is not required when running an RHPCTL command on a Fleet Patching and Provisioning Client, nor when running an RHPCTL command on the Fleet Patching and Provisioning Server and operating on a Fleet Patching and Provisioning Client, because the server and client establish a trusted relationship when the client is created, and authentication is handled internally each time a transaction takes place. (The only condition for server/client communication under which an authentication option must be specified is when the server is provisioning a new Oracle Grid Infrastructure deployment—in this case, the client does not yet exist.)
When an authentication option is required, you can choose among the following:

- Provide the root password (on stdin) for the target
- Provide the sudo user name, sudo binary path, and the password (on stdin) for the target
- Provide a password (either root or a sudouser's) non-interactively from the local encrypted credential store (using the -cred authentication parameter)
- Provide a path to the identity file stored on the Fleet Patching and Provisioning Server for passwordless SSH authentication (using the -auth sshkey option)
Passwordless Authentication Details

The -auth sshkey option relies on passwordless SSH user equivalence between crsusr on the Fleet Patching and Provisioning Server and root or a sudouser on the target.

Note:
The steps to create that equivalence are platform-dependent and so are not shown in detail here. For Linux, see the ssh-keygen command, to be run on the target, and the ssh-copy-id command, to be run on the Fleet Patching and Provisioning Server.
For example, if passwordless SSH has been established between crsusr on the Fleet Patching and Provisioning Server and root on the target node, nonRHPClient4004.example.com, and the key information has been saved on the Fleet Patching and Provisioning Server at /home/oracle/rhp/ssh-key/key, then the following command provisions a copy of the specified gold image to the target node with passwordless authentication:

$ rhpctl add workingcopy -workingcopy db12102_160607wc1 -image db12102_160607
  -targetnode nonRHPClient4004.example.com -path /u01/app/oracle/12.1/rhp/dbhome_1
  -oraclebase /u01/app/oracle -auth sshkey -arg1 user:root
  -arg2 identity_file:/home/oracle/rhp/ssh-key/key
For passwordless SSH established between crsusr on the Fleet Patching and Provisioning Server and a privileged user (other than root) on the target, the -auth portion of the command would be similar to the following:

-auth sshkey -arg1 user:ssh_user -arg2 identity_file:path_to_identity_file_on_RHPS
  -arg3 sudo_location:path_to_sudo_binary_on_target
Fleet Patching and Provisioning Roles
An administrator assigns roles to Fleet Patching and Provisioning users with access-level permissions defined for each role. Users on Fleet Patching and Provisioning Clients are also assigned specific roles. Fleet Patching and Provisioning includes basic built-in and composite built-in roles.
Basic Built-In Roles
The basic built-in roles and their functions are:
- GH_ROLE_ADMIN: An administrative role for everything related to roles. Users assigned this role are able to run rhpctl verb role commands.
- GH_SITE_ADMIN: An administrative role for everything related to Fleet Patching and Provisioning Clients. Users assigned this role are able to run rhpctl verb client commands.
- GH_SERIES_ADMIN: An administrative role for everything related to image series. Users assigned this role are able to run rhpctl verb series commands.
- GH_SERIES_CONTRIB: Users assigned this role can add images to a series using the rhpctl insertimage series command, or delete images from a series using the rhpctl deleteimage series command.
- GH_WC_ADMIN: An administrative role for everything related to working copies of gold images. Users assigned this role are able to run rhpctl verb workingcopy commands.
- GH_WC_OPER: A role that enables users to create a working copy of a gold image for themselves or others using the rhpctl add workingcopy command with the -user option (when creating for others). Users assigned this role do not have administrative privileges and can only administer the working copies of gold images that they create.
- GH_WC_USER: A role that enables users to create a working copy of a gold image using the rhpctl add workingcopy command. Users assigned this role do not have administrative privileges and can only delete working copies that they create.
- GH_IMG_ADMIN: An administrative role for everything related to images. Users assigned this role are able to run rhpctl verb image commands.
- GH_IMG_USER: A role that enables users to create an image using the rhpctl add | import image commands. Users assigned this role do not have administrative privileges and can only delete images that they create.
- GH_IMG_TESTABLE: A role that enables users to add a working copy of an image that is in the TESTABLE state. Users assigned this role must also be assigned either the GH_WC_ADMIN role or the GH_WC_USER role to add a working copy.
- GH_IMG_RESTRICT: A role that enables users to add a working copy from an image that is in the RESTRICTED state. Users assigned this role must also be assigned either the GH_WC_ADMIN role or the GH_WC_USER role to add a working copy.
- GH_IMG_PUBLISH: Users assigned this role can promote an image to another state or retract an image from the PUBLISHED state to either the TESTABLE or RESTRICTED state.
- GH_IMG_VISIBILITY: Users assigned this role can modify access to promoted or published images using the rhpctl allow | disallow image commands.
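Role assignment itself is performed with RHPCTL. The sketch below assumes the rhpctl grant role and rhpctl revoke role verbs, and the user and cluster names are illustrative; verify the exact syntax against the RHPCTL command reference for your release.

```shell
# Grant an operator-level role to a user on a specific client cluster
# (user name and cluster name are illustrative placeholders).
$ rhpctl grant role -role GH_WC_USER -user mjones -client prodcluster01

# Revoke the role later if the responsibility changes.
$ rhpctl revoke role -role GH_WC_USER -user mjones -client prodcluster01
```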
Composite Built-In Roles
The composite built-in roles and their functions are:
- GH_SA: The Oracle Grid Infrastructure user on a Fleet Patching and Provisioning Server automatically inherits this role.

  The GH_SA role includes the following basic built-in roles: GH_ROLE_ADMIN, GH_SITE_ADMIN, GH_SERIES_ADMIN, GH_SERIES_CONTRIB, GH_WC_ADMIN, GH_IMG_ADMIN, GH_IMG_TESTABLE, GH_IMG_RESTRICT, GH_IMG_PUBLISH, and GH_IMG_VISIBILITY.

- GH_CA: The Oracle Grid Infrastructure user on a Fleet Patching and Provisioning Client automatically inherits this role.

  The GH_CA role includes the following basic built-in roles: GH_SERIES_ADMIN, GH_SERIES_CONTRIB, GH_WC_ADMIN, GH_IMG_ADMIN, GH_IMG_TESTABLE, GH_IMG_RESTRICT, GH_IMG_PUBLISH, and GH_IMG_VISIBILITY.

- GH_OPER: This role includes the following built-in roles: GH_WC_OPER, GH_SERIES_ADMIN, GH_IMG_TESTABLE, GH_IMG_RESTRICT, and GH_IMG_USER. Users assigned this role can delete only images that they have created.
Consider a gold image called G1 that is available on the Fleet Patching and Provisioning Server.

Further consider that a user, U1, on a Fleet Patching and Provisioning Client, Cl1, has the GH_WC_USER role. If U1 requests to provision an Oracle home based on the gold image G1, then U1 can do so, because of the permissions granted by the GH_WC_USER role. If U1 requests to delete G1, however, then that request would be denied because the GH_WC_USER role does not have the necessary permissions.
The Fleet Patching and Provisioning Server can associate user-role mappings to the Fleet Patching and Provisioning Client. After the Fleet Patching and Provisioning Server delegates user-role mappings, the Fleet Patching and Provisioning Client can then modify user-role mappings on the Fleet Patching and Provisioning Server for all users that belong to the Fleet Patching and Provisioning Client. This is implied by the fact that only the Fleet Patching and Provisioning Server qualifies user IDs from a Fleet Patching and Provisioning Client site with the client cluster name of that site. Thus, the Fleet Patching and Provisioning Client CL1
will not be able to update user mappings of a user on CL2
, where CL2
is the cluster name of a different Fleet Patching and Provisioning Client.
Fleet Patching and Provisioning Images
By default, when you create a gold image using either rhpctl import image or rhpctl add image, the image is ready to provision working copies. However, under certain conditions, you may want to restrict access to images and require someone to test or validate the image before making it available for general use.
You can also create a set of gold images on the Fleet Patching and Provisioning Server that can be collectively categorized as a gold image series. The images in a series relate to each other in some way, such as identical release versions, gold images published by a particular user, or images for a particular department within an organization.
Related Topics
Gold Image Distribution Among Fleet Patching and Provisioning Servers
Fleet Patching and Provisioning can automatically share and synchronize gold images between Fleet Patching and Provisioning Servers.
A Fleet Patching and Provisioning Server (RHPS_1, for example) cannot register a peer Fleet Patching and Provisioning Server if that peer has the same name as a Fleet Patching and Provisioning Client or target within the management domain of RHPS_1.
$ rhpctl query peerserver
$ rhpctl query image -server server_cluster_name
The preceding command displays all images on a specific peer Fleet Patching and Provisioning Server. Additionally, you can specify a peer server along with the -image image_name parameter to display details of a specific image on a specific peer server.
A Fleet Patching and Provisioning Server can have multiple peers. Oracle does not support chained relationships between peers, however: if RHPS_1 is a peer of RHPS_2, and RHPS_2 is also a peer of RHPS_3, then no relationship is established or implied between RHPS_1 and RHPS_3, although you can make them peers if you want.
$ rhpctl instantiate image -server server_cluster_name
Running the rhpctl instantiate image command activates an auto-update mechanism. From that point on, when you create gold images on a peer Fleet Patching and Provisioning Server (such as RHPS_2), they are candidates for being automatically copied to the Fleet Patching and Provisioning Server that performed the instantiate operation (such as RHPS_1). Whether a new gold image is automatically copied depends on that image's relevance to any instantiate parameters that you may include in the command:
-
-all: Creates an automatic push for all gold images created on RHPS_2 to RHPS_1.
-
-image image_name: Creates an automatic push for all new descendant gold images of the named image created on RHPS_2 to RHPS_1. A descendant of the named image is an image that is created on RHPS_2 using the rhpctl add image command.
-
-series series_name: Creates an automatic push for all gold images added to the named series on RHPS_2 to RHPS_1.
-
-imagetype image_type: Creates an automatic push for all gold images created of the named image type on RHPS_2 to RHPS_1.
To stop receiving updates that were established by the rhpctl instantiate image command, run rhpctl uninstantiate image and specify the peer Fleet Patching and Provisioning Server and one of the following: all, image name, image series name, or image type.
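For example, assuming a peer server named RHPS_2 and a hypothetical series named DB19_SERIES, the following commands start and later stop automatic updates for that series:
$ rhpctl instantiate image -server RHPS_2 -series DB19_SERIES
$ rhpctl uninstantiate image -server RHPS_2 -series DB19_SERIES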
$ rhpctl unregister server -server server_cluster_name
Fleet Patching and Provisioning Server Auditing
The Fleet Patching and Provisioning Server records the execution of all Fleet Patching and Provisioning operations, and also records whether those operations succeeded or failed.
An audit mechanism enables administrators to query the audit log in a variety of dimensions, and also to manage its contents and size.
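As a sketch of the audit interface (the verbs shown follow the RHPCTL command reference, but exact options vary by release), you can query the log and cap its size:
$ rhpctl query audit -user scott
$ rhpctl modify audit -maxrecord 2000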
Fleet Patching and Provisioning Notifications
The Fleet Patching and Provisioning Server is the central repository for the software homes available to the data center. Therefore, it is essential for administrators throughout the data center to be aware of changes to the inventory that may impact their areas of responsibility.
You can create subscriptions to image series events. Fleet Patching and Provisioning notifies a subscribed role or number of users by email of any changes to the images available in the series, including addition or removal of an image. Each series may have a unique group of subscribers.
Also, when a working copy of a gold image is added to or deleted from a target, the owner of the working copy and any additional users can be notified by email. If you want to enable notifications for additional Fleet Patching and Provisioning events, you can create a user-defined action as described in the next section.
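For example, to subscribe a user to email notifications for an image series (the series and user names here are hypothetical):
$ rhpctl subscribe series -series DB19_SERIES -user scott
$ rhpctl unsubscribe series -series DB19_SERIES -user scott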
Fleet Patching and Provisioning Implementation
Implementing Fleet Patching and Provisioning involves creating a Fleet Patching and Provisioning Server, adding gold images to the server, and creating working copies of gold images to provision software.
After you install and configure Oracle Clusterware, you can configure and start using Fleet Patching and Provisioning. You must create a Fleet Patching and Provisioning Server where you create and store gold images of database and other software homes.
Creating a Fleet Patching and Provisioning Server
The Fleet Patching and Provisioning Server uses a repository that you create in an Oracle ACFS file system in which you store all the software homes that you want to make available to clients and targets.
To create a Fleet Patching and Provisioning Server:
After you start the Fleet Patching and Provisioning Server, use the Fleet Patching and Provisioning Control (RHPCTL) utility to further manage Fleet Patching and Provisioning.
Adding Gold Images to the Fleet Patching and Provisioning Server
Use RHPCTL to add gold images for later provisioning of software.
The Fleet Patching and Provisioning Server stores and serves gold images of software homes. These images must be instantiated on the Fleet Patching and Provisioning Server.
Note:
Images are read-only, and you cannot run programs from them. You cannot directly use images as software homes; to create a usable software home, you must create a working copy of a gold image.
You can import software to the Fleet Patching and Provisioning Server using any one of the following methods:
-
You can import an image from an installed home on the Fleet Patching and Provisioning Server using the following command:
rhpctl import image -image image_name -path path_to_installed_home [-imagetype ORACLEDBSOFTWARE | ORACLEGISOFTWARE | ORACLEGGSOFTWARE | SOFTWARE]
-
You can import an image from an installed home on a Fleet Patching and Provisioning Client, using the following command run from the Fleet Patching and Provisioning Client:
rhpctl import image -image image_name -path path_to_installed_home
-
You can create an image from an existing working copy using the following command:
rhpctl add image -image image_name -workingcopy working_copy_name
Use the first two commands in the preceding list to seed the image repository, and to add additional images over time. Use the third command on the Fleet Patching and Provisioning Server as part of the workflow for creating a gold image that includes patches applied to a pre-existing gold image.
The preceding three commands also create an Oracle ACFS file system in the Fleet Patching and Provisioning root directory, similar to the following:
/u01/rhp/images/images/RDBMS_121020617524
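For example, to seed the repository from an existing Oracle Database home (the image name and path are illustrative):
$ rhpctl import image -image db19_base -path /u01/app/oracle/product/19.0.0/dbhome_1 -imagetype ORACLEDBSOFTWARE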
Related Topics
Image State
You can set the state of an image to TESTABLE or RESTRICTED so that only users with the GH_IMG_TESTABLE or GH_IMG_RESTRICT roles can provision working copies from this image. Once the image has been tested or validated, you can change the state and make the image available for general use by running the rhpctl promote image -image image_name -state PUBLISHED command. The default image state is PUBLISHED when you add a new gold image, but you can optionally specify a different state with the rhpctl add image and rhpctl import image commands.
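For example, to create an image in a restricted state for validation and publish it afterward (the image names, and the -state parameter on rhpctl add image, are assumed for illustration):
$ rhpctl add image -image db19_jul -workingcopy wc_db19_jul -state TESTABLE
$ rhpctl promote image -image db19_jul -state PUBLISHED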
Image Series
An image series is a convenient way to group different gold images into a logical sequence.
Fleet Patching and Provisioning treats each image as an independent entity with respect to other images. No relationship is assumed between images, even if they follow some specific nomenclature. The image administrator may choose to name images in a logical manner that makes sense to the user community, but this does not create any management grouping within the Fleet Patching and Provisioning framework.
Use the rhpctl add series command to create an image series and associate one or more images to this series. The list of images in an image series is an ordered list. Use the rhpctl insertimage series and rhpctl deleteimage series commands to add and delete images in an image series. You can also change the order of images in a series using these commands.
The insertimage and deleteimage commands do not instantiate or delete actual gold images but only change the list. Also, an image can belong to more than one series (or to no series at all).
Image Type
When you add or import a gold image, you must specify an image type.
- ORACLEDBSOFTWARE
- ORACLEGISOFTWARE
- ORACLEGGSOFTWARE
- SOFTWARE
Every gold image must have an image type, and you can create your own image types. A new image type must be based on one of the built-in types. The image type directs Fleet Patching and Provisioning to apply its capabilities for managing Oracle Grid Infrastructure and Oracle Database homes. Fleet Patching and Provisioning also uses image type to organize the custom workflow support framework.
Creating a Custom Image Type
Use the rhpctl add imagetype
command to create custom image types.
For example, to create an image type called DBTEST, which is based on the ORACLEDBSOFTWARE image type:
$ rhpctl add imagetype -imagetype DBTEST -basetype ORACLEDBSOFTWARE
Note:
When you create an image type that is based on an existing image type, the new image type does not inherit any user actions (for custom workflow support) from the base type.
Provisioning Copies of Gold Images
Use RHPCTL to provision copies of gold images to Fleet Patching and Provisioning Servers, Clients, and targets.
After you create and import a gold image, you can provision software by adding a copy of the gold image (called a working copy) on the Fleet Patching and Provisioning Server, on a Fleet Patching and Provisioning Client, or a target. You can run the software provisioning command on either the Server or a Client.
Note:
-
The directory you specify in the -path parameter must be empty.
-
You can re-run the provisioning command in case of an interruption or failure due to system or user errors. After you fix the reported errors, re-run the command and it will resume from the point of failure.
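A minimal provisioning sketch, assuming a gold image named db19_base and an empty local path:
$ rhpctl add workingcopy -workingcopy wc_db19 -image db19_base -storagetype LOCAL -path /u01/app/oracle/product/19.0.0/dbhome_2
If this command is interrupted, re-running it resumes from the point of failure.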
Related Topics
User Group Management in Fleet Patching and Provisioning
When you create a working copy of a gold image as part of a move or upgrade operation, Fleet Patching and Provisioning configures the operating system groups in the new working copy to match those of the source software home (either the unmanaged or the managed home from which you move or upgrade).
When you create a gold image of the SOFTWARE image type, any user groups in the source are not inherited, and images of this type never contain user group information. When you provision a working copy from a SOFTWARE gold image using the rhpctl add workingcopy command, you can optionally configure user groups in the working copy using the -groups parameter.
The rhpctl move database, rhpctl move gihome, rhpctl upgrade database, and rhpctl upgrade gihome commands all require you to specify a source home (either an unmanaged home or a managed home (working copy) that you provisioned using Fleet Patching and Provisioning), and a destination home (which must be a working copy).
When you have provisioned the destination home using the rhpctl add workingcopy command, prior to performing a move or upgrade operation, you must ensure that the groups configured in the source home match those in the destination home. Fleet Patching and Provisioning configures the groups as part of the add operation.
When you create a gold image of either the ORACLEGISOFTWARE or the ORACLEDBSOFTWARE image type from a source software home (using the rhpctl import image command) or from a working copy (using the rhpctl add image command), the gold image inherits the Oracle user groups that were configured in the source. You cannot override this feature.
You can define user groups for ORACLEGISOFTWARE and ORACLEDBSOFTWARE working copies using the rhpctl add workingcopy command, depending on the image type and user group, as discussed in the subsequent sections.
This section describes how Fleet Patching and Provisioning manages user group configuration, and how the -groups command-line option of rhpctl add workingcopy functions.
ORACLEGISOFTWARE (Oracle Grid Infrastructure 11g release 2 (11.2), and 12c release 1 (12.1) and release 2 (12.2))
When you provision an Oracle Grid Infrastructure working copy of a gold image, the groups are set in the working copy according to the type of provisioning (whether regular provisioning or software only, and with or without the -local parameter), and whether you specify the -groups parameter with rhpctl add workingcopy. You can define OSDBA and OSASM user groups in Oracle Grid Infrastructure software with either the -softwareonly command parameter or by using a response file with the rhpctl add workingcopy command.
If you are provisioning only the Oracle Grid Infrastructure software using the -softwareonly command parameter, then you cannot use the -groups parameter, and Fleet Patching and Provisioning obtains OSDBA and OSASM user group information from the active Grid home.
If you use the -local command parameter (which is only valid when you use the -softwareonly command parameter) with rhpctl add workingcopy, then Fleet Patching and Provisioning takes the values of the groups from the command line (using the -groups parameter) or uses the default values, which Fleet Patching and Provisioning obtains from the osdbagrp binary of the gold image.
If none of the preceding applies, then Fleet Patching and Provisioning uses the installer default user group.
If you are provisioning and configuring a working copy using information from a response file, then Fleet Patching and Provisioning:
-
Uses the value of the user group from the command line, if provided, for OSDBA or OSASM, or both.
-
If you provide no value on the command line, retrieves the user group information defined in the response file.
If you are defining the OSOPER Oracle group, then, again, you can either use the -softwareonly command parameter or use a response file with the rhpctl add workingcopy command.
If you use the -softwareonly command parameter, then you can provide the value on the command line (using the -groups parameter) or leave the user group undefined.
If you are provisioning and configuring a working copy of a gold image using information from a response file, then you can provide the value on the command line, use the information contained in the response file, or leave the OSOPER Oracle group undefined.
ORACLEDBSOFTWARE (Oracle Database 11g release 2 (11.2), and 12c release 1 (12.1) and release 2 (12.2))
If you are provisioning a working copy of Oracle Database software and you want to define Oracle groups, then use the -groups command parameter with the rhpctl add workingcopy command. Oracle groups available in the various Oracle Database releases are as follows:
-
Oracle Database 11g release 2 (11.2)
- OSDBA
- OSOPER
-
Oracle Database 12c release 1 (12.1)
- OSDBA
- OSOPER
- OSBACKUP
- OSDG
- OSKM
-
Oracle Database 12c release 2 (12.2)
- OSDBA
- OSOPER
- OSBACKUP
- OSDG
- OSKM
- OSRAC
Regardless of which of the preceding groups you are defining (except for OSOPER), Fleet Patching and Provisioning takes the values of the groups from the command line (using the -groups parameter) or uses the default values, which Fleet Patching and Provisioning obtains from the osdbagrp binary of the gold image.
If any group picked up from the osdbagrp binary is not in the list of groups to which the database user belongs (given by the id command), then Fleet Patching and Provisioning uses the installer default user group. Otherwise, the database user is the user running the rhpctl add workingcopy command.
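For example, to set Oracle Database groups explicitly when provisioning a working copy (the group mappings, image name, and path are illustrative, and the exact -groups value syntax is an assumption; any group you omit falls back to the osdbagrp defaults described above):
$ rhpctl add workingcopy -workingcopy wc_db19 -image db19_base -groups "OSDBA=dba,OSBACKUP=backupdba,OSDG=dgdba,OSKM=kmdba" -storagetype LOCAL -path /u01/app/oracle/product/19.0.0/dbhome_2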
Storage Options for Provisioned Software
Choose one of three storage options where Fleet Patching and Provisioning stores working copies of gold images.
When you provision software using the rhpctl add workingcopy command, you can choose from three storage options where Fleet Patching and Provisioning places that software:
-
In an Oracle ACFS shared file system managed by Fleet Patching and Provisioning (for database homes only)
-
In a local file system not managed by Fleet Patching and Provisioning
Using the rhpctl add workingcopy command with the -storagetype and -path parameters, you can choose where you store provisioned working copies. The applicability of the parameters depends on whether the node (or nodes) to which you are provisioning the working copy is a Fleet Patching and Provisioning Server, a Fleet Patching and Provisioning Client, or a non-Fleet Patching and Provisioning client. You can choose from the following values for the -storagetype parameter:
-
RHP_MANAGED: Choosing this value, which is available for Fleet Patching and Provisioning Servers and Fleet Patching and Provisioning Clients, stores working copies in an Oracle ACFS shared file system. The -path parameter is not used with this option because Fleet Patching and Provisioning manages the storage option.
Notes:
-
You cannot store Oracle Grid Infrastructure homes in RHP_MANAGED storage.
-
Oracle recommends using the RHP_MANAGED storage type, which is available on Fleet Patching and Provisioning Servers, and on Clients configured with an Oracle ASM disk group.
-
If you provision working copies on a Fleet Patching and Provisioning Server, then you do not need to specify the -storagetype option because it will default to RHP_MANAGED.
-
If you choose to provision working copies on a Fleet Patching and Provisioning Client, and you do not specify the -path parameter, then the storage type defaults to RHP_MANAGED only if there is an Oracle ASM disk group on the client. Otherwise the command will fail. If you specify a location on the client for the -path parameter, then the storage type defaults to LOCAL with or without an Oracle ASM disk group.
-
LOCAL: Choosing this value stores working copies in a local file system that is not managed by Fleet Patching and Provisioning. You must specify a path to the file system on the Fleet Patching and Provisioning Server, Fleet Patching and Provisioning Client, or non-Fleet Patching and Provisioning client, or to the Oracle ASM disk group on the Fleet Patching and Provisioning Client.
In cases where you specify the -path parameter, if the file system is shared among all of the nodes in the cluster, then the working copy gets created on this shared storage. If the file system is not shared, then the working copy gets created in the location of the given path on every node in the cluster.
Note:
The directory you specify in the -path parameter must be empty.
Related Topics
Provisioning for a Different User
If you want a user other than the user running the command to own the provisioned software, then use the -user parameter of the rhpctl add workingcopy command.
When the provisioning is completed, all files and directories of the provisioned software are owned by the user you specified. Permissions on files on the remotely provisioned software are the same as the permissions that existed on the gold image from where you provisioned the application software.
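For example, to provision a home owned by another operating system user (the user, image, and path names are hypothetical):
$ rhpctl add workingcopy -workingcopy wc_appdb -image db19_base -user appdba -storagetype LOCAL -path /u01/app/appdba/product/19.0.0/dbhome_1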
Propagating Images Between Fleet Patching and Provisioning Servers
With automatic image propagation, you can set up automated copies of software images across different peer Fleet Patching and Provisioning Servers. Gold images that you register at one site are copied to peer Fleet Patching and Provisioning Servers.
Consider two Fleet Patching and Provisioning Servers, RHPS-A and RHPS-B, where RHPS-A is the source and RHPS-B is the destination for software images.
Oracle Grid Infrastructure Management
The Fleet Patching and Provisioning Server provides an efficient and secure platform for the distribution of Oracle Grid Infrastructure homes to targets and Fleet Patching and Provisioning Clients.
Also, Fleet Patching and Provisioning Clients have the ability to fetch Oracle Grid Infrastructure homes from the Fleet Patching and Provisioning Server.
Oracle Grid Infrastructure homes are distributed in the form of working copies of gold images. After a working copy has been provisioned, Fleet Patching and Provisioning can optionally configure Oracle Grid Infrastructure. This gives Fleet Patching and Provisioning the ability to create an Oracle Grid Infrastructure installation on a group of one or more nodes that initially do not have Oracle Grid Infrastructure installed.
Fleet Patching and Provisioning also has commands for managing Oracle Grid Infrastructure homes, such as switching to a patched home or upgrading to a new Oracle Grid Infrastructure version. These are both single commands that orchestrate the numerous steps involved. Reverting to the original home is just as simple. Also, Fleet Patching and Provisioning can add or delete nodes from an Oracle Grid Infrastructure configuration.
About Deploying Oracle Grid Infrastructure Using Fleet Patching and Provisioning (FPP)
Fleet Patching and Provisioning (FPP) is a software lifecycle management method for provisioning and maintaining Oracle homes. Fleet Patching and Provisioning enables mass deployment and maintenance of standard operating environments for databases, clusters, and user-defined software types.
Note:
Starting with Oracle Grid Infrastructure 19c, the feature formerly known as Rapid Home Provisioning (RHP) is now Fleet Patching and Provisioning (FPP).
Fleet Patching and Provisioning enables you to install clusters, and provision, patch, scale, and upgrade Oracle Grid Infrastructure, Oracle Restart, and Oracle Database homes. The supported versions are 11.2, 12.1, 12.2, 18c, and 19c. You can also provision applications and middleware using Fleet Patching and Provisioning.
Fleet Patching and Provisioning is a service in Oracle Grid Infrastructure that you can use in either of the following modes:
-
Central Fleet Patching and Provisioning Server
The Fleet Patching and Provisioning Server stores and manages standardized images, called gold images. Gold images can be deployed to any number of nodes across the data center. You can create new clusters and databases on the deployed homes and can use them to patch, upgrade, and scale existing installations.
The Fleet Patching and Provisioning Server can manage the following types of installations:
-
Software homes on the cluster hosting the Fleet Patching and Provisioning Server itself.
-
Fleet Patching and Provisioning Clients running Oracle Grid Infrastructure 12c Release 2 (12.2), 18c, and 19c.
-
Installations running Oracle Grid Infrastructure 11g Release 2 (11.2) and 12c Release 1 (12.1).
-
Installations running without Oracle Grid Infrastructure.
The Fleet Patching and Provisioning Server can provision new installations and can manage existing installations without requiring any changes to the existing installations. The Fleet Patching and Provisioning Server can automatically share gold images among peer servers to support enterprises with geographically distributed data centers.
-
-
Fleet Patching and Provisioning Client
The Fleet Patching and Provisioning Client can be managed from the Fleet Patching and Provisioning Server, or directly by executing commands on the client itself. The Fleet Patching and Provisioning Client is a service built into Oracle Grid Infrastructure and is available in Oracle Grid Infrastructure 12c Release 2 (12.2) and later releases. The Fleet Patching and Provisioning Client can retrieve gold images from the Fleet Patching and Provisioning Server, upload new images based on policy, and apply maintenance operations to itself.
Fleet Patching and Provisioning
Deploying Oracle software using Fleet Patching and Provisioning has the following advantages:
-
Ensures standardization and enables high degrees of automation with gold images and managed lineage of deployed software.
-
Minimizes downtime by deploying new homes as images (called gold images) out-of-place, without disrupting active databases or clusters.
-
Simplifies maintenance by providing automatons which are invoked with a simple, consistent API across database versions and deployment models.
-
Reduces maintenance risk with built-in validations and a “dry run” mode to test the operations.
-
Enables you to resume or restart the commands in the event of an unforeseen issue, reducing the risk of maintenance operations.
-
Minimizes and often eliminates the impact of patching and upgrades, with features that include:
-
Zero-downtime database upgrade with fully automated upgrade, executed entirely within the deployment without requiring any extra nodes or external storage.
-
Adaptive management of database sessions and OJVM during rolling patching.
-
Options for management of consolidated deployments.
-
-
The deployment and maintenance operations enable customizations to include environment-specific actions into the automated workflow.
Related Topics
See Also:
Oracle Clusterware Administration and Deployment Guide for information about setting up the Fleet Patching and Provisioning Server and Client, and for creating and using gold images for provisioning and patching Oracle Grid Infrastructure and Oracle Database homes.
Provisioning Oracle Grid Infrastructure Software
Fleet Patching and Provisioning has several methods to provision and, optionally, configure Oracle Grid Infrastructure and Oracle Restart grid infrastructure homes.
Use the rhpctl add workingcopy command to install and configure Oracle Grid Infrastructure, and to enable simple and repeatable creation of standardized deployments.
You can also provision the Oracle Grid Infrastructure software without configuring it by using the -softwareonly parameter of the rhpctl add workingcopy command. This provisions but does not activate the new Grid home, so that when you are ready to switch to that new home, you can do so with a single command.
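A software-only Grid home provisioning sketch (the working copy, image, and path names are illustrative, and -oraclebase is assumed here as the parameter naming the Oracle base location):
$ rhpctl add workingcopy -workingcopy wc_gi19_jul -image gi19_jul -softwareonly -path /u01/app/19.0.0/grid_jul -oraclebase /u01/app/grid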
Patching Oracle Grid Infrastructure Software
Fleet Patching and Provisioning provides three methods to patch Oracle Grid Infrastructure software homes: rolling, non-rolling, and in batches.
You can also perform this operation using the independent automaton in an environment where no Fleet Patching and Provisioning Server is present. In this case, the source and destination homes are not working copies of gold images, but are two installed homes that you deployed with some method other than using Fleet Patching and Provisioning.
This section includes the following topics:
Related Topics
Patching Oracle Grid Infrastructure Using the Rolling Method
The rolling method for patching Oracle Grid Infrastructure is the default method.
The rolling method moves the Grid home with the rhpctl move gihome command (an atomic operation), which returns after the Oracle Grid Infrastructure stack on each node has been restarted on the new home. Nodes are restarted sequentially, so that only one node at a time will be offline, while all other nodes in the cluster remain online.
Notes:
-
You cannot move the Grid home to a home that Fleet Patching and Provisioning does not manage. Therefore, rollback (to the original home) applies only to moves between two working copies. This restriction does not apply when using the independent automaton since it operates on unmanaged homes only.
-
You can delete the source working copy at any time after moving a Grid home. Once you delete the working copy, however, you cannot perform a rollback. Also, use the rhpctl delete workingcopy command (as opposed to rm, for example) to remove the source working copy to keep the Fleet Patching and Provisioning inventory correct.
-
If you use the -abort parameter to terminate the patching operation, then Fleet Patching and Provisioning does not clean up or undo any of the patching steps. The cluster, databases, or both may be in an inconsistent state because all nodes are not patched.
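A minimal rolling patch sketch, moving a Grid home from the current working copy to a patched one (the working copy names are hypothetical):
$ rhpctl move gihome -sourcewc wc_gi19 -destwc wc_gi19_jul
Because rollback applies only to moves between two working copies, you can typically roll back by reversing the source and destination working copies in the same command.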
Patching Oracle Grid Infrastructure Using the Non-Rolling Method
You can use the -nonrolling parameter with the rhpctl move gihome command, which restarts the Oracle Grid Infrastructure stack on all nodes in parallel.
Patching Oracle Grid Infrastructure Using Batches
The third patching method is to sequentially process batches of nodes, with a number of nodes in each batch being restarted in parallel.
User-Defined Batches
When you use this method of patching, the first time you run the rhpctl move gihome command, you must specify the source home, the destination home, the batches, and other options, as needed. The command terminates after the first node restarts.
To patch Oracle Grid Infrastructure using batches that you define:
-
Define a list of batches on the command line and begin the patching process, as in the following example:
$ rhpctl move gihome -sourcewc wc1 -destwc wc2 -batches "(n1),(n2,n3),(n4)"
The preceding command example initiates the move operation, and terminates and reports successful when the Oracle Grid Infrastructure stack restarts in the first batch. Oracle Grid Infrastructure restarts the batches in the order you specified in the -batches parameter.
In the command example, node n1 forms the first batch, nodes n2 and n3 form the second batch, and node n4 forms the last batch. The command defines the source working copy as wc1 and the patched (destination) working copy as wc2.
Notes:
You can specify batches such that singleton services (policy-managed singleton services or administrator-managed services running on one instance) are relocated between batches and non-singleton services remain partially available during the patching process.
-
You must process the next batch by running the rhpctl move gihome command, again, as follows:
$ rhpctl move gihome -destwc wc2 -continue
The preceding command example restarts the Oracle Grid Infrastructure stack on the second batch (nodes n2 and n3). The command terminates by reporting that the second batch was successfully patched.
Repeat the previous step until you have processed the last batch of nodes. If you attempt to run the command with the
-continue
parameter after the last batch has been processed, then the command returns an error.If the
rhpctl move gihome
command fails at any time during the above sequence, then, after determining and fixing the cause of the failure, rerun the command with the-continue
option to attempt to patch the failed batch. If you want to skip the failed batch and continue with the next batch, use the-continue
and-skip
parameters. If you attempt to skip over the last batch, then the move operation is terminated.Alternatively, you can reissue the command using the
-revert
parameter to undo the changes that have been made and return the configuration to its initial state.You can use the
-abort
parameter instead of the-continue
parameter at any point in the preceding procedure to terminate the patching process and leave the cluster in its current state.Notes:
-
Policy-managed services hosted on a server pool with one active server, and administrator-managed services with one preferred instance and no available instances cannot be relocated and will go OFFLINE while instances are being restarted.
-
If a move operation is in progress, then you cannot initiate another move operation from the same source home or to the same destination home.
-
After the move operation has ended, services may be running on nodes different from the ones they were running on before the move and you will have to manually relocate them back to the original instances, if necessary.
-
If you use the -abort parameter to terminate the patching operation, then Fleet Patching and Provisioning does not clean up or undo any of the patching steps. The cluster, databases, or both may be left in an inconsistent state because not all nodes are patched.
-
Depending on the start dependencies, services that were offline before the move began could come online during the move.
-
Fleet Patching and Provisioning-Defined Batches
Using Fleet Patching and Provisioning to define and patch batches of nodes means that you need only run one command, as shown in the following command example, where the source working copy is wc1
and the destination working copy is wc2
:
$ rhpctl move gihome -sourcewc wc1 -destwc wc2 -smartmove -saf Z+ [-eval]
There is no need for you to do anything else unless you used the -separate parameter with the command. In that case, the move operation waits for user intervention before proceeding to the next batch, and you must run the command with the -continue parameter after each batch completes.
$ rhpctl move gihome -destwc destination_workingcopy_name -revert [authentication_option]
You can use the -revert parameter with an unmanaged home.
The parameters used in the preceding example are as follows:
-
-smartmove
: This parameter restarts the Oracle Grid Infrastructure stack on disjoint sets of nodes so that singleton resources are relocated before Oracle Grid Infrastructure starts.
Note:
If the server pool to which a resource belongs contains only one active server, then that resource goes offline because relocation cannot take place.
The -smartmove parameter:
-
Creates a map of services and nodes on which they are running.
-
Creates batches of nodes. The first batch will contain only the Hub node, if the configuration is an Oracle Flex Cluster. For additional batches, a node can be merged into a batch if:
-
The availability of any non-singleton service, running on this node, does not go below the specified service availability factor (or the default of 50%).
-
There is a singleton service running on this node and the batch does not contain any of the relocation target nodes for the service.
-
-
Restarts the Oracle Grid Infrastructure stack batch by batch.
-
-
Service availability factor (-saf Z+): You can specify a positive number, as a percentage, that indicates the minimum percentage of database instances on which a database service must remain running. For example:
-
If you specify
-saf 50
for a service running on two instances, then only one instance can go offline at a time. -
If you specify
-saf 50
for a service running on three instances, then only one instance can go offline at a time. -
If you specify
-saf 75
for a service running on two instances, then an error occurs because the target can never be met. -
The service availability factor is applicable for services running on at least two instances. As such, the service availability factor can be 0% to indicate a non-rolling move, but not 100%. The default is 50%.
-
If you specify a service availability factor for singleton services, then the parameter will be ignored because the availability of such services is 100% and the services will be relocated.
-
-
-eval
: You can optionally use this parameter to view the auto-generated batches. This parameter also shows the sequence of the move operation without actually patching the software.
Combined Oracle Grid Infrastructure and Oracle Database Patching
When you patch an Oracle Grid Infrastructure deployment, Fleet Patching and Provisioning enables you to simultaneously patch the Oracle Database homes on the cluster, so you can patch both types of software homes in a single maintenance operation.
Note:
You cannot patch Oracle Grid Infrastructure and Oracle Database in combination with the independent automaton. The following optional parameters of the rhpctl move gihome
command are relevant to the combined Oracle Grid Infrastructure and Oracle Database patching use case:
-
-auto
: Automatically patch databases along with patching Oracle Grid Infrastructure -
-dbhomes mapping_of_Oracle_homes
: Mapping of source and destination working copies in the following format:sourcewc1=destwc1,...,source_oracle_home_path=destwcN
-
-dblist db_name_list
: Patch only the specified databases -
-excludedblist db_name_list
: Patch all databases except the specified databases -
-nodatapatch
: Indicates that datapatch is not run for databases being moved
As an example, assume that a Fleet Patching and Provisioning Server with Oracle Grid Infrastructure 12c release 2 (12.2) has provisioned the following working copies on an Oracle Grid Infrastructure 12c release 1 (12.1.0.2) target cluster which includes the node test_749
:
-
GI121WC1
: The active Grid home on the Oracle Grid Infrastructure 12c release 1 (12.1.0.2) cluster -
GI121WC2
: A software-only Grid home on the Oracle Grid Infrastructure 12c release 1 (12.1.0.2) cluster -
DB121WC1
: An Oracle RAC 12c release 1 (12.1.0.2.0) database home running database instances -
DB121025WC1
: An Oracle RAC 12c release 1 (12.1.0.2.5) database home with no database instances (this is the patched home) -
DB112WC1
: An Oracle RAC 11g release 2 (11.2.0.4.0) database home running database instances -
DB112045WC1
: An Oracle RAC 11g release 2 (11.2.0.4.5) database home with no database instances (this is the patched home)
Further assume that you want to simultaneously move
-
Oracle Grid Infrastructure from working copy
GI121WC1
to working copyGI121WC2
-
Oracle RAC Database
db1
from working copyDB121WC1
to working copyDB121025WC1
-
Oracle RAC Database
db2
in working copyDB112WC1
to working copyDB112045WC1
The following single command accomplishes the moves:
$ rhpctl move gihome -sourcewc GI121WC1 -destwc GI121WC2 -auto
-dbhomes DB121WC1=DB121025WC1,DB112WC1=DB112045WC1 -targetnode test_749 {authentication_option}
Notes:
-
If you have an existing Oracle home that is not currently a working copy, then specify the Oracle home path instead of the working copy name for the source home. In the preceding example, if the Oracle home path for an existing 12.1.0.2 home is /u01/app/prod/12.1.0.2/dbhome1, then replace DB121WC1=DB121025WC1 with /u01/app/prod/12.1.0.2/dbhome1=DB121025WC1.
-
If the move operation fails at some point before completing, then you can either resolve the cause of the failure and resume the operation by rerunning the command, or you can undo the partially completed operation by issuing the following command, which restores the configuration to its initial state:
$ rhpctl move gihome -destwc GI121WC2 -revert {authentication_option}
In the preceding command example, the Oracle Grid Infrastructure 12c release 1 (12.1.0.2) Grid home moves from working copy GI121WC1
to working copy GI121WC2
, databases running on working copy DB121WC1
move to working copy DB121025WC1
, and databases running on working copy DB112WC1
move to working copy DB112045WC1
.
For each node in the client cluster, RHPCTL:
-
Runs any configured pre-operation user actions for moving the Oracle Grid Infrastructure (
move gihome
). -
Runs any configured pre-operation user actions for moving the database working copies (
move database
). -
Stops services running on the node, and applies drain and disconnect options.
-
Performs the relevant patching operations for Oracle Clusterware and Oracle Database.
-
Runs any configured post-operation user actions for moving the database working copies (
move database
). -
Runs any configured post-operation user actions for moving the Oracle Grid Infrastructure working copy (
move gihome
).
Zero-Downtime Oracle Grid Infrastructure Patching
Use Fleet Patching and Provisioning to patch Oracle Grid Infrastructure without bringing down Oracle RAC database instances.
Patching Oracle Grid Infrastructure Using Local-Mode Configuration
When you install Oracle Grid Infrastructure or when you upgrade an older version to this current version, the Fleet Patching and Provisioning Server is configured automatically in local mode.
In local mode, you patch by running the rhpctl move gihome or rhpctl move database command, specifying the source and destination paths instead of working copy names.
Note:
You must enable and start the Fleet Patching and Provisioning Server using the following commands before you can use the local-mode patching operation:
$ srvctl enable rhpserver
$ srvctl start rhpserver
To move from local mode to a central Fleet Patching and Provisioning Server, first stop and remove the local-mode server:
$ srvctl stop rhpserver
$ srvctl remove rhpserver
Proceed with the steps described in "Creating a Fleet Patching and Provisioning Server" to create the central-mode Fleet Patching and Provisioning Server.
Use the following rhpctl move gihome command parameters for the patching operation:
-
-node
: If the home you are moving is an Oracle Grid Infrastructure home installed on more than one node, then the default operation is a rolling update on all nodes. To apply a patch to just one node, specify the name of that node with this parameter. -
-nonrolling
: If the home you are moving is an Oracle Grid Infrastructure home installed on more than one node, then the default operation is a rolling update on all nodes. To patch all nodes in a nonrolling manner, use this parameter instead of the-node
parameter. -
-ignorewcpatches
: By default, Fleet Patching and Provisioning will not perform the move operation if the destination home is missing any patches present in the source home. You can override this functionality by using this parameter, for example, to move back to a previous source home if you must undo an update.
Error Prevention and Automated Recovery Options
Fleet Patching and Provisioning has error prevention and automated recovery options to assist you during maintenance operations.
During maintenance operations, errors must be avoided whenever possible and, when they occur, you must have automated recovery paths to avoid service disruption.
Error Prevention
Many RHPCTL commands include the -eval
parameter, which you can use to run the command and evaluate the current configuration without making any changes to determine if the command can be successfully run and how running the command will impact the configuration. Commands that you run using the -eval
parameter run as many prerequisite checks as possible without changing the configuration. If errors are encountered, then RHPCTL reports them in the command output. After you correct any errors, you can run the command again using -eval
to validate the corrections. Running the command successfully using -eval
provides a high degree of confidence that running the actual command will succeed.
You can test commands with the -eval
parameter outside of any maintenance window, so the full window is available for the maintenance procedure, itself.
Automated Recovery Options
During maintenance operations, errors can occur either in-flight (for example, partway through either an rhpctl move database
or rhpctl move gihome
command) or after a successful operation (for example, after an rhpctl move database
command, you encounter performance or behavior issues).
In-Flight Errors
-
Correct any errors that RHPCTL reports and rerun the command, which will resume running at the point of failure.
If rerunning the command succeeds and the move operation has a post-operation user action associated with it, then the user action is run. If there is a pre-operation user action, however, then RHPCTL does not rerun the command.
-
Run a new move command, specifying only the destination from the failed move (working copy or unmanaged home), an authentication option, if required, and use the
-revert
parameter. This will restore the configuration to its initial state.No user actions associated with the operation are run.
-
Run a new move command, specifying only the destination from the failed move (working copy or unmanaged home), an authentication option if required, and the
-abort
parameter. This leaves the configuration in its current state. Manual intervention is required at this point to place the configuration in a final state.No user actions associated with the operation are run.
Post-Update Issues
Note:
For the independent automatons, the source and destination homes are always unmanaged homes (homes not provisioned by Fleet Patching and Provisioning). When the move operation is run on a Fleet Patching and Provisioning Server or Fleet Patching and Provisioning Client, the destination home must be a managed home that was provisioned by Fleet Patching and Provisioning.
Upgrading Oracle Grid Infrastructure Software
If you are using Fleet Patching and Provisioning, then you can use a single command to upgrade an Oracle Grid Infrastructure home.
Fleet Patching and Provisioning supports upgrades to Oracle Grid Infrastructure 12c release 1 (12.1.0.2) from 11g release 2 (11.2.0.3 and 11.2.0.4). Upgrading to Oracle Grid Infrastructure 12c release 2 (12.2.0.1) is supported from 11g release 2 (11.2.0.3 and 11.2.0.4) and 12c release 1 (12.1.0.2). The destination for the upgrade can be a working copy of a gold image already provisioned or you can choose to create the working copy as part of this operation.
As an example, assume that a target cluster is running Oracle Grid Infrastructure on an Oracle Grid Infrastructure home that was provisioned by Fleet Patching and Provisioning. This Oracle Grid Infrastructure home is 11g release 2 (11.2.0.4) and the working copy is named accordingly.
After provisioning a working copy version of Oracle Grid Infrastructure 12c release 2 (12.2.0.1) (named GIOH12201 in this example), you can upgrade to that working copy with this single command:
$ rhpctl upgrade gihome -sourcewc GIOH11204 -destwc GIOH12201
Note:
You can delete the source working copy at any time after completing an upgrade. Once you delete the working copy, however, you cannot perform a rollback. Also, use the rhpctl delete workingcopy
command (as opposed to rm
, for example) to remove the source working copy to keep the Fleet Patching and Provisioning inventory correct.
Oracle Database Software Management
The Fleet Patching and Provisioning Server provides an efficient and secure platform for the distribution of Oracle Database Homes to targets and Fleet Patching and Provisioning Clients.
Also, Fleet Patching and Provisioning Clients have the ability to fetch database homes from the Fleet Patching and Provisioning Server.
Oracle Database homes are distributed in the form of working copies of gold images. Database instances (one or more) can then be created on the working copy.
Fleet Patching and Provisioning also has commands for managing existing databases, such as switching to a patched home or upgrading to a new database version. These are both single commands which orchestrate the numerous steps involved. Reverting to the original home is just as simple.
Provisioning a Copy of a Gold Image of a Database Home
Use the rhpctl add workingcopy
command to provision a working copy of a database home on a Fleet Patching and Provisioning Server, Client, or target.
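A hedged sketch of provisioning such a working copy (the image name, path, and user are placeholders, and -oraclebase is assumed here to identify the Oracle base for a database home):
$ rhpctl add workingcopy -workingcopy db122_wc1 -image DB122 -path /u01/app/oracle/product/12.2.0/dbhome_1 -oraclebase /u01/app/oracle -user oracle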
Creating an Oracle Database on a Copy of a Gold Image
Create an Oracle Database on a working copy.
The Fleet Patching and Provisioning Server can add a database on a working copy that is on the Fleet Patching and Provisioning Server, itself, a Fleet Patching and Provisioning Client, or a non-Fleet Patching and Provisioning Client target. A Fleet Patching and Provisioning Client can create a database on a working copy that is running on the Fleet Patching and Provisioning Client, itself.
Note:
When you create a database using Fleet Patching and Provisioning, the feature uses random passwords for both the SYS and SYSTEM schemas in the database, and you cannot retrieve these passwords. A user with the DBA or operator role must connect to the database locally, on the node where it is running, and reset the passwords for these two accounts.
Patching Oracle Database Software
To patch an Oracle database, you move the database home to a new home, which includes the patches you want to implement.
Use the rhpctl move database
command to move one or more database homes to a working copy of the same database release level. The databases may be running on a working copy, or on an Oracle Database home that is not managed by Fleet Patching and Provisioning.
Use the -nonrolling
option to perform patching in non-rolling mode. The database is then completely stopped on the old ORACLE_HOME
, and then restarted to make it run from the newly patched ORACLE_HOME
.
Note:
Part of the patching process includes applying Datapatch. When you move an Oracle Database 12c release 1 (12.1) or later database, Fleet Patching and Provisioning completes this step for you. When you move to a version earlier than Oracle Database 12c release 1 (12.1), however, you must run Datapatch manually. Fleet Patching and Provisioning is Oracle Data Guard-aware, and will not apply Datapatch to Oracle Data Guard standbys.
Workflow for Database Patching
Assume that a database named myorcldb is running on a working copy that was created from an Oracle Database 12c release 2 (12.2) gold image named DB122
. The typical workflow for patching an Oracle Database home is as follows:
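The central step of that workflow is a single move command; for example (both working copy names here are placeholders):
$ rhpctl move database -sourcewc DB122_WC -destwc DB122_PATCHED -dbname myorcldb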
Patching Oracle Database Using Batches
During database patching, Fleet Patching and Provisioning can sequentially process batches of nodes, with a number of nodes in each batch being restarted in parallel. This method maximizes service availability during the patching process. You can define the batches on the command line or choose to have Fleet Patching and Provisioning generate the list of batches based on its analysis of the database services running in the cluster.
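Defining the batches on the command line follows the same syntax as for Oracle Grid Infrastructure patching; a sketch with placeholder node and working copy names:
$ rhpctl move database -sourcewc db_wc_src -destwc db_wc_patched -batches "(n1),(n2,n3),(n4)"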
Adaptive Oracle RAC-Rolling Patching for OJVM Deployments
In a clustered environment, the default approach for applying database maintenance with Fleet Patching and Provisioning is Oracle RAC rolling. However, non-rolling may be required if the new (patched) database home contains OJVM patches. In this case, Fleet Patching and Provisioning determines whether the rolling approach is possible, and rolls when applicable. (See MOS Note 2217053.1 for details.)
Patching Oracle Database with the Independent Automaton
The independent local-mode automaton updates Oracle Database homes, including Oracle Database single-instance databases in a cluster or standalone (with no Oracle Grid Infrastructure), an Oracle RAC database, or an Oracle RAC One Node database.
Use the following rhpctl move database
command parameters for any of the patching scenarios:
-
-dbname
: If the database home is hosting more than one database, you can move specific databases by specifying a comma-delimited list with this parameter. Databases not specified are not moved. If you do not use this parameter, then RHPCTL moves all databases.
Note:
If you are moving a non-clustered (single-instance) database, then, for the value of the -dbname parameter, you must specify the SID of the database instead of the database name.
-
-ignorewcpatches
: By default, Fleet Patching and Provisioning will not perform the move operation if the destination home is missing any patches present in the source home. You can override this functionality by using this parameter, for example, to move back to a previous source home if you must undo an update.
-
-node
: If the home you are moving is a database home installed on more than one node, then the default operation is a rolling update on all nodes. To apply a patch to just one node, specify the name of that node with this parameter. -
-nonrolling
: If the home you are moving is a database home installed on more than one node, then the default operation is a rolling update on all nodes. To patch all nodes in a nonrolling manner, use this parameter instead of the -node parameter.
-disconnect
and -noreplay
: Applies to single-instance Oracle Databases, and Oracle RAC and Oracle RAC One Node database instances. Use the -disconnect
parameter to disconnect all sessions before stopping or relocating services. If you choose to use -disconnect
, then you can choose to use the -noreplay
parameter to disable session replay during disconnection.
-drain_timeout
: Applies to single-instance Oracle Databases, and Oracle RAC, and Oracle RAC One Node database instances. Use this parameter to specify the time, in seconds, allowed for resource draining to be completed from each node. Accepted values are an empty string (""), 0, or any positive integer. The default value is an empty string, which means that this parameter is not set. This is applicable to older versions to maintain traditional behavior. If it is set to 0, then the stop option is applied immediately.
The draining period is intended for planned maintenance operations. During the draining period, on each node in succession, all current client requests are processed, but new requests are not accepted.
-stopoption
: Applies to single-instance Oracle Databases, and Oracle RAC, and Oracle RAC One Node database instances. Specify a stop option for the database. Stop options include: ABORT, IMMEDIATE, NORMAL, TRANSACTIONAL, and TRANSACTIONAL_LOCAL.
Note:
The rhpctl move database
command is Oracle Data Guard-aware, and will not run Datapatch if the database is an Oracle Data Guard standby.
Patching Oracle Exadata Software
In addition to Oracle Grid Infrastructure and Oracle Database homes, Fleet Patching and Provisioning supports patching the Oracle Exadata components: database nodes, storage cells, and InfiniBand switches.
You first register the Oracle Exadata system with the rhpctl add workingcopy command, which stores the Oracle Exadata system information (the list of nodes and the images with which they were last patched) on the Fleet Patching and Provisioning Server, before patching the desired Oracle Exadata nodes.
You then patch the Oracle Exadata nodes with the rhpctl update workingcopy command. After patching, Fleet Patching and Provisioning updates the images of the nodes.
When you run the rhpctl query workingcopy command for a working copy based on the EXAPATCHSOFTWARE image type, the command returns a list of nodes and their images.
Upgrading Oracle Database Software
Fleet Patching and Provisioning provides two options for upgrading Oracle Database. Both options are performed with a single command.
The rhpctl upgrade database
command performs a traditional upgrade incurring downtime. The rhpctl zdtupgrade database
command performs an Oracle RAC or Oracle RAC One Node upgrade with minimal or no downtime.
Use the rhpctl upgrade database
command to upgrade to Oracle Database 12c release 1 (12.1.0.2) from Oracle Database 11g release 2 (11.2.0.3 and 11.2.0.4). Upgrading to Oracle Database 12c release 2 (12.2.0.1) is supported from Oracle Database 11g release 2 (11.2.0.3 and 11.2.0.4) and Oracle Database 12c release 1 (12.1.0.2).
Note:
The version of Oracle Grid Infrastructure on which the pre-upgrade database is running must be the same version or higher than the version of the database to which you are upgrading.
Note:
You can delete the source working copy at any time after completing an upgrade. Once you delete the working copy, however, you cannot perform a rollback. Also, use the rhpctl delete workingcopy
command (as opposed to rm
, for example) to remove the source working copy to keep the Fleet Patching and Provisioning inventory correct.
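A hedged sketch of the traditional upgrade path (the database and working copy names are placeholders):
$ rhpctl upgrade database -dbname myorcldb -sourcewc DB112_WC -destwc DB121_WC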
Zero-Downtime Upgrade
Using Fleet Patching and Provisioning, which automates and orchestrates database upgrades, you can upgrade an Oracle RAC or Oracle RAC One Node database with no disruption in service.
The zero-downtime upgrade process is resumable, restartable, and recoverable should any errors interrupt the process. You can fix the issue then re-run the command, and Fleet Patching and Provisioning continues from the error point. Oracle also provides hooks at the beginning and end of the zero-downtime upgrade process, allowing call outs to user-defined scripts, so you can customize the process.
-
Database upgrade targets: Oracle RAC and Oracle RAC One Node, with the following upgrade paths:
- 11g release 2 (11.2.0.4) to 12c release 1 (12.1.0.2)
- 11g release 2 (11.2.0.4) to 12c release 2 (12.2.0.1)
- 12c release 1 (12.1.0.2) to 12c release 2 (12.2.0.1)
-
Fleet Patching and Provisioning management: The source database home can either be unmanaged (not provisioned by Fleet Patching and Provisioning service) or managed (provisioned by Fleet Patching and Provisioning service)
-
Database state: The source database must be in archive log mode
Upgrading Container Databases
You can use Fleet Patching and Provisioning to upgrade CDBs but Fleet Patching and Provisioning does not support converting a non-CDB to a CDB during upgrade. To prepare for a zero-downtime upgrade, you complete configuration steps and validation checks. When you run a zero-downtime upgrade using Fleet Patching and Provisioning, you can stop the upgrade and resume it, if necessary. You can recover from any upgrade errors, and you can restart the upgrade. You also have the ability to insert calls to your own scripts during the upgrade, so you can customize your upgrade procedure.
Zero-Downtime Upgrade Environment Prerequisites
-
Server environment: Oracle Grid Infrastructure 18c with Fleet Patching and Provisioning
-
Database hosts: Databases hosted on one of the following platforms:
-
Oracle Grid Infrastructure 18c Fleet Patching and Provisioning Client
-
Oracle Grid Infrastructure 18c Fleet Patching and Provisioning Server
-
Oracle Grid Infrastructure 12c (12.2.0.1) Fleet Patching and Provisioning Client
-
Oracle Grid Infrastructure 12c (12.1.0.2) target cluster
-
-
Database-specific prerequisites for the environment: During an upgrade, Fleet Patching and Provisioning manages replication to a local data file to preserve transactions applied to the new database when it is ready. There are two possibilities for the local data file:
- Snap clone, which is available if the database data files and redo and archive redo logs are on Oracle ACFS file systems
- Full copy, for all other cases
-
Fleet Patching and Provisioning requires either Oracle GoldenGate or Oracle Data Guard during a zero-downtime database upgrade. As part of the upgrade procedure, Fleet Patching and Provisioning configures and manages the Oracle GoldenGate deployment.
Running a Zero-Downtime Upgrade Using Oracle GoldenGate for Replication
Run a zero-downtime upgrade using Oracle GoldenGate for replication.
-
Prepare the Fleet Patching and Provisioning Server.
Create gold images of the Oracle GoldenGate software in the image library of the Fleet Patching and Provisioning Server.
Note:
You can download the Oracle GoldenGate software for your platform from Oracle eDelivery. The Oracle GoldenGate 12.3 installable kit contains the required software for both Oracle Database 11g and Oracle Database 12c databases.
If you download the Oracle GoldenGate software, then extract the software home and perform a software-only installation on the Fleet Patching and Provisioning Server.
Create gold images of the Oracle GoldenGate software for both databases, as follows:
$ rhpctl import image -image 112ggimage -path path -imagetype ORACLEGGSOFTWARE
$ rhpctl import image -image 12ggimage -path path -imagetype ORACLEGGSOFTWARE
In both of the preceding commands, path refers to the location of the Oracle GoldenGate software home on the Fleet Patching and Provisioning Server for each release of the database.
Prepare the target database.
Provision working copies of the Oracle GoldenGate software to the cluster hosting the database, as follows:
$ rhpctl add workingcopy -workingcopy GG_Wcopy_11g -image 112ggimage -user user_name -node 12102_cluster_node -path path {-root | -sudouser user_name -sudopath sudo_bin_path}
$ rhpctl add workingcopy -workingcopy GG_Wcopy_12c -image 12ggimage -user user_name -node 12102_cluster_node -path path {-root | -sudouser user_name -sudopath sudo_bin_path}
If the database is hosted on the Fleet Patching and Provisioning Server itself, then neither the -targetnode nor the -client parameter is required.
Note:
Working copy names must be unique, so you must use a different working copy name on subsequent targets. You can create unique working copy names by including the name of the target or client cluster in the working copy name.
Provision a working copy of the Oracle Database 12c software home to the target cluster.
Note:
You can do this preparation ahead of the maintenance window without disrupting any operations taking place on the target.
Running a Zero-Downtime Upgrade Using Oracle Data Guard for Replication
Run a zero-downtime upgrade using Oracle Data Guard for replication. The procedure assumes the following conditions for the database being upgraded:
-
Data Guard Broker is not enabled
-
Flash recovery area (FRA) is configured
Customizing Zero-Downtime Upgrades
You can customize zero-downtime upgrades using the user-action framework of Fleet Patching and Provisioning.
Table 5-3 Zero-Downtime Upgrade Plugins
Plugin Type | Pre or Post | Plugin runs...
---|---|---
ZDTUPGRADE_DATABASE | Pre | Before Fleet Patching and Provisioning starts the zero-downtime upgrade.
ZDTUPGRADE_DATABASE | Post | After Fleet Patching and Provisioning completes the zero-downtime upgrade.
ZDTUPGRADE_DATABASE_SNAPDB | Pre | Before creating the snapshot or full-clone database.
ZDTUPGRADE_DATABASE_SNAPDB | Post | After starting the snapshot or full-clone database (but before switching over).
ZDTUPGRADE_DATABASE_DBUA | Pre | Before running DBUA (after switching over).
ZDTUPGRADE_DATABASE_DBUA | Post | After DBUA completes.
ZDTUPGRADE_DATABASE_SWITCHBACK | Pre | Before switching users back to the upgraded source database.
ZDTUPGRADE_DATABASE_SWITCHBACK | Post | After switching users back to the upgraded source database (before deleting the snapshot or full-clone database).
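Each plugin is implemented as a user action registered with RHPCTL; a hedged sketch follows, in which the user action name and script path are placeholders and the exact flag names are assumptions based on the rhpctl add useraction syntax:
$ rhpctl add useraction -useraction zdt_pre_snapdb -actionscript /opt/rhp/scripts/pre_snapdb.sh -pre -optype ZDTUPGRADE_DATABASE_SNAPDB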
Persistent Home Path During Patching
Oracle recommends out-of-place patching when applying updates.
Out-of-place patching involves deploying the patched environment in a new directory path and then switching the software home to the new path. This approach allows for non-disruptive software distribution because the existing home remains active while the new home is provisioned, and also facilitates rollback because the unpatched software home is available should any issues arise after the switch. Additionally, out-of-place patching for databases enables you to choose to move a subset of instances to the new home if more than one instance is running on the home, whereas with in-place patching, you must patch all instances at the same time.
A potential impediment to traditional out-of-place patching is that the software home path changes. While Fleet Patching and Provisioning manages this internally and transparently for Oracle Database and Oracle Grid Infrastructure software, some users have developed scripts which depend on the path. To address this, Fleet Patching and Provisioning uses a file system feature that enables separation of gold image software from the site-specific configuration changes, so the software home path is persistent throughout updates.
This feature is available with Oracle Database 12c release 2 (12.2) and Oracle Grid Infrastructure 12c release 2 (12.2) working copies provisioned in local storage. Also, if you provision an Oracle Database 12c release 2 (12.2) or an Oracle Grid Infrastructure 12c release 2 (12.2) home without using this feature, then, during a patching operation using either the rhpctl move database
or rhpctl move gihome
command, you can convert to this configuration and take advantage of the feature.
Note:
You can only patch Oracle Grid Infrastructure on a Fleet Patching and Provisioning Client with a home that is based on a persistent home path from a Fleet Patching and Provisioning Server.
Managing Fleet Patching and Provisioning Clients
Management tasks for Fleet Patching and Provisioning Clients include creation, enabling and disabling, creating users and assigning roles to those users, and managing passwords.
Using SRVCTL and RHPCTL, you can perform all management tasks for a Fleet Patching and Provisioning Client.
Creating a Fleet Patching and Provisioning Client
Users operate on a Fleet Patching and Provisioning Client to perform tasks such as requesting deployment of Oracle homes and querying gold images.
To create a Fleet Patching and Provisioning Client:
Enabling and Disabling Fleet Patching and Provisioning Clients
On the Fleet Patching and Provisioning Server, you can enable or disable a Fleet Patching and Provisioning Client.
Fleet Patching and Provisioning Clients communicate with the Fleet Patching and Provisioning Server for all actions. You cannot run any RHPCTL commands without a connection to a Fleet Patching and Provisioning Server.
To enable or disable a Fleet Patching and Provisioning Client, run the following command from the Fleet Patching and Provisioning Server cluster:
$ rhpctl modify client -client client_name -enabled TRUE | FALSE
To enable a Fleet Patching and Provisioning Client, specify -enabled TRUE. Conversely, specify -enabled FALSE to disable the client. When you disable a Fleet Patching and Provisioning Client cluster, the Fleet Patching and Provisioning Server rejects all RHPCTL commands from that client cluster until you re-enable the client.
Note:
Disabling a Fleet Patching and Provisioning Client cluster does not disable any existing working copies on the client cluster. The working copies will continue to function and any databases in those working copies will continue to run.
Deleting a Fleet Patching and Provisioning Client
Use the following procedure to delete a Fleet Patching and Provisioning Client.
Creating Users and Assigning Roles for Fleet Patching and Provisioning Client Cluster Users
When you create a Fleet Patching and Provisioning Client with the rhpctl add client command, you can use the -maproles parameter to create users and assign roles to them. You can associate multiple users with roles, or you can assign a single user multiple roles with this command.
After the client has been created, you can add and remove roles for users using the rhpctl grant role and rhpctl revoke role commands, respectively.
User-Defined Actions
You can create actions for various Fleet Patching and Provisioning operations, such as import image, add and delete working copy, and add, delete, move, and upgrade a software home.
You can define different actions for each operation, which can be further differentiated by the type of image to which the operation applies. User-defined actions can be run before or after a given operation, and are run on the deployment on which the operation is run, whether it be a Fleet Patching and Provisioning Server, a Fleet Patching and Provisioning Client (12c release 2 (12.2), or later), or a target that is not running a Fleet Patching and Provisioning Client.
User-defined actions are shell scripts which are stored on the Fleet Patching and Provisioning Server. When a script runs, it is given relevant information about the operation on the command line. Also, you can associate a file with the script. The Fleet Patching and Provisioning Server will copy that file to the same location on the Client or target where the script is run.
For example, perhaps you want to create user-defined actions that are run after a database upgrade, and you want to define different actions for Oracle Database 11g and 12c. This requires you to define new image types, as in the following example procedure.
-
Create a new image type (DB11IMAGE, for example), based on the ORACLEDBSOFTWARE image type, as follows:
$ rhpctl add imagetype -imagetype DB11IMAGE -basetype ORACLEDBSOFTWARE
When you add or import an Oracle Database 11g gold image, you specify the image type as DB11IMAGE.
-
Define a user action and associate it with the DB11IMAGE image type and the upgrade operation. You can have different actions that are run before or after upgrade.
-
To define an action for Oracle Database 12c, create a new image type (DB12IMAGE, for example) that is based on the ORACLEDBSOFTWARE image type, as in the preceding step, but with the DB12IMAGE image type.
Note:
If you define user actions for the base type of a user-defined image type (in this case the base type is ORACLEDBSOFTWARE), then Fleet Patching and Provisioning performs those actions before the actions for the user-defined image type.
You can modify the image type of an image using the rhpctl modify image command. Additionally, you can modify, add, and delete other actions. The following two tables, Table 5-4 and Table 5-5, list the operations you can customize and the parameters you can use to define those operations, respectively.
Table 5-4 Fleet Patching and Provisioning User-Defined Operations
Operation | Parameter List |
---|---|
IMPORT_IMAGE |
|
ADD_WORKINGCOPY |
|
ADD_DATABASE |
|
DELETE_WORKINGCOPY |
|
DELETE_DATABASE |
|
MOVE_GIHOME |
|
MOVE_DATABASE This user action is run for each database involved in a patching operation. |
|
UPGRADE_GIHOME |
|
UPGRADE_DATABASE |
|
ADDNODE_DATABASE |
|
DELETENODE_DATABASE |
|
ADDNODE_GIHOME |
|
DELETENODE_GIHOME |
|
ADDNODE_WORKINGCOPY |
|
ZDTUPGRADE_DATABASE |
|
ZDTUPGRADE_DATABASE_SNAPDB |
|
ZDTUPGRADE_DATABASE_DBUA |
|
ZDTUPGRADE_DATABASE_SWITCHBACK |
|
Table 5-5 User-Defined Operations Parameters
Parameter | Description |
---|---|
RHP_OPTYPE |
The operation type for which the user action is being executed, as listed in the previous table. |
RHP_PHASE |
This parameter indicates whether the user action is executed before or after the operation (is either PRE or POST). |
RHP_SOURCEWC |
The source working copy name for a patch or upgrade operation. |
RHP_SOURCEPATH |
The path of the source working copy home. |
RHP_DESTINATIONWC |
The destination working copy name for a patch or upgrade operation. |
RHP_DESTINATIONPATH |
The path of the destination working copy home. |
RHP_SRCGGWC |
The name of the version of the Oracle GoldenGate working copy from which you want to upgrade. |
RHP_SRCGGPATH |
The absolute path of the version of the Oracle GoldenGate software home from which you want to upgrade. |
RHP_DESTGGWC |
The name of the version of the Oracle GoldenGate working copy to which you want to upgrade. |
RHP_DESTGGPATH |
The absolute path of the version of the Oracle GoldenGate software home to which you want to upgrade. |
RHP_PATH |
This is the path to the location of the software home. This parameter represents the path on the local node from where the RHPCTL command is being run for an |
RHP_PATHOWNER |
The owner of the path for the gold image that is being imported. |
RHP_PROGRESSLISTENERHOST |
The host on which the progress listener is listening. You can use this parameter, together with a progress listener port, to create a TCP connection to print output to the console on which the RHPCTL command is being run. |
RHP_PROGRESSLISTENERPORT |
The port on which the progress listener host is listening. You can use this parameter, together with a progress listener host name, to create a TCP connection to print output to the console on which the RHPCTL command is being run. |
RHP_IMAGE |
The image associated with the operation. In the case of a move operation, it will reflect the name of the destination image. |
RHP_IMAGETYPE |
The image type of the image associated with the operation. In the case of a move operation, it will reflect the name of the destination image. |
RHP_VERSION |
The version of the Oracle Grid Infrastructure software running on the Fleet Patching and Provisioning Server. |
RHP_CLI |
The exact command that was run to invoke the operation. |
RHP_STORAGETYPE |
The type of storage for the home (either |
RHP_USER |
The user for whom the operation is being performed. |
RHP_NODES |
The nodes on which a database will be created. |
RHP_ORACLEBASE |
The Oracle base location for the provisioned home. |
RHP_DBNAME |
The name of the database to be created. |
RHP_CLIENT |
The name of the client cluster. |
RHP_DATAPATCH |
This parameter is set to TRUE at the conclusion of the user action on the node where the SQL patch will be run after the move database operation is complete. |
RHP_USERACTIONDATA |
This parameter is present in all of the operations and is used to pass user-defined items to the user action as an argument during runtime. |
Example of User-Defined Action
Suppose there is an image type, APACHESW
, to use for provisioning and managing Apache deployments. Suppose, too, that there is a Gold Image of Apache named apacheinstall
. The following example shows how to create a user action that will run prior to provisioning any copy of our Apache Gold Image.
The following is a sample user action script named addapache_useraction.sh
:
$ cat /scratch/apacheadmin/addapache_useraction.sh
#!/bin/sh
#refer to arguments using argument names
touch /tmp/SAMPLEOUT.txt;
for i in "$@"
do
export $i
done
echo "OPTYPE = $RHP_OPTYPE" >> /tmp/SAMPLEOUT.txt;
echo "PHASE = $RHP_PHASE" >> /tmp/SAMPLEOUT.txt;
echo "WORKINGCOPY = $RHP_WORKINGCOPY" >> /tmp/SAMPLEOUT.txt;
echo "PATH = $RHP_PATH" >> /tmp/SAMPLEOUT.txt;
echo "STORAGETYPE = $RHP_STORAGETYPE" >> /tmp/SAMPLEOUT.txt;
echo "USER = $RHP_USER" >> /tmp/SAMPLEOUT.txt;
echo "NODES = $RHP_NODES" >> /tmp/SAMPLEOUT.txt;
echo "ORACLEBASE = $RHP_ORACLEBASE" >> /tmp/SAMPLEOUT.txt;
echo "DBNAME = $RHP_DBNAME" >> /tmp/SAMPLEOUT.txt;
echo "PROGRESSLISTENERHOST = $RHP_PROGRESSLISTENERHOST" >> /tmp/SAMPLEOUT.txt;
echo "PROGRESSLISTENERPORT = $RHP_PROGRESSLISTENERPORT" >> /tmp/SAMPLEOUT.txt;
echo "IMAGE = $RHP_IMAGE" >> /tmp/SAMPLEOUT.txt;
echo "IMAGETYPE = $RHP_IMAGETYPE" >> /tmp/SAMPLEOUT.txt;
echo "RHPVERSION = $RHP_VERSION" >> /tmp/SAMPLEOUT.txt;
echo "CLI = $RHP_CLI" >> /tmp/SAMPLEOUT.txt;
echo "USERACTIONDATA = $RHP_USERACTIONDATA" >> /tmp/SAMPLEOUT.txt;
$
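The export loop in the sample script works because RHPCTL passes each parameter to the user action as a single NAME=VALUE token on the command line. The following minimal sketch demonstrates the same pattern outside of RHPCTL; the parameter values here are illustrative stand-ins, not output from a real run:

```shell
#!/bin/sh
# Simulate the argument style RHPCTL uses: each argument is NAME=VALUE.
# Exporting each token turns it into an environment variable that the
# script can reference by name, exactly as the sample user action does.
set -- "RHP_OPTYPE=ADD_WORKINGCOPY" "RHP_PHASE=PRE" "RHP_WORKINGCOPY=apachecopy001"

for i in "$@"
do
  export "$i"
done

echo "OPTYPE = $RHP_OPTYPE"         # prints: OPTYPE = ADD_WORKINGCOPY
echo "PHASE = $RHP_PHASE"           # prints: PHASE = PRE
echo "WORKINGCOPY = $RHP_WORKINGCOPY"
```

Because the variables are exported, any helper scripts the user action invokes inherit them as well.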
The script is registered to run at the start of rhpctl add workingcopy commands. The add working copy operation aborts if the script fails.
The following command creates a user action called addapachepre:
$ rhpctl add useraction -optype ADD_WORKINGCOPY -pre -onerror ABORT -useraction
addapachepre -actionscript /scratch/apacheadmin/addapache_useraction.sh
-runscope ONENODE
The following command registers the user action for the APACHESW
image type:
$ rhpctl modify imagetype -imagetype APACHESW -useractions addapachepre
The registered user action is invoked automatically at the start of commands that deploy a working copy of any image of the APACHESW type, such as the following:
$ rhpctl add workingcopy -workingcopy apachecopy001 -image apacheinstall
-path /scratch/apacheadmin/apacheinstallloc -sudouser apacheadmin -sudopath
/usr/local/bin/sudo -node targetnode003 -user apacheadmin -useractiondata "sample"
The sample script creates the /tmp/SAMPLEOUT.txt
output file. Based on the example command, the output file contains:
$ cat /tmp/SAMPLEOUT.txt
OPTYPE = ADD_WORKINGCOPY
PHASE = PRE
WORKINGCOPY = apachecopy001
PATH = /scratch/apacheadmin/apacheinstallloc
STORAGETYPE =
USER = apacheadmin
NODES = targetnode003
ORACLEBASE =
DBNAME =
PROGRESSLISTENERHOST = mds11042003.my.company.com
PROGRESSLISTENERPORT = 58068
IMAGE = apacheinstall
IMAGETYPE = APACHESW
RHPVERSION = 12.2.0.1.0
CLI = rhpctl__add__workingcopy__-image__apacheinstall__-path__/scratch/apacheadmin
/apacheinstallloc__-node__targetnode003__-useractiondata__sample__
-sudopath__/usr/local/bin/sudo__-workingcopy__apachecopy__-user__apacheadmin__
-sudouser__apacheadmin__USERACTIONDATA = sample
$
Notes:
-
-
In the preceding output example, empty values terminate with an equals sign (=).
-
The spaces in the command-line value of the RHP_CLI parameter are replaced by two underscore characters (__) to differentiate this parameter from other parameters.
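Because the spaces in the RHP_CLI value are encoded as double underscores, a user action that needs to log or inspect the original command line can reverse the substitution. A small sketch, using a shortened sample value based on the output above:

```shell
#!/bin/sh
# RHP_CLI value as delivered to the user action (spaces encoded as "__").
RHP_CLI="rhpctl__add__workingcopy__-image__apacheinstall"

# Reverse the encoding: turn each "__" back into a single space.
# POSIX sh lacks the "${var//__/ }" substitution, so sed is used instead.
DECODED=$(printf '%s' "$RHP_CLI" | sed 's/__/ /g')

echo "$DECODED"   # prints: rhpctl add workingcopy -image apacheinstall
```

Note that this simple substitution assumes no argument legitimately contains a double underscore.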
Job Scheduler for Operations
The Fleet Patching and Provisioning job scheduler provides a mechanism to submit an operation to run at a scheduled time instead of immediately, to query the metadata of the job, and to delete the job from the repository.
-
Enables you to schedule a command to run at a specific point in time by providing the time value
-
Performs the job and stores the metadata for the job, along with the current status of the job
-
Stores the logs for each of the jobs that have run or are running
-
Enables you to query job details (for all jobs or for specific jobs, based on the user roles)
-
Deletes jobs
-
Authorizes the running, querying, and deleting of jobs, based on role-based access for users
Use the -schedule timer_value command parameter with any of the following RHPCTL commands to schedule certain Fleet Patching and Provisioning operations:
-
rhpctl add workingcopy
-
rhpctl import image
-
rhpctl delete image
-
rhpctl add database
-
rhpctl move gihome
-
rhpctl move database
-
rhpctl upgrade database
-
rhpctl addnode database
-
rhpctl deletenode database
-
rhpctl delete workingcopy
$ rhpctl add workingcopy -workingcopy 18_3 -image 18_3_Base -oraclebase /u01/app/oracle -schedule 2016-12-21T19:13:17+05
All commands run according to the time zone of the server, as specified in the ISO 8601 value, and RHPCTL displays the command results in that same time zone.
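The -schedule value in the example above is an ISO 8601 timestamp with a UTC offset. One way to generate such a value is with the date utility, as in this sketch (the %z format specifier is supported by GNU and BSD date but is not strictly POSIX, so treat this as an assumption about your platform):

```shell
#!/bin/sh
# Build an ISO 8601 timestamp such as 2016-12-21T19:13:17+0500 for use
# with the -schedule parameter. The offset printed by %z has no colon;
# the server interprets the value relative to its own time zone.
SCHEDULE=$(date +%Y-%m-%dT%H:%M:%S%z)
echo "$SCHEDULE"

# The value would then be passed along the lines of:
#   rhpctl add workingcopy -workingcopy 18_3 -image 18_3_Base -schedule "$SCHEDULE"
```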
Command Results
RHPCTL stores any command that is run from the command queue on the Fleet Patching and Provisioning Server. When you query a command result by specifying the command identifier, then RHPCTL returns the path to the job output file, along with the results.
Job Operation
When you run an RHPCTL command with the -schedule parameter, the operation creates a job with a unique job ID that you can query to obtain the status of the job.
Job Status
A job can be in one of the following states:
-
EXECUTED: The job is complete.
-
TIMER_RUNNING: The timer for the job is still running.
-
EXECUTING: The timer for the job has expired and the job is running.
-
UNKNOWN: There is an unexpected failure due to issues such as a target going down, nodes going down, or any resource failures.
-
TERMINATED: There is an abrupt failure or the operation has stopped.
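A script that polls a scheduled job might branch on these states. The following sketch defines a small helper that classifies a status string; the rhpctl query job invocation that would supply the status in practice is hypothetical and appears only in a comment:

```shell
#!/bin/sh
# Classify a job status string into "done", "wait", or "failed".
# In a real wrapper the status would come from parsing the output of
# a command such as "rhpctl query job" (shown here for context only).
classify_job_status() {
  case "$1" in
    EXECUTED)                echo "done" ;;
    TIMER_RUNNING|EXECUTING) echo "wait" ;;
    UNKNOWN|TERMINATED)      echo "failed" ;;
    *)                       echo "unrecognized" ;;
  esac
}

classify_job_status EXECUTED       # prints: done
classify_job_status TIMER_RUNNING  # prints: wait
classify_job_status TERMINATED     # prints: failed
```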
Oracle Restart Patching and Upgrading
You can use Fleet Patching and Provisioning to patch and upgrade Oracle Restart using gold images.
You can move the target of single-node Oracle Restart to an Oracle home that you provision from a gold image that includes any patches. Fleet Patching and Provisioning ensures that configuration files, such as listener.ora, are copied to the new Oracle home.
You can also use Fleet Patching and Provisioning to upgrade Oracle Restart using gold images. Upgrade the Oracle Restart environment by upgrading the Oracle home on the target Oracle home that you provision from a higher-level gold image. Fleet Patching and Provisioning updates the configuration and inventory settings.
rhpctl add workingcopy -workingcopy Oracle_Restart_working_copy -responsefile Oracle_Restart_response_file -targetnode node_on_which_Oracle_Restart_is_provisioned {superuser credentials}
Fleet Patching and Provisioning Use Cases
Review these topics for step-by-step procedures to provision, patch, and upgrade your software using Fleet Patching and Provisioning.
Fleet Patching and Provisioning is a software lifecycle management solution and helps standardize patching, provisioning, and upgrade of your standard operating environment.
Creating an Oracle Grid Infrastructure 12c Release 2 Deployment
Provision Oracle Grid Infrastructure software on two nodes that do not currently have a Grid home, and then configure Oracle Grid Infrastructure to form a multi-node Oracle Grid Infrastructure installation.
Before You Begin
Provide configuration details for storage, network, users and groups, and node information for installing Oracle Grid Infrastructure in a response file. You can store the response file in any location on the Fleet Patching and Provisioning Server.
You can provision an Oracle Standalone Cluster, Oracle Application Clusters, Oracle Domain Services Cluster, or Oracle Member Clusters. Ensure that the response file has the required cluster configuration details.
Ensure that you have storage, network, and operating system requirements configured as stated in the Oracle Grid Infrastructure Installation Guide.
Procedure
Oracle Grid Infrastructure 12c release 2 is provisioned according to the settings in the specified response file.
During provisioning, if an error occurs, the procedure stops and allows you to fix the error. After fixing the error, you can resume the provisioning operation from where it last stopped.
Provisioning an Oracle Database Home and Creating a Database
This procedure provisions Oracle Database 12c release 2 (12.2) software and creates Oracle Database instances.
Procedure
Provisioning a Pluggable Database
You can provision a pluggable database (PDB) on an existing container database (CDB) running in a provisioned database working copy.
Use the rhpctl addpdb database command.
Upgrading to Oracle Grid Infrastructure 12c Release 2
This procedure uses Fleet Patching and Provisioning to upgrade your Oracle Grid Infrastructure cluster from 11g release 2 (11.2.0.4) to 12c release 2 (12.2).
Before You Begin
To upgrade to Oracle Grid Infrastructure 12c release 2 (12.2.0.1), your source must be Oracle Grid Infrastructure 11g release 2 (11.2.0.3 or 11.2.0.4), or Oracle Grid Infrastructure 12c release 1 (12.1.0.2).
Ensure that groups configured in the source home match those in the destination home.
Ensure that you have an image GI_HOME_12201
of the Oracle Grid Infrastructure 12c release 2 (12.2.0.1) software to provision your working copy.
GI_11204 is the active Grid Infrastructure home on the cluster being upgraded. It is a working copy because, in this example, Fleet Patching and Provisioning provisioned the cluster. Fleet Patching and Provisioning can also upgrade clusters whose Grid Infrastructure homes are unmanaged, that is, homes that Fleet Patching and Provisioning did not provision.
Procedure
Patching Oracle Grid Infrastructure Without Changing the Grid Home Path
This procedure explains how to patch Oracle Grid Infrastructure without changing the Grid home path.
Before You Begin
-
Ensure that the gold image containing the Grid home is imported and exists on the Fleet Patching and Provisioning Server.
-
Ensure that the directory you provide in the -path option is not an existing directory.
-
The source Grid home must be a managed home (provisioned by Fleet Patching and Provisioning). It does not need to be an Oracle Layered File System (OLFS)-compliant home.
-
The Grid home must be Oracle Grid Infrastructure 12c (12.2.0.1) or later.
Procedure for Patching
Patching Oracle Grid Infrastructure and Oracle Databases Simultaneously
This procedure patches Oracle Grid Infrastructure and Oracle Databases on the cluster to the latest patch level without cluster downtime.
Before You Begin
In this procedure, Oracle Grid Infrastructure 12c release 2 (12.2.0.1) is running on the target cluster. Working copy GI_HOME_12201_WCPY
is the active Grid home on this cluster. Working copy DB_HOME_12201_WCPY
runs an Oracle RAC 12c release 2 (12.2.0.1) Database with running database instance db1
. Working copy DB_HOME_12102_WCPY
runs an Oracle RAC 12c release 1 (12.1.0.2) Database with running database instance db2.
Ensure that you have images GI_HOME_12201_PSU1, DB_HOME_12201_PSU1, and DB_HOME_12102_PSU5 with the required patches for Oracle Grid Infrastructure and Oracle RAC Database on the Fleet Patching and Provisioning Server.
The groups configured in the source home must match those in the destination home.
Procedure
Patching Oracle Database 12c Release 1 Without Downtime
This procedure explains how to patch Oracle Database 12c release 1 (12.1.0.2) to the latest patch level without bringing down the database.
Before You Begin
You have an Oracle Database db12102
that you want to patch to the latest patch level.
Ensure that the working copy db12102_psu
based on the image DB12102_PSU
contains the latest patches and is available.
Procedure
From the Fleet Patching and Provisioning Server, run one of the following commands as per your source and destination database:
For all Oracle Databases, you can also specify these additional options with the rhpctl move database command:
-
-keepplacement: For admin-managed Oracle RAC Databases (not Oracle RAC One Node Database), Fleet Patching and Provisioning retains the services on the same nodes after the move.
-
-disconnect: Disconnects all sessions before stopping or relocating services.
-
-drain_timeout: Specify the time, in seconds, allowed for resource draining to be completed for planned maintenance operations. During the draining period, all current client requests are processed, but new requests are not accepted. This option is available only with Oracle Database 12c release 2 (12.2) or later.
-
-stopoption: Stops the database.
-
-nodatapatch: Ensures datapatch is not run for databases you are moving.
Upgrading to Oracle Database 12c Release 2
This procedure describes how to upgrade an Oracle database from Oracle Database 11g release 2 (11.2) to 12c release 2 with a single command, using Fleet Patching and Provisioning, both for managed and unmanaged Oracle homes.
Before you Begin
-
To upgrade to Oracle Database 12c release 2 (12.2.0.1), your source database must be either Oracle Database 11g release 2 (11.2.0.3 or 11.2.0.4), or Oracle Database 12c release 1 (12.1.0.2).
-
Oracle Grid Infrastructure on which the pre-upgrade database is running must be of the same release or later than the database release to which you are upgrading.
-
The source Oracle home to be upgraded can be a managed working copy, that is, an Oracle home provisioned using Fleet Patching and Provisioning, or an unmanaged home, that is, an Oracle home not provisioned using Fleet Patching and Provisioning. If you are upgrading an unmanaged Oracle home, provide the complete path of the database for upgrade.
Procedure to Upgrade Oracle Database using Fleet Patching and Provisioning
Note:
During upgrade, if an error occurs, the procedure stops and allows you to fix the error. After fixing the error, you can resume the upgrade operation from where it last stopped.
Adding a Node to a Cluster and Scaling an Oracle RAC Database to the Node
You can add a node to your two-node cluster by using Fleet Patching and Provisioning to add the node, and then extend an Oracle RAC database to the new node.
Before You Begin
In this procedure, Oracle Grid Infrastructure 12c release 2 (12.2.0.1) is running on the cluster. Working copy GI_HOME_12202_WCPY
is the active Grid home on this cluster.
The Oracle RAC database home runs on the working copy DB_HOME_12202_WCPY
.
Ensure that you have storage, network, and operating system requirements configured for the new node as stated in Oracle Grid Infrastructure Installation Guide.
Procedure
Adding Gold Images for Fleet Patching and Provisioning
Create gold images of software homes and store them on the Fleet Patching and Provisioning Server, to use later to provision Oracle homes.
Before You Begin
The Oracle home to be used for creating the gold image can be on the Fleet Patching and Provisioning Server, or Fleet Patching and Provisioning Client, or any target machine that the Fleet Patching and Provisioning Server can communicate with.
Procedure
Create gold images of Oracle homes in any of the following ways and store them on the Fleet Patching and Provisioning server:
Note:
You cannot directly use images as software homes. Use images to create working copies of software homes.
User Actions for Common Fleet Patching and Provisioning Tasks
You can use Fleet Patching and Provisioning user actions to perform many tasks, such as installing and configuring any type of software and running scripts.
Deploying a Web Server
The following procedure demonstrates automated deployment of Apache Web Server using Fleet Patching and Provisioning:
- Create a script to install Apache Web server, as follows:
- On the Fleet Patching and Provisioning Server, download and extract the Apache Web server installation kit.
- Create the script to install, configure, and start the Apache Web server.
- Register the script as a user action with Fleet Patching and Provisioning by running the following command on the Fleet Patching and Provisioning Server:
rhpctl add useraction -useraction apachestart -actionscript /user1/useractions/apacheinstall.sh -post -optype ADD_WORKINGCOPY -onerror ABORT
The preceding command adds the apachestart user action for the action script stored in the specified directory. As per the specified properties, the user action runs after the ADD_WORKINGCOPY operation and aborts if there is any error.
- Create an image type and associate the user action with the image type, as follows:
rhpctl add imagetype -imagetype apachetype -basetype SOFTWARE -useraction "apachestart"
The preceding command creates a new image type called apachetype, a derivative of the basic image type, SOFTWARE, with an associated user action apachestart.
- Create a gold image of the image type, as follows:
rhpctl import image -image apacheinstall -path /user1/apache2219_kit/ -imagetype apachetype
The preceding command creates a gold image, apacheinstall, with the script for Apache Web server installation, in the specified path, based on the image type you created earlier. To view the properties of this image, run the rhpctl query image -image apacheinstall command.
- Deploy a working copy of the gold image on the target, as follows:
rhpctl add workingcopy -workingcopy apachecopy -image apacheinstall -path /user1/apacheinstallloc -sudouser user1 -sudopath /usr/local/bin/sudo -node node1 -user user1 -useractiondata "/user1/apachehome:1080:2.2.19"
Fleet Patching and Provisioning provisions the software to the target and runs the apachestart script specified in the user action. You can provide the Apache Web server configuration details, such as the port number, with the useractiondata option. If the target is a Fleet Patching and Provisioning Client, then you need not specify sudo credentials.
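The -useractiondata string in this example packs three fields, a home path, a port, and a version, separated by colons. That packing convention is an assumption of this example, not something RHPCTL prescribes; a user action script could unpack the fields like this:

```shell
#!/bin/sh
# Split the colon-delimited -useractiondata value, which RHPCTL passes
# to the user action via the RHP_USERACTIONDATA parameter.
RHP_USERACTIONDATA="/user1/apachehome:1080:2.2.19"

OLD_IFS=$IFS
IFS=:
set -- $RHP_USERACTIONDATA   # unquoted on purpose: split on colons
IFS=$OLD_IFS

APACHE_HOME=$1
APACHE_PORT=$2
APACHE_VERSION=$3

echo "home=$APACHE_HOME port=$APACHE_PORT version=$APACHE_VERSION"
# prints: home=/user1/apachehome port=1080 version=2.2.19
```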
Registering Multiple Scripts Using a Single User Action
Run multiple scripts as part of a user action plug-in by registering a wrapper script and bundled custom scripts. The wrapper script extracts the bundled scripts, which are copied under the directory of the wrapper script, and then runs those extracted scripts as necessary, similar to the following procedure:
- The following command creates a user action called ohadd_ua, and associates a wrapper script, wc_add.sh, with a zip file containing other scripts:
rhpctl add useraction -useraction ohadd_ua -actionscript /scratch/crsusr/wc_add.sh -actionfile /scratch/crsusr/pack.zip -pre -runscope ALLNODES -optype ADD_WORKINGCOPY
The wrapper script, wc_add.sh, extracts the pack.zip file into the script path, a temporary path to which the user action scripts are copied. The wrapper script can invoke any scripts contained in the file.
- The following command creates an image type, sw_ua, for the ohadd_ua user action:
rhpctl add imagetype -imagetype sw_ua -useractions ohadd_ua -basetype SOFTWARE
- The following command creates an image called swimgua from the software specified in the path:
rhpctl import image -image swimgua -path /tmp/custom_sw -imagetype sw_ua
- The following command adds a working copy called wcua and runs the wc_add.sh script:
rhpctl add workingcopy -workingcopy wcua -image swimgua -client targetcluster