8 Sharded Database Deployment

Sharded database deployment covers the prerequisites and instructions for installing the required software components; creating the shard catalog, roles, and the sharded database; configuring replication for high availability; and creating the sharded database schema.

The following topics contain the concepts and tasks you need to deploy a sharded database:

8.1 Introduction to Sharded Database Deployment

Oracle Sharding provides the capability to automatically deploy the sharded database, which includes both the shards and the replicas.

The sharded database administrator defines the topology (regions, shard hosts, replication technology) and invokes the DEPLOY command with a declarative specification using the GDSCTL command-line interface.

At a high level, the deployment steps are:

  1. Set up the components.

    • Create a database that hosts the shard catalog.

    • Install Oracle Database software on the shard nodes.

    • Install shard director (GSM) software on the shard director nodes.

    Note:

    For production deployments, it is highly recommended that you configure Data Guard for the shard catalog database.
  2. Specify the topology layout using the following commands.

    • CREATE SHARDCATALOG

    • ADD GSM

    • START GSM

    • ADD CREDENTIAL (if using CREATE SHARD)

    • ADD SHARDGROUP

    • ADD INVITEDNODE

    • CREATE SHARD (or ADD SHARD) for each shard

  3. Run DEPLOY and add the global service to access any shard in the sharded database.

    • DEPLOY

    • ADD SERVICE
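
    For example, after the topology has been specified, the deployment and service creation might look like the following minimal sketch; the global service name oltp_rw_srvc is illustrative and matches the worked examples later in this chapter.

    GDSCTL> deploy
    GDSCTL> add service -service oltp_rw_srvc -role primary
    GDSCTL> start service -service oltp_rw_srvc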

8.1.1 Choosing a Deployment Method

You can deploy a sharded database either by having the shards created automatically as part of deployment, or by adding shards from preexisting databases.

Oracle Sharding supports two deployment methods. The first method is with the CREATE SHARD command, where the creation of shards and the replication configuration are automatically done by the Oracle Sharding management tier. This method cannot be used in a multitenant architecture where PDBs are used as shards.

The second deployment method is with the ADD SHARD command. If your database creation standards require that you deploy the SDB using your own pre-created databases, the ADD SHARD deployment method supports this requirement by simply adding your prebuilt database shards.

Deployment Method: CREATE SHARD

The DEPLOY command creates the shards. This is done using the DBMS_SCHEDULER package (executed on the shard catalog), which communicates with the Scheduler agents on the remote shard hosts.

Agents then invoke DBCA and NETCA, and, if Oracle GoldenGate replication is specified, the GoldenGate Creation Assistant (GGCA), to create the shards and the local listeners. After the primary shards are created, the corresponding standby shards are built using the RMAN DUPLICATE command.

When Data Guard is used as the high availability solution, once the primary and standby shards are built, the DEPLOY command configures Data Guard Broker with Fast-Start Failover (FSFO) enabled. The FSFO observers are automatically started on the regional shard director.

Note:

The CREATE SHARD method is not supported for PDBs used as shards. Only the ADD SHARD method can be used in a multitenant architecture.

Archivelog and flashback are enabled for all of the shards. This is required for the FSFO observer to perform standby auto-reinstantiation upon failover.

Deployment Method: ADD SHARD

Use the ADD SHARD command to add shards to a sharded database configuration if you have your own database creation standards and prefer to deploy the sharded database using your own pre-created databases. This method is recommended for shards that are Oracle RAC-enabled, Oracle Restart-enabled, or PDB shards, and when Oracle ASM is used. The ADD SHARD deployment method supports this requirement by adding shards that already have database installations deployed on them, rather than creating new instances.

When the ADD SHARD command is used for deployment, the DEPLOY command handles the configuration of the chosen replication technology, either Oracle GoldenGate, or Data Guard Broker and Fast-Start Failover. It also handles the scenario where you have pre-configured Data Guard for the shard that is being added.

Unlike sharding with Data Guard or Active Data Guard, Oracle GoldenGate cannot be deployed manually; it must be deployed using the DEPLOY command.

8.1.2 Using Oracle Multitenant with Oracle Sharding

You can use a multitenant container database (CDB) containing pluggable databases (PDBs) as shards and shard catalogs in your Oracle Sharding configuration.

To support consolidation of databases on under-utilized hardware, for ease of management, or to meet geographical business requirements, you can use PDBs in CDBs as database shards. For example, for database consolidation, you can add other non-shard or shard PDBs to the CDB containing a shard PDB. A CDB can contain shard PDBs from different sharded databases, each with its own separate shard catalog database; however, a CDB can contain at most one PDB shard from any given sharded database.

To add a shard PDB to the sharded database configuration, first add the CDB in which that shard PDB is contained to the shard catalog. The GDSCTL command ADD CDB adds a pre-created CDB to the shard catalog. Then, use the GDSCTL ADD SHARD command with the -cdb option to add a shard that is a PDB contained within a CDB to the sharded database during deployment.

The following example adds a CDB with unique name db11 to the shard catalog and then adds it to shardgroup shgrp1 in a sharded database configuration.

GDSCTL> add cdb -connect CDB$ROOT_connect_string -pwd GSMROOTUSER_password
GDSCTL> add shard -cdb db11 -connect PDB_connect_string -shardgroup shgrp1
 -deploy_as active_standby -pwd GSMUSER_password

Use CONFIG CDB to display information about the CDB in the shard catalog.

GDSCTL> config cdb

Name: tstsdbyb 
Connection string: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=cdb1host)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=cdb1.example.com))) 
SCAN address: 
ONS remote port: 0 
Disk Threshold, ms: 20 
CPU Threshold, %: 75 
Version: 18.0.0.0 
Rack:

Note:

Oracle GoldenGate replication does not support PDBs as shards.

Moving PDB Shards

You can manually unplug a shard PDB from a CDB and plug it into a different CDB. This is done outside of the sharding interfaces; you then update the shard catalog metadata to indicate that the PDB shard has moved to another CDB. The GDSCTL command ADD SHARD with the -REPLACE option is used to update the location of the shard PDB in the shard catalog.

Certain PDB operations, such as unplugging and plugging an existing PDB back into the same CDB, using the SQL commands ALTER PLUGGABLE DATABASE UNPLUG and ALTER PLUGGABLE DATABASE OPEN, do not cause a registration update event. This can mean that the shard PDB will not show as registered in the GDSCTL config shard output after it is plugged back in. If this occurs, a registration event can be forced by connecting to the PDB in SQL*Plus, and running a full shutdown and startup cycle for the PDB using the SHUTDOWN and STARTUP SQL commands.
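
A registration event might be forced as in the following sketch, assuming the shard PDB is named pdb1; switching containers from the CDB root, as shown here, is equivalent to connecting directly to the PDB.

SQL> alter session set container = pdb1;
SQL> shutdown immediate
SQL> startup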

PDB Shard High Availability

Oracle Data Guard supports replication only at the CDB level; it cannot replicate an individual PDB, only an entire container. The sharding architecture allows replicated copies of the sharded data for high availability, and it can optionally configure and use Data Guard to create and maintain these copies at the CDB level.

If you choose to add PDB shards from different sharded databases into the same CDB, then the replication topology in all of the resident sharded databases in the CDB must match. It is possible to create an invalid configuration where the sharded databases are configured with a different number of standby PDBs in each Data Guard configuration, so it is your responsibility to ensure that this mismatch does not happen by validating the numbers and locations of the PDBs and CDBs in each sharded database.

Making Changes to CDB in Sharding Configuration

Use MODIFY CDB to change the metadata of the CDB in the shard catalog. Some parameters cannot be changed after the CDB contains shards, or after the shards it contains have been deployed.

GDSCTL> modify cdb -shard cdb1 -pwd new_GSMROOTUSER_password

Use REMOVE CDB to remove a CDB from the shard catalog. Removing a CDB does not destroy it.

GDSCTL> remove cdb -cdb cdb1

Upgrading from a Non-PDB Shard to a PDB Shard

When upgrading from a non-PDB sharded environment to one that uses PDBs, you must back up each existing non-PDB shard, create a new CDB with a PDB inside it, and restore the shard into that PDB, as recommended in the CDB migration guide. At that point, the shard has become a PDB inside a CDB, and you use the GDSCTL ADD CDB command to add the new CDB, and then run ADD SHARD -REPLACE, specifying the connect string of the PDB, to tell the sharding infrastructure to replace the old location of the shard with the new PDB location.
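
At the end of this process, the catalog update might look like the following sketch. The CDB unique name cdb2, the shard name, and the connect strings are illustrative placeholders; see the GDSCTL reference for the complete ADD SHARD -REPLACE syntax.

GDSCTL> add cdb -connect cdb2_root_connect_string -pwd GSMROOTUSER_password
GDSCTL> add shard -replace original_shard_name -cdb cdb2
 -connect new_PDB_connect_string -pwd GSMUSER_password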

Database Compatibility and Migration

When upgrading from an Oracle Database 18c installation that contains a single PDB shard for a given CDB, you must update the shard catalog metadata for any PDB shards. Specifically, in 18c, the name of a PDB shard is the db_unique_name of its CDB; in Oracle Database 19c, shard names take the form db_unique_name_of_CDB_pdb_name.

To update the catalog metadata to reflect this new naming methodology, and to also support the new GSMROOTUSER account as described in About the GSMROOTUSER Account, perform the following steps during the upgrade process as described in Upgrading Sharded Database Components.

  • After upgrading any CDB that contains a PDB shard, ensure that the GSMROOTUSER account exists, is unlocked, has been assigned a password, and has been granted SYSDG, SYSBACKUP, and gsmrootuser_role privileges. The following SQL statements in SQL*Plus will successfully set up GSMROOTUSER while connected to the root container (CDB$ROOT) of the CDB.
    SQL> alter session set "_oracle_script"=true;
    Session altered.
         
    SQL> create user gsmrootuser;
    User created.
    
    SQL> alter user gsmrootuser identified by new_GSMROOTUSER_password
      account unlock;
    User altered.
    
    SQL> grant sysdg, sysbackup, gsmrootuser_role to gsmrootuser container=current;
    Grant succeeded.
    
    SQL> alter session set "_oracle_script"=false;
    Session altered.
  • After upgrading the catalog database to the desired Oracle Database version, run the following PL/SQL procedure to update the catalog metadata to reflect the new name for the PDB shards present in the configuration. This procedure must be executed for each Oracle Database 18c PDB shard.

    The first parameter to pdb_fixup is the value of db_unique_name in the CDB that contains the PDB shard. In Oracle Database 18c, this is the same as the shard name as shown by gdsctl config shard.

    The second parameter is the PDB name of the shard PDB as shown by show con_name in SQL*Plus when connected to the shard PDB.

    The pdb_fixup procedure will update the catalog metadata to make it compatible with the new naming method for PDB shards.

    SQL> connect sys/password as sysdba
    Connected.
    SQL> set serveroutput on
    SQL> execute gsmadmin_internal.dbms_gsm_pooladmin.pdb_fixup('cdb1', 'pdb1');
  • After upgrading all shard directors to the desired version, run the following GDSCTL command once for each CDB in the configuration to inform the shard directors of the password for the GSMROOTUSER in each CDB.
    GDSCTL> modify cdb -cdb CDB_name -pwd new_GSMROOTUSER_password

See Also:

Oracle Database Global Data Services Concepts and Administration Guide for information about the GDSCTL commands used with PDB shards

Oracle Multitenant Administrator's Guide for information about Oracle Multitenant

8.2 Oracle Sharding Prerequisites

Before you install any software, review these hardware, network, and operating system requirements for Oracle Sharding.

  • Hardware and operating system requirements for the shards are the same as those on which Oracle Database is supported. See your Oracle Database installation documentation for these requirements.

  • Hardware, software, and operating system requirements for the shard catalog and shard directors are the same as those on which the Global Data Services catalog and global service manager are supported. See Oracle Database Global Data Services Concepts and Administration Guide for these requirements.

  • The network requirement is low-latency GigE (gigabit Ethernet).

  • Port communication requirements are listed below. All of the following are required for using CREATE SHARD. When using ADD SHARD, items 4 and 5 are not relevant.

    1. Each and every shard must be able to reach each and every shard director's listener and ONS ports. The default listener port of the shard director is 1522, and the default ONS ports on most platforms are 6123 for the local ONS and 6234 for remote ONS. These shard director listener ports and the ONS ports must also be opened to the application/client tier, all of the shards, the shard catalog, and all other shard directors.

    2. Each and every shard must be able to reach the TNS Listener port of the shard catalog (both primary and standby).

    3. The TNS Listener port (default 1521) of each shard must be opened to shard directors and the shard catalog.

    4. On the primary and standby shard catalog database, the port used for -agent_port (default 8080) in the CREATE SHARDCATALOG command must be visible to all of the shards.

    5. The scheduler agent port on all of the shards must be visible to the shard catalog node. Run schagent -status on each shard to identify the port.
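
    For example, on a Linux host where firewalld manages the firewall (an assumption; use the firewall tooling appropriate to your platform), the default ports listed above might be opened as follows:

    $ sudo firewall-cmd --permanent --add-port=1521/tcp
    $ sudo firewall-cmd --permanent --add-port=1522/tcp
    $ sudo firewall-cmd --permanent --add-port=6123/tcp
    $ sudo firewall-cmd --permanent --add-port=6234/tcp
    $ sudo firewall-cmd --permanent --add-port=8080/tcp
    $ sudo firewall-cmd --reload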

8.3 Installing Oracle Database Software

Install Oracle Database on each system that will host the shard catalog or database shards.

Before installing Oracle Database, create an operating system user on all of the systems that will host the sharded database, shard catalog, or shard directors, and assign the user to the DBA group. Allow the user to run su, and make note of the credentials so that you can use them in later procedures.

See Oracle Database Installation Guide for Linux, or your platform’s installation guide, for information about configuring operating system users.
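
For example, on Linux the operating system user and groups might be created as follows; the oracle user name and the oinstall and dba group names are typical defaults, so adjust them to your own standards:

$ sudo groupadd oinstall
$ sudo groupadd dba
$ sudo useradd -g oinstall -G dba oracle
$ sudo passwd oracle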

  1. Download the Oracle Database installer on all of the systems that will host the shard catalog or the database shards.
  2. Install Oracle Database on all of the systems where you intend to host the shard catalog and sharded database.
    1. Run the installer on the first system.
      $ cd /u01/stage/database
      $ ./runInstaller

      As you step through the Oracle Database installation, be sure to select the following options on the noted screens:

      • On the Installation Option page, select Install database software only.

      • On the Grid Installation Options page, select Single instance database installation. Oracle RAC and Oracle RAC One Node are not supported in this release.

      • On the Database Edition page, select Enterprise Edition.

      • On the Installation Location page, use the same Oracle base and Software location values that you used when creating the environment scripts in the steps above.

      • On the Create Inventory page, accept the defaults.

      • On the Operating System Groups page, accept the defaults or make changes as appropriate for your environment.

      • On the Summary page, you can click Save Response File to create a file for silent installation of the Oracle Database software on the remaining hosts.

      • During installation, execute the orainstRoot.sh and root.sh scripts as root in a separate terminal when prompted.

    2. Optionally, using the response file you created in the first installation, run a silent install on each of the remaining hosts.

      Note that, after performing a silent install using a response file, running the database root.sh script might not prompt you interactively for any values and instead uses only default values (for example, for the local user bin directory). If non-default values are desired, specify just the -responseFile location when invoking the installer and omit the -silent option. Click through the installer screens, accepting the response file values, and then run the root scripts when prompted. During root script execution you are prompted for any values, and non-default values can be entered.
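
      For example, a silent installation using the saved response file might look like the following sketch; the staging directory and response file paths are illustrative:

      $ cd /u01/stage/database
      $ ./runInstaller -silent -responseFile /u01/stage/database/db_install.rsp

      When prompted, run the orainstRoot.sh and root.sh scripts as root, as in the interactive installation.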

See Also:

Oracle Database Installation Guide for Linux for more information about using the response file for silent installation of Oracle Database

8.4 Installing the Shard Director Software

Install the global service manager software on each system that you want to host a shard director.

  1. Download the Oracle Global Service Manager installer on all of the systems that will host the shard directors.
  2. See Oracle Database Global Data Services Concepts and Administration Guide for information about installing a global service manager.
  3. Optionally, using the response file you created in the first installation, run a silent install on each of the remaining shard director hosts.

    See Oracle Database Global Data Services Concepts and Administration Guide for more information about the silent install process.

    Note that, after performing a silent install using a response file, running the database root.sh script might not prompt you interactively for any values and instead uses only default values (for example, for the local user bin directory). If non-default values are desired, specify just the -responseFile location when invoking the installer and omit the -silent option. Click through the installer screens, accepting the response file values, and then run the root scripts when prompted. During root script execution you are prompted for any values, and non-default values can be entered.

8.5 Creating the Shard Catalog Database

Create an Oracle Database using DBCA to host the shard catalog.

  1. Connect to the host where you will host the shard catalog, and verify that the expected environment variables are set to the correct values.
    $ env |grep ORA
    ORACLE_BASE=/u01/app/oracle
    ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
  2. Create the oradata and fast_recovery_area directories.
    $ mkdir /u01/app/oracle/oradata
    $ mkdir /u01/app/oracle/fast_recovery_area
    
  3. Run DBCA to create the shard catalog database.
    $ dbca
    The Database Configuration Assistant opens.
  4. On the Database Operation page, select Create a database, and click Next.
  5. On the Creation Mode page, select Advanced configuration, and click Next.
  6. On the Deployment Type page, select Oracle Single Instance database as the database type, select the General Purpose or Transaction Processing template, and click Next.
  7. On the Database Identification page, enter the Global Database name and the shard catalog SID that you configured in the shard catalog host environment script, and click Next.
  8. On the Storage Option page, select the Use following for the database storage attributes option, select File System, select the Use Oracle-Managed Files (OMF) option, and click Next.
  9. On the Select Fast Recovery Option page, select Specify Fast Recovery Area, select Enable archiving, and click Next.
  10. On the Specify Network Configuration Details page, select Create a new listener, set the listener name and port number, and click Next.
    Make note of the listener name so that you can connect to the database later.
  11. Skip the Data Vault Option page.
  12. On the Configuration Options page Memory tab, select Use Automatic Shared Memory Management.
  13. On the Configuration Options page Character sets tab, select Use Unicode (AL32UTF8), and click Next.
  14. On the Management Option page, uncheck the Configure Enterprise Manager (EM) database express option, and click Next.
  15. On the User Credentials page, select the appropriate option for your business needs, enter the passwords, and click Next.

    Make a note of the passwords you entered because you will need them later.

  16. On the Creation Option page, select Create database, and click Next.
  17. On the Summary page, click Finish.
  18. After the database has been created, make a note of the Global Database Name, SID, and spfile values.
  19. If you plan to use Oracle Data Guard to protect the shard catalog database, click Password Management, unlock the SYSDG account, and make a note of the password you enter for this account.
  20. Click Close to exit DBCA.
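
If you prefer a scripted, repeatable setup, an equivalent silent DBCA invocation might look like the following sketch. The global database name, SID, passwords, and paths are illustrative assumptions; adjust them to match your environment, and create the listener separately (for example, with NETCA) if you do not create it through the DBCA interface.

$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName shardcat.example.com -sid shardcat \
  -createAsContainerDatabase false \
  -sysPassword sys_password -systemPassword system_password \
  -storageType FS -datafileDestination /u01/app/oracle/oradata \
  -recoveryAreaDestination /u01/app/oracle/fast_recovery_area \
  -enableArchive true -memoryMgmtType AUTO_SGA \
  -characterSet AL32UTF8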

8.6 Setting Up the Oracle Sharding Management and Routing Tier

The shard catalog, shard directors and shards must be configured to communicate with each other.

Before you begin, carefully read through the port requirements prerequisites listed in Oracle Sharding Prerequisites, and make any changes necessary before proceeding with the tasks in this section.

  1. On the shard catalog host, verify that the expected environment values are set to the correct values.
    $ env |grep ORA
    ORACLE_SID=shardcat
    ORACLE_BASE=/u01/app/oracle
    ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
  2. If the shard catalog listener is not already started, start the shard catalog listener.
    $ lsnrctl start
  3. Verify that DB_CREATE_FILE_DEST parameter is set on the shard catalog database.

    If the parameter is not set, then set it as shown in the following example. Note that open_links and open_links_per_instance are set to 16 for the purposes of the Oracle Sharding demo application.

    $ sqlplus / as sysdba
    
    SQL> alter system set db_create_file_dest='/u01/app/oracle/oradata' scope=both;
    SQL> alter system set open_links=16 scope=spfile;
    SQL> alter system set open_links_per_instance=16 scope=spfile;
    
  4. Shut down and restart the shard catalog database.
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area 4798283776 bytes
    Fixed Size                  4430760 bytes
    Variable Size            1006634072 bytes
    Database Buffers         3774873600 bytes
    Redo Buffers               12345344 bytes
    Database mounted.
    Database opened.
  5. Grant roles and privileges on the shard catalog database.
    SQL> set echo on
    SQL> set termout on
    SQL> spool setup_grants_privs.lst
    
    1. Unlock and set the password for the GSMCATUSER schema.

      This schema is used by the shard director when connecting to the shard catalog database.

      SQL> alter user gsmcatuser account unlock;
      SQL> alter user gsmcatuser identified by gsmcatuser_password;
    2. Create the administrator schema and grant privileges to it.

      The mysdbadmin account is an account in the shard catalog database which stores information on the sharding environment. The mysdbadmin account is the database administrator schema for making administrative changes to the sharded database environment. GDSCTL connects through this user to the database when GDSCTL commands are run, and the mysdbadmin user makes the necessary changes in the database.

      SQL> create user mysdbadmin identified by mysdbadmin_password;
      SQL> grant connect, create session, gsmadmin_role to mysdbadmin;
      SQL> grant inherit privileges on user SYS to GSMADMIN_INTERNAL;
      SQL> spool off
  6. Connect to a shard director host and start GDSCTL.

    The commands in the following steps are executed from a shard director host because the GDSCTL command line interface is installed there.

    The following example includes a sanity check that environment variables have been set properly.

    $ env |grep ORA
    ORACLE_BASE=/u01/app/oracle
    ORACLE_HOME=/u01/app/oracle/product/18.0.0/gsmhome_1
    
    $ gdsctl
    
  7. Create the shard catalog and configure the remote scheduler agent on the shard catalog.

    In this example, the sharded database is given the name cust_sdb, and two regions are created: region1 and region2. The regions are used as local availability domains for failover of the shards from the primary to physical standby. The regions are not meant to represent geographical regions in these examples.

    To create a shard catalog for system-managed sharding with Oracle GoldenGate replication:

    GDSCTL> create shardcatalog -database
     shard_catalog_host:port_number:shard_catalog_name  
     -user gsm_admin/mysdbadmin_password -repl OGG -repfactor 2 -sdb cust_sdb
     -region region1, region2 -agent_port port_num -agent_password rsa_password

    Note:

    For production systems, it is a good practice to use the default (120 chunks per shard), instead of specifying the -chunks parameter when creating the shard catalog.

    Because system-managed is the default sharding method, it does not need to be specified with the -sharding parameter.

    To create a shard catalog for a composite sharded database with Data Guard replication:

    GDSCTL> create shardcatalog -database
     shard_catalog_host:port_number:shard_catalog_name
     -chunks 60 -shardspace shardspace1  -sharding composite -sdb comp_shpool
     -protectmode maxavailability -user gsm_admin/mysdbadmin_password

    To create a shard catalog for a user-defined sharded database with Data Guard replication:

    GDSCTL> create shardcatalog -sdb udef_shpool -sharding user
     -protectmode maxavailability
     -database shard_catalog_host:port_number:shard_catalog_name
     -user gsm_admin/mysdbadmin_password -region region1, region2

    Note:

    The -agent_port and -agent_password parameters are not necessary if you are using the ADD SHARD deployment method.

  8. While you are connected to the shard director host, create and start the shard director.
    GDSCTL> add gsm -gsm sharddirector1 -listener listener_port -pwd gsmcatuser_password
     -catalog shard_catalog_host:1521:shardcat -region region1
    
    GDSCTL> start gsm -gsm sharddirector1

    Repeat steps 6 and 8 on each shard director host. Replace the shard director name and region name with appropriate values for each host.

  9. Using GDSCTL, set the operating system credentials.
    GDSCTL> add credential -credential credential_name -osaccount os_account_name
     -ospassword os_password
    GDSCTL> exit

    Note:

    This step is not necessary if you are using the ADD SHARD deployment method.

    These credentials are the operating system user name and password on the shard hosts (not the catalog host), and the credentials are used by the remote scheduler agent to run jobs on the hosts to set up the shards using DBCA, NETCA, and the like.

    Repeat this step if a different operating system credential will be used for each host.

  10. Connect to each of the shard hosts, register remote scheduler agents on them, and create directories for oradata and fast_recovery_area on them.

    Note:

    This step is only required if you are using the CREATE SHARD method described in Introduction to Sharded Database Deployment. If you are using the ADD SHARD method you can skip this step.

    Execute the following statements on each of the machines that will host a shard.

    Note that the os_account_name is the account used for Oracle software installation, shard_host is the host name or IP address of the machine hosting the shard, the shard_catalog_host is the host name or IP address of the shard catalog host, and port_num is the remote scheduler agent port number as specified in the -agent_port parameter for create shardcatalog in step 7 above. The remote scheduler agent prompts you for the agent registration password specified in the -agent_password parameter of create shardcatalog in step 7 above.

    $ ssh os_account_name@shard_host
    passwd: os_password
    
    $ schagent -start
    $ schagent -status
    $ schagent -registerdatabase shard_catalog_host port_num
    
    $ mkdir /u01/app/oracle/oradata
    $ mkdir /u01/app/oracle/fast_recovery_area

See Also:

Oracle Database Global Data Services Concepts and Administration Guide for information about usage and options for the GDSCTL commands.

8.7 Creating and Deploying a System-Managed Sharded Database

The following topics describe the tasks for creating and deploying a system-managed sharded database.

8.7.1 Deploying a System-Managed Sharded Database

To deploy a system-managed sharded database you create shardgroups and shards, create and configure the databases to be used as shards, execute the DEPLOY command, and create role-based global services.

System-managed sharding does not require the user to map data to shards. Data is automatically distributed across shards using partitioning by consistent hash. The partitioning algorithm evenly and randomly distributes data across shards. For more conceptual information about the system-managed sharded database, see System-Managed Sharding.

  1. If you are using the ADD SHARD method described in Introduction to Sharded Database Deployment you must first create the databases to be used as shards on their respective hosts.

    The shard databases must have the following characteristics:

    • They must have an associated TNS Listener on each host

    • The GSMUSER account must be unlocked with a known password

    • SYSDG and SYSBACKUP privileges must be granted to GSMUSER

    • The primary and standby databases must be configured as such

    • Redo apply should be set up between the corresponding primary and standby databases

    • Flashback and force logging should be enabled

    • The compatible parameter must be set to at least 12.2.0

    • A server parameter file (SPFILE) must be in use

    • A DATA_PUMP_DIR directory object must be created in each database and must point to a valid directory
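
    A minimal sketch of preparing one of these databases, covering several of the items above, is shown here. The GSMUSER password and the DATA_PUMP_DIR path are illustrative, and the Data Guard configuration between the primary and standby databases is not shown.

    SQL> alter user gsmuser account unlock;
    SQL> alter user gsmuser identified by GSMUSER_password;
    SQL> grant sysdg, sysbackup to gsmuser;
    SQL> alter database force logging;
    SQL> alter database flashback on;
    SQL> create or replace directory DATA_PUMP_DIR as '/u01/app/oracle/dpdump';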

    You must then validate that a database is correctly set up for sharding. Run the following statements against each database, while logged in as SYS, before adding it to the configuration.

    SQL> set serveroutput on
    SQL> execute DBMS_GSM_FIX.validateShard

    Screen output will include INFO, WARNING, and ERROR information that needs to be analyzed for any issues. All WARNING and ERROR messages must be resolved. Re-run validateShard() after making changes to confirm the configuration.

  2. Connect to the shard director host.
    $ ssh os_user@shard_director1_host
    
  3. Set the global service manager for the current session, and specify the credentials to administer it.
    $ gdsctl
    GDSCTL> set gsm -gsm sharddirector1
    GDSCTL> connect mysdbadmin/mysdbadmin_password
  4. Add a shardgroup for the primary shards.

    In this example the shardgroup is named primary_shardgroup, and is part of a Data Guard replication solution.

    GDSCTL> add shardgroup -shardgroup primary_shardgroup -deploy_as primary
     -region region1

    The following example shows the shardgroup creation for an Oracle GoldenGate replication solution.

    GDSCTL> add shardgroup -shardgroup shardgroup1 -region region1 -repfactor 2
  5. Add a shardgroup for the standby shards.

    In this example the shardgroup is named standby_shardgroup, and is created for an Active Data Guard active standby using the -deploy_as parameter.

    GDSCTL> add shardgroup -shardgroup standby_shardgroup -deploy_as active_standby
     -region region2 

    Adding a second shardgroup in an Oracle GoldenGate configuration might look like the following example.

    GDSCTL> add shardgroup -shardgroup shardgroup2 -region region2 -repfactor 2 
  6. Add each shard’s host address to the valid node checking for registration (VNCR) list in the catalog, then create or add the shard in either the primary or standby shardgroup, as shown in the following examples.

    Note:

    The valid node checking for registration (VNCR) feature provides the ability to configure and dynamically update a set of IP addresses, host names, or subnets from which registration requests are allowed by the shard directors. Database instance registration with a shard director succeeds only when the request originates from a valid node. By default, the shard management tier (based on the Oracle Global Data Services framework) automatically adds a VNCR entry for the host on which a remote database is running each time create shard or add shard is executed. The automation (called auto-VNCR) finds the public IP address of the target host and automatically adds a VNCR entry for that IP address. If the host has multiple public IP addresses, then the address on which the database registers may not be the same as the address that was added using auto-VNCR and, as a result, registration may be rejected. If the target database host has multiple public IP addresses, it is advisable that you configure VNCR manually for this host using the add invitednode or add invitedsubnet commands in GDSCTL.

    If there are multiple net-cards on the target host (/sbin/ifconfig returns more than one public interface), use add invitednode to be safe (after finding out which interface will be used to route packets).

    If there is any doubt about registration, then use config vncr and use add invitednode as necessary. There is no harm in doing this, because if the node is added already, auto-VNCR ignores it, and if you try to add it after auto-VNCR already added it, you will get a warning stating that it already exists.

    The following example shows how to create four shards using the CREATE SHARD command, using a Data Guard high availability solution, two of which are in the primary shardgroup and two in the standby shardgroup. The credential_name is the operating system credential you created in the shard catalog for the destination host.

    While creating the shards, you can also set the SYS password in the create shard using -sys_password as shown in the following example. This sets the SYS password after the shards are created when running DEPLOY.

    There are other optional parameters for CREATE SHARD that allow you to customize the database parameters, storage and file locations, listener port numbers, and so on, which are documented in the Oracle Database Global Data Services Concepts and Administration Guide appendices.

    GDSCTL> add invitednode shard_host_1 
    GDSCTL> create shard -shardgroup primary_shardgroup -destination shard_host_1
     -credential credential_name -sys_password sys_password
    
    GDSCTL> add invitednode shard_host_2
    GDSCTL> create shard -shardgroup standby_shardgroup -destination shard_host_2
     -credential credential_name -sys_password sys_password
    GDSCTL> add invitednode shard_host_3 
    GDSCTL> create shard -shardgroup primary_shardgroup -destination shard_host_3
     -credential credential_name -sys_password sys_password
    GDSCTL> add invitednode shard_host_4
    GDSCTL> create shard -shardgroup standby_shardgroup -destination shard_host_4
     -credential credential_name -sys_password sys_password

    In an Oracle GoldenGate replication solution, the shardgroups would not be designated as primary and standby because replication is handled at the chunk level and distributed among the shards within a shardgroup. However, a disaster recovery best practice is to replicate a shardgroup to one or more data centers. The following is an example of creating a shard with Oracle GoldenGate replication.

    GDSCTL> create shard -shardgroup shardgroup -destination shard_host
     -credential oracle_cred -netparam /home/oracle/netca_dbhome.rsp
     -gg_service shard_host_1:$ADMINSRVR_PORT/$GGHOME 
     -gg_password ggadmin_password
     -dbparamfile /home/oracle/dbparams01.tmp
     -dbtemplatefile /home/oracle/sharddb01.dbt

    If you are using the ADD SHARD method described in Introduction to Sharded Database Deployment, use the following command instead of the CREATE SHARD commands in the example above. If the shard database to be added is a pluggable database (PDB), you must use the -cdb option to ADD SHARD to specify which container database (CDB) the PDB shard is in. In addition, ADD CDB must be used before the ADD SHARD command to add the CDB to the catalog. See Oracle Database Global Data Services Concepts and Administration Guide for the syntax for ADD CDB and ADD SHARD. Note that in Oracle Database 18c, only one PDB in each CDB is allowed to be a shard.

    GDSCTL> add shard -shardgroup shgrp1
     -connect shard_host:TNS_listener_port/shard_database_name
     -pwd GSMUSER_password
  7. Check the configuration from a shard director.

    Note that the shard names, sh1, sh2, sh3, and sh4, are system generated shard names when the CREATE SHARD method is used.

    GDSCTL> config
    Regions
    -----------------------
    region1
    region2
    
    GSMs
    -----------------------
    sharddirector1
    sharddirector2
    
    Sharded Database
    -----------------------
    cust_sdb
    
    Databases
    -----------------------
    sh1
    sh2
    sh3
    sh4
    
    Shard Groups
    -----------------------
    primary_shardgroup
    standby_shardgroup
    
    Shard spaces
    -----------------------
    shardspaceora
    
    Services
    -----------------------
    
    GDSCTL pending requests
    -----------------------
    Command          Object          Status
    -------          ------          ------
    
    Global properties
    -----------------------
    Name: oradbcloud
    Master GSM: sharddirector1
    DDL sequence #: 0
    
    GDSCTL> config shardspace
    SHARDSPACE       Chunks
    ----------       ------
    shardspaceora    12
    
    GDSCTL> config shardgroup
    Shard Group               Chunks    Region    SHARDSPACE
    -----------               ------    ------    ----------
    primary_shardgroup        12        region1   shardspaceora
    standby_shardgroup        12        region2   shardspaceora
    
    GDSCTL> config vncr
    Name    Group ID 
    ----    --------
    shard_host_1 
    shard_host_2
    shard_host_3
    shard_host_4
    shard_catalog_host_IP
    
    GDSCTL> config shard
    Name     Shard Group          Status    State    Region    Availability
    ----     -----------          ------    -----    ------    ------------
    sh1      primary_shardgroup   U         none     region1   -
    sh2      standby_shardgroup   U         none     region2   -
    sh3      primary_shardgroup   U         none     region1   -
    sh4      standby_shardgroup   U         none     region2   -
  8. Run the DEPLOY command to create the shards and the replicas.

    The DEPLOY command takes some time to run, approximately 15 to 30 minutes.

    GDSCTL> deploy

    When the CREATE SHARD method is used to create the shards, the DEPLOY command creates the primary and standby shards using DBCA. Archivelog and flashback, which are required for the Fast-Start Failover observers to perform standby reinstantiation, are enabled for all of the shards.

    Once the primary and standby shards are built, the DEPLOY command configures the Data Guard Broker with Fast-Start Failover enabled. The Fast-Start Failover observers are automatically started on the standby group’s shard director (sharddirector2 in this example).

  9. Verify that all of the shards are deployed.
    GDSCTL> config shard
    Name    Shard Group           Status    State        Region    Availability
    ----    -----------           ------    -----        ------    ------------
    sh1      primary_shardgroup   Ok        Deployed     region1   ONLINE
    sh2      standby_shardgroup   Ok        Deployed     region2   READ_ONLY
    sh3      primary_shardgroup   Ok        Deployed     region1   ONLINE
    sh4      standby_shardgroup   Ok        Deployed     region2   READ_ONLY
  10. Verify that all of the shards are registered.
    GDSCTL> databases
    Database: "sh1" Registered: Y State: Ok ONS: N. Role: PRIMARY
    Instances: 1 Region: region1
       Registered instances: 
         cust_sdb%1
    Database: "sh2" Registered: Y State: Ok ONS: N. Role: PH_STNDBY
    Instances: 1 Region: region2
       Registered instances: 
         cust_sdb%11
    Database: "sh3" Registered: Y State: Ok ONS: N. Role: PRIMARY
    Instances: 1 Region: region1
       Registered instances: 
         cust_sdb%21
    Database: "sh4" Registered: Y State: Ok ONS: N. Role: PH_STNDBY
    Instances: 1 Region: region2
       Registered instances: 
         cust_sdb%31
  11. Check the configuration of a shard.
    GDSCTL> config shard -shard sh1
    Name: sh1
    Shard Group: primary_shardgroup
    Status: Ok
    State: Deployed
    Region: region1
    Connection string: shard_host_1:TNS_listener_port/sh1:dedicated
    SCAN address:
    ONS remote port: 0
    Disk Threshold, ms: 20
    CPU Threshold, %: 75
    Version: 18.0.0.0
    Last Failed DDL: 
    DDL Error: ---
    Failed DDL id:
    Availability: ONLINE
    
    Supported services
    -----------------------
    Name                Preferred Status
    ----                --------- ------
  12. Add a global service that runs on all of the primary shards.

    The oltp_rw_srvc global service is a global data service that a client can use to connect to the sharded database. The oltp_rw_srvc service runs the OLTP transactions on the primary shards. Likewise, the oltp_ro_srvc global service is created to run read-only workload on the standby shards.

    GDSCTL> add service -service oltp_rw_srvc -role primary 
    
    GDSCTL> config service 
    
    Name          Network name                      Pool         Started Preferred all
    ----          ------------                      ----         ------- -------------
    oltp_rw_srvc  oltp_rw_srvc.cust_sdb.oradbcloud  cust_sdb     No      Yes
  13. Start the oltp_rw_srvc global service.
    GDSCTL> start service -service oltp_rw_srvc
    
    GDSCTL> status service
    Service "oltp_rw_srvc.cust_sdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "cust_sdb%1", name: "sh1", db: "sh1", region: "region1", status: ready.
       Instance "cust_sdb%21", name: "sh3", db: "sh3", region: "region1", status: ready.
  14. Add a global service for the read-only workload to run on the standby shards.
    GDSCTL> add service -service oltp_ro_srvc -role physical_standby
    
    GDSCTL> config service 
    
    Name          Network name                      Pool         Started Preferred all
    ----          ------------                      ----         ------- -------------
    oltp_rw_srvc  oltp_rw_srvc.cust_sdb.oradbcloud  cust_sdb     Yes     Yes
    oltp_ro_srvc  oltp_ro_srvc.cust_sdb.oradbcloud  cust_sdb     No      Yes
    
  15. Start the read-only service, and verify the status of the global services.
    GDSCTL> start service -service oltp_ro_srvc
    
    GDSCTL> status service
    Service "oltp_ro_srvc.cust_sdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "cust_sdb%11", name: "sh2", db: "sh2", region: "region2", status: ready.
       Instance "cust_sdb%31", name: "sh4", db: "sh4", region: "region2", status: ready.
    Service "oltp_rw_srvc.cust_sdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "cust_sdb%1", name: "sh1", db: "sh1", region: "region1", status: ready.
       Instance "cust_sdb%21", name: "sh3", db: "sh3", region: "region1", status: ready.

8.7.2 Creating a Schema for a System-Managed Sharded Database

Create the schema user, tablespace set, sharded tables, and duplicated tables for the sharded database. Verify that the DDLs are propagated to all of the shards, and, while connected to the shards, verify the automatic Data Guard Broker configuration with Fast-Start Failover.

  1. Connect to the shard catalog database, create the application schema user, and grant privileges and roles to the user.

    In this example, the application schema user is called app_schema.

    $ sqlplus / as sysdba
    
    SQL> alter session enable shard ddl;
    SQL> create user app_schema identified by app_schema_password;
    SQL> grant all privileges to app_schema;
    SQL> grant gsmadmin_role to app_schema;
    SQL> grant select_catalog_role to app_schema;
    SQL> grant connect, resource to app_schema;
    SQL> grant dba to app_schema;
    SQL> grant execute on dbms_crypto to app_schema;
  2. Create a tablespace set for the sharded tables.
    SQL> CREATE TABLESPACE SET TSP_SET_1 using template
     (datafile size 100m autoextend on next 10M maxsize unlimited
      extent management local segment space management auto);
    

    Specifying the shardspace is optional when creating the tablespace set. If the shardspace is not specified in the command, the default shardspace, shardspaceora, is used.
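
    For example, to place a tablespace set in a particular shardspace explicitly (here the default shardspace, with an illustrative tablespace set name), the command might look like the following sketch:

    SQL> CREATE TABLESPACE SET TSP_SET_2 IN SHARDSPACE shardspaceora
     using template
     (datafile size 100m autoextend on next 10M maxsize unlimited
      extent management local segment space management auto);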

  3. If you use LOBs in a column, you can specify a tablespace set for the LOBs.
    SQL> CREATE TABLESPACE SET LOBTS1;

    Note:

    Tablespace sets for LOBs cannot be specified at the subpartition level in system-managed sharding.

  4. Create a tablespace for the duplicated tables.

    In this example the duplicated table is the Products table in the sample Customers-Orders-Products schema.

    SQL> CREATE TABLESPACE products_tsp datafile size 100m
     autoextend on next 10M maxsize unlimited
     extent management local uniform size 1m; 
    
  5. Create a sharded table for the root table.

    In this example, the root table is the Customers table in the sample Customers-Orders-Products schema.

    SQL> CONNECT app_schema/app_schema_password
    
    SQL> ALTER SESSION ENABLE SHARD DDL;
    
    SQL> CREATE SHARDED TABLE Customers
      (
        CustId      VARCHAR2(60) NOT NULL,
        FirstName   VARCHAR2(60),
        LastName    VARCHAR2(60),
        Class       VARCHAR2(10),
        Geo         VARCHAR2(8),
        CustProfile VARCHAR2(4000),
        Passwd      RAW(60),
        CONSTRAINT pk_customers PRIMARY KEY (CustId),
        CONSTRAINT json_customers CHECK (CustProfile IS JSON)
      ) TABLESPACE SET TSP_SET_1
      PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;

    Note:

    If any columns contain LOBs, you can include the tablespace set in the parent table creation statement, as shown here.

    SQL> CREATE SHARDED TABLE Customers
      (
        CustId      VARCHAR2(60) NOT NULL,
        FirstName   VARCHAR2(60),
        LastName    VARCHAR2(60),
        Class       VARCHAR2(10),
        Geo         VARCHAR2(8),
        CustProfile VARCHAR2(4000),
        Passwd      RAW(60),
        image       BLOB,
        CONSTRAINT pk_customers PRIMARY KEY (CustId),
        CONSTRAINT json_customers CHECK (CustProfile IS JSON)
      ) TABLESPACE SET TSP_SET_1
        LOB(image) store as (TABLESPACE SET LOBTS1) 
      PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;
  6. Create a sharded table for the other tables in the table family.

    In this example, sharded tables are created for the Orders and LineItems tables in the sample Customers-Orders-Products schema.

    The Orders sharded table is created first:

    SQL> CREATE SHARDED TABLE Orders
      (
        OrderId     INTEGER NOT NULL,
        CustId      VARCHAR2(60) NOT NULL,
        OrderDate   TIMESTAMP NOT NULL,
        SumTotal    NUMBER(19,4),
        Status      CHAR(4),
        CONSTRAINT  pk_orders PRIMARY KEY (CustId, OrderId),
        CONSTRAINT  fk_orders_parent FOREIGN KEY (CustId) 
        REFERENCES Customers ON DELETE CASCADE
      ) PARTITION BY REFERENCE (fk_orders_parent);
    

    Create the sequence used for the OrderId column.

    SQL> CREATE SEQUENCE Orders_Seq;

    Create a sharded table for LineItems

    SQL> CREATE SHARDED TABLE LineItems
      (
        OrderId     INTEGER NOT NULL,
        CustId      VARCHAR2(60) NOT NULL,
        ProductId   INTEGER NOT NULL,
        Price       NUMBER(19,4),
        Qty         NUMBER,
        CONSTRAINT  pk_items PRIMARY KEY (CustId, OrderId, ProductId),
        CONSTRAINT  fk_items_parent FOREIGN KEY (CustId, OrderId)
        REFERENCES Orders ON DELETE CASCADE
      ) PARTITION BY REFERENCE (fk_items_parent);
  7. Create any required duplicated tables.

    In this example, the Products table is a duplicated object.

    SQL> CREATE DUPLICATED TABLE Products
      (
        ProductId  INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
        Name       VARCHAR2(128),
        DescrUri   VARCHAR2(128),
        LastPrice  NUMBER(19,4)
      ) TABLESPACE products_tsp;
  8. From the shard director host, verify that there were no failures during the creation of the tablespaces.
    GDSCTL> show ddl
    id    DDL Text                                  Failed shards
    --    --------                                  -------------
    5     grant connect, resource to app_schema
    6     grant dba to app_schema
    7     grant execute on dbms_crypto to app_s... 
    8     CREATE TABLESPACE SET  TSP_SET_1 usin...
    9     CREATE TABLESPACE products_tsp datafi...
    10    CREATE SHARDED TABLE Customers (   Cu...
    11    CREATE SHARDED TABLE Orders (   Order...
    12    CREATE SEQUENCE Orders_Seq;
    13    CREATE SHARDED TABLE LineItems (   Or...
    14    CREATE MATERIALIZED VIEW "APP_SCHEMA"...
    

    Note:

    The show ddl command output might be truncated. You can run SELECT ddl_text FROM gsmadmin_internal.ddl_requests on the catalog to see the full text of the statements.
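
    For example, while connected to the shard catalog database:

    SQL> set long 10000
    SQL> set pagesize 200
    SQL> SELECT ddl_text FROM gsmadmin_internal.ddl_requests;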
  9. Verify that there were no DDL errors on each of the shards.

    Run the config shard and config chunks commands on each shard in your configuration.

    GDSCTL> config shard -shard sh1
    Name: sh1
    Shard Group: primary_shardgroup
    Status: Ok
    State: Deployed
    Region: region1
    Connection string: shard_host_1:1521/sh1_host:dedicated
    SCAN address: 
    ONS remote port: 0
    Disk Threshold, ms: 20
    CPU Threshold, %: 75
    Version: 18.0.0.0
    Last Failed DDL: 
    DDL Error: ---
    Failed DDL id: 
    Availability: ONLINE
    
    Supported services
    ------------------------
    Name                                          Preferred Status    
    ----                                          --------- ------    
    oltp_ro_srvc                                  Yes       Enabled   
    oltp_rw_srvc                                  Yes       Enabled  
    
    GDSCTL> config chunks
    Chunks
    ------------------------
    Database                      From      To        
    --------                      ----      --        
    sh1                           1         6         
    sh2                           1         6         
    sh3                           7         12        
    sh4                           7         12 
  10. Verify that the tablespaces of the tablespace set you created for the sharded table family and the tablespaces you created for the duplicated tables are created on all of the shards.

    The number of tablespaces in the tablespace set is based on the number of chunks you specified in the create shardcatalog command.

    The tablespace set with the first 6 chunks of the 12 that were specified in the shard catalog creation example, and the duplicated Products tablespace is shown in the following example.

    $ sqlplus / as sysdba
    
    SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files
     order by tablespace_name;
    
    TABLESPACE_NAME              MB
    ----------------------- ----------
    C001TSP_SET_1               100
    C002TSP_SET_1               100
    C003TSP_SET_1               100
    C004TSP_SET_1               100
    C005TSP_SET_1               100
    C006TSP_SET_1               100
    PRODUCTS_TSP                100
    SYSAUX                      650
    SYSTEM                      890
    SYS_SHARD_TS                100
    TSP_SET_1                   100
    
    TABLESPACE_NAME              MB
    ----------------------- ----------
    UNDOTBS1                    105
    USERS                         5
    
    13 rows selected.
    

    Repeat this step on all of the shards in your configuration.

  11. Verify that the chunks and chunk tablespaces were created on all of the shards.
    SQL> set linesize 140
    SQL> column table_name format a20
    SQL> column tablespace_name format a20
    SQL> column partition_name format a20
    SQL> show parameter db_unique_name
    
    NAME             TYPE        VALUE
    ---------------- ----------- ------------------------------
    db_unique_name   string      sh1
    
    SQL> select table_name, partition_name, tablespace_name
     from dba_tab_partitions
     where tablespace_name like 'C%TSP_SET_1'
     order by tablespace_name;
    
    
    TABLE_NAME       PARTITION_NAME   TABLESPACE_NAME
    ---------------- ---------------- --------------------
    ORDERS           CUSTOMERS_P1     C001TSP_SET_1
    CUSTOMERS        CUSTOMERS_P1     C001TSP_SET_1
    LINEITEMS        CUSTOMERS_P1     C001TSP_SET_1
    CUSTOMERS        CUSTOMERS_P2     C002TSP_SET_1
    LINEITEMS        CUSTOMERS_P2     C002TSP_SET_1
    ORDERS           CUSTOMERS_P2     C002TSP_SET_1
    CUSTOMERS        CUSTOMERS_P3     C003TSP_SET_1
    ORDERS           CUSTOMERS_P3     C003TSP_SET_1
    LINEITEMS        CUSTOMERS_P3     C003TSP_SET_1
    ORDERS           CUSTOMERS_P4     C004TSP_SET_1
    CUSTOMERS        CUSTOMERS_P4     C004TSP_SET_1
    
    TABLE_NAME       PARTITION_NAME   TABLESPACE_NAME
    ---------------- ---------------- --------------------
    LINEITEMS        CUSTOMERS_P4     C004TSP_SET_1
    CUSTOMERS        CUSTOMERS_P5     C005TSP_SET_1
    LINEITEMS        CUSTOMERS_P5     C005TSP_SET_1
    ORDERS           CUSTOMERS_P5     C005TSP_SET_1
    CUSTOMERS        CUSTOMERS_P6     C006TSP_SET_1
    LINEITEMS        CUSTOMERS_P6     C006TSP_SET_1
    ORDERS           CUSTOMERS_P6     C006TSP_SET_1
    18 rows selected.
    

    Repeat this step on all of the shards in your configuration.

  12. Connect to the shard catalog database and verify that the chunks are uniformly distributed.
    $ sqlplus / as sysdba
    
    SQL> set echo off
    SQL> SELECT a.name Shard, COUNT(b.chunk_number) Number_of_Chunks
      FROM gsmadmin_internal.database a, gsmadmin_internal.chunk_loc b
      WHERE a.database_num=b.database_num
      GROUP BY a.name
      ORDER BY a.name;
    
    SHARD                          NUMBER_OF_CHUNKS
    ------------------------------ ----------------
    sh1                                           6
    sh2                                           6
    sh3                                           6
    sh4                                           6
  13. Verify that the sharded and duplicated tables were created.

    Log in as the application schema user on the shard catalog database and each of the shards.

    The following example shows querying the tables on a database shard as the app_schema user.

    $ sqlplus app_schema/app_schema_password
    Connected.
    
    SQL> select table_name from user_tables;
    
    TABLE_NAME
    -----------------------------------------------------------------------
    CUSTOMERS
    ORDERS
    LINEITEMS
    PRODUCTS
    
    4 rows selected.
  14. Verify that the Data Guard Broker automatic Fast-Start Failover configuration was done.
    $ ssh os_username@shard_host_1
    $ dgmgrl
    
    DGMGRL> connect sys/password
    Connected to "sh1"
    Connected as SYSDG.
    DGMGRL> show configuration
    
    Configuration - sh1
    
      Protection Mode: MaxPerformance
      Members:
      sh1 - Primary database
        sh2 - (*) Physical standby database 
    
    Fast-Start Failover: ENABLED
    
    Configuration Status:
    SUCCESS   (status updated 15 seconds ago)
    
    DGMGRL> show database sh1
    
    Database - sh1
    
      Role:               PRIMARY
      Intended State:     TRANSPORT-ON
      Instance(s):
        sh1
    
    Database Status:
    SUCCESS
    
    DGMGRL> show database sh2
    
    Database - sh2
    
      Role:               PHYSICAL STANDBY
      Intended State:     APPLY-ON
      Transport Lag:      0 seconds (computed 0 seconds ago)
      Apply Lag:          0 seconds (computed 0 seconds ago)
      Average Apply Rate: 2.00 KByte/s
      Real Time Query:    ON
      Instance(s):
        sh2
    
    Database Status:
    SUCCESS
    
    DGMGRL> show fast_start failover
    
    Fast-Start Failover: ENABLED
    
      Threshold:          30 seconds
      Target:             sh2
      Observer:           shard_director_host
      Lag Limit:          30 seconds
      Shutdown Primary:   TRUE
      Auto-reinstate:     TRUE
      Observer Reconnect: (none)
      Observer Override:  FALSE
    
    Configurable Failover Conditions
      Health Conditions:
        Corrupted Controlfile          YES
        Corrupted Dictionary           YES
        Inaccessible Logfile            NO
        Stuck Archiver                  NO
        Datafile Write Errors          YES
    
      Oracle Error Conditions:
        (none)
  15. Locate the Fast-Start Failover observers.

    Connect to the shard catalog database and run the following commands:

    $ sqlplus / as sysdba
    
    SQL> SELECT observer_state FROM gsmadmin_internal.broker_configs;
    
    OBSERVER_STATE
    --------------------------------------------------------------------------------
    GSM server SHARDDIRECTOR2. Observer started. 
    Log files at '/u01/app/oracle/product/18.0.0/gsmhome_1/network/admin/
    gsm_observer_1.log'.
    
    GSM server SHARDDIRECTOR2. Observer started. 
    Log files at '/u01/app/oracle/product/18.0.0/gsmhome_1/network/admin/
    gsm_observer_2.log'.

See Also:

Oracle Database Global Data Services Concepts and Administration Guide for information about GDSCTL command usage

8.7.3 System-Managed SDB Demo Application

The system-managed sharded database (SDB) demo application simulates the workload of an online retail store. Use it to validate the setup of any system-managed (automatic sharding) SDB configuration. The demo application also provides a practical example of sharding concepts for administrators and developers new to database sharding.

The demo application assumes that a system-managed SDB environment has already been created, along with the Customers table family. The environment can have any number of chunks and shards (database nodes). When run, the application first populates the Products table and then starts a one-hour workload that the administrator can pause at any time. The workload includes four types of transactions: creating a customer order, looking up a customer's list of orders, adding a new product, and running a multi-shard query with report generation. Together these exercise all aspects of the sharded database configuration.

You can download the demo application, along with a README file that describes how to run and monitor it, from My Oracle Support Document 2184500.1.

8.8 Creating and Deploying a User-Defined SDB

The following topics describe the tasks for creating and deploying a user-defined SDB.

8.8.1 Deploying a User-Defined SDB

The following procedure describes how to deploy a user-defined sharded database using the ADD SHARD command and an Oracle Active Data Guard high availability solution.

User-defined sharding allows the user to map data to shards. For more conceptual information about the user-defined sharding method, see User-Defined Sharding.

  1. Because this procedure describes using the ADD SHARD method as detailed in Introduction to Sharded Database Deployment, you must first create the databases to be used as shards on their respective hosts.

    The shard databases must have the following characteristics:

    • They must have an associated TNS Listener on each host

    • The GSMUSER account must be unlocked with a known password

    • SYSDG and SYSBACKUP privileges must be granted to GSMUSER

    • The primary and standby databases must be configured as such

    • Redo apply should be set up between the corresponding primary and standby databases

    • Flashback and force logging should be enabled

    • The compatible parameter must be set to at least 12.2.0

    • A server parameter file (SPFILE) must be in use

    • A DATA_PUMP_DIR directory object must be created in each database and must point to a valid directory
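
    The following is a minimal sketch, run as SYSDBA on each prospective shard, of one way to satisfy several of these prerequisites. The GSMUSER password, the directory path, and the assumption that archiving and a fast recovery area are already configured are illustrative only.

    SQL> -- Unlock GSMUSER and grant the privileges required by the shard director
    SQL> ALTER USER gsmuser IDENTIFIED BY gsmuser_password ACCOUNT UNLOCK;
    SQL> GRANT SYSDG, SYSBACKUP TO gsmuser;

    SQL> -- Flashback (requires a configured fast recovery area) and force logging
    SQL> ALTER DATABASE FLASHBACK ON;
    SQL> ALTER DATABASE FORCE LOGGING;

    SQL> -- DATA_PUMP_DIR must point to a valid directory (path is an assumption)
    SQL> CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/u01/app/oracle/dpdump';

    SQL> -- Confirm the remaining settings
    SQL> SHOW PARAMETER compatible
    SQL> SHOW PARAMETER spfile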

    You must then validate that a database is correctly set up for sharding. Execute the following against each database before adding it to the configuration.

    SQL> set serveroutput on
    SQL> execute DBMS_GSM_FIX.validateShard

    Screen output will include INFO, WARNING, and ERROR information that needs to be analyzed for any issues. All WARNING and ERROR messages must be resolved. Re-run validateShard() after making changes to confirm the configuration.

  2. Connect to the shard director host.
    $ ssh os_user@shard_director1_host
    
  3. Set the global service manager for the current session, and specify the credentials to administer it.
    $ gdsctl
    GDSCTL> set gsm -gsm sharddirector1
    GDSCTL> connect mysdbadmin/mysdbadmin_password
  4. Add shardspaces to the sharded database configuration for each customized grouping of shards your business case requires.

    The shardspaces contain the primary shard database and one or more active standbys.

    In this example the shardspaces are named shspace1 and shspace2. You can choose your own names.

    GDSCTL> add shardspace -shardspace shspace1 -protectmode maxavailability
    GDSCTL> add shardspace -shardspace shspace2 -protectmode maxavailability
  5. Add each shard’s host address to the valid node checking for registration (VNCR) list in the catalog, then add each shard to its shardspace, deployed as either the primary or an active standby, as shown in the following example.

    Note:

    The valid node checking for registration (VNCR) feature provides the ability to configure and dynamically update a set of IP addresses, host names, or subnets from which registration requests are allowed by the shard directors. Database instance registration with a shard director succeeds only when the request originates from a valid node. By default, the shard management tier (based on the Oracle Global Data Services framework) automatically adds a VNCR entry for the host on which a remote database is running each time create shard or add shard is run. The automation (called auto-VNCR) finds the public IP address of the target host and automatically adds a VNCR entry for that IP address. If the host has multiple public IP addresses, then the address on which the database registers may not be the same as the address that was added using auto-VNCR and, as a result, registration may be rejected. If the target database host has multiple public IP addresses, it is advisable that you configure VNCR manually for this host using the add invitednode or add invitedsubnet commands in GDSCTL.

    If there are multiple network interfaces on the target host (/sbin/ifconfig returns more than one public interface), use add invitednode to be safe (after determining which interface will be used to route packets).

    If there is any doubt about registration, run config vncr and use add invitednode as necessary. There is no harm in doing this: if the node is already added, auto-VNCR ignores it, and if you try to add it after auto-VNCR has already added it, you get a warning stating that it already exists.

    The following example shows how to add four shards using the ADD SHARD command: the first two are the primary and active standby in shardspace shspace1, and the second two are the primary and active standby in shardspace shspace2. Note that the primaries are assigned region1 and the standbys are assigned region2.

    GDSCTL> add invitednode shard_host_1 
    GDSCTL> add shard -connect shard_host_1:1521/shard_database_name
     -shardspace shspace1 -deploy_as primary -pwd GSMUSER_password -region region1
    
    GDSCTL> add invitednode shard_host_2
    GDSCTL> add shard -connect shard_host_2:1521/shard_database_name
     -shardspace shspace1 -deploy_as active_standby -pwd GSMUSER_password
     -region region2
    
    GDSCTL> add invitednode shard_host_3 
    GDSCTL> add shard -connect shard_host_3:1521/shard_database_name
     -shardspace shspace2 -deploy_as primary -pwd GSMUSER_password -region region1
    
    GDSCTL> add invitednode shard_host_4
    GDSCTL> add shard -connect shard_host_4:1521/shard_database_name
     -shardspace shspace2 -deploy_as active_standby -pwd GSMUSER_password
     -region region2

    If the shard database to be added is a pluggable database (PDB), you must use the -cdb option to ADD SHARD to specify which container database (CDB) the PDB shard is in. In addition, ADD CDB must be used before the ADD SHARD command to add the CDB to the catalog. See Oracle Database Global Data Services Concepts and Administration Guide for the syntax for ADD CDB and ADD SHARD. Note that in Oracle Database 18c, only one PDB in each CDB is allowed to be a shard.
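
    For illustration only, the following is a hedged sketch of that PDB flow; the host, CDB name, service names, and passwords are placeholders, and you should confirm the exact ADD CDB account and password requirements for your release in that guide.

    GDSCTL> add cdb -connect cdb_host:1521/cdb1_service_name -pwd GSMUSER_password
    GDSCTL> add invitednode cdb_host
    GDSCTL> add shard -cdb cdb1 -connect cdb_host:1521/pdb1_service_name
     -shardspace shspace1 -deploy_as primary -pwd GSMUSER_password -region region1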

  6. Check the configuration from a shard director.
    GDSCTL> config
    Regions
    -----------------------
    region1
    region2
    
    GSMs
    -----------------------
    sharddirector1
    sharddirector2
    
    Sharded Database
    -----------------------
    udef_shpool
    
    Databases
    -----------------------
    sh1
    sh2
    sh3
    sh4
    
    Shard spaces
    -----------------------
    shspace1
    shspace2
    
    Services
    -----------------------
    
    GDSCTL pending requests
    -----------------------
    Command          Object          Status
    -------          ------          ------
    
    Global properties
    -----------------------
    Name: oradbcloud
    Master GSM: sharddirector1
    DDL sequence #: 0
    
    
    GDSCTL> config vncr
    Name                    Group ID 
    ----                    --------
    shard_host_1 
    shard_host_2
    shard_host_3
    shard_host_4
    shard_catalog_host_IP
    
    
    GDSCTL> config shard
    Name     Shard space          Status    State    Region    Availability
    ----     -----------          ------    -----    ------    ------------
    sh1      shspace1             U         none     region1   -
    sh2      shspace1             U         none     region2   -
    sh3      shspace2             U         none     region1   -
    sh4      shspace2             U         none     region2   -
  7. Run the DEPLOY command to create the shards and the replicas.

    The DEPLOY command takes some time to run, approximately 15 to 30 minutes.

    GDSCTL> deploy

    Once the primary and standby shards are built, the DEPLOY command configures Data Guard Broker with Fast-Start Failover enabled. The Fast-Start Failover observers are automatically started on the shard director in the standby region (sharddirector2 in this example).

  8. Verify that all of the shards are deployed.
    GDSCTL> config shard
    Name    Shard space           Status    State        Region    Availability
    ----    -----------           ------    -----        ------    ------------
    sh1      shspace1             Ok        Deployed     region1   ONLINE
    sh2      shspace1             Ok        Deployed     region2   READ_ONLY
    sh3      shspace2             Ok        Deployed     region1   ONLINE
    sh4      shspace2             Ok        Deployed     region2   READ_ONLY
  9. Verify that all of the shards are registered.
    GDSCTL> databases
    Database: "sh1" Registered: Y State: Ok ONS: N. Role: PRIMARY
    Instances: 1 Region: region1
       Registered instances: 
         udef_shpool%1
    Database: "sh2" Registered: Y State: Ok ONS: N. Role: PH_STNDBY
    Instances: 1 Region: region2
       Registered instances: 
         udef_shpool%11
    Database: "sh3" Registered: Y State: Ok ONS: N. Role: PRIMARY
    Instances: 1 Region: region1
       Registered instances: 
         udef_shpool%21
    Database: "sh4" Registered: Y State: Ok ONS: N. Role: PH_STNDBY
    Instances: 1 Region: region2
       Registered instances: 
         udef_shpool%31
  10. Check the configuration of a shard.
    GDSCTL> config shard -shard sh1
    Name: sh1
    Shard space: shspace1
    Status: Ok
    State: Deployed
    Region: region1
    Connection string: shard_host_1:1521/sh1:dedicated
    SCAN address:
    ONS remote port: 0
    Disk Threshold, ms: 20
    CPU Threshold, %: 75
    Version: 18.0.0.0
    Last Failed DDL: 
    DDL Error: ---
    Failed DDL id:
    Availability: ONLINE
    
    Supported services
    -----------------------
    Name                Preferred Status
    ----                --------- ------
  11. Add a global service that runs on all of the primary shards.

    The oltp_rw_srvc global service is a global data service that a client can use to connect to the sharded database. The oltp_rw_srvc service runs the OLTP transactions on the primary shards. Likewise, the oltp_ro_srvc global service is created to run read-only workload on the standby shards.

    GDSCTL> add service -service oltp_rw_srvc -role primary 
    
    GDSCTL> config service 
    
    Name          Network name                      Pool         Started Preferred all
    ----          ------------                      ----         ------- -------------
    oltp_rw_srvc  oltp_rw_srvc.cust_sdb.oradbcloud  udef_shpool  No      Yes
  12. Start the oltp_rw_srvc global service.
    GDSCTL> start service -service oltp_rw_srvc
    
    GDSCTL> status service
    Service "oltp_rw_srvc.cust_sdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "udef_shpool%1", name: "sh1", db: "sh1", region: "region1",
       status: ready.
       Instance "udef_shpool%21", name: "sh3", db: "sh3", region: "region1",
       status: ready.
  13. Add a global service for the read-only workload to run on the standby shards.
    GDSCTL> add service -service oltp_ro_srvc -role physical_standby
    
    GDSCTL> config service 
    
    Name          Network name                      Pool         Started Preferred all
    ----          ------------                      ----         ------- -------------
    oltp_rw_srvc  oltp_rw_srvc.cust_sdb.oradbcloud  udef_shpool  Yes     Yes
    oltp_ro_srvc  oltp_ro_srvc.cust_sdb.oradbcloud  udef_shpool  No      Yes
    
  14. Start the read-only service, and verify the status of the global services.
    GDSCTL> start service -service oltp_ro_srvc
    
    GDSCTL> status service
    Service "oltp_ro_srvc.cust_sdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "udef_shpool%11", name: "sh2", db: "sh2", region: "region2",
     status: ready.
       Instance "udef_shpool%31", name: "sh4", db: "sh4", region: "region2",
     status: ready.
    Service "oltp_rw_srvc.cust_sdb.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
       Instance "udef_shpool%1", name: "sh1", db: "sh1", region: "region1",
     status: ready.
       Instance "udef_shpool%21", name: "sh3", db: "sh3", region: "region1",
     status: ready.

See Also:

Oracle Database Global Data Services Concepts and Administration Guide for more information about GDSCTL command usage

8.8.2 Creating a Schema for a User-Defined SDB

Create the schema user, tablespaces, sharded tables, and duplicated tables for the SDB. Verify that the DDLs are propagated to all of the shards, and, while connected to the shards, verify the automatic Data Guard Broker configuration with Fast-Start Failover.

  1. Connect to the shard catalog database, create the application schema user, and grant privileges and roles to the user.

    In this example, the application schema user is called app_schema.

    $ sqlplus / as sysdba
    
    SQL> alter session enable shard ddl;
    SQL> create user app_schema identified by app_schema_password;
    SQL> grant all privileges to app_schema;
    SQL> grant gsmadmin_role to app_schema;
    SQL> grant select_catalog_role to app_schema;
    SQL> grant connect, resource to app_schema;
    SQL> grant dba to app_schema;
    SQL> grant execute on dbms_crypto to app_schema;
  2. Create tablespaces for the sharded tables.
    SQL> CREATE TABLESPACE c1_tsp DATAFILE SIZE 100M autoextend on next 10M maxsize 
    unlimited extent management local segment space management auto in
     shardspace shspace1;
    
    SQL> CREATE TABLESPACE c2_tsp DATAFILE SIZE 100M autoextend on next 10M maxsize 
    unlimited extent management local segment space management auto in
     shardspace shspace2;
  3. If you use LOBs in any columns, you can specify tablespaces for the LOBs.
    SQL> CREATE TABLESPACE lobts1 ... in shardspace shspace1;
    
    SQL> CREATE TABLESPACE lobts2 ... in shardspace shspace2;
  4. Create a tablespace for the duplicated tables.

    In this example the duplicated table is the Products table in the sample Customers-Orders-Products schema.

    SQL> CREATE TABLESPACE products_tsp datafile size 100m autoextend
     on next 10M maxsize unlimited extent management local uniform size 1m; 
    
  5. Create a sharded table for the root table.

    In this example, the root table is the Customers table in the sample Customers-Orders-Products schema.

    SQL> CONNECT app_schema/app_schema_password
    
    SQL> ALTER SESSION ENABLE SHARD DDL;
    
    SQL> CREATE SHARDED TABLE Customers
      (
        CustId      VARCHAR2(60) NOT NULL,
        CustProfile VARCHAR2(4000),
        Passwd      RAW(60),
        CONSTRAINT pk_customers PRIMARY KEY (CustId),
        CONSTRAINT json_customers CHECK (CustProfile IS JSON)
      ) PARTITION BY RANGE (CustId)
      ( PARTITION ck1 values less than ('m') tablespace c1_tsp,
        PARTITION ck2 values less than (MAXVALUE) tablespace c2_tsp
      );

    Note:

    If any columns in the sharded tables contain LOBs, the CREATE SHARDED TABLE statement can include the LOB tablespaces, as shown here.

    SQL> CREATE SHARDED TABLE Customers
      (
        CustId      VARCHAR2(60) NOT NULL,
        CustProfile VARCHAR2(4000),
        Passwd      RAW(60),
        image       BLOB,
        CONSTRAINT pk_customers PRIMARY KEY (CustId),
        CONSTRAINT json_customers CHECK (CustProfile IS JSON)
      ) PARTITION BY RANGE (CustId)
      ( PARTITION ck1 values less than ('m') tablespace c1_tsp
         lob(image) store as (tablespace lobts1),
        PARTITION ck2 values less than (MAXVALUE) tablespace c2_tsp
         lob(image) store as (tablespace lobts2)
      );
  6. Create a sharded table for the other tables in the table family.

    In this example, sharded tables are created for the Orders and LineItems tables in the sample Customers-Orders-Products schema.

    The Orders sharded table is created first:

    SQL> CREATE SHARDED TABLE Orders
      (
        OrderId     INTEGER NOT NULL,
        CustId      VARCHAR2(60) NOT NULL,
        OrderDate   TIMESTAMP NOT NULL,
        SumTotal    NUMBER(19,4),
        Status      CHAR(4),
        CONSTRAINT  pk_orders PRIMARY KEY (CustId, OrderId),
        CONSTRAINT  fk_orders_parent FOREIGN KEY (CustId) 
        REFERENCES Customers ON DELETE CASCADE
      ) PARTITION BY REFERENCE (fk_orders_parent);
    

    Create the sequence used for the OrderId column.

    SQL> CREATE SEQUENCE Orders_Seq;

    Create a sharded table for LineItems.

    SQL> CREATE SHARDED TABLE LineItems
      (
        OrderId     INTEGER NOT NULL,
        CustId      VARCHAR2(60) NOT NULL,
        ProductId   INTEGER NOT NULL,
        Price       NUMBER(19,4),
        Qty         NUMBER,
        CONSTRAINT  pk_items PRIMARY KEY (CustId, OrderId, ProductId),
        CONSTRAINT  fk_items_parent FOREIGN KEY (CustId, OrderId)
        REFERENCES Orders ON DELETE CASCADE
      ) PARTITION BY REFERENCE (fk_items_parent);
  7. Create any required duplicated tables.

    In this example, the Products table is a duplicated object.

    SQL> CREATE DUPLICATED TABLE Products
      (
        ProductId  INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
        Name       VARCHAR2(128),
        DescrUri   VARCHAR2(128),
        LastPrice  NUMBER(19,4)
      ) TABLESPACE products_tsp;
  8. From the shard director host, verify that there were no failures during the creation of the tablespaces.
    GDSCTL> show ddl
    id    DDL Text                                  Failed shards
    --    --------                                  -------------
    
    3       grant create table, create procedure,...               
    4       grant unlimited tablespace to app_schema               
    5       grant select_catalog_role to app_schema                
    6       create tablespace c1_tsp DATAFILE SIZ...               
    7       Create tablespace c2_tsp DATAFILE SIZ...               
    8       CREATE SHARDED TABLE Customers (   Cu...               
    9       CREATE SHARDED TABLE Orders (   Order...               
    10      CREATE SHARDED TABLE LineItems (   Or...               
    11      create tablespace products_tsp datafi... 
    12      CREATE MATERIALIZED VIEW "APP_SCHEMA"...      

    Note:

    The show ddl command output might be truncated. You can run SELECT ddl_text FROM gsmadmin_internal.ddl_requests on the catalog to see the full text of the statements.
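
    For example, a minimal form of that query, run while connected to the shard catalog (the SET LONG value is only illustrative):

    SQL> set long 10000
    SQL> SELECT ddl_text FROM gsmadmin_internal.ddl_requests;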
  9. Verify that there were no DDL errors on each of the shards.

    Run the config shard and config chunks commands for each shard in your configuration.

    GDSCTL> config shard -shard sh1
    
    Name: sh1
    Shard space: shspace1
    Status: Ok
    State: Deployed
    Region: region1
    Connection string: shard_host_1:1521/sh1:dedicated
    SCAN address: 
    ONS remote port: 0
    Disk Threshold, ms: 20
    CPU Threshold, %: 75
    Version: 18.0.0.0
    Last Failed DDL: 
    DDL Error: ---
    Failed DDL id: 
    Availability: ONLINE
    Rack:
    
    Supported services
    ------------------------
    Name                                          Preferred Status    
    ----                                          --------- ------    
    oltp_ro_srvc                                  Yes       Enabled   
    oltp_rw_srvc                                  Yes       Enabled  
    
    GDSCTL> config chunks
    Chunks
    ------------------------
    Database                      From      To        
    --------                      ----      --        
    sh1                           1         1         
    sh2                           1         1         
    sh3                           2         2        
    sh4                           2         2 
  10. Verify that the tablespaces you created for the sharded table family and the tablespaces you created for the duplicated tables are created on all of the shards.

    Because this is a user-defined SDB, there is no tablespace set; you created the tablespaces for each shardspace explicitly in the earlier steps.

    The following example, run on shard sh1 in shardspace shspace1, shows the C1_TSP tablespace created for the sharded table family and the PRODUCTS_TSP tablespace created for the duplicated Products table.

    $ sqlplus / as sysdba
    
    SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB
     from sys.dba_data_files
     order by tablespace_name;
    
    TABLESPACE_NAME 		    MB
    ----------------------- ----------
    C1_TSP                     100
    PRODUCTS_TSP                    10
    SYSAUX                    722.1875
    SYSEXT                          39
    SYSTEM                  782.203125
    SYS_SHARD_TS                   100
    UD1                            470
    
    7 rows selected.
    

    Repeat this step on all of the shards in your configuration.

  11. Verify that the chunks and chunk tablespaces were created on all of the shards.
    SQL> set linesize 140
    SQL> column table_name format a20
    SQL> column tablespace_name format a20
    SQL> column partition_name format a20
    SQL> show parameter db_unique_name
    
    NAME             TYPE        VALUE
    ---------------- ----------- ------------------------------
    db_unique_name   string      sh1
    
    SQL> select table_name, partition_name, tablespace_name
     from dba_tab_partitions
     where tablespace_name like 'C%TSP'
     order by tablespace_name;
    
    
    TABLE_NAME       PARTITION_NAME   TABLESPACE_NAME
    ---------------- ---------------- --------------------
    CUSTOMERS            CK1                  C1_TSP
    ORDERS               CK1                  C1_TSP
    LINEITEMS            CK1                  C1_TSP
    

    Repeat this step on all of the shards in your configuration.

  12. Verify that the sharded and duplicated tables were created.

    Log in as the application schema user on the shard catalog database and each of the shards.

    The following example shows querying the tables on a database shard as the app_schema user.

    $ sqlplus app_schema/app_schema_password
    Connected.
    
    SQL> select table_name from user_tables;
    
    TABLE_NAME
    -----------------------------------------------------------------------
    CUSTOMERS
    ORDERS
    LINEITEMS
    PRODUCTS
    USLOG$_PRODUCTS
    
  13. Verify that the Data Guard Broker automatic Fast-Start Failover configuration was done.
    $ ssh os_username@shard_host_1
    $ dgmgrl
    
    DGMGRL> connect sys/password
    Connected to "sh1"
    Connected as SYSDG.
    DGMGRL> show configuration
    
    Configuration - sh1
    
      Protection Mode: MaxPerformance
      Members:
      sh1 - Primary database
        sh2 - (*) Physical standby database 
    
    Fast-Start Failover: ENABLED
    
    Configuration Status:
    SUCCESS   (status updated 15 seconds ago)
    
    DGMGRL> show database sh1
    
    Database - sh1
    
      Role:               PRIMARY
      Intended State:     TRANSPORT-ON
      Instance(s):
        sh1
    
    Database Status:
    SUCCESS
    
    DGMGRL> show database sh2
    
    Database - sh2
    
      Role:               PHYSICAL STANDBY
      Intended State:     APPLY-ON
      Transport Lag:      0 seconds (computed 0 seconds ago)
      Apply Lag:          0 seconds (computed 0 seconds ago)
      Average Apply Rate: 2.00 KByte/s
      Real Time Query:    ON
      Instance(s):
        sh2
    
    Database Status:
    SUCCESS
    
    DGMGRL> show fast_start failover
    
    Fast-Start Failover: ENABLED
    
      Threshold:          30 seconds
      Target:             sh2
      Observer:           shard_director_host
      Lag Limit:          30 seconds
      Shutdown Primary:   TRUE
      Auto-reinstate:     TRUE
      Observer Reconnect: (none)
      Observer Override:  FALSE
    
    Configurable Failover Conditions
      Health Conditions:
        Corrupted Controlfile          YES
        Corrupted Dictionary           YES
        Inaccessible Logfile            NO
        Stuck Archiver                  NO
        Datafile Write Errors          YES
    
      Oracle Error Conditions:
        (none)
  14. Locate the Fast-Start Failover observers.

    Connect to the shard director host on which the observers were started and verify that the observer processes are running:

    $ ssh oracle@shard6
    
    $ ps -ef |grep dgmgrl
    oracle    8210  8089  0 22:18 pts/4    00:00:00 grep dgmgrl
    oracle   20189     1  0 02:57 ?        00:02:40 dgmgrl -delete_script
     @/u01/app/oracle/product/18.0.0/gsmhome_1/network/admin/gsm_observer_1.cfg
    oracle   20193     1  0 02:57 ?        00:02:43 dgmgrl -delete_script
     @/u01/app/oracle/product/18.0.0/gsmhome_1/network/admin/gsm_observer_2.cfg

See Also:

Oracle Database Global Data Services Concepts and Administration Guide for information about GDSCTL command usage

8.9 Creating and Deploying a Composite SDB

To deploy a composite SDB you must install the required Oracle Sharding software components, configure the objects for a composite SDB, and create the schema.

The composite sharding method allows you to create multiple shardspaces for different subsets of data in a table partitioned by consistent hash. A shardspace is a set of shards that store data corresponding to a range or list of key values.

The following topics describe the tasks for deploying a composite SDB.

8.9.1 Deploying a Composite SDB

To deploy a composite SDB you create shardgroups and shards, execute the DEPLOY command, and create role-based global services.

The examples used in this deployment procedure are based on a global distribution scenario where separate shardspaces and shardgroups are created for America and Europe.
  1. Connect to a shard director host, and verify the environment variables.
    $ ssh os_user@shard_director_home
    $ env |grep ORA
    ORACLE_BASE=/u01/app/oracle
    ORACLE_HOME=/u01/app/oracle/product/18.0.0/gsmhome_1
  2. Set the global service manager for the current session, and specify the credentials to administer it.
    $ gdsctl
    GDSCTL> set gsm -gsm sharddirector1
    GDSCTL> connect mysdbadmin/mysdbadmin_password
  3. Add shardspaces and shardgroups for each customized grouping of shards your business case requires.

    In this example the shardspaces and shardgroups are created for the America and Europe customers. You can choose your own names.

    GDSCTL> add shardspace -shardspace cust_america
    GDSCTL> add shardgroup -shardspace cust_america -shardgroup america_shgrp1
     -deploy_as primary -region region1
    
    GDSCTL> add shardspace -shardspace cust_europe
    GDSCTL> add shardgroup -shardspace cust_europe -shardgroup europe_shgrp1
     -deploy_as primary -region region2

    Note:

    For production deployments, additional shardgroups must be created for high availability using the add shardgroup command.

  4. Verify the shardspace and shardgroup configurations.
    GDSCTL> config shardspace
    SHARDSPACE                    Chunks                        
    ----------                    ------                        
    cust_america                  12                            
    cust_europe                   12                            
    shardspaceora                 12   
     
    GDSCTL>config shardgroup
    Shard Group         Chunks Region              SHARDSPACE          
    -----------         ------ ------              ----------          
    america_shgrp1      12     region1             cust_america        
    europe_shgrp1       12     region2             cust_europe         
    
  5. Verify the sharded database configuration.
    GDSCTL> config
    
    Regions
    ------------------------
    region1 
    region2   
    
    GSMs
    ------------------------
    sharddirector1    
    sharddirector2 
    
    Sharded Database
    ------------------------
    cust_sdb_comp 
    
    Databases
    ------------------------
    
    Shard Groups
    ------------------------
    america_shgrp1  
    europe_shgrp1 
    
    Shard spaces
    ------------------------
    cust_america 
    cust_europe 
    shardspaceora
    
    Services
    ------------------------
    
    GDSCTL pending requests
    ------------------------
    Command              Object               Status
    -------              ------               ------
    
    Global properties
    ------------------------
    Name: oradbcloud
    Master GSM: sharddirector1
    DDL sequence #: 0
    
  6. Add each shard’s host address to the valid node checking for registration (VNCR) list in the catalog, then create the shard in either the primary or standby shardgroup, as shown in the following example.

    Note:

    The valid node checking for registration (VNCR) feature provides the ability to configure and dynamically update a set of IP addresses, host names, or subnets from which registration requests are allowed by the shard directors. Database instance registration with a shard director succeeds only when the request originates from a valid node. By default, the shard management tier (based on the Oracle Global Data Services framework) automatically adds a VNCR entry for the host on which a remote database is running each time create shard or add shard is run. The automation (called auto-VNCR) finds the public IP address of the target host and automatically adds a VNCR entry for that IP address. If the host has multiple public IP addresses, then the address on which the database registers may not be the same as the address that was added using auto-VNCR and, as a result, registration may be rejected. If the target database host has multiple public IP addresses, it is advisable that you configure VNCR manually for this host using the add invitednode or add invitedsubnet commands in GDSCTL.

    If there are multiple network interfaces on the target host (/sbin/ifconfig returns more than one public interface), use add invitednode to be safe (after determining which interface will be used to route packets).

    If there is any doubt about registration, run config vncr and use add invitednode as necessary. There is no harm in doing this: if the node is already added, auto-VNCR ignores it, and if you try to add it after auto-VNCR has already added it, you get a warning stating that it already exists.

    The example shows how to create four shards, two of which are in the America shardgroup and two in the Europe shardgroup. The os_credential is the operating system credential you created on each host.

    While creating the shards, you can also set the SYS password in the CREATE SHARD command using -sys_password, as shown in the following example; the password is set on the shards, after they are created, when DEPLOY is run. There are other optional parameters for CREATE SHARD that let you customize the database parameters, storage and file locations, listener port numbers, and so on; they are documented in the Oracle Database Global Data Services Concepts and Administration Guide appendices.

    GDSCTL> add invitednode shard_host_1 
    GDSCTL> create shard -shardgroup america_shgrp1 -destination shard_host_1
     -credential os_credential -sys_password
    
    GDSCTL> add invitednode shard_host_2
    GDSCTL> create shard -shardgroup america_shgrp1 -destination shard_host_2
     -credential os_credential -sys_password
    
    GDSCTL> add invitednode shard_host_3 
    GDSCTL> create shard -shardgroup europe_shgrp1 -destination shard_host_3
     -credential os_credential -sys_password
    
    GDSCTL> add invitednode shard_host_4
    GDSCTL> create shard -shardgroup europe_shgrp1 -destination shard_host_4
     -credential os_credential -sys_password

    If you are using the ADD SHARD method described in Introduction to Sharded Database Deployment, use the following command instead of the CREATE SHARD commands in the example above. If the shard database to be added is a pluggable database (PDB), you must use the -cdb option to ADD SHARD to specify which container database (CDB) the PDB shard is in. In addition, ADD CDB must be used before the ADD SHARD command to add the CDB to the catalog. See Oracle Database Global Data Services Concepts and Administration Guide for the syntax for ADD CDB and ADD SHARD. Note that in Oracle Database 18c, only one PDB in each CDB is allowed to be a shard.

    GDSCTL> add shard -shardgroup america_shgrp1
     -connect shard_host:TNS_listener_port/shard_database_name
     -pwd GSMUSER_password
  7. Check the configuration from a shard director.

    Note that the shard names sh1, sh2, sh3, and sh4 are system-generated.

    GDSCTL> config shard
    Name           Shard Group         Status    State       Region    Availability
    ----           -----------         ------    -----       ------    ------------
    sh1            america_shgrp1      U         none        region1   -         
    sh2            america_shgrp1      U         none        region1   -         
    sh3            europe_shgrp1       U         none        region2   -         
    sh4            europe_shgrp1       U         none        region2   -         
    
    GDSCTL> config vncr
    Name    Group ID 
    ----    --------
    shard_host_1
    shard_host_2
    shard_host_3
    shard_host_4
    shard_catalog_host_IP
    
  8. Run the DEPLOY command to create the shards.
    GDSCTL> deploy

    The DEPLOY command takes some time to run, approximately 15 to 30 minutes. The DEPLOY command creates the shards using DBCA.

  9. Verify that all of the shards are deployed.
    GDSCTL> config shard
    Name        Shard Group         Status    State       Region    Availability 
    ----        -----------         ------    -----       ------    ------------ 
    sh1         america_shgrp1      Ok        Deployed    region1   ONLINE  
    sh2         america_shgrp1      Ok        Deployed    region1   ONLINE 
    sh3         europe_shgrp1       Ok        Deployed    region2   ONLINE 
    sh4         europe_shgrp1       Ok        Deployed    region2   ONLINE
  10. Verify that all of the shards are registered.
    GDSCTL> databases
    Database: "sh1" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1
     Region: region1
       Registered instances:
         cust_sdb_comp%1
    Database: "sh2" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1
     Region: region1
       Registered instances:
         cust_sdb_comp%11
    Database: "sh3" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1
     Region: region2
       Registered instances:
         cust_sdb_comp%21
    Database: "sh4" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1
     Region: region2
       Registered instances:
         cust_sdb_comp%31
    
  11. Check the configuration of a shard.
    GDSCTL> config shard -shard sh1
    
    Name: sh1
    Shard Group: america_shgrp1
    Status: Ok
    State: Deployed
    Region: region1
    Connection string: shard1:1521/sh1:dedicated
    SCAN address: 
    ONS remote port: 0
    Disk Threshold, ms: 20
    CPU Threshold, %: 75
    Version: 18.0.0.0
    Last Failed DDL: 
    DDL Error: ---
    Failed DDL id: 
    Availability: ONLINE
    
    
    Supported services
    ------------------------
    Name                                                            Preferred Status    
    ----                                                            --------- ------
    
  12. Add a global service that runs on all of the primary shards.

    The oltp_rw_srvc global service is a global data service that clients use to connect to the sharded database; the shard director routes each connection to the appropriate shard. The oltp_rw_srvc service runs the OLTP transactions on the primary shards.

    GDSCTL> add service -service oltp_rw_srvc
    GDSCTL> config service 
    
    Name           Network name                  Pool           Started Preferred all 
    ----           ------------                  ----           ------- ------------- 
    oltp_rw_srvc   oltp_rw_srvc.cust_sdb_comp.or cust_sdb_comp  No      Yes           
                   adbcloud   
    
  13. Start the oltp_rw_srvc global service.
    GDSCTL> start service -service oltp_rw_srvc
    
    GDSCTL> status service
    Service "oltp_rw_srvc.cust_sdb_comp.oradbcloud" has 4 instance(s). 
     Affinity: ANYWHERE
       Instance "cust_sdb_comp%1", name: "sh1", db: "sh1", region: "region1",
     status: ready.
       Instance "cust_sdb_comp%11", name: "sh2", db: "sh2", region: "region1",
     status: ready.
       Instance "cust_sdb_comp%21", name: "sh3", db: "sh3", region: "region2",
     status: ready.
       Instance "cust_sdb_comp%31", name: "sh4", db: "sh4", region: "region2",
     status: ready. 
    

8.9.2 Creating a Schema for a Composite SDB

Create the schema user, tablespace set, sharded tables, and duplicated tables for the SDB. Verify that the DDLs are propagated to all of the shards, and, while connected to the shards, verify the automatic Data Guard Broker configuration with Fast-Start Failover.

  1. Connect to the shard catalog host, and set the ORACLE_SID to the shard catalog name.
  2. Connect to the shard catalog database, create the application schema user, and grant privileges and roles to the user.

    In this example, the application schema user is called app_schema.

    $ sqlplus / as sysdba
    
    SQL> connect / as sysdba
    SQL> alter session enable shard ddl;
    SQL> create user app_schema identified by app_schema_password;
    SQL> grant connect, resource, alter session to app_schema;
    SQL> grant execute on dbms_crypto to app_schema;
    SQL> grant create table, create procedure, create tablespace,
     create materialized view to app_schema;
    SQL> grant unlimited tablespace to app_schema;
    SQL> grant select_catalog_role to app_schema;
    SQL> grant all privileges to app_schema;
    SQL> grant gsmadmin_role to app_schema;
    SQL> grant dba to app_schema;
    
  3. Create tablespace sets for the sharded tables.
    SQL> CREATE TABLESPACE SET  
      TSP_SET_1 in shardspace cust_america using template
      (datafile size 100m autoextend on next 10M maxsize
       unlimited extent management
       local segment space management auto );
    
    SQL> CREATE TABLESPACE SET
      TSP_SET_2 in shardspace cust_europe using template
      (datafile size 100m autoextend on next 10M maxsize
       unlimited extent management
       local segment space management auto );

    Specifying the shardspace is optional when creating the tablespace set. If the shardspace is not specified in the command, the default shardspace is used.
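
    For example, a set created without naming a shardspace is placed in the default shardspace (shardspaceora in this deployment); the set name below is only illustrative.

    SQL> CREATE TABLESPACE SET
      TSP_SET_3 using template
      (datafile size 100m autoextend on next 10M maxsize
       unlimited extent management
       local segment space management auto );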

  4. If you use LOBs in any columns, you can specify tablespace sets for the LOBs.
    SQL> CREATE TABLESPACE SET LOBTS1 in shardspace cust_america ... ;
    
    SQL> CREATE TABLESPACE SET LOBTS2 in shardspace cust_europe ... ;

    Note:

    Tablespace sets for LOBs cannot be specified at the subpartition level in composite sharding.

  5. Create a tablespace for the duplicated tables.

    In this example the duplicated table is the Products table in the sample Customers-Orders-Products schema.

    CREATE TABLESPACE products_tsp datafile size 100m autoextend on next 10M
     maxsize unlimited extent management local uniform size 1m;
  6. Create a sharded table for the root table.

    In this example, the root table is the Customers table in the sample Customers-Orders-Products schema.

    connect app_schema/app_schema_password
    alter session enable shard ddl;
    
    CREATE SHARDED TABLE Customers
    (
      CustId      VARCHAR2(60) NOT NULL,
      FirstName   VARCHAR2(60),
      LastName    VARCHAR2(60),
      Class       VARCHAR2(10),
      Geo         VARCHAR2(8),
      CustProfile VARCHAR2(4000),
      Passwd      RAW(60),
      CONSTRAINT pk_customers PRIMARY KEY (CustId),
      CONSTRAINT json_customers CHECK (CustProfile IS JSON)
    ) partitionset by list(GEO)
    partition by consistent hash(CustId)
    partitions auto
    (partitionset america values ('AMERICA') tablespace set tsp_set_1,
    partitionset europe values ('EUROPE') tablespace set tsp_set_2
    );
    

    Note:

    If any columns in the sharded tables contain LOBs, the CREATE SHARDED TABLE statement can include the LOB tablespace set, as shown here.

    CREATE SHARDED TABLE Customers
    (
      CustId      VARCHAR2(60)  NOT NULL,
      FirstName   VARCHAR2(60),
      LastName    VARCHAR2(60),
      Class       VARCHAR2(10),
      Geo         VARCHAR2(8)   NOT NULL,
      CustProfile VARCHAR2(4000),
      Passwd      RAW(60),
      image       BLOB,
      CONSTRAINT pk_customers PRIMARY KEY (CustId),
      CONSTRAINT json_customers CHECK (CustProfile IS JSON)
    ) partitionset by list(GEO)
    partition by consistent hash(CustId)
    partitions auto
    (partitionset america values ('AMERICA') tablespace set tsp_set_1
     lob(image) store as (tablespace set lobts1),
    partitionset europe values ('EUROPE') tablespace set tsp_set_2
     lob(image) store as (tablespace set lobts2));
    
  7. Create a sharded table for the other tables in the table family.

    In this example, sharded tables are created for the Orders and LineItems tables in the sample Customers-Orders-Products schema.

    Create the sequence used for the OrderId column.

    CREATE SEQUENCE Orders_Seq;
    

    The Orders sharded table is created first:

    CREATE SHARDED TABLE Orders
    (
      OrderId     INTEGER NOT NULL,
      CustId      VARCHAR2(60) NOT NULL,
      OrderDate   TIMESTAMP NOT NULL,
      SumTotal    NUMBER(19,4),
      Status      CHAR(4),
      constraint  pk_orders primary key (CustId, OrderId),
      constraint  fk_orders_parent foreign key (CustId) 
        references Customers on delete cascade
    ) partition by reference (fk_orders_parent);
    

    Create a sharded table for LineItems.

    CREATE SHARDED TABLE LineItems
    (
      OrderId     INTEGER NOT NULL,
      CustId      VARCHAR2(60) NOT NULL,
      ProductId   INTEGER NOT NULL,
      Price       NUMBER(19,4),
      Qty         NUMBER,
      constraint  pk_items primary key (CustId, OrderId, ProductId),
      constraint  fk_items_parent foreign key (CustId, OrderId)
        references Orders on delete cascade
    ) partition by reference (fk_items_parent);
  8. Create any required duplicated tables.

    In this example, the Products table is a duplicated object.

    CREATE DUPLICATED TABLE Products
    (
      ProductId  INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
      Name       VARCHAR2(128),
      DescrUri   VARCHAR2(128),
      LastPrice  NUMBER(19,4)
    ) tablespace products_tsp;
  9. From the shard director host, verify that there were no failures during the creation of the tablespaces.
    GDSCTL> show ddl
    id      DDL Text                                 Failed shards 
    --      --------                                 ------------- 
    11      CREATE TABLESPACE SET  TSP_SET_2 in s...               
    12      CREATE TABLESPACE products_tsp datafi...               
    13      CREATE SHARDED TABLE Customers (   Cu...               
    14      CREATE SEQUENCE Orders_Seq;                            
    15      CREATE SHARDED TABLE Orders (   Order...               
    16      CREATE SHARDED TABLE LineItems (   Or...               
    17      create database link "PRODUCTSDBLINK@...               
    18      CREATE MATERIALIZED VIEW "PRODUCTS"  ...               
    19      CREATE OR REPLACE FUNCTION PasswCreat...               
    20      CREATE OR REPLACE FUNCTION PasswCheck...     
    
  10. Verify that there were no DDL errors on each of the shards.

    Run the config shard and config chunks commands for each shard in your configuration.

    GDSCTL> config shard -shard sh1
    
    Name: sh1
    Shard Group: america_shgrp1
    Status: Ok
    State: Deployed
    Region: region1
    Connection string: shard1:1521/sh1:dedicated
    SCAN address: 
    ONS remote port: 0
    Disk Threshold, ms: 20
    CPU Threshold, %: 75
    Version: 18.0.0.0
    Last Failed DDL: 
    DDL Error: ---
    Failed DDL id: 
    Availability: ONLINE
    
    
    Supported services
    ------------------------
    Name                                                            Preferred Status    
    ----                                                            --------- ------    
    oltp_rw_srvc                                                    Yes       Enabled  
    
    GDSCTL> config chunks
    Chunks
    ------------------------
    Database                      From      To        
    --------                      ----      --        
    sh1                           1         6         
    sh2                           7         12        
    sh3                           1         6         
    sh4                           7         12
    
  11. Verify that the tablespaces of the tablespace set you created for the sharded table family and the tablespaces you created for the duplicated tables are created on all of the shards.

    The number of tablespaces in the tablespace set is based on the number of chunks you specified in the create shardcatalog command.

    The following example, run on shard_host_1, shows the tablespaces for the first 6 of the 12 chunks that were specified in the shard catalog creation example, and the duplicated Products tablespace.

    $ sqlplus / as sysdba
    
    SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB
     from sys.dba_data_files
     order by tablespace_name;
    
    TABLESPACE_NAME 		       MB
    ------------------------------ ----------
    C001TSP_SET_1			      100
    C002TSP_SET_1			      100
    C003TSP_SET_1			      100
    C004TSP_SET_1			      100
    C005TSP_SET_1			      100
    C006TSP_SET_1			      100
    PRODUCTS_TSP			      100
    SYSAUX				      650
    SYSTEM				      890
    SYS_SHARD_TS			      100
    TSP_SET_1			      100
    
    TABLESPACE_NAME 		       MB
    ------------------------------ ----------
    TSP_SET_2			      100
    UNDOTBS1			      110
    USERS					5
    
    14 rows selected.
    

    Repeat this step on all of the shards in your configuration.

  12. Verify that the chunks and chunk tablespaces were created on all of the shards.
    SQL> set linesize 140
    SQL> column table_name format a20
    SQL> column tablespace_name format a20
    SQL> column partition_name format a20
    SQL> show parameter db_unique_name
    NAME				     TYPE	 VALUE
    ------------------------------------ ----------- ------------------------------
    db_unique_name			     string	 sh2
    
    
    SQL> select table_name, partition_name, tablespace_name
     from dba_tab_partitions
     where tablespace_name like 'C%TSP_SET_1'
     order by tablespace_name;
    
    TABLE_NAME	     PARTITION_NAME	  TABLESPACE_NAME
    -------------------- -------------------- --------------------
    LINEITEMS	     CUSTOMERS_P7	  C007TSP_SET_1
    CUSTOMERS	     CUSTOMERS_P7	  C007TSP_SET_1
    ORDERS		     CUSTOMERS_P7	  C007TSP_SET_1
    CUSTOMERS	     CUSTOMERS_P8	  C008TSP_SET_1
    LINEITEMS	     CUSTOMERS_P8	  C008TSP_SET_1
    ORDERS		     CUSTOMERS_P8	  C008TSP_SET_1
    LINEITEMS	     CUSTOMERS_P9	  C009TSP_SET_1
    CUSTOMERS	     CUSTOMERS_P9	  C009TSP_SET_1
    ORDERS		     CUSTOMERS_P9	  C009TSP_SET_1
    CUSTOMERS	     CUSTOMERS_P10	  C00ATSP_SET_1
    LINEITEMS	     CUSTOMERS_P10	  C00ATSP_SET_1
    
    TABLE_NAME	     PARTITION_NAME	  TABLESPACE_NAME
    -------------------- -------------------- --------------------
    ORDERS		     CUSTOMERS_P10	  C00ATSP_SET_1
    CUSTOMERS	     CUSTOMERS_P11	  C00BTSP_SET_1
    LINEITEMS	     CUSTOMERS_P11	  C00BTSP_SET_1
    ORDERS		     CUSTOMERS_P11	  C00BTSP_SET_1
    CUSTOMERS	     CUSTOMERS_P12	  C00CTSP_SET_1
    LINEITEMS	     CUSTOMERS_P12	  C00CTSP_SET_1
    ORDERS		     CUSTOMERS_P12	  C00CTSP_SET_1
    
    18 rows selected.
    

    Repeat this step on all of the shards in your configuration.

  13. Connect to the shard catalog database and verify that the chunks are uniformly distributed.
    $ sqlplus / as sysdba
    
    SQL> set echo off
    SQL> select a.name Shard,  count( b.chunk_number) Number_of_Chunks
     from gsmadmin_internal.database a, gsmadmin_internal.chunk_loc b
     where a.database_num=b.database_num  group by a.name;
    
    SHARD			       NUMBER_OF_CHUNKS
    ------------------------------ ----------------
    sh1					      6
    sh2					      6
    sh3					      6
    sh4					      6
    
  14. Verify that the sharded and duplicated tables were created.

    Log in as the application schema user on the shard catalog database and each of the shards.

    The following example shows querying the tables on a database shard as the app_schema user.

    $ sqlplus app_schema/app_schema_password
    Connected.
    SQL> select table_name from user_tables;
    
    TABLE_NAME
    -----------------------------------------------------------------------
    CUSTOMERS
    ORDERS
    LINEITEMS
    PRODUCTS
    
    4 rows selected.

8.10 Using Transparent Data Encryption with Oracle Sharding

Oracle Sharding supports Transparent Data Encryption (TDE), but in order to successfully move chunks in a sharded database with TDE enabled, all of the shards must share and use the same encryption key for the encrypted tablespaces.

A sharded database consists of multiple independent databases and a catalog database. For TDE to work properly, especially when data is moved between shards, certain restrictions apply. In order for chunk movement between shards to work when data is encrypted, you must ensure that all of the shards use the same encryption key.

There are two ways to accomplish this:

  • Create and export an encryption key from the shard catalog, and then import and activate the key on all of the shards individually.

  • Store the wallet in a shared location and have the shard catalog and all of the shards use the same wallet.
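
For the second approach, a hedged illustration: place the wallet files in a directory that is visible to the shard catalog host and to every shard host (for example, on NFS or Oracle ACFS), and point each database at that same location. With the classic sqlnet.ora configuration referenced later in this chapter, that might look like the following on every host; the path is an assumption.

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)
    (METHOD_DATA=(DIRECTORY=/nfs/shared_wallet)))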

The following TDE statements are automatically propagated to shards when executed on the shard catalog with shard DDL enabled:

  • alter system set encryption wallet open/close identified by password

  • alter system set encryption key

  • administer key management set keystore [open|close] identified by password

  • administer key management set key identified by password

  • administer key management use key identified by password

  • administer key management create key store identified by password

Limitations

The following limitations apply to using TDE with Oracle Sharding.

  • For MOVE CHUNK to work, all shard database hosts must be on the same platform.

  • MOVE CHUNK cannot use compression during data transfer, which may impact performance.

  • Only encryption on the tablespace level is supported. Encryption on specific columns is not supported.

See Also:

Oracle Database Advanced Security Guide for more information about TDE

8.10.1 Creating a Single Encryption Key on All Shards

To propagate a single encryption key to all of the databases in the sharded database configuration, you must create a master encryption key on the shard catalog, export the shard catalog wallet, import that wallet on each of the shards, and then activate the key on all of the shards.

Note:

This procedure assumes that the keystore password and wallet directory path are the same for the shard catalog and all of the shards. If you require different passwords or directory paths, issue all of the commands individually on each shard and on the shard catalog, with shard DDL disabled, using each database’s own password and path.

These steps should be done before any data encryption is performed.

  1. Create an encryption key on the shard catalog.

    With shard DDL enabled, issue the following statements.

    ADMINISTER KEY MANAGEMENT CREATE KEYSTORE wallet_directory_path IDENTIFIED BY
     keystore_password;
    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY keystore_password;

    The keystore_password should be the same if you prefer to issue wallet open and close commands centrally from the catalog.

    Note:

    The wallet directory path should match the ENCRYPTION_WALLET_LOCATION in the corresponding sqlnet.ora.

    The ENCRYPTION_WALLET_LOCATION parameter is deprecated. You are advised to use the WALLET_ROOT static initialization parameter and the TDE_CONFIGURATION dynamic initialization parameter instead.
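
    For illustration, a hedged sketch of the newer style; the path is an assumption, and because WALLET_ROOT is static the database must be restarted for it to take effect.

    ALTER SYSTEM SET WALLET_ROOT='/u01/app/oracle/admin/shcat/wallet' SCOPE=SPFILE;
    -- after restarting the database
    ALTER SYSTEM SET TDE_CONFIGURATION='KEYSTORE_CONFIGURATION=FILE';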

    With shard DDL disabled, issue the following statement.

    ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY keystore_password WITH BACKUP;

    An encryption key is created and activated in the shard catalog database’s wallet.

    If you issue this statement with shard DDL enabled, it also creates encryption keys in each of the shards’ wallets, and these keys are different from the key in the catalog. Chunk movement between shards does not work if the shards use different encryption keys.

  2. Get the master key ID from the shard catalog keystore.
    SELECT KEY_ID  FROM V$ENCRYPTION_KEYS 
    WHERE ACTIVATION_TIME =
     (SELECT MAX(ACTIVATION_TIME) FROM V$ENCRYPTION_KEYS
      WHERE ACTIVATING_DBID = (SELECT DBID FROM V$DATABASE));
  3. With shard DDL disabled, export the catalog wallet containing the encryption key.
    ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET secret_phrase TO
     wallet_export_file IDENTIFIED BY keystore_password;
  4. Physically copy the wallet file to each of the shard hosts, into their corresponding wallet export file location, or put the wallet file on a shared disk to which all of the shards have access.
  5. With shard DDL disabled, log on to each shard and import the wallet containing the key.
    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY keystore_password;
    ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET secret_phrase FROM
     wallet_export_file IDENTIFIED BY keystore_password WITH BACKUP;
  6. Restart the shard databases.
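    For example, on a single-instance shard this might be done as follows (a minimal sketch; shards managed by Oracle Restart or Oracle RAC would be restarted with srvctl instead):

    $ sqlplus / as sysdba
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP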
  7. Activate the key on all of the shards.

    On the shard catalog, with shard DDL enabled, issue the following statements.

    ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY keystore_password;
    ADMINISTER KEY MANAGEMENT USE KEY master_key_id IDENTIFIED BY keystore_password
     WITH BACKUP;

All of the shards and the shard catalog database now have the same encryption key activated and ready to use for data encryption. On the shard catalog, you can issue TDE DDLs (with shard DDL enabled) such as:

  • Create encrypted tablespaces and tablespace sets.

  • Create sharded tables using encrypted tablespaces.

  • Create sharded tables containing encrypted columns (with limitations).
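
For illustration only, a hedged sketch of the first two items, issued on the shard catalog with shard DDL enabled; the object names and sizes are assumptions, and the ENCRYPTION clause shown is the standard tablespace encryption syntax, which should apply equally within a tablespace set template.

ALTER SESSION ENABLE SHARD DDL;

-- Encrypted tablespace set (name and sizes are illustrative)
CREATE TABLESPACE SET tsp_set_enc USING TEMPLATE
 (DATAFILE SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT));

-- Sharded table stored in the encrypted tablespace set (hypothetical table)
CREATE SHARDED TABLE Accounts
(
  AcctId   VARCHAR2(60) NOT NULL,
  Balance  NUMBER(19,4),
  CONSTRAINT pk_accounts PRIMARY KEY (AcctId)
) PARTITION BY CONSISTENT HASH (AcctId)
  PARTITIONS AUTO TABLESPACE SET tsp_set_enc;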

Validate that the key IDs on all of the shards match the ID on the shard catalog.

SELECT KEY_ID  FROM V$ENCRYPTION_KEYS 
WHERE ACTIVATION_TIME =
 (SELECT MAX(ACTIVATION_TIME) FROM V$ENCRYPTION_KEYS
  WHERE ACTIVATING_DBID = (SELECT DBID FROM V$DATABASE));