Chapter 21 MySQL NDB Cluster 7.5

Table of Contents

21.1 NDB Cluster Overview
21.1.1 NDB Cluster Core Concepts
21.1.2 NDB Cluster Nodes, Node Groups, Replicas, and Partitions
21.1.3 NDB Cluster Hardware, Software, and Networking Requirements
21.1.4 What is New in MySQL NDB Cluster 7.5
21.1.5 MySQL Server Using InnoDB Compared with NDB Cluster
21.1.6 Known Limitations of NDB Cluster
21.2 NDB Cluster Installation
21.2.1 The NDB Cluster Auto-Installer
21.2.2 Installation of NDB Cluster on Linux
21.2.3 Installing NDB Cluster on Windows
21.2.4 Initial Configuration of NDB Cluster
21.2.5 Initial Startup of NDB Cluster
21.2.6 NDB Cluster Example with Tables and Data
21.2.7 Safe Shutdown and Restart of NDB Cluster
21.2.8 Upgrading and Downgrading NDB Cluster
21.3 Configuration of NDB Cluster
21.3.1 Quick Test Setup of NDB Cluster
21.3.2 Overview of NDB Cluster Configuration Parameters, Options, and Variables
21.3.3 NDB Cluster Configuration Files
21.3.4 Using High-Speed Interconnects with NDB Cluster
21.4 NDB Cluster Programs
21.4.1 ndbd — The NDB Cluster Data Node Daemon
21.4.2 ndbinfo_select_all — Select From ndbinfo Tables
21.4.3 ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)
21.4.4 ndb_mgmd — The NDB Cluster Management Server Daemon
21.4.5 ndb_mgm — The NDB Cluster Management Client
21.4.6 ndb_blob_tool — Check and Repair BLOB and TEXT columns of NDB Cluster Tables
21.4.7 ndb_config — Extract NDB Cluster Configuration Information
21.4.8 ndb_cpcd — Automate Testing for NDB Development
21.4.9 ndb_delete_all — Delete All Rows from an NDB Table
21.4.10 ndb_desc — Describe NDB Tables
21.4.11 ndb_drop_index — Drop Index from an NDB Table
21.4.12 ndb_drop_table — Drop an NDB Table
21.4.13 ndb_error_reporter — NDB Error-Reporting Utility
21.4.14 ndb_index_stat — NDB Index Statistics Utility
21.4.15 ndb_print_backup_file — Print NDB Backup File Contents
21.4.16 ndb_print_file — Print NDB Disk Data File Contents
21.4.17 ndb_print_schema_file — Print NDB Schema File Contents
21.4.18 ndb_print_sys_file — Print NDB System File Contents
21.4.19 ndbd_redo_log_reader — Check and Print Content of Cluster Redo Log
21.4.20 ndb_restore — Restore an NDB Cluster Backup
21.4.21 ndb_select_all — Print Rows from an NDB Table
21.4.22 ndb_select_count — Print Row Counts for NDB Tables
21.4.23 ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster
21.4.24 ndb_show_tables — Display List of NDB Tables
21.4.25 ndb_size.pl — NDBCLUSTER Size Requirement Estimator
21.4.26 ndb_waiter — Wait for NDB Cluster to Reach a Given Status
21.4.27 Options Common to NDB Cluster Programs
21.5 Management of NDB Cluster
21.5.1 Summary of NDB Cluster Start Phases
21.5.2 Commands in the NDB Cluster Management Client
21.5.3 Online Backup of NDB Cluster
21.5.4 MySQL Server Usage for NDB Cluster
21.5.5 Performing a Rolling Restart of an NDB Cluster
21.5.6 Event Reports Generated in NDB Cluster
21.5.7 NDB Cluster Log Messages
21.5.8 NDB Cluster Single User Mode
21.5.9 Quick Reference: NDB Cluster SQL Statements
21.5.10 ndbinfo: The NDB Cluster Information Database
21.5.11 INFORMATION_SCHEMA Tables for NDB Cluster
21.5.12 NDB Cluster Security Issues
21.5.13 NDB Cluster Disk Data Tables
21.5.14 Adding NDB Cluster Data Nodes Online
21.5.15 Distributed MySQL Privileges for NDB Cluster
21.5.16 NDB API Statistics Counters and Variables
21.6 NDB Cluster Replication
21.6.1 NDB Cluster Replication: Abbreviations and Symbols
21.6.2 General Requirements for NDB Cluster Replication
21.6.3 Known Issues in NDB Cluster Replication
21.6.4 NDB Cluster Replication Schema and Tables
21.6.5 Preparing the NDB Cluster for Replication
21.6.6 Starting NDB Cluster Replication (Single Replication Channel)
21.6.7 Using Two Replication Channels for NDB Cluster Replication
21.6.8 Implementing Failover with NDB Cluster Replication
21.6.9 NDB Cluster Backups With NDB Cluster Replication
21.6.10 NDB Cluster Replication: Multi-Master and Circular Replication
21.6.11 NDB Cluster Replication Conflict Resolution
21.7 NDB Cluster Release Notes

This chapter contains information about MySQL NDB Cluster, which is a high-availability, high-redundancy version of MySQL adapted for the distributed computing environment. Recent NDB Cluster release series use version 7 of the NDB storage engine (also known as NDBCLUSTER) to enable running several computers with MySQL servers and other software in a cluster. NDB Cluster 7.5, now available as a General Availability (GA) release beginning with version 7.5.4, incorporates version 7.5 of the NDB storage engine. The previous GA releases, NDB Cluster 7.3 and NDB Cluster 7.4, remain available for production use and incorporate NDB versions 7.3 and 7.4, respectively.

Support for the NDB storage engine is not included in standard MySQL Server 5.7 binaries built by Oracle. Instead, users of NDB Cluster binaries from Oracle should upgrade to the most recent binary release of NDB Cluster for supported platforms—these include RPMs that should work with most Linux distributions. NDB Cluster users who build from source should use the sources provided for NDB Cluster. (Locations where the sources can be obtained are listed later in this section.)

This chapter contains information about NDB Cluster 7.5 releases through 5.7.18-ndb-7.5.7. NDB Cluster 7.5 is available as a General Availability release and is recommended for new production deployments. The NDB Cluster 7.4, 7.3, and 7.2 release series are previous GA releases that are still supported. For more information about NDB Cluster 7.4 and NDB Cluster 7.3, see MySQL NDB Cluster 7.3 and NDB Cluster 7.4. For information about NDB Cluster 7.2, see MySQL NDB Cluster 7.2.

Supported Platforms.  NDB Cluster is currently available and supported on a number of platforms. For exact levels of support available on specific combinations of operating system versions, operating system distributions, and hardware platforms, please refer to http://www.mysql.com/support/supportedplatforms/cluster.html.

Availability.  NDB Cluster binary and source packages are available for supported platforms from http://dev.mysql.com/downloads/cluster/.

NDB Cluster release numbers.  NDB Cluster follows a somewhat different release pattern from the mainline MySQL Server 5.7 series of releases. In this Manual and other MySQL documentation, we identify NDB Cluster releases using a version number that begins with "NDB". This version number is that of the NDBCLUSTER storage engine used in the release, and not of the MySQL server version on which the NDB Cluster release is based.

Version strings used in NDB Cluster software.  The version string displayed by NDB Cluster programs uses this format:

mysql-mysql_server_version-ndb-ndb_engine_version

mysql_server_version represents the version of the MySQL Server on which the NDB Cluster release is based. For all NDB Cluster 7.5 releases, this is 5.7. ndb_engine_version is the version of the NDB storage engine used by this release of the NDB Cluster software. You can see this format used in the mysql client, as shown here:

shell> mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.18-ndb-7.5.7 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SELECT VERSION()\G
*************************** 1. row ***************************
VERSION(): 5.7.18-ndb-7.5.7
1 row in set (0.00 sec)

This version string is also displayed in the output of the SHOW command in the ndb_mgm client:

ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @10.0.10.6  (5.7.18-ndb-7.5.7, Nodegroup: 0, *)
id=2    @10.0.10.8  (5.7.18-ndb-7.5.7, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=3    @10.0.10.2  (5.7.18-ndb-7.5.7)

[mysqld(API)]   2 node(s)
id=4    @10.0.10.10  (5.7.18-ndb-7.5.7)
id=5 (not connected, accepting connect from any host)

The version string identifies the mainline MySQL version from which the NDB Cluster release was branched and the version of the NDB storage engine used. For example, the full version string for NDB 7.5.4 (the first NDB 7.5 GA release) was mysql-5.7.16-ndb-7.5.4, indicating that it was based on MySQL Server 5.7.16 and used version 7.5.4 of the NDB storage engine.

New NDB Cluster releases are numbered according to updates in the NDB storage engine, and do not necessarily correspond in a one-to-one fashion with mainline MySQL Server releases. For example, NDB 7.5.4 (as previously noted) was based on MySQL 5.7.16, while NDB 7.5.3 was based on MySQL 5.7.13 (version string: mysql-5.7.13-ndb-7.5.3).

Compatibility with standard MySQL 5.7 releases.  While many standard MySQL schemas and applications can work using NDB Cluster, it is also true that unmodified applications and database schemas may be slightly incompatible or have suboptimal performance when run using NDB Cluster (see Section 21.1.6, “Known Limitations of NDB Cluster”). Most of these issues can be overcome, but this also means that you are very unlikely to be able to switch an existing application datastore—that currently uses, for example, MyISAM or InnoDB—to use the NDB storage engine without allowing for the possibility of changes in schemas, queries, and applications. In addition, the MySQL Server and NDB Cluster codebases diverge considerably, so that the standard mysqld cannot function as a drop-in replacement for the version of mysqld supplied with NDB Cluster.

NDB Cluster development source trees.  NDB Cluster development trees can also be accessed from https://github.com/mysql/mysql-server.

The NDB Cluster development sources maintained at https://github.com/mysql/mysql-server are licensed under the GPL. For information about obtaining MySQL sources using Git and building them yourself, see Section 2.9.3, “Installing MySQL Using a Development Source Tree”.

Note

As with MySQL Server 5.7, NDB Cluster 7.5 releases are built using CMake.

NDB Cluster 7.5 is available as a General Availability (GA) release, beginning with version 7.5.4, and is recommended for new deployments. NDB Cluster 7.4 and NDB Cluster 7.3 are previous GA releases that are still supported in production, as is NDB Cluster 7.2, although NDB 7.2 is no longer recommended for new deployments. For an overview of major features added in NDB 7.4, see What is New in NDB Cluster 7.4. For similar information about NDB Cluster 7.3, see What is New in NDB Cluster 7.3. For an overview of major features added in previous NDB Cluster releases, see What is New in NDB Cluster in NDB Cluster 7.2. NDB 7.1 and earlier versions of NDB Cluster are no longer being developed or maintained.

The contents of this chapter are subject to revision as NDB Cluster continues to evolve. Additional information regarding NDB Cluster can be found on the MySQL Web site at http://www.mysql.com/products/cluster/.


21.1 NDB Cluster Overview

NDB Cluster is a technology that enables clustering of in-memory databases in a shared-nothing system. The shared-nothing architecture enables the system to work with very inexpensive hardware, and with a minimum of specific requirements for hardware or software.

NDB Cluster is designed not to have any single point of failure. In a shared-nothing system, each component is expected to have its own memory and disk, and the use of shared storage mechanisms such as network shares, network file systems, and SANs is not recommended or supported.

NDB Cluster integrates the standard MySQL server with an in-memory clustered storage engine called NDB (which stands for Network DataBase). In our documentation, the term NDB refers to the part of the setup that is specific to the storage engine, whereas NDB Cluster refers to the combination of one or more MySQL servers with the NDB storage engine.

An NDB Cluster consists of a set of computers, known as hosts, each running one or more processes. These processes, known as nodes, may include MySQL servers (for access to NDB data), data nodes (for storage of the data), one or more management servers, and possibly other specialized data access programs. The relationship of these components in an NDB Cluster is shown here:

Figure 21.1 NDB Cluster Components


All these programs work together to form an NDB Cluster (see Section 21.4, “NDB Cluster Programs”). When data is stored by the NDB storage engine, the tables (and table data) are stored in the data nodes. Such tables are directly accessible from all other MySQL servers (SQL nodes) in the cluster. Thus, in a payroll application storing data in a cluster, if one application updates the salary of an employee, all other MySQL servers that query this data can see this change immediately.

Although an NDB Cluster SQL node uses the mysqld server daemon, it differs in a number of critical respects from the mysqld binary supplied with the MySQL 5.7 distributions, and the two versions of mysqld are not interchangeable.

In addition, a MySQL server that is not connected to an NDB Cluster cannot use the NDB storage engine and cannot access any NDB Cluster data.

The data stored in the data nodes for NDB Cluster can be mirrored; the cluster can handle failures of individual data nodes with no other impact than that a small number of transactions are aborted due to losing the transaction state. Because transactional applications are expected to handle transaction failure, this should not be a source of problems.

Individual nodes can be stopped and restarted, and can then rejoin the system (cluster). Rolling restarts (in which all nodes are restarted in turn) are used in making configuration changes and software upgrades (see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”). Rolling restarts are also used as part of the process of adding new data nodes online (see Section 21.5.14, “Adding NDB Cluster Data Nodes Online”). For more information about data nodes, how they are organized in an NDB Cluster, and how they handle and store NDB Cluster data, see Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”.

Backing up and restoring NDB Cluster databases can be done using the NDB-native functionality found in the NDB Cluster management client and the ndb_restore program included in the NDB Cluster distribution. For more information, see Section 21.5.3, “Online Backup of NDB Cluster”, and Section 21.4.20, “ndb_restore — Restore an NDB Cluster Backup”. You can also use the standard MySQL functionality provided for this purpose in mysqldump and the MySQL server. See Section 5.5.4, “mysqldump — A Database Backup Program”, for more information.

NDB Cluster nodes can employ different transport mechanisms for inter-node communications; TCP/IP over standard 100 Mbps or faster Ethernet hardware is used in most real-world deployments.

21.1.1 NDB Cluster Core Concepts

NDBCLUSTER (also known as NDB) is an in-memory storage engine offering high-availability and data-persistence features.

The NDBCLUSTER storage engine can be configured with a range of failover and load-balancing options, but it is easiest to start with the storage engine at the cluster level. NDB Cluster's NDB storage engine contains a complete set of data, dependent only on other data within the cluster itself.

The Cluster portion of NDB Cluster is configured independently of the MySQL servers. In an NDB Cluster, each part of the cluster is considered to be a node.

Note

In many contexts, the term node is used to indicate a computer, but when discussing NDB Cluster it means a process. It is possible to run multiple nodes on a single computer; for a computer on which one or more cluster nodes are being run we use the term cluster host.

There are three types of cluster nodes, and in a minimal NDB Cluster configuration, there will be at least three nodes, one of each of these types:

  • Management node: The role of this type of node is to manage the other nodes within the NDB Cluster, performing such functions as providing configuration data, starting and stopping nodes, and running backups. Because this node type manages the configuration of the other nodes, a node of this type should be started first, before any other node. An MGM node is started with the command ndb_mgmd.

  • Data node: This type of node stores cluster data. There are as many data nodes as there are replicas, times the number of fragments (see Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”). For example, with two replicas, each having two fragments, you need four data nodes. One replica is sufficient for data storage, but provides no redundancy; therefore, it is recommended to have 2 (or more) replicas to provide redundancy, and thus high availability. A data node is started with the command ndbd (see Section 21.4.1, “ndbd — The NDB Cluster Data Node Daemon”) or ndbmtd (see Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”).

    NDB Cluster tables are normally stored completely in memory rather than on disk (this is why we refer to NDB Cluster as an in-memory database). However, some NDB Cluster data can be stored on disk; see Section 21.5.13, “NDB Cluster Disk Data Tables”, for more information.

  • SQL node: This is a node that accesses the cluster data. In the case of NDB Cluster, an SQL node is a traditional MySQL server that uses the NDBCLUSTER storage engine. An SQL node is a mysqld process started with the --ndbcluster and --ndb-connectstring options, which are explained elsewhere in this chapter, possibly with additional MySQL server options as well.

    An SQL node is actually just a specialized type of API node, which designates any application that accesses NDB Cluster data. Another example of an API node is the ndb_restore utility that is used to restore a cluster backup. It is possible to write such applications using the NDB API. For basic information about the NDB API, see Getting Started with the NDB API.

Important

It is not realistic to expect to employ a three-node setup in a production environment. Such a configuration provides no redundancy; to benefit from NDB Cluster's high-availability features, you must use multiple data and SQL nodes. The use of multiple management nodes is also highly recommended.

For a brief introduction to the relationships between nodes, node groups, replicas, and partitions in NDB Cluster, see Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”.

Configuration of a cluster involves configuring each individual node in the cluster and setting up individual communication links between nodes. NDB Cluster is currently designed with the intention that data nodes are homogeneous in terms of processor power, memory space, and bandwidth. In addition, to provide a single point of configuration, all configuration data for the cluster as a whole is located in one configuration file.

The management server manages the cluster configuration file and the cluster log. Each node in the cluster retrieves the configuration data from the management server, and so requires a way to determine where the management server resides. When interesting events occur in the data nodes, the nodes transfer information about these events to the management server, which then writes the information to the cluster log.

In addition, there can be any number of cluster client processes or applications. These include standard MySQL clients, NDB-specific API programs, and management clients. These are described in the next few paragraphs.

Standard MySQL clients.  NDB Cluster can be used with existing MySQL applications written in PHP, Perl, C, C++, Java, Python, Ruby, and so on. Such client applications send SQL statements to and receive responses from MySQL servers acting as NDB Cluster SQL nodes in much the same way that they interact with standalone MySQL servers.

MySQL clients using an NDB Cluster as a data source can be modified to take advantage of the ability to connect with multiple MySQL servers to achieve load balancing and failover. For example, Java clients using Connector/J 5.0.6 and later can use jdbc:mysql:loadbalance:// URLs (improved in Connector/J 5.1.7) to achieve load balancing transparently; for more information about using Connector/J with NDB Cluster, see Using Connector/J with NDB Cluster.

NDB client programs.  Client programs can be written that access NDB Cluster data directly from the NDBCLUSTER storage engine, bypassing any MySQL Servers that may be connected to the cluster, using the NDB API, a high-level C++ API. Such applications may be useful for specialized purposes where an SQL interface to the data is not needed. For more information, see The NDB API.

NDB-specific Java applications can also be written for NDB Cluster using the NDB Cluster Connector for Java. This NDB Cluster Connector includes ClusterJ, a high-level database API similar to object-relational mapping persistence frameworks such as Hibernate and JPA, which connects directly to NDBCLUSTER and so does not require access to a MySQL Server. Support is also provided in NDB Cluster for ClusterJPA, an OpenJPA implementation for NDB Cluster that leverages the strengths of ClusterJ and JDBC; ID lookups and other fast operations are performed using ClusterJ (bypassing the MySQL Server), while more complex queries that can benefit from MySQL's query optimizer are sent through the MySQL Server, using JDBC. See Java and NDB Cluster, and The ClusterJ API and Data Object Model, for more information.

NDB Cluster also supports applications written in JavaScript using Node.js. The MySQL Connector for JavaScript includes adapters for direct access to the NDB storage engine as well as for the MySQL Server. Applications using this Connector are typically event-driven and use a domain object model similar in many ways to that employed by ClusterJ. For more information, see MySQL NoSQL Connector for JavaScript.

The Memcache API for NDB Cluster, implemented as the loadable ndbmemcache storage engine for memcached version 1.6 and later, can be used to provide a persistent NDB Cluster data store, accessed using the memcache protocol.

The standard memcached caching engine is included in the NDB Cluster 7.5 distribution. Each memcached server has direct access to data stored in NDB Cluster, but is also able to cache data locally and to serve (some) requests from this local cache.

For more information, see ndbmemcache—Memcache API for NDB Cluster.

Management clients.  These clients connect to the management server and provide commands for starting and stopping nodes gracefully, starting and stopping message tracing (debug versions only), showing node versions and status, starting and stopping backups, and so on. An example of this type of program is the ndb_mgm management client supplied with NDB Cluster (see Section 21.4.5, “ndb_mgm — The NDB Cluster Management Client”). Such applications can be written using the MGM API, a C-language API that communicates directly with one or more NDB Cluster management servers. For more information, see The MGM API.

Oracle also makes available MySQL Cluster Manager, which provides an advanced command-line interface simplifying many complex NDB Cluster management tasks, such as restarting an NDB Cluster with a large number of nodes. The MySQL Cluster Manager client also supports commands for getting and setting the values of most node configuration parameters as well as mysqld server options and variables relating to NDB Cluster. See MySQL™ Cluster Manager 1.4.2 User Manual, for more information.

Event logs.  NDB Cluster logs events by category (startup, shutdown, errors, checkpoints, and so on), priority, and severity. A complete listing of all reportable events may be found in Section 21.5.6, “Event Reports Generated in NDB Cluster”. Event logs are of the two types listed here:

  • Cluster log: Keeps a record of all desired reportable events for the cluster as a whole.

  • Node log: A separate log which is also kept for each individual node.

Note

Under normal circumstances, it is necessary and sufficient to keep and examine only the cluster log. The node logs need be consulted only for application development and debugging purposes.

Checkpoint.  Generally speaking, when data is saved to disk, it is said that a checkpoint has been reached. More specific to NDB Cluster, a checkpoint is a point in time where all committed transactions are stored on disk. With regard to the NDB storage engine, there are two types of checkpoints which work together to ensure that a consistent view of the cluster's data is maintained. These are shown in the following list:

  • Local Checkpoint (LCP): This is a checkpoint that is specific to a single node; however, LCPs take place for all nodes in the cluster more or less concurrently. An LCP involves saving all of a node's data to disk, and so usually occurs every few minutes. The precise interval varies, and depends upon the amount of data stored by the node, the level of cluster activity, and other factors.

  • Global Checkpoint (GCP): A GCP occurs every few seconds, when transactions for all nodes are synchronized and the redo log is flushed to disk.

For more information about the files and directories created by local checkpoints and global checkpoints, see NDB Cluster Data Node File System Directory Files.

21.1.2 NDB Cluster Nodes, Node Groups, Replicas, and Partitions

This section discusses the manner in which NDB Cluster divides and duplicates data for storage.

A number of concepts central to an understanding of this topic are discussed in the next few paragraphs.

Data node.  An ndbd or ndbmtd process, which stores one or more replicas—that is, copies of the partitions (discussed later in this section) assigned to the node group of which the node is a member.

Each data node should be located on a separate computer. While it is also possible to host multiple data node processes on a single computer, such a configuration is not usually recommended.

It is common for the terms node and data node to be used interchangeably when referring to an ndbd or ndbmtd process; where mentioned, management nodes (ndb_mgmd processes) and SQL nodes (mysqld processes) are specified as such in this discussion.

Node group.  A node group consists of one or more nodes, and stores partitions, or sets of replicas (see next item).

The number of node groups in an NDB Cluster is not directly configurable; it is a function of the number of data nodes and of the number of replicas (NoOfReplicas configuration parameter), as shown here:

[# of node groups] = [# of data nodes] / NoOfReplicas

Thus, an NDB Cluster with 4 data nodes has 4 node groups if NoOfReplicas is set to 1 in the config.ini file, 2 node groups if NoOfReplicas is set to 2, and 1 node group if NoOfReplicas is set to 4. Replicas are discussed later in this section; for more information about NoOfReplicas, see Section 21.3.3.6, “Defining NDB Cluster Data Nodes”.
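For example, a minimal config.ini sketch for the four-node, two-replica case just described might contain the following (the host names here are hypothetical); with these settings, the cluster has 4 / 2 = 2 node groups:

[ndbd default]
NoOfReplicas=2

[ndbd]
HostName=ndb-host-1

[ndbd]
HostName=ndb-host-2

[ndbd]
HostName=ndb-host-3

[ndbd]
HostName=ndb-host-4

Unless overridden using the NodeGroup parameter, node group membership is assigned automatically; here, the first two data nodes would normally form node group 0 and the remaining two node group 1.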

Note

All node groups in an NDB Cluster must have the same number of data nodes.

You can add new node groups (and thus new data nodes) online, to a running NDB Cluster; see Section 21.5.14, “Adding NDB Cluster Data Nodes Online”, for more information.

Partition.  This is a portion of the data stored by the cluster. Each node is responsible for keeping at least one copy of any partitions assigned to it (that is, at least one replica) available to the cluster.

The number of partitions used by default by NDB Cluster depends on the number of data nodes and the number of LDM threads in use by the data nodes, as shown here:

[# of partitions] = [# of data nodes] * [# of LDM threads]

When using data nodes running ndbmtd, the number of LDM threads is controlled by the setting for MaxNoOfExecutionThreads. When using ndbd there is a single LDM thread, which means that there are as many cluster partitions as nodes participating in the cluster. This is also the case when using ndbmtd with MaxNoOfExecutionThreads set to 3 or less. (You should be aware that the number of LDM threads increases with the value of this parameter, but not in a strictly linear fashion, and that there are additional constraints on setting it; see the description of MaxNoOfExecutionThreads for more information.)
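As a worked example of the default calculation (the node and thread counts here are hypothetical): in a cluster of 4 data nodes running ndbmtd, each configured with 4 LDM threads,

[# of partitions] = 4 * 4 = 16

whereas the same 4 nodes running single-threaded ndbd would yield 4 * 1 = 4 partitions.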

NDB and user-defined partitioning.  NDB Cluster normally partitions NDBCLUSTER tables automatically. However, it is also possible to employ user-defined partitioning with NDBCLUSTER tables. This is subject to the following limitations:

  1. Only the KEY and LINEAR KEY partitioning schemes are supported in production with NDB tables.

  2. The maximum number of partitions that may be defined explicitly for any NDB table is 8 * MaxNoOfExecutionThreads * [number of node groups], the number of node groups in an NDB Cluster being determined as discussed previously in this section. When using ndbd for data node processes, setting MaxNoOfExecutionThreads has no effect; in such a case, it can be treated as though it were equal to 1 for purposes of performing this calculation.

    See Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”, for more information.

For more information relating to NDB Cluster and user-defined partitioning, see Section 21.1.6, “Known Limitations of NDB Cluster”, and Section 22.6.2, “Partitioning Limitations Relating to Storage Engines”.
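As a brief sketch of the supported syntax (the table and column names here are hypothetical), explicit KEY partitioning of an NDB table looks like this:

CREATE TABLE ts (
    id INT NOT NULL PRIMARY KEY,
    created DATETIME
) ENGINE=NDBCLUSTER
PARTITION BY KEY (id)   -- only [LINEAR] KEY is supported for NDB tables
PARTITIONS 4;           -- must not exceed the maximum described above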

Replica.  This is a copy of a cluster partition. Each node in a node group stores a replica. Also sometimes known as a partition replica. The number of replicas is equal to the number of nodes per node group.

A replica belongs entirely to a single node; a node can (and usually does) store several replicas.

The following diagram illustrates an NDB Cluster with four data nodes running ndbd, arranged in two node groups of two nodes each; nodes 1 and 2 belong to node group 0, and nodes 3 and 4 belong to node group 1.

Note

Only data nodes are shown here; although a working NDB Cluster requires an ndb_mgmd process for cluster management and at least one SQL node to access the data stored by the cluster, these have been omitted from the figure for clarity.

Figure 21.2 NDB Cluster with Two Node Groups


The data stored by the cluster is divided into four partitions, numbered 0, 1, 2, and 3. Each partition is stored—in multiple copies—on the same node group. Partitions are stored on alternate node groups as follows:

  • Partition 0 is stored on node group 0; a primary replica (primary copy) is stored on node 1, and a backup replica (backup copy of the partition) is stored on node 2.

  • Partition 1 is stored on the other node group (node group 1); this partition's primary replica is on node 3, and its backup replica is on node 4.

  • Partition 2 is stored on node group 0. However, the placing of its two replicas is reversed from that of Partition 0; for Partition 2, the primary replica is stored on node 2, and the backup on node 1.

  • Partition 3 is stored on node group 1, and the placement of its two replicas is reversed from that of partition 1. That is, its primary replica is located on node 4, with the backup on node 3.

What this means regarding the continued operation of an NDB Cluster is this: so long as each node group participating in the cluster has at least one node operating, the cluster has a complete copy of all data and remains viable. This is illustrated in the next diagram.

Figure 21.3 Nodes Required for a 2x2 NDB Cluster


In this example, where the cluster consists of two node groups of two data nodes, each running an instance of ndbd, any combination of at least one node in node group 0 and at least one node in node group 1 is sufficient to keep the cluster alive (indicated by arrows in the diagram). However, if both nodes from either node group fail, the remaining two nodes are not sufficient (shown by the arrows marked out with an X); in either case, the cluster has lost an entire partition and so can no longer provide access to a complete set of all NDB Cluster data.

In NDB 7.5.4 and later, the maximum number of node groups supported for a single NDB Cluster instance is 48 (Bug #80845, Bug #22996305).

21.1.3 NDB Cluster Hardware, Software, and Networking Requirements

One of the strengths of NDB Cluster is that it can be run on commodity hardware and has no unusual requirements in this regard, other than for large amounts of RAM, due to the fact that all live data storage is done in memory. (It is possible to reduce this requirement using Disk Data tables—see Section 21.5.13, “NDB Cluster Disk Data Tables”, for more information about these.) Naturally, multiple and faster CPUs can enhance performance. Memory requirements for other NDB Cluster processes are relatively small.

The software requirements for NDB Cluster are also modest. Host operating systems do not require any unusual modules, services, applications, or configuration to support NDB Cluster. For supported operating systems, a standard installation should be sufficient. The MySQL software requirements are simple: all that is needed is a production release of NDB Cluster. It is not strictly necessary to compile MySQL yourself merely to be able to use NDB Cluster. We assume that you are using the binaries appropriate to your platform, available from the NDB Cluster software downloads page at http://dev.mysql.com/downloads/cluster/.

For communication between nodes, NDB Cluster supports TCP/IP networking in any standard topology, and the minimum expected for each host is a standard 100 Mbps Ethernet card, plus a switch, hub, or router to provide network connectivity for the cluster as a whole. We strongly recommend that an NDB Cluster be run on its own subnet which is not shared with machines not forming part of the cluster for the following reasons:

  • Security.  Communications between NDB Cluster nodes are not encrypted or shielded in any way. The only means of protecting transmissions within an NDB Cluster is to run your NDB Cluster on a protected network. If you intend to use NDB Cluster for Web applications, the cluster should definitely reside behind your firewall and not in your network's De-Militarized Zone (DMZ) or elsewhere.

    See Section 21.5.12.1, “NDB Cluster Security and Networking Issues”, for more information.

  • Efficiency.  Setting up an NDB Cluster on a private or protected network enables the cluster to make exclusive use of bandwidth between cluster hosts. Using a separate switch for your NDB Cluster not only helps protect against unauthorized access to NDB Cluster data, it also ensures that NDB Cluster nodes are shielded from interference caused by transmissions between other computers on the network. For enhanced reliability, you can use dual switches and dual cards to remove the network as a single point of failure; many device drivers support failover for such communication links.

Network communication and latency.  NDB Cluster requires communication between data nodes and API nodes (including SQL nodes), as well as between data nodes and other data nodes, to execute queries and updates. Communication latency between these processes can directly affect the observed performance and latency of user queries. In addition, to maintain consistency and service despite the silent failure of nodes, NDB Cluster uses heartbeating and timeout mechanisms which treat an extended loss of communication from a node as node failure. This can lead to reduced redundancy. Recall that, to maintain data consistency, an NDB Cluster shuts down when the last node in a node group fails. Thus, to avoid increasing the risk of a forced shutdown, breaks in communication between nodes should be avoided wherever possible.

The failure of a data or API node results in the abort of all uncommitted transactions involving the failed node. Data node recovery requires synchronization of the failed node's data from a surviving data node, and re-establishment of disk-based redo and checkpoint logs, before the data node returns to service. This recovery can take some time, during which the Cluster operates with reduced redundancy.

Heartbeating relies on timely generation of heartbeat signals by all nodes. This may not be possible if the node is overloaded, has insufficient machine CPU due to sharing with other programs, or is experiencing delays due to swapping. If heartbeat generation is sufficiently delayed, other nodes treat the node that is slow to respond as failed.

This treatment of a slow node as a failed one may or may not be desirable, depending on the impact of the node's slowed operation on the rest of the cluster. When setting timeout values such as HeartbeatIntervalDbDb and HeartbeatIntervalDbApi for NDB Cluster, care must be taken to achieve quick detection, failover, and return to service, while avoiding potentially expensive false positives.

Where communication latencies between data nodes are expected to be higher than would be expected in a LAN environment (on the order of 100 µs), timeout parameters must be increased to ensure that any allowed periods of latency are well within configured timeouts. Increasing timeouts in this way has a corresponding effect on the worst-case time to detect failure and therefore time to service recovery.

LAN environments can typically be configured with stable low latency, and such that they can provide redundancy with fast failover. Individual link failures can be recovered from with minimal and controlled latency visible at the TCP level (where NDB Cluster normally operates). WAN environments may offer a range of latencies, as well as redundancy with slower failover times. Individual link failures may require route changes to propagate before end-to-end connectivity is restored. At the TCP level this can appear as large latencies on individual channels. The worst-case observed TCP latency in these scenarios is related to the worst-case time for the IP layer to reroute around the failures.

SCI support.  It is also possible to use the high-speed Scalable Coherent Interface (SCI) with NDB Cluster, but this is not a requirement. See Section 21.3.4, “Using High-Speed Interconnects with NDB Cluster”, for more about this protocol and its use with NDB Cluster.

21.1.4 What is New in MySQL NDB Cluster 7.5

In this section, we describe changes in the implementation of NDB Cluster in MySQL NDB Cluster 7.5 as compared to NDB 7.4 and earlier release series. NDB Cluster 7.5 is available as a General Availability release beginning with NDB 7.5.4, and is recommended for new deployments. NDB Cluster 7.4 and NDB Cluster 7.3 are previous GA releases, still supported in production for existing deployments, as is NDB Cluster 7.2. NDB 7.1 and earlier release series are no longer maintained or supported in production. We recommend that new deployments use NDB Cluster 7.5, which is the latest GA release. For information about features added in NDB 7.4, see What is New in NDB Cluster 7.4; What is New in NDB Cluster 7.3 contains information about features added in NDB 7.3. For information about NDB Cluster 7.2 and previous NDB Cluster releases, see What is New in NDB Cluster in NDB Cluster 7.2.

Major changes and new features in NDB Cluster 7.5 which are likely to be of interest are shown in the following list:

  • ndbinfo Enhancements.  A number of changes are made in the ndbinfo database, chief of which is that it now provides detailed information about NDB Cluster node configuration parameters.

    The config_params table has been made read-only, and has been enhanced with additional columns providing information about each configuration parameter, including the parameter's type, default value, maximum and minimum values (where applicable), a brief description of the parameter, and whether the parameter is required. This table also provides each parameter with a unique param_number.

    A row in the config_values table shows the current value of a given parameter on the node having a specified ID. The parameter is identified by the value of the config_param column, which maps to the config_params table's param_number.

    Using this relationship you can write a join on these two tables to obtain the default, maximum, minimum, and current values for one or more NDB Cluster configuration parameters by name. An example SQL statement using such a join is shown here:

    SELECT  p.param_name AS Name,
            v.node_id AS Node,
            p.param_type AS Type,
            p.param_default AS 'Default',
            p.param_min AS Minimum,
            p.param_max AS Maximum,
            CASE p.param_mandatory WHEN 1 THEN 'Y' ELSE 'N' END AS 'Required',
            v.config_value AS Current
    FROM    config_params p
    JOIN    config_values v
    ON      p.param_number = v.config_param
    WHERE   p.param_name IN ('NodeId', 'HostName', 'DataMemory', 'IndexMemory');
    

    For more information about these changes, see Section 21.5.10.7, “The ndbinfo config_params Table”. See Section 21.5.10.8, “The ndbinfo config_values Table”, for further information and examples.

    In addition, the ndbinfo database no longer depends on the MyISAM storage engine. All ndbinfo tables and views now use NDB (shown as NDBINFO).

    Several new ndbinfo tables were introduced in NDB 7.5.4. These tables are listed here, with brief descriptions:

    • dict_obj_info provides the names and types of database objects in NDB, as well as information about parent objects where applicable

    • table_distribution_status provides NDB table distribution status information

    • table_fragments provides information about the distribution of NDB table fragments

    • table_info provides information about logging, checkpointing, storage, and other options in force for each NDB table

    • table_replicas provides information about fragment replicas

    See the descriptions of the individual tables for more information.

  • Default row and column format changes.  Starting with NDB 7.5.1, the default value for both the ROW_FORMAT option and the COLUMN_FORMAT option for CREATE TABLE can be set to DYNAMIC rather than FIXED. A new MySQL server variable ndb_default_column_format is added as part of this change; set this to FIXED or DYNAMIC (or start mysqld with the equivalent option --ndb-default-column-format=FIXED) to force this value to be used for COLUMN_FORMAT and ROW_FORMAT. Prior to NDB 7.5.4, the default for this variable was DYNAMIC; in that version and later, the default is FIXED, which provides backwards compatibility with prior releases (Bug #24487363).

    The row format and column format used by existing table columns are unaffected by this change. New columns added to such tables use the new defaults for these (possibly overridden by ndb_default_column_format), and existing columns are changed to use these as well, provided that the ALTER TABLE statement performing this operation specifies ALGORITHM=COPY.
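    A minimal sketch of how these settings interact (the table name is hypothetical, and the variable is shown being set at runtime): the server-wide default applies unless a column declares its own format:

    SET GLOBAL ndb_default_column_format = 'FIXED';

    CREATE TABLE t1 (
        c1 INT,                                  -- uses the server default (FIXED)
        c2 VARCHAR(100) COLUMN_FORMAT DYNAMIC    -- per-column override
    ) ENGINE=NDBCLUSTER;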

    Note

    A copying ALTER TABLE cannot be done implicitly if mysqld is run with --ndb-allow-copying-alter-table=FALSE.

  • ndb_binlog_index No Longer Dependent On MyISAM.  As of NDB 7.5.2, the ndb_binlog_index table employed in NDB Cluster Replication now uses the InnoDB storage engine instead of MyISAM. When upgrading, you can run mysql_upgrade with --force --upgrade-system-tables to cause it to execute ALTER TABLE ... ENGINE=INNODB on this table. Use of MyISAM for this table remains supported for backward compatibility.
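    For example (a sketch only; connection options and other details are omitted):

    shell> mysql_upgrade --force --upgrade-system-tables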

    A benefit of this change is that it makes it possible to depend on transactional behavior and lock-free reads for this table, which can help alleviate concurrency issues during purge operations and log rotation, and improve the availability of this table.

  • ALTER TABLE Changes.  NDB Cluster formerly supported an alternative syntax for online ALTER TABLE. This is no longer supported in NDB Cluster 7.5, which makes exclusive use of ALGORITHM = DEFAULT|COPY|INPLACE for table DDL, as in the standard MySQL Server.

    Another change affecting the use of this statement is that ALTER TABLE ... ALGORITHM=INPLACE RENAME may now contain DDL operations in addition to the renaming.
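    As an illustrative sketch (the table, column, and index names are hypothetical), a rename can now be combined with other in-place DDL in a single statement:

    ALTER TABLE t1
        ALGORITHM=INPLACE,
        RENAME TO t2,
        ADD INDEX i1 (c1);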

  • ExecuteOnComputer Parameter Deprecated.  The ExecuteOnComputer configuration parameter for management nodes, data nodes, and API nodes has been deprecated and is now subject to removal in a future release of NDB Cluster. You should use the equivalent HostName parameter for all three types of nodes.

  • records-per-key Optimization.  The NDB handler now uses the records-per-key interface for index statistics implemented for the optimizer in MySQL 5.7.5. Some of the benefits from this change include those listed here:

    • The optimizer now chooses better execution plans in many cases where a less optimal join index or table join order would previously have been chosen

    • Row estimates shown by EXPLAIN are more accurate

    • Cardinality estimates shown by SHOW INDEX are improved

  • Connection Pool Node IDs.  NDB 7.5.0 adds the mysqld --ndb-cluster-connection-pool-nodeids option, which allows a set of node IDs to be set for the connection pool. This setting overrides --ndb-nodeid, which means that it also overrides both the --ndb-connectstring option and the NDB_CONNECTSTRING environment variable.

    Note

    You can set the size for the connection pool using the --ndb-cluster-connection-pool option for mysqld.
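    A brief my.cnf sketch combining the two options (the node IDs are hypothetical; the number of node IDs listed must match the pool size):

    [mysqld]
    ndbcluster
    ndb-cluster-connection-pool=4
    ndb-cluster-connection-pool-nodeids=4,5,6,7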

  • create_old_temporals Removed.  The create_old_temporals system variable was deprecated in NDB Cluster 7.4, and has now been removed.

  • ndb_mgm Client PROMPT Command.  NDB Cluster 7.5 adds a new command for setting the client's command-line prompt. The following example illustrates the use of the PROMPT command:

    ndb_mgm> PROMPT mgm#1:
    mgm#1: SHOW
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=5    @10.100.1.1  (mysql-5.7.18-ndb-7.5.7, Nodegroup: 0, *)
    id=6    @10.100.1.3  (mysql-5.7.18-ndb-7.5.7, Nodegroup: 0)
    id=7    @10.100.1.9  (mysql-5.7.18-ndb-7.5.7, Nodegroup: 1)
    id=8    @10.100.1.11  (mysql-5.7.18-ndb-7.5.7, Nodegroup: 1)
    
    [ndb_mgmd(MGM)] 1 node(s)
    id=50   @10.100.1.8  (mysql-5.7.18-ndb-7.5.7)
    
    [mysqld(API)]   2 node(s)
    id=100  @10.100.1.8  (5.7.18-ndb-7.5.7)
    id=101  @10.100.1.10  (5.7.18-ndb-7.5.7)
    
    mgm#1: PROMPT
    ndb_mgm> EXIT
    jon@valhaj:/usr/local/mysql/bin>
    

    For additional information and examples, see Section 21.5.2, “Commands in the NDB Cluster Management Client”.

  • Increased FIXED column storage per fragment.  NDB Cluster 7.5 and later support a maximum of 128 TB per fragment of data in FIXED columns. In NDB Cluster 7.4 and earlier, this was 16 GB per fragment.

  • Deprecated Parameters Removed.  The following NDB Cluster data node configuration parameters were deprecated in previous releases of NDB Cluster, and were removed in NDB 7.5.0:

    • Id: deprecated in NDB 7.1.9; replaced by NodeId.

    • NoOfDiskPagesToDiskDuringRestartTUP, NoOfDiskPagesToDiskDuringRestartACC: both deprecated, had no effect; replaced in MySQL 5.1.6 by DiskCheckpointSpeedInRestart, which itself was later deprecated (in NDB 7.4.1) and is now also removed.

    • NoOfDiskPagesToDiskAfterRestartACC, NoOfDiskPagesToDiskAfterRestartTUP: both deprecated, had no effect; replaced in MySQL 5.1.6 by DiskCheckpointSpeed, which itself was later deprecated (in NDB 7.4.1) and is now also removed.

    • ReservedSendBufferMemory: deprecated in NDB 7.2.5; no longer had any effect.

    • MaxNoOfIndexes: archaic (pre-MySQL 4.1), had no effect; long since replaced by MaxNoOfOrderedIndexes or MaxNoOfUniqueHashIndexes.

    • Discless: archaic (pre-MySQL 4.1) synonym for and long since replaced by Diskless.

    The archaic and unused (and for this reason also previously undocumented) ByteOrder computer configuration parameter was also removed in NDB 7.5.0.

    The parameters just described are not supported in NDB 7.5. Attempting to use any of these parameters in an NDB Cluster configuration file now results in an error.

  • DBTC Scan Enhancements.  Scans have been improved by reducing the number of signals used for communication between the DBTC and DBDIH kernel blocks in NDB, enabling higher scalability of data nodes by decreasing the use of CPU resources for scan operations, in some cases by an estimated five percent.

    Also as a result of these changes, response times should be greatly improved, which could help prevent issues with overload of the main threads. In addition, scans made in the BACKUP kernel block have also been improved and made more efficient than in previous releases.

  • JSON column support.  NDB 7.5.2 and later supports the JSON column type for NDB tables and the JSON functions found in the MySQL Server, subject to the limitation that an NDB table can have at most 3 JSON columns.
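    A short sketch of JSON usage with NDB (the table and data are hypothetical):

    CREATE TABLE jt (
        id INT NOT NULL PRIMARY KEY,
        doc JSON                -- an NDB table may have at most 3 JSON columns
    ) ENGINE=NDBCLUSTER;

    INSERT INTO jt VALUES (1, '{"name": "example", "qty": 2}');

    SELECT JSON_EXTRACT(doc, '$.name') FROM jt WHERE id = 1;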

  • Read from any replica; specify number of hashmap partition fragments.  Previously, all reads were directed towards the primary replica except for simple reads. (A simple read is a read that locks the row while reading it.) Beginning with NDB 7.5.2, it is possible to enable reads from any replica. This is disabled by default but can be enabled for a given SQL node using the ndb_read_backup system variable added in this release.

    Previously, it was possible to define tables with only one type of partition mapping, with one primary partition on each LDM in each node, but in NDB 7.5.2 it becomes possible to be more flexible about the assignment of partitions by setting a partition balance (fragment count type). Possible balance schemes are one per node, one per node group, one per LDM per node, and one per LDM per node group.

    This setting can be controlled for individual tables by means of a PARTITION_BALANCE option (renamed from FRAGMENT_COUNT_TYPE in NDB 7.5.4) embedded in NDB_TABLE comments in CREATE TABLE or ALTER TABLE statements. Settings for table-level READ_BACKUP are also supported using this syntax. For more information and examples, see Section 14.1.18.9, “Setting NDB_TABLE Options in Table Comments”.

    In NDB API applications, a table's partition balance can also be retrieved and set using methods supplied for this purpose; see Table::getPartitionBalance(), and Table::setPartitionBalance(), as well as Object::PartitionBalance, for more information about these.

    As part of this work, NDB 7.5.2 also introduces the ndb_data_node_neighbour system variable. This is intended for use in transaction hinting, to designate a data node that is close (in terms of network proximity) to the SQL node.

    NDB 7.5.3 adds a further enhancement to READ_BACKUP: In this and later versions, it is possible to set READ_BACKUP for a given table online as part of ALTER TABLE ... ALGORITHM=INPLACE ....
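    A brief sketch of the comment syntax just described (the table name is hypothetical; the PARTITION_BALANCE value shown is one of the documented schemes):

    CREATE TABLE t1 (
        id INT NOT NULL PRIMARY KEY
    ) ENGINE=NDBCLUSTER
    COMMENT="NDB_TABLE=READ_BACKUP=1,PARTITION_BALANCE=FOR_RA_BY_NODE";

    -- NDB 7.5.3 and later: set READ_BACKUP online for an existing table
    ALTER TABLE t1 ALGORITHM=INPLACE,
        COMMENT="NDB_TABLE=READ_BACKUP=1";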

  • ThreadConfig improvements.  A number of enhancements and feature additions are implemented in NDB 7.5.2 for the ThreadConfig multithreaded data node (ndbmtd) configuration parameter, including support for an increased number of platforms. These changes are described in the next few paragraphs.

    Non-exclusive CPU locking is now supported on FreeBSD and Windows, using cpubind and cpuset. Exclusive CPU locking is now supported on Solaris (only) using the cpubind_exclusive and cpuset_exclusive parameters which are introduced in this release.

    Thread prioritization is now available, controlled by the new thread_prio parameter. thread_prio is supported on Linux, FreeBSD, Windows, and Solaris, and varies somewhat by platform. For more information, see the description of ThreadConfig.

    The realtime parameter is now supported on Windows platforms.

  • Partitions larger than 16 GB.  Due to an improvement in the hash index implementation used by NDB Cluster data nodes, partitions of NDB tables may now contain more than 16 GB of data for fixed columns, and the maximum partition size for fixed columns is now raised to 128 TB. The previous limitation was due to the fact that the DBACC block in the NDB kernel used only 32-bit references to the fixed-size part of a row in the DBTUP block, although 45-bit references to this data are used in DBTUP itself and elsewhere in the kernel outside DBACC; all such references to the data handled in the DBACC block now use 45 bits as well.

  • Print SQL statements from ndb_restore.  NDB 7.5.4 adds the --print-sql-log option for the ndb_restore utility provided with the NDB Cluster distribution. This option enables SQL logging to stdout. Important: Every table to be restored using this option must have an explicitly defined primary key.
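    A hypothetical invocation (the node ID, backup ID, and backup path are placeholders):

    shell> ndb_restore --nodeid=1 --backupid=1 --restore-data \
              --backup-path=/path/to/BACKUP/BACKUP-1 --print-sql-log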

    See Section 21.4.20, “ndb_restore — Restore an NDB Cluster Backup”, for more information.

  • Organization of RPM packages.  Beginning with NDB 7.5.4, the naming and organization of RPM packages provided for NDB Cluster align more closely with those released for the MySQL server. The names of all NDB Cluster RPMs are now prefixed with mysql-cluster. Data nodes are now installed using the data-node package; management nodes are now installed from the management-server package; and SQL nodes require the server and common packages. MySQL and NDB client programs, including the mysql client and the ndb_mgm management client, are now included in the client RPM.
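    For example, on an RPM-based Linux host, a data node might be installed with a command along these lines (the exact package file name varies by release, license, and platform, so this is a sketch only):

    shell> rpm -Uhv mysql-cluster-community-data-node-7.5.7-1.el7.x86_64.rpm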

    For a detailed listing of NDB Cluster RPMs and other information, see Section 21.2.2.2, “Installing NDB Cluster from RPM”.

NDB Cluster 7.5 is also supported by MySQL Cluster Manager, which provides an advanced command-line interface that can simplify many complex NDB Cluster management tasks. See MySQL™ Cluster Manager 1.4.2 User Manual, for more information.

21.1.5 MySQL Server Using InnoDB Compared with NDB Cluster

MySQL Server offers a number of choices in storage engines. Since both NDB and InnoDB can serve as transactional MySQL storage engines, users of MySQL Server sometimes become interested in NDB Cluster. They see NDB as a possible alternative or upgrade to the default InnoDB storage engine in MySQL 5.7. While NDB and InnoDB share common characteristics, there are differences in architecture and implementation, so that some existing MySQL Server applications and usage scenarios can be a good fit for NDB Cluster, but not all of them.

In this section, we discuss and compare some characteristics of the NDB storage engine used by NDB 7.5 with InnoDB used in MySQL 5.7. The next few sections provide a technical comparison. In many instances, decisions about when and where to use NDB Cluster must be made on a case-by-case basis, taking all factors into consideration. While it is beyond the scope of this documentation to provide specifics for every conceivable usage scenario, we also attempt to offer some very general guidance on the relative suitability of some common types of applications for NDB as opposed to InnoDB backends.

NDB Cluster 7.5 uses a mysqld based on MySQL 5.7, including full support for InnoDB. While it is possible to use InnoDB tables with NDB Cluster, such tables are not clustered. It is also not possible to use programs or libraries from an NDB Cluster 7.5 distribution with MySQL Server 5.7, or the reverse.

While it is also true that some types of common business applications can be run either on NDB Cluster or on MySQL Server (most likely using the InnoDB storage engine), there are some important architectural and implementation differences. Section 21.1.5.1, “Differences Between the NDB and InnoDB Storage Engines”, provides a summary of these differences. Due to the differences, some usage scenarios are clearly more suitable for one engine or the other; see Section 21.1.5.2, “NDB and InnoDB Workloads”. This in turn has an impact on the types of applications that are better suited for use with NDB or InnoDB. See Section 21.1.5.3, “NDB and InnoDB Feature Usage Summary”, for a comparison of the relative suitability of each for use in common types of database applications.

For information about the relative characteristics of the NDB and MEMORY storage engines, see When to Use MEMORY or MySQL Cluster.

See Chapter 16, Alternative Storage Engines, for additional information about MySQL storage engines.

21.1.5.1 Differences Between the NDB and InnoDB Storage Engines

The NDB Cluster NDB storage engine is implemented using a distributed, shared-nothing architecture, which causes it to behave differently from InnoDB in a number of ways. For those unaccustomed to working with NDB, its distributed nature can give rise to unexpected behaviors with regard to transactions, foreign keys, table limits, and other characteristics. These are summarized in the following list, which compares InnoDB 1.1 (as found in MySQL 5.7) with NDB 7.5:

  • MySQL Server Version
    InnoDB 1.1: 5.7
    NDB 7.5: 5.7

  • InnoDB Version
    InnoDB 1.1: InnoDB 5.7.19
    NDB 7.5: InnoDB 5.7.19

  • NDB Cluster Version
    InnoDB 1.1: N/A
    NDB 7.5: NDB 7.5.7

  • Storage Limits
    InnoDB 1.1: 64TB
    NDB 7.5: 3TB (practical upper limit based on 48 data nodes with 64GB RAM each; can be increased with disk-based data and BLOBs)

  • Foreign Keys
    InnoDB 1.1: Yes
    NDB 7.5: Yes

  • Transactions
    InnoDB 1.1: All standard types
    NDB 7.5: READ COMMITTED

  • MVCC
    InnoDB 1.1: Yes
    NDB 7.5: No

  • Data Compression
    InnoDB 1.1: Yes
    NDB 7.5: No (NDB Cluster checkpoint and backup files can be compressed)

  • Large Row Support (> 14K)
    InnoDB 1.1: Supported for VARBINARY, VARCHAR, BLOB, and TEXT columns
    NDB 7.5: Supported for BLOB and TEXT columns only (using these types to store very large amounts of data can lower NDB Cluster performance)

  • Replication Support
    InnoDB 1.1: Asynchronous and semisynchronous replication using MySQL Replication
    NDB 7.5: Automatic synchronous replication within an NDB Cluster; asynchronous replication between NDB Clusters, using MySQL Replication

  • Scaleout for Read Operations
    InnoDB 1.1: Yes (MySQL Replication)
    NDB 7.5: Yes (automatic partitioning in NDB Cluster; NDB Cluster Replication)

  • Scaleout for Write Operations
    InnoDB 1.1: Requires application-level partitioning (sharding)
    NDB 7.5: Yes (automatic partitioning in NDB Cluster is transparent to applications)

  • High Availability (HA)
    InnoDB 1.1: Requires additional software
    NDB 7.5: Yes (designed for 99.999% uptime)

  • Node Failure Recovery and Failover
    InnoDB 1.1: Requires additional software
    NDB 7.5: Automatic (a key element in NDB Cluster architecture)

  • Time for Node Failure Recovery
    InnoDB 1.1: 30 seconds or longer
    NDB 7.5: Typically < 1 second

  • Real-Time Performance
    InnoDB 1.1: No
    NDB 7.5: Yes

  • In-Memory Tables
    InnoDB 1.1: No
    NDB 7.5: Yes (some data can optionally be stored on disk; both in-memory and disk data storage are durable)

  • NoSQL Access to Storage Engine
    InnoDB 1.1: Yes
    NDB 7.5: Yes (multiple APIs, including Memcached, Node.js/JavaScript, Java, JPA, C++, and HTTP/REST)

  • Concurrent and Parallel Writes
    InnoDB 1.1: Not supported
    NDB 7.5: Up to 48 writers, optimized for concurrent writes

  • Conflict Detection and Resolution (Multiple Replication Masters)
    InnoDB 1.1: No
    NDB 7.5: Yes

  • Hash Indexes
    InnoDB 1.1: No
    NDB 7.5: Yes

  • Online Addition of Nodes
    InnoDB 1.1: Read-only replicas using MySQL Replication
    NDB 7.5: Yes (all node types)

  • Online Upgrades
    InnoDB 1.1: No
    NDB 7.5: Yes

  • Online Schema Modifications
    InnoDB 1.1: Yes, as part of MySQL 5.6
    NDB 7.5: Yes

21.1.5.2 NDB and InnoDB Workloads

NDB Cluster has a range of unique attributes that make it ideal for serving applications requiring high availability, fast failover, high throughput, and low latency. Due to its distributed architecture and multi-node implementation, NDB Cluster also has specific constraints that may keep some workloads from performing well. A number of major differences in behavior between the NDB and InnoDB storage engines with regard to some common types of database-driven application workloads are shown in the following list:

  • High-Volume OLTP Applications
    InnoDB: Yes
    NDB: Yes

  • DSS Applications (data marts, analytics)
    InnoDB: Yes
    NDB: Limited (join operations across OLTP datasets not exceeding 3TB in size)

  • Custom Applications
    InnoDB: Yes
    NDB: Yes

  • Packaged Applications
    InnoDB: Yes
    NDB: Limited (should be mostly primary key access; NDB Cluster 7.5 supports foreign keys)

  • In-Network Telecoms Applications (HLR, HSS, SDP)
    InnoDB: No
    NDB: Yes

  • Session Management and Caching
    InnoDB: Yes
    NDB: Yes

  • E-Commerce Applications
    InnoDB: Yes
    NDB: Yes

  • User Profile Management, AAA Protocol
    InnoDB: Yes
    NDB: Yes

21.1.5.3 NDB and InnoDB Feature Usage Summary

When comparing application feature requirements with the capabilities of InnoDB and of NDB, some features are clearly more compatible with one storage engine than with the other.

The following lists show supported application features according to the storage engine to which each feature is typically better suited.

Preferred application requirements for InnoDB:

  • Foreign keys

    Note

    NDB Cluster 7.5 supports foreign keys.

  • Full table scans

  • Very large databases, rows, or transactions

  • Transactions other than READ COMMITTED

Preferred application requirements for NDB:

  • Write scaling

  • 99.999% uptime

  • Online addition of nodes and online schema operations

  • Multiple SQL and NoSQL APIs (see NDB Cluster APIs: Overview and Concepts)

  • Real-time performance

  • Limited use of BLOB columns

  • Foreign keys are supported, although their use may have an impact on performance at high throughput

21.1.6 Known Limitations of NDB Cluster

In the sections that follow, we discuss known limitations in current releases of NDB Cluster as compared with the features available when using the MyISAM and InnoDB storage engines. You can find known bugs, which we intend to correct in upcoming releases of NDB Cluster, in the following categories under MySQL Server in the MySQL bugs database at http://bugs.mysql.com:

  • NDB Cluster

  • Cluster Direct API (NDBAPI)

  • Cluster Disk Data

  • Cluster Replication

  • ClusterJ

This information is intended to be complete with respect to the conditions just set forth. You can report any discrepancies that you encounter to the MySQL bugs database using the instructions given in Section 1.7, “How to Report Bugs or Problems”. If we do not plan to fix the problem in NDB Cluster 7.5, we will add it to the list.

See Previous NDB Cluster Issues Resolved in NDB Cluster 7.3 for a list of issues in earlier releases that have been resolved in NDB Cluster 7.5.

Note

Limitations and other issues specific to NDB Cluster Replication are described in Section 21.6.3, “Known Issues in NDB Cluster Replication”.

21.1.6.1 Noncompliance with SQL Syntax in NDB Cluster

Some SQL statements relating to certain MySQL features produce errors when used with NDB tables, as described in the following list:

  • Temporary tables.  Temporary tables are not supported. Trying either to create a temporary table that uses the NDB storage engine or to alter an existing temporary table to use NDB fails with the error Table storage engine 'ndbcluster' does not support the create option 'TEMPORARY'.
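
    For example, both of the statements shown here fail with this error; the table names used are merely illustrative:

    CREATE TEMPORARY TABLE t_tmp (c INT) ENGINE=NDB;

    ALTER TABLE t_tmp2 ENGINE=NDB;  -- fails when t_tmp2 is a temporary table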

  • Indexes and keys in NDB tables.  Keys and indexes on NDB Cluster tables are subject to the following limitations:

    • Column width.  Attempting to create an index on an NDB table column whose width is greater than 3072 bytes succeeds, but only the first 3072 bytes are actually used for the index. In such cases, a warning Specified key was too long; max key length is 3072 bytes is issued, and a SHOW CREATE TABLE statement shows the length of the index as 3072.

    • TEXT and BLOB columns.  You cannot create indexes on NDB table columns that use any of the TEXT or BLOB data types.

    • FULLTEXT indexes.  The NDB storage engine does not support FULLTEXT indexes, which are possible for MyISAM and InnoDB tables only.

      However, you can create indexes on VARCHAR columns of NDB tables.

    • USING HASH keys and NULL.  Using nullable columns in unique keys and primary keys means that queries using these columns are handled as full table scans. To work around this issue, make the column NOT NULL, or re-create the index without the USING HASH option.

    • Prefixes.  There are no prefix indexes; only entire columns can be indexed. (The size of an NDB column index is always the same as the width of the column in bytes, up to and including 3072 bytes, as described earlier in this section. Also see Section 21.1.6.6, “Unsupported or Missing Features in NDB Cluster”, for additional information.)

    • BIT columns.  A BIT column cannot be a primary key, unique key, or index, nor can it be part of a composite primary key, unique key, or index.

    • AUTO_INCREMENT columns.  Like other MySQL storage engines, the NDB storage engine can handle a maximum of one AUTO_INCREMENT column per table. However, in the case of a Cluster table with no explicit primary key, an AUTO_INCREMENT column is automatically defined and used as a hidden primary key. For this reason, you cannot define a table that has an explicit AUTO_INCREMENT column unless that column is also declared using the PRIMARY KEY option. Attempting to create a table with an AUTO_INCREMENT column that is not the table's primary key, and using the NDB storage engine, fails with an error.
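
      For example (the table names are merely illustrative), the first of the two statements shown here fails, while the second succeeds because the AUTO_INCREMENT column is also the table's primary key:

      CREATE TABLE t1 (a INT AUTO_INCREMENT UNIQUE, b INT PRIMARY KEY) ENGINE=NDB;  -- fails

      CREATE TABLE t2 (a INT AUTO_INCREMENT PRIMARY KEY, b INT) ENGINE=NDB;         -- succeeds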

  • Restrictions on foreign keys.  Support for foreign key constraints in NDB 7.5 is comparable to that provided by InnoDB, subject to the following restrictions:

    • Every column referenced as a foreign key requires an explicit unique key, if it is not the table's primary key.

    • ON UPDATE CASCADE is not supported when the reference is to the parent table's primary key.

      This is because an update of a primary key is implemented as a delete of the old row (containing the old primary key) plus an insert of the new row (with a new primary key). This is not visible to the NDB kernel, which views these two rows as being the same, and thus has no way of knowing that this update should be cascaded.

    • SET DEFAULT is not supported. (Also not supported by InnoDB.)

    • The NO ACTION keywords are accepted but treated as RESTRICT. (This is the same as with InnoDB.)

    • In earlier versions of NDB Cluster, when creating a table with a foreign key referencing an index in another table, it sometimes appeared possible to create the foreign key even if the order of the columns in the indexes did not match, because an appropriate error was not always returned internally. A partial fix for this issue improved the error used internally to work in most cases; however, it remains possible for this situation to occur in the event that the parent index is a unique index. (Bug #18094360)

    • Prior to NDB 7.4.15 and NDB 7.5.6, when adding or dropping a foreign key using ALTER TABLE, the parent table's metadata is not updated, which makes it possible subsequently to execute ALTER TABLE statements on the parent that should be invalid. To work around this issue, execute SHOW CREATE TABLE on the parent table immediately after adding or dropping the foreign key; this forces the parent's metadata to be reloaded.

      This issue is fixed in NDB 7.4.15 and NDB 7.5.6. (See Bug #82989, Bug #24666177)
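
      A sketch of the workaround just described, using hypothetical parent and child tables:

      ALTER TABLE child
          ADD CONSTRAINT fk_pid FOREIGN KEY (parent_id) REFERENCES parent (id);

      SHOW CREATE TABLE parent;  -- forces the parent table's metadata to be reloaded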

    For more information, see Section 14.1.18.5, “Using FOREIGN KEY Constraints”, and Section 1.8.3.2, “FOREIGN KEY Constraints”.

  • NDB Cluster and geometry data types.  Geometry data types (WKT and WKB) are supported for NDB tables. However, spatial indexes are not supported.

  • Character sets and binary log files.  Currently, the ndb_apply_status and ndb_binlog_index tables are created using the latin1 (ASCII) character set. Because names of binary logs are recorded in this table, binary log files named using non-Latin characters are not referenced correctly in these tables. This is a known issue, which we are working to fix. (Bug #50226)

    To work around this problem, use only Latin-1 characters when naming binary log files or setting any of the --basedir, --log-bin, or --log-bin-index options.

  • Creating NDB tables with user-defined partitioning.  Support for user-defined partitioning in NDB Cluster is restricted to [LINEAR] KEY partitioning. Using any other partitioning type with ENGINE=NDB or ENGINE=NDBCLUSTER in a CREATE TABLE statement results in an error.
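
    For example (the table definitions are merely illustrative), the first of these CREATE TABLE statements is accepted, while the second fails because it uses RANGE rather than [LINEAR] KEY partitioning:

    CREATE TABLE tp1 (id INT PRIMARY KEY, d DATE)
        ENGINE=NDB PARTITION BY KEY (id);

    CREATE TABLE tp2 (id INT PRIMARY KEY, d DATE)
        ENGINE=NDB PARTITION BY RANGE (id)
        (PARTITION p0 VALUES LESS THAN (100));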

    It is possible to override this restriction, but doing so is not supported for use in production settings. For details, see User-defined partitioning and the NDB storage engine (MySQL Cluster).

    Default partitioning scheme.  All NDB Cluster tables are by default partitioned by KEY using the table's primary key as the partitioning key. If no primary key is explicitly set for the table, the hidden primary key automatically created by the NDB storage engine is used instead. For additional discussion of these and related issues, see Section 22.2.5, “KEY Partitioning”.

    CREATE TABLE and ALTER TABLE statements that would cause a user-partitioned NDBCLUSTER table not to meet either or both of the following two requirements are not permitted, and fail with an error:

    1. The table must have an explicit primary key.

    2. All columns listed in the table's partitioning expression must be part of the primary key.

    Exception.  If a user-partitioned NDBCLUSTER table is created using an empty column-list (that is, using PARTITION BY [LINEAR] KEY()), then no explicit primary key is required.

    Maximum number of partitions for NDBCLUSTER tables.  The maximum number of partitions that can be defined for an NDBCLUSTER table when employing user-defined partitioning is 8 per node group. (See Section 21.1.2, “NDB Cluster Nodes, Node Groups, Replicas, and Partitions”, for more information about NDB Cluster node groups.)

    DROP PARTITION not supported.  It is not possible to drop partitions from NDB tables using ALTER TABLE ... DROP PARTITION. The other partitioning extensions to ALTER TABLE (ADD PARTITION, REORGANIZE PARTITION, and COALESCE PARTITION) are supported for Cluster tables, but use copying and so are not optimized. See Section 22.3.1, “Management of RANGE and LIST Partitions” and Section 14.1.8, “ALTER TABLE Syntax”.

  • Row-based replication.  When using row-based replication with NDB Cluster, binary logging cannot be disabled. That is, the NDB storage engine ignores the value of sql_log_bin.

  • JSON data type.  The MySQL JSON data type is supported for NDB tables in the mysqld supplied with NDB 7.5.2 and later.

    An NDB table can have a maximum of 3 JSON columns.

    The NDB API has no special provision for working with JSON data, which it views simply as BLOB data. Handling data as JSON must be performed by the application.

  • CPU and thread info ndbinfo tables.  NDB 7.5.2 adds several new tables to the ndbinfo information database providing information about CPU and thread activity by node, thread ID, and thread type. The tables are listed here:

    • cpustat: Provides per-second, per-thread CPU statistics

    • cpustat_50ms: Raw per-thread CPU statistics data, gathered every 50ms

    • cpustat_1sec: Raw per-thread CPU statistics data, gathered each second

    • cpustat_20sec: Raw per-thread CPU statistics data, gathered every 20 seconds

    • threads: Names and descriptions of thread types

    For more information about these tables, see Section 21.5.10, “ndbinfo: The NDB Cluster Information Database”.
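
    For example, a query such as this one returns the current per-thread CPU statistics (a sketch only; see the section just cited for the columns actually available):

    SELECT * FROM ndbinfo.cpustat;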

  • Lock info ndbinfo tables.  NDB 7.5.3 adds new tables to the ndbinfo information database providing information about locks and lock attempts in a running NDB Cluster. These tables are listed here:

    • cluster_locks: Current lock requests which are waiting for or holding locks; this information can be useful when investigating stalls and deadlocks. Analogous to cluster_operations.

    • locks_per_fragment: Counts of lock claim requests, and their outcomes per fragment, as well as total time spent waiting for locks successfully and unsuccessfully. Analogous to operations_per_fragment and memory_per_fragment.

    • server_locks: Subset of cluster transactions—those running on the local mysqld, showing a connection id per transaction. Analogous to server_operations.
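
    For example, when investigating a suspected deadlock, you might begin with a query such as this one (a sketch only; see the ndbinfo documentation for the columns actually available):

    SELECT * FROM ndbinfo.cluster_locks;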

21.1.6.2 Limits and Differences of NDB Cluster from Standard MySQL Limits

In this section, we list limits found in NDB Cluster that either differ from limits found in, or that are not found in, standard MySQL.

Memory usage and recovery.  Memory consumed when data is inserted into an NDB table is not automatically recovered when that data is deleted, as it is with other storage engines. Instead, the following rules hold true:

  • A DELETE statement on an NDB table makes the memory formerly used by the deleted rows available for re-use by inserts on the same table only. However, this memory can be made available for general re-use by performing OPTIMIZE TABLE.

    A rolling restart of the cluster also frees any memory used by deleted rows. See Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”.
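
    A minimal sketch of using OPTIMIZE TABLE to make such memory available for general re-use (the table name and WHERE condition are merely illustrative):

    DELETE FROM t WHERE created < '2016-01-01';

    OPTIMIZE TABLE t;  -- makes the freed memory available for general re-use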

  • A DROP TABLE or TRUNCATE TABLE operation on an NDB table frees the memory that was used by the table for re-use by any NDB table, either by the same table or by another NDB table.

    Note

    Recall that TRUNCATE TABLE drops and re-creates the table. See Section 14.1.34, “TRUNCATE TABLE Syntax”.

  • Limits imposed by the cluster's configuration.  A number of hard limits exist which are configurable, but available main memory in the cluster also sets limits. See the complete list of configuration parameters in Section 21.3.3, “NDB Cluster Configuration Files”. Most configuration parameters can be changed online. These hard limits include:

  • Node and data object maximums.  The following limits apply to numbers of cluster nodes and metadata objects:

    • The maximum number of data nodes is 48.

      A data node must have a node ID in the range of 1 to 48, inclusive. (Management and API nodes may use node IDs in the range 1 to 255, inclusive.)

    • The total maximum number of nodes in an NDB Cluster is 255. This number includes all SQL nodes (MySQL Servers), API nodes (applications accessing the cluster other than MySQL servers), data nodes, and management servers.

    • The maximum number of metadata objects in current versions of NDB Cluster is 20320. This limit is hard-coded.

    See Previous NDB Cluster Issues Resolved in NDB Cluster 7.3, for more information.

21.1.6.3 Limits Relating to Transaction Handling in NDB Cluster

A number of limitations exist in NDB Cluster with regard to the handling of transactions. These include the following:

  • Transaction isolation level.  The NDBCLUSTER storage engine supports only the READ COMMITTED transaction isolation level. (InnoDB, for example, supports READ COMMITTED, READ UNCOMMITTED, REPEATABLE READ, and SERIALIZABLE.) You should keep in mind that NDB implements READ COMMITTED on a per-row basis; when a read request arrives at the data node storing the row, what is returned is the last committed version of the row at that time.

    Uncommitted data is never returned, but when a transaction modifying a number of rows commits concurrently with a transaction reading the same rows, the transaction performing the read can observe before values, after values, or both, for different rows among these, due to the fact that a given row read request can be processed either before or after the commit of the other transaction.

    To ensure that a given transaction reads only before or after values, you can impose row locks using SELECT ... LOCK IN SHARE MODE. In such cases, the lock is held until the owning transaction is committed. Using row locks can also cause the following issues:

    • Increased frequency of lock wait timeout errors, and reduced concurrency

    • Increased transaction processing overhead due to reads requiring a commit phase

    • Possibility of exhausting the available number of concurrent locks, which is limited by MaxNoOfConcurrentOperations

    NDB uses READ COMMITTED for all reads unless a modifier such as LOCK IN SHARE MODE or FOR UPDATE is used. LOCK IN SHARE MODE causes shared row locks to be used; FOR UPDATE causes exclusive row locks to be used. Unique key reads have their locks upgraded automatically by NDB to ensure a self-consistent read; BLOB reads also employ extra locking for consistency.
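
    A minimal sketch of imposing a shared row lock for the duration of a transaction, using a hypothetical table t:

    BEGIN;

    SELECT * FROM t WHERE id = 10 LOCK IN SHARE MODE;

    -- the shared lock is held until the transaction is committed or rolled back
    COMMIT;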

    See Section 21.5.3.4, “NDB Cluster Backup Troubleshooting”, for information on how NDB Cluster's implementation of transaction isolation level can affect backup and restoration of NDB databases.

  • Transactions and BLOB or TEXT columns.  NDBCLUSTER stores only part of a column value that uses any of MySQL's BLOB or TEXT data types in the table visible to MySQL; the remainder of the BLOB or TEXT is stored in a separate internal table that is not accessible to MySQL. This gives rise to two related issues of which you should be aware whenever executing SELECT statements on tables that contain columns of these types:

    1. For any SELECT from an NDB Cluster table: If the SELECT includes a BLOB or TEXT column, the READ COMMITTED transaction isolation level is converted to a read with read lock. This is done to guarantee consistency.

    2. For any SELECT which uses a unique key lookup to retrieve any columns that use any of the BLOB or TEXT data types and that is executed within a transaction, a shared read lock is held on the table for the duration of the transaction—that is, until the transaction is either committed or aborted.

      This issue does not occur for queries that use index or table scans, even against NDB tables having BLOB or TEXT columns.

      For example, consider the table t defined by the following CREATE TABLE statement:

      CREATE TABLE t (
          a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          b INT NOT NULL,
          c INT NOT NULL,
          d TEXT,
          INDEX i(b),
          UNIQUE KEY u(c)
      ) ENGINE = NDB;
      

      Either of the following queries on t causes a shared read lock, because the first query uses a primary key lookup and the second uses a unique key lookup:

      SELECT * FROM t WHERE a = 1;
      
      SELECT * FROM t WHERE c = 1;
      

      However, none of the four queries shown here causes a shared read lock:

      SELECT * FROM t WHERE b = 1;
      
      SELECT * FROM t WHERE d = '1';
      
      SELECT * FROM t;
      
      SELECT b, c FROM t WHERE a = 1;
      

      This is because, of these four queries, the first uses an index scan, the second and third use table scans, and the fourth, while using a primary key lookup, does not retrieve the value of any BLOB or TEXT columns.

      You can help minimize issues with shared read locks by avoiding queries that use unique key lookups that retrieve BLOB or TEXT columns, or, in cases where such queries are not avoidable, by committing transactions as soon as possible afterward.

  • Rollbacks.  There are no partial transactions, and no partial rollbacks of transactions. A duplicate key or similar error causes the entire transaction to be rolled back.

    This behavior differs from that of other transactional storage engines such as InnoDB that may roll back individual statements.

  • Transactions and memory usage.  As noted elsewhere in this chapter, NDB Cluster does not handle large transactions well; it is better to perform a number of small transactions with a few operations each than to attempt a single large transaction containing a great many operations. Among other considerations, large transactions require very large amounts of memory. Because of this, the transactional behavior of a number of MySQL statements is affected as described in the following list:

    • TRUNCATE TABLE is not transactional when used on NDB tables. If a TRUNCATE TABLE fails to empty the table, then it must be re-run until it is successful.

    • DELETE FROM (even with no WHERE clause) is transactional. For tables containing a great many rows, you may find that performance is improved by using several DELETE FROM ... LIMIT ... statements to chunk the delete operation. If your objective is to empty the table, then you may wish to use TRUNCATE TABLE instead.
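
      A sketch of chunking a large delete in this way (the table name, WHERE condition, and chunk size are all illustrative):

      DELETE FROM t WHERE expired = 1 LIMIT 10000;

      -- repeat the statement until it reports 0 rows affected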

    • LOAD DATA statements.  LOAD DATA INFILE is not transactional when used on NDB tables.

      Important

      When executing a LOAD DATA INFILE statement, the NDB engine performs commits at irregular intervals that enable better utilization of the communication network. It is not possible to know ahead of time when such commits take place.

    • ALTER TABLE and transactions.  When copying an NDB table as part of an ALTER TABLE, the creation of the copy is nontransactional. (In any case, this operation is rolled back when the copy is deleted.)

  • Transactions and the COUNT() function.  When using NDB Cluster Replication, it is not possible to guarantee the transactional consistency of the COUNT() function on the slave. In other words, when performing on the master a series of statements (INSERT, DELETE, or both) that changes the number of rows in a table within a single transaction, executing SELECT COUNT(*) FROM table queries on the slave may yield intermediate results. This is due to the fact that SELECT COUNT(...) may perform dirty reads, and is not a bug in the NDB storage engine.

21.1.6.4 NDB Cluster Error Handling

Starting, stopping, or restarting a node may give rise to temporary errors causing some transactions to fail. These include the following cases:

  • Temporary errors.  When first starting a node, it is possible that you may see Error 1204 Temporary failure, distribution changed and similar temporary errors.

  • Errors due to node failure.  The stopping or failure of any data node can result in a number of different node failure errors. (However, there should be no aborted transactions when performing a planned shutdown of the cluster.)

In either of these cases, any errors that are generated must be handled within the application. This should be done by retrying the transaction.

See also Section 21.1.6.2, “Limits and Differences of NDB Cluster from Standard MySQL Limits”.

21.1.6.5 Limits Associated with Database Objects in NDB Cluster

Some database objects such as tables and indexes have different limitations when using the NDBCLUSTER storage engine:

  • Database and table names.  When using the NDB storage engine, the maximum allowed length both for database names and for table names is 63 characters. A statement using a database name or table name longer than this limit fails with an appropriate error.

  • Number of database objects.  The maximum number of all NDB database objects in a single NDB Cluster—including databases, tables, and indexes—is limited to 20320.

  • Attributes per table.  The maximum number of attributes (that is, columns and indexes) that can belong to a given table is 512.

  • Attributes per key.  The maximum number of attributes per key is 32.

  • Row size.  The maximum permitted size of any one row is 14000 bytes. Each BLOB or TEXT column contributes 256 + 8 = 264 bytes to this total.

  • BIT column storage per table.  The maximum combined width for all BIT columns used in a given NDB table is 4096.

  • FIXED column storage.  NDB Cluster 7.5 and later supports a maximum of 128 TB per fragment of data in FIXED columns. (Previously, this was 16 GB.)

21.1.6.6 Unsupported or Missing Features in NDB Cluster

A number of features supported by other storage engines are not supported for NDB tables. Trying to use any of these features in NDB Cluster does not cause errors in and of itself; however, errors may occur in applications that expect the features to be supported or enforced. Statements referencing such features, even if effectively ignored by NDB, must be syntactically and otherwise valid.

  • Index prefixes.  Prefixes on indexes are not supported for NDB tables. If a prefix is used as part of an index specification in a statement such as CREATE TABLE, ALTER TABLE, or CREATE INDEX, the prefix is not created by NDB.

    A statement containing an index prefix, and creating or modifying an NDB table, must still be syntactically valid. For example, the following statement always fails with Error 1089 Incorrect prefix key; the used key part isn't a string, the used length is longer than the key part, or the storage engine doesn't support unique prefix keys, regardless of storage engine:

    CREATE TABLE t1 (
        c1 INT NOT NULL,
        c2 VARCHAR(100),
        INDEX i1 (c2(500))
    );

    This happens on account of the SQL syntax rule that no index may have a prefix larger than itself.

  • Savepoints and rollbacks.  Savepoints and rollbacks to savepoints are ignored as in MyISAM.

  • Durability of commits.  There are no durable commits on disk. Commits are replicated, but there is no guarantee that logs are flushed to disk on commit.

  • Replication and binary logging.  Statement-based replication is not supported. Use --binlog-format=ROW (or --binlog-format=MIXED) when setting up cluster replication. See Section 21.6, “NDB Cluster Replication”, for more information.
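
    For example, the my.cnf file of each SQL node that writes a binary log might include lines similar to these (a minimal sketch; the server ID and log name shown are illustrative):

    [mysqld]
    ndbcluster
    server-id=1            # must be unique among all servers in the replication setup
    log-bin=mysql-bin
    binlog-format=ROW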

    Replication using global transaction identifiers (GTIDs) is not compatible with NDB Cluster, and is not supported in NDB Cluster 7.5. Do not enable GTIDs when using the NDB storage engine, as this is very likely to cause problems up to and including failure of NDB Cluster Replication.

  • Generated columns.  The NDB storage engine does not support indexes on virtual generated columns.

    As with other storage engines, you can create an index on a stored generated column, but you should bear in mind that NDB uses DataMemory for storage of the generated column as well as IndexMemory for the index. See JSON columns and indirect indexing in MySQL Cluster, for an example.
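
    A minimal sketch of an NDB table with an index on a stored generated column (the table and column names are illustrative):

    CREATE TABLE jt (
        doc JSON,
        uid INT GENERATED ALWAYS AS (JSON_EXTRACT(doc, '$.uid')) STORED,
        INDEX i (uid)    -- the index uses IndexMemory; the column uses DataMemory
    ) ENGINE=NDB;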

    NDB Cluster writes changes in stored generated columns to the binary log, but does not log those made to virtual columns. This should not affect NDB Cluster Replication or replication between NDB and other MySQL storage engines.

Note

See Section 21.1.6.3, “Limits Relating to Transaction Handling in NDB Cluster”, for more information relating to limitations on transaction handling in NDB.

21.1.6.7 Limitations Relating to Performance in NDB Cluster

The following performance issues are specific to or especially pronounced in NDB Cluster:

  • Range scans.  There are query performance issues due to sequential access to the NDB storage engine; it is also relatively more expensive to do many range scans than it is with either MyISAM or InnoDB.

  • Reliability of Records in range.  The Records in range statistic is available but is not completely tested or officially supported. This may result in nonoptimal query plans in some cases. If necessary, you can employ USE INDEX or FORCE INDEX to alter the execution plan. See Section 9.9.4, “Index Hints”, for more information on how to do this.

  • Unique hash indexes.  Unique hash indexes created with USING HASH cannot be used for accessing a table if NULL is given as part of the key.

21.1.6.8 Issues Exclusive to NDB Cluster

The following are limitations specific to the NDB storage engine:

  • Machine architecture.  All machines used in the cluster must have the same architecture. That is, all machines hosting nodes must be either big-endian or little-endian, and you cannot use a mixture of both. For example, you cannot have a management node running on a PowerPC which directs a data node that is running on an x86 machine. This restriction does not apply to machines simply running mysql or other clients that may be accessing the cluster's SQL nodes.

  • Binary logging.  NDB Cluster has the following limitations or restrictions with regard to binary logging:

  • Schema operations (DDL statements) are rejected while any data node restarts.

  • Number of replicas.  The number of replicas, as determined by the NoOfReplicas data node configuration parameter, is the number of copies of all data stored by NDB Cluster. Setting this parameter to 1 means there is only a single copy; in this case, no redundancy is provided, and the loss of a data node entails loss of data. To guarantee redundancy, and thus preservation of data even if a data node fails, set this parameter to 2, which is the default and recommended value in production.

    Setting NoOfReplicas to a value greater than 2 is possible (to a maximum of 4) but unnecessary to guard against loss of data. In addition, values greater than 2 for this parameter are not supported in production.
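
    NoOfReplicas is set in the [ndbd default] section of the config.ini global configuration file, as shown in this minimal sketch:

    [ndbd default]
    NoOfReplicas=2    # default and recommended value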

See also Section 21.1.6.10, “Limitations Relating to Multiple NDB Cluster Nodes”.

21.1.6.9 Limitations Relating to NDB Cluster Disk Data Storage

Disk Data object maximums and minimums.  Disk data objects are subject to the following maximums and minimums:

  • Maximum number of tablespaces: 2^32 (4294967296)

  • Maximum number of data files per tablespace: 2^16 (65536)

  • The minimum and maximum possible sizes of extents for tablespace data files are 32K and 2G, respectively. See Section 14.1.19, “CREATE TABLESPACE Syntax”, for more information.

In addition, when working with NDB Disk Data tables, you should be aware of the following issues regarding data files and extents:

  • Data files use DataMemory. Usage is the same as for in-memory data.

  • Data files use file descriptors. It is important to keep in mind that data files are always open, which means the file descriptors are always in use and cannot be re-used for other system tasks.

  • Extents require sufficient DiskPageBufferMemory; you must reserve enough for this parameter to account for all memory used by all extents (number of extents times size of extents).

Disk Data tables and diskless mode.  Use of Disk Data tables is not supported when running the cluster in diskless mode.

21.1.6.10 Limitations Relating to Multiple NDB Cluster Nodes

Multiple SQL nodes.  The following are issues relating to the use of multiple MySQL servers as NDB Cluster SQL nodes, and are specific to the NDBCLUSTER storage engine:

  • No distributed table locks.  A LOCK TABLES statement works only for the SQL node on which the lock is issued; no other SQL node in the cluster sees this lock. This is also true for a lock issued by any statement that locks tables as part of its operations. (See the next item for an example.)

  • ALTER TABLE operations.  ALTER TABLE is not fully locking when running multiple MySQL servers (SQL nodes). (As discussed in the previous item, NDB Cluster does not support distributed table locks.)

Multiple management nodes.  When using multiple management servers:

  • If any of the management servers are running on the same host, you must give nodes explicit IDs in connection strings because automatic allocation of node IDs does not work across multiple management servers on the same host. This is not required if every management server resides on a different host.
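
    For example, a data node might be started with an explicit node ID in its connection string like this (the node ID and host names shown are illustrative):

    shell> ndbd --ndb-connectstring="nodeid=3,mgmhost1:1186,mgmhost2:1186"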

  • When a management server starts, it first checks for any other management server in the same NDB Cluster, and upon successful connection to the other management server uses its configuration data. This means that the management server --reload and --initial startup options are ignored unless the management server is the only one running. It also means that, when performing a rolling restart of an NDB Cluster with multiple management nodes, the management server reads its own configuration file if (and only if) it is the only management server running in this NDB Cluster. See Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”, for more information.

Multiple network addresses.  Multiple network addresses per data node are not supported. Use of these is liable to cause problems: In the event of a data node failure, an SQL node waits for confirmation that the data node went down but never receives it because another route to that data node remains open. This can effectively make the cluster inoperable.

Note

It is possible to use multiple network hardware interfaces (such as Ethernet cards) for a single data node, but these must be bound to the same address. This also means that it is not possible to use more than one [tcp] section per connection in the config.ini file. See Section 21.3.3.9, “NDB Cluster TCP/IP Connections”, for more information.

21.2 NDB Cluster Installation

This section describes the basics for planning, installing, configuring, and running an NDB Cluster. Whereas the examples in Section 21.3, “Configuration of NDB Cluster” provide more in-depth information on a variety of clustering options and configuration, the result of following the guidelines and procedures outlined here should be a usable NDB Cluster which meets the minimum requirements for availability and safeguarding of data.

For information about upgrading or downgrading an NDB Cluster between release versions, see Section 21.2.8, “Upgrading and Downgrading NDB Cluster”.

This section covers hardware and software requirements; networking issues; installation of NDB Cluster; basic configuration issues; starting, stopping, and restarting the cluster; loading of a sample database; and performing queries.

NDB Cluster 7.5 also provides an NDB Cluster Auto-Installer, a web-based graphical installer, as part of the NDB Cluster distribution. The Auto-Installer can be used to perform basic installation and setup of an NDB Cluster on one (for testing) or more host computers. See Section 21.2.1, “The NDB Cluster Auto-Installer”, for more information.

Assumptions.  The following sections make a number of assumptions regarding the cluster's physical and network configuration. These assumptions are discussed in the next few paragraphs.

Cluster nodes and host computers.  The cluster consists of four nodes, each on a separate host computer, and each with a fixed network address on a typical Ethernet network as shown here:

Node                      IP Address
Management node (mgmd)    192.168.0.10
SQL node (mysqld)         192.168.0.20
Data node "A" (ndbd)      192.168.0.30
Data node "B" (ndbd)      192.168.0.40

This may be made clearer by the following diagram:

Figure 21.4 NDB Cluster Multi-Computer Setup

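A config.ini file matching this layout might begin as shown in the following minimal sketch (cluster configuration is covered in detail in Section 21.2.4, “Initial Configuration of NDB Cluster”):

[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=192.168.0.10

[ndbd]
HostName=192.168.0.30

[ndbd]
HostName=192.168.0.40

[mysqld]
HostName=192.168.0.20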

Network addressing.  In the interest of simplicity (and reliability), this How-To uses only numeric IP addresses. However, if DNS resolution is available on your network, it is possible to use host names in lieu of IP addresses in configuring Cluster. Alternatively, you can use the hosts file (typically /etc/hosts for Linux and other Unix-like operating systems, C:\WINDOWS\system32\drivers\etc\hosts on Windows, or your operating system's equivalent) to provide a means of doing host lookup, if such a file is available.

Potential hosts file issues.  A common problem when trying to use host names for Cluster nodes arises because of the way in which some operating systems (including some Linux distributions) set up the system's own host name in the /etc/hosts during installation. Consider two machines with the host names ndb1 and ndb2, both in the cluster network domain. Red Hat Linux (including some derivatives such as CentOS and Fedora) places the following entries in these machines' /etc/hosts files:

#  ndb1 /etc/hosts:
127.0.0.1   ndb1.cluster ndb1 localhost.localdomain localhost
#  ndb2 /etc/hosts:
127.0.0.1   ndb2.cluster ndb2 localhost.localdomain localhost

SUSE Linux (including OpenSUSE) places these entries in the machines' /etc/hosts files:

#  ndb1 /etc/hosts:
127.0.0.1       localhost
127.0.0.2       ndb1.cluster ndb1
#  ndb2 /etc/hosts:
127.0.0.1       localhost
127.0.0.2       ndb2.cluster ndb2

In both instances, ndb1 routes ndb1.cluster to a loopback IP address, but gets a public IP address from DNS for ndb2.cluster, while ndb2 routes ndb2.cluster to a loopback address and obtains a public address for ndb1.cluster. The result is that each data node connects to the management server, but cannot tell when any other data nodes have connected, and so the data nodes appear to hang while starting.

Caution

You cannot mix localhost and other host names or IP addresses in config.ini. For these reasons, the solution in such cases (other than to use IP addresses for all config.ini HostName entries) is to remove the fully qualified host names from /etc/hosts and use these in config.ini for all cluster hosts.

Host computer type.  Each host computer in our installation scenario is an Intel-based desktop PC running a supported operating system installed to disk in a standard configuration, and running no unnecessary services. The core operating system with standard TCP/IP networking capabilities should be sufficient. For the sake of simplicity, we also assume that the file systems on all hosts are set up identically. In the event that they are not, you should adapt these instructions accordingly.

Network hardware.  Standard 100 Mbps or 1 gigabit Ethernet cards are installed on each machine, along with the proper drivers for the cards, and all four hosts are connected through a standard-issue Ethernet networking appliance such as a switch. (All machines should use network cards with the same throughput; that is, all four machines in the cluster should have 100 Mbps cards or all four machines should have 1 Gbps cards.) NDB Cluster works in a 100 Mbps network; however, gigabit Ethernet provides better performance.

Important

NDB Cluster is not intended for use in a network for which throughput is less than 100 Mbps or which experiences a high degree of latency. For this reason (among others), attempting to run an NDB Cluster over a wide area network such as the Internet is not likely to be successful, and is not supported in production.

Sample data.  We use the world database which is available for download from the MySQL Web site (see http://dev.mysql.com/doc/index-other.html). We assume that each machine has sufficient memory for running the operating system, required NDB Cluster processes, and (on the data nodes) storing the database.

For general information about installing MySQL, see Chapter 2, Installing and Upgrading MySQL. For information about installation of NDB Cluster on Linux and other Unix-like operating systems, see Section 21.2.2, “Installation of NDB Cluster on Linux”. For information about installation of NDB Cluster on Windows operating systems, see Section 21.2.3, “Installing NDB Cluster on Windows”.

For general information about NDB Cluster hardware, software, and networking requirements, see Section 21.1.3, “NDB Cluster Hardware, Software, and Networking Requirements”.

21.2.1 The NDB Cluster Auto-Installer

This section describes the web-based graphical configuration installer included as part of the NDB Cluster distribution. Topics discussed include an overview of the installer and its parts, software and other requirements for running the installer, navigating the GUI, and using the installer to set up and start or stop an NDB Cluster on one or more host computers.

21.2.1.1 NDB Cluster Auto-Installer Requirements

This section provides information on supported operating platforms and software, required software, and other prerequisites for running the NDB Cluster Auto-Installer.

Supported platforms.  The NDB Cluster Auto-Installer is available with most NDB 7.5.2 and later NDB Cluster distributions for recent versions of Linux, Windows, Solaris, and MacOS X. For more detailed information about platform support for NDB Cluster and the NDB Cluster Auto-Installer, see http://www.mysql.com/support/supportedplatforms/cluster.html.

The NDB Cluster Auto-Installer is not supported with NDB 7.5.0 or 7.5.1 (Bug #79853, Bug #22502247).

Supported Web browsers.  The Web-based installer is supported with recent versions of Firefox and Microsoft Internet Explorer. It should also work with recent versions of Opera, Safari, and Chrome, although we have not thoroughly tested for compatibility with these browsers.

Required software—setup host.  The following software must be installed on the host where the Auto-Installer is run:

  • Python 2.6 or higher.  The Auto-Installer requires the Python interpreter and standard libraries. If these are not already installed on the system, you may be able to add them using the system's package manager. Otherwise, they can be downloaded from http://python.org/download/.

  • Paramiko 1.7.7.1 or higher.  This is required to communicate with remote hosts using SSH. You can download it from http://www.lag.net/paramiko/. Paramiko may also be available from your system's package manager.

  • Pycrypto version 2.6 or higher.  This cryptography module is required by Paramiko. If it is not available using your system's package manager, you can download it from https://www.dlitz.net/software/pycrypto/.

All of the software in the preceding list is included in the Windows version of the configuration tool, and does not need to be installed separately.

The Paramiko and Pycrypto libraries are required only if you intend to deploy NDB Cluster nodes on remote hosts, and are not needed if all nodes are on the same host where the installer is run.

Required software—remote hosts.  The only software required for remote hosts where you wish to deploy NDB Cluster nodes is the SSH server, which is usually installed by default on Linux and Solaris systems. Several alternatives are available for Windows; for an overview of these, see http://en.wikipedia.org/wiki/Comparison_of_SSH_servers.

An additional requirement when using multiple hosts is that it is possible to authenticate to any of the remote hosts using SSH and the proper keys or user credentials, as discussed in the next few paragraphs:

Authentication and security.  Three basic security or authentication mechanisms for remote access are available to the Auto-Installer, which we list and describe here:

  • SSH.  A secure shell connection is used to enable the back end to perform actions on remote hosts. For this reason, an SSH server must be running on the remote host. In addition, the system user running the installer must have access to the remote server, either with a user name and password, or by using public and private keys.

    Important

    You should never use the system root account for remote access, as this is extremely insecure. In addition, mysqld cannot normally be started by system root. For these and other reasons, you should provide SSH credentials for a regular user account on the target system, and not for system root. For more information about this issue, see Section 7.1.5, “How to Run MySQL as a Normal User”.

  • HTTPS.  Remote communication between the Web browser front end and the back end is not encrypted by default, which means that information such as the user's SSH password is transmitted in clear text that is readable to anyone. For communication from a remote client to be encrypted, the back end must have a certificate, and the front end must communicate with the back end using HTTPS rather than HTTP. Enabling HTTPS is accomplished most easily through issuing a self-signed certificate. Once the certificate is issued, you must make sure that it is used. You can do this by starting ndb_setup.py from the command line with the --use-https and --cert-file options.

  • Certificate-based authentication.  The back end ndb_setup.py process can execute commands on the local host as well as remote hosts. This means that anyone connecting to the back end can take charge of how commands are executed. To reject unwanted connections to the back end, a certificate may be required for authentication of the client. In this case, a certificate must be issued by the user, installed in the browser, and made available to the back end for authentication purposes. You can enact this requirement (together with or in place of password or key authentication) by starting ndb_setup.py with the --ca-certs-file option.
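
    For example, the back end might be started with HTTPS enabled and client certificate authentication required, like this (the certificate file names are illustrative):

    shell> ndb_setup.py --use-https --cert-file=cfg.pem --ca-certs-file=clients.pem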

There is no need or requirement for secure authentication when the client browser is running on the same host as the Auto-Installer back end.

See also Section 21.5.12, “NDB Cluster Security Issues”, which discusses security considerations to take into account when deploying NDB Cluster, as well as Chapter 7, Security, for more general MySQL security information.

21.2.1.2 NDB Cluster Auto-Installer Overview

The NDB Cluster Auto-Installer is made up of two components. The front end is a GUI client implemented as a Web page that loads and runs in a standard Web browser such as Firefox or Microsoft Internet Explorer (see Section 21.2.1.1, “NDB Cluster Auto-Installer Requirements”). The back end is a server process (ndb_setup.py) that runs on the local machine or on another host to which you have access.

These two components (client and server) communicate with each other using standard HTTP requests and responses. The back end can manage NDB Cluster software programs on any host where the back end user has granted access. If the NDB Cluster software is on a different host, the back end relies on SSH for access, using the Paramiko library for executing commands remotely (see Section 21.2.1.1, “NDB Cluster Auto-Installer Requirements”).

The remainder of this section is concerned primarily with the Web client. For more information about using the command-line tool, see Section 21.4.23, “ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster”.

NDB Cluster Auto-Installer Interface.  This section describes the layout and navigation of the NDB Cluster Auto-Installer, whose Welcome screen looks similar to what is shown here when it is first opened in the Web browser:

Figure 21.5 Welcome Screen For NDB Cluster Auto-Installer


You can access the installer UI by selecting either of the options Create New NDB Cluster or Continue Previous Cluster Configuration. A typical screen in the Auto-Installer includes the following elements:

  1. Display panel.  The central area where data regarding configuration settings and controls for changing them are displayed.

  2. Breadcrumb navigation.  Located in the top left and top center of the GUI, the breadcrumb navigation bar consists of a series of titles linking to screens that correspond to steps in the configuration of an NDB Cluster. The breadcrumb allows you to jump between these stages in arbitrary order.

  3. Sequential navigation.  This consists of a set of buttons labelled Previous, Next, and Finished, and can be found in the lower right-hand corner of the GUI. The sequential navigation is used to move between steps in the suggested order.

  4. Settings and Help menus.  These menus can be found in the top right corner of the GUI (to the right of the breadcrumb navigation bar). Settings provides a way to check and possibly alter configuration settings for the Auto-Installer; Help can be used to access the installer's built-in help files.

The locations of the elements just described are shown here in a typical page in the Auto-Installer; the numbers superimposed thereupon correspond to those used in the preceding list.

Figure 21.6 Layout of the NDB Cluster Auto-Installer GUI


All of these elements except for the display panel are described in greater detail in the remainder of this section. Section 21.2.1.3, “Using the NDB Cluster Auto-Installer”, describes the panels shown in the display area as well as the functionality of each panel and the controls it contains.

Arbitrary and sequential navigation.  The Auto-Installer can display any of a number of pages covering different stages in the setup and configuration of an NDB Cluster deployment. You can navigate between pages in either of two ways. The first of these is the breadcrumb trail navigation toolbar displaying the titles of the various pages (in which the title of the current page is highlighted and disabled). From these, any desired page, in any desired order, can be reached by selecting the title of the corresponding page. This toolbar is shown here:

Figure 21.7 Detail of NDB Cluster Auto-Installer breadcrumb navigation, showing page titles/links


The second navigation mechanism provided by the Auto-Installer consists of the Next, Previous, and Finish sequential navigation buttons at the bottom right of the page. These can be used to move to the next or previous page in predetermined order, or to go to the very last page. The buttons are enabled and disabled as needed, so that you cannot, for example, advance beyond the last page.

Settings and Help menus.  These menus are positioned adjacent to one another in the top right corner of the GUI, as shown earlier in this section. The Settings menu is shown here in more detail:

Figure 21.8 NDB Cluster Auto-Installer Settings menu detail


The entries in the Settings menu are described here, in the following list:

  • Clear configuration and restart: Remove all hosts and processes; reset all parameter values to their defaults; start the installer over at the first page.

  • Automatically save configuration as cookies: Save your configuration information—such as host names, process data, and parameter values—as a cookie in the browser. When this option is chosen, all information except any SSH password is saved. This means that you can quit and restart the browser, and continue working on the same configuration from where you left off at the end of the previous session.

    Since the SSH password is never saved, you must supply this once again at the beginning of a new session, if one is used.

  • Show advanced configuration options: Show advanced configuration parameters in the Auto-Installer and make these settable by the user.

    Once set, the advanced parameters continue to be used in the configuration file until they are explicitly changed or reset. This is regardless of whether the advanced parameters are currently visible in the installer; in other words, disabling the menu item does not reset the values of any of these parameters.

  • Automatically get resource information for new hosts: Query new hosts automatically for hardware resource information to pre-populate a number of configuration options and values. In this case, the suggested values are not mandatory, but they are used unless explicitly changed using the appropriate editing options in the installer.

As with the installer's navigation elements, one or more of the entries in the Settings menu may be disabled due to choices you have made previously.

The Help menu is shown here, as it appears when expanded:

Figure 21.9 The NDB Cluster Auto-Installer Help menu, expanded


The Help menu provides several options, described in the following list:

  • Content: Show the built-in user guide. This is opened in a separate browser window, so that it can be used simultaneously with the installer without interrupting workflow.

  • Current page: Open the built-in user guide to the section describing the page currently displayed in the installer.

  • About: Show a small dialog displaying the installer name and the version number of the NDB Cluster distribution it was supplied with, similar to what is shown here:

    Figure 21.10 The NDB Cluster Auto-Installer About dialog


The Auto-Installer also provides context-sensitive help in the form of tooltips for most input widgets. One of these tooltips is displayed when the mouse hovers over a widget or the small question mark which can sometimes appear next to a widget label.

In addition, the names of NDB Cluster configuration parameters are linked to their descriptions in the online NDB Cluster documentation, so that if you click on the name of a given parameter, the documentation for that parameter is shown in a separate window.

21.2.1.3 Using the NDB Cluster Auto-Installer

The NDB Cluster Auto-Installer consists of several pages, each corresponding to a step in the process used to configure and deploy an NDB Cluster, and listed here:

  • Welcome: Begin using the Auto-Installer by choosing either to configure a new NDB Cluster, or to continue configuring an existing one.

  • Define Cluster: Set basic information about the cluster as a whole, such as name, hosts, and load type. Here you can also set the SSH authentication type for accessing remote hosts, if needed.

  • Define Hosts: Identify the hosts where you intend to run NDB Cluster processes.

  • Define Processes: Assign one or more processes of a given type or types to each cluster host.

  • Define Attributes: Set configuration attributes for processes or types of processes.

  • Deploy Cluster: Deploy the cluster with the configuration set previously; start and stop the deployed cluster.

The following sections describe in greater detail the purpose and function of each of these pages, in the order just listed.

21.2.1.3.1 Starting the NDB Cluster Auto-Installer

The Auto-Installer is provided together with the NDB Cluster software. (See Section 21.2, “NDB Cluster Installation”.) The present section explains how to start the installer. You can do this by invoking the ndb_setup.py executable, which is found in the bin directory within the NDB Cluster installation directory; a typical location might be /usr/local/mysql/bin on a Linux system or C:\Program Files\MySQL\MySQL Server 5.6\bin on a Windows system, but this can vary according to where the NDB Cluster software is installed on your system.

On Windows, you can also start the installer by running setup.bat in the NDB Cluster installation directory. When invoked from the command line, it accepts the same options as does ndb_setup.py.

ndb_setup.py can be started with any of several options that affect its operation, but it is usually sufficient to allow the default settings to be used, in which case you can start ndb_setup.py by either of the following two methods:

  1. Navigate to the NDB Cluster bin directory in a terminal and invoke it from the command line, without any additional arguments or options, like this:

    shell> ndb_setup.py
    

    This works regardless of operating platform.

  2. Navigate to the NDB Cluster bin directory in a file browser (such as Windows Explorer on Windows, or Konqueror, Dolphin, or Nautilus on Linux) and activate (usually by double-clicking) the ndb_setup.py file icon. This works on Windows, and should work with most common Linux desktops as well.

    On Windows, you can also navigate to the NDB Cluster installation directory and activate the setup.bat file icon.

In either case, once ndb_setup.py is invoked, the Auto-Installer's Welcome screen should open in the system's default Web browser.

In some cases, you may wish to use non-default settings for the installer, such as specifying a different port for the Auto-Installer's included Web server to run on, in which case you must invoke ndb_setup.py with one or more startup options with values overriding the necessary defaults. The same startup options can be used on Windows systems with the setup.bat file supplied for such platforms in the NDB Cluster software distribution. This can be done using the command line, but if you want or need to start the installer from a desktop or file browser while employing one or more of these options, it is also possible to create a script or batch file containing the proper invocation, then to double-click its file icon in the file browser to start the installer. (On Linux systems, you might also need to make the script file executable first.) For information about advanced startup options for the NDB Cluster Auto-Installer, see Section 21.4.23, “ndb_setup.py — Start browser-based Auto-Installer for NDB Cluster”.
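For example, such a script on a Linux system might look like the following sketch; the script name and port number here are arbitrary, and the --port option is among those described in the section just cited:

#!/bin/sh
# start-ndb-setup.sh: start the Auto-Installer's Web server on a nonstandard port
/usr/local/mysql/bin/ndb_setup.py --port=8888

After making the script executable (for example, with chmod +x start-ndb-setup.sh), you can start the installer by double-clicking the script's file icon in a file browser.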

21.2.1.3.2 NDB Cluster Auto-Installer Welcome Screen

The Welcome screen is loaded in the default browser when ndb_setup.py is invoked, as shown here:

Figure 21.11 The NDB Cluster Auto-Installer Welcome screen (Closeup)


This screen provides the following two choices for entering the installer, one of which must be selected to continue:

  1. Create New NDB Cluster: Start the Auto-Installer with a completely new cluster to be set up and deployed.

  2. Continue Previous Cluster Configuration: Start the Auto-Installer at the same point where the previous session ended, with all previous settings preserved.

The second option requires that the browser be able to access its cookies from the previous session, as these provide the mechanism by which configuration and other information generated during a session is stored. In other words, to continue the previous session with the Auto-Installer, you must use the same web browser running on the same host as you did for the previous session.

21.2.1.3.3 NDB Cluster Auto-Installer Define Cluster Screen

The Define Cluster screen is the first screen to appear following the choice made in the Welcome screen, and is used for setting general properties of the cluster. The layout of the Define Cluster screen is shown here:

Figure 21.12 The NDB Cluster Auto-Installer Define Cluster screen


The Define Cluster screen allows you to set a number of general properties for the cluster, as described in this list:

  • Cluster name: A name that identifies the cluster. The default is MyCluster.

  • Host list: A comma-delimited list of one or more hosts where cluster processes should run. By default, this is 127.0.0.1. If you add remote hosts to the list, you must be able to connect to them using the SSH Credentials supplied.

  • Application type: Choose one of the following:

    1. Simple testing: Minimal resource usage for small-scale testing. This is the default. Not intended for production environments.

    2. Web: Maximize performance for the given hardware.

    3. Real-time: Maximize performance while maximizing sensitivity to timeouts in order to minimize the time needed to detect failed cluster processes.

  • Write load: Choose a level for the anticipated number of writes for the cluster as a whole. You can choose any one of the following levels:

    1. Low: The expected load includes fewer than 100 write transactions per second.

    2. Medium: The expected load includes 100 to 1000 write transactions per second.

    3. High: The expected load includes more than 1000 write transactions per second.

  • SSH Credentials: Choose Key-Based SSH or enter User and Password credentials. The SSH key or a user name with password is required for connecting to any remote hosts specified in the Host list. By default, Key-Based SSH is selected, and the User and Password fields are blank.

21.2.1.3.4 NDB Cluster Auto-Installer Define Hosts Screen

The Define Hosts screen, shown here, provides a means of viewing and specifying several key properties of each cluster host:

Figure 21.13 NDB Cluster Define Hosts screen


The hosts currently entered are displayed in the grid with various pieces of information. You can add hosts by clicking the Add hosts button and entering a list of one or more comma-separated host names, IP addresses, or both (as when editing the host list on the Define Cluster screen).

Similarly, you can remove one or more hosts using the button labelled Remove selected host(s). When you remove a host in this fashion, any process which was configured for that host is also removed.

If Automatically get resource information for new hosts is checked in the Settings menu, the Auto-Installer attempts to retrieve the platform name, amount of memory, and number of CPU cores and to fill these in automatically. The status of this is displayed in the Resource info column. Fetching the information from remote hosts is not instantaneous and may take some time, particularly from remote hosts running Windows.

If the SSH user credentials on the Define Cluster screen are changed, the tool tries to refresh the hardware information from any hosts for which information is missing. However, if a given field has already been edited, the user-supplied information is not overwritten by any value fetched from that host.

The hardware resource information, platform name, installation directory, and data directory can be edited by the user by clicking the corresponding cell in the grid, or by selecting one or more hosts and clicking the button labelled Edit selected host(s). This causes a dialog box to appear, in which these fields can be edited, as shown here:

Figure 21.14 NDB Cluster Auto-Installer Edit Hosts dialog


When more than one host is selected, any edited values are applied to all selected hosts.

21.2.1.3.5 NDB Cluster Auto-Installer Define Processes Screen

The Define Processes screen, shown here, provides a way to assign NDB Cluster processes (nodes) to cluster hosts:

Figure 21.15 NDB Cluster Auto-Installer Define Processes dialog


The left-hand portion of this screen contains a process tree showing cluster hosts and processes set up to run on each one. On the right is a panel which displays information about the item currently selected in the tree.

When this screen is accessed for the first time for a given cluster, a default set of processes is defined for you, based on the number of hosts. If you later return to the Define Hosts screen, remove all hosts, and add new hosts, this also causes a new default set of processes to be defined.

NDB Cluster processes are of the following types:

  • Management node.  Performs administrative tasks such as stopping individual data nodes, querying node and cluster status, and making backups. Executable: ndb_mgmd.

  • Single-threaded data node.  Stores data and executes queries. Executable: ndbd.

  • Multi-threaded data node.  Stores data and executes queries with multiple worker threads executing in parallel. Executable: ndbmtd.

  • SQL node.  MySQL server for executing SQL queries against NDB. Executable: mysqld.

  • API node.  A client accessing data in NDB by means of the NDB API or other low-level client API, rather than by using SQL. See MySQL NDB Cluster API Developer Guide, for more information.

For more information about process (node) types, see Section 21.1.1, “NDB Cluster Core Concepts”.

Processes shown in the tree are numbered sequentially by type, for each host—for example, SQL node 1, SQL node 2, and so on—to simplify identification.

Each management node, data node, or SQL process must be assigned to a specific host, and is not allowed to run on any other host. An API node may be assigned to a single host, but this is not required; instead, you can assign it to the special Any host entry, which the tree contains in addition to the other hosts, and which acts as a placeholder for processes that are allowed to run on any host. Only API processes may use the Any host entry.

Adding processes.  To add a new process to a given host, either right-click that host's entry in the tree, then select Add process from the pop-up menu when it appears, or select a host in the process tree and press the Add process button below the process tree. Performing either of these actions opens the add process dialog, as shown here:

Figure 21.16 NDB Cluster Auto-Installer Add Process Dialog


Here you can select from among the available process types described earlier in this section; you can also enter an arbitrary process name to take the place of the suggested value, if desired.

Removing processes.  To delete a process, right-click a process in the tree and select Delete process from the pop-up menu that appears, or select a process, then use the Delete process button below the process tree.

When a process is selected in the process tree, information about that process is displayed in the panel to the right of the tree, where you can change the process name and possibly its type. Important: Currently, the only change of process type allowed is between a single-threaded data node (ndbd) and a multi-threaded data node (ndbmtd); to make a change between any other process types, you must delete the original process first, then add a new process of the desired type.

21.2.1.3.6 NDB Cluster Auto-Installer Define Attributes Screen

This screen has a layout similar to that of the Define Processes screen, with a process tree at the left. Unlike that screen's tree, the Define Attributes process tree is organized by process or node type, with single-threaded and multi-threaded data nodes considered to be of the same type for this purpose, in groups labelled Management Layer, Data Layer, SQL Layer, and API Layer. A panel to the right of this tree displays information regarding the item currently selected. The Define Attributes screen is shown here:

Figure 21.17 NDB Cluster Auto-Installer Define Attributes screen


A checkbox labelled Show advanced configuration is located below the process tree. Checking this box makes advanced options visible in the information pane. These options are set and used whether or not they are visible.

You can edit attributes for a single process by selecting that process from the tree, or for all processes of the same type in the cluster by selecting one of the Layer folders. A per-process value set for a given attribute overrides any per-group setting for that attribute that would otherwise apply to the process in question. An example of such an information panel (for an SQL process) is shown here:

Figure 21.18 Define Attributes Detail With SQL Process Attributes


For some of the attributes shown in the information panel, a button bearing a plus sign is displayed to the right, which means that the value of this attribute can be overridden. This + button activates an input widget for the attribute, enabling you to change its value. When the value has been overridden, this button changes into a button showing an X, as shown here:

Figure 21.19 Define Attributes Detail, Overriding Attribute Default Value


Clicking the X button next to an attribute undoes any changes made to it; it immediately reverts to the predefined value.

All configuration attributes have predefined values calculated by the installer, based on such factors as host name, node ID, node type, and so on. In most cases, these values may be left as they are. If you are not already familiar with a given attribute, it is highly recommended that you read the applicable documentation before making changes to its value. To make finding this information easier, each attribute name shown in the information panel is linked to its description in the online NDB Cluster documentation.

21.2.1.3.7 NDB Cluster Auto-Installer Deploy Cluster Screen

This screen allows you to perform the following tasks:

  • Review process startup commands and configuration files to be applied

  • Distribute configuration files by creating any necessary files and directories on all cluster hosts—that is, deploy the cluster as presently configured

  • Start and stop the cluster

The Deploy Cluster screen is shown here:

Figure 21.20 NDB Cluster Auto-Installer Deploy Cluster screen


Like the Define Attributes screen, this screen features a process tree, organized by process type, on the left-hand side. Next to each process is a status icon whose color indicates the current status of the process: green if it is running; yellow if it is starting or stopping; red if the process is stopped.

To the right of the process tree are two information panels, the upper panel showing the startup command or commands needed to start the selected process. (For some processes, more than one command may be required—for example, if initialization is necessary.) The lower panel shows the contents of the configuration file, if any, for the given process; currently, the management node process is the only type of process having a configuration file. Other process types are configured using command-line parameters when starting the process, or by obtaining configuration information from the management nodes as needed in real time.

Three buttons are located immediately below the process tree. Their labels and functions are described in the following list:

  • Deploy cluster: Verify that the configuration is valid. Create any directories required on the cluster hosts, and distribute the configuration files onto the hosts. A progress bar shows how far the deployment has proceeded.

  • Start cluster: The cluster is deployed as with Deploy cluster, after which all cluster processes are started in the correct order.

    Starting these processes may take some time. If the estimated time to completion is too great, the installer provides an opportunity to cancel or to continue the startup procedure. A progress bar indicates the current status of the startup procedure, as shown here:

    Figure 21.21 Progress Bar With Status of Node Startup Process


    The process status icons adjoining the process tree mentioned previously also update with the status of each process.

  • Stop cluster: After the cluster has been started, you can stop it using this button. As with starting the cluster, cluster shutdown is not instantaneous, and may require some time to complete. A progress bar, similar to that displayed during cluster startup, shows the approximate current status of the cluster shutdown procedure, as do the process status icons adjoining the process tree.

The Auto-Installer generates a my.cnf file containing the appropriate options for each mysqld process in the cluster.
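For illustration only, such a generated file might contain entries similar to those shown here; the connection string is hypothetical, and the actual contents depend on the configuration deployed:

[mysqld]
ndbcluster                      # enable the NDB storage engine
ndb-connectstring=192.168.0.10  # hypothetical management server address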

21.2.2 Installation of NDB Cluster on Linux

This section covers installation methods for NDB Cluster on Linux and other Unix-like operating systems. While the next few sections refer to a Linux operating system, the instructions and procedures given there should be easily adaptable to other supported Unix-like platforms. For manual installation and setup instructions specific to Windows systems, see Section 21.2.3, “Installing NDB Cluster on Windows”.

Each NDB Cluster host computer must have the correct executable programs installed. A host running an SQL node must have installed on it a MySQL Server binary (mysqld). Management nodes require the management server daemon (ndb_mgmd); data nodes require the data node daemon (ndbd or ndbmtd). It is not necessary to install the MySQL Server binary on management node hosts and data node hosts. It is recommended that you also install the management client (ndb_mgm) on the management server host.

Installation of NDB Cluster on Linux can be done using precompiled binaries from Oracle (downloaded as a .tar.gz archive), with RPM packages (also available from Oracle), or from source code. All three of these installation methods are described in the sections that follow.

Regardless of the method used, after installing the NDB Cluster binaries it is still necessary to create configuration files for all cluster nodes before you can start the cluster. See Section 21.2.4, “Initial Configuration of NDB Cluster”.

21.2.2.1 Installing an NDB Cluster Binary Release on Linux

This section covers the steps necessary to install the correct executables for each type of Cluster node from precompiled binaries supplied by Oracle.

For setting up a cluster using precompiled binaries, the first step in the installation process for each cluster host is to download the latest NDB Cluster 7.5 binary archive (mysql-cluster-gpl-7.5.7-linux-i686-glibc23.tar.gz) from the NDB Cluster downloads area. We assume that you have placed this file in each machine's /var/tmp directory. (If you do require a custom binary, see Section 2.9.3, “Installing MySQL Using a Development Source Tree”.)

Note

After completing the installation, do not yet start any of the binaries. We show you how to do so following the configuration of the nodes (see Section 21.2.4, “Initial Configuration of NDB Cluster”).

SQL nodes.  On each of the machines designated to host SQL nodes, perform the following steps as the system root user:

  1. Check your /etc/passwd and /etc/group files (or use whatever tools are provided by your operating system for managing users and groups) to see whether there is already a mysql group and mysql user on the system. Some OS distributions create these as part of the operating system installation process. If they are not already present, create a new mysql user group, and then add a mysql user to this group:

    shell> groupadd mysql
    shell> useradd -g mysql -s /bin/false mysql
    

    The syntax for useradd and groupadd may differ slightly on different versions of Unix, or they may have different names such as adduser and addgroup.

  2. Change location to the directory containing the downloaded file, unpack the archive, and create a symbolic link named mysql to the mysql directory.

    Note

    The actual file and directory names vary according to the NDB Cluster version number.

    shell> cd /var/tmp
    shell> tar -C /usr/local -xzvf mysql-cluster-gpl-7.5.7-linux-i686-glibc23.tar.gz
    shell> ln -s /usr/local/mysql-cluster-gpl-7.5.7-linux-i686-glibc23 /usr/local/mysql
    
  3. Change location to the mysql directory and set up the system databases using mysqld --initialize as shown here:

    shell> cd mysql
    shell> mysqld --initialize
    

    This generates a random password for the MySQL root account. If you do not want the random password to be generated, you can substitute the --initialize-insecure option for --initialize. In either case, you should review Section 2.10.1.1, “Initializing the Data Directory Manually Using mysqld”, for additional information before performing this step. See also Section 5.4.4, “mysql_secure_installation — Improve MySQL Installation Security”.

    Alternatively, you can change location to the mysql directory and run mysql_install_db to create the system databases:

    shell> cd mysql
    shell> scripts/mysql_install_db --user=mysql
    

    However, this method is not recommended, because mysql_install_db is deprecated, and thus subject to removal in a future release.

  4. Set the necessary permissions for the MySQL server and data directories:

    shell> chown -R root .
    shell> chown -R mysql data
    shell> chgrp -R mysql .
    
  5. Copy the MySQL startup script to the appropriate directory, make it executable, and set it to start when the operating system is booted up:

    shell> cp support-files/mysql.server /etc/rc.d/init.d/
    shell> chmod +x /etc/rc.d/init.d/mysql.server
    shell> chkconfig --add mysql.server
    

    (The startup scripts directory may vary depending on your operating system and version—for example, in some Linux distributions, it is /etc/init.d.)

    Here we use Red Hat's chkconfig for creating links to the startup scripts; use whatever means is appropriate for this purpose on your platform, such as update-rc.d on Debian.
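    For example, on a Debian-based system, the equivalent sequence might look something like this (a sketch only; exact paths and runlevel defaults can vary by distribution):

    shell> cp support-files/mysql.server /etc/init.d/
    shell> chmod +x /etc/init.d/mysql.server
    shell> update-rc.d mysql.server defaults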

Remember that the preceding steps must be repeated on each machine where an SQL node is to reside.

Data nodes.  Installation of the data nodes does not require the mysqld binary. Only the NDB Cluster data node executable ndbd (single-threaded) or ndbmtd (multi-threaded) is required. These binaries can also be found in the .tar.gz archive. Again, we assume that you have placed this archive in /var/tmp.

As system root (that is, after using sudo, su root, or your system's equivalent for temporarily assuming the system administrator account's privileges), perform the following steps to install the data node binaries on the data node hosts:

  1. Change location to the /var/tmp directory, and extract the ndbd and ndbmtd binaries from the archive into a suitable directory such as /usr/local/bin:

    shell> cd /var/tmp
    shell> tar -zxvf mysql-5.7.18-ndb-7.5.7-linux-i686-glibc23.tar.gz
    shell> cd mysql-5.7.18-ndb-7.5.7-linux-i686-glibc23
    shell> cp bin/ndbd /usr/local/bin/ndbd
    shell> cp bin/ndbmtd /usr/local/bin/ndbmtd
    

    (You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndbd and ndbmtd have been copied to the executables directory.)

  2. Change location to the directory into which you copied the files, and then make both of them executable:

    shell> cd /usr/local/bin
    shell> chmod +x ndb*
    

The preceding steps should be repeated on each data node host.

Although only one of the data node executables is required to run an NDB Cluster data node, we have shown you how to install both ndbd and ndbmtd in the preceding instructions. We recommend that you do this when installing or upgrading NDB Cluster, even if you plan to use only one of them, since this will save time and trouble in the event that you later decide to change from one to the other.

Note

The data directory on each machine hosting a data node is /usr/local/mysql/data. This piece of information is essential when configuring the management node. (See Section 21.2.4, “Initial Configuration of NDB Cluster”.)

Management nodes.  Installation of the management node does not require the mysqld binary. Only the NDB Cluster management server (ndb_mgmd) is required; you most likely want to install the management client (ndb_mgm) as well. Both of these binaries can also be found in the .tar.gz archive. Again, we assume that you have placed this archive in /var/tmp.

As system root, perform the following steps to install ndb_mgmd and ndb_mgm on the management node host:

  1. Change location to the /var/tmp directory, and extract the ndb_mgm and ndb_mgmd binaries from the archive into a suitable directory such as /usr/local/bin:

    shell> cd /var/tmp
    shell> tar -zxvf mysql-5.7.18-ndb-7.5.7-linux2.6-i686.tar.gz
    shell> cd mysql-5.7.18-ndb-7.5.7-linux2.6-i686
    shell> cp bin/ndb_mgm* /usr/local/bin
    

    (You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndb_mgm and ndb_mgmd have been copied to the executables directory.)

  2. Change location to the directory into which you copied the files, and then make both of them executable:

    shell> cd /usr/local/bin
    shell> chmod +x ndb_mgm*
    

In Section 21.2.4, “Initial Configuration of NDB Cluster”, we create configuration files for all of the nodes in our example NDB Cluster.

21.2.2.2 Installing NDB Cluster from RPM

This section covers the steps necessary to install the correct executables for each type of NDB Cluster node using RPM packages supplied by Oracle beginning with NDB 7.5.4. For information about RPMs for previous versions of NDB Cluster, see Installation using old-style RPMs (NDB 7.5.3 and earlier).

RPMs are available for both 32-bit and 64-bit Linux platforms. The filenames for these RPMs use the following pattern:

mysql-cluster-license-component-ver-rev.distro.arch.rpm

    license:= {commercial | community}

    component:= {management-server | data-node | server | client | other—see text}

    ver:= major.minor.release

    rev:= major.minor

    distro:= {el5 | el6 | el7 | sles12}

    arch:= {i686 | x86_64}

license indicates whether the RPM is part of a Commercial or Community release of NDB Cluster. In the remainder of this section, we assume for the examples that you are installing a Community release.

Possible values for component, with descriptions, can be found in the following table:

Component           Description
auto-installer      NDB Cluster Auto-Installer program; see Section 21.2.1, “The NDB Cluster Auto-Installer”, for usage
client              MySQL and NDB client programs; includes mysql client, ndb_mgm client, and other client tools
common              Character set and error message information needed by the MySQL server
data-node           ndbd and ndbmtd data node binaries
devel               Headers and library files needed for MySQL client development
embedded            Embedded MySQL server
embedded-compat     Backwards-compatible embedded MySQL server
embedded-devel      Header and library files for developing applications for embedded MySQL
java                JAR files needed for support of ClusterJ applications
libs                MySQL client libraries
libs-compat         Backwards-compatible MySQL client libraries
management-server   The NDB Cluster management server (ndb_mgmd)
memcached           Files needed to support ndbmemcache
ndbclient           NDB client library for running NDB API and MGM API applications (libndbclient)
ndbclient-devel     Header and other files needed for developing NDB API and MGM API applications
nodejs              Files needed to set up Node.JS support for NDB Cluster
server              The MySQL server (mysqld) with NDB storage engine support included, and associated MySQL server programs
test                mysqltest, other MySQL test programs, and support files

A single bundle (.tar file) of all NDB Cluster RPMs for a given platform and architecture is also available. The name of this file follows the pattern shown here:

mysql-cluster-license-ver-rev.distro.arch.rpm-bundle.tar

You can extract the individual RPM files from this file using tar or your preferred tool for extracting archives.
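For example, assuming the Community release, el7 distribution, and x86_64 architecture used elsewhere in this section, the bundle might be unpacked as shown here:

shell> tar -xvf mysql-cluster-community-7.5.7-1.1.el7.x86_64.rpm-bundle.tar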

The components required to install the three major types of NDB Cluster nodes are given in the following list:

  • Management node: management-server

  • Data node: data-node

  • SQL node: server and common

In addition, the client RPM should be installed to provide the ndb_mgm management client on at least one management node. You may also wish to install it on SQL nodes, to have mysql and other MySQL client programs available on these. We discuss installation of nodes by type later in this section.

ver represents the three-part NDB storage engine version number in 7.5.x format, shown as 7.5.7 in the examples. rev provides the RPM revision number in major.minor format. In the examples shown in this section, we use 1.1 for this value.

The distro (Linux distribution) is one of el5 (Oracle Linux 5, Red Hat Enterprise Linux 4 and 5), el6 (Oracle Linux 6, Red Hat Enterprise Linux 6), el7 (Oracle Linux 7, Red Hat Enterprise Linux 7), or sles12 (SUSE Enterprise Linux 12). For the examples in this section, we assume that the host runs Oracle Linux 7, Red Hat Enterprise Linux 7, or the equivalent (el7).

arch is i686 for 32-bit RPMs and x86_64 for 64-bit versions. In the examples shown here, we assume a 64-bit platform.

The NDB Cluster version number in the RPM file names (shown here as 7.5.7) can vary according to the version which you are actually using. It is very important that all of the Cluster RPMs to be installed have the same version number. The architecture should also be appropriate to the machine on which the RPM is to be installed; in particular, you should keep in mind that 64-bit RPMs (x86_64) cannot be used with 32-bit operating systems (use i686 for the latter).
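One quick way to verify this after installation is to list the installed NDB Cluster packages and their versions, for example:

shell> rpm -qa | grep -i mysql-cluster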

Data nodes.  On a computer that is to host an NDB Cluster data node it is necessary to install only the data-node RPM. To do so, copy this RPM to the data node host, and run the following command as the system root user, replacing the name shown for the RPM as necessary to match that of the RPM downloaded from the MySQL web site:

shell> rpm -Uhv mysql-cluster-community-data-node-7.5.7-1.1.el7.x86_64.rpm

This installs the ndbd and ndbmtd data node binaries in /usr/sbin. Either of these can be used to run a data node process on this host.

SQL nodes.  Copy the server and common RPMs to each machine to be used for hosting an NDB Cluster SQL node (server requires common). Install the server RPM by executing the following command as the system root user, replacing the name shown for the RPM as necessary to match the name of the RPM downloaded from the MySQL web site:

shell> rpm -Uhv mysql-cluster-community-server-7.5.7-1.1.el7.x86_64.rpm

This installs the MySQL server binary (mysqld), with NDB storage engine support, in the /usr/sbin directory. It also installs all needed MySQL Server support files and useful MySQL server programs, including the mysql.server and mysqld_safe startup scripts (in /usr/share/mysql and /usr/bin, respectively). The RPM installer should take care of general configuration issues (such as creating the mysql user and group, if needed) automatically.

Important

You must use the versions of these RPMs released for NDB Cluster; those released for the standard MySQL server do not provide support for the NDB storage engine.

To administer the SQL node (MySQL server), you should also install the client RPM, as shown here:

shell> rpm -Uhv mysql-cluster-community-client-7.5.7-1.1.el7.x86_64.rpm

This installs the mysql client and other MySQL client programs, such as mysqladmin and mysqldump, to /usr/bin.

Management nodes.  To install the NDB Cluster management server, it is necessary only to use the management-server RPM. Copy this RPM to the computer intended to host the management node, and then install it by running the following command as the system root user (replace the name shown for the RPM as necessary to match that of the management-server RPM downloaded from the MySQL web site):

shell> rpm -Uhv mysql-cluster-community-management-server-7.5.7-1.1.el7.x86_64.rpm

This RPM installs the management server binary ndb_mgmd in the /usr/sbin directory. While this is the only program actually required for running a management node, it is also a good idea to have the ndb_mgm NDB Cluster management client available as well. You can obtain this program, as well as other NDB client programs such as ndb_desc and ndb_config, by installing the client RPM as described previously.

Note

Previously, ndb_mgm was installed by the same RPM used to install the management server. In NDB 7.5.4 and later, all NDB client programs are obtained from the same client RPM that installs mysql and other MySQL clients.

See Section 2.5.5, “Installing MySQL on Linux Using RPM Packages from Oracle”, for general information about installing MySQL using RPMs supplied by Oracle.

After installing from RPM, you still need to configure the cluster; see Section 21.2.4, “Initial Configuration of NDB Cluster”, for the relevant information.

Installation using old-style RPMs (NDB 7.5.3 and earlier).  The information in the remainder of this section applies only to NDB 7.5.3 and earlier, and provides the steps necessary to install the correct executables for each type of NDB Cluster node using old-style RPM packages as supplied by Oracle prior to NDB 7.5.4. The filenames for these old-style RPMs use the following pattern:

MySQL-Cluster-component-producttype-ndbversion-revision.distribution.architecture.rpm

component:= {server | client [| other]}

producttype:= {gpl | advanced}

ndbversion:= major.minor.release

distribution:= {sles11 | rhel5 | el6}

architecture:= {i386 | x86_64}

The component can be server or client. (Other values are possible, but since only the server and client components are required for a working NDB Cluster installation, we do not discuss them here.) The producttype for Community RPMs downloaded from http://dev.mysql.com/downloads/cluster/ is always gpl; advanced is used to indicate commercial releases. ndbversion represents the three-part NDB storage engine version number in 7.5.x format; we use 7.5.3 throughout the rest of this section. The RPM revision is shown as 1 in the examples following. The distribution can be one of sles11 (SUSE Enterprise Linux 11), rhel5 (Oracle Linux 5, Red Hat Enterprise Linux 4 and 5), or el6 (Oracle Linux 6, Red Hat Enterprise Linux 6). The architecture is i386 for 32-bit RPMs and x86_64 for 64-bit versions.

For an NDB Cluster, one and possibly two RPMs are required:

  • The server RPM (for example, MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm), which supplies the core files needed to run a MySQL Server with NDBCLUSTER storage engine support (that is, as an NDB Cluster SQL node) as well as all NDB Cluster executables, including the management node, data node, and ndb_mgm client binaries. This RPM is always required for installing NDB Cluster.

  • If you do not have your own client application capable of administering a MySQL server, you should also obtain and install the client RPM (for example, MySQL-Cluster-client-gpl-7.5.3-1.sles11.i386.rpm), which supplies the mysql client.

It is very important that all of the Cluster RPMs to be installed have the same version number. The architecture designation should also be appropriate to the machine on which the RPM is to be installed; in particular, you should keep in mind that 64-bit RPMs cannot be used with 32-bit operating systems.

Data nodes.  On a computer that is to host a cluster data node it is necessary to install only the server RPM. To do so, copy this RPM to the data node host, and run the following command as the system root user, replacing the name shown for the RPM as necessary to match that of the RPM downloaded from the MySQL web site:

shell> rpm -Uhv MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm

Although this installs all NDB Cluster binaries, only the program ndbd or ndbmtd (both in /usr/sbin) is actually needed to run an NDB Cluster data node.

SQL nodes.  On each machine to be used for hosting a cluster SQL node, install the server RPM by executing the following command as the system root user, replacing the name shown for the RPM as necessary to match the name of the RPM downloaded from the MySQL web site:

shell> rpm -Uhv MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm

This installs the MySQL server binary (mysqld) with NDB storage engine support in the /usr/sbin directory, as well as all needed MySQL Server support files. It also installs the mysql.server and mysqld_safe startup scripts (in /usr/share/mysql and /usr/bin, respectively). The RPM installer should take care of general configuration issues (such as creating the mysql user and group, if needed) automatically.

To administer the SQL node (MySQL server), you should also install the client RPM, as shown here:

shell> rpm -Uhv MySQL-Cluster-client-gpl-7.5.3-1.sles11.i386.rpm

This installs the mysql client program.

Management nodes.  To install the NDB Cluster management server, it is necessary only to use the server RPM. Copy this RPM to the computer intended to host the management node, and then install it by running the following command as the system root user (replace the name shown for the RPM as necessary to match that of the server RPM downloaded from the MySQL web site):

shell> rpm -Uhv MySQL-Cluster-server-gpl-7.5.3-1.sles11.i386.rpm

Although this RPM installs many other files, only the management server binary ndb_mgmd (in the /usr/sbin directory) is actually required for running a management node. The server RPM also installs ndb_mgm, the NDB management client.

See Section 2.5.5, “Installing MySQL on Linux Using RPM Packages from Oracle”, for general information about installing MySQL using RPMs supplied by Oracle. See Section 21.2.4, “Initial Configuration of NDB Cluster”, for information about required post-installation configuration.

21.2.2.3 Installing NDB Cluster Using .deb Files

This section provides information about installing NDB Cluster on Debian and related Linux distributions such as Ubuntu, using the .deb files supplied by Oracle for this purpose.

Oracle provides .deb installer files for NDB Cluster 7.5 for 32-bit and 64-bit platforms. For a Debian-based system, only a single installer file is necessary. This file is named using the pattern shown here, according to the applicable NDB Cluster version, Debian version, and architecture:

mysql-cluster-gpl-ndbver-debiandebianver-arch.deb

Here, ndbver is the 3-part NDB engine version number, debianver is the major version of Debian (6.0 or 7), and arch is one of i686 or x86_64. In the examples that follow, we assume you wish to install NDB 7.5.7 on a 64-bit Debian 7 system; in this case, the installer file is named mysql-cluster-gpl-7.5.7-debian7-x86_64.deb.

Once you have downloaded the appropriate .deb file, you can install it from the command line using dpkg, like this:

shell> dpkg -i mysql-cluster-gpl-7.5.7-debian7-x86_64.deb

You can also remove it using dpkg as shown here:

shell> dpkg -r mysql

The installer file should also be compatible with most graphical package managers that work with .deb files, such as GDebi for the Gnome desktop.

The .deb file installs NDB Cluster under /opt/mysql/server-version/, where version is the 2-part release series version for the included MySQL server. For NDB 7.5, this is always 5.7. The directory layout is the same as that for the generic Linux binary distribution (see Table 2.3, “MySQL Installation Layout for Generic Unix/Linux Binary Package”), with the exception that startup scripts and configuration files are found in support-files instead of share. All NDB Cluster executables, such as ndb_mgm, ndbd, and ndb_mgmd, are placed in the bin directory.

21.2.2.4 Building NDB Cluster from Source on Linux

This section provides information about compiling NDB Cluster on Linux and other Unix-like platforms. Building NDB Cluster from source is similar to building the standard MySQL Server, although it differs in a few key respects discussed here. For general information about building MySQL from source, see Section 2.9, “Installing MySQL from Source”. For information about compiling NDB Cluster on Windows platforms, see Section 21.2.3.2, “Compiling and Installing NDB Cluster from Source on Windows”.

Building NDB Cluster requires using the NDB Cluster sources. These are available from the NDB Cluster downloads page at http://dev.mysql.com/downloads/cluster/. The archived source file should have a name similar to mysql-cluster-gpl-7.5.7.tar.gz. You can also obtain MySQL development sources from launchpad.net. Building NDB Cluster from standard MySQL Server 5.7 sources is not supported.

The WITH_NDBCLUSTER_STORAGE_ENGINE option for CMake causes the binaries for the management nodes, data nodes, and other NDB Cluster programs to be built; it also causes mysqld to be compiled with NDB storage engine support. This option (or its alias WITH_NDBCLUSTER) is required when building NDB Cluster.

Important

The WITH_NDB_JAVA option is enabled by default. This means that, by default, if CMake cannot find the location of Java on your system, the configuration process fails; if you do not wish to enable Java and ClusterJ support, you must indicate this explicitly by configuring the build using -DWITH_NDB_JAVA=OFF. Use WITH_CLASSPATH to provide the Java classpath if needed.

For more information about CMake options specific to building NDB Cluster, see Options for Compiling NDB Cluster.
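Putting these options together, a minimal build from source might look like the following sketch; Java and ClusterJ support is disabled here for simplicity, and the archive name follows the example given earlier:

shell> tar xzvf mysql-cluster-gpl-7.5.7.tar.gz
shell> cd mysql-cluster-gpl-7.5.7
shell> cmake . -DWITH_NDBCLUSTER_STORAGE_ENGINE=ON -DWITH_NDB_JAVA=OFF
shell> make
shell> make install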

After you have run make && make install (or your system's equivalent), the result is similar to what is obtained by unpacking a precompiled binary to the same location.

Management nodes.  When building from source and running the default make install, the management server and management client binaries (ndb_mgmd and ndb_mgm) can be found in /usr/local/mysql/bin. Only ndb_mgmd is required to be present on a management node host; however, it is also a good idea to have ndb_mgm present on the same host machine. Neither of these executables requires a specific location on the host machine's file system.

Data nodes.  The only executable required on a data node host is the data node binary ndbd or ndbmtd. (mysqld, for example, does not have to be present on the host machine.) By default, when building from source, this file is placed in the directory /usr/local/mysql/bin. For installing on multiple data node hosts, only ndbd or ndbmtd need be copied to the other host machine or machines. (This assumes that all data node hosts use the same architecture and operating system; otherwise you may need to compile separately for each different platform.) The data node binary need not be in any particular location on the host's file system, as long as the location is known.

When compiling NDB Cluster from source, no special options are required for building multi-threaded data node binaries. Configuring the build with NDB storage engine support causes ndbmtd to be built automatically; make install places the ndbmtd binary in the installation bin directory along with mysqld, ndbd, and ndb_mgm.

SQL nodes.  If you compile MySQL with clustering support, and perform the default installation (using make install as the system root user), mysqld is placed in /usr/local/mysql/bin. Follow the steps given in Section 2.9, “Installing MySQL from Source” to make mysqld ready for use. If you want to run multiple SQL nodes, you can use a copy of the same mysqld executable and its associated support files on several machines. The easiest way to do this is to copy the entire /usr/local/mysql directory and all directories and files contained within it to the other SQL node host or hosts, then repeat the steps from Section 2.9, “Installing MySQL from Source” on each machine. If you configure the build with a nondefault PREFIX option, you must adjust the directory accordingly.
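For example (a sketch only; sqlnode2 is a hypothetical host name, and any other means of copying the directory tree works equally well), you might use rsync to copy the installation to a second SQL node host:

shell> rsync -a /usr/local/mysql/ sqlnode2:/usr/local/mysql/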

In Section 21.2.4, “Initial Configuration of NDB Cluster”, we create configuration files for all of the nodes in our example NDB Cluster.

21.2.3 Installing NDB Cluster on Windows

This section describes installation procedures for NDB Cluster on Windows hosts. NDB Cluster 7.5 binaries for Windows can be obtained from http://dev.mysql.com/downloads/cluster/. For information about installing NDB Cluster on Windows from a binary release provided by Oracle, see Section 21.2.3.1, “Installing NDB Cluster on Windows from a Binary Release”.

It is also possible to compile and install NDB Cluster from source on Windows using Microsoft Visual Studio. For more information, see Section 21.2.3.2, “Compiling and Installing NDB Cluster from Source on Windows”.

21.2.3.1 Installing NDB Cluster on Windows from a Binary Release

This section describes a basic installation of NDB Cluster on Windows using a binary no-install NDB Cluster release provided by Oracle, using the same 4-node setup outlined in the beginning of this section (see Section 21.2, “NDB Cluster Installation”), as shown in the following table:

Node                      IP Address
Management (MGMD) node    192.168.0.10
MySQL server (SQL) node   192.168.0.20
Data (NDBD) node "A"      192.168.0.30
Data (NDBD) node "B"      192.168.0.40

As on other platforms, the NDB Cluster host computer running an SQL node must have installed on it a MySQL Server binary (mysqld.exe). You should also have the MySQL client (mysql.exe) on this host. For management nodes and data nodes, it is not necessary to install the MySQL Server binary; however, each management node requires the management server daemon (ndb_mgmd.exe); each data node requires the data node daemon (ndbd.exe or ndbmtd.exe). For this example, we refer to ndbd.exe as the data node executable, but you can install ndbmtd.exe, the multi-threaded version of this program, instead, in exactly the same way. You should also install the management client (ndb_mgm.exe) on the management server host. This section covers the steps necessary to install the correct Windows binaries for each type of NDB Cluster node.

Note

As with other Windows programs, NDB Cluster executables are named with the .exe file extension. However, it is not necessary to include the .exe extension when invoking these programs from the command line. Therefore, we often simply refer to these programs in this documentation as mysqld, mysql, ndb_mgmd, and so on. You should understand that, whether we refer (for example) to mysqld or mysqld.exe, either name means the same thing (the MySQL Server program).

For setting up an NDB Cluster using Oracle's no-install binaries, the first step in the installation process is to download the latest NDB Cluster Windows binary archive from http://dev.mysql.com/downloads/cluster/. This archive has a filename of the form mysql-cluster-gpl-noinstall-ver-winarch.zip, where ver is the NDB storage engine version (such as 7.5.7), and arch is the architecture (32 for 32-bit binaries, and 64 for 64-bit binaries). For example, the NDB Cluster 7.5.7 no-install archive for 32-bit Windows systems is named mysql-cluster-gpl-noinstall-7.5.7-win32.zip.

You can run 32-bit NDB Cluster binaries on both 32-bit and 64-bit versions of Windows; however, 64-bit NDB Cluster binaries can be used only on 64-bit versions of Windows. If you are using a 32-bit version of Windows on a computer that has a 64-bit CPU, then you must use the 32-bit NDB Cluster binaries.

To minimize the number of files that need to be downloaded from the Internet or copied between machines, we start with the computer where you intend to run the SQL node.

SQL node.  We assume that you have placed a copy of the no-install archive in the directory C:\Documents and Settings\username\My Documents\Downloads on the computer having the IP address 192.168.0.20, where username is the name of the current user. (You can obtain this name using ECHO %USERNAME% on the command line.) To install and run NDB Cluster executables as Windows services, this user should be a member of the Administrators group.

Extract all the files from the archive. The Extraction Wizard integrated with Windows Explorer is adequate for this task. (If you use a different archive program, be sure that it extracts all files and directories from the archive, and that it preserves the archive's directory structure.) When you are asked for a destination directory, enter C:\, which causes the Extraction Wizard to extract the archive to the directory C:\mysql-cluster-gpl-noinstall-ver-winarch. Rename this directory to C:\mysql.
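If you prefer to work from a command prompt, the rename can be done like this (assuming the 32-bit 7.5.7 archive name from the earlier example):

C:\> move C:\mysql-cluster-gpl-noinstall-7.5.7-win32 C:\mysql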

It is possible to install the NDB Cluster binaries to directories other than C:\mysql\bin; however, if you do so, you must modify the paths shown in this procedure accordingly. In particular, if the MySQL Server (SQL node) binary is installed to a location other than C:\mysql or C:\Program Files\MySQL\MySQL Server 5.7, or if the SQL node's data directory is in a location other than C:\mysql\data or C:\Program Files\MySQL\MySQL Server 5.7\data, extra configuration options must be used on the command line or added to the my.ini or my.cnf file when starting the SQL node. For more information about configuring a MySQL Server to run in a nonstandard location, see Section 2.3.5, “Installing MySQL on Microsoft Windows Using a noinstall Zip Archive”.
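For instance, if the server binaries and data directory were placed under D:\mysql (a hypothetical location), extra options along these lines would be needed in the my.ini file discussed below; see the section just cited for authoritative details:

[mysqld]
# Hypothetical nonstandard locations; adjust to match your installation
basedir=D:/mysql
datadir=D:/mysql/data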

For a MySQL Server with NDB Cluster support to run as part of an NDB Cluster, it must be started with the options --ndbcluster and --ndb-connectstring. While you can specify these options on the command line, it is usually more convenient to place them in an option file. To do this, create a new text file in Notepad or another text editor. Enter the following configuration information into this file:

[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine
ndb-connectstring=192.168.0.10  # location of management server

You can add other options used by this MySQL Server if desired (see Section 2.3.5.2, “Creating an Option File”), but the file must contain the options shown, at a minimum. Save this file as C:\mysql\my.ini. This completes the installation and setup for the SQL node.

Data nodes.  An NDB Cluster data node on a Windows host requires only a single executable, one of either ndbd.exe or ndbmtd.exe. For this example, we assume that you are using ndbd.exe, but the same instructions apply when using ndbmtd.exe. On each computer where you wish to run a data node (the computers having the IP addresses 192.168.0.30 and 192.168.0.40), create the directories C:\mysql, C:\mysql\bin, and C:\mysql\cluster-data; then, on the computer where you downloaded and extracted the no-install archive, locate ndbd.exe in the C:\mysql\bin directory. Copy this file to the C:\mysql\bin directory on each of the two data node hosts.
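From a command prompt, the required directories can be created as shown here (md also creates any missing parent directories when command extensions are enabled, which is the default):

C:\> md C:\mysql\bin
C:\> md C:\mysql\cluster-data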

To function as part of an NDB Cluster , each data node must be given the address or hostname of the management server. You can supply this information on the command line using the --ndb-connectstring or -c option when starting each data node process. However, it is usually preferable to put this information in an option file. To do this, create a new text file in Notepad or another text editor and enter the following text:

[mysql_cluster]
# Options for data node process:
ndb-connectstring=192.168.0.10  # location of management server

Save this file as C:\mysql\my.ini on the data node host. Create another text file containing the same information and save it as C:\mysql\my.ini on the other data node host, or copy the my.ini file from the first data node host to the second one, making sure to place the copy in the second data node's C:\mysql directory. Both data node hosts are now ready to be used in the NDB Cluster, which leaves only the management node to be installed and configured.

Management node.  The only executable program required on a computer used for hosting an NDB Cluster management node is the management server program ndb_mgmd.exe. However, in order to administer the NDB Cluster once it has been started, you should also install the NDB Cluster management client program ndb_mgm.exe on the same machine as the management server. Locate these two programs on the machine where you downloaded and extracted the no-install archive; this should be the directory C:\mysql\bin on the SQL node host. Create the directory C:\mysql\bin on the computer having the IP address 192.168.0.10, then copy both programs to this directory.

You should now create two configuration files for use by ndb_mgmd.exe:

  1. A local configuration file to supply configuration data specific to the management node itself. Typically, this file needs only to supply the location of the NDB Cluster global configuration file (see item 2).

    To create this file, start a new text file in Notepad or another text editor, and enter the following information:

    [mysql_cluster]
    # Options for management node process
    config-file=C:/mysql/bin/config.ini
    

    Save this file as the text file C:\mysql\bin\my.ini.

  2. A global configuration file from which the management node can obtain configuration information governing the NDB Cluster as a whole. At a minimum, this file must contain a section for each node in the NDB Cluster , and the IP addresses or hostnames for the management node and all data nodes (HostName configuration parameter). It is also advisable to include the following additional information:

    Create a new text file using a text editor such as Notepad, and input the following information:

    [ndbd default]
    # Options affecting ndbd processes on all data nodes:
    NoOfReplicas=2                      # Number of replicas
    DataDir=C:/mysql/cluster-data       # Directory for each data node's data files
                                        # Forward slashes used in directory path,
                                        # rather than backslashes. This is correct;
                                        # see Important note in text
    DataMemory=80M    # Memory allocated to data storage
    IndexMemory=18M   # Memory allocated to index storage
                      # For DataMemory and IndexMemory, we have used the
                      # default values. Since the "world" database takes up
                      # only about 500KB, this should be more than enough for
                      # this example Cluster setup.
    
    [ndb_mgmd]
    # Management process options:
    HostName=192.168.0.10               # Hostname or IP address of management node
    DataDir=C:/mysql/bin/cluster-logs   # Directory for management node log files
    
    [ndbd]
    # Options for data node "A":
                                    # (one [ndbd] section per data node)
    HostName=192.168.0.30           # Hostname or IP address
    
    [ndbd]
    # Options for data node "B":
    HostName=192.168.0.40           # Hostname or IP address
    
    [mysqld]
    # SQL node options:
    HostName=192.168.0.20           # Hostname or IP address
    

    Save this file as the text file C:\mysql\bin\config.ini.

Important

A single backslash character (\) cannot be used when specifying directory paths in program options or configuration files used by NDB Cluster on Windows. Instead, you must either escape each backslash character with a second backslash (\\), or replace the backslash with a forward slash character (/). For example, the following line from the [ndb_mgmd] section of an NDB Cluster config.ini file does not work:

DataDir=C:\mysql\bin\cluster-logs

Instead, you may use either of the following:

DataDir=C:\\mysql\\bin\\cluster-logs  # Escaped backslashes
DataDir=C:/mysql/bin/cluster-logs     # Forward slashes

For reasons of brevity and legibility, we recommend that you use forward slashes in directory paths used in NDB Cluster program options and configuration files on Windows.

21.2.3.2 Compiling and Installing NDB Cluster from Source on Windows

Oracle provides precompiled NDB Cluster binaries for Windows which should be adequate for most users. However, if you wish, it is also possible to compile NDB Cluster for Windows from source code. The procedure for doing this is almost identical to the procedure used to compile the standard MySQL Server binaries for Windows, and uses the same tools. However, there are two major differences:

  • To build NDB Cluster, you must use the NDB Cluster sources, which you can obtain from http://dev.mysql.com/downloads/cluster/.

    Attempting to build NDB Cluster from the source code for the standard MySQL Server is likely not to be successful, and is not supported by Oracle.

  • You must configure the build using the WITH_NDBCLUSTER_STORAGE_ENGINE or WITH_NDBCLUSTER option in addition to any other build options you wish to use with CMake. (WITH_NDBCLUSTER is supported as an alias for WITH_NDBCLUSTER_STORAGE_ENGINE, and works in exactly the same way.)

Important

The WITH_NDB_JAVA option is enabled by default. This means that, by default, if CMake cannot find the location of Java on your system, the configuration process fails; if you do not wish to enable Java and ClusterJ support, you must indicate this explicitly by configuring the build using -DWITH_NDB_JAVA=OFF. (Bug #12379735) Use WITH_CLASSPATH to provide the Java classpath if needed.

For more information about CMake options specific to building NDB Cluster, see Options for Compiling NDB Cluster.

Once the build process is complete, you can create a Zip archive containing the compiled binaries; Section 2.9.2, “Installing MySQL Using a Standard Source Distribution” provides the commands needed to perform this task on Windows systems. The NDB Cluster binaries can be found in the bin directory of the resulting archive, which is equivalent to the no-install archive, and which can be installed and configured in the same manner. For more information, see Section 21.2.3.1, “Installing NDB Cluster on Windows from a Binary Release”.

21.2.3.3 Initial Startup of NDB Cluster on Windows

Once the NDB Cluster executables and needed configuration files are in place, performing an initial start of the cluster is simply a matter of starting the NDB Cluster executables for all nodes in the cluster. Each cluster node process must be started separately, and on the host computer where it resides. The management node should be started first, followed by the data nodes, and then finally by any SQL nodes.

  1. On the management node host, issue the following command from the command line to start the management node process. The output should appear similar to what is shown here:

    C:\mysql\bin> ndb_mgmd
    2010-06-23 07:53:34 [MgmtSrvr] INFO -- NDB Cluster Management Server. mysql-5.7.18-ndb-7.5.7
    2010-06-23 07:53:34 [MgmtSrvr] INFO -- Reading cluster configuration from 'config.ini'
    

    The management node process continues to print logging output to the console. This is normal, because the management node is not running as a Windows service. (If you have used NDB Cluster on a Unix-like platform such as Linux, you may notice that the management node's default behavior in this regard on Windows is effectively the opposite of its behavior on Unix systems, where it runs by default as a Unix daemon process. This behavior is also true of NDB Cluster data node processes running on Windows.) For this reason, do not close the window in which ndb_mgmd.exe is running; doing so kills the management node process. (See Section 21.2.3.4, “Installing NDB Cluster Processes as Windows Services”, where we show how to install and run NDB Cluster processes as Windows services.)

    The -f option (long form: --config-file) can be used to tell the management node where to find the global configuration file (config.ini); it is not needed in this example, because ndb_mgmd by default attempts to read config.ini from the directory in which it is started.

    Important

    An NDB Cluster management node caches the configuration data that it reads from config.ini; once it has created a configuration cache, it ignores the config.ini file on subsequent starts unless forced to do otherwise. This means that, if the management node fails to start due to an error in this file, you must make the management node re-read config.ini after you have corrected any errors in it. You can do this by starting ndb_mgmd.exe with the --reload or --initial option on the command line. Either of these options works to refresh the configuration cache.
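
    For example, to force the management node to re-read config.ini after correcting an error in it, you might start it as shown here:

    C:\mysql\bin> ndb_mgmd --reload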

    It is not necessary or advisable to use either of these options in the management node's my.ini file.

    For additional information about options which can be used with ndb_mgmd, see Section 21.4.4, “ndb_mgmd — The NDB Cluster Management Server Daemon”, as well as Section 21.4.27, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

  2. On each of the data node hosts, run the command shown here to start the data node processes:

    C:\mysql\bin> ndbd
    2010-06-23 07:53:46 [ndbd] INFO -- Configuration fetched from 'localhost:1186', generation: 1
    

    In each case, the first line of output from the data node process should resemble what is shown in the preceding example, and is followed by additional lines of logging output. As with the management node process, this is normal, because the data node is not running as a Windows service. For this reason, do not close the console window in which the data node process is running; doing so kills ndbd.exe. (For more information, see Section 21.2.3.4, “Installing NDB Cluster Processes as Windows Services”.)

  3. Do not start the SQL node yet; it cannot connect to the cluster until the data nodes have finished starting, which may take some time. Instead, in a new console window on the management node host, start the NDB Cluster management client ndb_mgm.exe, which should be in C:\mysql\bin on the management node host. (Do not try to re-use the console window where ndb_mgmd.exe is running by typing CTRL+C, as this kills the management node.) The resulting output should look like this:

    C:\mysql\bin> ndb_mgm
    -- NDB Cluster -- Management Client --
    ndb_mgm>
    

    When the prompt ndb_mgm> appears, this indicates that the management client is ready to receive NDB Cluster management commands. You can observe the status of the data nodes as they start by entering ALL STATUS at the management client prompt. This command produces a running report of the data nodes' startup sequence, which should look something like this:

    ndb_mgm> ALL STATUS
    Connected to Management Server at: localhost:1186
    Node 2: starting (Last completed phase 3) (mysql-5.7.18-ndb-7.5.7)
    Node 3: starting (Last completed phase 3) (mysql-5.7.18-ndb-7.5.7)
    
    Node 2: starting (Last completed phase 4) (mysql-5.7.18-ndb-7.5.7)
    Node 3: starting (Last completed phase 4) (mysql-5.7.18-ndb-7.5.7)
    
    Node 2: Started (version 7.5.7)
    Node 3: Started (version 7.5.7)
    
    ndb_mgm>
    
    Note

    Commands issued in the management client are not case-sensitive; we use uppercase as the canonical form of these commands, but you are not required to observe this convention when inputting them into the ndb_mgm client. For more information, see Section 21.5.2, “Commands in the NDB Cluster Management Client”.

    The output produced by ALL STATUS is likely to vary from what is shown here, according to the speed at which the data nodes are able to start, the release version number of the NDB Cluster software you are using, and other factors. What is significant is that, when you see that both data nodes have started, you are ready to start the SQL node.

    You can leave ndb_mgm.exe running; it has no negative impact on the performance of the NDB Cluster, and we use it in the next step to verify that the SQL node is connected to the cluster after you have started it.

  4. On the computer designated as the SQL node host, open a console window and navigate to the directory where you unpacked the NDB Cluster binaries (if you are following our example, this is C:\mysql\bin).

    Start the SQL node by invoking mysqld.exe from the command line, as shown here:

    C:\mysql\bin> mysqld --console
    

    The --console option causes logging information to be written to the console, which can be helpful in the event of problems. (Once you are satisfied that the SQL node is running in a satisfactory manner, you can stop it and restart it without the --console option, so that logging is performed normally.)

    In the console window where the management client (ndb_mgm.exe) is running on the management node host, enter the SHOW command, which should produce output similar to what is shown here:

    ndb_mgm> SHOW
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     2 node(s)
    id=2    @192.168.0.30  (Version: 5.7.18-ndb-7.5.7, Nodegroup: 0, *)
    id=3    @192.168.0.40  (Version: 5.7.18-ndb-7.5.7, Nodegroup: 0)
    
    [ndb_mgmd(MGM)] 1 node(s)
    id=1    @192.168.0.10  (Version: 5.7.18-ndb-7.5.7)
    
    [mysqld(API)]   1 node(s)
    id=4    @192.168.0.20  (Version: 5.7.18-ndb-7.5.7)
    

    You can also verify that the SQL node is connected to the NDB Cluster in the mysql client (mysql.exe) using the SHOW ENGINE NDB STATUS statement.
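
    For example, issue the following statement in the mysql client on the SQL node host; the output (not reproduced here) includes information about the SQL node's connection to the cluster:

    mysql> SHOW ENGINE NDB STATUS\G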

You should now be ready to work with database objects and data using NDB Cluster's NDBCLUSTER storage engine. See Section 21.2.6, “NDB Cluster Example with Tables and Data”, for more information and examples.

You can also install ndb_mgmd.exe, ndbd.exe, and ndbmtd.exe as Windows services. For information on how to do this, see Section 21.2.3.4, “Installing NDB Cluster Processes as Windows Services”.

21.2.3.4 Installing NDB Cluster Processes as Windows Services

Once you are satisfied that NDB Cluster is running as desired, you can install the management nodes and data nodes as Windows services, so that these processes are started and stopped automatically whenever Windows is started or stopped. This also makes it possible to control these processes from the command line with the appropriate NET START or NET STOP command, or using the Windows graphical Services utility.

Installing programs as Windows services usually must be done using an account that has Administrator rights on the system.

To install the management node as a service on Windows, invoke ndb_mgmd.exe from the command line on the machine hosting the management node, using the --install option, as shown here:

C:\> C:\mysql\bin\ndb_mgmd.exe --install
Installing service 'NDB Cluster Management Server'
  as '"C:\mysql\bin\ndb_mgmd.exe" "--service=ndb_mgmd"'
Service successfully installed.

Important

When installing an NDB Cluster program as a Windows service, you should always specify the complete path; otherwise the service installation may fail with the error The system cannot find the file specified.

The --install option must be used first, ahead of any other options that might be specified for ndb_mgmd.exe. However, it is preferable to specify such options in an options file instead. If your options file is not in one of the default locations as shown in the output of ndb_mgmd.exe --help, you can specify the location using the --config-file option.
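
A hypothetical installation that also points the service at a configuration file in a nondefault location might look like this:

C:\> C:\mysql\bin\ndb_mgmd.exe --install --config-file=C:/mysql/bin/config.ini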

Now you should be able to start and stop the management server like this:

C:\> NET START ndb_mgmd
The NDB Cluster Management Server service is starting.
The NDB Cluster Management Server service was started successfully.

C:\> NET STOP ndb_mgmd
The NDB Cluster Management Server service is stopping..
The NDB Cluster Management Server service was stopped successfully.

You can also start or stop the management server as a Windows service using the descriptive name, as shown here:

C:\> NET START "NDB Cluster Management Server"
The NDB Cluster Management Server service is starting.
The NDB Cluster Management Server service was started successfully.

C:\> NET STOP "NDB Cluster Management Server"
The NDB Cluster Management Server service is stopping..
The NDB Cluster Management Server service was stopped successfully.

However, it is usually simpler to specify a short service name or to permit the default service name to be used when installing the service, and then reference that name when starting or stopping the service. To specify a service name other than ndb_mgmd, append it to the --install option, as shown in this example:

C:\> C:\mysql\bin\ndb_mgmd.exe --install=mgmd1
Installing service 'NDB Cluster Management Server'
  as '"C:\mysql\bin\ndb_mgmd.exe" "--service=mgmd1"'
Service successfully installed.

Now you should be able to start or stop the service using the name you have specified, like this:

C:\> NET START mgmd1
The NDB Cluster Management Server service is starting.
The NDB Cluster Management Server service was started successfully.

C:\> NET STOP mgmd1
The NDB Cluster Management Server service is stopping..
The NDB Cluster Management Server service was stopped successfully.

To remove the management node service, invoke ndb_mgmd.exe with the --remove option, as shown here:

C:\> C:\mysql\bin\ndb_mgmd.exe --remove
Removing service 'NDB Cluster Management Server'
Service successfully removed.

If you installed the service using a service name other than the default, you can remove the service by passing this name as the value of the --remove option, like this:

C:\> C:\mysql\bin\ndb_mgmd.exe --remove=mgmd1
Removing service 'mgmd1'
Service successfully removed.

Installation of an NDB Cluster data node process as a Windows service can be done in a similar fashion, using the --install option for ndbd.exe (or ndbmtd.exe), as shown here:

C:\> C:\mysql\bin\ndbd.exe --install
Installing service 'NDB Cluster Data Node Daemon' as '"C:\mysql\bin\ndbd.exe" "--service=ndbd"'
Service successfully installed.

Now you can start or stop the data node using either the default service name or the descriptive name with net start or net stop, as shown in the following example:

C:\> NET START ndbd
The NDB Cluster Data Node Daemon service is starting.
The NDB Cluster Data Node Daemon service was started successfully.

C:\> NET STOP ndbd
The NDB Cluster Data Node Daemon service is stopping..
The NDB Cluster Data Node Daemon service was stopped successfully.

C:\> NET START "NDB Cluster Data Node Daemon"
The NDB Cluster Data Node Daemon service is starting.
The NDB Cluster Data Node Daemon service was started successfully.

C:\> NET STOP "NDB Cluster Data Node Daemon"
The NDB Cluster Data Node Daemon service is stopping..
The NDB Cluster Data Node Daemon service was stopped successfully.

To remove the data node service, invoke ndbd.exe with the --remove option, as shown here:

C:\> C:\mysql\bin\ndbd.exe --remove
Removing service 'NDB Cluster Data Node Daemon'
Service successfully removed.

As with ndb_mgmd.exe (and mysqld.exe), when installing ndbd.exe as a Windows service, you can also specify a name for the service as the value of --install, and then use it when starting or stopping the service, like this:

C:\> C:\mysql\bin\ndbd.exe --install=dnode1
Installing service 'dnode1' as '"C:\mysql\bin\ndbd.exe" "--service=dnode1"'
Service successfully installed.

C:\> NET START dnode1
The NDB Cluster Data Node Daemon service is starting.
The NDB Cluster Data Node Daemon service was started successfully.

C:\> NET STOP dnode1
The NDB Cluster Data Node Daemon service is stopping..
The NDB Cluster Data Node Daemon service was stopped successfully.

If you specified a service name when installing the data node service, you can use this name when removing it as well, by passing it as the value of the --remove option, as shown here:

C:\> C:\mysql\bin\ndbd.exe --remove=dnode1
Removing service 'dnode1'
Service successfully removed.

Installation of the SQL node as a Windows service, starting the service, stopping the service, and removing the service are done in a similar fashion, using mysqld --install, NET START, NET STOP, and mysqld --remove. For additional information, see Section 2.3.5.8, “Starting MySQL as a Windows Service”.

21.2.4 Initial Configuration of NDB Cluster

In this section, we discuss manual configuration of an installed NDB Cluster by creating and editing configuration files.

NDB Cluster also provides a GUI installer which can be used to perform the configuration without the need to edit text files in a separate application. For more information, see Section 21.2.1, “The NDB Cluster Auto-Installer”.

For our four-node, four-host NDB Cluster (see Cluster nodes and host computers), it is necessary to write four configuration files, one per node host.

  • Each data node or SQL node requires a my.cnf file that provides two pieces of information: a connection string that tells the node where to find the management node, and a line telling the MySQL server on this host (the machine hosting the data node) to enable the NDBCLUSTER storage engine.

    For more information on connection strings, see Section 21.3.3.3, “NDB Cluster Connection Strings”.

  • The management node needs a config.ini file telling it how many replicas to maintain, how much memory to allocate for data and indexes on each data node, where to find the data nodes, where to save data to disk on each data node, and where to find any SQL nodes.

Configuring the data nodes and SQL nodes.  The my.cnf file needed for the data nodes is fairly simple. The configuration file should be located in the /etc directory and can be edited using any text editor. (Create the file if it does not exist.) For example:

shell> vi /etc/my.cnf

Note

We show vi being used here to create the file, but any text editor should work just as well.

For each data node and SQL node in our example setup, my.cnf should look like this:

[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine

[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=192.168.0.10  # location of management server

After entering the preceding information, save this file and exit the text editor. Do this for the machines hosting data node A, data node B, and the SQL node.

Important

Once you have started a mysqld process with the ndbcluster and ndb-connectstring parameters in the [mysqld] and [mysql_cluster] sections of the my.cnf file as shown previously, you cannot execute any CREATE TABLE or ALTER TABLE statements without having actually started the cluster. Otherwise, these statements will fail with an error. This is by design.

Configuring the management node.  The first step in configuring the management node is to create the directory in which the configuration file can be found and then to create the file itself. For example (running as root):

shell> mkdir /var/lib/mysql-cluster
shell> cd /var/lib/mysql-cluster
shell> vi config.ini

For our representative setup, the config.ini file should read as follows:

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
                  # For DataMemory and IndexMemory, we have used the
                  # default values. Since the "world" database takes up
                  # only about 500KB, this should be more than enough for
                  # this example NDB Cluster setup.
ServerPort=2202   # This is the default value; however, you can use any
                  # port that is free for all the hosts in the cluster
                  # Note1: It is recommended that you do not specify the port
                  # number at all and simply allow the default value to be used
                  # instead
                  # Note2: The port was formerly specified using the PortNumber 
                  # TCP parameter; this parameter is no longer available in NDB
                  # Cluster 7.5.

[ndb_mgmd]
# Management process options:
HostName=192.168.0.10           # Hostname or IP address of MGM node
DataDir=/var/lib/mysql-cluster  # Directory for MGM node log files

[ndbd]
# Options for data node "A":
                                # (one [ndbd] section per data node)
HostName=192.168.0.30           # Hostname or IP address
NodeId=2                        # Node ID for this data node
DataDir=/usr/local/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
HostName=192.168.0.40           # Hostname or IP address
NodeId=3                        # Node ID for this data node
DataDir=/usr/local/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
HostName=192.168.0.20           # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)

Note

The world database can be downloaded from http://dev.mysql.com/doc/index-other.html.

After all the configuration files have been created and these minimal options have been specified, you are ready to proceed with starting the cluster and verifying that all processes are running. We discuss how this is done in Section 21.2.5, “Initial Startup of NDB Cluster”.

For more detailed information about the available NDB Cluster configuration parameters and their uses, see Section 21.3.3, “NDB Cluster Configuration Files”, and Section 21.3, “Configuration of NDB Cluster”. For configuration of NDB Cluster as relates to making backups, see Section 21.5.3.3, “Configuration for NDB Cluster Backups”.

Note

The default port for Cluster management nodes is 1186; the default port for data nodes is 2202. However, the cluster can automatically allocate ports for data nodes from those that are already free.

21.2.5 Initial Startup of NDB Cluster

Starting the cluster is not very difficult after it has been configured. Each cluster node process must be started separately, and on the host where it resides. The management node should be started first, followed by the data nodes, and then finally by any SQL nodes:

  1. On the management host, issue the following command from the system shell to start the management node process:

    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    

    The first time that it is started, ndb_mgmd must be told where to find its configuration file, using the -f or --config-file option. (See Section 21.4.4, “ndb_mgmd — The NDB Cluster Management Server Daemon”, for details.)

    For additional options which can be used with ndb_mgmd, see Section 21.4.27, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

  2. On each of the data node hosts, run this command to start the ndbd process:

    shell> ndbd
    
  3. If you used RPM files to install MySQL on the cluster host where the SQL node is to reside, you can (and should) use the supplied startup script to start the MySQL server process on the SQL node.
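
For example, on a host where the server was installed from RPM packages, the SQL node might be started as shown here; the exact service or script name varies with the platform and MySQL version, so treat this invocation as illustrative:

shell> service mysql start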

If all has gone well, and the cluster has been set up correctly, the cluster should now be operational. You can test this by invoking the ndb_mgm management node client. The output should look like that shown here, although you might see some slight differences in the output depending upon the exact version of MySQL that you are using:

shell> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.30  (Version: 5.7.18-ndb-7.5.7, Nodegroup: 0, *)
id=3    @192.168.0.40  (Version: 5.7.18-ndb-7.5.7, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.10  (Version: 5.7.18-ndb-7.5.7)

[mysqld(API)]   1 node(s)
id=4    @192.168.0.20  (Version: 5.7.18-ndb-7.5.7)

The SQL node is referenced here as [mysqld(API)], which reflects the fact that the mysqld process is acting as an NDB Cluster API node.

Note

The IP address shown for a given NDB Cluster SQL or other API node in the output of SHOW is the address used by the SQL or API node to connect to the cluster data nodes, and not to any management node.

You should now be ready to work with databases, tables, and data in NDB Cluster. See Section 21.2.6, “NDB Cluster Example with Tables and Data”, for a brief discussion.

21.2.6 NDB Cluster Example with Tables and Data

Note

The information in this section applies to NDB Cluster running on both Unix and Windows platforms.

Working with database tables and data in NDB Cluster is not much different from doing so in standard MySQL. There are two key points to keep in mind:

  • For a table to be replicated in the cluster, it must use the NDBCLUSTER storage engine. To specify this, use the ENGINE=NDBCLUSTER or ENGINE=NDB option when creating the table:

    CREATE TABLE tbl_name (col_name column_definitions) ENGINE=NDBCLUSTER;
    

    Alternatively, for an existing table that uses a different storage engine, use ALTER TABLE to change the table to use NDBCLUSTER:

    ALTER TABLE tbl_name ENGINE=NDBCLUSTER;
    
  • Every NDBCLUSTER table has a primary key. If no primary key is defined by the user when a table is created, the NDBCLUSTER storage engine automatically generates a hidden one. Such a key takes up space just as does any other table index. (It is not uncommon to encounter problems due to insufficient memory for accommodating these automatically created indexes.)

If you are importing tables from an existing database using the output of mysqldump, you can open the SQL script in a text editor and add the ENGINE option to any table creation statements, or replace any existing ENGINE options. Suppose that you have the world sample database on another MySQL server that does not support NDB Cluster, and you want to export the City table:

shell> mysqldump --add-drop-table world City > city_table.sql

The resulting city_table.sql file will contain this table creation statement (and the INSERT statements necessary to import the table data):

DROP TABLE IF EXISTS `City`;
CREATE TABLE `City` (
  `ID` int(11) NOT NULL auto_increment,
  `Name` char(35) NOT NULL default '',
  `CountryCode` char(3) NOT NULL default '',
  `District` char(20) NOT NULL default '',
  `Population` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `City` VALUES (1,'Kabul','AFG','Kabol',1780000);
INSERT INTO `City` VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO `City` VALUES (3,'Herat','AFG','Herat',186800);
(remaining INSERT statements omitted)

You need to make sure that MySQL uses the NDBCLUSTER storage engine for this table. There are two ways that this can be accomplished. One of these is to modify the table definition before importing it into the Cluster database. Using the City table as an example, modify the ENGINE option of the definition as follows:

DROP TABLE IF EXISTS `City`;
CREATE TABLE `City` (
  `ID` int(11) NOT NULL auto_increment,
  `Name` char(35) NOT NULL default '',
  `CountryCode` char(3) NOT NULL default '',
  `District` char(20) NOT NULL default '',
  `Population` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID`)
) ENGINE=NDBCLUSTER DEFAULT CHARSET=latin1;

INSERT INTO `City` VALUES (1,'Kabul','AFG','Kabol',1780000);
INSERT INTO `City` VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO `City` VALUES (3,'Herat','AFG','Herat',186800);
(remaining INSERT statements omitted)

This must be done for the definition of each table that is to be part of the clustered database. The easiest way to accomplish this is to do a search-and-replace on the file that contains the definitions and replace all instances of TYPE=engine_name or ENGINE=engine_name with ENGINE=NDBCLUSTER. If you do not want to modify the file, you can use the unmodified file to create the tables, and then use ALTER TABLE to change their storage engine. The particulars are given later in this section.
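
For example, on a Unix host you might perform the substitution with sed, as shown here; this sketch assumes GNU sed and that every table definition in the file currently specifies ENGINE=MyISAM:

shell> sed -i -e 's/ENGINE=MyISAM/ENGINE=NDBCLUSTER/g' city_table.sql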

Assuming that you have already created a database named world on the SQL node of the cluster, you can then use the mysql command-line client to read city_table.sql, and create and populate the corresponding table in the usual manner:

shell> mysql world < city_table.sql

It is very important to keep in mind that the preceding command must be executed on the host where the SQL node is running (in this case, on the machine with the IP address 192.168.0.20).

To create a copy of the entire world database on the SQL node, use mysqldump on the noncluster server to export the database to a file named world.sql; for example, in the /tmp directory. Then modify the table definitions as just described and import the file into the SQL node of the cluster like this:

shell> mysql world < /tmp/world.sql

If you save the file to a different location, adjust the preceding instructions accordingly.

Running SELECT queries on the SQL node is no different from running them on any other instance of a MySQL server. To run queries from the command line, you first need to log in to the MySQL Monitor in the usual way (specify the root password at the Enter password: prompt):

shell> mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 5.7.18-ndb-7.5.7

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

We simply use the MySQL server's root account and assume that you have followed the standard security precautions for installing a MySQL server, including setting a strong root password. For more information, see Section 2.10.4, “Securing the Initial MySQL Accounts”.

It is worth taking into account that Cluster nodes do not make use of the MySQL privilege system when accessing one another. Setting or changing MySQL user accounts (including the root account) affects only applications that access the SQL node, not interaction between nodes. See Section 21.5.12.2, “NDB Cluster and MySQL Privileges”, for more information.

If you did not modify the ENGINE clauses in the table definitions prior to importing the SQL script, you should run the following statements at this point:

mysql> USE world;
mysql> ALTER TABLE City ENGINE=NDBCLUSTER;
mysql> ALTER TABLE Country ENGINE=NDBCLUSTER;
mysql> ALTER TABLE CountryLanguage ENGINE=NDBCLUSTER;

Selecting a database and running a SELECT query against a table in that database is also accomplished in the usual manner, as is exiting the MySQL Monitor:

mysql> USE world;
mysql> SELECT Name, Population FROM City ORDER BY Population DESC LIMIT 5;
+-----------+------------+
| Name      | Population |
+-----------+------------+
| Bombay    |   10500000 |
| Seoul     |    9981619 |
| São Paulo |    9968485 |
| Shanghai  |    9696300 |
| Jakarta   |    9604900 |
+-----------+------------+
5 rows in set (0.34 sec)

mysql> \q
Bye

shell>

Applications that use MySQL can employ standard APIs to access NDB tables. It is important to remember that your application must access the SQL node, and not the management or data nodes. This brief example shows how we might execute the SELECT statement just shown by using the PHP 5.X mysqli extension running on a Web server elsewhere on the network:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
  <meta http-equiv="Content-Type"
           content="text/html; charset=iso-8859-1">
  <title>SIMPLE mysqli SELECT</title>
</head>
<body>
<?php
  # connect to SQL node:
  $link = new mysqli('192.168.0.20', 'root', 'root_password', 'world');
  # parameters for mysqli constructor are:
  #   host, user, password, database

  if( mysqli_connect_errno() )
    die("Connect failed: " . mysqli_connect_error());

  $query = "SELECT Name, Population
            FROM City
            ORDER BY Population DESC
            LIMIT 5";

  # if no errors...
  if( $result = $link->query($query) )
  {
?>
<table border="1" width="40%" cellpadding="4" cellspacing="1">
  <tbody>
  <tr>
    <th width="10%">City</th>
    <th>Population</th>
  </tr>
<?php
    # then display the results...
    while($row = $result->fetch_object())
      printf("<tr>\n  <td align=\"center\">%s</td><td>%d</td>\n</tr>\n",
              $row->Name, $row->Population);
?>
  </tbody>
</table>
<?php
  # ...and verify the number of rows that were retrieved
    printf("<p>Affected rows: %d</p>\n", $link->affected_rows);
  }
  else
    # otherwise, tell us what went wrong
    echo mysqli_error($link);

  # free the result set and the mysqli connection object
  $result->close();
  $link->close();
?>
</body>
</html>

We assume that the process running on the Web server can reach the IP address of the SQL node.

In a similar fashion, you can use the MySQL C API, Perl-DBI, Python-mysql, or MySQL Connectors to perform the tasks of data definition and manipulation just as you would normally with MySQL.

21.2.7 Safe Shutdown and Restart of NDB Cluster

To shut down the cluster, enter the following command in a shell on the machine hosting the management node:

shell> ndb_mgm -e shutdown

The -e option here is used to pass a command to the ndb_mgm client from the shell. (See Section 21.4.27, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”, for more information about this option.) The command causes the ndb_mgm, ndb_mgmd, and any ndbd or ndbmtd processes to terminate gracefully. Any SQL nodes can be terminated using mysqladmin shutdown and other means. On Windows platforms, assuming that you have installed the SQL node as a Windows service, you can use NET STOP MYSQL.
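
For example, the SQL node in our example setup might be shut down as shown here, supplying the root password when prompted:

shell> mysqladmin -u root -p shutdown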

To restart the cluster on Unix platforms, run these commands:

  • On the management host (192.168.0.10 in our example setup):

    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    
  • On each of the data node hosts (192.168.0.30 and 192.168.0.40):

    shell> ndbd
    
  • Use the ndb_mgm client to verify that both data nodes have started successfully.

  • On the SQL host (192.168.0.20):

    shell> mysqld_safe &
    

On Windows platforms, assuming that you have installed all NDB Cluster processes as Windows services using the default service names (see Section 21.2.3.4, “Installing NDB Cluster Processes as Windows Services”), you can restart the cluster as follows:

  • On the management host (192.168.0.10 in our example setup), execute the following command:

    C:\> NET START ndb_mgmd
    
  • On each of the data node hosts (192.168.0.30 and 192.168.0.40), execute the following command:

    C:\> NET START ndbd
    
  • On the management node host, use the ndb_mgm client to verify that the management node and both data nodes have started successfully (see Section 21.2.3.3, “Initial Startup of NDB Cluster on Windows”).

  • On the SQL node host (192.168.0.20), execute the following command:

    C:\> NET START mysql
    

In a production setting, it is usually not desirable to shut down the cluster completely. In many cases, even when making configuration changes, or performing upgrades to the cluster hardware or software (or both), which require shutting down individual host machines, it is possible to do so without shutting down the cluster as a whole by performing a rolling restart of the cluster. For more information about doing this, see Section 21.5.5, “Performing a Rolling Restart of an NDB Cluster”.

21.2.8 Upgrading and Downgrading NDB Cluster

This section provides information about NDB Cluster software and table file compatibility between different NDB Cluster 7.5 releases with regard to performing upgrades and downgrades, together with compatibility matrices and notes. You are expected to be familiar with installing and configuring an NDB Cluster before attempting an upgrade or downgrade. See Section 21.3, “Configuration of NDB Cluster”.

Important

Only compatibility between MySQL versions with regard to NDBCLUSTER is taken into account in this section, and there are likely other issues to be considered. As with any other MySQL software upgrade or downgrade, you are strongly encouraged to review the relevant portions of the MySQL Manual for the MySQL versions from which and to which you intend to migrate, before attempting an upgrade or downgrade of the NDB Cluster software. See Section 2.11.1, “Upgrading MySQL”.

The table shown here provides information on NDB Cluster upgrade and downgrade compatibility among different releases of NDB 7.5. Additional notes about upgrades and downgrades to, from, or within the NDB Cluster 7.5 release series can be found following the table.

Upgrades and Downgrades, NDB Cluster 7.5

Figure 21.22 NDB Cluster Upgrade and Downgrade Compatibility, MySQL NDB Cluster 7.5 (figure illustrating NDB Cluster 7.5.x upgrade and downgrade compatibility; not reproduced here)

Version support.  The following versions of NDB Cluster are supported for upgrades to GA releases of NDB Cluster 7.5 (7.5.4 and later):

  • NDB Cluster 7.4 GA releases (7.4.4 and later)

  • NDB Cluster 7.3 GA releases (7.3.2 and later)

  • NDB Cluster 7.2 GA releases (7.2.4 and later)

Known Issues.  The following issues are known to occur when upgrading to or between the stated releases:

  • When upgrading from NDB 7.5.2 or 7.5.3 to a later version, the use of mysqld with the --initialize and --ndbcluster options together caused problems later when running mysql_upgrade.

    When run with --initialize, the server does not require NDB support; having NDB enabled at this time can cause problems with ndbinfo tables. To keep this from happening, the --initialize option now causes mysqld to ignore the --ndbcluster option if the latter is also specified.

    A workaround for an upgrade that has failed for these reasons can be accomplished as follows:

    1. Perform a rolling restart of the entire cluster.

    2. Delete all .frm files in the data/ndbinfo directory.

    3. Run mysql_upgrade.

    (Bug #81689, Bug #82724, Bug #24521927, Bug #23518923)

  • During an online upgrade from an NDB Cluster 7.3 release to an NDB 7.4 (or later) release, the failures of several data nodes running the lower version during local checkpoints (LCPs), and just prior to upgrading these nodes, led to additional node failures following the upgrade. This was due to lingering elements of the EMPTY_LCP protocol initiated by the older nodes as part of an LCP-plus-restart sequence, and which is no longer used in NDB 7.4 and later due to LCP optimizations implemented in those versions. This issue was fixed in NDB 7.5.4. (Bug #23129433)

  • Beginning with NDB 7.5.2, the ndb_binlog_index table uses the InnoDB storage engine. (Use of the MyISAM storage engine for this table continues to be supported for backward compatibility.)

    When upgrading a previous release to NDB 7.5.2 or later, you can use the --force --upgrade-system-tables options with mysql_upgrade so that it performs ALTER TABLE ... ENGINE=INNODB on the ndb_binlog_index table.
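
    For example:

    shell> mysql_upgrade --force --upgrade-system-tables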

    For more information, see Section 21.6.4, “NDB Cluster Replication Schema and Tables”.

  • Online upgrades from previous versions of NDB Cluster to NDB 7.5.1 were not possible due to missing entries in the matrix used to test upgrade compatibility between versions. (Bug #22024947)

    Also in NDB 7.5.1, mysql_upgrade failed to upgrade the sys schema if a sys database directory existed but was empty. (Bug #81352, Bug #23249846, Bug #22875519)

21.3 Configuration of NDB Cluster

A MySQL server that is part of an NDB Cluster differs in one chief respect from a normal (nonclustered) MySQL server, in that it employs the NDB storage engine. This engine is also sometimes referred to as NDBCLUSTER, although NDB is preferred.

To avoid unnecessary allocation of resources, the server is configured by default with the NDB storage engine disabled. To enable NDB, you must modify the server's my.cnf configuration file, or start the server with the --ndbcluster option.

This MySQL server is a part of the cluster, so it also must know how to access a management node to obtain the cluster configuration data. The default behavior is to look for the management node on localhost. However, should you need to specify that its location is elsewhere, this can be done in my.cnf or on the mysqld command line (using the --ndb-connectstring option). Before the NDB storage engine can be used, at least one management node must be operational, as well as any desired data nodes.
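
For example, using the addresses from our example setup earlier in this chapter, the management server's location can be supplied in my.cnf as shown here:

[mysql_cluster]
ndb-connectstring=192.168.0.10  # location of management server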

For more information about --ndbcluster and other mysqld options specific to NDB Cluster, see Section 21.3.3.8.1, “MySQL Server Options for NDB Cluster”.

You can also use the NDB Cluster Auto-Installer to set up and deploy an NDB Cluster on one or more hosts using a browser-based GUI. For more information, see Section 21.2.1, “The NDB Cluster Auto-Installer”.

For general information about installing NDB Cluster, see Section 21.2, “NDB Cluster Installation”.

21.3.1 Quick Test Setup of NDB Cluster

To familiarize you with the basics, we will describe the simplest possible configuration for a functional NDB Cluster. After this, you should be able to design your desired setup from the information provided in the other relevant sections of this chapter.

First, you need to create a configuration directory such as /var/lib/mysql-cluster, by executing the following command as the system root user:

shell> mkdir /var/lib/mysql-cluster

In this directory, create a file named config.ini that contains the following information. Substitute appropriate values for HostName and DataDir as necessary for your system.

# file "config.ini" - showing minimal setup consisting of 1 data node,
# 1 management server, and 3 MySQL servers.
# The empty default sections are not required, and are shown only for
# the sake of completeness.
# Data nodes must provide a hostname but MySQL Servers are not required
# to do so.
# If you don't know the hostname for your machine, use localhost.
# The DataDir parameter also has a default value, but it is recommended to
# set it explicitly.
# Note: [db], [api], and [mgm] are aliases for [ndbd], [mysqld], and [ndb_mgmd],
# respectively. [db] is deprecated and should not be used in new installations.

[ndbd default]
NoOfReplicas= 1

[mysqld default]
[ndb_mgmd default]
[tcp default]

[ndb_mgmd]
HostName= myhost.example.com

[ndbd]
HostName= myhost.example.com
DataDir= /var/lib/mysql-cluster

[mysqld]
[mysqld]
[mysqld]

You can now start the ndb_mgmd management server. By default, it attempts to read the config.ini file in its current working directory, so change location into the directory where the file is located and then invoke ndb_mgmd:

shell> cd /var/lib/mysql-cluster
shell> ndb_mgmd

Then start a single data node by running ndbd:

shell> ndbd

For command-line options which can be used when starting ndbd, see Section 21.4.27, “Options Common to NDB Cluster Programs — Options Common to NDB Cluster Programs”.

By default, ndbd looks for the management server at localhost on port 1186.

Note

If you have installed MySQL from a binary tarball, you will need to specify the path of the ndb_mgmd and ndbd servers explicitly. (Normally, these will be found in /usr/local/mysql/bin.)

Finally, change location to the MySQL data directory (usually /var/lib/mysql or /usr/local/mysql/data), and make sure that the my.cnf file contains the option necessary to enable the NDB storage engine:

[mysqld]
ndbcluster

You can now start the MySQL server as usual:

shell> mysqld_safe --user=mysql &

Wait a moment to make sure the MySQL server is running properly. If you see the notice mysql ended, check the server's .err file to find out what went wrong.

If all has gone well so far, you now can start using the cluster. Connect to the server and verify that the NDBCLUSTER storage engine is enabled:

shell> mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 5.7.19

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> SHOW ENGINES\G
...
*************************** 12. row ***************************
Engine: NDBCLUSTER
Support: YES
Comment: Clustered, fault-tolerant, memory-based tables
*************************** 13. row ***************************
Engine: NDB
Support: YES
Comment: Alias for NDBCLUSTER
...

The row numbers shown in the preceding example output may be different from those shown on your system, depending upon how your server is configured.

Try to create an NDBCLUSTER table:

shell> mysql
mysql> USE test;
Database changed

mysql> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.09 sec)

mysql> SHOW CREATE TABLE ctest \G
*************************** 1. row ***************************
       Table: ctest
Create Table: CREATE TABLE `ctest` (
  `i` int(11) default NULL
) ENGINE=ndbcluster DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

To check that your nodes were set up properly, start the management client:

shell> ndb_mgm

Use the SHOW command from within the management client to obtain a report on the cluster's status:

ndb_mgm> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)]     1 node(s)
id=2    @127.0.0.1  (Version: 5.7.18-ndb-7.5.7, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (Version: 5.7.18-ndb-7.5.7)

[mysqld(API)]   3 node(s)
id=3    @127.0.0.1  (Version: 5.7.18-ndb-7.5.7)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)

At this point, you have successfully set up a working NDB Cluster. You can now store data in the cluster by using any table created with ENGINE=NDBCLUSTER or its alias ENGINE=NDB.

21.3.2 Overview of NDB Cluster Configuration Parameters, Options, and Variables

The next several sections provide summary tables of NDB Cluster node configuration parameters used in the config.ini file to govern various aspects of node behavior, as well as of options and variables read by mysqld from a my.cnf file or from the command line when run as an NDB Cluster process. Each of the node parameter tables lists the parameters for a given type (ndbd, ndb_mgmd, mysqld, computer, tcp, shm, or sci). All tables include the data type for the parameter, option, or variable, as well as its default, minimum, and maximum values as applicable.

Considerations when restarting nodes.  For node parameters, these tables also indicate what type of restart is required (node restart or system restart)—and whether the restart must be done with --initial—to change the value of a given configuration parameter. When performing a node restart or an initial node restart, all of the cluster's data nodes must be restarted in turn (also referred to as a rolling restart). It is possible to update cluster configuration parameters marked as node online—that is, without shutting down the cluster—in this fashion. An initial node restart requires restarting each ndbd process with the --initial option.

A system restart requires a complete shutdown and restart of the entire cluster. An initial system restart requires taking a backup of the cluster, wiping the cluster file system after shutdown, and then restoring from the backup following the restart.

In any cluster restart, all of the cluster's management servers must be restarted for them to read the updated configuration parameter values.

Important

Values for numeric cluster parameters can generally be increased without any problems, although it is advisable to do so progressively, making such adjustments in relatively small increments. Many of these can be increased online, using a rolling restart.

However, decreasing the values of such parameters—whether this is done using a node restart, node initial restart, or even a complete system restart of the cluster—is not to be undertaken lightly; it is recommended that you do so only after careful planning and testing. This is especially true with regard to those parameters that relate to memory usage and disk space. In addition, it is generally the case that configuration parameters relating to memory and disk usage can be raised using a simple node restart, but they require an initial node restart to be lowered.

Because some of these parameters can be used for configuring more than one type of cluster node, they may appear in more than one of the tables.

Note

4294967039 often appears as a maximum value in these tables. This value is defined in the NDBCLUSTER sources as MAX_INT_RNIL and is equal to 0xFFFFFEFF, or 2^32 − 2^8 − 1.
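
As a quick check: 2^32 = 4294967296, and 4294967296 − 256 − 1 = 4294967039, which is 0xFFFFFEFF.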

21.3.2.1 NDB Cluster Data Node Configuration Parameters

The summary table in this section provides information about parameters used in the [ndbd] or [ndbd default] sections of a config.ini file for configuring NDB Cluster data nodes. For detailed descriptions and other additional information about each of these parameters, see Section 21.3.3.6, “Defining NDB Cluster Data Nodes”.

These parameters also apply to ndbmtd, the multi-threaded version of ndbd. For more information, see Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”.

Restart types.  Changes in NDB Cluster configuration parameters do not take effect until the cluster is restarted. The type of restart required to change a given parameter is indicated in the summary table as follows:

  • N: Node restart (perform a rolling restart of the cluster)

  • S: System restart (shut down the cluster completely, then restart it)

  • IN: Initial node restart (perform a rolling restart, starting each data node with --initial)

  • IS: Initial system restart (perform a system restart, with a backup taken beforehand and the cluster file system wiped and restored from that backup)

For more information about restart types, see Section 21.3.2, “Overview of NDB Cluster Configuration Parameters, Options, and Variables”.

NDB Cluster also supports the addition of new data node groups online, to a running cluster. For more information, see Section 21.5.14, “Adding NDB Cluster Data Nodes Online”.

Table 21.1 Data Node Configuration Parameters

Parameter Name | Type or Units | Restart Type | In Version ... (and later) | Default Value | Minimum/Maximum or Permitted Values
Arbitration | enumeration | N | NDB 7.5.0 | Default | Default, Disabled, WaitExternal
ArbitrationTimeout | milliseconds | N | NDB 7.5.0 | 7500 | 10 / 4294967039 (0xFFFFFEFF)
BackupDataBufferSize | bytes | N | NDB 7.5.1 | 16M | 512K / 4294967039 (0xFFFFFEFF)
BackupDataDir | path | IN | NDB 7.5.0 | FileSystemPath | ...
BackupDiskWriteSpeedPct | percent | N | NDB 7.5.0 | 50 | 0 / 90
BackupLogBufferSize | bytes | N | NDB 7.5.0 | 16M | 2M / 4294967039 (0xFFFFFEFF)
BackupMaxWriteSize | bytes | N | NDB 7.5.0 | 1M | 256K / 4294967039 (0xFFFFFEFF)
BackupMemory | bytes | N | NDB 7.5.0 | 32M | 0 / 4294967039 (0xFFFFFEFF)
BackupReportFrequency | seconds | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
BackupWriteSize | bytes | N | NDB 7.5.0 | 256K | 32K / 4294967039 (0xFFFFFEFF)
BatchSizePerLocalScan | integer | N | NDB 7.5.0 | 256 | 1 / 992
BuildIndexThreads | numeric | S | NDB 7.5.0 | 0 | 0 / 128
CompressedBackup | boolean | N | NDB 7.5.0 | false | true, false
CompressedLCP | boolean | N | NDB 7.5.0 | false | true, false
ConnectCheckIntervalDelay | milliseconds | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
CrashOnCorruptedTuple | boolean | S | NDB 7.5.0 | true | true, false
DataDir | path | IN | NDB 7.5.0 | . | ...
DataMemory | bytes | N | NDB 7.5.0 | 80M | 1M / 1024G
DefaultHashMapSize | LDM threads | N | NDB 7.5.0 | 3840 | 0 / 3840
DictTrace | bytes | N | NDB 7.5.0 | undefined | 0 / 100
DiskIOThreadPool | threads | N | NDB 7.5.0 | 2 | 0 / 4294967039 (0xFFFFFEFF)
Diskless | true|false (1|0) | IS | NDB 7.5.0 | false | true, false
DiskPageBufferEntries | 32K pages | N | NDB 7.5.0 | 10 | 1 / 1000
DiskPageBufferMemory | bytes | N | NDB 7.5.0 | 64M | 4M / 1T
DiskSyncSize | bytes | N | NDB 7.5.0 | 4M | 32K / 4294967039 (0xFFFFFEFF)
ExecuteOnComputer | name | S | NDB 7.5.0 | [none] | ...
ExtraSendBufferMemory | bytes | N | NDB 7.5.0 | 0 | 0 / 32G
FileSystemPath | path | IN | NDB 7.5.0 | DataDir | ...
FileSystemPathDataFiles | filename | IN | NDB 7.5.0 | [see text] | ...
FileSystemPathDD | filename | IN | NDB 7.5.0 | FileSystemPath | ...
FileSystemPathUndoFiles | filename | IN | NDB 7.5.0 | [see text] | ...
FragmentLogFileSize | bytes | IN | NDB 7.5.0 | 16M | 4M / 1G
HeartbeatIntervalDbApi | milliseconds | N | NDB 7.5.0 | 1500 | 100 / 4294967039 (0xFFFFFEFF)
HeartbeatIntervalDbDb | milliseconds | N | NDB 7.5.0 | 5000 | 10 / 4294967039 (0xFFFFFEFF)
HeartbeatOrder | numeric | S | NDB 7.5.0 | 0 | 0 / 65535
HostName | name or IP address | N | NDB 7.5.0 | localhost | ...
IndexMemory | bytes | N | NDB 7.5.0 | 18M | 1M / 1T
IndexStatAutoCreate | boolean | S | NDB 7.5.0 | false | false, true
IndexStatAutoUpdate | boolean | S | NDB 7.5.0 | false | false, true
IndexStatSaveScale | percentage | IN | NDB 7.5.0 | 100 | 0 / 4294967039 (0xFFFFFEFF)
IndexStatSaveSize | bytes | IN | NDB 7.5.0 | 32768 | 0 / 4294967039 (0xFFFFFEFF)
IndexStatTriggerPct | percentage | IN | NDB 7.5.0 | 100 | 0 / 4294967039 (0xFFFFFEFF)
IndexStatTriggerScale | percentage | IN | NDB 7.5.0 | 100 | 0 / 4294967039 (0xFFFFFEFF)
IndexStatUpdateDelay | seconds | IN | NDB 7.5.0 | 60 | 0 / 4294967039 (0xFFFFFEFF)
InitFragmentLogFiles | [see values] | IN | NDB 7.5.0 | SPARSE | SPARSE, FULL
InitialLogFileGroup | string | S | NDB 7.5.0 | [see text] | ...
InitialNoOfOpenFiles | files | N | NDB 7.5.0 | 27 | 20 / 4294967039 (0xFFFFFEFF)
InitialTablespace | string | S | NDB 7.5.0 | [see text] | ...
LateAlloc | numeric | N | NDB 7.5.0 | 1 | 0 / 1
LcpScanProgressTimeout | second | N | NDB 7.5.0 | 60 | 0 / 4294967039 (0xFFFFFEFF)
LockExecuteThreadToCPU | set of CPU IDs | N | NDB 7.5.0 | 0 | ...
LockMaintThreadsToCPU | CPU ID | N | NDB 7.5.0 | 0 | 0 / 64K
LockPagesInMainMemory | numeric | N | NDB 7.5.0 | 0 | 0 / 2
LogLevelCheckpoint | log level | N | NDB 7.5.0 | 0 | 0 / 15
LogLevelCongestion | level | N | NDB 7.5.0 | 0 | 0 / 15
LogLevelConnection | integer | N | NDB 7.5.0 | 0 | 0 / 15
LogLevelError | integer | N | NDB 7.5.0 | 0 | 0 / 15
LogLevelInfo | integer | N | NDB 7.5.0 | 0 | 0 / 15
LogLevelNodeRestart | integer | N | NDB 7.5.0 | 0 | 0 / 15
LogLevelShutdown | integer | N | NDB 7.5.0 | 0 | 0 / 15
LogLevelStartup | integer | N | NDB 7.5.0 | 1 | 0 / 15
LogLevelStatistic | integer | N | NDB 7.5.0 | 0 | 0 / 15
LongMessageBuffer | bytes | N | NDB 7.5.0 | 64M | 512K / 4294967039 (0xFFFFFEFF)
MaxAllocate | unsigned | N | NDB 7.5.0 | 32M | 1M / 1G
MaxBufferedEpochs | epochs | N | NDB 7.5.0 | 100 | 0 / 100000
MaxBufferedEpochBytes | bytes | N | NDB 7.5.0 | 26214400 | 26214400 (0x01900000) / 4294967039 (0xFFFFFEFF)
MaxDiskWriteSpeed | numeric | S | NDB 7.5.0 | 20M | 1M / 1024G
MaxDiskWriteSpeedOtherNodeRestart | numeric | S | NDB 7.5.0 | 50M | 1M / 1024G
MaxDiskWriteSpeedOwnRestart | numeric | S | NDB 7.5.0 | 200M | 1M / 1024G
MaxDMLOperationsPerTransaction | operations (DML) | N | NDB 7.5.0 | 4294967295 | 32 / 4294967295
MaxLCPStartDelay | seconds | N | NDB 7.5.0 | 0 | 0 / 600
MaxNoOfAttributes | integer | N | NDB 7.5.0 | 1000 | 32 / 4294967039 (0xFFFFFEFF)
MaxNoOfConcurrentIndexOperations | integer | N | NDB 7.5.0 | 8K | 0 / 4294967039 (0xFFFFFEFF)
MaxNoOfConcurrentOperations | integer | N | NDB 7.5.0 | 32K | 32 / 4294967039 (0xFFFFFEFF)
MaxNoOfConcurrentScans | integer | N | NDB 7.5.0 | 256 | 2 / 500
MaxNoOfConcurrentSubOperations | unsigned | N | NDB 7.5.0 | 256 | 0 / 4294967039 (0xFFFFFEFF)
MaxNoOfConcurrentTransactions | integer | N | NDB 7.5.0 | 4096 | 32 / 4294967039 (0xFFFFFEFF)
MaxNoOfFiredTriggers | integer | N | NDB 7.5.0 | 4000 | 0 / 4294967039 (0xFFFFFEFF)
MaxNoOfLocalOperations | integer | N | NDB 7.5.0 | UNDEFINED | 32 / 4294967039 (0xFFFFFEFF)
MaxNoOfLocalScans | integer | N | NDB 7.5.0 | [see text] | 32 / 4294967039 (0xFFFFFEFF)
MaxNoOfOpenFiles | unsigned | N | NDB 7.5.0 | 0 | 20 / 4294967039 (0xFFFFFEFF)
MaxNoOfOrderedIndexes | integer | N | NDB 7.5.0 | 128 | 0 / 4294967039 (0xFFFFFEFF)
MaxNoOfSavedMessages | integer | N | NDB 7.5.0 | 25 | 0 / 4294967039 (0xFFFFFEFF)
MaxNoOfSubscribers | unsigned | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
MaxNoOfSubscriptions | unsigned | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
MaxNoOfTables | integer | N | NDB 7.5.0 | 128 | 8 / 20320
MaxNoOfTriggers | integer | N | NDB 7.5.0 | 768 | 0 / 4294967039 (0xFFFFFEFF)
MaxParallelCopyInstances | integer | S | NDB 7.5.0 | 0 | 0 / 64
MaxParallelScansPerFragment | bytes | N | NDB 7.5.0 | 256 | 1 / 4294967039 (0xFFFFFEFF)
MaxStartFailRetries | unsigned | N | NDB 7.5.0 | 3 | 0 / 4294967039 (0xFFFFFEFF)
MemReportFrequency | unsigned | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
MinDiskWriteSpeed | numeric | S | NDB 7.5.0 | 10M | 1M / 1024G
MinFreePct | unsigned | N | NDB 7.5.0 | 5 | 0 / 100
NodeGroup | ... | IS | NDB 7.5.0 | [none] | 0 / 65536
NodeId | unsigned | IS | NDB 7.5.0 | [none] | 1 / 48
NoOfFragmentLogFiles | integer | IN | NDB 7.5.0 | 16 | 3 / 4294967039 (0xFFFFFEFF)
NoOfReplicas | integer | IS | NDB 7.5.0 | 2 | 1 / 4
Numa | integer | N | NDB 7.5.0 | 1 | 0 - 1
ODirect | boolean | N | NDB 7.5.0 | false | true, false
RealtimeScheduler | boolean | N | NDB 7.5.0 | false | true, false
RedoBuffer | bytes | N | NDB 7.5.0 | 32M | 1M / 4294967039 (0xFFFFFEFF)
RedoOverCommitCounter | numeric | N | NDB 7.5.0 | 3 | 0 / 4294967039 (0xFFFFFEFF)
RedoOverCommitLimit | seconds | N | NDB 7.5.0 | 20 | 0 / 4294967039 (0xFFFFFEFF)
RestartOnErrorInsert | error code | N | NDB 7.5.0 | 2 | 0 / 4
SchedulerExecutionTimer | µs | N | NDB 7.5.0 | 50 | 0 / 11000
SchedulerResponsiveness | integer | S | NDB 7.5.0 | 5 | 0 / 10
SchedulerSpinTimer | µs | N | NDB 7.5.0 | 0 | 0 / 500
ServerPort | unsigned | S | NDB 7.5.0 | [none] | 1 / 64K
SharedGlobalMemory | bytes | N | NDB 7.5.0 | 128M | 0 / 64T
StartFailRetryDelay | unsigned | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
StartFailureTimeout | milliseconds | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
StartNoNodeGroupTimeout | milliseconds | N | NDB 7.5.0 | 15000 | 0 / 4294967039 (0xFFFFFEFF)
StartPartialTimeout | milliseconds | N | NDB 7.5.0 | 30000 | 0 / 4294967039 (0xFFFFFEFF)
StartPartitionedTimeout | milliseconds | N | NDB 7.5.0 | 60000 | 0 / 4294967039 (0xFFFFFEFF)
StartupStatusReportFrequency | seconds | N | NDB 7.5.0 | 0 | 0 / 4294967039 (0xFFFFFEFF)
StopOnError | boolean | N | NDB 7.5.0 | 1 | 0, 1
StringMemory | % or bytes | S | NDB 7.5.0 | 25 | 0 / 4294967039 (0xFFFFFEFF)
TcpBind_INADDR_ANY | boolean | N | NDB 7.5.0 | false | true, false
TimeBetweenEpochs | milliseconds | N | NDB 7.5.0 | 100 | 0 / 32000
TimeBetweenEpochsTimeout | milliseconds | N | NDB 7.5.0 | 0 | 0 / 256000
TimeBetweenGlobalCheckpoints | milliseconds | N | NDB 7.5.0 | 2000 | 20 / 32000
TimeBetweenGlobalCheckpointsTimeout | milliseconds | N | NDB 7.5.0 | 120000 | 10 / 4294967039 (0xFFFFFEFF)
TimeBetweenInactiveTransactionAbortCheck | milliseconds | N | NDB 7.5.0 | 1000 | 1000 / 4294967039 (0xFFFFFEFF)
TimeBetweenLocalCheckpoints | number of 4-byte words, as a base-2 logarithm | N | NDB 7.5.0 | 20 | 0 / 31
TimeBetweenWatchDogCheck | milliseconds | N | NDB 7.5.0 | 6000 | 70 / 4294967039 (0xFFFFFEFF)
TimeBetweenWatchDogCheckInitial | milliseconds | N | NDB 7.5.0 | 6000 | 70 / 4294967039 (0xFFFFFEFF)
TotalSendBufferMemory | bytes | N | NDB 7.5.0 | 0 | 256K / 4294967039 (0xFFFFFEFF)
TransactionBufferMemory | bytes | N | NDB 7.5.0 | 1M | 1K / 4294967039 (0xFFFFFEFF)
TransactionDeadlockDetectionTimeout | milliseconds | N | NDB 7.5.0 | 1200 | 50 / 4294967039 (0xFFFFFEFF)
TransactionInactiveTimeout | milliseconds | N | NDB 7.5.0 | [see text] | 0 / 4294967039 (0xFFFFFEFF)
TwoPassInitialNodeRestartCopy | boolean | N | NDB 7.5.0 | false | true, false
UndoDataBuffer | unsigned | N | NDB 7.5.0 | 16M | 1M / 4294967039 (0xFFFFFEFF)
UndoIndexBuffer | unsigned | N | NDB 7.5.0 | 2M | 1M / 4294967039 (0xFFFFFEFF)

Table 21.2 Multi-Threaded Data Node Configuration Parameters

Parameter NameType or UnitsRestart TypeIn Version ... (and later)
Default Value
Minimum/Maximum or Permitted Values

MaxNoOfExecutionThreads

integer | IS | NDB 7.5.0
2
2 / 72

NoOfFragmentLogParts

numeric | IN | NDB 7.5.0
4
4, 8, 12, 16, 24, 32

ThreadConfig

string | IS | NDB 7.5.0
''
...
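
The interaction of these parameters can be seen in the following hypothetical [ndbd default] fragment, which sizes the data node thread pool either automatically, with MaxNoOfExecutionThreads, or explicitly, with ThreadConfig. The CPU numbers shown are assumptions for an eight-core host, and the two approaches should not be combined with conflicting values:

# config.ini fragment (CPU bindings shown are hypothetical)
[ndbd default]
# Simple approach: let the data node lay out its own threads.
MaxNoOfExecutionThreads=8

# Detailed alternative: bind each thread type to specific CPUs. If this
# is used instead, keep NoOfFragmentLogParts in step with the number of
# ldm threads.
# ThreadConfig=ldm={count=4,cpubind=1,2,3,4},main={cpubind=0},rep={cpubind=5},io={cpubind=6}
# NoOfFragmentLogParts=4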

21.3.2.2 NDB Cluster Management Node Configuration Parameters

The summary table in this section provides information about parameters used in the [ndb_mgmd] or [mgm] sections of a config.ini file for configuring NDB Cluster management nodes. For detailed descriptions and other additional information about each of these parameters, see Section 21.3.3.5, “Defining an NDB Cluster Management Server”.

Restart types.  Changes in NDB Cluster configuration parameters do not take effect until the cluster is restarted. The type of restart required to change a given parameter is indicated in the summary table as follows:

  • N (node restart): The parameter can be updated using a rolling restart of the cluster.

  • IN (initial node restart): The parameter can be updated using a rolling restart in which each data node is restarted with the --initial option.

  • S (system restart): The cluster must be shut down completely, then restarted, to effect a change in the parameter.

  • IS (initial system restart): The cluster must be shut down completely and restarted, with each data node restarted using the --initial option.

For more information about restart types, see Section 21.3.2, “Overview of NDB Cluster Configuration Parameters, Options, and Variables”.

Table 21.3 Management Node Configuration Parameters

Parameter Name | Type or Units | Restart Type | In Version ... (and later)
Default Value
Minimum/Maximum or Permitted Values

ArbitrationDelay

milliseconds | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

ArbitrationRank

0-2 | N | NDB 7.5.0
1
0 / 2

DataDir

path | N | NDB 7.5.0
.
...

ExecuteOnComputer

name | S | NDB 7.5.0
[none]
...

HeartbeatIntervalMgmdMgmd

milliseconds | N | NDB 7.5.0
1500
100 / 4294967039 (0xFFFFFEFF)

HeartbeatThreadPriority

string | S | NDB 7.5.0
[none]
...

HostName

name or IP address | N | NDB 7.5.0
[none]
...

Id

unsigned | IS | NDB 7.5.0
[none]
1 / 255

LogDestination

{CONSOLE|SYSLOG|FILE} | N | NDB 7.5.0
[see text]
...

NodeId

unsigned | IS | NDB 7.5.0
[none]
1 / 255

PortNumber

unsigned | S | NDB 7.5.0
1186
0 / 64K

PortNumberStats

unsigned | N | NDB 7.5.0
[none]
0 / 64K

TotalSendBufferMemory

bytes | N | NDB 7.5.0
0
256K / 4294967039 (0xFFFFFEFF)

wan

boolean | N | NDB 7.5.0
false
true, false

Note

After making changes in a management node's configuration, it is necessary to perform a rolling restart of the cluster for the new configuration to take effect. See Section 21.3.3.5, “Defining an NDB Cluster Management Server”, for more information.

To add new management servers to a running NDB Cluster, it is also necessary to perform a rolling restart of all cluster nodes after modifying any existing config.ini files. For more information about issues arising when using multiple management nodes, see Section 21.1.6.10, “Limitations Relating to Multiple NDB Cluster Nodes”.

21.3.2.3 NDB Cluster SQL Node and API Node Configuration Parameters

The summary table in this section provides information about parameters used in the [mysqld] and [api] sections of a config.ini file for configuring NDB Cluster SQL nodes and API nodes. For detailed descriptions and other additional information about each of these parameters, see Section 21.3.3.7, “Defining SQL and Other API Nodes in an NDB Cluster”.

Note

For a discussion of MySQL server options for NDB Cluster, see Section 21.3.3.8.1, “MySQL Server Options for NDB Cluster”; for information about MySQL server system variables relating to NDB Cluster, see Section 21.3.3.8.2, “NDB Cluster System Variables”.

Restart types.  Changes in NDB Cluster configuration parameters do not take effect until the cluster is restarted. The type of restart required to change a given parameter is indicated in the summary table as follows:

  • N (node restart): The parameter can be updated using a rolling restart of the cluster.

  • IN (initial node restart): The parameter can be updated using a rolling restart in which each data node is restarted with the --initial option.

  • S (system restart): The cluster must be shut down completely, then restarted, to effect a change in the parameter.

  • IS (initial system restart): The cluster must be shut down completely and restarted, with each data node restarted using the --initial option.

For more information about restart types, see Section 21.3.2, “Overview of NDB Cluster Configuration Parameters, Options, and Variables”.

Table 21.4 SQL Node / API Node Configuration Parameters

Parameter Name | Type or Units | Restart Type | In Version ... (and later)
Default Value
Minimum/Maximum or Permitted Values

ApiVerbose

bytes | N | NDB 7.5.2
undefined
0 / 100

ArbitrationDelay

milliseconds | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

ArbitrationRank

0-2 | N | NDB 7.5.0
0
0 / 2

AutoReconnect

boolean | N | NDB 7.5.0
false
true, false

BatchByteSize

bytes | N | NDB 7.5.0
16K
1K / 1M

BatchSize

records | N | NDB 7.5.0
256
1 / 992

ConnectBackoffMaxTime

integer | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

ConnectionMap

string | N | NDB 7.5.0
[none]
...

DefaultHashMapSize

buckets | N | NDB 7.5.0
3840
0 / 3840

DefaultOperationRedoProblemAction

enumeration | S | NDB 7.5.0
QUEUE
ABORT, QUEUE

EventLogBufferSize

bytes | S | NDB 7.5.0
8192
0 / 64K

ExecuteOnComputer

name | S | NDB 7.5.0
[none]
...

ExtraSendBufferMemory

bytes | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

HeartbeatThreadPriority

string | S | NDB 7.5.0
[none]
...

HostName

name or IP address | N | NDB 7.5.0
[none]
...

Id

unsigned | IS | NDB 7.5.0
[none]
1 / 255

MaxScanBatchSize

bytes | N | NDB 7.5.0
256K
32K / 16M

NodeId

unsigned | IS | NDB 7.5.0
[none]
1 / 255

StartConnectBackoffMaxTime

integer | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

TotalSendBufferMemory

bytes | N | NDB 7.5.0
0
256K / 4294967039 (0xFFFFFEFF)

wan

boolean | N | NDB 7.5.0
false
true, false

Note

To add new SQL or API nodes to the configuration of a running NDB Cluster, it is necessary to perform a rolling restart of all cluster nodes after adding new [mysqld] or [api] sections to the config.ini file (or files, if you are using more than one management server). This must be done before the new SQL or API nodes can connect to the cluster.

It is not necessary to perform any restart of the cluster if new SQL or API nodes can employ previously unused API slots in the cluster configuration to connect to the cluster.
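
For example, a config.ini file might combine a host-bound slot with an open one, as in the following hypothetical fragment; an SQL or API node on any host can then connect using the open [api] slot without requiring a restart:

# config.ini fragment (the hostname is hypothetical)
[mysqld]
HostName=sql-host-1.example.com

# An open slot, usable by an SQL or API node on any host:
[api]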

21.3.2.4 Other NDB Cluster Configuration Parameters

The summary tables in this section provide information about parameters used in the [computer], [tcp], [shm], and [sci] sections of a config.ini file for configuring NDB Cluster. For detailed descriptions and other additional information about individual parameters, see Section 21.3.3.9, “NDB Cluster TCP/IP Connections”, Section 21.3.3.11, “NDB Cluster Shared-Memory Connections”, or Section 21.3.3.12, “SCI Transport Connections in NDB Cluster”, as appropriate.

Restart types.  Changes in NDB Cluster configuration parameters do not take effect until the cluster is restarted. The type of restart required to change a given parameter is indicated in the summary tables as follows:

  • N (node restart): The parameter can be updated using a rolling restart of the cluster.

  • IN (initial node restart): The parameter can be updated using a rolling restart in which each data node is restarted with the --initial option.

  • S (system restart): The cluster must be shut down completely, then restarted, to effect a change in the parameter.

  • IS (initial system restart): The cluster must be shut down completely and restarted, with each data node restarted using the --initial option.

For more information about restart types, see Section 21.3.2, “Overview of NDB Cluster Configuration Parameters, Options, and Variables”.

Table 21.5 Computer Configuration Parameters

Parameter Name | Type or Units | Restart Type | In Version ... (and later)
Default Value
Minimum/Maximum or Permitted Values

HostName

name or IP address | N | NDB 7.5.0
[none]
...

Id

string | IS | NDB 7.5.0
[none]
...

Table 21.6 TCP Configuration Parameters

Parameter Name | Type or Units | Restart Type | In Version ... (and later)
Default Value
Minimum/Maximum or Permitted Values

Checksum

boolean | N | NDB 7.5.0
false
true, false

Group

unsigned | N | NDB 7.5.0
55
0 / 200

NodeId1

numeric | N | NDB 7.5.0
[none]
...

NodeId2

numeric | N | NDB 7.5.0
[none]
...

NodeIdServer

numeric | N | NDB 7.5.0
[none]
...

OverloadLimit

bytes | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

PortNumber

unsigned | S | NDB 7.5.0
[none]
0 / 64K

Proxy

string | N | NDB 7.5.0
[none]
...

ReceiveBufferMemory

bytes | N | NDB 7.5.0
2M
16K / 4294967039 (0xFFFFFEFF)

SendBufferMemory

unsigned | N | NDB 7.5.0
2M
256K / 4294967039 (0xFFFFFEFF)

SendSignalId

boolean | N | NDB 7.5.0
[see text]
true, false

TCP_MAXSEG_SIZE

unsigned | N | NDB 7.5.0
0
0 / 2G

TCP_RCV_BUF_SIZE

unsigned | N | NDB 7.5.0
0
0 / 2G

TCP_SND_BUF_SIZE

unsigned | N | NDB 7.5.0
0
0 / 2G

TcpBind_INADDR_ANY

boolean | N | NDB 7.5.0
false
true, false

Table 21.7 Shared Memory Configuration Parameters

Parameter Name | Type or Units | Restart Type | In Version ... (and later)
Default Value
Minimum/Maximum or Permitted Values

Checksum

boolean | N | NDB 7.5.0
true
true, false

Group

unsigned | N | NDB 7.5.0
35
0 / 200

NodeId1

numeric | N | NDB 7.5.0
[none]
...

NodeId2

numeric | N | NDB 7.5.0
[none]
...

NodeIdServer

numeric | N | NDB 7.5.0
[none]
...

OverloadLimit

bytes | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

PortNumber

unsigned | S | NDB 7.5.0
[none]
0 / 64K

SendSignalId

boolean | N | NDB 7.5.0
false
true, false

ShmKey

unsigned | N | NDB 7.5.0
[none]
0 / 4294967039 (0xFFFFFEFF)

ShmSize

bytes | N | NDB 7.5.0
1M
64K / 4294967039 (0xFFFFFEFF)

Signum

unsigned | N | NDB 7.5.0
[none]
0 / 4294967039 (0xFFFFFEFF)

Table 21.8 SCI Configuration Parameters

Parameter Name | Type or Units | Restart Type | In Version ... (and later)
Default Value
Minimum/Maximum or Permitted Values

Checksum

boolean | N | NDB 7.5.0
false
true, false

Group

unsigned | N | NDB 7.5.0
15
0 / 200

Host1SciId0

unsigned | N | NDB 7.5.0
[none]
0 / 4294967039 (0xFFFFFEFF)

Host1SciId1

unsigned | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

Host2SciId0

unsigned | N | NDB 7.5.0
[none]
0 / 4294967039 (0xFFFFFEFF)

Host2SciId1

unsigned | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

NodeId1

numeric | N | NDB 7.5.0
[none]
...

NodeId2

numeric | N | NDB 7.5.0
[none]
...

NodeIdServer

numeric | N | NDB 7.5.0
[none]
...

OverloadLimit

bytes | N | NDB 7.5.0
0
0 / 4294967039 (0xFFFFFEFF)

PortNumber

unsigned | S | NDB 7.5.0
[none]
0 / 64K

SendLimit

unsigned | N | NDB 7.5.0
8K
128 / 32K

SendSignalId

boolean | N | NDB 7.5.0
true
true, false

SharedBufferSize

unsigned | N | NDB 7.5.0
10M
64K / 4294967039 (0xFFFFFEFF)

21.3.2.5 NDB Cluster mysqld Option and Variable Reference

The following table provides a list of the command-line options, server variables, and status variables applicable within mysqld when it is running as an SQL node in an NDB Cluster. For a table showing all command-line options, server variables, and status variables available for use with mysqld, see Section 6.1.3, “Server Option and Variable Reference”.

Table 21.9 MySQL Server Options and Variables for MySQL Cluster: MySQL NDB Cluster 7.5

Option or Variable Name
Command Line | System Variable | Status Variable
Option File | Scope | Dynamic
Notes

Com_show_ndb_status

No No Yes
No Both No

DESCRIPTION: Count of SHOW NDB STATUS statements

Handler_discover

No No Yes
No Both No

DESCRIPTION: Number of times that tables have been discovered

ndb-batch-size

Yes Yes No
Yes Global No

DESCRIPTION: Size (in bytes) to use for NDB transaction batches

ndb-blob-read-batch-bytes

Yes Yes No
Yes Both Yes

DESCRIPTION: Specifies size in bytes that large BLOB reads should be batched into. 0 = no limit.

ndb-blob-write-batch-bytes

Yes Yes No
Yes Both Yes

DESCRIPTION: Specifies size in bytes that large BLOB writes should be batched into. 0 = no limit.

ndb-cluster-connection-pool

Yes Yes Yes
Yes Global No

DESCRIPTION: Number of connections to the cluster used by MySQL

ndb-cluster-connection-pool-nodeids

Yes Yes No
Yes Global No

DESCRIPTION: Comma-separated list of node IDs for connections to the cluster used by MySQL; the number of nodes in the list must be the same as the value set for --ndb-cluster-connection-pool

ndb-connectstring

Yes No No
Yes No

DESCRIPTION: Point to the management server that distributes the cluster configuration

ndb-default-column-format

Yes Yes No
Yes Global Yes

DESCRIPTION: Use this value (FIXED or DYNAMIC) by default for COLUMN_FORMAT and ROW_FORMAT options when creating or adding columns to a table.

ndb-deferred-constraints

Yes Yes No
Yes Both Yes

DESCRIPTION: Specifies that constraint checks on unique indexes (where these are supported) should be deferred until commit time. Not normally needed or used; for testing purposes only.

ndb-distribution

Yes Yes No
Yes Global Yes

DESCRIPTION: Default distribution for new tables in NDBCLUSTER (KEYHASH or LINHASH, default is KEYHASH)

ndb-log-apply-status

Yes Yes No
Yes Global No

DESCRIPTION: Cause a MySQL server acting as a slave to log mysql.ndb_apply_status updates received from its immediate master in its own binary log, using its own server ID. Effective only if the server is started with the --ndbcluster option.

ndb-log-empty-epochs

Yes Yes No
Yes Global Yes

DESCRIPTION: When enabled, causes epochs in which there were no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when --log-slave-updates is enabled.

ndb-log-empty-update

Yes Yes No
Yes Global Yes

DESCRIPTION: When enabled, causes updates that produced no changes to be written to the ndb_apply_status and ndb_binlog_index tables, even when --log-slave-updates is enabled.

ndb-log-exclusive-reads

Yes Yes No
Yes Both Yes

DESCRIPTION: Log primary key reads with exclusive locks; allow conflict resolution based on read conflicts.

ndb-log-orig

Yes Yes No
Yes Global No

DESCRIPTION: Log originating server id and epoch in mysql.ndb_binlog_index table.

ndb-log-transaction-id

Yes Yes No
Yes Global No

DESCRIPTION: Write NDB transaction IDs in the binary log. Requires --log-bin-v1-events=OFF.

ndb-mgmd-host

Yes No No
Yes No

DESCRIPTION: Set the host (and port, if desired) for connecting to management server

ndb-nodeid

Yes No Yes
Yes Global No

DESCRIPTION: MySQL Cluster node ID for this MySQL server

ndb-recv-thread-activation-threshold

Yes No No
Yes No

DESCRIPTION: Activation threshold when receive thread takes over the polling of the cluster connection (measured in concurrently active threads)

ndb-recv-thread-cpu-mask

Yes No No
Yes No

DESCRIPTION: CPU mask for locking receiver threads to specific CPUs; specified as hexadecimal. See documentation for details.

ndb-transid-mysql-connection-map

Yes No No
No No

DESCRIPTION: Enable or disable the ndb_transid_mysql_connection_map plugin; that is, enable or disable the INFORMATION_SCHEMA table having that name.

ndb-wait-connected

Yes Yes No
Yes Global No

DESCRIPTION: Time (in seconds) for the MySQL server to wait for connection to cluster management and data nodes before accepting MySQL client connections.

ndb-wait-setup

Yes Yes No
Yes Global No

DESCRIPTION: Time (in seconds) for the MySQL server to wait for NDB engine setup to complete.

ndb-allow-copying-alter-table

Yes Yes No
Yes Both Yes

DESCRIPTION: Set to OFF to keep ALTER TABLE from using copying operations on NDB tables

Ndb_api_bytes_received_count

No No Yes
No Global No

DESCRIPTION: Amount of data (in bytes) received from the data nodes by this MySQL Server (SQL node).

Ndb_api_bytes_received_count_session

No No Yes
No Session No

DESCRIPTION: Amount of data (in bytes) received from the data nodes in this client session.

Ndb_api_bytes_received_count_slave

No No Yes
No Global No

DESCRIPTION: Amount of data (in bytes) received from the data nodes by this slave.

Ndb_api_bytes_sent_count

No No Yes
No Global No

DESCRIPTION: Amount of data (in bytes) sent to the data nodes by this MySQL Server (SQL node).

Ndb_api_bytes_sent_count_slave

No No Yes
No Global No

DESCRIPTION: Amount of data (in bytes) sent to the data nodes by this slave.

Ndb_api_event_bytes_count_injector

No No Yes
No Global No

DESCRIPTION: Number of bytes of events received by the NDB binary log injector thread.

Ndb_api_event_data_count_injector

No No Yes
No Global No

DESCRIPTION: Number of row change events received by the NDB binary log injector thread.

Ndb_api_event_nondata_count_injector

No No Yes
No Global No

DESCRIPTION: Number of events received, other than row change events, by the NDB binary log injector thread.

Ndb_api_pk_op_count

No No Yes
No Global No

DESCRIPTION: Number of operations based on or using primary keys by this MySQL Server (SQL node).

Ndb_api_pk_op_count_session

No No Yes
No Session No

DESCRIPTION: Number of operations based on or using primary keys in this client session.

Ndb_api_pk_op_count_slave

No No Yes
No Global No

DESCRIPTION: Number of operations based on or using primary keys by this slave.

Ndb_api_pruned_scan_count

No No Yes
No Global No

DESCRIPTION: Number of scans that have been pruned to a single partition by this MySQL Server (SQL node).

Ndb_api_pruned_scan_count_session

No No Yes
No Session No

DESCRIPTION: Number of scans that have been pruned to a single partition in this client session.

Ndb_api_range_scan_count_slave

No No Yes
No Global No

DESCRIPTION: Number of range scans that have been started by this slave.

Ndb_api_read_row_count

No No Yes
No Global No

DESCRIPTION: Total number of rows that have been read by this MySQL Server (SQL node).

Ndb_api_read_row_count_session

No No Yes
No Session No

DESCRIPTION: Total number of rows that have been read in this client session.

Ndb_api_scan_batch_count_slave

No No Yes
No Global No

DESCRIPTION: Number of batches of rows received by this slave.

Ndb_api_table_scan_count

No No Yes
No Global No

DESCRIPTION: Number of table scans that have been started, including scans of internal tables, by this MySQL Server (SQL node).

Ndb_api_table_scan_count_session

No No Yes
No Session No

DESCRIPTION: Number of table scans that have been started, including scans of internal tables, in this client session.

Ndb_api_trans_abort_count

No No Yes
No Global No

DESCRIPTION: Number of transactions aborted by this MySQL Server (SQL node).

Ndb_api_trans_abort_count_session

No No Yes
No Session No

DESCRIPTION: Number of transactions aborted in this client session.

Ndb_api_trans_abort_count_slave

No No Yes
No Global No

DESCRIPTION: Number of transactions aborted by this slave.

Ndb_api_trans_close_count

No No Yes
No Global No

DESCRIPTION: Number of transactions closed (may be greater than the sum of TransCommitCount and TransAbortCount) by this MySQL Server (SQL node).

Ndb_api_trans_close_count_session

No No Yes
No Session No

DESCRIPTION: Number of transactions closed (may be greater than the sum of TransCommitCount and TransAbortCount) in this client session.

Ndb_api_trans_close_count_slave

No No Yes
No Global No

DESCRIPTION: Number of transactions closed (may be greater than the sum of TransCommitCount and TransAbortCount) by this slave.

Ndb_api_trans_commit_count

No No Yes
No Global No

DESCRIPTION: Number of transactions committed by this MySQL Server (SQL node).

Ndb_api_trans_commit_count_session

No No Yes
No Session No

DESCRIPTION: Number of transactions committed in this client session.

Ndb_api_trans_commit_count_slave

No No Yes
No Global No

DESCRIPTION: Number of transactions committed by this slave.

Ndb_api_trans_local_read_row_count_slave

No No Yes
No Global No

DESCRIPTION: Total number of rows that have been read by this slave.

Ndb_api_trans_start_count

No No Yes
No Global No

DESCRIPTION: Number of transactions started by this MySQL Server (SQL node).

Ndb_api_trans_start_count_session

No No Yes
No Session No

DESCRIPTION: Number of transactions started in this client session.

Ndb_api_trans_start_count_slave

No No Yes
No Global No

DESCRIPTION: Number of transactions started by this slave.

Ndb_api_uk_op_count

No No Yes
No Global No

DESCRIPTION: Number of operations based on or using unique keys by this MySQL Server (SQL node).

Ndb_api_uk_op_count_slave

No No Yes
No Global No

DESCRIPTION: Number of operations based on or using unique keys by this slave.

Ndb_api_wait_exec_complete_count

No No Yes
No Global No

DESCRIPTION: Number of times thread has been blocked while waiting for execution of an operation to complete by this MySQL Server (SQL node).

Ndb_api_wait_exec_complete_count_session

No No Yes
No Session No

DESCRIPTION: Number of times thread has been blocked while waiting for execution of an operation to complete in this client session.

Ndb_api_wait_exec_complete_count_slave

No No Yes
No Global No

DESCRIPTION: Number of times thread has been blocked while waiting for execution of an operation to complete by this slave.

Ndb_api_wait_meta_request_count

No No Yes
No Global No

DESCRIPTION: Number of times thread has been blocked waiting for a metadata-based signal by this MySQL Server (SQL node).

Ndb_api_wait_meta_request_count_session

No No Yes
No Session No

DESCRIPTION: Number of times thread has been blocked waiting for a metadata-based signal in this client session.

Ndb_api_wait_nanos_count

No No Yes
No Global No

DESCRIPTION: Total time (in nanoseconds) spent waiting for some type of signal from the data nodes by this MySQL Server (SQL node).

Ndb_api_wait_nanos_count_session

No No Yes
No Session No

DESCRIPTION: Total time (in nanoseconds) spent waiting for some type of signal from the data nodes in this client session.

Ndb_api_wait_nanos_count_slave

No No Yes
No Global No

DESCRIPTION: Total time (in nanoseconds) spent waiting for some type of signal from the data nodes by this slave.

Ndb_api_wait_scan_result_count

No No Yes
No Global No

DESCRIPTION: Number of times thread has been blocked while waiting for a scan-based signal by this MySQL Server (SQL node).

Ndb_api_wait_scan_result_count_session

No No Yes
No Session No

DESCRIPTION: Number of times thread has been blocked while waiting for a scan-based signal in this client session.

Ndb_api_wait_scan_result_count_slave

No No Yes
No Global No

DESCRIPTION: Number of times thread has been blocked while waiting for a scan-based signal by this slave.

ndb_autoincrement_prefetch_sz

Yes Yes No
Yes Both Yes

DESCRIPTION: NDB auto-increment prefetch size

ndb_cache_check_time

Yes Yes No
Yes Global Yes

DESCRIPTION: Number of milliseconds between checks of cluster SQL nodes made by the MySQL query cache

ndb_clear_apply_status

Yes Yes No
No Global Yes

DESCRIPTION: Causes RESET SLAVE to clear all rows from the ndb_apply_status table. ON by default.

Ndb_cluster_node_id

No No Yes
No Both No

DESCRIPTION: If the server is acting as a MySQL Cluster node, then the value of this variable is its node ID in the cluster

Ndb_config_from_host

No No Yes
No Both No

DESCRIPTION: The host name or IP address of the Cluster management server. Formerly Ndb_connected_host

Ndb_config_from_port

No No Yes
No Both No

DESCRIPTION: The port for connecting to Cluster management server. Formerly Ndb_connected_port

Ndb_conflict_fn_epoch_trans

No No Yes
No Global No

DESCRIPTION: Number of rows that have been found in conflict by the NDB$EPOCH_TRANS() conflict detection function

Ndb_conflict_fn_max

No No Yes
No Global No

DESCRIPTION: If the server is part of a MySQL Cluster involved in cluster replication, the value of this variable indicates the number of times that conflict resolution based on "greater timestamp wins" has been applied

Ndb_conflict_fn_old

No No Yes
No Global No

DESCRIPTION: If the server is part of a MySQL Cluster involved in cluster replication, the value of this variable indicates the number of times that "same timestamp wins" conflict resolution has been applied

Ndb_conflict_trans_detect_iter_count

No No Yes
No Global No

DESCRIPTION: Number of internal iterations required to commit an epoch transaction. Should be (slightly) greater than or equal to Ndb_conflict_trans_conflict_commit_count.

Ndb_conflict_trans_row_reject_count

No No Yes
No Global No

DESCRIPTION: Total number of rows realigned after being found in conflict by a transactional conflict function. Includes Ndb_conflict_trans_row_conflict_count and any rows included in or dependent on conflicting transactions.

ndb_data_node_neighbour

Yes Yes No
Yes Global Yes

DESCRIPTION: Specifies cluster data node "closest" to this MySQL Server, for transaction hinting and fully replicated tables

ndb_default_column_format

Yes Yes No
Yes Global Yes

DESCRIPTION: Sets default row format and column format (FIXED or DYNAMIC) used for new NDB tables.

ndb_deferred_constraints

Yes Yes No
Yes Both Yes

DESCRIPTION: Specifies that constraint checks should be deferred (where these are supported). Not normally needed or used; for testing purposes only.

ndb_distribution

Yes Yes No
Yes Global Yes

DESCRIPTION: Default distribution for new tables in NDBCLUSTER (KEYHASH or LINHASH, default is KEYHASH)

ndb_eventbuffer_free_percent

Yes Yes No
Yes Global Yes

DESCRIPTION: Percentage of free memory that should be available in event buffer before resumption of buffering, after reaching limit set by ndb_eventbuffer_max_alloc.

ndb_eventbuffer_max_alloc

Yes Yes No
Yes Global Yes

DESCRIPTION: Maximum memory that can be allocated for buffering events by the NDB API. Defaults to 0 (no limit).

ndb_extra_logging

Yes Yes No
Yes Global Yes

DESCRIPTION: Controls logging of MySQL Cluster schema, connection, and data distribution events in the MySQL error log

ndb_force_send

Yes Yes No
Yes Both Yes

DESCRIPTION: Forces sending of buffers to NDB immediately, without waiting for other threads

ndb_fully_replicated

Yes Yes No
Yes Both Yes

DESCRIPTION: Whether new NDB tables are fully replicated

ndb_index_stat_enable

Yes Yes No
Yes Both Yes

DESCRIPTION: Use NDB index statistics in query optimization

ndb_index_stat_option

Yes Yes No
Yes Both Yes

DESCRIPTION: Comma-separated list of tunable options for NDB index statistics; the list should contain no spaces

ndb_join_pushdown

No Yes No
No Both Yes

DESCRIPTION: Enables pushing down of joins to data nodes

ndb_log_apply_status

Yes Yes No
Yes Global No

DESCRIPTION: Whether or not a MySQL server acting as a slave logs mysql.ndb_apply_status updates received from its immediate master in its own binary log, using its own server ID.

ndb_log_bin

Yes Yes No
No Both Yes

DESCRIPTION: Write updates to NDB tables in the binary log. Effective only if binary logging is enabled with --log-bin.

ndb_log_binlog_index

Yes Yes No
No Global Yes

DESCRIPTION: Insert mapping between epochs and binary log positions into the ndb_binlog_index table. Defaults to ON. Effective only if binary logging is enabled on the server.

ndb_log_empty_epochs

Yes Yes No
Yes Global Yes

DESCRIPTION: When enabled, epochs in which there were no changes are written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.

ndb_log_empty_update

Yes Yes No
Yes Global Yes

DESCRIPTION: When enabled, updates which produce no changes are written to the ndb_apply_status and ndb_binlog_index tables, even when log_slave_updates is enabled.

ndb_log_exclusive_reads

Yes Yes No
Yes Both Yes

DESCRIPTION: Log primary key reads with exclusive locks; allow conflict resolution based on read conflicts.

ndb_log_orig

Yes Yes No
Yes Global No

DESCRIPTION: Whether the id and epoch of the originating server are recorded in the mysql.ndb_binlog_index table. Set using the --ndb-log-orig option when starting mysqld.

ndb_log_transaction_id

No Yes No
No Global No

DESCRIPTION: Whether NDB transaction IDs are written into the binary log. (Read-only.)

ndb_log_updated_only

Yes Yes No
Yes Global Yes

DESCRIPTION: Log complete rows (ON) or updates only (OFF)

Ndb_number_of_data_nodes

No No Yes
No Global No

DESCRIPTION: If the server is part of a MySQL Cluster, the value of this variable is the number of data nodes in the cluster

ndb_optimization_delay

No Yes No
No Global Yes

DESCRIPTION: Sets the number of milliseconds to wait between processing sets of rows by OPTIMIZE TABLE on NDB tables.

ndb_optimized_node_selection

Yes Yes No
Yes Global No

DESCRIPTION: Determines how an SQL node chooses a cluster data node to use as transaction coordinator

Ndb_pushed_queries_defined

No No Yes
No Global No

DESCRIPTION: Number of joins that API nodes have attempted to push down to the data nodes

Ndb_pushed_queries_executed

No No Yes
No Global No

DESCRIPTION: Number of joins successfully pushed down and executed on the data nodes

ndb_read_backup

Yes Yes No
Yes Global Yes

DESCRIPTION: Enable read from any replica

ndb_recv_thread_activation_threshold

No No No
No No

DESCRIPTION: Activation threshold when receive thread takes over the polling of the cluster connection (measured in concurrently active threads)

ndb_recv_thread_cpu_mask

No Yes No
No Global Yes

DESCRIPTION: CPU mask for locking receiver threads to specific CPUs; specified as hexadecimal. See documentation for details.

ndb_report_thresh_binlog_epoch_slip

Yes Yes No
Yes Global Yes

DESCRIPTION: NDB 7.5.4 and later: Threshold for number of epochs completely buffered, but not yet consumed by binlog injector thread which when exceeded generates BUFFERED_EPOCHS_OVER_THRESHOLD event buffer status message; prior to NDB 7.5.4: Threshold for number of epochs to lag behind before reporting binary log status

ndb_report_thresh_binlog_mem_usage

Yes Yes No
Yes Global Yes

DESCRIPTION: This is a threshold on the percentage of free memory remaining before reporting binary log status

Ndb_scan_count

No No Yes
No Global No

DESCRIPTION: The total number of scans executed by NDB since the cluster was last started

ndb_show_foreign_key_mock_tables

Yes Yes No
Yes Global Yes

DESCRIPTION: Show the mock tables used to support foreign_key_checks=0.

ndb_slave_conflict_role

Yes Yes No
Yes Global Yes

DESCRIPTION: Role for slave to play in conflict detection and resolution. Value is one of PRIMARY, SECONDARY, PASS, or NONE (default). Can be changed only when slave SQL thread is stopped. See documentation for further information.

Ndb_slave_max_replicated_epoch

No Yes No
No Global No

DESCRIPTION: The most recently committed NDB epoch on this slave. When this value is greater than or equal to Ndb_conflict_last_conflict_epoch, no conflicts have yet been detected.

ndb_table_no_logging

No Yes No
No Session Yes

DESCRIPTION: NDB tables created when this setting is enabled are not checkpointed to disk (although table schema files are created). The setting in effect when the table is created with or altered to use NDBCLUSTER persists for the lifetime of the table.

ndb_table_temporary

No Yes No
No Session Yes

DESCRIPTION: NDB tables are not persistent on disk: no schema files are created and the tables are not logged

ndb_use_exact_count

No Yes No
No Both Yes

DESCRIPTION: Use exact row count when planning queries

ndb_use_transactions

Yes Yes No
Yes Both Yes

DESCRIPTION: Enables transaction support in NDB; ON by default. Disabling transactions is not normally needed or recommended.

ndb_version

No Yes No
No Global No

DESCRIPTION: Shows build and NDB engine version as an integer.

ndb_version_string

No Yes No
No Global No

DESCRIPTION: Shows build information including NDB engine version in ndb-x.y.z format.

ndbcluster

Yes No No
Yes No

DESCRIPTION: Enable NDB Cluster (if this version of MySQL supports it)

Disabled by --skip-ndbcluster

ndbinfo_database

No Yes No
No Global No

DESCRIPTION: The name used for the NDB information database; read only.

ndbinfo_max_bytes

Yes Yes No
No Both Yes

DESCRIPTION: Used for debugging only.

ndbinfo_max_rows

Yes Yes No
No Both Yes

DESCRIPTION: Used for debugging only.

ndbinfo_offline

No Yes No
No Global Yes

DESCRIPTION: Put the ndbinfo database into offline mode, in which no rows are returned from tables or views.

ndbinfo_show_hidden

Yes Yes No
No Both Yes

DESCRIPTION: Whether to show ndbinfo internal base tables in the mysql client. The default is OFF.

ndbinfo_table_prefix

Yes Yes No
No Both Yes

DESCRIPTION: The prefix to use for naming ndbinfo internal base tables

ndbinfo_version

No Yes No
No Global No

DESCRIPTION: The version of the ndbinfo engine; read only.

server-id-bits

Yes Yes No
Yes Global No

DESCRIPTION: Sets the number of least significant bits in the server_id actually used for identifying the server, permitting NDB API applications to store application data in the most significant bits. server_id must be less than 2 to the power of this value.

server_id_bits

Yes Yes No
Yes Global No

DESCRIPTION: The effective value of server_id if the server was started with the --server-id-bits option set to a nondefault value.

slave_allow_batching

Yes Yes No
Yes Global Yes

DESCRIPTION: Turns update batching on and off for a replication slave

transaction_allow_batching

No Yes No
No Session Yes

DESCRIPTION: Allows batching of statements within a transaction. Disable AUTOCOMMIT to use.


21.3.3 NDB Cluster Configuration Files

Configuring NDB Cluster requires working with two files:

  • my.cnf: Specifies options for all NDB Cluster executables. This file, with which you should be familiar from previous work with MySQL, must be accessible by each executable running in the cluster.

  • config.ini: This file, sometimes known as the global configuration file, is read only by the NDB Cluster management server, which then distributes the information contained therein to all processes participating in the cluster. config.ini contains a description of each node involved in the cluster. This includes configuration parameters for data nodes and configuration parameters for connections between all nodes in the cluster. For a quick reference to the sections that can appear in this file, and what sorts of configuration parameters may be placed in each section, see Sections of the config.ini File.

Caching of configuration data.  NDB uses stateful configuration. Rather than reading the global configuration file every time the management server is restarted, the management server caches the configuration the first time it is started, and thereafter, the global configuration file is read only when one of the following conditions is true:

  • The management server is started using the --initial option.  When --initial is used, the global configuration file is re-read, any existing cache files are deleted, and the management server creates a new configuration cache.

  • The management server is started using the --reload option.  The --reload option causes the management server to compare its cache with the global configuration file. If they differ, the management server creates a new configuration cache; any existing configuration cache is preserved, but not used. If the management server's cache and the global configuration file contain the same configuration data, then the existing cache is used, and no new cache is created.

  • The management server is started using --config-cache=FALSE.  This disables --config-cache (enabled by default), and can be used to force the management server to bypass configuration caching altogether. In this case, the management server ignores any configuration files that may be present, always reading its configuration data from the config.ini file instead.

  • No configuration cache is found.  In this case, the management server reads the global configuration file and creates a cache containing the same configuration data as found in the file.

Configuration cache files.  The management server by default creates configuration cache files in a directory named mysql-cluster in the MySQL installation directory. (If you build NDB Cluster from source on a Unix system, the default location is /usr/local/mysql-cluster.) This can be overridden at runtime by starting the management server with the --configdir option. Configuration cache files are binary files named according to the pattern ndb_node_id_config.bin.seq_id, where node_id is the management server's node ID in the cluster, and seq_id is a cache identifier. Cache files are numbered sequentially using seq_id, in the order in which they are created. The management server uses the latest cache file as determined by the seq_id.

Note

It is possible to roll back to a previous configuration by deleting later configuration cache files, or by renaming an earlier cache file so that it has a higher seq_id. However, since configuration cache files are written in a binary format, you should not attempt to edit their contents by hand.

For more information about the --configdir, --config-cache, --initial, and --reload options for the NDB Cluster management server, see Section 21.4.4, “ndb_mgmd — The NDB Cluster Management Server Daemon”.
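
By way of illustration, the following command lines (the file locations are hypothetical) show how these options might be combined when starting the management server:

# First start: read config.ini and create a cache under --configdir
ndb_mgmd -f /etc/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster

# Pick up changes: create a new cache only if config.ini has changed
ndb_mgmd -f /etc/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster --reload

# Delete any existing cache files and build a new cache from config.ini
ndb_mgmd -f /etc/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster --initial

# Bypass configuration caching altogether
ndb_mgmd -f /etc/mysql-cluster/config.ini --config-cache=FALSE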

We are continuously making improvements in Cluster configuration and attempting to simplify this process. Although we strive to maintain backward compatibility, there may be times when we introduce an incompatible change. In such cases we will try to let Cluster users know in advance if a change is not backward compatible. If you find such a change and we have not documented it, please report it in the MySQL bugs database using the instructions given in Section 1.7, “How to Report Bugs or Problems”.

21.3.3.1 NDB Cluster Configuration: Basic Example

To support NDB Cluster, you will need to update my.cnf as shown in the following example. You may also specify these parameters on the command line when invoking the executables.

Note

The options shown here should not be confused with those that are used in config.ini global configuration files. Global configuration options are discussed later in this section.

# my.cnf
# example additions to my.cnf for NDB Cluster 
# (valid in MySQL 5.7)

# enable ndbcluster storage engine, and provide connection string for
# management server host (default port is 1186)
[mysqld]
ndbcluster
ndb-connectstring=ndb_mgmd.mysql.com


# provide connection string for management server host (default port: 1186)
[ndbd]
connect-string=ndb_mgmd.mysql.com

# provide connection string for management server host (default port: 1186)
[ndb_mgm]
connect-string=ndb_mgmd.mysql.com

# provide location of cluster configuration file
[ndb_mgmd]
config-file=/etc/config.ini

(For more information on connection strings, see Section 21.3.3.3, “NDB Cluster Connection Strings”.)

# my.cnf
# example additions to my.cnf for NDB Cluster 
# (will work on all versions)

# enable ndbcluster storage engine, and provide connection string for management
# server host to the default port 1186
[mysqld]
ndbcluster
ndb-connectstring=ndb_mgmd.mysql.com:1186
Important

Once you have started a mysqld process with the NDBCLUSTER and ndb-connectstring parameters in the [mysqld] section of the my.cnf file as shown previously, you cannot execute any CREATE TABLE or ALTER TABLE statements without having actually started the cluster. Otherwise, these statements will fail with an error. This is by design.

You may also use a separate [mysql_cluster] section in the cluster my.cnf file for settings to be read and used by all executables:

# cluster-specific settings
[mysql_cluster]
ndb-connectstring=ndb_mgmd.mysql.com:1186

For additional NDB variables that can be set in the my.cnf file, see Section 21.3.3.8.2, “NDB Cluster System Variables”.

The NDB Cluster global configuration file is by convention named config.ini (but this is not required). If needed, it is read by ndb_mgmd at startup and can be placed in any location that can be read by it. The location and name of the configuration file are specified using --config-file=path_name with ndb_mgmd on the command line. This option has no default value, and is ignored if ndb_mgmd uses the configuration cache.

The global configuration file for NDB Cluster uses INI format, which consists of sections preceded by section headings (surrounded by square brackets), followed by the appropriate parameter names and values. One deviation from the standard INI format is that the parameter name and value can be separated by a colon (:) as well as the equal sign (=); however, the equal sign is preferred. Another deviation is that sections are not uniquely identified by section name. Instead, unique sections (such as two different nodes of the same type) are identified by a unique ID specified as a parameter within the section.

Default values are defined for most parameters, and can also be specified in config.ini. To create a default value section, simply add the word default to the section name. For example, an [ndbd] section contains parameters that apply to a particular data node, whereas an [ndbd default] section contains parameters that apply to all data nodes. Suppose that all data nodes should use the same data memory size. To configure them all, create an [ndbd default] section that contains a DataMemory line to specify the data memory size.
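
For instance, a fragment such as the following (the host names and size shown are illustrative) causes both data nodes defined here to inherit the same DataMemory value from the [ndbd default] section:

[ndbd default]
DataMemory=3072M

[ndbd]
HostName=ndbd_2.mysql.com

[ndbd]
HostName=ndbd_3.mysql.com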

Note

In some older releases of NDB Cluster, there was no default value for NoOfReplicas, which always had to be specified explicitly in the [ndbd default] section. Although this parameter now has a default value of 2, which is the recommended setting in most common usage scenarios, it is still recommended practice to set this parameter explicitly.

The global configuration file must define the computers and nodes involved in the cluster and on which computers these nodes are located. An example of a simple configuration file for a cluster consisting of one management server, two data nodes and two MySQL servers is shown here:

# file "config.ini" - 2 data nodes and 2 SQL nodes
# This file is placed in the startup directory of ndb_mgmd (the
# management server)
# The first MySQL Server can be started from any host. The second
# can be started only on the host mysqld_5.mysql.com

[ndbd default]
NoOfReplicas= 2
DataDir= /var/lib/mysql-cluster

[ndb_mgmd]
HostName= ndb_mgmd.mysql.com
DataDir= /var/lib/mysql-cluster

[ndbd]
HostName= ndbd_2.mysql.com

[ndbd]
HostName= ndbd_3.mysql.com

[mysqld]
[mysqld]
HostName= mysqld_5.mysql.com
Note

The preceding example is intended as a minimal starting configuration for purposes of familiarization with NDB Cluster, and is almost certain not to be sufficient for production settings. See Section 21.3.3.2, “Recommended Starting Configuration for NDB Cluster”, which provides a more complete example starting configuration.

Each node has its own section in the config.ini file. For example, this cluster has two data nodes, so the preceding configuration file contains two [ndbd] sections defining these nodes.

Note

Do not place comments on the same line as a section heading in the config.ini file; this causes the management server not to start because it cannot parse the configuration file in such cases.

Sections of the config.ini File

There are six different sections that you can use in the config.ini configuration file, as described in the following list:

  • [computer]: Defines cluster hosts; optional, and needed only if you wish to refer to hosts by identifier elsewhere in the file (see Section 21.3.3.4, “Defining Computers in an NDB Cluster”).

  • [ndbd] and [ndbd default]: Define data nodes and their default settings.

  • [ndb_mgmd] and [ndb_mgmd default]: Define management servers; [mgm] and [mgm default] are older aliases (see Section 21.3.3.5, “Defining an NDB Cluster Management Server”).

  • [mysqld] and [mysqld default]: Define SQL and API nodes; [api] and [api default] may be used as aliases.

  • [tcp] and [tcp default]: Define TCP/IP connections between cluster nodes (see Section 21.3.3.9, “NDB Cluster TCP/IP Connections”).

  • [shm] and [shm default]: Define shared-memory connections (see Section 21.3.3.11, “NDB Cluster Shared-Memory Connections”).

You can define default values for each section. All Cluster parameter names are case-insensitive, which differs from parameters specified in my.cnf or my.ini files.

21.3.3.2 Recommended Starting Configuration for NDB Cluster

Achieving the best performance from an NDB Cluster depends on a number of factors including the following:

  • NDB Cluster software version

  • Numbers of data nodes and SQL nodes

  • Hardware

  • Operating system

  • Amount of data to be stored

  • Size and type of load under which the cluster is to operate

Therefore, obtaining an optimum configuration is likely to be an iterative process, the outcome of which can vary widely with the specifics of each NDB Cluster deployment. Changes in configuration are also likely to be indicated when changes are made in the platform on which the cluster is run, or in applications that use the NDB Cluster's data. For these reasons, it is not possible to offer a single configuration that is ideal for all usage scenarios. However, in this section, we provide a recommended base configuration.

Starting config.ini file.  The following config.ini file is a recommended starting point for configuring a cluster running NDB Cluster 7.5:

# TCP PARAMETERS

[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M

# Increasing the sizes of these 2 buffers beyond the default values
# helps prevent bottlenecks due to slow disk I/O.

# MANAGEMENT NODE PARAMETERS

[ndb_mgmd default]
DataDir=path/to/management/server/data/directory

# It is possible to use a different data directory for each management
# server, but for ease of administration it is preferable to be
# consistent.

[ndb_mgmd]
HostName=management-server-A-hostname
# NodeId=management-server-A-nodeid

[ndb_mgmd]
HostName=management-server-B-hostname
# NodeId=management-server-B-nodeid

# Using 2 management servers helps guarantee that there is always an
# arbitrator in the event of network partitioning, and so is
# recommended for high availability. Each management server must be
# identified by a HostName. You may for the sake of convenience specify
# a NodeId for any management server, although one will be allocated
# for it automatically; if you do so, it must be in the range 1-255
# inclusive and must be unique among all IDs specified for cluster
# nodes.

# DATA NODE PARAMETERS

[ndbd default]
NoOfReplicas=2

# Using 2 replicas is recommended to guarantee availability of data;
# using only 1 replica does not provide any redundancy, which means
# that the failure of a single data node causes the entire cluster to
# shut down. We do not recommend using more than 2 replicas, since 2 is
# sufficient to provide high availability, and we do not currently test
# with greater values for this parameter.

LockPagesInMainMemory=1

# On Linux and Solaris systems, setting this parameter locks data node
# processes into memory. Doing so prevents them from swapping to disk,
# which can severely degrade cluster performance.

DataMemory=3072M
IndexMemory=384M

# The values provided for DataMemory and IndexMemory assume 4 GB RAM
# per data node. However, for best results, you should first calculate
# the memory that would be used based on the data you actually plan to
# store (you may find the ndb_size.pl utility helpful in estimating
# this), then allow an extra 20% over the calculated values. Naturally,
# you should ensure that each data node host has at least as much
# physical memory as the sum of these two values.

# ODirect=1

# Enabling this parameter causes NDBCLUSTER to try using O_DIRECT
# writes for local checkpoints and redo logs; this can reduce load on
# CPUs. We recommend doing so when using NDB Cluster  on systems running
# Linux kernel 2.6 or later.

NoOfFragmentLogFiles=300
DataDir=path/to/data/node/data/directory
MaxNoOfConcurrentOperations=100000

SchedulerSpinTimer=400
SchedulerExecutionTimer=100
RealTimeScheduler=1
# Setting these parameters allows you to take advantage of real-time scheduling
# of NDB threads to achieve increased throughput when using ndbd. They
# are not needed when using ndbmtd; in particular, you should not set
# RealTimeScheduler for ndbmtd data nodes.

TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=200
RedoBuffer=32M

# CompressedLCP=1
# CompressedBackup=1
# Enabling CompressedLCP and CompressedBackup causes, respectively, local
# checkpoint files and backup files to be compressed, which can result in a space
# savings of up to 50% over noncompressed LCPs and backups.

# MaxNoOfLocalScans=64
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=256

[ndbd]
HostName=data-node-A-hostname
# NodeId=data-node-A-nodeid

LockExecuteThreadToCPU=1
LockMaintThreadsToCPU=0
# On systems with multiple CPUs, these parameters can be used to lock NDBCLUSTER
# threads to specific CPUs

[ndbd]
HostName=data-node-B-hostname
# NodeId=data-node-B-nodeid

LockExecuteThreadToCPU=1
LockMaintThreadsToCPU=0

# You must have an [ndbd] section for every data node in the cluster;
# each of these sections must include a HostName. Each section may
# optionally include a NodeId for convenience, but in most cases, it is
# sufficient to allow the cluster to allocate node IDs dynamically. If
# you do specify the node ID for a data node, it must be in the range 1
# to 48 inclusive and must be unique among all IDs specified for
# cluster nodes.

# SQL NODE / API NODE PARAMETERS

[mysqld]
# HostName=sql-node-A-hostname
# NodeId=sql-node-A-nodeid

[mysqld]

[mysqld]

# Each API or SQL node that connects to the cluster requires a [mysqld]
# or [api] section of its own. Each such section defines a connection
# slot; you should have at least as many of these sections in the
# config.ini file as the total number of API nodes and SQL nodes that
# you wish to have connected to the cluster at any given time. There is
# no performance or other penalty for having extra slots available in
# case you find later that you want or need more API or SQL nodes to
# connect to the cluster at the same time.
# If no HostName is specified for a given [mysqld] or [api] section,
# then any API or SQL node may use that slot to connect to the
# cluster. You may wish to use an explicit HostName for one connection slot
# to guarantee that an API or SQL node from that host can always
# connect to the cluster. If you wish to prevent API or SQL nodes from
# connecting from other than a desired host or hosts, then use a
# HostName for every [mysqld] or [api] section in the config.ini file.
# You can if you wish define a node ID (NodeId parameter) for any API or
# SQL node, but this is not necessary; if you do so, it must be in the
# range 1 to 255 inclusive and must be unique among all IDs specified
# for cluster nodes.

Recommended my.cnf options for SQL nodes.  MySQL Servers acting as NDB Cluster SQL nodes must always be started with the --ndbcluster and --ndb-connectstring options, either on the command line or in my.cnf. In addition, set the following options for all mysqld processes in the cluster, unless your setup requires otherwise:

  • --ndb-use-exact-count=0

  • --ndb-index-stat-enable=0

  • --ndb-force-send=1

  • --engine-condition-pushdown=1

21.3.3.3 NDB Cluster Connection Strings

With the exception of the NDB Cluster management server (ndb_mgmd), each node that is part of an NDB Cluster requires a connection string that points to the management server's location. This connection string is used in establishing a connection to the management server as well as in performing other tasks depending on the node's role in the cluster. The syntax for a connection string is as follows:

[nodeid=node_id, ]host-definition[, host-definition[, ...]]

host-definition:
    host_name[:port_number]

node_id is an integer greater than or equal to 1 which identifies a node in config.ini. host_name is a string representing a valid Internet host name or IP address. port_number is an integer referring to a TCP/IP port number.

example 1 (long):    "nodeid=2,myhost1:1100,myhost2:1100,192.168.0.3:1200"
example 2 (short):   "myhost1"

localhost:1186 is used as the default connection string value if none is provided. If port_number is omitted from the connection string, the default port is 1186. This port should always be available on the network because it has been assigned by IANA for this purpose (see http://www.iana.org/assignments/port-numbers for details).

By listing multiple host definitions, it is possible to designate several redundant management servers. An NDB Cluster data or API node attempts to contact successive management servers on each host in the order specified, until a successful connection has been established.

It is also possible to specify in a connection string one or more bind addresses to be used by nodes having multiple network interfaces for connecting to management servers. A bind address consists of a hostname or network address and an optional port number. This enhanced syntax for connection strings is shown here:

[nodeid=node_id, ]
    [bind-address=host-definition, ]
    host-definition[; bind-address=host-definition]
    [, host-definition[; bind-address=host-definition]
    [, ...]]

host-definition:
    host_name[:port_number]

If a single bind address is used in the connection string prior to specifying any management hosts, then this address is used as the default for connecting to any of them (unless overridden for a given management server; see later in this section for an example). For example, the following connection string causes the node to use 192.168.178.242 regardless of the management server to which it connects:

bind-address=192.168.178.242, poseidon:1186, perch:1186

If a bind address is specified following a management host definition, then it is used only for connecting to that management node. Consider the following connection string:

poseidon:1186;bind-address=localhost, perch:1186;bind-address=192.168.178.242

In this case, the node uses localhost to connect to the management server running on the host named poseidon and 192.168.178.242 to connect to the management server running on the host named perch.

You can specify a default bind address and then override this default for one or more specific management hosts. In the following example, localhost is used for connecting to the management server running on host poseidon; since 192.168.178.242 is specified first (before any management server definitions), it is the default bind address and so is used for connecting to the management servers on hosts perch and orca:

bind-address=192.168.178.242,poseidon:1186;bind-address=localhost,perch:1186,orca:2200

There are a number of different ways to specify the connection string:

  • Each executable has its own command-line option which enables specifying the management server at startup. (See the documentation for the respective executable.)

  • It is also possible to set the connection string for all nodes in the cluster at once by placing it in a [mysql_cluster] section in the management server's my.cnf file.

  • For backward compatibility, two other options are available, using the same syntax:

    1. Set the NDB_CONNECTSTRING environment variable to contain the connection string.

    2. Write the connection string for each executable into a text file named Ndb.cfg and place this file in the executable's startup directory.

    However, these are now deprecated and should not be used for new installations.

The recommended method for specifying the connection string is to set it on the command line or in the my.cnf file for each executable.
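
For example, the management client and a data node might each be pointed at the same management server from the command line as follows (the host name and node ID shown are illustrative):

ndb_mgm --ndb-connectstring="ndb_mgmd.mysql.com:1186"
ndbd --ndb-connectstring="nodeid=2,ndb_mgmd.mysql.com:1186"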

21.3.3.4 Defining Computers in an NDB Cluster

The [computer] section has no real significance other than serving as a way to avoid the need to define host names separately for each node in the system. All parameters mentioned here are required.

  • Id

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | string | [none] | ... | IS

    This is a unique identifier, used to refer to the host computer elsewhere in the configuration file.

    Important

    The computer ID is not the same as the node ID used for a management, API, or data node. Unlike the case with node IDs, you cannot use NodeId in place of Id in the [computer] section of the config.ini file.

  • HostName

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | name or IP address | [none] | ... | N

    This is the computer's hostname or IP address.
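
A minimal sketch showing how these two parameters are used together follows; the Id value and IP address are hypothetical. (As noted in the next section, ExecuteOnComputer, which is what consumes the Id, is deprecated in NDB 7.5 in favor of HostName.)

# config.ini fragment (Id and address are hypothetical)
[computer]
Id=host1
HostName=198.51.100.10

# Elsewhere in the file, a node may refer to this computer by its Id:
[ndb_mgmd]
ExecuteOnComputer=host1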

21.3.3.5 Defining an NDB Cluster Management Server

The [ndb_mgmd] section is used to configure the behavior of the management server. If multiple management servers are employed, you can specify parameters common to all of them in an [ndb_mgmd default] section. [mgm] and [mgm default] are older aliases for these, supported for backward compatibility.

All parameters in the following list are optional and assume their default values if omitted.

Note

If neither the ExecuteOnComputer nor the HostName parameter is present, the default value localhost will be assumed for both.

  • Id

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | [none] | 1 - 255 | IS

    Each node in the cluster has a unique identity. For a management node, this is represented by an integer value in the range 1 to 255, inclusive. This ID is used by all internal cluster messages for addressing the node, and so must be unique for each NDB Cluster node, regardless of the type of node.

    Note

    Data node IDs must be less than 49. If you plan to deploy a large number of data nodes, it is a good idea to limit the node IDs for management nodes (and API nodes) to values greater than 48.

    The use of the Id parameter for identifying management nodes is deprecated in favor of NodeId. Although Id continues to be supported for backward compatibility, it now generates a warning and is subject to removal in a future version of NDB Cluster.

  • NodeId

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | [none] | 1 - 255 | IS

    Each node in the cluster has a unique identity. For a management node, this is represented by an integer value in the range 1 to 255 inclusive. This ID is used by all internal cluster messages for addressing the node, and so must be unique for each NDB Cluster node, regardless of the type of node.

    Note

    Data node IDs must be less than 49. If you plan to deploy a large number of data nodes, it is a good idea to limit the node IDs for management nodes (and API nodes) to values greater than 48.

    NodeId is the preferred parameter name to use when identifying management nodes. Although the older Id continues to be supported for backward compatibility, it is now deprecated and generates a warning when used; it is also subject to removal in a future NDB Cluster release.

  • ExecuteOnComputer

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | name | [none] | ... | S

    This refers to the Id set for one of the computers defined in a [computer] section of the config.ini file.

    Important

    This parameter is deprecated as of NDB 7.5.0, and is subject to removal in a future release. Use the HostName parameter instead.

  • PortNumber

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | 1186 | 0 - 64K | S

    This is the port number on which the management server listens for configuration requests and management commands.

  • HostName

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | name or IP address | [none] | ... | N

    Specifying this parameter defines the hostname of the computer on which the management node is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.

  • LogDestination

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | {CONSOLE|SYSLOG|FILE} | [see text] | ... | N

    This parameter specifies where to send cluster logging information. There are three options in this regard—CONSOLE, SYSLOG, and FILE—with FILE being the default:

    • CONSOLE outputs the log to stdout:

      CONSOLE
      
    • SYSLOG sends the log to a syslog facility, possible values being one of auth, authpriv, cron, daemon, ftp, kern, lpr, mail, news, syslog, user, uucp, local0, local1, local2, local3, local4, local5, local6, or local7.

      Note

      Not every facility is necessarily supported by every operating system.

      SYSLOG:facility=syslog
      
    • FILE pipes the cluster log output to a regular file on the same machine. The following values can be specified:

      • filename: The name of the log file.

        The default log file name used in such cases is ndb_node_id_cluster.log, where node_id is the ID of the node.

      • maxsize: The maximum size (in bytes) to which the file can grow before logging rolls over to a new file. When this occurs, the old log file is renamed by appending .N to the file name, where N is the next number not yet used with this name.

      • maxfiles: The maximum number of log files.

      FILE:filename=cluster.log,maxsize=1000000,maxfiles=6
      

      The default value for the FILE parameter is FILE:filename=ndb_node_id_cluster.log,maxsize=1000000,maxfiles=6, where node_id is the ID of the node.

    It is possible to specify multiple log destinations separated by semicolons as shown here:

    CONSOLE;SYSLOG:facility=local0;FILE:filename=/var/log/mgmd
    
  • ArbitrationRank

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | 0-2 | 1 | 0 - 2 | N

    This parameter is used to define which nodes can act as arbitrators. Only management nodes and SQL nodes can be arbitrators. ArbitrationRank can take one of the following values:

    • 0: The node will never be used as an arbitrator.

    • 1: The node has high priority; that is, it will be preferred as an arbitrator over low-priority nodes.

    • 2: Indicates a low-priority node which is used as an arbitrator only if a node with a higher priority is not available for that purpose.

    Normally, the management server should be configured as an arbitrator by setting its ArbitrationRank to 1 (the default for management nodes) and those for all SQL nodes to 0 (the default for SQL nodes).

    You can disable arbitration completely either by setting ArbitrationRank to 0 on all management and SQL nodes, or by setting the Arbitration parameter in the [ndbd default] section of the config.ini global configuration file. Setting Arbitration causes any settings for ArbitrationRank to be disregarded.
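
    To make this concrete, here is a sketch of the usual arrangement just described (hostnames are placeholders; the ArbitrationRank values shown are the defaults for each node type):

    [ndb_mgmd]
    HostName=mgmhost.example.com
    ArbitrationRank=1

    [mysqld]
    HostName=sqlhost.example.com
    ArbitrationRank=0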

  • ArbitrationDelay

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 0 | 0 - 4294967039 (0xFFFFFEFF) | N

    An integer value which causes the management server's responses to arbitration requests to be delayed by that number of milliseconds. By default, this value is 0; it is normally not necessary to change it.

  • DataDir

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | path | . | ... | N

    This specifies the directory where output files from the management server will be placed. These files include cluster log files, process output files, and the daemon's process ID (PID) file. (For log files, this location can be overridden by setting the FILE parameter for LogDestination as discussed previously in this section.)

    The default value for this parameter is the directory in which ndb_mgmd is located.

  • PortNumberStats

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | [none] | 0 - 64K | N

    This parameter specifies the port number used to obtain statistical information from an NDB Cluster management server. It has no default value.

  • Wan

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | boolean | false | true, false | N

    Use WAN TCP setting as default.

  • HeartbeatThreadPriority

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | string | [none] | ... | S

    Set the scheduling policy and priority of heartbeat threads for management and API nodes.

    The syntax for setting this parameter is shown here:

    HeartbeatThreadPriority = policy[, priority]
    
    policy:
      {FIFO | RR}
    

    When setting this parameter, you must specify a policy. This is one of FIFO (first in, first out) or RR (round robin). The policy value is followed optionally by the priority (an integer).
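
    For example, the following setting (the priority value is purely illustrative) selects round-robin scheduling with a priority of 50:

    HeartbeatThreadPriority=RR,50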

  • TotalSendBufferMemory

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 0 | 256K - 4294967039 (0xFFFFFEFF) | N

    This parameter is used to determine the total amount of memory to allocate on this node for shared send buffer memory among all configured transporters.

    If this parameter is set, its minimum permitted value is 256KB; 0 indicates that the parameter has not been set. For more detailed information, see Section 21.3.3.13, “Configuring NDB Cluster Send Buffer Parameters”.

  • HeartbeatIntervalMgmdMgmd

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 1500 | 100 - 4294967039 (0xFFFFFEFF) | N

    Specify the interval between heartbeat messages used to determine whether another management node is in contact with this one. The management node waits three of these intervals after a missed heartbeat before declaring the connection dead; thus, the default setting of 1500 milliseconds causes the management node to wait approximately 6000 ms before timing out.

Note

After making changes in a management node's configuration, it is necessary to perform a rolling restart of the cluster for the new configuration to take effect.

To add new management servers to a running NDB Cluster, it is also necessary to perform a rolling restart of all cluster nodes after modifying any existing config.ini files. For more information about issues arising when using multiple management nodes, see Section 21.1.6.10, “Limitations Relating to Multiple NDB Cluster Nodes”.

21.3.3.6 Defining NDB Cluster Data Nodes

The [ndbd] and [ndbd default] sections are used to configure the behavior of the cluster's data nodes.

[ndbd] and [ndbd default] are always used as the section names whether you are using ndbd or ndbmtd binaries for the data node processes.

There are many parameters which control buffer sizes, pool sizes, timeouts, and so forth. The only mandatory parameter is either one of ExecuteOnComputer or HostName; this must be defined in the local [ndbd] section.

The parameter NoOfReplicas should be defined in the [ndbd default] section, as it is common to all Cluster data nodes. It is not strictly necessary to set NoOfReplicas, but it is good practice to set it explicitly.

Most data node parameters are set in the [ndbd default] section. Only those parameters explicitly stated as being able to set local values are permitted to be changed in the [ndbd] section. Where present, HostName, NodeId and ExecuteOnComputer must be defined in the local [ndbd] section, and not in any other section of config.ini. In other words, settings for these parameters are specific to one data node.

For those parameters affecting memory usage or buffer sizes, it is possible to use K, M, or G as a suffix to indicate units of 1024, 1024×1024, or 1024×1024×1024. (For example, 100K means 100 × 1024 = 102400.) Parameter names and values are currently case-sensitive.

Information about configuration parameters specific to NDB Cluster Disk Data tables can be found later in this section (see Disk Data Configuration Parameters).

All of these parameters also apply to ndbmtd (the multi-threaded version of ndbd). Three additional data node configuration parameters—MaxNoOfExecutionThreads, ThreadConfig, and NoOfFragmentLogParts—apply to ndbmtd only; these have no effect when used with ndbd. For more information, see Multi-Threading Configuration Parameters (ndbmtd). See also Section 21.4.3, “ndbmtd — The NDB Cluster Data Node Daemon (Multi-Threaded)”.

Identifying data nodes.  The NodeId or Id value (that is, the data node identifier) can be allocated on the command line when the node is started or in the configuration file.

  • NodeId

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | [none] | 1 - 48 | IS

    A unique node ID is used as the node's address for all cluster internal messages. For data nodes, this is an integer in the range 1 to 48 inclusive. Each node in the cluster must have a unique identifier.

    NodeId is the only supported parameter name to use when identifying data nodes. (Id was removed in NDB 7.5.0.)

  • ExecuteOnComputer

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | name | [none] | ... | S

    This refers to the Id set for one of the computers defined in a [computer] section.

    Important

    This parameter is deprecated as of NDB 7.5.0, and is subject to removal in a future release. Use the HostName parameter instead.

  • HostName

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | name or IP address | localhost | ... | N

    Specifying this parameter defines the hostname of the computer on which the data node is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.

  • ServerPort

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | [none] | 1 - 64K | S

    Each node in the cluster uses a port to connect to other nodes. By default, this port is allocated dynamically in such a way as to ensure that no two nodes on the same host computer receive the same port number, so it should normally not be necessary to specify a value for this parameter.

    However, if you need to be able to open specific ports in a firewall to permit communication between data nodes and API nodes (including SQL nodes), you can set this parameter to the number of the desired port in an [ndbd] section or (if you need to do this for multiple data nodes) the [ndbd default] section of the config.ini file, and then open the port having that number for incoming connections from SQL nodes, API nodes, or both.

    Note

    Connections from data nodes to management nodes are made using the ndb_mgmd management port (the management server's PortNumber), so outgoing connections to that port from any data node should always be permitted.
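
    As a sketch only (hostnames and port numbers are examples), fixing the data node ports so that they can be opened in a firewall might look like this:

    [ndbd]
    HostName=datahost1.example.com
    ServerPort=50501

    [ndbd]
    HostName=datahost2.example.com
    ServerPort=50502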

  • TcpBind_INADDR_ANY

    Setting this parameter to TRUE or 1 binds IP_ADDR_ANY so that connections can be made from anywhere (for autogenerated connections). The default is FALSE (0).

  • NodeGroup

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 |  | [none] | 0 - 65536 | IS

    This parameter can be used to assign a data node to a specific node group. It is read only when the cluster is started for the first time, and cannot be used to reassign a data node to a different node group online. It is generally not desirable to use this parameter in the [ndbd default] section of the config.ini file, and care must be taken not to assign nodes to node groups in such a way that an invalid number of nodes is assigned to any node group.

    The NodeGroup parameter is chiefly intended for use in adding a new node group to a running NDB Cluster without having to perform a rolling restart. For this purpose, you should set it to 65536 (the maximum value). You are not required to set a NodeGroup value for all cluster data nodes, only for those nodes which are to be started and added to the cluster as a new node group at a later time. For more information, see Section 21.5.14.3, “Adding NDB Cluster Data Nodes Online: Detailed Example”.
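
    For example, two data nodes intended to be started later as a new node group might be defined as shown here (hostnames and node IDs are placeholders):

    [ndbd]
    NodeId=6
    HostName=datahost5.example.com
    NodeGroup=65536

    [ndbd]
    NodeId=7
    HostName=datahost6.example.com
    NodeGroup=65536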

  • NoOfReplicas

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 2 | 1 - 4 | IS

    This global parameter can be set only in the [ndbd default] section, and defines the number of replicas for each table stored in the cluster. This parameter also specifies the size of node groups. A node group is a set of nodes all storing the same information.

    Node groups are formed implicitly. The first node group is formed by the set of data nodes with the lowest node IDs, the next node group by the set of the next lowest node identities, and so on. By way of example, assume that we have 4 data nodes and that NoOfReplicas is set to 2. The four data nodes have node IDs 2, 3, 4 and 5. Then the first node group is formed from nodes 2 and 3, and the second node group by nodes 4 and 5. It is important to configure the cluster in such a manner that nodes in the same node groups are not placed on the same computer because a single hardware failure would cause the entire cluster to fail.

    If no node IDs are provided, the order of the data nodes will be the determining factor for the node group. Whether or not explicit assignments are made, they can be viewed in the output of the management client's SHOW command.

    The default value for NoOfReplicas is 2. This is the recommended value for most production environments.

    Important

    While the maximum possible value for this parameter is 4, setting NoOfReplicas to a value greater than 2 is not supported in production.

    Warning

    Setting NoOfReplicas to 1 means that there is only a single copy of all Cluster data; in this case, the loss of a single data node causes the cluster to fail because there are no additional copies of the data stored by that node.

    The value for this parameter must divide evenly into the number of data nodes in the cluster. For example, if there are two data nodes, then NoOfReplicas must be equal to either 1 or 2, since 2/3 and 2/4 both yield fractional values; if there are four data nodes, then NoOfReplicas must be equal to 1, 2, or 4.
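
    The node group arithmetic described earlier in this entry can be illustrated by this configuration sketch, which yields two node groups (nodes 2 and 3; nodes 4 and 5); hostnames are omitted for brevity:

    [ndbd default]
    NoOfReplicas=2

    # Node group 0
    [ndbd]
    NodeId=2
    [ndbd]
    NodeId=3

    # Node group 1
    [ndbd]
    NodeId=4
    [ndbd]
    NodeId=5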

  • DataDir

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | path | . | ... | IN

    This parameter specifies the directory where trace files, log files, pid files and error logs are placed.

    The default is the data node process working directory.

  • FileSystemPath

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | path | DataDir | ... | IN

    This parameter specifies the directory where all files created for metadata, REDO logs, UNDO logs (for Disk Data tables), and data files are placed. The default is the directory specified by DataDir.

    Note

    This directory must exist before the ndbd process is initiated.

    The recommended directory hierarchy for NDB Cluster includes /var/lib/mysql-cluster, under which a directory for the node's file system is created. The name of this subdirectory contains the node ID. For example, if the node ID is 2, this subdirectory is named ndb_2_fs.

  • BackupDataDir

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | path | [see text] | ... | IN

    This parameter specifies the directory in which backups are placed.

    Important

    The string '/BACKUP' is always appended to this value. For example, if you set the value of BackupDataDir to /var/lib/cluster-data, then all backups are stored under /var/lib/cluster-data/BACKUP. This also means that the effective default backup location is the directory named BACKUP under the location specified by the FileSystemPath parameter.

Data Memory, Index Memory, and String Memory

DataMemory and IndexMemory are [ndbd] parameters specifying the size of memory segments used to store the actual records and their indexes. In setting values for these, it is important to understand how DataMemory and IndexMemory are used, as they usually need to be updated to reflect actual usage by the cluster:

  • DataMemory

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 80M | 1M - 1024G | N

    This parameter defines the amount of space (in bytes) available for storing database records. The entire amount specified by this value is allocated in memory, so it is extremely important that the machine has sufficient physical memory to accommodate it.

    The memory allocated by DataMemory is used to store both the actual records and indexes. There is a 16-byte overhead on each record; an additional amount for each record is incurred because it is stored in a 32KB page with a 128-byte page overhead (see below). There is also a small amount wasted per page because each record is stored in only one page.

    For variable-size table attributes, the data is stored on separate data pages, allocated from DataMemory. Variable-length records use a fixed-size part with an extra overhead of 4 bytes to reference the variable-size part. The variable-size part has 2 bytes overhead plus 2 bytes per attribute.

    The maximum record size is 14000 bytes.

    The memory space defined by DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Each table row is represented in the ordered index. A common error among users is to assume that all indexes are stored in the memory allocated by IndexMemory, but this is not the case: Only primary key and unique hash indexes use this memory; ordered indexes use the memory allocated by DataMemory. However, creating a primary key or unique hash index also creates an ordered index on the same keys, unless you specify USING HASH in the index creation statement. This can be verified by running ndb_desc -d db_name table_name in the management client.

    Currently, NDB Cluster can use a maximum of 512 MB for hash indexes per partition, which means in some cases it is possible to get Table is full errors in MySQL client applications even when ndb_mgm -e "ALL REPORT MEMORYUSAGE" shows significant free DataMemory. This can also pose a problem with data node restarts on nodes that are heavily loaded with data. You can force NDB to create extra partitions for NDB Cluster tables and thus have more memory available for hash indexes by using the MAX_ROWS option for CREATE TABLE. In general, setting MAX_ROWS to twice the number of rows that you expect to store in the table should be sufficient. You can also use the MinFreePct configuration parameter to help avoid problems with node restarts. (Bug #13436216)
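
    For example, a sketch of such a CREATE TABLE statement for a table expected to hold roughly 100 million rows (the table and column names are hypothetical):

    CREATE TABLE t1 (
        c1 BIGINT UNSIGNED NOT NULL PRIMARY KEY,
        c2 VARCHAR(100)
    ) ENGINE=NDBCLUSTER MAX_ROWS=200000000;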

    The memory space allocated by DataMemory consists of 32KB pages, which are allocated to table fragments. Each table is normally partitioned into the same number of fragments as there are data nodes in the cluster. Thus, for each node, there are the same number of fragments as are set in NoOfReplicas.

    Once a page has been allocated, it is currently not possible to return it to the pool of free pages, except by deleting the table. (This also means that DataMemory pages, once allocated to a given table, cannot be used by other tables.) Performing a data node recovery also compresses the partition because all records are inserted into empty partitions from other live nodes.

    The DataMemory memory space also contains UNDO information: For each update, a copy of the unaltered record is allocated in the DataMemory. There is also a reference to each copy in the ordered table indexes. Unique hash indexes are updated only when the unique index columns are updated, in which case a new entry in the index table is inserted and the old entry is deleted upon commit. For this reason, it is also necessary to allocate enough memory to handle the largest transactions performed by applications using the cluster. In any case, performing a few large transactions holds no advantage over using many smaller ones, for the following reasons:

    • Large transactions are not any faster than smaller ones

    • Large transactions increase the number of operations that are lost and must be repeated in event of transaction failure

    • Large transactions use more memory

    The default value for DataMemory is 80MB; the minimum is 1MB. There is no maximum size, but in reality the maximum size has to be adapted so that the process does not start swapping when the limit is reached. This limit is determined by the amount of physical RAM available on the machine and by the amount of memory that the operating system may commit to any one process. 32-bit operating systems are generally limited to 2 to 4GB per process; 64-bit operating systems can use more. For large databases, it may be preferable to use a 64-bit operating system for this reason.

  • IndexMemory

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 18M | 1M - 1T | N

    This parameter controls the amount of storage used for hash indexes in NDB Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. When defining a primary key or a unique index, two indexes are created, one of which is a hash index used for all tuple accesses as well as lock handling. This index is also used to enforce unique constraints.

    You can estimate the size of a hash index using this formula:

      size = ((fragments * 32K) + (rows * 18)) * replicas

    fragments is the number of fragments, replicas is the number of replicas (normally 2), and rows is the number of rows. If a table has one million rows, 8 fragments, and 2 replicas, the expected index memory usage is calculated as shown here:

              
      ((8 * 32K) + (1000000 * 18)) * 2 = ((8 * 32768) + (1000000 * 18)) * 2
                                       = (262144 + 18000000) * 2
                                       = 18262144 * 2 = 36524288 bytes = ~35MB
    

    Index statistics for ordered indexes (when these are enabled) are stored in the mysql.ndb_index_stat_sample table. Since this table has a hash index, this adds to index memory usage. An upper bound to the number of rows for a given ordered index can be calculated as follows:

      sample_size= key_size + ((key_attributes + 1) * 4)
    
      sample_rows = IndexStatSaveSize
                    * ((0.01 * IndexStatSaveScale * log2(rows * sample_size)) + 1)
                    / sample_size
    

    In the preceding formula, key_size is the size of the ordered index key in bytes, key_attributes is the number of attributes in the ordered index key, and rows is the number of rows in the base table.

    Assume that table t1 has 1 million rows and an ordered index named ix1 on two four-byte integers. Assume in addition that IndexStatSaveSize and IndexStatSaveScale are set to their default values (32K and 100, respectively). Using the previous 2 formulas, we can calculate as follows:

      sample_size = 8  + ((1 + 2) * 4) = 20 bytes
    
      sample_rows = 32K
                    * ((0.01 * 100 * log2(1000000*20)) + 1)
                    / 20
                    = 32768 * ( (1 * ~16.811) +1) / 20
                    = 32768 * ~17.811 / 20
                    = ~29182 rows
    

    The expected index memory usage is thus 2 * 18 * 29182 = ~1050550 bytes.

    The default value for IndexMemory is 18MB. The minimum is 1MB.

  • StringMemory

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | % or bytes | 25 | 0 - 4294967039 (0xFFFFFEFF) | S

    This parameter determines how much memory is allocated for strings such as table names, and is specified in an [ndbd] or [ndbd default] section of the config.ini file. A value between 0 and 100 inclusive is interpreted as a percent of the maximum default value, which is calculated based on a number of factors including the number of tables, maximum table name size, maximum size of .FRM files, MaxNoOfTriggers, maximum column name size, and maximum default column value.

    A value greater than 100 is interpreted as a number of bytes.

    The default value is 25—that is, 25 percent of the default maximum.

    Under most circumstances, the default value should be sufficient, but when you have a great many Cluster tables (1000 or more), it is possible to get Error 773 Out of string memory, please modify StringMemory config parameter: Permanent error: Schema error, in which case you should increase this value. 25 (25 percent) is not excessive, and should prevent this error from recurring in all but the most extreme conditions.

The following example illustrates how memory is used for a table. Consider this table definition:

CREATE TABLE example (
  a INT NOT NULL,
  b INT NOT NULL,
  c INT NOT NULL,
  PRIMARY KEY(a),
  UNIQUE(b)
) ENGINE=NDBCLUSTER;

For each record, there are 12 bytes of data plus 12 bytes overhead. Having no nullable columns saves 4 bytes of overhead. In addition, we have two ordered indexes on columns a and b consuming roughly 10 bytes each per record. There is a primary key hash index on the base table using roughly 29 bytes per record. The unique constraint is implemented by a separate table with b as primary key and a as a column. This other table consumes an additional 29 bytes of index memory per record in the example table as well as 8 bytes of record data plus 12 bytes of overhead.

Thus, for one million records, we need 58MB for index memory to handle the hash indexes for the primary key and the unique constraint. We also need 64MB for the records of the base table and the unique index table, plus the two ordered index tables.

You can see that hash indexes take up a fair amount of memory space; however, they provide very fast access to the data in return. They are also used in NDB Cluster to handle uniqueness constraints.

Currently, the only partitioning algorithm is hashing, and ordered indexes are local to each node. Thus, ordered indexes cannot be used to handle uniqueness constraints in the general case.

An important point for both IndexMemory and DataMemory is that the total database size is the sum of all data memory and all index memory for each node group. Each node group is used to store replicated information, so if there are four nodes with two replicas, there will be two node groups. Thus, the total data memory available is 2 × DataMemory for each data node.

It is highly recommended that DataMemory and IndexMemory be set to the same values for all nodes. Data distribution is even over all nodes in the cluster, so the maximum amount of space available for any node can be no greater than that of the smallest node in the cluster.

DataMemory and IndexMemory can be changed, but decreasing either of these can be risky; doing so can easily lead to a node or even an entire NDB Cluster that is unable to restart due to there being insufficient memory space. Increasing these values should be acceptable, but it is recommended that such upgrades are performed in the same manner as a software upgrade, beginning with an update of the configuration file, and then restarting the management server followed by restarting each data node in turn.

MinFreePct.  A proportion (5% by default) of data node resources including DataMemory and IndexMemory is kept in reserve to ensure that the data node does not exhaust its memory when performing a restart. This can be adjusted using the MinFreePct data node configuration parameter (default 5).

Effective Version | Type/Units | Default | Range/Values | Restart Type
NDB 7.5.0 | unsigned | 5 | 0 - 100 | N

Updates do not increase the amount of index memory used. Inserts take effect immediately; however, rows are not actually deleted until the transaction is committed.

Transaction parameters.  The next few [ndbd] parameters that we discuss are important because they affect the number of parallel transactions and the sizes of transactions that can be handled by the system. MaxNoOfConcurrentTransactions sets the number of parallel transactions possible in a node. MaxNoOfConcurrentOperations sets the number of records that can be in update phase or locked simultaneously.

Both of these parameters (especially MaxNoOfConcurrentOperations) are likely targets for users setting specific values and not using the default value. The default value is set for systems using small transactions, to ensure that these do not use excessive memory.

MaxDMLOperationsPerTransaction sets the maximum number of DML operations that can be performed in a given transaction.

  • MaxNoOfConcurrentTransactions

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 4096 | 32 - 4294967039 (0xFFFFFEFF) | N

    Each cluster data node requires a transaction record for each active transaction in the cluster. The task of coordinating transactions is distributed among all of the data nodes. The total number of transaction records in the cluster is the number of transactions in any given node times the number of nodes in the cluster.

    Transaction records are allocated to individual MySQL servers. Each connection to a MySQL server requires at least one transaction record, plus an additional transaction object per table accessed by that connection. This means that a reasonable minimum for the total number of transactions in the cluster can be expressed as

    MinTotalNoOfConcurrentTransactions =
        (maximum number of tables accessed in any single transaction + 1)
        * number of SQL nodes
    

    Suppose that there are 10 SQL nodes using the cluster. A single join involving 10 tables requires 11 transaction records; if there are 10 such joins in a transaction, then 10 * 11 = 110 transaction records are required for this transaction, per MySQL server, or 110 * 10 = 1100 transaction records total. Each data node can be expected to handle MinTotalNoOfConcurrentTransactions / number of data nodes. For an NDB Cluster having 4 data nodes, this would mean setting MaxNoOfConcurrentTransactions on each data node to 1100 / 4 = 275. In addition, you should provide for failure recovery by ensuring that a single node group can accommodate all concurrent transactions; in other words, that each data node's MaxNoOfConcurrentTransactions is sufficient to cover a number of transactions equal to MinTotalNoOfConcurrentTransactions / number of node groups. If this cluster has a single node group, then MaxNoOfConcurrentTransactions should be set to 1100 (the same as the total number of concurrent transactions for the entire cluster).

    In addition, each transaction involves at least one operation; for this reason, the value set for MaxNoOfConcurrentTransactions should always be no more than the value of MaxNoOfConcurrentOperations.

    This parameter must be set to the same value for all cluster data nodes. This is due to the fact that, when a data node fails, the oldest surviving node re-creates the transaction state of all transactions that were ongoing in the failed node.

    It is possible to change this value using a rolling restart, but the amount of traffic on the cluster must be such that no more transactions occur than the lower of the old and new levels while this is taking place.

    The default value is 4096.

  • MaxNoOfConcurrentOperations

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 32K | 32 - 4294967039 (0xFFFFFEFF) | N

    It is a good idea to adjust the value of this parameter according to the size and number of transactions. When performing transactions which involve only a few operations and records, the default value for this parameter is usually sufficient. Performing large transactions involving many records usually requires that you increase its value.

    Records are kept for each transaction updating cluster data, both in the transaction coordinator and in the nodes where the actual updates are performed. These records contain state information needed to find UNDO records for rollback, lock queues, and other purposes.

    This parameter should be set at a minimum to the number of records to be updated simultaneously in transactions, divided by the number of cluster data nodes. For example, in a cluster which has four data nodes and which is expected to handle one million concurrent updates using transactions, you should set this value to 1000000 / 4 = 250000. To help provide resiliency against failures, it is suggested that you set this parameter to a value that is high enough to permit an individual data node to handle the load for its node group. In other words, you should set the value equal to total number of concurrent operations / number of node groups. (In the case where there is a single node group, this is the same as the total number of concurrent operations for the entire cluster.)
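
    Combining this with the worked example given for MaxNoOfConcurrentTransactions above, a config.ini sketch might look like this (the values are taken from those examples and are illustrative, not recommendations):

    [ndbd default]
    MaxNoOfConcurrentTransactions=1100
    MaxNoOfConcurrentOperations=250000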

    Because each transaction always involves at least one operation, the value of MaxNoOfConcurrentOperations should always be greater than or equal to the value of MaxNoOfConcurrentTransactions.

    Read queries which set locks also cause operation records to be created. Some extra space is allocated within individual nodes to accommodate cases where the distribution is not perfect over the nodes.

    When queries make use of the unique hash index, there are actually two operation records used per record in the transaction. The first record represents the read in the index table and the second handles the operation on the base table.

    The default value is 32768.

    This parameter actually handles two values that can be configured separately. The first of these specifies how many operation records are to be placed with the transaction coordinator. The second part specifies how many operation records are to be local to the database.

    A very large transaction performed on an eight-node cluster requires as many operation records in the transaction coordinator as there are reads, updates, and deletes involved in the transaction. However, these operation records are spread over all eight nodes. Thus, if it is necessary to configure the system for one very large transaction, it is a good idea to configure the two parts separately. MaxNoOfConcurrentOperations will always be used to calculate the number of operation records in the transaction coordinator portion of the node.

    It is also important to have an idea of the memory requirements for operation records. These consume about 1KB per record.

  • MaxNoOfLocalOperations

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | UNDEFINED | 32 - 4294967039 (0xFFFFFEFF) | N

    By default, this parameter is calculated as 1.1 × MaxNoOfConcurrentOperations. This fits systems with many simultaneous transactions, none of them being very large. If there is a need to handle one very large transaction at a time and there are many nodes, it is a good idea to override the default value by explicitly specifying this parameter.

  • MaxDMLOperationsPerTransaction

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | operations (DML) | 4294967295 | 32 - 4294967295 | N

    This parameter limits the size of a transaction. The transaction is aborted if it requires more than this many DML operations. The minimum number of operations per transaction is 32; however, you can set MaxDMLOperationsPerTransaction to 0 to disable any limitation on the number of DML operations per transaction. The maximum (and default) is 4294967295.

Transaction temporary storage.  The next set of [ndbd] parameters is used to determine temporary storage when executing a statement that is part of a Cluster transaction. All records are released when the statement is completed and the cluster is waiting for the commit or rollback.

The default values for these parameters are adequate for most situations. However, users with a need to support transactions involving large numbers of rows or operations may need to increase these values to enable better parallelism in the system, whereas users whose applications require relatively small transactions can decrease the values to save memory.

  • MaxNoOfConcurrentIndexOperations

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 8K | 0 - 4294967039 (0xFFFFFEFF) | N

    For queries using a unique hash index, another temporary set of operation records is used during a query's execution phase. This parameter sets the size of that pool of records. Thus, this record is allocated only while executing a part of a query. As soon as this part has been executed, the record is released. The state needed to handle aborts and commits is handled by the normal operation records, where the pool size is set by the parameter MaxNoOfConcurrentOperations.

    The default value of this parameter is 8192. Only in rare cases of extremely high parallelism using unique hash indexes should it be necessary to increase this value. Using a smaller value is possible and can save memory if the DBA is certain that a high degree of parallelism is not required for the cluster.

  • MaxNoOfFiredTriggers

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 4000 | 0 - 4294967039 (0xFFFFFEFF) | N

    The default value of MaxNoOfFiredTriggers is 4000, which is sufficient for most situations. In some cases it can even be decreased if the DBA feels certain the need for parallelism in the cluster is not high.

    A record is created when an operation is performed that affects a unique hash index. Inserting or deleting a record in a table with unique hash indexes or updating a column that is part of a unique hash index fires an insert or a delete in the index table. The resulting record is used to represent this index table operation while waiting for the original operation that fired it to complete. This operation is short-lived but can still require a large number of records in its pool for situations with many parallel write operations on a base table containing a set of unique hash indexes.

  • TransactionBufferMemory

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 1M | 1K - 4294967039 (0xFFFFFEFF) | N

    The memory affected by this parameter is used for tracking operations fired when updating index tables and reading unique indexes. This memory is used to store the key and column information for these operations. It is only very rarely that the value for this parameter needs to be altered from the default.

    The default value for TransactionBufferMemory is 1MB.

    Normal read and write operations use a similar buffer, whose usage is even more short-lived. The compile-time parameter ZATTRBUF_FILESIZE (found in ndb/src/kernel/blocks/Dbtc/Dbtc.hpp) is set to 4000 × 128 bytes (500KB). A similar buffer for key information, ZDATABUF_FILESIZE (also in Dbtc.hpp), contains 4000 × 16 bytes (62.5KB) of buffer space. Dbtc is the module that handles transaction coordination.

Scans and buffering.  There are additional [ndbd] parameters in the Dblqh module (in ndb/src/kernel/blocks/Dblqh/Dblqh.hpp) that affect reads and updates. These include ZATTRINBUF_FILESIZE, set by default to 10000 × 128 bytes (1250KB), and ZDATABUF_FILE_SIZE, set by default to 10000 × 16 bytes (roughly 156KB) of buffer space. To date, there have been neither any reports from users nor any results from our own extensive tests suggesting that either of these compile-time limits should be increased.

  • MaxNoOfConcurrentScans

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 256 | 2 - 500 | N

    This parameter is used to control the number of parallel scans that can be performed in the cluster. Each transaction coordinator can handle the number of parallel scans defined for this parameter. Each scan query is performed by scanning all partitions in parallel. Each partition scan uses a scan record in the node where the partition is located, the number of records being the value of this parameter times the number of nodes. The cluster should be able to sustain MaxNoOfConcurrentScans scans concurrently from all nodes in the cluster.

    Scans are actually performed in two cases. The first of these cases occurs when no hash or ordered indexes exist to handle the query, in which case the query is executed by performing a full table scan. The second case is encountered when there is no hash index to support the query but there is an ordered index. Using the ordered index means executing a parallel range scan. The order is kept on the local partitions only, so it is necessary to perform the index scan on all partitions.

    The default value of MaxNoOfConcurrentScans is 256. The maximum value is 500.

  • MaxNoOfLocalScans

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | [see text] | 32 - 4294967039 (0xFFFFFEFF) | N

    Specifies the number of local scan records if many scans are not fully parallelized. When the number of local scan records is not provided, it is calculated as shown here:

    4 * MaxNoOfConcurrentScans * [# data nodes] + 2
    

    The minimum value is 32.
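
    For example, with the default MaxNoOfConcurrentScans of 256 and four data nodes, the calculated value is 4 * 256 * 4 + 2 = 4098 local scan records.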

  • BatchSizePerLocalScan

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 256 | 1 - 992 | N

    This parameter is used to calculate the number of lock records used to handle concurrent scan operations.

    BatchSizePerLocalScan has a strong connection to the BatchSize defined in the SQL nodes.

  • LongMessageBuffer

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 64M | 512K - 4294967039 (0xFFFFFEFF) | N

    This is an internal buffer used for passing messages within individual nodes and between nodes. The default is 64MB.

    This parameter seldom needs to be changed from the default.

  • MaxParallelCopyInstances

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 0 | 0 - 64 | S

    This parameter sets the parallelization used in the copy phase of a node restart or system restart, when a node that is currently just starting is synchronized with a node that already has current data, by copying over any changed records from the node that is up to date. Because full parallelism in such cases can lead to overload situations, MaxParallelCopyInstances provides a means to decrease it. This parameter's default value is 0, which means that the effective parallelism is equal to the number of LDM instances in the node just starting as well as the node updating it.

  • MaxParallelScansPerFragment

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 256 | 1 - 4294967039 (0xFFFFFEFF) | N

    It is possible to configure the maximum number of parallel scans (TUP scans and TUX scans) allowed before they begin queuing for serial handling. You can increase this value to take advantage of any unused CPU when performing a large number of scans in parallel, and so improve their performance.

    The default value for this parameter is 256.

Memory Allocation

MaxAllocate

Effective Version | Type/Units | Default | Range/Values | Restart Type
NDB 7.5.0 | unsigned | 32M | 1M - 1G | N

This is the maximum size of the memory unit to use when allocating memory for tables. In cases where NDB gives Out of memory errors, but it is evident by examining the cluster logs or the output of DUMP 1000 that all available memory has not yet been used, you can increase the value of this parameter (or MaxNoOfTables, or both) to cause NDB to make sufficient memory available.
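
Before increasing MaxAllocate or MaxNoOfTables, you can confirm how much memory is actually in use with the report mentioned earlier in this section; for example, from the system shell:

ndb_mgm -e "ALL REPORT MEMORYUSAGE"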

Hash Map Size

DefaultHashMapSize

Effective Version | Type/Units | Default | Range/Values | Restart Type
NDB 7.5.0 | LDM threads | 3840 | 0 - 3840 | N

The size of the table hash maps used by NDB is configurable using this parameter. DefaultHashMapSize can take any of three possible values (0, 240, 3840). These values and their effects are described in the following table:

Value | Description / Effect
0     | Use the lowest value set, if any, for this parameter among all data nodes and API nodes in the cluster; if it is not set on any data or API node, use the default value.
240   | Original hash map size (used by default in all NDB Cluster releases prior to NDB 7.2.7)
3840  | Larger hash map size (used by default beginning with NDB 7.2.7)

The original intended use for this parameter was to facilitate upgrades and especially downgrades to and from very old releases with differing default hash map sizes. This is not an issue when upgrading from NDB Cluster 7.4 to NDB Cluster 7.5.

Logging and checkpointing.  The following [ndbd] parameters control log and checkpoint behavior.

  • NoOfFragmentLogFiles

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 16 | 3 - 4294967039 (0xFFFFFEFF) | IN

    This parameter sets the number of REDO log files for the node, and thus the amount of space allocated to REDO logging. Because the REDO log files are organized in a ring, it is extremely important that the first and last log files in the set (sometimes referred to as the head and tail log files, respectively) do not meet. When these approach one another too closely, the node begins aborting all transactions encompassing updates due to a lack of room for new log records.

    A REDO log record is not removed until both required local checkpoints have been completed since that log record was inserted. Checkpointing frequency is determined by its own set of configuration parameters discussed elsewhere in this chapter.

    The default parameter value is 16, which by default means 16 sets of 4 16MB files for a total of 1024MB. The size of the individual log files is configurable using the FragmentLogFileSize parameter. In scenarios requiring a great many updates, the value for NoOfFragmentLogFiles may need to be set as high as 300 or even higher to provide sufficient space for REDO logs.

    If the checkpointing is slow and there are so many writes to the database that the log files are full and the log tail cannot be cut without jeopardizing recovery, all updating transactions are aborted with internal error code 410 (Out of log file space temporarily). This condition prevails until a checkpoint has completed and the log tail can be moved forward.

    Important

    This parameter cannot be changed on the fly; you must restart the node using --initial. If you wish to change this value for all data nodes in a running cluster, you can do so using a rolling node restart (using --initial when starting each data node).

  • FragmentLogFileSize

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 16M | 4M - 1G | IN

    Setting this parameter enables you to control directly the size of redo log files. This can be useful in situations when NDB Cluster is operating under a high load and it is unable to close fragment log files quickly enough before attempting to open new ones (only 2 fragment log files can be open at one time); increasing the size of the fragment log files gives the cluster more time before having to open each new fragment log file. The default value for this parameter is 16M.

    For more information about fragment log files, see the description for NoOfFragmentLogFiles.
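
    By way of a worked example: with the defaults, the redo log comprises 16 sets of 4 files of 16MB each, or 1024MB in total. A sketch of an update-heavy configuration (the values here are for illustration only, not recommendations) might reserve considerably more:

    [ndbd default]
    # 300 sets of 4 files of 256MB each = 300GB of redo log
    NoOfFragmentLogFiles=300
    FragmentLogFileSize=256M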

  • InitFragmentLogFiles

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | [see values] | SPARSE | SPARSE, FULL | IN

    By default, fragment log files are created sparsely when performing an initial start of a data node—that is, depending on the operating system and file system in use, not all bytes are necessarily written to disk. However, it is possible to override this behavior and force all bytes to be written, regardless of the platform and file system type being used, by means of this parameter. InitFragmentLogFiles takes either of two values:

    • SPARSE. Fragment log files are created sparsely. This is the default value.

    • FULL. Force all bytes of the fragment log file to be written to disk.

    Depending on your operating system and file system, setting InitFragmentLogFiles=FULL may help eliminate I/O errors on writes to the REDO log.

  • MaxNoOfOpenFiles

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | 0 | 20 - 4294967039 (0xFFFFFEFF) | N

    This parameter sets a ceiling on how many internal threads to allocate for open files. Any situation requiring a change in this parameter should be reported as a bug.

    The default value is 0. However, the minimum value to which this parameter can be set is 20.

  • InitialNoOfOpenFiles

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | files | 27 | 20 - 4294967039 (0xFFFFFEFF) | N

    This parameter sets the initial number of internal threads to allocate for open files.

    The default value is 27.

  • MaxNoOfSavedMessages

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 25 | 0 - 4294967039 (0xFFFFFEFF) | N

    This parameter sets the maximum number of errors written in the error log as well as the maximum number of trace files that are kept before overwriting the existing ones. Trace files are generated when, for whatever reason, the node crashes.

    The default is 25, which sets these maximums to 25 error messages and 25 trace files.

  • MaxLCPStartDelay

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | seconds | 0 | 0 - 600 | N

    In parallel data node recovery, only table data is actually copied and synchronized in parallel; synchronization of metadata such as dictionary and checkpoint information is done in a serial fashion. In addition, recovery of dictionary and checkpoint information cannot be executed in parallel with performing of local checkpoints. This means that, when starting or restarting many data nodes concurrently, data nodes may be forced to wait while a local checkpoint is performed, which can result in longer node recovery times.

    It is possible to force a delay in the local checkpoint to permit more (and possibly all) data nodes to complete metadata synchronization; once each data node's metadata synchronization is complete, all of the data nodes can recover table data in parallel, even while the local checkpoint is being executed. To force such a delay, set MaxLCPStartDelay, which determines the number of seconds the cluster can wait to begin a local checkpoint while data nodes continue to synchronize metadata. This parameter should be set in the [ndbd default] section of the config.ini file, so that it is the same for all data nodes. The maximum value is 600; the default is 0.

  • LcpScanProgressTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | seconds | 60 | 0 - 4294967039 (0xFFFFFEFF) | N

    A local checkpoint fragment scan watchdog checks periodically for no progress in each fragment scan performed as part of a local checkpoint, and shuts down the node if there is no progress after a given amount of time has elapsed. This interval can be set using the LcpScanProgressTimeout data node configuration parameter, which sets the maximum time for which the local checkpoint can be stalled before the LCP fragment scan watchdog shuts down the node.

    The default value is 60 seconds (providing compatibility with previous releases). Setting this parameter to 0 disables the LCP fragment scan watchdog altogether.

Metadata objects.  The next set of [ndbd] parameters defines pool sizes for metadata objects, used to define the maximum number of attributes, tables, indexes, and trigger objects used by indexes, events, and replication between clusters.

Note

These act merely as suggestions to the cluster, and any that are not specified revert to the default values shown.

  • MaxNoOfAttributes

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 1000 | 32 - 4294967039 (0xFFFFFEFF) | N

    This parameter sets a suggested maximum number of attributes that can be defined in the cluster; like MaxNoOfTables, it is not intended to function as a hard upper limit.

    (In older NDB Cluster releases, this parameter was sometimes treated as a hard limit for certain operations. This caused problems with NDB Cluster Replication, when it was possible to create more tables than could be replicated, and sometimes led to confusion when it was possible [or not possible, depending on the circumstances] to create more than MaxNoOfAttributes attributes.)

    The default value is 1000, with the minimum possible value being 32. The maximum is 4294967039. Each attribute consumes around 200 bytes of storage per node due to the fact that all metadata is fully replicated on the servers.

    When setting MaxNoOfAttributes, it is important to prepare in advance for any ALTER TABLE statements that you might want to perform in the future. This is due to the fact that, during the execution of ALTER TABLE on a Cluster table, 3 times as many attributes as in the original table are used, and a good practice is to permit double this amount. For example, if the NDB Cluster table having the greatest number of attributes (greatest_number_of_attributes) has 100 attributes, a good starting point for the value of MaxNoOfAttributes would be 6 * greatest_number_of_attributes = 600.

    You should also estimate the average number of attributes per table and multiply this by MaxNoOfTables. If this value is larger than the value obtained in the previous paragraph, you should use the larger value instead.

    Assuming that you can create all desired tables without any problems, you should also verify that this number is sufficient by trying an actual ALTER TABLE after configuring the parameter. If this is not successful, increase MaxNoOfAttributes by another multiple of MaxNoOfTables and test it again.
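
    As a worked sketch, suppose the largest table has 100 attributes, the average is 10 attributes per table (an assumed figure), and MaxNoOfTables is 128. The ALTER TABLE rule suggests 6 * 100 = 600, while the per-table estimate gives 10 * 128 = 1280; following the guidance above, the larger value (1280) would be used for MaxNoOfAttributes.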

  • MaxNoOfTables

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 128 | 8 - 20320 | N

    A table object is allocated for each table and for each unique hash index in the cluster. This parameter sets a suggested maximum number of table objects for the cluster as a whole; like MaxNoOfAttributes, it is not intended to function as a hard upper limit.

    (In older NDB Cluster releases, this parameter was sometimes treated as a hard limit for certain operations. This caused problems with NDB Cluster Replication, when it was possible to create more tables than could be replicated, and sometimes led to confusion when it was possible [or not possible, depending on the circumstances] to create more than MaxNoOfTables tables.)

    For each attribute that has a BLOB data type an extra table is used to store most of the BLOB data. These tables also must be taken into account when defining the total number of tables.

    The default value of this parameter is 128. The minimum is 8 and the maximum is 20320. Each table object consumes approximately 20KB per node.

    Note

    The sum of MaxNoOfTables and MaxNoOfOrderedIndexes must not exceed 2^32 − 2 (4294967294).

  • MaxNoOfOrderedIndexes

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 128 | 0 - 4294967039 (0xFFFFFEFF) | N

    For each ordered index in the cluster, an object is allocated describing what is being indexed and its storage segments. By default, each index so defined also defines an ordered index. Each unique index and primary key has both an ordered index and a hash index. MaxNoOfOrderedIndexes sets the total number of ordered indexes that can be in use in the system at any one time.

    The default value of this parameter is 128. Each index object consumes approximately 10KB of data per node.

    Note

    The sum of MaxNoOfTables and MaxNoOfOrderedIndexes must not exceed 2^32 − 2 (4294967294).

  • MaxNoOfTriggers

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | integer | 768 | 0 - 4294967039 (0xFFFFFEFF) | N

    Internal update, insert, and delete triggers are allocated for each unique hash index. (This means that three triggers are created for each unique hash index.) However, an ordered index requires only a single trigger object. Backups also use three trigger objects for each normal table in the cluster.

    Replication between clusters also makes use of internal triggers.

    This parameter sets the maximum number of trigger objects in the cluster.

    The default value is 768.

  • MaxNoOfSubscriptions

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | 0 | 0 - 4294967039 (0xFFFFFEFF) | N

    Each NDB table in an NDB Cluster requires a subscription in the NDB kernel. For some NDB API applications, it may be necessary or desirable to change this parameter. However, for normal usage with MySQL servers acting as SQL nodes, there is not any need to do so.

    The default value for MaxNoOfSubscriptions is 0, which is treated as equal to MaxNoOfTables. Each subscription consumes 108 bytes.

  • MaxNoOfSubscribers

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | 0 | 0 - 4294967039 (0xFFFFFEFF) | N

    This parameter is of interest only when using NDB Cluster Replication. The default value is 0, which is treated as 2 * MaxNoOfTables; that is, there is one subscription per NDB table for each of two MySQL servers (one acting as the replication master and the other as the slave). Each subscriber uses 16 bytes of memory.

    When using circular replication, multi-master replication, and other replication setups involving more than 2 MySQL servers, you should increase this parameter to the number of mysqld processes included in replication (this is often, but not always, the same as the number of clusters). For example, if you have a circular replication setup using three NDB Clusters, with one mysqld attached to each cluster, and each of these mysqld processes acts as a master and as a slave, you should set MaxNoOfSubscribers equal to 3 * MaxNoOfTables.

    For more information, see Section 21.6, “NDB Cluster Replication”.
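
    For example, a config.ini excerpt for the three-cluster circular setup just described might look like this (the MaxNoOfTables value is an assumed figure, used only to make the arithmetic concrete):

        [ndbd default]
        MaxNoOfTables=1024
        # One mysqld per cluster, each acting as both master and slave:
        # MaxNoOfSubscribers = 3 * MaxNoOfTables = 3072.
        MaxNoOfSubscribers=3072
        # Memory cost: 3072 subscribers * 16 bytes = ~48KB per data node.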

  • MaxNoOfConcurrentSubOperations

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | 256 | 0 - 4294967039 (0xFFFFFEFF) | N

    This parameter sets a ceiling on the number of operations that can be performed by all API nodes in the cluster at one time. The default value (256) is sufficient for normal operations, and might need to be adjusted only in scenarios where there are a great many API nodes each performing a high volume of operations concurrently.

Boolean parameters.  The behavior of data nodes is also affected by a set of [ndbd] parameters taking on boolean values. These parameters can each be specified as TRUE by setting them equal to 1 or Y, and as FALSE by setting them equal to 0 or N.
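
For example, the following config.ini lines show the equivalent spellings (the parameters chosen here are arbitrary illustrations):

    [ndbd default]
    # TRUE can be written as 1 or Y; this line has the same effect as ODirect=Y.
    ODirect=1
    # FALSE can be written as 0 or N; this line has the same effect as StopOnError=0.
    StopOnError=N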

  • LateAlloc

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | numeric | 1 | 0 - 1 | N

    Allocate memory for this data node after a connection to the management server has been established. Enabled by default.

  • LockPagesInMainMemory

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | numeric | 0 | 0 - 2 | N

    For a number of operating systems, including Solaris and Linux, it is possible to lock a process into memory and so avoid any swapping to disk. This can be used to help guarantee the cluster's real-time characteristics.

    This parameter takes one of the integer values 0, 1, or 2, which act as shown in the following list:

    • 0: Disables locking. This is the default value.

    • 1: Performs the lock after allocating memory for the process.

    • 2: Performs the lock before memory for the process is allocated.

    If the operating system is not configured to permit unprivileged users to lock pages, then the data node process making use of this parameter may have to be run as system root. (LockPagesInMainMemory uses the mlockall function. Beginning with Linux kernel 2.6.9, unprivileged users can lock memory up to the limit set by max locked memory; for more information, see ulimit -l and http://linux.die.net/man/2/mlock.)

    Note

    In older NDB Cluster releases, this parameter was a Boolean. 0 or false was the default setting, and disabled locking. 1 or true enabled locking of the process after its memory was allocated. NDB Cluster 7.5 treats true or false for the value of this parameter as an error.

    Important

    Beginning with glibc 2.10, glibc uses per-thread arenas to reduce lock contention on a shared pool, which consumes real memory. In general, a data node process does not need per-thread arenas, since it does not perform any memory allocation after startup. (This difference in allocators does not appear to affect performance significantly.)

    The glibc behavior is intended to be configurable via the MALLOC_ARENA_MAX environment variable, but a bug in this mechanism prior to glibc 2.16 meant that this variable could not be set to less than 8, so that the wasted memory could not be reclaimed. (Bug #15907219; see also http://sourceware.org/bugzilla/show_bug.cgi?id=13137 for more information concerning this issue.)

    One possible workaround for this problem is to use the LD_PRELOAD environment variable to preload a jemalloc memory allocation library to take the place of that supplied with glibc.
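
    A minimal sketch of such a setup follows; the jemalloc library path is an assumption, and must be adjusted to wherever the library is installed on your system:

        # config.ini: lock data node memory after allocation
        [ndbd default]
        LockPagesInMainMemory=1

        # Shell, on the data node host: check the unprivileged lock limit
        ulimit -l

        # Shell: preload jemalloc in place of the glibc allocator when
        # starting the data node (library path is hypothetical)
        LD_PRELOAD=/usr/lib64/libjemalloc.so ndbd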

  • StopOnError

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | boolean | 1 | 0, 1 | N

    This parameter specifies whether a data node process should exit or perform an automatic restart when an error condition is encountered.

    This parameter's default value is 1; this means that, by default, an error causes the data node process to halt.

    When an error is encountered and StopOnError is 0, the data node process is restarted.

    Prior to NDB Cluster 7.5.5, if the data node process exits in an uncontrolled fashion (due, for example, to performing kill -9 on the data node process while performing a query, or to a segmentation fault), and StopOnError is set to 0, the angel process attempts to restart it in exactly the same way as it was started previously—that is, using the same startup options that were employed the last time the node was started. Thus, if the data node process was originally started using the --initial option, it is also restarted with --initial. This means that, in such cases, if the failure occurs on a sufficient number of data nodes in a very short interval, the effect is the same as if you had performed an initial restart of the entire cluster, leading to loss of all data. This issue is resolved in NDB Cluster 7.5.5 and later NDB 7.5 releases (Bug #83510, Bug #24945638).

    Users of MySQL Cluster Manager should note that, when StopOnError equals 1, this prevents the MySQL Cluster Manager agent from restarting any data nodes after it has performed its own restart and recovery. See Starting and Stopping the Agent on Linux, for more information.
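
    For example, to have the angel process restart a data node automatically after an error, subject to the pre-7.5.5 caveat above, you could set:

        [ndbd default]
        # Restart the data node process on error instead of halting it.
        StopOnError=0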

  • CrashOnCorruptedTuple

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | boolean | true | true, false | S

    When this parameter is enabled, it forces a data node to shut down whenever it encounters a corrupted tuple. In NDB 7.5, it is enabled by default.

  • Diskless

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | true|false (1|0) | false | true, false | IS

    It is possible to specify NDB Cluster tables as diskless, meaning that tables are not checkpointed to disk and that no logging occurs. Such tables exist only in main memory. A consequence of using diskless tables is that neither the tables nor the records in those tables survive a crash. However, when operating in diskless mode, it is possible to run ndbd on a diskless computer.

    Important

    This feature causes the entire cluster to operate in diskless mode.

    When this feature is enabled, Cluster online backup is disabled. In addition, a partial start of the cluster is not possible.

    Diskless is disabled by default.

  • ODirect

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | boolean | false | true, false | N

    Enabling this parameter causes NDB to attempt using O_DIRECT writes for LCP, backups, and redo logs, often lowering kswapd and CPU usage. When using NDB Cluster on Linux, enable ODirect if you are using a 2.6 or later kernel.

    ODirect is disabled by default.

  • RestartOnErrorInsert

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | error code | 2 | 0 - 4 | N

    This feature is accessible only when building the debug version where it is possible to insert errors in the execution of individual blocks of code as part of testing.

    This feature is disabled by default.

  • CompressedBackup

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | boolean | false | true, false | N

    Enabling this parameter causes backup files to be compressed. The compression used is equivalent to gzip --fast, and can save 50% or more of the space required on the data node to store uncompressed backup files. Compressed backups can be enabled for individual data nodes, or for all data nodes (by setting this parameter in the [ndbd default] section of the config.ini file).

    Important

    You cannot restore a compressed backup to a cluster running a MySQL version that does not support this feature.

    The default value is 0 (disabled).

  • CompressedLCP

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | boolean | false | true, false | N

    Setting this parameter to 1 causes local checkpoint files to be compressed. The compression used is equivalent to gzip --fast, and can save 50% or more of the space required on the data node to store uncompressed checkpoint files. Compressed LCPs can be enabled for individual data nodes, or for all data nodes (by setting this parameter in the [ndbd default] section of the config.ini file).

    Important

    You cannot restore a compressed local checkpoint to a cluster running a MySQL version that does not support this feature.

    The default value is 0 (disabled).
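
    To enable both kinds of compression for all data nodes at once, an excerpt such as the following could be used:

        [ndbd default]
        # Compression equivalent to gzip --fast; may save 50% or more space.
        CompressedBackup=1
        CompressedLCP=1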

Controlling Timeouts, Intervals, and Disk Paging

There are a number of [ndbd] parameters specifying timeouts and intervals between various actions in Cluster data nodes. Most of the timeout values are specified in milliseconds. Any exceptions to this are mentioned where applicable.

  • TimeBetweenWatchDogCheck

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 6000 | 70 - 4294967039 (0xFFFFFEFF) | N

    To prevent the main thread from getting stuck in an endless loop at some point, a watchdog thread checks the main thread. This parameter specifies the number of milliseconds between checks. If the process remains in the same state after three checks, the watchdog thread terminates it.

    This parameter can easily be changed for purposes of experimentation or to adapt to local conditions. It can be specified on a per-node basis although there seems to be little reason for doing so.

    The default timeout is 6000 milliseconds (6 seconds).

  • TimeBetweenWatchDogCheckInitial

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 6000 | 70 - 4294967039 (0xFFFFFEFF) | N

    This is similar to the TimeBetweenWatchDogCheck parameter, except that TimeBetweenWatchDogCheckInitial controls the amount of time that passes between execution checks inside a database node in the early start phases during which memory is allocated.

    The default timeout is 6000 milliseconds (6 seconds).

  • StartPartialTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 30000 | 0 - 4294967039 (0xFFFFFEFF) | N

    This parameter specifies how long the Cluster waits for all data nodes to come up before the cluster initialization routine is invoked. This timeout is used to avoid a partial Cluster startup whenever possible.

    This parameter is overridden when performing an initial start or initial restart of the cluster.

    The default value is 30000 milliseconds (30 seconds). 0 disables the timeout, in which case the cluster may start only if all nodes are available.

  • StartPartitionedTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 60000 | 0 - 4294967039 (0xFFFFFEFF) | N

    If the cluster is ready to start after waiting for StartPartialTimeout milliseconds but is still possibly in a partitioned state, the cluster waits until this timeout has also passed. If StartPartitionedTimeout is set to 0, the cluster waits indefinitely.

    This parameter is overridden when performing an initial start or initial restart of the cluster.

    The default timeout is 60000 milliseconds (60 seconds).
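
    As an illustration, the following excerpt (the values are arbitrary examples) lengthens both timeouts for a cluster whose data nodes are known to start slowly:

        [ndbd default]
        # Wait 60 seconds for all data nodes before considering a partial start.
        StartPartialTimeout=60000
        # Then wait up to 120 seconds more if the cluster may be partitioned.
        StartPartitionedTimeout=120000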

  • StartFailureTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 0 | 0 - 4294967039 (0xFFFFFEFF) | N

    If a data node has not completed its startup sequence within the time specified by this parameter, the node startup fails. Setting this parameter to 0 (the default value) means that no data node timeout is applied.

    For nonzero values, this parameter is measured in milliseconds. For data nodes containing extremely large amounts of data, this parameter should be increased. For example, in the case of a data node containing several gigabytes of data, a period as long as 10 to 15 minutes (that is, 600000 to 900000 milliseconds) might be required to perform a node restart.

  • StartNoNodeGroupTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 15000 | 0 - 4294967039 (0xFFFFFEFF) | N

    When a data node is configured with Nodegroup = 65536, it is regarded as not being assigned to any node group. When that is done, the cluster waits StartNoNodeGroupTimeout milliseconds, then treats such nodes as though they had been added to the list passed to the --nowait-nodes option, and starts. The default value is 15000 (that is, the management server waits 15 seconds). Setting this parameter equal to 0 means that the cluster waits indefinitely.

    StartNoNodeGroupTimeout must be the same for all data nodes in the cluster; for this reason, you should always set it in the [ndbd default] section of the config.ini file, rather than for individual data nodes.

    See Section 21.5.14, “Adding NDB Cluster Data Nodes Online”, for more information.

  • HeartbeatIntervalDbDb

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 5000 | 10 - 4294967039 (0xFFFFFEFF) | N

    One of the primary methods of discovering failed nodes is by the use of heartbeats. This parameter states how often heartbeat signals are sent and how often to expect to receive them. Heartbeats cannot be disabled.

    After missing four heartbeat intervals in a row, the node is declared dead. Thus, the maximum time for discovering a failure through the heartbeat mechanism is five times the heartbeat interval.

    The default heartbeat interval is 5000 milliseconds (5 seconds). This parameter must not be changed drastically and should not vary widely between nodes. If one node uses 5000 milliseconds and the node watching it uses 1000 milliseconds, obviously the node will be declared dead very quickly. This parameter can be changed during an online software upgrade, but only in small increments.

    See also Network communication and latency, as well as the description of the ConnectCheckIntervalDelay configuration parameter.
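
    For example, with the default interval, the failure-detection window can be as long as 5 * 5000 = 25000 milliseconds; lowering the interval cluster-wide, as in this sketch, shortens that window at the cost of more heartbeat traffic:

        [ndbd default]
        # Detection window is up to 5 * HeartbeatIntervalDbDb;
        # 1500 ms gives detection within about 7.5 seconds.
        HeartbeatIntervalDbDb=1500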

  • HeartbeatIntervalDbApi

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 1500 | 100 - 4294967039 (0xFFFFFEFF) | N

    Each data node sends heartbeat signals to each MySQL server (SQL node) to ensure that it remains in contact. If a MySQL server fails to send a heartbeat in time, it is declared dead, in which case all ongoing transactions are completed and all resources are released. The SQL node cannot reconnect until all activities initiated by the previous MySQL instance have been completed. The missed-heartbeat criteria for this determination are the same as described for HeartbeatIntervalDbDb.

    The default interval is 1500 milliseconds (1.5 seconds). This interval can vary between individual data nodes because each data node watches the MySQL servers connected to it, independently of all other data nodes.

    For more information, see Network communication and latency.

  • HeartbeatOrder

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | numeric | 0 | 0 - 65535 | S

    Data nodes send heartbeats to one another in a circular fashion whereby each data node monitors the previous one. If a heartbeat is not detected by a given data node, this node declares the previous data node in the circle dead (that is, no longer accessible by the cluster). The determination that a data node is dead is made globally; in other words, once a data node is declared dead, it is regarded as such by all nodes in the cluster.

    It is possible for heartbeats between data nodes residing on different hosts to be too slow compared to heartbeats between other pairs of nodes (for example, due to a very low heartbeat interval or a temporary connection problem), such that a data node is declared dead even though the node can still function as part of the cluster.

    In this type of situation, it may be that the order in which heartbeats are transmitted between data nodes makes a difference as to whether or not a particular data node is declared dead. If this declaration occurs unnecessarily, it can in turn lead to the unnecessary loss of a node group, and thus to a failure of the cluster.

    Consider a setup in which four data nodes A, B, C, and D run on two host computers, host1 and host2, and make up two node groups, as shown in the following table:

    Node Group   | Nodes Running on host1 | Nodes Running on host2
    Node Group 0 | Node A                 | Node B
    Node Group 1 | Node C                 | Node D

    Suppose the heartbeats are transmitted in the order A->B->C->D->A. In this case, the loss of the heartbeat between the hosts causes node B to declare node A dead and node C to declare node B dead. This results in loss of Node Group 0, and so the cluster fails. On the other hand, if the order of transmission is A->B->D->C->A (and all other conditions remain as previously stated), the loss of the heartbeat causes nodes A and D to be declared dead; in this case, each node group has one surviving node, and the cluster survives.

    The HeartbeatOrder configuration parameter makes the order of heartbeat transmission user-configurable. The default value for HeartbeatOrder is zero; allowing the default value to be used on all data nodes causes the order of heartbeat transmission to be determined by NDB. If this parameter is used, it must be set to a nonzero value (maximum 65535) for every data node in the cluster, and this value must be unique for each data node; this causes the heartbeat transmission to proceed from data node to data node in the order of their HeartbeatOrder values from lowest to highest (and then directly from the data node having the highest HeartbeatOrder to the data node having the lowest value, to complete the circle). The values need not be consecutive; for example, to force the heartbeat transmission order A->B->D->C->A in the scenario outlined previously, you could set the HeartbeatOrder values as shown here:

    Node | HeartbeatOrder
    A    | 10
    B    | 20
    C    | 30
    D    | 25

    To use this parameter to change the heartbeat transmission order in a running NDB Cluster, you must first set HeartbeatOrder for each data node in the cluster in the global configuration (config.ini) file (or files). To cause the change to take effect, you must perform either of the following:

    • A complete shutdown and restart of the entire cluster.

    • Two rolling restarts of the cluster in succession. All nodes must be restarted in the same order in both rolling restarts.

    You can use DUMP 908 to observe the effect of this parameter in the data node logs.
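
    Expressed as config.ini sections, the values from the preceding table might be set as follows (node IDs and host names are illustrative):

        # Node A
        [ndbd]
        NodeId=1
        HostName=host1
        HeartbeatOrder=10

        # Node B
        [ndbd]
        NodeId=2
        HostName=host2
        HeartbeatOrder=20

        # Node C
        [ndbd]
        NodeId=3
        HostName=host1
        HeartbeatOrder=30

        # Node D
        [ndbd]
        NodeId=4
        HostName=host2
        HeartbeatOrder=25

    With these settings, heartbeat transmission proceeds in the order A->B->D->C->A, which (as shown previously) leaves one surviving node in each node group if the connection between the hosts is lost.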

  • ConnectCheckIntervalDelay

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 0 | 0 - 4294967039 (0xFFFFFEFF) | N

    This parameter enables connection checking between data nodes after one of them has failed heartbeat checks for 5 intervals of up to HeartbeatIntervalDbDb milliseconds.

    Such a data node that further fails to respond within an interval of ConnectCheckIntervalDelay milliseconds is considered suspect, and is considered dead after two such intervals. This can be useful in setups with known latency issues.

    The default value for this parameter is 0 (disabled).

  • TimeBetweenLocalCheckpoints

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | number of 4-byte words, as a base-2 logarithm | 20 | 0 - 31 | N

    This parameter is an exception in that it does not specify a time to wait before starting a new local checkpoint; rather, it is used to ensure that local checkpoints are not performed in a cluster where relatively few updates are taking place. In most clusters with high update rates, it is likely that a new local checkpoint is started immediately after the previous one has been completed.

    The size of all write operations executed since the start of the previous local checkpoint is added. This parameter is also exceptional in that it is specified as the base-2 logarithm of the number of 4-byte words, so that the default value 20 means 4MB (4 × 2^20 bytes) of write operations, 21 would mean 8MB, and so on up to a maximum value of 31, which equates to 8GB of write operations.

    All the write operations in the cluster are added together. Setting TimeBetweenLocalCheckpoints to 6 or less means that local checkpoints will be executed continuously without pause, independent of the cluster's workload.

  • TimeBetweenGlobalCheckpoints

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 2000 | 20 - 32000 | N

    When a transaction is committed, it is committed in main memory in all nodes on which the data is mirrored. However, transaction log records are not flushed to disk as part of the commit. The reasoning behind this behavior is that having the transaction safely committed on at least two autonomous host machines should meet reasonable standards for durability.

    It is also important to ensure that even the worst of cases—a complete crash of the cluster—is handled properly. To guarantee that this happens, all transactions taking place within a given interval are put into a global checkpoint, which can be thought of as a set of committed transactions that has been flushed to disk. In other words, as part of the commit process, a transaction is placed in a global checkpoint group. Later, this group's log records are flushed to disk, and then the entire group of transactions is safely committed to disk on all computers in the cluster.

    This parameter defines the interval between global checkpoints. The default is 2000 milliseconds.

  • TimeBetweenGlobalCheckpointsTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 120000 | 10 - 4294967039 (0xFFFFFEFF) | N

    This parameter defines the minimum timeout between global checkpoints. The default is 120000 milliseconds.

  • TimeBetweenEpochs

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 100 | 0 - 32000 | N

    This parameter defines the interval between synchronization epochs for NDB Cluster Replication. The default value is 100 milliseconds.

    TimeBetweenEpochs is part of the implementation of micro-GCPs, which can be used to improve the performance of NDB Cluster Replication.

  • TimeBetweenEpochsTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 0 | 0 - 256000 | N

    This parameter defines a timeout for synchronization epochs for NDB Cluster Replication. If a node fails to participate in a global checkpoint within the time determined by this parameter, the node is shut down. The default value is 0; in other words, the timeout is disabled.

    TimeBetweenEpochsTimeout is part of the implementation of micro-GCPs, which can be used to improve the performance of NDB Cluster Replication.

    The current value of this parameter and a warning are written to the cluster log whenever a GCP save takes longer than 1 minute or a GCP commit takes longer than 10 seconds.

    Setting this parameter to zero has the effect of disabling GCP stops caused by save timeouts, commit timeouts, or both. The maximum possible value for this parameter is 256000 milliseconds.
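
    For example, the following excerpt (the values are illustrative; the first two are the defaults) keeps the usual global checkpoint and epoch intervals while adding a 4-second epoch timeout:

        [ndbd default]
        # Flush committed transaction groups to disk every 2 seconds.
        TimeBetweenGlobalCheckpoints=2000
        # Micro-GCP epoch interval used by NDB Cluster Replication.
        TimeBetweenEpochs=100
        # Shut down a node that fails to participate in an epoch within 4 seconds.
        TimeBetweenEpochsTimeout=4000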

  • MaxBufferedEpochs

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | epochs | 100 | 0 - 100000 | N

    The number of unprocessed epochs by which a subscribing node can lag behind. Exceeding this number causes a lagging subscriber to be disconnected.

    The default value of 100 is sufficient for most normal operations. If a subscribing node does lag enough to cause disconnections, it is usually due to network or scheduling issues with regard to processes or threads. (In rare circumstances, the problem may be due to a bug in the NDB client.) It may be desirable to set the value lower than the default when epochs are longer.

    Disconnecting a lagging subscriber in this way prevents client issues from affecting the data node service, which could otherwise run out of memory to buffer data and eventually shut down. Instead, only the client is affected as a result of the disconnect (for example, by gap events in the binary log), forcing the client to reconnect or to restart the process.

  • MaxBufferedEpochBytes

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 26214400 | 26214400 (0x01900000) - 4294967039 (0xFFFFFEFF) | N

    The total number of bytes allocated for buffering epochs by this node.

  • TimeBetweenInactiveTransactionAbortCheck

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 1000 | 1000 - 4294967039 (0xFFFFFEFF) | N

    Timeout handling is performed by checking a timer on each transaction once for every interval specified by this parameter. Thus, if this parameter is set to 1000 milliseconds, every transaction will be checked for timing out once per second.

    The default value is 1000 milliseconds (1 second).

  • TransactionInactiveTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | [see text] | 0 - 4294967039 (0xFFFFFEFF) | N

    This parameter states the maximum time that is permitted to lapse between operations in the same transaction before the transaction is aborted.

    The default for this parameter is 4G (also the maximum). For a real-time database that needs to ensure that no transaction keeps locks for too long, this parameter should be set to a relatively small value. Setting it to 0 means that the application never times out. The unit is milliseconds.

  • TransactionDeadlockDetectionTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 1200 | 50 - 4294967039 (0xFFFFFEFF) | N

    When a node executes a query involving a transaction, the node waits for the other nodes in the cluster to respond before continuing. This parameter sets the amount of time that the transaction can spend executing within a data node, that is, the time that the transaction coordinator waits for each data node participating in the transaction to execute a request.

    A failure to respond can occur for any of the following reasons:

    • The node is dead.

    • The operation has entered a lock queue.

    • The node requested to perform the action could be heavily overloaded.

    This timeout parameter states how long the transaction coordinator waits for query execution by another node before aborting the transaction, and is important for both node failure handling and deadlock detection.

    The default timeout value is 1200 milliseconds (1.2 seconds).

    The minimum for this parameter is 50 milliseconds.

  • DiskSyncSize

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 4M | 32K - 4294967039 (0xFFFFFEFF) | N

    This is the maximum number of bytes to store before flushing data to a local checkpoint file. This is done to prevent write buffering, which can impede performance significantly. This parameter is not intended to take the place of TimeBetweenLocalCheckpoints.

    Note

    When ODirect is enabled, it is not necessary to set DiskSyncSize; in fact, in such cases its value is simply ignored.

    The default value is 4M (4 megabytes).

  • MaxDiskWriteSpeed

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | numeric | 20M | 1M - 1024G | S

    Set the maximum rate for writing to disk, in bytes per second, by local checkpoints and backup operations when no restarts (by this data node or any other data node) are taking place in this NDB Cluster.

    For setting the maximum rate of disk writes allowed while this data node is restarting, use MaxDiskWriteSpeedOwnRestart. For setting the maximum rate of disk writes allowed while other data nodes are restarting, use MaxDiskWriteSpeedOtherNodeRestart. The minimum speed for disk writes by all LCPs and backup operations can be adjusted by setting MinDiskWriteSpeed.

  • MaxDiskWriteSpeedOtherNodeRestart

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | numeric | 50M | 1M - 1024G | S

    Set the maximum rate for writing to disk, in bytes per second, by local checkpoints and backup operations when one or more data nodes in this NDB Cluster are restarting, other than this node.

    For setting the maximum rate of disk writes allowed while this data node is restarting, use MaxDiskWriteSpeedOwnRestart. For setting the maximum rate of disk writes allowed when no data nodes are restarting anywhere in the cluster, use MaxDiskWriteSpeed. The minimum speed for disk writes by all LCPs and backup operations can be adjusted by setting MinDiskWriteSpeed.

  • MaxDiskWriteSpeedOwnRestart

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | numeric | 200M | 1M - 1024G | S

    Set the maximum rate for writing to disk, in bytes per second, by local checkpoints and backup operations while this data node is restarting.

    For setting the maximum rate of disk writes allowed while other data nodes are restarting, use MaxDiskWriteSpeedOtherNodeRestart. For setting the maximum rate of disk writes allowed when no data nodes are restarting anywhere in the cluster, use MaxDiskWriteSpeed. The minimum speed for disk writes by all LCPs and backup operations can be adjusted by setting MinDiskWriteSpeed.

  • MinDiskWriteSpeed

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | numeric | 10M | 1M - 1024G | S

    Set the minimum rate for writing to disk, in bytes per second, by local checkpoints and backup operations.

    The maximum rates of disk writes allowed for LCPs and backups under various conditions are adjustable using the parameters MaxDiskWriteSpeed, MaxDiskWriteSpeedOwnRestart, and MaxDiskWriteSpeedOtherNodeRestart. See the descriptions of these parameters for more information.
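
    Taken together, the four parameters might appear in the [ndbd default] section as shown here (the values given are the defaults, repeated only for illustration):

        [ndbd default]
        # Floor for LCP and backup disk writes.
        MinDiskWriteSpeed=10M
        # Ceiling when no data node anywhere in the cluster is restarting.
        MaxDiskWriteSpeed=20M
        # Ceiling while one or more other data nodes are restarting.
        MaxDiskWriteSpeedOtherNodeRestart=50M
        # Ceiling while this data node itself is restarting.
        MaxDiskWriteSpeedOwnRestart=200M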

  • ArbitrationTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 7500 | 10 - 4294967039 (0xFFFFFEFF) | N

    This parameter specifies how long data nodes wait for a response from the arbitrator to an arbitration message. If this is exceeded, the network is assumed to have split.

    The default value is 7500 milliseconds (7.5 seconds).

  • Arbitration

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | enumeration | Default | Default, Disabled, WaitExternal | N

    The Arbitration parameter enables a choice of arbitration schemes, corresponding to one of three possible values for this parameter:

    • Default.  This enables arbitration to proceed normally, as determined by the ArbitrationRank settings for the management and API nodes. This is the default value.

    • Disabled.  Setting Arbitration = Disabled in the [ndbd default] section of the config.ini file accomplishes the same task as setting ArbitrationRank to 0 on all management and API nodes. When Arbitration is set in this way, any ArbitrationRank settings are ignored.

    • WaitExternal.  The Arbitration parameter also makes it possible to configure arbitration in such a way that the cluster waits until after the time determined by ArbitrationTimeout has passed for an external cluster manager application to perform arbitration instead of handling arbitration internally. This can be done by setting Arbitration = WaitExternal in the [ndbd default] section of the config.ini file. For best results with the WaitExternal setting, it is recommended that ArbitrationTimeout be twice as long as the interval required by the external cluster manager to perform arbitration.

    Important

    This parameter should be used only in the [ndbd default] section of the cluster configuration file. The behavior of the cluster is unspecified when Arbitration is set to different values for individual data nodes.
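
    For example, to delegate arbitration to an external cluster manager assumed to need at most 4 seconds to act, and following the recommendation that ArbitrationTimeout be twice that interval:

        [ndbd default]
        Arbitration=WaitExternal
        # 2 * the 4 seconds assumed for the external manager to arbitrate.
        ArbitrationTimeout=8000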

  • RestartSubscriberConnectTimeout

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | milliseconds | 12000 | 0 - 4294967039 (0xFFFFFEFF) | S

    This parameter determines the time that a data node waits for subscribing API nodes to connect. Once this timeout expires, any missing API nodes are disconnected from the cluster. To disable this timeout, set RestartSubscriberConnectTimeout to 0.

    While this parameter is specified in milliseconds, the timeout itself is resolved to the next-greatest whole second.

Buffering and logging.  Several [ndbd] configuration parameters enable the advanced user to have more control over the resources used by node processes and to adjust various buffer sizes at need.

These buffers are used as front ends to the file system when writing log records to disk. If the node is running in diskless mode, these parameters can be set to their minimum values without penalty due to the fact that disk writes are faked by the NDB storage engine's file system abstraction layer.

  • UndoIndexBuffer

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | 2M | 1M - 4294967039 (0xFFFFFEFF) | N

    The UNDO index buffer, whose size is set by this parameter, is used during local checkpoints. The NDB storage engine uses a recovery scheme based on checkpoint consistency in conjunction with an operational REDO log. To produce a consistent checkpoint without blocking the entire system for writes, UNDO logging is done while performing the local checkpoint. UNDO logging is activated on a single table fragment at a time. This optimization is possible because tables are stored entirely in main memory.

    The UNDO index buffer is used for the updates on the primary key hash index. Inserts and deletes rearrange the hash index; the NDB storage engine writes UNDO log records that map all physical changes to an index page so that they can be undone at system restart. It also logs all active insert operations for each fragment at the start of a local checkpoint.

    Reads and updates set lock bits and update a header in the hash index entry. These changes are handled by the page-writing algorithm to ensure that these operations need no UNDO logging.

    This buffer is 2MB by default. The minimum value is 1MB, which is sufficient for most applications. For applications doing extremely large or numerous inserts and deletes together with large transactions and large primary keys, it may be necessary to increase the size of this buffer. If this buffer is too small, the NDB storage engine issues internal error code 677 (Index UNDO buffers overloaded).

    Important

    It is not safe to decrease the value of this parameter during a rolling restart.

  • UndoDataBuffer

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | unsigned | 16M | 1M - 4294967039 (0xFFFFFEFF) | N

    This parameter sets the size of the UNDO data buffer, which performs a function similar to that of the UNDO index buffer, except the UNDO data buffer is used with regard to data memory rather than index memory. This buffer is used during the local checkpoint phase of a fragment for inserts, deletes, and updates.

    Because UNDO log entries tend to grow larger as more operations are logged, this buffer is also larger than its index memory counterpart, with a default value of 16MB.

    This amount of memory may be unnecessarily large for some applications. In such cases, it is possible to decrease this size to a minimum of 1MB.

    It is rarely necessary to increase the size of this buffer. If there is such a need, it is a good idea to check whether the disks can actually handle the load caused by database update activity. A lack of sufficient disk space cannot be overcome by increasing the size of this buffer.

    If this buffer is too small and gets congested, the NDB storage engine issues internal error code 891 (Data UNDO buffers overloaded).

    Important

    It is not safe to decrease the value of this parameter during a rolling restart.

  • RedoBuffer

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 32M | 1M - 4294967039 (0xFFFFFEFF) | N

    All update activities also need to be logged. The REDO log makes it possible to replay these updates whenever the system is restarted. The NDB recovery algorithm uses a fuzzy checkpoint of the data together with the UNDO log, and then applies the REDO log to play back all changes up to the restoration point.

    RedoBuffer sets the size of the buffer in which the REDO log is written. The default value is 32MB; the minimum value is 1MB.

    If this buffer is too small, the NDB storage engine issues error code 1221 (REDO log buffers overloaded). For this reason, you should exercise care if you attempt to decrease the value of RedoBuffer as part of an online change in the cluster's configuration.

    ndbmtd allocates a separate buffer for each LDM thread (see ThreadConfig). For example, with 4 LDM threads, an ndbmtd data node actually has 4 buffers and allocates RedoBuffer bytes to each one, for a total of 4 * RedoBuffer bytes.
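
    For instance, assuming a ThreadConfig with 4 LDM threads, the following setting results in 4 * 48M = 192M of redo buffering on that ndbmtd node (the value is an arbitrary example):

        [ndbd default]
        # Per-LDM-thread buffer; with 4 LDM threads, total is 4 * 48M = 192M.
        RedoBuffer=48M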

  • EventLogBufferSize

    Effective Version | Type/Units | Default | Range/Values | Restart Type
    NDB 7.5.0 | bytes | 8192 | 0 - 64K | S

    Controls the size of the circular buffer used for NDB log events within data nodes.

Controlling log messages.  In managing the cluster, it is very important to be able to control the number of log messages sent for various event types to stdout. For each event category, there are 16 possible event levels (numbered 0 through 15). Setting event reporting for a given event category to level 15 means all event reports in that category are sent to stdout; setting it to 0 means that there will be no event reports made in that category.

By default, only the startup message is sent to stdout, with the remaining event reporting level defaults being set to 0. The reason for this is that these messages are also sent