Chapter 3 MySQL Utilities Administrative Tasks

Table of Contents

3.1 Binary Log Operations
3.1.1 How do you change the location of the binary log files?
3.1.2 How do you purge obsolete binary log files safely?
3.2 Database Operations
3.2.1 How do you provision a slave?
3.2.2 How do you make a copy of a database on the same server?
3.2.3 How can you make a copy of a database and change the storage engine?
3.2.4 How do you tell if a table on server A has the same structure as the same table on server B?
3.2.5 How do you synchronize a table on two servers where neither is up-to-date?
3.3 General Operations
3.3.1 How do you know how much space your data uses?
3.3.2 How do you recover the CREATE statement from a damaged or offline server?
3.3.3 How do you create a new user with the same privileges as another user?
3.3.4 Is there an easy way to know what options are used with each utility?
3.3.5 How do you find redundant or duplicate indexes and know which ones to drop?
3.3.6 How do you find all objects that start with a given name prefix?
3.3.7 How do you run a process every night to kill certain connections?
3.4 High Availability Operations
3.4.1 How do you setup and use replication?
3.4.2 How do you add new servers to an existing topology and change the master role?
3.4.3 How do you setup and use automatic failover?
3.4.4 How do you restore the previous master to service after failover?
3.4.5 How do you find all of the slaves attached to a master server?
3.4.6 How Can you determine if data was replicated correctly?
3.4.7 How do you fix errant transactions on the replication topology?
3.5 Server Operations
3.5.1 How can you create a temporary copy (running instance) of a server for testing?
3.5.2 How do you find what MySQL servers are running on a local machine?
3.5.3 How do you setup and use a secure (encrypted) connection between Utilities and a MySQL server?
3.6 Specialized Operations
3.6.1 How do you record only login events in the audit log?
3.6.2 How do you copy or move the audit log?
3.6.3 How can you find the INSERT and UPDATE queries that failed in the audit log?
3.6.4 How do you find connections by the user 'root' in the audit log and show the results in CSV format?

MySQL Utilities provides a set of command-line tools for working with MySQL Servers and databases. MySQL Utilities fully supports MySQL Server versions 5.1 and above. It is also compatible with MySQL Server 5.0, although not every feature may be supported on 5.0. It does not support MySQL Server versions 4.x.

In this section, we present a number of example administrative tasks, each introduced by an example "How do I?" question. Each includes a description of the need, objective, goals, example execution, and a discussion of the specific options and techniques illustrated. Also included are a description of the specific permissions required to execute the utilities demonstrated and tips for using the utility.

These task descriptions are not a substitute for the full manual of each utility, rather, they represent examples of how you can use the utility. For a complete description of the utility and all of its options and arguments, see the manual page for that utility elsewhere in this manual.

3.1 Binary Log Operations

The tasks described in this section relate to those that are performed on or with binary log files.

3.1.1 How do you change the location of the binary log files?

At some point a user might want to change the location where the binary log files are stored (by default in the datadir). There are many reasons for separating the binary log files from the database data. These include fault tolerance, performance, and disk management. For example, you may want to store the binary log files on a different device.

You can use (or change) the --log-bin startup option, but simply changing this option is not enough and is likely to result in errors when starting the MySQL server, especially for replication scenarios. In fact, the existing binary log files also need to be moved to the new location and/or the entries in the respective index file updated in order to preserve any replication topologies that rely on the binary log files.

Objectives

Change the location for the binary log files on an existing MySQL server.

Executing this task manually can be tedious and error prone, since it requires existing binary log files to be moved to the new location and the binary log index file to be correctly updated. Fortunately, the mysqlbinlogmove utility can help us perform this task in an easy way, moving all files and automatically updating the respective index file entries appropriately.

Let's assume that a server is currently running and that it was started with --log-bin=server-bin, which means that the binary log files are created in the datadir with the base filename 'server-bin'. Let's also consider that the datadir is /var/lib/mysql and that the new target directory for the binary log files is /mysql/server/binlogs.

Example Execution

  1. Stop the running MySQL server.

  2. Start the mysqlbinlogmove utility and specify the source directory of the binary log files and the target directory.

    shell> mysqlbinlogmove --binlog-dir=/var/lib/mysql \
              /mysql/server/binlogs
    #
    # Moving bin-log files...
    # - server-bin.000001
    # - server-bin.000002
    # - server-bin.000003
    # - server-bin.000004
    # - server-bin.000005
    # - server-bin.000006
    # - server-bin.000007
    # - server-bin.000008
    # - server-bin.000009
    # - server-bin.000010
    # - server-bin.000011
    #
    #...done.
    #
    

  3. Restart the MySQL server with the new value for the --log-bin option: --log-bin=/mysql/server/binlogs/server-bin.

Discussion

The above example illustrates how to change the binary log directory in an effortless way using the mysqlbinlogmove utility to move existing binary log files to the desired location.

Changing the --log-bin startup option requires the restart of the MySQL server. Moreover, the server also needs to be stopped in order to move all binary log files correctly, otherwise an error might occur while moving files currently in use by the server. In this case, to relocate all available binary log files with the mysqlbinlogmove utility, we simply need to specify their source directory using the --binlog-dir option and the target directory as an argument.

The binary log files are identified based on the default filename format, i.e. with a base filename ending with '-bin'. If a custom basename is used, not ending with '-bin', then the --bin-log-basename option must be used to specify it.

In the above example, this option is not required because the binary log basename 'server-bin' matches the default format. Similarly, if a custom location or name is used for the binary log index file, it must be specified using the --bin-log-index option. By default, the binary log index file is assumed to be located in the specified source binary log directory and to use the default filename format.

As we can see in the above example, the mysqlbinlogmove utility displays the list of all files that are moved. This helps to confirm that all files were relocated as expected.

Permissions Required

The system user used to execute the utility must have read and write access to the source and destination directories in order to move the binary log files successfully.

Tips and Tricks

By default, the utility only moves binary log files. Use the --log-type option with the value 'relay' to move relay log files, or 'all' to move both binary log and relay log files.

The mysqlbinlogmove utility can also be used to move some binary log files to a different location while the server is still running. For example, suppose you want to archive the binary log files or free some disk space on the server's partition. In this case, the --server option should be used instead of the --binlog-dir option used in the above example. The utility ensures that the binary log files that might be in use by the server (those with the highest sequence numbers) are not moved.
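
As a minimal sketch of that scenario (the connection values and the archive path are placeholders), moving the inactive files from a running server might look like the following; add --log-type=all if relay log files should be moved as well, per the previous tip:

    shell> mysqlbinlogmove --server=root:pass@localhost:3306 \
              /mnt/archive/binlogs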

The utility also provides options to filter the files to relocate based on their sequence number using the --sequence option, or their last modification date using the --modified-before option.
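
For example, assuming the same placeholder connection as above, the following illustrative invocations move only the files with the listed sequence numbers, or only the files last modified before the given date:

    shell> mysqlbinlogmove --server=root:pass@localhost:3306 \
              --sequence=3,5-7 /mnt/archive/binlogs
    shell> mysqlbinlogmove --server=root:pass@localhost:3306 \
              --modified-before=2016-01-01 /mnt/archive/binlogs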

3.1.2 How do you purge obsolete binary log files safely?

Over time, MySQL servers may generate a significant number of binary log files that consume considerable hard drive space, which is a good reason to delete them. In a replication scenario, it is necessary to know which binary log files can be deleted without breaking replication (you can safely archive or delete those that are no longer in use), since replication relies on the existence of the binary log files.

Without GTIDs, users need to check each slave to determine the latest master binary log file and position it has read. This information is then used to determine which of the binary logs on the master may be removed, that is, which files have been read by all of the slaves. If GTIDs are enabled, we would instead need to know the list of GTIDs read on each slave in order to determine which binary log files are no longer in use.

Objectives

The goal is to purge all unnecessary binary log files on the MySQL server in a replication topology.

Another important goal is to automate the operation. Executing this task manually can be tedious and error prone, since it requires getting data from and verifying each slave. The more slaves, the more complicated this becomes.

Fortunately, the mysqlbinlogpurge utility can help us perform this task in an easy and safe manner by automatically determining, from each slave, which files are no longer needed and purging them on the master.

Let's assume that a master server is currently running on port 13001 and a few slaves are connected.

Example Execution

  1. Run the mysqlbinlogpurge utility, specifying the master connection string with the --master option and either the --slaves option to indicate each slave connection string, or the --discover-slaves-login option to query the master for all registered slaves using the specified user name and password, in order to determine which binary log files can be purged.

    shell> mysqlbinlogpurge --master=root:root@localhost:13001 \
              --discover-slaves-login=rpl:rpl
    # Discovering slaves for master at localhost:13001
    # Discovering slave at localhost:13002
    # Found slave: localhost:13002
    # Discovering slave at localhost:13003
    # Found slave: localhost:13003
    # Discovering slave at localhost:13004
    # Found slave: localhost:13004
    # Latest binlog file replicated by all slaves: mysql-bin.000005
    # Purging binary logs prior to 'mysql-bin.000006'
    

Discussion

The previous example illustrates how to purge the binary log files from a master in a replication scenario.

We used the --master option for the master server, but for the slaves we provided the --discover-slaves-login option. The utility used the specified information to determine the available binary log files on the master and, for each slave, to verify the latest binary log file that has been read. It then purged every binary log file that is no longer required by any of the slaves.

In the example, all binary log files that the utility determined were not required by any of the slaves were purged. The --binlog option can be used to specify the first binary log file to keep (that is, not to purge) among those no longer required. In that case, the binary log files that remain on the master range from the indicated file to the currently active binary log file.
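
For instance, to keep mysql-bin.000004 and all later files while purging the older ones (the file name is illustrative), the command from the example could be extended as follows:

    shell> mysqlbinlogpurge --master=root:root@localhost:13001 \
              --discover-slaves-login=rpl:rpl --binlog=mysql-bin.000004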

As we can see in the example, the mysqlbinlogpurge utility displays the latest binary log file replicated by all slaves. If you want the utility to also display the current binary log file being read by each slave's I/O thread, use the --verbose option.

Permissions Required

The user requires the SUPER and REPLICATION SLAVE privileges to purge the binary log files.

Tips and Tricks

The --dry-run option can be used to display only the latest binary log file replicated by all the slaves without actually purging any binary log files. Use it along with -vv to display even more information, such as the status of the SQL and I/O threads of each slave.
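
A sketch of such a dry run against the same example topology:

    shell> mysqlbinlogpurge --master=root:root@localhost:13001 \
              --discover-slaves-login=rpl:rpl --dry-run -vv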

3.2 Database Operations

The tasks described in this section relate to those that are performed on or with one or more databases.

3.2.1 How do you provision a slave?

When working with replication, one of the most frequent maintenance tasks is adding a new slave for scale out. Although adding a new slave has been simplified with utilities like mysqlreplicate, provisioning the slave (copying data and getting replication started properly) can be a challenge (or at least tedious) if done manually.

Fortunately, we have two utilities - mysqldbexport and mysqldbimport - that have been designed to work with replication so that when the export is generated, you can include the proper replication control statements in the output stream.

Objectives

Perform slave provisioning using mysqldbexport and mysqldbimport.

Example Execution

shell> mysqldbexport --server=root:root@localhost:13001 --all --export=both --rpl=master --rpl-user=rpl:rpl > data.sql
shell> mysqldbimport --server=root:root@localhost:13002 data.sql

# Source on localhost: ... connected.
# Importing definitions from data.sql.
ERROR: The import operation contains GTID statements that require the global gtid_executed
system variable on the target to be empty (no value). The gtid_executed value must be reset
by issuing a RESET MASTER command on the target prior to attempting the import operation.
Once the global gtid_executed value is cleared, you may retry the import.

shell> mysql -uroot -proot -h 127.0.0.1 --port=13002 -e "RESET MASTER"
shell> mysqldbimport --server=root:root@localhost:13002 data.sql

# Source on localhost: ... connected.
# Importing definitions from data.sql.
CAUTION: The following 1 warning messages were included in the import file:
# WARNING: A partial export from a server that has GTIDs enabled will by default include
the GTIDs of all transactions, even those that changed suppressed parts of the database.
If you don't want to generate the GTID statement, use the --skip-gtid option. To export all
databases, use the --all and --export=both options.
#...done.
        

Discussion

There are several operations listed here. The first one we see is the execution of the mysqldbexport utility to create a file that includes an export of all databases as designated with the --all option. We add the '--export=both' option to ensure we include the definitions as well as the data.

We also add the --rpl=master option which instructs mysqldbexport to generate the replication commands with respect to the source server being the master. Lastly, we include the replication user and password to be included in the CHANGE MASTER command.

Next, we see an attempt to run the import using mysqldbimport but we see there is an error. The reason for the error is that the mysqldbimport utility detected a possible problem on the slave whereby there were global transaction identifiers (GTIDs) recorded from the master. You can see this situation if you set up replication prior to running the import. The way to resolve the problem is to run the RESET MASTER command on the slave as shown in the next operation.

We then see a rerun of the import and in this case it succeeds. We see a warning that is issued any time there are replication commands detected in the input stream whenever GTIDs are enabled.

Permissions Required

The user used to read data from the master must have the SELECT privilege on all databases exported. The user on the slave must have the SUPER privilege to start replication.

Tips and Tricks

The warning issued during the import concerning GTIDs is to ensure you are aware that the process for gathering the proper GTIDs to execute on the slave includes transactions from all databases. Thus, if you ran a partial export that includes the replication commands and you have GTIDs enabled, you should use the --skip-rpl option to skip the replication commands and restart replication manually.

Should your data be large enough to make the use of mysqldbexport impractical, you can use mysqldbexport to generate the correct replication commands anyway by using the --export=definitions option. This generates the SQL statements for the objects but not the data. You can then use the replication commands generated with your own backup and restore tools.
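
For example, a definitions-only export that still emits the replication commands might look like the following (the connection values and output file name are placeholders based on the earlier example):

    shell> mysqldbexport --server=root:root@localhost:13001 --all \
              --export=definitions --rpl=master --rpl-user=rpl:rpl > definitions.sql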

You can use the option --rpl=slave to generate an output stream that considers the source server a slave and uses the source server's master settings for generating the CHANGE MASTER command.
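
As a sketch, exporting from an existing slave (assumed here to run on port 13002) so that the output directs new slaves to that slave's master:

    shell> mysqldbexport --server=root:root@localhost:13002 --all \
              --export=both --rpl=slave --rpl-user=rpl:rpl > data.sql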

If you need to copy users, you can use the mysqluserclone utility to copy user accounts.

3.2.2 How do you make a copy of a database on the same server?

If you are working with a database and want to experiment with changes to objects or data either from direct manipulation (SQL commands) or as a result of interaction with an application, it is prudent to always have a copy to fall back to if something should go wrong.

Naturally, a full backup is key for any production server but what if you just want to do something as a test or as a prototype? Sure, you can restore from your backup when the test is complete but who has the time for that? Why not just make a copy of the database in question and use it in the experiment/test?

Objectives

The goal is to make a copy of a database and rename it to another name. We want to do this on a single database server without resorting to messy file copies and/or stopping the server.

In this case, we want to copy the world database in its entirety and rename the copy to world_clone.

The utility of choice here is named mysqldbcopy and it is capable of copying databases from one server to another or within the same server. The following is an example of using the utility.

Example Execution

shell> mysqldbcopy --source=root:root@localhost \
          --destination=root:root@localhost world:world_clone
# Source on localhost: ... connected.
# Destination on localhost: ... connected.
# Copying database world renamed as world_clone
# Copying TABLE world.city
# Copying TABLE world.country
# Copying TABLE world.countrylanguage
# Copying data for TABLE world.city
# Copying data for TABLE world.country
# Copying data for TABLE world.countrylanguage
#...done.

shell> mysql -uroot -p -e "SHOW DATABASES"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| employees          |
| mysql              |
| world              |
| world_clone        |
+--------------------+
        

Discussion

Notice we specified the source of the database we wanted to copy as well as the destination. In this case, they are the same server. You must specify it this way so that it is clear we are operating on the same server.

Notice how we specified the new name. We used the old_name:new_name syntax. You can do this for as many databases as you want to copy. That's right - you can copy multiple databases with a single command renaming each along the way.

To copy a database without renaming it (if the destination is a different server), you can omit the :new_name portion.
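
For example, copying the world database to a second server without renaming it (the destination host is a placeholder) simply omits the :new_name portion:

    shell> mysqldbcopy --source=root:root@localhost:3306 \
              --destination=root:root@otherhost:3306 world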

Permissions Required

The user must have the SELECT privilege for the database(s) on the source server and the CREATE, INSERT, and UPDATE privileges on the destination server.

Tips and Tricks

You can copy all of the databases on a source server to the destination by using the --all option, although this option does not permit rename actions. To rename, you must specify the databases one at a time using the old_name:new_name syntax.

You can specify certain objects to exclude (skip) in the copy. Use the --skip option to omit specific types of objects. For example, you may want to exclude copying of triggers, procedures, and functions. In this case, use the option '--skip=TRIGGERS,PROCEDURES,FUNCTIONS'. The values are case-insensitive and written in uppercase for emphasis.

The copy is replication and GTID aware and takes actions to preserve the binary log events during the copy.

You can set the locking type with the --locking option. Possible values are: no-locks (do not use any table locks), lock-all (use table locks but no transaction and no consistent read), and snapshot (the default; consistent read using a single transaction).
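
A sketch combining the two previous tips (the destination database name is illustrative):

    shell> mysqldbcopy --source=root:root@localhost \
              --destination=root:root@localhost \
              --skip=triggers,procedures,functions --locking=lock-all \
              world:world_copy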

Risks

Should the copy fail in the middle, the destination databases may be incomplete or inconsistent. Should this occur, drop (delete) the destination database in question, repair the cause of the failure, and restart the copy.

3.2.3 How can you make a copy of a database and change the storage engine?

Sometimes you may need to create a copy of a database but change the storage engine of all its tables to another engine.

For example, if you are migrating your database to InnoDB (a wise choice), you can copy the database to a new database on a new server and change the storage engine to InnoDB for all of the tables. For this, we can use the mysqldbcopy utility.

Objectives

In this example, we want to make a copy of the world database but change the storage engine to InnoDB and rename the database accordingly.

You can cause all tables in the destination databases to use a different storage engine with the --new-storage-engine option.

Example Execution

shell> mysqldbcopy --source=root:root@localhost:3306 \
--destination=root:root@localhost:3307 --new-storage-engine=InnoDB \
world:world_innodb
# Source on localhost: ... connected.
# Destination on localhost: ... connected.
# Copying database world renamed as world_innodb
# Replacing ENGINE=MyISAM with ENGINE=InnoDB for table `world_innodb`.city.
# Copying TABLE world_innodb.city
# Replacing ENGINE=MyISAM with ENGINE=InnoDB for table `world_innodb`.country.
# Copying TABLE world_innodb.country
# Replacing ENGINE=MyISAM with ENGINE=InnoDB for table `world_innodb`.countrylanguage.
# Copying TABLE world_innodb.countrylanguage
# Copying data for TABLE world_innodb.city
# Copying data for TABLE world_innodb.country
# Copying data for TABLE world_innodb.countrylanguage
#...done.

shell> mysql -uroot -p -h 127.0.0.1 --port=3307 -e "SHOW DATABASES"
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| sakila             |
| world              |
| world_innodb       |
+--------------------+

shell> mysql -uroot -p -h 127.0.0.1 --port=3307 -e "SHOW CREATE TABLE world_innodb.countrylanguage\G"
*************************** 1. row ***************************
       Table: countrylanguage
Create Table: CREATE TABLE `countrylanguage` (
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `Language` char(30) NOT NULL DEFAULT '',
  `IsOfficial` enum('T','F') NOT NULL DEFAULT 'F',
  `Percentage` float(4,1) NOT NULL DEFAULT '0.0',
  PRIMARY KEY (`CountryCode`,`Language`),
  KEY `CountryCode` (`CountryCode`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
        

Discussion

Notice here we created a copy of the database and changed all tables in the destination database to use the InnoDB storage engine with the --new-storage-engine option.

We show proof of the change by displaying the CREATE statement for one of the tables on the destination server.

Notice we also renamed the database by using the old_name:new_name syntax.

Permissions Required

The user must have the SELECT privilege for the database(s) on the source server and the CREATE, INSERT, and UPDATE privileges on the destination server.

Tips and Tricks

You can exclude specific objects by using the --exclude option with an SQL pattern expression. For example, to exclude objects whose names start with xy, use '--exclude=xy%'.

You can use REGEXP patterns in the --exclude option by specifying --regexp in addition to the --exclude option.
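
For instance, an illustrative copy that excludes every object whose name starts with 'tmp_' by combining a regular expression with the --regexp option:

    shell> mysqldbcopy --source=root:root@localhost:3306 \
              --destination=root:root@localhost:3307 \
              --exclude=^tmp_ --regexp world:world_innodb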

Risks

Should the copy fail in the middle, the destination databases may be incomplete or inconsistent. Should this occur, drop the destination database in question, repair the cause of the failure, and restart the copy.

If you are changing the storage engine from InnoDB, you may encounter warnings or errors if the tables contain foreign keys and the new storage engine does not support foreign keys.

3.2.4 How do you tell if a table on server A has the same structure as the same table on server B?

Multiple database servers that are kept synchronized manually, or that are compartmentalized for security purposes and by practice kept up-to-date by hand, are prone to unintentional (and sometimes intentional) divergence.

For example, you may maintain a production server and a development server. The development server may have the same databases with the same structures as the production server (but maybe not the same data). However, the natural course of development, administrative tasks, and maintenance can sometimes leave the development server behind.

When this happens, you need to have a way to quickly check the schema for a table on the production server to see if the development server has the same structure. The utility of choice for this operation is mysqldiff.

Objectives

The goal is to compare the schema of a table on one server with the same table on another server and show how they differ.

Example Execution

shell> mysqldiff --server1=root:root@localhost \
--server2=root:root@localhost:3307 world.city:world.city --changes-for=server2
# server1 on localhost: ... connected.
# server2 on localhost: ... connected.
# Comparing world.city to world.city                               [FAIL]
# Object definitions differ. (--changes-for=server2)
#

--- world.city
+++ world.city
@@ -4,6 +4,7 @@
   `CountryCode` char(3) NOT NULL DEFAULT '',
   `District` char(20) NOT NULL DEFAULT '',
   `Population` int(11) NOT NULL DEFAULT '0',
+  `Climate` enum('tropical','dry','mild','continental','polar') DEFAULT NULL,
   PRIMARY KEY (`ID`),
   KEY `CountryCode` (`CountryCode`),
   CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `Country` (`Code`)
Compare failed. One or more differences found.
        

Discussion

Notice to accomplish this task, we simply specified each server with --server1 and --server2 then specified the database objects to compare with the db.object:db.object syntax.

Permissions Required

The user must have SELECT privileges for both objects on both servers as well as SELECT on the mysql database.

Tips and Tricks

You can set the direction of the compare by using the --changes-for option. For example, to see the changes for server1 as the target, use '--changes-for=server1'.
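
For example, the comparison above can be repeated with the direction reversed so that the differences are expressed as changes to be made on server1:

    shell> mysqldiff --server1=root:root@localhost \
              --server2=root:root@localhost:3307 \
              world.city:world.city --changes-for=server1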

3.2.5 How do you synchronize a table on two servers where neither is up-to-date?

When working with servers that are used in different networks, are compartmentalized, are simply kept redundant manually by intent (they do not use replication), or perhaps have been through some crisis, you may encounter a situation where a table (or an entire database) diverges.

We don't simply want to know which rows differ, rather, we need to know the SQL statements needed to bring the tables into synchronization. Furthermore, we aren't sure which table is most out of date so we'd like to see the transformation statements for both directions.

In this case, it would be very helpful to know exactly how the tables differ. For this, we use the mysqldbcompare utility.

Objectives

The goal is to generate the SQL transformation statements to synchronize the tables.

Example Execution

shell> mysqldbcompare --server1=root:root@localhost:13001 --server2=root:root@localhost:13002 \
          menagerie -a --difftype=SQL --show-reverse --quiet
# Checking databases menagerie on server1 and menagerie on server2
#

#
# Row counts are not the same among `menagerie`.`pet` and `menagerie`.`pet`.
#
# Transformation for --changes-for=server1:
#

DELETE FROM `menagerie`.`pet` WHERE `pet_num` = '10';
DELETE FROM `menagerie`.`pet` WHERE `pet_num` = '12';
INSERT INTO `menagerie`.`pet` (`pet_num`, `name`, `owner`, `species`, `sex`, `birth`, `death`)
  VALUES('11', 'Violet', 'Annette', 'dog', 'f', '2010-10-20', NULL);

#
# Transformation for reverse changes (--changes-for=server2):
#
# DELETE FROM `menagerie`.`pet` WHERE `pet_num` = '11';
# INSERT INTO `menagerie`.`pet` (`pet_num`, `name`, `owner`, `species`, `sex`, `birth`, `death`)
#   VALUES('10', 'JonJon', 'Annette', 'dog', 'm', '2010-10-20', '2012-07-01');
# INSERT INTO `menagerie`.`pet` (`pet_num`, `name`, `owner`, `species`, `sex`, `birth`, `death`)
#   VALUES('12', 'Charlie', 'Annette', 'dog', 'f', '2010-10-20', NULL);
#        

Discussion

In the example above, we connected to two servers and compared the database named menagerie. We enabled the transformation statements using a combination of options as follows.

The --difftype=SQL option instructs the utility to generate the SQL statements.

The --show-reverse option instructs the utility to generate the differences in both directions. That is, from the perspective of server1 as compared to server2, and of server2 as compared to server1. By convention, the second set is commented out should you wish to pipe the output to a consumer.

Lastly, the --quiet option simply turns off the verbosity of print statements that normally occur for communicating progress.

Permissions Required

The user must have the SELECT privilege for the databases on both servers.

Tips and Tricks

You can change the direction using the --changes-for option. For example, '--changes-for=server1' is the default direction and '--changes-for=server2' is the reverse. In the second case, the --show-reverse option displays the perspective of server1 commented out, for convenience and to make it easier to determine which is the alternative direction.
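
For example, repeating the earlier comparison but generating the statements for server2 as the target, with the server1 perspective shown commented out:

    shell> mysqldbcompare --server1=root:root@localhost:13001 \
              --server2=root:root@localhost:13002 \
              menagerie -a --difftype=SQL --show-reverse \
              --changes-for=server2 --quiet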

3.3 General Operations

The tasks described in this section include general tasks such as reporting information about a server and searching for objects or processes on a server.

3.3.1 How do you know how much space your data uses?

When preparing to create a backup or when performing maintenance on a server, it is often the case we need to know how much space is used by our data and the logs the server maintains. Fortunately, there is a utility for that.

Objectives

Show the disk space used by the databases and all logs using the mysqldiskusage utility.

Example Execution

shell> sudo env PYTHONPATH=$PYTHONPATH mysqldiskusage \
--server=root:root@localhost --all
# Source on localhost: ... connected.
# Database totals:
+-----------------+--------------+
| db_name         |       total  |
+-----------------+--------------+
| oltp2           | 829,669      |
| bvm             | 15,129       |
| db1             | 9,895        |
| db2             | 11,035       |
| employees       | 206,117,692  |
| griots          | 14,415       |
| mysql           | 995,722      |
| oltp1           | 177,393      |
| room_temp       | 9,847        |
| sakila          | 791,727      |
| test            | 647,911      |
| test_arduino    | 9,999        |
| welford_kindle  | 72,032       |
| world           | 472,785      |
| world_innodb    | 829,669      |
+-----------------+--------------+

Total database disk usage = 210,175,251 bytes or 200.44 MB

# Log information.
+--------------------+--------------+
| log_name           |        size  |
+--------------------+--------------+
| host123.log        | 957,282,265  |
| host123-slow.log   |     123,647  |
| host123.local.err  | 321,772,803  |
+--------------------+--------------+

Total size of logs = 1,279,178,715 bytes or 1.19 GB

# Binary log information:
Current binary log file = my_log.000287
+----------------+---------+
| log_file       | size    |
+----------------+---------+
| my_log.000285  | 252208  |
| my_log.000286  | 256     |
| my_log.000287  | 3063    |
| my_log.index   | 48      |
+----------------+---------+

Total size of binary logs = 255,575 bytes or 249.58 KB

# Server is not an active slave - no relay log information.
# InnoDB tablespace information:
+--------------+--------------+
| innodb_file  |        size  |
+--------------+--------------+
| ib_logfile0  |   5,242,880  |
| ib_logfile1  |   5,242,880  |
| ibdata1      | 815,792,128  |
| ibdata2      |  52,428,800  |
+--------------+--------------+

Total size of InnoDB files = 889,192,448 bytes or 848.00 MB

InnoDB freespace = 635,437,056 bytes or 606.00 MB

Discussion

To see all of the logs, we use the --all option which shows all logs and the InnoDB disk usage.

Notice we used elevated privileges to allow for reading of all of the files and databases in the data directory. In this case, the data directory is owned by the mysql user and a normal user account does not have read access.

The --all option instructs the utility to include all of the log files and the InnoDB tablespace information in addition to the database totals.

Permissions Required

The user must have permissions to read the data directory or use an administrator or super user (sudo) account as shown in the example.

Tips and Tricks

You can run mysqldiskusage without privileges to read the data directory but in this case you may see an estimate of the disk usage rather than actual bytes used. You may also not be able to see a list of the logs if you run the utility remotely.

3.3.2 How do you recover the CREATE statement from a damaged or offline server?

When things go wrong badly enough that your server is down or cannot be restarted, but you can still access the data on disk, you may find yourself faced with a number of complex recovery tasks.

One of those is the need to discover the structure of a particular table or set of tables. Perhaps this is needed for an emergency recovery, a redeployment, or setup for a forensic investigation. Whatever the case, without a running MySQL server it is not possible to know the structure of a table unless you keep meticulous notes and/or use some form of high availability (redundancy) or source control for your database schemas.

Fortunately, there is a utility for situations like this. The mysqlfrm utility can be used to discover the structure of a table directly from the .frm files.

Objectives

With a downed or offline server, discover the structure of a table. More specifically, generate the CREATE TABLE SQL command.

Example Execution

shell> sudo env PYTHONPATH=$PYTHONPATH mysqlfrm --basedir=/usr/local/mysql \
          --port=3333 --user=user /usr/local/mysql/data/welford_kindle/books.frm

# Spawning server with --user=user.
# Starting the spawned server on port 3333 ... done.
# Reading .frm files
#
# Reading the books.frm file.
#
    # CREATE statement for /usr/local/mysql/data/welford_kindle/books.frm:
#

CREATE TABLE `welford_kindle`.`books` (
  `ISBN` char(32) NOT NULL PRIMARY KEY,
  `title` char(128) DEFAULT NULL,
  `purchase_date` date DEFAULT NULL,
  `cost` float(10,2) DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1

#...done.

Discussion

For this example, we used three required parameters: the base directory for the offline server (--basedir), a new port to use for the spawned server (--port), and a user name to use to run the spawned server (--user). The latter is necessary since we must launch the mysqlfrm utility as root (sudo) in order to be able to read (copy) files from the protected data directory of the host server.

The --port option is always required for running the utility in default mode (it is not needed for diagnostic mode). You must supply a valid unused port. The utility checks to see if the port is in use and if so produces an error. The port is used to spawn a temporary instance of a MySQL server in order to attempt to recover the .frm file. This instance is shut down at the end of the process and is not used for diagnostic mode.

We use the --basedir option instead of the --server option because we were faced with a situation where the original server was offline (down). Note that you can use the --basedir option for a running server if you do not want the utility to connect to the original server in any way.

Permissions Required

The permissions for using mysqlfrm vary and depend entirely on how you use it. If you use the utility to read .frm files in a protected folder like the example above (in either mode), you must have the ability to run the spawned server with privileges that allow you to read the protected files. For example, you could use a user account that has root-level privileges.

If you use the utility with a server connection, the user you use to connect must have the ability to read system variables at a minimum including read access to the mysql database.

You should never use the root user to spawn the server nor should you use the mysql user when spawning the server or running the utility.

Tips and Tricks

The utility is designed to work on the host where the .frm files reside. It does not permit connecting to a remote host to read .frm files.

If something goes wrong during the spawning of the server, use the verbosity option three times (-vvv) to turn on maximum depth debug statements. This ensures you see all of the messages from the start of the spawned server from bootstrap onward. Look for errors in these statements as to why the spawned server did not start.

If you do not want to permit the utility to launch a temporary instance of a MySQL server, you should use the diagnostic mode instead. However, the diagnostic mode may not recover all of the options for a table.
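
A sketch of diagnostic mode using the same .frm file as the earlier example; adding the --server option (as recommended under Risks below) lets the utility read character set information from a running server. The connection values are placeholders:

    shell> sudo env PYTHONPATH=$PYTHONPATH mysqlfrm --diagnostic \
              --server=root:root@localhost:3306 \
              /usr/local/mysql/data/welford_kindle/books.frm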

Risks

The utility performs a best effort approximation of the CREATE statement when run in diagnostic mode. As such, if you read a .frm file that uses character sets or collations other than the default and you do not use a --server option to connect to a server to read the character sets, this can result in miscalculated column sizes.

For example, suppose your default character set is latin1, which uses 1 byte per character. Let us also suppose you are attempting to read a .frm file that uses a character set requiring 3 bytes per character. Furthermore, we have no server to connect to. In this case, the column sizes may be off by a factor of 3. For instance, a column defined as col_a char(3) would appear in the output of the mysqlfrm utility as col_a char(9).

To mitigate risks such as this and to produce the most accurate CREATE statement in diagnostic mode, always use the --server option.

3.3.3 How do you create a new user with the same privileges as another user?

The MySQL privilege system permits you to create a set of permissions for each user. Sometimes the set of permissions is complex and may require multiple GRANT statements to put into effect. Other times, the user may acquire privileges over time.

Regardless of how it came about, you may find yourself needing to create a new user that has the same privileges as another user.

Objectives

The goal is to create one or more users whose permissions are identical to an original user on a single server.

Rather than discovering what those privileges are with a SHOW GRANTS FOR statement, copying them into a script, modifying it, then copying and pasting again for each user, we can use a single command to copy one user to a list of new users. We can even set different passwords for each user as we go.

Let's assume we have a user, joe@localhost, who has a long list of permissions. We need to create a clone of his user account for two new users, sally and john. Each of these users requires a new password.

Example Execution

shell> mysqluserclone --source=root@localhost \
          --destination=root@localhost \
          joe@localhost sally:secret1@localhost john:secret2@localhost
# Source on localhost: ... connected.
# Destination on localhost: ... connected.
# Cloning 2 users...
# Cloning joe@localhost to user sally:secret1@localhost
# Cloning joe@localhost to user john:secret2@localhost
# ...done.

Discussion

In the above example, we see the use of the mysqluserclone utility to clone the joe user to two new user accounts.

Notice we used the --source option to connect to the original server and --destination for the same server.

After that, we simply list the user we want to clone and the new users we want to create. In this case we use the format username:password@host to specify the user account name, password (optional), and host.

When the utility finishes, you have two new user accounts that have the same privileges as the original user; joe@localhost.

Permissions Required

On the source server, the user must have the SELECT privilege for the mysql database.

On the destination server, the user must have the global CREATE USER privilege or the INSERT privilege for the mysql database as well as the GRANT OPTION privilege, and the privileges that the original user has (you cannot grant a privilege you do not have yourself).

Tips and Tricks

You can use the --destination option to specify a different server in order to copy a user account to another server.

Use the --dump option with only the --source option to display the GRANT statements for the user instead of executing them.
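
For example, to review the privileges of joe@localhost without creating any new accounts (using the same source connection as the example above):

    shell> mysqluserclone --source=root@localhost --dump joe@localhost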

Use the --include-global-privileges option to include GRANT statements that match the user@host combination. This is useful for copying user accounts from one server to another where there are global privileges in effect.

3.3.4 Is there an easy way to know what options are used with each utility?

There are many utilities and it is not always easy to remember all of the options and parameters associated with each. Sometimes we need to run several utilities using nearly the same options. For example, you may want to run several utilities logging into a particular server. Rather than retype the connection information each time, you would like to save the option value some way and reuse it.

Fortunately, the mysqluc utility does this and more. It is named the MySQL Users' Console and provides type completion for options, utility names, and even user-defined variables for working with common option values. Not only that, it also provides the ability to get help for any utility supported.

Objectives

Discover what utilities exist and find the options for certain utilities.

Run several utilities with the same server using the type completion feature to make using the suite of utilities easier.

Example Execution

Note

In the example below, keystrokes are represented using square brackets. For example, [TAB] indicates the tab key was pressed. Similarly, portions of the commands shown in angle brackets are values you would replace with actual values. For example, <user> indicates you would place the user's login name here.


shell> mysqluc
Launching console ...

Welcome to the MySQL Utilities Client (mysqluc) version 1.6.4
Copyright (c) 2010, 2016 Oracle and/or its affiliates. All rights reserved.
This is a release of dual licensed MySQL Utilities. For the avoidance of
doubt, this particular copy of the software is released
under the version 2 of the GNU General Public License.
MySQL Utilities is brought to you by Oracle.

Type 'help' for a list of commands or press TAB twice for list of utilities.

mysqluc> help
Command                 Description                                      
----------------------  ---------------------------------------------------
help utilities          Display list of all utilities supported.         
help <utility>          Display help for a specific utility.             
show errors             Display errors captured during the execution of the
                        utilities.                                       
clear errors            clear captured errors.                           
show last error         Display the last error captured during the       
                        execution of the utilities                       
help | help commands    Show this list.                                  
exit | quit             Exit the console.                                
set <variable>=<value>  Store a variable for recall in commands.         
show options            Display list of options specified by the user on 
                        launch.                                          
show variables          Display list of variables.                       
<ENTER>                 Press ENTER to execute command.                  
<ESCAPE>                Press ESCAPE to clear the command entry.         
<DOWN>                  Press DOWN to retrieve the previous command.     
<UP>                    Press UP to retrieve the next command in history.
<TAB>                   Press TAB for type completion of utility, option,
                        or variable names.                               
<TAB><TAB>              Press TAB twice for list of matching type        
                        completion (context sensitive).

mysqluc> help utilities
Utility           Description                                            
----------------  ---------------------------------------------------------
mysqlauditadmin    audit log maintenance utility                         
mysqlauditgrep     audit log search utility                              
mysqlbinlogmove    binary log relocate utility                           
mysqlbinlogpurge   purges unnecessary binary log files                   
mysqlbinlogrotate  rotates the active binary log file                    
mysqldbcompare     compare databases for consistency                     
mysqldbcopy        copy databases from one server to another             
mysqldbexport      export metadata and data from databases               
mysqldbimport      import metadata and data from files                   
mysqldiff          compare object definitions among objects where the    
                   difference is how db1.obj1 differs from db2.obj2      
mysqldiskusage     show disk usage for databases                         
mysqlfailover      automatic replication health monitoring and failover  
mysqlfrm           show CREATE TABLE from .frm files                     
mysqlgrants        display grants per object                             
mysqlindexcheck    check for duplicate or redundant indexes              
mysqlmetagrep      search metadata                                       
mysqlprocgrep      search process information                            
mysqlreplicate     establish replication with a master                   
mysqlrpladmin      administration utility for MySQL replication          
mysqlrplcheck      check replication                                     
mysqlrplms         establish multi-source replication                    
mysqlrplshow       show slaves attached to a master                      
mysqlrplsync       replication synchronization checker utility           
mysqlserverclone   start another instance of a running server            
mysqlserverinfo    show server information                               
mysqlslavetrx      skip transactions on slaves                           
mysqluserclone     clone a MySQL user account to one or more new users   

mysqluc> help mysqldb[TAB][TAB]
Utility         Description                                              
--------------  -----------------------------------------------------------
mysqldbcompare  compare databases for consistency                        
mysqldbcopy     copy databases from one server to another                
mysqldbexport   export metadata and data from databases                  
mysqldbimport   import metadata and data from files


mysqluc> mysqlrplshow --m[TAB][TAB]

Option                 Description                                       
---------------------  ----------------------------------------------------
--master=MASTER        connection information for master server in the   
                       form: <user>[:<password>]@<host>[:<port>][:<socket>]
                       or <login-path>[:<port>][:<socket>].              
--max-depth=MAX_DEPTH  limit the traversal to this depth. Valid only with
                       the --recurse option. Valid values are non-negative
                       integers.        

mysqluc> mysqlrplshow --mast[TAB]er=<user>:<password>@localhost:13001

The console has detected that the utility 'mysqlrplshow' ended with an error code.
You can get more information about the error by running the console command 'show last error'.

mysqluc> show last error
Execution of utility: mysqlrplshow --master=<user>:<password>@localhost:13001
returned errorcode: 2 with error message:
Usage: mysqlrplshow.py --master=root@localhost:3306

mysqlrplshow.py: error: The --discover-slaves-login is required to test slave connectivity.

mysqluc> mysqlrplshow --master=<user>:<password>@localhost:13001 \
            --discover-slaves-login=<user>:<password>
# master on localhost: ... connected.
# Finding slaves for master: localhost:13001

# Replication Topology Graph
localhost:13001 (MASTER)
   |
   +--- localhost:13002 - (SLAVE)
   |
   +--- localhost:13003 - (SLAVE)
   |
   +--- localhost:13004 - (SLAVE)
   |
   +--- localhost:13005 - (SLAVE)


mysqluc>   

Discussion

There is a lot going on here in this example! Let's look through the command entries as they occur in the text.

The first command, mysqluc, starts the users' console. Once the console starts, a welcome banner is displayed followed by a simple prompt, mysqluc>. No additional options or parameters are necessary. However, it should be noted that you can pass commands to the console to execute on start. For a complete list of options, see MySQL Users' Console manual page.

The next command, help, shows the help for the users' console itself. As you can see, there are a number of options available. You can set user defined variables, discover the help for other utilities, display the latest error, and see the options used to start the console.
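
As a sketch of those user-defined variables (the connection string is a placeholder), a value can be stored once with set and then reused with the $name syntax in later commands:

    mysqluc> set SERVER=root:root@localhost:13001
    mysqluc> mysqldiskusage --server=$SERVER
    mysqluc> mysqlserverinfo --server=$SERVER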

The help utilities command shows you a list of the available utilities and a short description of each.

Next, we decide we want to get help for one of the database utilities but we do not remember the name. We know it starts with mysqldb but we are not sure of the rest. In this case, if we type mysqldb then hit TAB twice, the users' console shows us a list of all of the utilities that begin with mysqldb.

Now let's say we want to see a graph of our replication topology but we are not sure what the option is for specifying the master. In this case, we type the command to launch the mysqlrplshow utility and type the start of the option, '--m', then press TAB twice. What we see is that there are two options matching that prefix. Notice we also see a short description (help) for each. This is a real time-saving feature of the users' console.

Notice in the next segment we do not have to type the entire name of the option. In this case we typed '--mast[TAB]' which the users' console completed with '--master='. This is tab completion for option names.

Notice the result of the command we entered, mysqlrplshow '--master=user:password@localhost:13001'. There was an error here. We can see the details with the show last error command. The error shows that we failed to provide any connection information for the slaves.

Once we correct that omission, the last command shows how the users' console executes a utility and displays the results in the same stream as the console - much like the mysql client command-line tool.

Permissions Required

There are no special permissions required to run mysqluc; however, you must have the necessary privileges to execute the desired utilities.

3.3.5 How do you find redundant or duplicate indexes and know which ones to drop?

MySQL allows its users to create several indexes that might be the same (duplicate indexes) or partially similar (redundant indexes) in their structure. Although duplicate indexes have no advantages, there are some cases where redundant indexes might be helpful. However, both have disadvantages: duplicate and redundant indexes slow down update and insert operations. As a result, it is usually a good idea to find and remove them.

Doing this manually would be a time-consuming task, especially for big databases or databases with lots of tables, which is why there is a utility to automate this type of task: mysqlindexcheck.

Objectives

Our goal is to use the mysqlindexcheck utility to help us find duplicate and redundant indexes. For that we are going to use the following table as an example:

CREATE TABLE `test_db`.`indexcheck_test`(
       `emp_id` INT(11) NOT NULL,
       `fiscal_number` int(11) NOT NULL,
       `name` VARCHAR(50) NOT NULL,
       `surname` VARCHAR (50) NOT NULL,
       `job_title` VARCHAR (20),
       `hire_date` DATE default NULL,
       `birthday` DATE default NULL,
       PRIMARY KEY (`emp_id`),
       KEY `idx_fnumber`(`fiscal_number`),
       UNIQUE KEY `idx_unifnumber` (`fiscal_number`),
       UNIQUE KEY `idx_uemp_id` (`emp_id`),
       KEY `idx_full_name` (`name`, `surname`),
       KEY `idx_full_name_dup` (`name`, `surname`),
       KEY `idx_name` (`name`),
       KEY `idx_surname` (`surname`),
       KEY `idx_reverse_name` (`surname`,`name`),
       KEY `ìdx_id_name` (`emp_id`, `name`),
       KEY `idx_id_hdate` (`emp_id`, `hire_date`),
       KEY `idx_id_bday` (`emp_id`, `birthday`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8      

Example Execution

shell> mysqlindexcheck --server=test_user@localhost:13010 test_db.indexcheck_test
# Source on localhost: ... connected.
# The following indexes are duplicates or redundant for table test_db.indexcheck_test:
#
CREATE INDEX `idx_uemp_id` ON `test_db`.`indexcheck_test` (`emp_id`) USING BTREE
#     may be redundant or duplicate of:
ALTER TABLE `test_db`.`indexcheck_test` ADD PRIMARY KEY (`emp_id`)
#
CREATE INDEX `idx_fnumber` ON `test_db`.`indexcheck_test` (`fiscal_number`) USING BTREE
#     may be redundant or duplicate of:
CREATE INDEX `idx_unifnumber` ON `test_db`.`indexcheck_test` (`fiscal_number`) USING BTREE
#
CREATE INDEX `idx_full_name_dup` ON `test_db`.`indexcheck_test` (`name`, `surname`) USING BTREE
#     may be redundant or duplicate of:
CREATE INDEX `idx_full_name` ON `test_db`.`indexcheck_test` (`name`, `surname`) USING BTREE
#
CREATE INDEX `idx_name` ON `test_db`.`indexcheck_test` (`name`) USING BTREE
#     may be redundant or duplicate of:
CREATE INDEX `idx_full_name` ON `test_db`.`indexcheck_test` (`name`, `surname`) USING BTREE
#
CREATE INDEX `idx_surname` ON `test_db`.`indexcheck_test` (`surname`) USING BTREE
#     may be redundant or duplicate of:
CREATE INDEX `idx_reverse_name` ON `test_db`.`indexcheck_test` (`surname`, `name`) USING BTREE
#
ALTER TABLE `test_db`.`indexcheck_test` ADD PRIMARY KEY (`emp_id`)
#     may be redundant or duplicate of:
CREATE INDEX `ìdx_id_name` ON `test_db`.`indexcheck_test` (`emp_id`, `name`) USING BTREE
#
CREATE INDEX `idx_id_hdate` ON `test_db`.`indexcheck_test` (`emp_id`, `hire_date`) USING BTREE
#     may be redundant or duplicate of:
CREATE INDEX `ìdx_id_name` ON `test_db`.`indexcheck_test` (`emp_id`, `name`) USING BTREE
#
CREATE INDEX `idx_id_bday` ON `test_db`.`indexcheck_test` (`emp_id`, `birthday`) USING BTREE
#     may be redundant or duplicate of:
CREATE INDEX `ìdx_id_name` ON `test_db`.`indexcheck_test` (`emp_id`, `name`) USING BTREE
# The following indexes for table test_db.indexcheck_test contain the clustered index and
# might be redundant:
#
CREATE INDEX `idx_uemp_id` ON `test_db`.`indexcheck_test` (`emp_id`) USING BTREE
#
CREATE INDEX `ìdx_id_name` ON `test_db`.`indexcheck_test` (`emp_id`, `name`) USING BTREE
#
CREATE INDEX `idx_id_hdate` ON `test_db`.`indexcheck_test` (`emp_id`, `hire_date`) USING BTREE
#
CREATE INDEX `idx_id_bday` ON `test_db`.`indexcheck_test` (`emp_id`, `birthday`) USING BTREE      

Discussion

As we can see, the utility first points out that neither the idx_uemp_id index nor the idx_fnumber index is necessary, and it points out why. The first, idx_uemp_id, is redundant because the primary key already ensures that emp_id values have to be unique. As for idx_fnumber, it is also redundant because of idx_unifnumber, a UNIQUE index which also works as a regular index. Then it points out that idx_full_name_dup is also not necessary. In this case it is a duplicate of the idx_full_name index since it contains the exact same columns in the same order.

Notice that it also indicates that idx_name, idx_surname and even the PRIMARY KEY on emp_id might be redundant. This happens because we are dealing with BTREE indexes and, for this type of index, an index X is considered redundant to an index Y if the columns of X match the first n columns of Y in the same order (that is, X is a leftmost prefix of Y).

Given that we are using the InnoDB engine, it also warns us that `idx_uemp_id`, `ìdx_id_name`, `idx_id_hdate` and `idx_id_bday` might not be needed. This happens because, in InnoDB, secondary indexes implicitly contain the primary key columns for the row (those not already part of the secondary index).

Note

The indexes identified are just indications of redundant and duplicate indexes. They must not be followed blindly because redundant indexes can be useful depending on how you use (query) your tables.

Permissions Required

Regarding the privileges needed to run this utility, the test_user user needs the SELECT privilege on the mysql database as well as on the databases whose tables are being checked.

Tips and Tricks

You can use the -d option to generate the SQL drop statements needed to remove the indexes.

The --stats option can be used alone or together with either --best or --worst to show statistics about the indexes.

Use the --show-indexes option to show each table together with its indexes.
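
For example, hypothetical invocations using these options (the connection string and table name are placeholders for your own values) might look like the following:

shell> mysqlindexcheck --server=test_user:pass@localhost:3306 \
          --show-drops test_db.indexcheck_test
shell> mysqlindexcheck --server=test_user:pass@localhost:3306 \
          --stats --best=5 test_db.indexcheck_test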

3.3.6 How do you find all objects that start with a given name prefix?

One of the challenges for database administrators who manage servers with thousands of objects is the task of finding an object by name. Sometimes all you have to go on is the name of a table or perhaps an obscure reference to a partial name. This can come about through a diagnosis of a problem connection, application, or via an incomplete description from a defect report.

It is also possible that you simply need to check whether certain things exist. For example, suppose among your databases are parts or inventory data and you want to check whether there are any functions or procedures that operate on a column named 'cost'. Moreover, you want to see anything related that has 'cost' as part of its name.

Whatever the case, it would be a big time saver if you could search through all of the database objects and see a list of the objects whose name matches a prefix (or pattern). Fortunately, the mysqlmetagrep utility can get this done.

Objectives

Find all objects whose name begins with a known prefix. More specifically, find any mention of the word 'cost'.

Example Execution

shell> mysqlmetagrep --server=root:root@localhost --body --pattern='%cost%'
+------------------------+--------------+--------------+----------+-------------+------------------+
| Connection             | Object Type  | Object Name  | Database | Field Type  | Matches          |
+------------------------+--------------+--------------+----------+-------------+------------------+
| root:*@localhost:3306  | FUNCTION     | adjust_cost  | griots   | ROUTINE     | adjust_cost      |
| root:*@localhost:3306  | TABLE        | supplies     | griots   | COLUMN      | cost             |
| root:*@localhost:3306  | TABLE        | film         | sakila   | COLUMN      | replacement_cost |
+------------------------+--------------+--------------+----------+-------------+------------------+

shell> mysql -uroot -proot -e "SHOW CREATE FUNCTION griots.adjust_cost \G"
*************************** 1. row ***************************
            Function: adjust_cost
            sql_mode:
     Create Function: CREATE DEFINER=`root`@`localhost` FUNCTION `adjust_cost`(cost double)
     RETURNS double DETERMINISTIC
return cost * 1.10
character_set_client: latin1
collation_connection: latin1_swedish_ci
  Database Collation: latin1_swedish_ci      

Discussion

In this example, we see the use of the database pattern '%cost%' to find objects that have 'cost' anywhere in their name. We also see the use of the --body option, which instructs the utility to look inside the bodies of procedures and functions. As you can see, this can be very handy for locating routines that manipulate particular data.

Notice that once we found a routine that mentions 'cost', we can examine its body via the SHOW CREATE FUNCTION statement to see just how it uses the column 'cost'. In this case, we see someone has written a function to adjust the cost by 10%.

Therefore, not only can you find objects that have anything named 'cost', you can also discover any hidden logic that may operate on something named 'cost'.

Permissions Required

The user must have the SELECT privilege on the mysql database.

Tips and Tricks

If you are familiar with using regular expressions, you can use the --regexp option to use regular expressions instead of database patterns. For example, the search above could be expressed with the pattern --pattern='^.*cost.*' together with the --regexp option.
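
For instance, a sketch of the full command in regular-expression mode (same connection values as the earlier example) might be:

shell> mysqlmetagrep --server=root:root@localhost --body \
          --regexp --pattern='^.*cost.*'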

3.3.7 How do you run a process every night to kill certain connections?

Some database administrators use nightly routines to perform maintenance on their databases or servers. Sometimes these routines can be blocked by long running queries or applications that hang onto locks for longer than expected.

Naturally, priority is given to the application, and maintenance routines are often canceled rather than allowed to interfere with an application. If you subscribe to this notion and you have a routine that is still blocked, or that for some reason has not completed by a certain time, you need a quick way to generate an event that kills the connections involved. This is where the mysqlprocgrep utility can help.

Objectives

The objective is to generate an event that kills all connections for a given user login ('msaladin'), but only if the connection is trying to run a custom administration script named 'my_admin_thingy'.

Example Execution

shell> mysqlprocgrep --sql-body \
          --match-command='my_admin_thingy%' --match-user='msaladin%'  --kill-connection
DECLARE kill_done INT;
DECLARE kill_cursor CURSOR FOR
  SELECT
        Id, User, Host, Db, Command, Time, State, Info
      FROM
        INFORMATION_SCHEMA.PROCESSLIST
      WHERE
          COMMAND LIKE 'my_admin_thingy%'
        AND
          USER LIKE 'msaladin%'
OPEN kill_cursor;
BEGIN
   DECLARE id BIGINT;
   DECLARE EXIT HANDLER FOR NOT FOUND SET kill_done = 1;
   kill_loop: LOOP
      FETCH kill_cursor INTO id;
      KILL CONNECTION id;
   END LOOP kill_loop;
END;
CLOSE kill_cursor;      

Discussion

Notice in the example above, we did not connect to any server to get this information. That is one of the great things about this utility - you can generate all manner of SQL statements for finding processes and try them out on a test system before incorporating them into your events, triggers, and routines.

We specified the user with the --match-user option using a wildcard in case the user is logged in from a different system. We also specified the name of the maintenance routine in the same manner in case it gets renamed with a version number or some such.

The output of this utility then is the SQL statement we need to use to find and kill the connections that meet these criteria. Armed with this, we can make a procedure we can call from an event and execute the SQL at a precise time every day.
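
As a rough sketch of that last step (the procedure and event names are hypothetical, the schedule is only illustrative, and the event scheduler must be enabled), the generated SQL could be wrapped up in the mysql client roughly like this:

mysql> DELIMITER //
mysql> CREATE PROCEDURE kill_admin_connections()
       BEGIN
         -- paste the --sql-body output generated above here
       END //
mysql> CREATE EVENT nightly_kill_admin
       ON SCHEDULE EVERY 1 DAY STARTS '2013-07-27 01:00:00'
       DO CALL kill_admin_connections() //
mysql> DELIMITER ;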

Permissions Required

The user must have the SELECT permission on the mysql database.

Tips and Tricks

If you are familiar with using regular expressions, you can use the --regexp option to use regular expressions instead of database patterns.

3.4 High Availability Operations

The tasks described in this section cover replication and high availability operations, from general tasks to specific ones such as automatic failover.

3.4.1 How do you setup and use replication?

MySQL has built-in support for several types of replication. Replication is usually employed to increase the performance and/or the fault tolerance of the server and, by extension, the application. However, setting up replication can be a somewhat complicated and error-prone process. Fortunately, MySQL Utilities provides tools that can help simplify and even automate several replication-related tasks.

Consider a scenario where replication is used for scalability, that is, to increase performance. Imagine an online shopping service. The shop started out small, so a single server was enough to handle all the requests; now it has become quite popular and, as a result, that single server can no longer keep up. Being an online store, most of the operations are read operations (checking existing products, reviews, stock availability, and so on).

Objectives

Our goal is to use replication to improve the throughput of the service by adding more servers, which become replicas of the already existing server. These replicas allow scaling out the service by taking over all of the read requests, leaving the old server (now called the master) in charge of the writes. Rather than doing everything "by hand" with the mysql command line, we set up this replication scenario using a single utility, mysqlreplicate, which does most of the hard work for us. We then check the result using the mysqlrplcheck utility.

Let us assume the existing server, Server1, is running on port 13001 on the local machine with IP 192.168.1.1, and that we want to add two new servers: Server2 running on 192.168.1.2:13001 and Server3 running on 192.168.1.3:3306.

Example Execution

shell> mysqlreplicate --master=m_account@192.168.1.1:13001 \
--slave=slave_acc1@192.168.1.2:13001 --rpl-user=repl:slavepass -b
# master on 192.168.1.1: ... connected.
# slave on 192.168.1.2: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.

shell> mysqlreplicate --master=m_account@192.168.1.1:13001 \
          --slave=slave_acc2@192.168.1.3:3306 --rpl-user=repl:slavepass -b
# master on 192.168.1.1: ... connected.
# slave on 192.168.1.3: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.

shell> mysqlrplcheck --master=m_account@192.168.1.1:13001 \
          --slave=slave_acc1@192.168.1.2:13001

# master on 192.168.1.1: ... connected.
# slave on 192.168.1.2: ... connected.
Test Description                                                     Status
---------------------------------------------------------------------------
Checking for binary logging on master                                [pass]
Are there binlog exceptions?                                         [pass]
Replication user exists?                                             [pass]
Checking server_id values                                            [pass]
Checking server_uuid values                                          [pass]
Is slave connected to master?                                        [pass]
Check master information file                                        [pass]
Checking InnoDB compatibility                                        [pass]
Checking storage engines compatibility                               [pass]
Checking lower_case_table_names settings                             [pass]
Checking slave delay (seconds behind master)                         [FAIL]

Slave is NNN seconds behind master.

# ...done.

shell> mysqlrplcheck --master=m_account@192.168.1.1:13001 \
          --slave=slave_acc2@192.168.1.3:3306

# master on 192.168.1.1: ... connected.
# slave on 192.168.1.3: ... connected.
Test Description                                                     Status
---------------------------------------------------------------------------
Checking for binary logging on master                                [pass]
Are there binlog exceptions?                                         [pass]
Replication user exists?                                             [pass]
Checking server_id values                                            [pass]
Checking server_uuid values                                          [pass]
Is slave connected to master?                                        [pass]
Check master information file                                        [pass]
Checking InnoDB compatibility                                        [pass]
Checking storage engines compatibility                               [pass]
Checking lower_case_table_names settings                             [pass]
Checking slave delay (seconds behind master)                         [FAIL]

Slave is N seconds behind master.

# ...done.

Discussion

In the above example we made use of the mysqlreplicate utility to set up a single-tier replication topology, where the existing server is now the master for the two new servers, which act as slaves. Notice how we used the address of the old existing server in the --master option and the addresses of the new servers in the --slave option. Also notice the use of the -b flag; it makes replication start from the first event recorded in the master's binary log.

Also notice how we used the mysqlrplcheck utility to check the health of the replication. In this case, the failing "Checking slave delay" test is expected, since the slaves are still catching up with the master. When the slaves have read and applied all the transactions from the master's binary log, that test passes. Also, if a slave were not properly configured and pointing to the specified master, the "Is slave connected to master?" test would notify us of that with a FAIL or WARN status.

Permissions Required

The m_account user needs the following privileges for mysqlreplicate: the SELECT and INSERT privileges on the mysql database, plus REPLICATION SLAVE, REPLICATION CLIENT, and GRANT OPTION. As for the slave_acc users, they need the SUPER privilege. The repl user, used as the argument for the --rpl-user option, is either created automatically or, if it already exists, needs the REPLICATION SLAVE privilege.

Also, when using GTIDs, the slave_acc users must have the SELECT privilege on the mysql database in order to run the mysqlrplcheck utility successfully.

Tips and Tricks

In the mysqlreplicate utility we could also have used the --test-db option, which creates a dummy database to test the replication setup. However, mysqlrplcheck provides more detailed information in that regard.

As previously stated, the -b option tells the utility to start replication from the first event recorded in the master's binary log. Omitting this flag, in turn, makes the slaves replicate only what is stored in the master's binary log from the present moment onward.

Furthermore, using the --master-log-file and --master-log-pos options it is possible to specify respectively the master log file and the master log position from which the slave starts its replication process.

The -p flag can be used to ensure that replication is set up only if the storage engines match on both the master and the slave.

Regarding the mysqlrplcheck utility, we can use the -s option to check the output of the SHOW SLAVE STATUS statement. This can be useful, for instance, to check what might be causing the "Is slave connected to master?" test to fail. We can also use the --master-info-file option to specify the name of the master information file to read.

Lastly, we can use the --verbose option in order to get more information about what is happening "under the hood".
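
For instance, a sketch of a more detailed check that combines these options (connection values as in the example above) might be:

shell> mysqlrplcheck --master=m_account@192.168.1.1:13001 \
          --slave=slave_acc1@192.168.1.2:13001 -s --verbose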

3.4.2 How do you add new servers to an existing topology and change the master role?

We examine a scenario similar to the previous one, where we want to make one of the two newly added slaves the new master server (perhaps because it has better hardware and is faster).

Objectives

Our goal in this example is to create a replication configuration with three servers, two new ones and an existing one. We want to replicate all of the information, but make one of the new servers the master server.

Like the previous example, let's assume that the existing server, Server1, is running on port 13001 on the local machine with IP 192.168.1.1, and that the two new machines with MySQL server instances are Server2 running on 192.168.1.2:13001 and Server3 running on 192.168.1.3:3306. We want to make Server2 the new master.

Example Execution

shell> mysqlreplicate --master=m_account@192.168.1.1:13001 \
          --slave=slave_acc1@192.168.1.2:13001 --rpl-user=repl:slavepass -b
# master on 192.168.1.1: ... connected.
# slave on 192.168.1.2: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.

shell> mysqlreplicate --master=m_account@192.168.1.1:13001 \
          --slave=slave_acc2@192.168.1.3:3306 --rpl-user=repl:slavepass -b
# master on 192.168.1.1: ... connected.
# slave on 192.168.1.3: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.

shell> mysqlrpladmin --master=m_account@192.168.1.1:13001 \
          --slaves=slave_acc1@192.168.1.2:13001,slave_acc2@192.168.1.3:3306 health

# Checking privileges.
#
# Replication Topology Health:
+--------------+--------+---------+--------+------------+------------------------------+
| host         | port   | role    | state  | gtid_mode  | health                       |
+--------------+--------+---------+--------+------------+------------------------------+
| 192.168.1.1  | 13001  | MASTER  | UP     | ON         | OK                           |
| 192.168.1.2  | 13001  | SLAVE   | UP     | ON         | Slave delay is NNN seconds   |
| 192.168.1.3  | 3306   | SLAVE   | UP     | ON         | Slave delay is NNN seconds   |
+--------------+--------+---------+--------+------------+------------------------------+
# ...done.

shell> mysqlrpladmin --master=m_account@192.168.1.1:13001 \
          --slaves=slave_acc1@192.168.1.2:13001,slave_acc2@192.168.1.3:3306 health

# Checking privileges.
#
# Replication Topology Health:
+--------------+--------+---------+--------+------------+---------+
| host         | port   | role    | state  | gtid_mode  | health  |
+--------------+--------+---------+--------+------------+---------+
| 192.168.1.1  | 13001  | MASTER  | UP     | ON         | OK      |
| 192.168.1.2  | 13001  | SLAVE   | UP     | ON         | OK      |
| 192.168.1.3  | 3306   | SLAVE   | UP     | ON         | OK      |
+--------------+--------+---------+--------+------------+---------+
# ...done.

shell> mysqlrpladmin --master=m_account@192.168.1.1:13001 \
          --slaves=slave_acc1@192.168.1.2:13001,slave_acc2@192.168.1.3:3306 \
          --new-master=slave_acc1@192.168.1.2:13001 --demote-master switchover
# Checking privileges.
# Performing switchover from master at 192.168.1.1:13001 to slave at 192.168.1.2:13001.
# Checking candidate slave prerequisites.
# Checking slaves configuration to master.
# Waiting for slaves to catch up to old master.
# Stopping slaves.
# Performing STOP on all slaves.
# Demoting old master to be a slave to the new master.
# Switching slaves to new master.
# Starting all slaves.
# Performing START on all slaves.
# Checking slaves for errors.
# Switchover complete.
#
# Replication Topology Health:
+--------------+--------+---------+--------+------------+---------+
| host         | port   | role    | state  | gtid_mode  | health  |
+--------------+--------+---------+--------+------------+---------+
| 192.168.1.2  | 13001  | MASTER  | UP     | ON         | OK      |
| 192.168.1.1  | 13001  | SLAVE   | UP     | ON         | OK      |
| 192.168.1.3  | 3306   | SLAVE   | UP     | ON         | OK      |
+--------------+--------+---------+--------+------------+---------+      

Discussion

As with our previous scenario, we used the mysqlreplicate utility to set up a replication topology between the existing server and the two new servers. Notice the use of the -b flag, which makes replication start from the first event recorded in the master's binary log.

After creating the replication topology, we made use of the mysqlrpladmin utility specifying both the master and slave servers and using the health command to check the status of the replication. Since our master server had lots of information, it is normal for the new slaves to take some time to catch up, thus the slave delay message on the health column of the output.

However, if all goes well, after some time the slaves will eventually catch up, and when that happens, the health column shows an OK status.

Once this was established, we used the mysqlrpladmin utility yet again, this time with the switchover command. Using the --new-master option, we specified the server that becomes the new master. We also used the --demote-master option, which turns the old master into a slave. Had we left that option out, the old master would still behave as a master, just without any slaves.

After the switchover, Server2 becomes the master server for both Server1 and Server3 which are now the slaves.

Permissions Required

The m_account user needs the following privileges for mysqlreplicate: the SELECT and INSERT privileges on the mysql database, plus REPLICATION SLAVE, REPLICATION CLIENT, and GRANT OPTION. As for the slave_acc users, they need the SUPER privilege. The repl user, used as the argument for the --rpl-user option, is either created automatically or, if it already exists, needs the REPLICATION SLAVE privilege.

To run the mysqlrpladmin utility with the health command, the m_account user on the master also needs the SUPER privilege.

As for the switchover command, all of the users need the following privileges: SUPER, GRANT OPTION, SELECT, RELOAD, DROP, CREATE, and REPLICATION SLAVE.

Tips and Tricks

We can use the --discover-slaves-login option for mysqlrpladmin in order to detect the slaves automatically instead of manually specifying the slaves.

The mysqlrpladmin utility allows users to specify a script to execute before and after the failover and switchover operations using the --exec-before and --exec-after options, respectively. Note that the script specified with the --exec-after option only runs if the switchover or failover completes successfully.

We can use the mysqlrpladmin utility to start and stop all the slaves with the start/stop commands. Using the stop command only stops servers that are actually slaves of the specified master thus preventing us from stopping unwanted servers.
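
For example, a hypothetical stop operation that discovers the slaves automatically (the login shown is a placeholder) might look like this:

shell> mysqlrpladmin --master=m_account@192.168.1.1:13001 \
          --discover-slaves-login=slave_acc1:pass stop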

3.4.3 How do you setup and use automatic failover?

Once your replication topology is set up, it is important to consider the possibility of failures in order to maintain the high availability level of your system. Many failures, regardless of their cause (network connection issue, hard drive crash, cosmic lightning, etc.), can stop the replication process by making the master no longer accessible to its slaves.

In this type of situation, it is desirable to promote one of the slaves to master while the problem with the old master is resolved. It is better to have an application monitor the replication topology and perform failover automatically, minimizing downtime and keeping replication running smoothly. This is where the mysqlfailover utility shines.

Objectives

The goal is to start the mysqlfailover utility to monitor a replication topology and perform failover automatically when required.

When the current master fails, manually promoting a slave to be the new master can be a very tedious and error-prone task, as all of the remaining slaves have to be redirected to the new master and the new master needs to catch up with all the slaves to make sure that no transactions are lost.

Fortunately, the mysqlfailover utility is capable of executing this full process automatically and in an optimized way.

Let's assume that a replication topology with one master (server1:3311) and four slaves (server2:3312, server3:3313, server4:3314, server5:3315) was previously set up.

Example Execution

Start the mysqlfailover utility (in console mode - default):

shell> mysqlfailover --master=root@server1:3311 \
--slaves=root@server2:3312,root@server3:3313,root@server4:3314,root@server5:3315 \
--log=log.txt --rpl-user=rpl:rpl
NOTE: Log file 'log.txt' does not exist. Will be created.
# Checking privileges.

MySQL Replication Failover Utility
Failover Mode = auto     Next Interval = Fri Jul 26 10:17:52 2013

Master Information
------------------
Binary Log File    Position  Binlog_Do_DB  Binlog_Ignore_DB
master-bin.000001  151                                     

GTID Executed Set
None

Replication Health Status
+----------+-------+---------+--------+------------+---------+
| host     | port  | role    | state  | gtid_mode  | health  |
+----------+-------+---------+--------+------------+---------+
| server1  | 3311  | MASTER  | UP     | ON         | OK      |
| server2  | 3312  | SLAVE   | UP     | ON         | OK      |
| server3  | 3313  | SLAVE   | UP     | ON         | OK      |
| server4  | 3314  | SLAVE   | UP     | ON         | OK      |
| server5  | 3315  | SLAVE   | UP     | ON         | OK      |
+----------+-------+---------+--------+------------+---------+

Q-quit R-refresh H-health G-GTID Lists U-UUIDs L-log entries      

Now imagine that the master crashes or becomes unreachable. After a predefined time interval (by default, 15 seconds), we can observe that the failover process starts automatically:

Failover starting in 'auto' mode...
# Candidate slave server2:3312 will become the new master.
# Checking slaves status (before failover).
# Preparing candidate for failover.
# Creating replication user if it does not exist.
# Stopping slaves.
# Performing STOP on all slaves.
# Switching slaves to new master.
# Disconnecting new master as slave.
# Starting slaves.
# Performing START on all slaves.
# Checking slaves for errors.
# Failover complete.

Failover console will restart in 5 seconds.

[...]

MySQL Replication Failover Utility
Failover Mode = auto     Next Interval = Fri Jul 26 10:25:17 2013

Master Information
------------------
Binary Log File    Position  Binlog_Do_DB  Binlog_Ignore_DB
master-bin.000001  151                                     

GTID Executed Set
None

Replication Health Status
+----------+-------+---------+--------+------------+---------+
| host     | port  | role    | state  | gtid_mode  | health  |
+----------+-------+---------+--------+------------+---------+
| server2  | 3312  | MASTER  | UP     | ON         | OK      |
| server3  | 3313  | SLAVE   | UP     | ON         | OK      |
| server4  | 3314  | SLAVE   | UP     | ON         | OK      |
| server5  | 3315  | SLAVE   | UP     | ON         | OK      |
+----------+-------+---------+--------+------------+---------+


Q-quit R-refresh H-health G-GTID Lists U-UUIDs L-log entries      

Discussion

The above example illustrates how to start the mysqlfailover utility to monitor the health of the replication topology and reconfigure the topology when failover occurs.

To set up this feature, we simply need to specify the master's connection with the --master option, the list of slaves with the --slaves option, and the replication user (login and password) using the --rpl-user option. As an alternative to the --slaves option, you can use the --discover-slaves-login option, specifying a user and password (or login-path) to connect to the slaves. The utility attempts to discover all of the slaves connected to the master using the specified login and password. For the above example, '--discover-slaves-login=root' could be used.

The --discover-slaves-login can be very handy especially if there is a huge number of slaves in the topology, but bear in mind that the explicit specification of slaves is safer and that discovery can fail to find some servers. In particular, it is important to note that in order for slaves to be discovered, they must be started with the '--report-host' and '--report-port' options with the correct values and they must be connected to the master (I/O thread running) otherwise discovery fails.

It is also recommended to use the --log option to specify a file in which to record all events, warnings, and errors. This is useful for keeping a record of what happened, for example, to determine when failover occurred and whether the process completed without errors or warnings.

An important matter to discuss is the order in which servers are selected as candidates for failover. No distinction is made in terms of the number of transactions applied, that is, the most up-to-date slave is not automatically selected to become the new master. The reason is simple: that criterion is non-deterministic, because many circumstances (for example, network load or server maintenance operations) can temporarily influence the performance of a slave and could lead to an incorrect selection of the most appropriate candidate. For example, the slave with the best hardware should, in the long run, be the most appropriate candidate to become the new master, but for some unanticipated reason it might actually have fewer transactions than other servers when the master crashed. Therefore, a more deterministic criterion based on the order in which the servers are specified is used, allowing the user to control the order in which candidates are considered. The first server that meets the required election criteria, consisting of simple sanity checks (server reachable and running with the required options: GTID mode ON and binary logging enabled), is chosen. More specifically, the selection of the new master follows this order: first, sequentially check the list of servers specified by the --candidates option, then the servers listed in the --slaves option, and finally any discovered slaves in no particular order.

Permissions Required

The user must have permissions to configure replication.

Tips and Tricks

In the above example the mysqlfailover utility was started in the default console mode, but it can also be executed as a daemon. For that purpose, use the --daemon option; more specifically, simply add '--daemon=start' to the command line. When mysqlfailover is executed as a daemon, no output is displayed and all of the information is logged to the file specified with the --log option, which is mandatory in this case. To stop the execution of the mysqlfailover daemon, simply invoke the utility using the option '--daemon=stop'. No other option is required to stop the daemon unless a specific pid file (which contains the process PID) was specified with the --pidfile option when starting the daemon; in that case, the same option value is also required to stop it.
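
A minimal sketch of starting and later stopping the utility in daemon mode (reusing the servers from the example above) might be:

shell> mysqlfailover --master=root@server1:3311 \
          --slaves=root@server2:3312,root@server3:3313,root@server4:3314,root@server5:3315 \
          --log=log.txt --rpl-user=rpl:rpl --daemon=start
shell> mysqlfailover --daemon=stop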

Another useful feature is the possibility of running external scripts during the execution of the utility to perform customized actions. The following options can be used to execute different scripts at distinct moments of the mysqlfailover execution:

  • --exec-fail-check specifies a script to run periodically at each predefined interval, instead of the default check (master reachable and alive), to detect the need for failover.

  • --exec-before specifies a script to execute before starting failover.

  • --exec-after specifies a script to execute at the end of the failover process.

  • --exec-post-failover specifies a script to run after completing the failover process (before displaying the health report).

3.4.4 How do you restore the previous master to service after failover?

After a successful failover, it is sometimes necessary to restore the initial topology and promote the crashed server to become the master again (or even promote a new server with different hardware characteristics). Sometimes failover is triggered by a simple network issue (not affecting the health of the initial master server), and after it is resolved, it may be desirable to put the old master back into the replication topology. We can do this with several of the high availability utilities.

Objectives

The goal of this task is simply to replace the new master of a replication topology with the previous one, which might have been demoted as a result of a successful automatic failover execution. It is assumed that the server to be restored as master is healthy and that any previous issue (that triggered failover) has been resolved.

Let's consider the previous topology after failover, now with a new master (server2:3312) and three slaves (server3:3313, server4:3314, server5:3315), and assume that we want to promote the initial server (server1:3311) to master again.

Performing this task manually can be delicate, as one wrong or missing step can lead to errors in the replication topology or even to the loss of transactions. Once more, MySQL Utilities provides precious assistance for this task, in this case requiring the user to follow three simple steps to restore the initial topology, as shown below.

Example Execution

There are several steps involved in solving this problem. We walk through each in turn.

You must first stop running the mysqlfailover utility instance and start the (old) master to be restored, i.e. server1:3311.

Next, set the old master (server1:3311) as a slave of the current new master (server2:3312):

shell> mysqlreplicate --master=root@server2:3312 --slave=root@server1:3311 --rpl-user=rpl:rpl
# master on server2: ... connected.
# slave on server1: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.
        

Next, switchover to the previous master to restore the initial replication topology:

shell> mysqlrpladmin --master=root@server2:3312 \
          --slaves=root@server3:3313,root@server4:3314,root@server5:3315 \
          --rpl-user=rpl:rpl --new-master=root@server1:3311 --demote-master switchover
# Checking privileges.
# Performing switchover from master at server2:3312 to slave at server1:3311.
# Checking candidate slave prerequisites.
# Checking slaves configuration to master.
# Waiting for slaves to catch up to old master.
# Stopping slaves.
# Performing STOP on all slaves.
# Demoting old master to be a slave to the new master.
# Switching slaves to new master.
# Starting all slaves.
# Performing START on all slaves.
# Checking slaves for errors.
# Switchover complete.
#
# Replication Topology Health:
+----------+-------+---------+--------+------------+---------+
| host     | port  | role    | state  | gtid_mode  | health  |
+----------+-------+---------+--------+------------+---------+
| server1  | 3311  | MASTER  | UP     | ON         | OK      |
| server2  | 3312  | SLAVE   | UP     | ON         | OK      |
| server3  | 3313  | SLAVE   | UP     | ON         | OK      |
| server4  | 3314  | SLAVE   | UP     | ON         | OK      |
| server5  | 3315  | SLAVE   | UP     | ON         | OK      |
+----------+-------+---------+--------+------------+---------+
# ...done.        

The initial replication topology is now restored, and mysqlfailover can be restarted as before (but this time using the --force option):

shell> mysqlfailover --master=root@server1:3311 \
          --slaves=root@server2:3312,root@server3:3313,root@server4:3314,root@server5:3315 \
          --log=log.txt --rpl-user=rpl:rpl --force
# Checking privileges.

MySQL Replication Failover Utility
Failover Mode = auto     Next Interval = Sat Jul 27 02:17:12 2013

Master Information
------------------
Binary Log File    Position  Binlog_Do_DB  Binlog_Ignore_DB
master-bin.000002  151

GTID Executed Set
None

Replication Health Status
+----------+-------+---------+--------+------------+---------+
| host     | port  | role    | state  | gtid_mode  | health  |
+----------+-------+---------+--------+------------+---------+
| server1  | 3311  | MASTER  | UP     | ON         | OK      |
| server2  | 3312  | SLAVE   | UP     | ON         | OK      |
| server3  | 3313  | SLAVE   | UP     | ON         | OK      |
| server4  | 3314  | SLAVE   | UP     | ON         | OK      |
| server5  | 3315  | SLAVE   | UP     | ON         | OK      |
+----------+-------+---------+--------+------------+---------+

Q-quit R-refresh H-health G-GTID Lists U-UUIDs L-log entries                

Discussion

The most important step is the execution of the switchover command with the mysqlrpladmin utility. The previous steps can be seen as preparation for switchover. The first step simply makes sure that the server is running and that there is no mysqlfailover instance still running that could affect the correct execution of switchover. The second step sets the old master as a slave of the new master, because the switchover command can only be performed with slaves. This step also allows the old master to catch up with the new master. If many transactions have been performed on the new master, it is recommended to wait a while to let the old master catch up before switchover; otherwise, the switchover command might take longer.

As expected, the execution of the switchover command requires the specification of the current and new master with the --master and --new-master options, as well as the list of slaves in the topology using the --slaves option (there is no need to list the new master). The replication user is specified with the --rpl-user option. In this specific example, the use of the --demote-master option is important, because without it the current master (server2:3312) would not be demoted and set as a slave of the new master (server1:3311).

The mysqlrpladmin utility executes and displays information about all required actions to perform switchover. After completing the switchover process, a health report is displayed that you can use to confirm the successful execution of the command and verify that the topology has changed as expected.

After completing these simple steps, the replication topology is back to its initial structure (before failover) with its old master. Therefore, mysqlfailover is ready to be executed again to monitor the topology and reestablish automatic failover.

Permissions Required

The user must have permissions to configure replication.

Tips and Tricks

It is important to wait for the old master to catch up with the new master in order to ensure that no transactions are lost. Depending on how long the old master was down or inaccessible, it might take a considerable time for it to execute all of the missing transactions. MySQL Utilities provides tools to view the slaves' status, namely the health command of the mysqlrpladmin utility.

An alternative set of steps could have been followed to perform the desired task, using the failover command from mysqlrpladmin instead of switchover. In this case, the old master should be specified in the candidates list using the --candidates option, so that it is chosen as the preferred slave to become the new master (there is no need for the --master, --new-master, and --demote-master options). However, an additional step is required to set the previous master (server2:3312) as a slave of the old master (server1:3311) using the mysqlreplicate utility, because failover does not demote the previous master; it assumes that server is not available. Notice that, unlike switchover, which fails if the server specified by the --new-master option does not meet the requirements to become master, failover chooses another server from the slaves list to become the new master if the one specified by the --candidates option is not suitable. It is important to keep these behavior differences in mind when deciding which command to apply.
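
A rough sketch of this alternative, assuming server1 has already been restarted and re-attached as a slave of server2 (as in the second step above), and treating the connection values as placeholders to verify against your own topology:

shell> mysqlrpladmin \
          --slaves=root@server1:3311,root@server3:3313,root@server4:3314,root@server5:3315 \
          --candidates=root@server1:3311 --rpl-user=rpl:rpl failover
shell> mysqlreplicate --master=root@server1:3311 --slave=root@server2:3312 --rpl-user=rpl:rpl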

The mysqlfailover utility registers its execution on the servers in order to avoid concurrent executions of the utility, which could lead to errors and an inconsistent state during failover. If the utility detects that another instance might be running, it starts in "fail" mode (taking no action when it detects that the master has failed). The mysqlfailover instance registration is cleared when the utility exits, but the clearing process can fail on crashed or inaccessible servers. The --force option overrides the instance execution check, allowing the utility to bypass a registration left behind on a (crashed) old master and start in 'auto' mode.

3.4.5 How do you find all of the slaves attached to a master server?

When you have a topology that has grown over time, with many slaves added from time to time, it may not be so easy to remember which servers are connected as slaves, or even which ones are slaves of a given master.

Most often you want to know the state of those slaves at a glance. Rather than connect to each slave individually, it would be nice to know the state of each slave's threads using a single command.

Objectives

Show a map of the slaves connected to a master, including the state of each slave's threads (IO and SQL). We can do this with a single command using the mysqlrplshow utility.

Example Execution

shell> mysqlrplshow --master=root:root@localhost:13001 \
          --disco=root:root --verbose
# master on localhost: ... connected.
# Finding slaves for master: localhost:13001

# Replication Topology Graph
localhost:13001 (MASTER)
   |
   +--- localhost:13002 [IO: Yes, SQL: Yes] - (SLAVE)
   |
   +--- localhost:13003 [IO: Yes, SQL: Yes] - (SLAVE)
   |
   +--- localhost:13004 [IO: Yes, SQL: Yes] - (SLAVE)
   |
   +--- localhost:13005 [IO: Yes, SQL: Yes] - (SLAVE)      

Discussion

Notice the use of the mysqlrplshow utility. Not only did it show us the slaves attached to the master, it also displayed the state of the IO and SQL threads for each slave.

We used the master server for the --master option but for the slaves, we provided the option --discover-slaves-login which provides the user name and password for the account used to connect to each slave. Without this, we would not be able to determine if the slave is attached (currently) or the state of its threads.

The --discover-slaves-login option applies to all slaves. If you do not have the same user defined on all of your slaves, you can use the --prompt option to prompt for the user and password for each slave.

To get the state of the slave threads, use the --verbose option.

Permissions Required

The user connected to the master must have the REPLICATION SLAVE privilege.

The user specified with the --discover-slaves-login option that logs into each slave must have the REPLICATION CLIENT privilege.

Tips and Tricks

You can also display multiple tiered topologies by providing the --recurse option.
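
For example, a sketch of such a multi-tier display (same connection values as above) might be:

shell> mysqlrplshow --master=root:root@localhost:13001 \
          --discover-slaves-login=root:root --recurse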

Notice in the example we used the option --discover-slaves-login but specified only --disco=. This is a shortcut feature built into every utility. If you type the first N letters of an option, enough to uniquely identify it among the options for that utility, the utility accepts it as if you had typed the entire option name. For example, the full name of the option we used is --discover-slaves-login.

3.4.6 How Can you determine if data was replicated correctly?

Once the replication system is set up and running, it is not uncommon to want to verify that the data is being replicated correctly on the slaves. In normal circumstances, the same data is expected on the master and its slaves (excluding the use of filtering rules). Nevertheless, faults at the data level can introduce inconsistent changes on servers without raising any kind of error. These data inconsistencies can result from bugs, hardware malfunction, human errors, or unauthorized access.

It is desirable to detect these issues, in order to fix them and ultimately prevent them from happening again. Determining the cause of such issues might not be an easy task since it might be caused by byzantine failures at distinct levels. However, the first big step toward a solution to this kind of problem is being able to detect data inconsistency and make sure that the data among the replication servers is synchronized.

Objectives

The goal is to execute the mysqlrplsync utility to detect data consistency issues on an active replication system making sure that the master and its slaves are synchronized.

Executing this task manually on an active system is difficult and sometimes tedious since changes may be continuously happening on all servers (in an asynchronous way) and the same data needs to be compared between servers. Moreover, it can introduce an undesirable and uncontrolled impact on the system performance if you lock the tables or stop replication.

Fortunately, the mysqlrplsync utility allows us to perform this task in an easy and optimized way with a controlled impact on the running system (limiting the execution time of all operations).

Let's assume that a replication topology with one master (server1:3310) and two slaves (server2:3311, server3:3312) was previously setup and it is running without errors.

Example Execution

Start the mysqlrplsync utility, specifying the servers you want to check.

shell> mysqlrplsync --master=user:pass@localhost:3310 \
          --slaves=rpl:pass@localhost:3311,rpl:pass@localhost:3312
#
# GTID differences between Master and Slaves:
# - Slave 'localhost@3311' is 15 transactions behind Master.
# - Slave 'localhost@3312' is 12 transactions behind Master.
#
# Checking data consistency.
#
# Using Master 'localhost@3310' as base server for comparison.
# Checking 'test_rplsync_db' database...
# - Checking 't0' table data...
#   [OK] `test_rplsync_db`.`t0` checksum for server 'localhost@3311'.
#   [OK] `test_rplsync_db`.`t0` checksum for server 'localhost@3312'.
# - Checking 't1' table data...
#   [OK] `test_rplsync_db`.`t1` checksum for server 'localhost@3311'.
#   [OK] `test_rplsync_db`.`t1` checksum for server 'localhost@3312'.
# Checking 'test_db' database...
# - Checking 't0' table data...
#   [OK] `test_db`.`t0` checksum for server 'localhost@3311'.
#   [OK] `test_db`.`t0` checksum for server 'localhost@3312'.
# - Checking 't1' table data...
#   [OK] `test_db`.`t1` checksum for server 'localhost@3311'.
#   [OK] `test_db`.`t1` checksum for server 'localhost@3312'.
#
#...done.
#
# SUMMARY: No data consistency issue found.
#    

Discussion

The above example illustrates how to start the mysqlrplsync utility to check if all data on the specified replication topology is synchronized.

To do this, we simply need to specify the master's connection with the --master option and the list of slaves with the --slaves option. As an alternative to the --slaves option, one can use the --discover-slaves-login option, specifying a user and password (or login-path) to connect to the slaves; the utility then attempts to discover all of the slaves connected to the master using the specified login. For example, '--discover-slaves-login=root:secret' discovers all of the slaves and logs in to each using the 'root' user id and the password 'secret'.

The --discover-slaves-login can be very handy especially if there is a huge number of slaves in the topology, but bear in mind that the explicit specification of slaves is safer and that discovery can fail to find some servers. In particular, it is important to note that in order for slaves to be discovered, they must be started with the '--report-host' and '--report-port' options with appropriate values and they must be correctly connected to the master (IO thread running) otherwise discovery may fail to identify the slave.

In the above example, no data consistency issues were found. In case any data differences are found, each is clearly identified by the '[DIFF]' prefix followed by concise information about where the difference is and what it is. Additionally, at the end the utility displays a summary of the number of issues found.

The utility also allows users to check consistency on the slaves without specifying the master. However, be advised that only checking the slaves does not guarantee that there are no data consistency issues between the master and the slaves. Also keep in mind that the results provided by the utility are valid at the time the checks are actually performed for each table. This is because, in an active system with data continuously changing, inconsistency issues might be introduced immediately after a check is completed.

Permissions Required

The user for the master must have permissions to lock tables, perform the checksum, and get information about the master status. Specifically, the user used to connect to the master requires the following privileges: SUPER or REPLICATION CLIENT, LOCK TABLES and SELECT.

The user for the slaves must have permissions to start/stop the slave, perform the checksum, and get information about the slave status. More specifically, the login user to connect to slaves requires the following privileges: SUPER and SELECT.

Tips and Tricks

In the above example, the mysqlrplsync utility was used to check all of the data on the servers. However, it is possible to check only specific databases and tables. For that purpose, the user need only specify the target databases and tables as arguments when invoking the utility. It is also possible to exclude specific databases and tables from the check using the --exclude option. For example, '--exclude=test_rplsync_db,test_db.t0' excludes the database 'test_rplsync_db' and the table 'test_db.t0' from the check performed by the utility.

The utility provides important options to control the execution time of the checksum queries performed on each table and the waiting time for slaves to reach an established synchronization point, namely: the --checksum-timeout and --rpl-timeout options. A polling process is applied on each slave to periodically check if replication has caught up with the defined sync point (all transactions have been processed).

The periodic interval at which this check is performed can be adjusted with the --interval option. These options are fundamental to controlling the impact of the utility on the replication system, allowing you to limit the execution time of the checksum queries for large tables and the time slaves wait for replication to catch up. When the timeouts defined by those options are reached, the check is skipped. Nevertheless, the user can always execute the utility again later, only for the skipped tables, using higher timeout values.
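
For example, a sketch that combines exclusions and explicit timeouts (the values shown are purely illustrative, not recommendations) might be:

shell> mysqlrplsync --master=user:pass@localhost:3310 \
          --slaves=rpl:pass@localhost:3311,rpl:pass@localhost:3312 \
          --exclude=test_rplsync_db,test_db.t0 \
          --checksum-timeout=10 --rpl-timeout=300 --interval=5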

The utility is flexible enough to be executed separately for different sets of servers, affecting only part of the replication system at a time. For example, consider a heterogeneous system (where slaves have different performance characteristics) with one master 'M' and three slaves 'S1', 'S2' and 'S3'. To minimize the impact on the master, the user can run the utility first for the master 'M' and the fastest slave 'S1', and then run it again only for the slaves 'S1', 'S2' and 'S3'. If no consistency issues are found in the first execution (M = S1) and in the second execution (S1 = S2 = S3), then by transitivity, and due to the inclusion of the same server 'S1' in both checks, it can be said that there are no consistency issues in the topology (M = S1 = S2 = S3) at the time the first check was completed. This kind of execution must be performed sequentially and not concurrently, otherwise the synchronization process of each instance may affect the other and may not work properly.

3.4.7 How do you fix errant transactions on the replication topology?

At some point, when performing maintenance or administration operations or another task that verifies your replication topology, you may discover the existence of errant transactions. Utilities like mysqlfailover and mysqlrpladmin detect errant transactions and issue a warning or error before executing. This is done because errant transactions can lead to an unstable replication topology or introduce errors after a failover or switchover.

What are errant transactions? Errant transactions are transactions applied directly by a client on a slave that do not exist on the other slaves connected to the master. By nature, these transactions should not be replicated, and they can lead to replication errors if the slave that possesses them is promoted to master. In practice, this can happen, for example, if the errant transaction corresponds to a data insert or delete on a table that only exists on that slave. This kind of transaction usually results from a mistake or poor practice, with data being changed directly on the slave without turning off the binary log.

The best way to deal with errant transactions is to avoid them, by making sure that every transaction on a slave, even if needed, for example, to add data for reporting or to execute local administrative commands, is applied with binary logging disabled. See SET sql_log_bin Syntax for more information about how to control logging to the binary log. However, in case errant transactions are found, we still need to be able to deal with them in an easy and quick way, skipping those transactions and preventing them from being replicated if the slave becomes the new master.

Note

Always turn off the binary log when executing queries that change data on a slave. Use SET sql_log_bin = 0 before the queries to turn off the binary log and SET sql_log_bin = 1 after the queries to turn it back on.
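
For example, a local-only change on a slave could be wrapped as follows (the table and values are purely illustrative):

mysql> SET sql_log_bin = 0;
mysql> INSERT INTO report_db.local_stats (day, total) VALUES ('2013-07-26', 42);
mysql> SET sql_log_bin = 1;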

Objectives

The goal is to execute the mysqlslavetrx utility to skip errant transactions on slaves, making sure that those transactions are not replicated if the slave that originated them becomes the new master.

Skipping errant transactions is done by injecting an empty transaction for each corresponding GTID on every slave. This can be a very tedious task when performed manually, especially if many transactions need to be skipped.

Thankfully, the mysqlslavetrx utility allows us to skip multiple transactions on multiple slaves in a single step.

Let's assume that we have three slaves (slave1:3311, slave2:3312, and slave3:3313) and that one of the slaves (slave1:3311) has five errant transactions that need to be skipped on the other slaves. The GTID set of those transactions is ce969d18-7b10-11e4-aaae-606720440b68:1-5.

Example Execution

Execute the mysqlslavetrx utility, specifying the GTID set of the transactions to skip and the target slaves.

shell> mysqlslavetrx --gtid-set=ce969d18-7b10-11e4-aaae-606720440b68:1-5 \
          --slaves=dba:pass@slave2:3312,dba:pass@slave3:3313
WARNING: Using a password on the command line interface can be insecure.
#
# GTID set to be skipped for each server:
# - slave2@3312: ce969d18-7b10-11e4-aaae-606720440b68:1-5
# - slave3@3313: ce969d18-7b10-11e4-aaae-606720440b68:1-5
#
# Injecting empty transactions for 'slave2:3312'...
# Injecting empty transactions for 'slave3:3313'...
#
#...done.
#

Discussion

The above example illustrates how to execute the mysqlslavetrx utility to skip the transactions for the specified GTID set on all given slaves.

To achieve this task, we only need to specify the GTID set for the transactions to be skipped with the --gtid-set option, and the list of connection parameters for the target slaves with the --slaves option.

In the above example, all of the specified GTIDs were skipped on all target slaves by injecting an empty transaction for each of them. However, it might happen that some of the GTIDs cannot be skipped on some slaves. This can happen if a transaction with the same GTID was previously applied on the target slave. The reason lies in the purpose of GTIDs, which is to uniquely identify a transaction; two distinct transactions cannot be applied with the same GTID, otherwise an error is issued. The mysqlslavetrx utility checks at the start which transactions can effectively be skipped on each slave, excluding already executed GTIDs.

Permissions Required

The user for the slaves must have the required permissions to inject an empty transaction for a specific GTID, i.e. to set the gtid_next variable. More specifically, the login user to connect to slaves requires the SUPER privilege.

Tips and Tricks

The mysqlslavetrx utility provides a dry run mode that allows users to verify the GTIDs that would be skipped on each slave without actually injecting empty transactions. The --dryrun option must be specified to use this read-only mode.
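
For instance, a dry run of the earlier example (same GTID set and slaves) might look like this:

shell> mysqlslavetrx --gtid-set=ce969d18-7b10-11e4-aaae-606720440b68:1-5 \
          --slaves=dba:pass@slave2:3312,dba:pass@slave3:3313 --dryrun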

3.5 Server Operations

The tasks described in this section are those used to perform server-wide operations such as cloning a server instance, determining what MySQL servers are running, etc.

3.5.1 How can you create a temporary copy (running instance) of a server for testing?

When diagnosing a problem or experimenting with a server to develop new features or test modifications, you often need a duplicate of your running server so that you can ensure your solution works for the actual server. It would be very convenient to have a process for making a copy of a running server for such purposes.

Although it is possible and indeed popular to use replication to replicate all of your data to multiple slaves and use one of the slaves for these purposes, for cases where you are working with a particular server or if replication is not in use, you need some way to duplicate not only the data but also the server and its startup parameters.

Objectives

Create a new instance of a running server complete with the same options and the same data.

Example Execution

To meet this objective, we need to use several utilities. But before we get started, we need to know what specific options the host server is using. To do this, we use the mysqlserverinfo utility to discover the configuration file and the my_print_defaults tool to print the defaults. We can also show the process id to see what command-line options are being used. We get this from using the --show-servers option with mysqlserverinfo. On POSIX systems, we can use the ps command to find the command line options.

shell> mysqlserverinfo --format=vertical --show-servers
#
# The following MySQL servers are active on this host:
#  Process id:   2377, Data path: /usr/local/mysql/data
#  Process id:   2478, Data path: /Volumes/Source/source/temp_13001
#  Process id:   2487, Data path: /Volumes/Source/source/temp_13002
#
# Source on localhost: ... connected.
*************************       1. row *************************
         server: localhost:3306
        version: 5.1.50-log
        datadir: /usr/local/mysql/data/
        basedir: /usr/local/mysql-5.1.50-osx10.6-x86_64/
     plugin_dir: /usr/local/mysql-5.1.50-osx10.6-x86_64/lib/plugin
    config_file: /etc/my.cnf
     binary_log: my_log.000287
 binary_log_pos: 106
      relay_log: None
  relay_log_pos: None
1 row.
#...done.

shell> my_print_defaults mysqld /etc/my.cnf
--port=3306
--basedir=/usr/local/mysql
--datadir=/usr/local/mysql/data
--server_id=5
--log-bin=my_log
--general_log
--slow_query_log
--innodb_data_file_path=ibdata1:778M;ibdata2:50M:autoextend

shell> ps -f 2377
  UID   PID  PPID   C STIME   TTY     TIME CMD
   74  2377  2300   0 10:56AM ??      0:02.04 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql \
                                              --datadir=/usr/local/mysql/data --user=mysql \
                                              --log-error=/logs/me.local.err --pid-file=/logs/me.local.pid \
                                              --port=3306     

Notice we now have all of the options from the configuration file as well as the startup options. We can now construct the proper options for creating a clone (a running instance) of this server using the mysqlserverclone utility. Specifically, we can set the following options using the --mysqld option:

  • --log-bin=my_log

  • --general_log

  • --slow_query_log

  • --user=mysql

  • --log-error=path

Using these options and choosing a new data directory, we can create a new instance of the host server using the following command.

shell> mysqlserverclone --server=root:root@localhost \
          --new-data=/source/temp_clone --new-port=3307 --root=root --delete \
          --new-id=123 --mysqld="--log-bin=my_log --general-log --slow-query-log \
          --user=mysql --log-error=/source/temp_clone"
# Cloning the MySQL server running on localhost.
# Creating new data directory...
# Configuring new instance...
# Locating mysql tools...
# Setting up empty database and mysql tables...
# Starting new instance of the server...
# Testing connection to new instance...
# Success!
# Setting the root password...
# Connection Information:
#  -uroot -proot --socket=/source/temp_clone/mysql.sock
#...done.

Now that we have a running instance, we can export all of the data from the host to the clone.

shell> mysqldbexport --server=root:root@localhost:3306 --export=both --all > data.sql
shell> mysqldbimport --server=root:root@localhost:3307 --import=both data.sql
# Source on localhost: ... connected.
# Importing definitions and data from data.sql.
#...done.      

Discussion

As you can see, this is a multiple step process. We saw examples of using the mysqlserverinfo, mysqlserverclone, mysqldbexport, and mysqldbimport utilities.

Notice in the example we used port 3307 for the clone which is reflected in the mysqldbimport utility --server option.

Permissions Required

The user must have permission to read all databases. Since we are using the root account for these examples (as you typically would), permissions are not generally a problem.

You also need permissions to create the new data directory and write data to it.

Tips and Tricks

If you want to copy all of the users and their permissions, check out the mysqluserclone utility.
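
For example, a minimal sketch of cloning a hypothetical account 'joe'@'localhost' from the host server to the clone running on port 3307 might look like this:

shell> mysqluserclone --source=root:root@localhost:3306 \
          --destination=root:root@localhost:3307 \
          joe@localhost joe@localhost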

3.5.2 How do you find what MySQL servers are running on a local machine?

One of the challenges for a database administrator or database developer when working with a development server that has multiple instances of MySQL running is knowing exactly how many are running and, once you know that, which ones are no longer needed.

In some cases, this may have come about by accident, but mostly having multiple instances of MySQL running is intentional. Whichever the case, it would be nice to be able to use a single command to find all of the MySQL processes.

Objectives

Use the mysqlserverinfo utility to locate all of the MySQL processes running on a host.

Example Execution

shell> mysqlserverinfo --show-servers --server=root:root@localhost \
          --format=vertical
#
# The following MySQL servers are active on this host:
#  Process id:   3007, Data path: /usr/local/mysql/data
#  Process id:   8191, Data path: /Volumes/Source/source/temp_13001
#  Process id:   8196, Data path: /Volumes/Source/source/temp_13002
#  Process id:   8201, Data path: /Volumes/Source/source/temp_13003
#  Process id:   8207, Data path: /Volumes/Source/source/temp_13004
#  Process id:   8212, Data path: /Volumes/Source/source/temp_13005
#
# Source on localhost: ... connected.
*************************       1. row *************************
         server: localhost:3306
        version: 5.1.50-log
        datadir: /usr/local/mysql/data/
        basedir: /usr/local/mysql-5.1.50-osx10.6-x86_64/
     plugin_dir: /usr/local/mysql-5.1.50-osx10.6-x86_64/lib/plugin
    config_file: /etc/my.cnf
     binary_log: my_log.000286
 binary_log_pos: 237
      relay_log: None
  relay_log_pos: None
1 row.
#...done.        

Discussion

The mysqlserverinfo utility is normally used to find information about a particular server. We can see such results in the example above.

However, the utility also has an option, --show-servers, that displays a list of all of the MySQL server process ids executing on the host. This quick glance can help diagnose problems with multiple instances on the same machine.

Permissions Required

The permissions required include the ability to read the mysql database and to have read access to the data directory.

Tips and Tricks

Notice the output shows the data directory for each server. You can use this information to examine the files in that folder and discern more details, such as which databases exist, and to find and examine the binary log.

On POSIX systems, you can discover the command-line arguments such as the port number the server is using with the "ps -f PID" command. For example, to discover the complete information for PID 2487, you can do the following.

shell> ps -f 2487
  UID   PID  PPID   C STIME   TTY           TIME CMD
  501  2487     1   0 10:58AM ttys001    0:00.41 /source/mysql-5.6/sql/mysqld --no-defaults \
                                                 --datadir=/source/temp_13002 --tmpdir=/source/temp_13002 \
                                                 --pid-file=/source/temp_13002/clone.pid --port=13002 \
                                                 --server-id=102 --basedir=/source/mysql-5.6 \
                                                 --socket=/source/temp_13002/mysql.sock --log-slave-updates \
                                                 --gtid-mode=on --enforce-gtid-consistency --log-bin \
                                                 --master-info-repository=TABLE --report-port=13002 \
                                                 --report-host=localhost      

3.5.3 How do you setup and use a secure (encrypted) connection between Utilities and a MySQL server?

Security is a big concern, and MySQL Utilities can use a secure (encrypted) connection to a MySQL server with SSL. This section shows you how to use SSL when connecting to MySQL servers from any utility. All of the utilities use the same mechanism for establishing an SSL connection.

Objectives

Use the mysqlserverclone utility to create a new instance of your installed MySQL Server. This new instance is enabled for secure connections with SSL by specifying the SSL options. You can also use an option file to specify the SSL certificates needed for the secure connection.

Example Execution

To meet this objective, you need to supply values for the following options of mysqlserverclone:

  • --basedir

  • --new-port

  • --new-data

  • --mysqld

  • --root-password

If you are unfamiliar with the previous options, you can find more information in Section 5.24, “mysqlserverclone — Clone Existing Server to Create New Server”.

In the --mysqld option, you need to specify the --ssl-ca, --ssl-cert, and --ssl-key options with their respective SSL certificates for the new server instance. By doing this, the new server instance uses the given certificates to establish secure connections. If you are uncertain how to create the SSL certificates, please follow the steps in Creating SSL and RSA Certificates and Keys. Note that the --ssl-ca, --ssl-cert, and --ssl-key options of mysqlserverclone itself are used to connect to an existing instance of MySQL in case SSL is required to connect to it; they do not indicate the certificates to be used by the new server instance. For that reason, it is necessary to pass them through the --mysqld option of mysqlserverclone.

The following is an example of running the command.

shell> mysqlserverclone --basedir=C:\MySQL\mysql-5.6.15-winx64 \
          --new-data=C:\MySQL\instance_3307 \
          --new-port=3307 --root-password=pass \
          --mysqld="--ssl-ca=C:/newcerts/cacert.pem \
          --ssl-cert=C:/newcerts/server-cert.pem \
          --ssl-key=C:/newcerts/server-key.pem" 
# Cloning the MySQL server located at C:\MySQL\mysql-5.6.15-winx64.
# Creating new data directory...
# Configuring new instance...
# Locating mysql tools...
# Setting up empty database and mysql tables...
# Starting new instance of the server...
# Testing connection to new instance...
# Success!
# Setting the root password...
# Connection Information:
#  -uroot -ppass --port=3307
#...done.

Now that we have a new MySQL server instance, you can confirm the use of the given SSL certificates with the MySQL command-line client (also called the monitor or simply the mysql client) by executing the statement: "show variables like '%ssl%';".

shell> mysql -uroot -ppass --port=3307 -e"show variables like '%ssl%';"
+---------------+-----------------------------+
| Variable_name | Value                       |
+---------------+-----------------------------+
| have_openssl  | YES                         |
| have_ssl      | YES                         |
| ssl_ca        | C:/newcerts/cacert.pem      |
| ssl_capath    |                             |
| ssl_cert      | C:/newcerts/server-cert.pem |
| ssl_cipher    |                             |
| ssl_crl       |                             |
| ssl_crlpath   |                             |
| ssl_key       | C:/newcerts/server-key.pem  |
+---------------+-----------------------------+

However, at this point the root account is not using an encrypted SSL connection. You can see this with the MySQL command-line client by running the "status;" command:

shell> mysql -uroot -ppass --port=3307 -e"status;"
--------------
mysql  Ver 14.14 Distrib 5.6.15, for Win64 (x86_64)

Connection id:          11
Current database:
Current user:           root@localhost
SSL:                    Not in use
...
--------------

You need to add the SSL options to establish an encrypted connection with SSL. This can be done in the following form:

shell> mysql -uroot -ppass --port=3307 --ssl-ca=C:/newcerts/cacert.pem \
          --ssl-cert=C:/newcerts/server-cert.pem \
          --ssl-key=C:/newcerts/server-key.pem -e"status;"
--------------
mysql  Ver 14.14 Distrib 5.6.15, for Win64 (x86_64)

Connection id:          13
Current database:
Current user:           root@localhost
SSL:                    Cipher in use is DHE-RSA-AES256-SHA
...
--------------

Note

To configure an account to only permit SSL-encrypted connections, the grants for that account must include the REQUIRE SSL clause in the GRANT statement.
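
For example, a hypothetical account restricted to SSL connections could be set up on the new instance like this (the account name and password are placeholders):

shell> mysql -uroot -ppass --port=3307 \
          -e"GRANT USAGE ON *.* TO 'ssl_user'@'localhost' IDENTIFIED BY 'ssl_pass' REQUIRE SSL;"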

In the same way that you use the SSL options with the MySQL command-line client, you can use the SSL options with each of the MySQL Utilities. The following is an example of mysqlserverinfo using the SSL options:

shell> mysqlserverinfo --server=root:pass@localhost:3307 \
          --ssl-ca=C:/newcerts/cacert.pem \
          --ssl-cert=C:/newcerts/client-cert.pem \
          --ssl-key=C:/newcerts/client-key.pem \
          --format=vertical
# Source on localhost: ... connected.
*************************       1. row *************************
                   server: localhost:3307
              config_file:
               binary_log:
           binary_log_pos:
                relay_log:
            relay_log_pos:
                  version: 5.6.15
                  datadir: C:\MySQL\instance_3307\
                  basedir: C:\MySQL\mysql-5.6.15-winx64
               plugin_dir: C:\MySQL\mysql-5.6.15-winx64\lib\plugin\
              general_log: OFF
         general_log_file:
    general_log_file_size:
                log_error: C:\MySQL\instance_3307\clone.err
      log_error_file_size: 1569 bytes
           slow_query_log: OFF
      slow_query_log_file:
 slow_query_log_file_size:
1 row.
#...done.

Alternatively, you can specify the SSL options in an option file, as mentioned in Section 2.2, “Connecting to MySQL Servers”. The following is an example of how an option group may look in an option file for the command used above.

[instance_3307]
port=3307
user=root
password=pass
host=localhost
ssl-ca=C:/newcerts/cacert.pem
ssl-cert=C:/newcerts/client-cert.pem
ssl-key=C:/newcerts/client-key.pem

In this case, the file is located at C:\MySQL\instance-3307.cnf. By specifying this path and the group name in the --server option, the mysqlserverinfo command from the previous example takes this form:

shell> mysqlserverinfo --server=c:\MySQL\instance-3307.cnf[instance_3307] \
          --format=vertical
# Source on localhost: ... connected.
*************************       1. row *************************
                   server: localhost:3307
              config_file:
               binary_log:
           binary_log_pos:
                relay_log:
            relay_log_pos:
                  version: 5.6.15
                  datadir: C:\MySQL\instance_3307\
                  basedir: C:\MySQL\mysql-5.6.15-winx64
               plugin_dir: C:\MySQL\mysql-5.6.15-winx64\lib\plugin\
              general_log: OFF
         general_log_file:
    general_log_file_size:
                log_error: C:\MySQL\instance_3307\clone.err
      log_error_file_size: 1569 bytes
           slow_query_log: OFF
      slow_query_log_file:
 slow_query_log_file_size:
1 row.
#...done.

Discussion

The SSL options (--ssl-ca, --ssl-cert, and --ssl-key) are available in every MySQL utility that requires a connection to a server or servers, including those that accept connections through options such as --master and --slave.

Note

An option file can be used to store the connection values, and the MySQL Utilities can read the values stored in it, as mentioned in Section 2.2, “Connecting to MySQL Servers”.

Permissions Required

Required permissions include the ability to read the SSL certificate files and the path where they are located, regardless of how these certificate paths are given to the MySQL Utilities, in addition to the permissions that each utility requires to accomplish its specific task.

Tips and Tricks

In the configuration file, different connection options can be stored and separated into groups. The desired group to be used by the MySQL Utilities can be selected by appending the group name, enclosed in square brackets, to the configuration file path (config-path[group-name]), as in the following C:\MySQL\instances.cnf example:

[instance_3307]
port=3307
user=root
password=pass
host=localhost
ssl-ca=C:/newcerts/cacert.pem
ssl-cert=C:/newcerts/client-cert.pem
ssl-key=C:/newcerts/client-key.pem

[instance_3308]
port=3308
user=root
password=other-pass
host=localhost
ssl-ca=C:/newcerts/cacert_2.pem
ssl-cert=C:/newcerts/client-cert_2.pem
ssl-key=C:/newcerts/client-key_2.pem

shell> mysqlreplicate --master=c:\MySQL\instances.cnf[instance_3307] \
          --slave=C:\MySQL\instances.cnf[instance_3308] 

3.6 Specialized Operations

The tasks described in this section relate to specific situations or configurations and may not apply in the general case. For example, some tasks require a specific commercial plugin such as those for use with the Audit Log Plugin.

3.6.1 How do you record only login events in the audit log?

The audit log plugin records MySQL server activity. By default, it is set to write all audit events to the log file, which can represent a considerable amount of information. Fortunately, it is possible to control the type of information that is written to the audit log file by changing the audit log plugin's policy. The policy should be set to log only the events of interest, to avoid wasting resources logging unnecessary events.

In particular, if the audit log plugin is only used to monitor access to the database server (for security purposes) then only the login events need to be recorded. The mysqlauditadmin utility allows us to perform such a change in a simple way (as well as changes to other settings).

Objectives

The goal is to set the audit log plugin to write the login events to the log file and no other events. It is assumed that the audit log plugin is enabled and running with the default settings (logging all audit events) on the localhost and default port (3306).

Example Execution

shell> mysqlauditadmin --server=root@localhost:3306 policy --value=LOGINS \
          --show-options
#
# Showing options before command.
#
# Audit Log Variables and Options
#
+---------------------------+---------------+
| Variable_name             | Value         |
+---------------------------+---------------+
| audit_log_buffer_size     | 1048576       |
| audit_log_file            | audit.log     |
| audit_log_flush           | OFF           |
| audit_log_policy          | ALL           |
| audit_log_rotate_on_size  | 0             |
| audit_log_strategy        | ASYNCHRONOUS  |
+---------------------------+---------------+

#
# Executing POLICY command.
#

#
# Showing options after command.
#
# Audit Log Variables and Options
#
+---------------------------+---------------+
| Variable_name             | Value         |
+---------------------------+---------------+
| audit_log_buffer_size     | 1048576       |
| audit_log_file            | audit.log     |
| audit_log_flush           | OFF           |
| audit_log_policy          | LOGINS        |
| audit_log_rotate_on_size  | 0             |
| audit_log_strategy        | ASYNCHRONOUS  |
+---------------------------+---------------+      

Discussion

In order to change the type of events recorded to the audit log file, the policy setting must be changed. This is done with the mysqlauditadmin utility using the 'policy' command and specifying the desired policy value with the --value option. As expected, the target server must also be specified using the --server option.

In the above example, the policy value was set to LOGINS to write only login events to the log file. Nevertheless, other values are also permitted to control the information written to the log file: ALL (write all events), QUERIES (write only query events), NONE (disable logging), and DEFAULT (use the default policy).
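
For example, to record only the queries instead, the same command can be run with a different value (a sketch reusing the server connection from the example above):

shell> mysqlauditadmin --server=root@localhost:3306 policy --value=QUERIES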

Permissions Required

The user must have the SELECT privilege for the mysql database. To view the log file, the user must have read access to the audit log file on the server.

Tips and Tricks

The policy value was specified in uppercase in this example; however, uppercase and lowercase can be mixed to specify the policy value (such as "LoGiNs"). The values for this command are case insensitive and are read correctly regardless of case, but if an unsupported value is specified, an error is issued.

In the above example the --show-options option was used, but it is not required. This option simply displays the audit log settings (variables). However, when this option is combined with a command that changes one of the audit log variables, it displays the audit log settings before and after the execution of the command, which can be very handy to confirm that the desired change was performed as expected.
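
For example, to simply inspect the current audit log settings without issuing any command, you might run the following (reusing the server connection from the example above):

shell> mysqlauditadmin --show-options --server=root@localhost:3306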

3.6.2 How do you copy or move the audit log?

The audit log information can grow quickly and considerably depending on the type of information written and the activity of the MySQL server. Therefore, it might be a good idea to copy the audit log files to a different location and free some storage on the server.

The mysqlauditadmin utility also provides this useful functionality.

Objectives

The goal of this task is to copy an existing audit log file to a different location using the mysqlauditadmin utility.

It is assumed that the utility is executed on the destination host, which must be a non-Windows system with the scp (Secure Copy) command-line program, and that it has access to the remote MySQL server and its data directory with the provided credentials (user and password). It is also assumed that the specified audit log file exists and the user has write privileges on the target directory.

Example Execution

shell> mysqlauditadmin --audit-log-name=/MySQL/SERVER/data/audit.log.13753706179878237 \
          copy --copy-to=/ARCHIVE/Audit_Logs --remote-login=user1:server1
# Copying file from server1:/MySQL/SERVER/data/audit.log.13753706179878237 to /ARCHIVE/Audit_Logs:
user1@server1's password:
audit.log.13753706179878237                   100% 4716     4.6KB/s   00:01      

Discussion

The copy operation can be performed with the mysqlauditadmin utility using the 'copy' command, which requires the following options: the --audit-log-name option to specify the path and filename of the audit log file to copy, the --copy-to option to indicate the destination folder, and the --remote-login option to specify the user and remote host where the file is located (a prompt for the user password is displayed).

The --remote-login option is not required if the source and destination locations are on the same server where the utility is executed. Moreover, this option is not supported on Windows systems, where UNC paths should be used instead.
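
For example, a local copy (source and destination on the same host) might look like the following sketch, reusing the paths from the example above and omitting --remote-login:

shell> mysqlauditadmin --audit-log-name=/MySQL/SERVER/data/audit.log.13753706179878237 \
          copy --copy-to=/ARCHIVE/Audit_Logs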

Permissions Required

The user must have permissions to read the audit log on disk and write the file to the remote location.

Tips and Tricks

The name of the audit log file (audit.log, by default) is defined by the audit_log_file variable, displayed by mysqlauditadmin when using the --show-options option. Existing audit log files have a timestamp extension, except the one that is currently in use. That being said, it might be useful to know that it is possible to get information about the existing audit log files using mysqlauditadmin, for instance to determine which files need to be copied. To get this information, use the --file-stats option together with the --audit-log-name option specifying the full path of the current audit log file (that is, without the timestamp extension). For example:

shell> mysqlauditadmin --file-stats --audit-log-name=/MySQL/SERVER/data/audit.log
+------------------------------+------------+---------------------------+---------------------------+
| File                         | Size       | Created                   | Last Modified             |
+------------------------------+------------+---------------------------+---------------------------+
| audit.log.13753706179878237  | 4716       | Thu Aug  1 16:23:37 2013  | Thu Aug  1 16:23:37 2013  |
| audit.log                    | 6062       | Thu Aug  1 16:24:26 2013  | Thu Aug  1 16:24:26 2013  |
| audit.log.13753705495049727  | 335142503  | Thu Aug  1 16:22:29 2013  | Thu Aug  1 16:22:29 2013  |
+------------------------------+------------+---------------------------+---------------------------+      

Note

If an audit log file with the timestamp extension is specified in this example for the --audit-log-name option, only the information of the specified file is displayed, as opposed to the file statistics of all existing files.

3.6.3 How can you find the INSERT and UPDATE queries that failed in the audit log?

Over time, the audit log can contain a lot of useful information. However, filtering this information and searching for specific events, for instance in order to determine the possible cause of a problem, can be very tedious if done manually.

For example, suppose that someone reported that some data changes are missing (and you suspect some INSERT or UPDATE queries failed) and you want to determine what might be the cause of those transaction failures. All queries are recorded in the audit log file, so you just need to retrieve all queries of a given type that failed (with a MySQL error) and analyze them.

This can be achieved using common 'grep' command-line tools, but likely involves very complex regular expressions to filter the desired data. Fortunately, the mysqlauditgrep utility allows you to perform this kind of task in a much easier and simpler way by taking advantage of its knowledge of the structure and semantics of the audit log files.

Objectives

The goal is to display all INSERT and UPDATE queries that failed (regardless of the error) from the current audit log file.

It is assumed that the audit.log file exists and is located in the directory /MySQL/SERVER/data/. The example below shows how easy it is to perform the desired search with the mysqlauditgrep utility.

Example Execution

shell> mysqlauditgrep --query-type=INSERT,UPDATE --status=1-9999 /MySQL/SERVER/data/audit.log
+--------+---------------------+-------+-------------------------------------------------------+---------------+
| STATUS | TIMESTAMP           | NAME  | SQLTEXT                                               | CONNECTION_ID |
+--------+---------------------+-------+-------------------------------------------------------+---------------+
| 1046   | 2013-08-01T18:20:46 | Query | INSERT INTO tbl_not_exist (a,b,c) VALUES(1,2,3)       | 37            |
| 1146   | 2013-08-01T18:21:03 | Query | INSERT INTO mysql.tbl_not_exist (a,b,c) VALUES(1,2,3) | 37            |
| 1054   | 2013-08-01T18:23:10 | Query | INSERT INTO test.t1 (a,b,not_col) VALUES(1,2,3)       | 37            |
| 1146   | 2013-08-01T18:26:14 | Query | UPDATE tbl_not_exist SET a = 1                        | 37            |
| 1054   | 2013-08-01T18:26:53 | Query | UPDATE test.t1 SET not_col = 1                        | 37            |
+--------+---------------------+-------+-------------------------------------------------------+---------------+      

Discussion

As expected, the use of the mysqlauditgrep utility requires the specification of the target audit log file to search and a few options corresponding to the needed search criteria. In this case, the --query-type option was used to restrict the displayed results to specific types of queries (i.e., only INSERT and UPDATE), and the --status option was used to specify the considered MySQL error codes (i.e., all ranging from 1 to 9999).

The --query-type option allows the specification of a comma-separated list of different SQL statements. Apart from INSERT and UPDATE, the list of supported values for this option also includes: CREATE, ALTER, DROP, TRUNCATE, RENAME, GRANT, REVOKE, SELECT, DELETE, COMMIT, SHOW, SET, CALL, PREPARE, EXECUTE, and DEALLOCATE.

The --status option accepts a comma-separated list of non-negative integers (corresponding to MySQL error codes) or intervals marked with a dash. For example: 1051,1100-1199,1146. In this particular case, the range value 1-9999 was used to include all MySQL error codes and display all unsuccessful commands. To retrieve only successful commands (no errors), simply use the value 0 for the --status option.
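
For example, to retrieve only the INSERT and UPDATE queries that succeeded, the search from the example above might be repeated with --status=0:

shell> mysqlauditgrep --query-type=INSERT,UPDATE --status=0 /MySQL/SERVER/data/audit.log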

Permissions Required

The user must have permissions to read the audit log on disk.

Tips and Tricks

The values specified for the --query-type option are case insensitive, so you can mix lowercase and uppercase to specify the list of query types. For example, 'insert,Update' produces the same result as 'INSERT,UPDATE'. Of course, the use of unsupported values raises an appropriate error.

Many other options and search criteria are provided by the mysqlauditgrep utility; check them to find the ones most appropriate for your needs. Note that the utility provides the --pattern option to search entries in the audit log file using patterns, like common grep tools. By default, this option uses standard SQL pattern matching (as used by the 'LIKE' comparison operator), unless the --regexp option is used to allow more powerful standard regular expressions (POSIX extended).
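
For example, a sketch of a pattern search for queries referencing the tbl_not_exist table seen in the example above, first with SQL pattern matching and then with a POSIX regular expression, might look like this:

shell> mysqlauditgrep --pattern="%tbl_not_exist%" /MySQL/SERVER/data/audit.log
shell> mysqlauditgrep --pattern="tbl_not_exist" --regexp /MySQL/SERVER/data/audit.log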

3.6.4 How do you find connections by the user 'root' in the audit log and show the results in CSV format?

The audit log plugin can be used to record information about different types of events, which one might need to monitor or keep a record of in a different format. For example, a security record with the list of all logins performed on the database server might need to be kept in order to later track who was responsible for some change. Moreover, the retrieved information might need to be converted to a specific format (such as CSV) to feed another application.

Objectives

The goal of this task is to retrieve from the audit log the information of all the connections established by the root user to the MySQL Server, and display the resulting information in the comma-separated-value (CSV) format.

Besides the search/filter functionality using different criteria, the mysqlauditgrep utility also provides a feature to display the resulting information in different formats (including CSV). This allows this task to be performed easily in a single step.

It is assumed that the audit.log file exists and is located in the directory /MySQL/SERVER/data/.

Example Execution

shell> mysqlauditgrep --user=root --event-type=Connect \
          --format=CSV /MySQL/SERVER/data/audit.log

STATUS,NAME,TIMESTAMP,CONNECTION_ID,HOST,USER,PRIV_USER,IP
0,Connect,2013-08-01T15:24:26,33,localhost,root,root,127.0.0.1
0,Connect,2013-08-01T15:24:26,34,localhost,root,root,127.0.0.1
0,Connect,2013-08-01T15:24:26,35,localhost,root,root,127.0.0.1
0,Connect,2013-08-01T15:24:26,36,localhost,root,root,127.0.0.1
0,Connect,2013-08-01T18:18:43,37,localhost,root,root,127.0.0.1
0,Connect,2013-08-01T18:49:46,38,,root,root,192.168.1.104
1045,Connect,2013-08-01T19:18:08,39,localhost,root,,127.0.0.1

Discussion

To perform this operation, the mysqlauditgrep utility requires, as expected, the target audit log file, two search criteria options, and one formatting option to convert the output to the desired format. In this case, the --users option was applied to search the records for the specified user (i.e., "root") and the --event-type option to retrieve only events of a specific type (i.e., "Connect"). The --format option is the one used to define the output format of the obtained search results.

In this example, only the "Connect" value was used for the --event-type option, which corresponds to the login event (when a client connects). Nevertheless, this option accepts a comma-separated list of event types with the following supported values (besides "Connect"): Audit, Binlog Dump, Change user, Close stmt, Connect Out, Create DB, Daemon, Debug, Delayed insert, Drop DB, Execute, Fetch, Field List, Init DB, Kill, Long Data, NoAudit, Ping, Prepare, Processlist, Query, Quit, Refresh, Register Slave, Reset stmt, Set option, Shutdown, Sleep, Statistics, Table Dump, and Time.

In terms of output formats, the following are supported besides CSV: GRID (used by default), TAB, VERTICAL, and RAW (corresponding to the original XML format of the audit log file).

Permissions Required

The user must have permissions to read the audit log on disk.

Tips and Tricks

The values for the --event-type and --format options are case insensitive, therefore lowercase and uppercase can be mixed to specify these values, as long as a supported event type name or format is used. Unlike these, the value specified for the --users option is case sensitive, so be careful not to mix uppercase and lowercase here.

Some event type values contain a space, for example "Binlog Dump" or "Init DB". If one of these values needs to be specified for the --event-type option, it must be surrounded by double (") or single (') quotes, depending on the operating system.
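
For example, a sketch of a search for "Init DB" events in the same audit log file might look like this:

shell> mysqlauditgrep --event-type="Init DB" /MySQL/SERVER/data/audit.log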